AI Tool to Mitigate Fake Profiles on LinkedIn

The growing popularity of social media has made it a microcosm of our society. Just like the real world, social media has its own dangers, and one of the most concerning is the issue of fake profiles. Fake profiles not only mislead other users about who is really behind an account, they are also a common vehicle for identity theft. When such incidents occur in a professional space such as LinkedIn, the stakes are even higher.
To tackle the issue of fake profiles on its platform, LinkedIn has announced a new AI tool that can identify 99.6 percent of fake profile images. The tool is aimed at reducing instances of fake profiles that impersonate influential people in order to scam or harm other users.
The new AI tool was developed in partnership with academic researchers, and it closely examines profile pictures, including checking whether the same picture has been used across multiple profiles. The tool targets images created with an AI technique called a generative adversarial network (GAN). It identifies such images using a large set of features that look for the natural structural irregularities found in photographed faces, which AI-generated images usually lack.
The model is trained using two specific techniques: a learned linear embedding based on principal components analysis (PCA), and a learned embedding based on an autoencoder (AE). These were compared against a generic Fourier-based embedding as a baseline. “The goal of the Fourier-based embedding is to demonstrate that a generic embedding is not sufficient to distinguish synthesized faces from photographed faces and that the learned embeddings are required to extract sufficiently descriptive representations,” the post mentioned.
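To make the idea of a learned linear embedding concrete, here is a minimal sketch of a PCA embedding followed by a simple classifier. This is an illustration, not LinkedIn's implementation: the image data is a random placeholder, and the image size, number of components, and choice of logistic regression are assumptions for the example.

```python
# Sketch: learn a compact linear (PCA) embedding of face images, then
# classify embeddings as photographed (0) or GAN-generated (1).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Placeholder dataset: 200 flattened 64x64 grayscale "faces" with labels.
# In practice these would be aligned face crops with known provenance.
X = rng.normal(size=(200, 64 * 64))
y = rng.integers(0, 2, size=200)  # 0 = photographed, 1 = synthesized

# Learned linear embedding (PCA) feeding a simple linear classifier.
model = make_pipeline(
    PCA(n_components=32),
    LogisticRegression(max_iter=1000),
)
model.fit(X, y)

# Score a new (placeholder) image: probability that it is GAN-generated.
new_image = rng.normal(size=(1, 64 * 64))
print(model.predict_proba(new_image)[0, 1])
```

The autoencoder variant described in the post would replace the PCA step with a learned non-linear encoder, but the overall pattern of embedding followed by classification stays the same.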
The new tool catches 99.6 percent of fake profile pictures while incorrectly flagging real photos only 1 percent of the time (a 1 percent false positive rate). This should help curb the spread of fake profiles on the platform and give users a safer, more secure experience on LinkedIn.
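For clarity on what those two figures mean, the short sketch below works through the arithmetic: the detection (true positive) rate is measured over synthesized images, and the false positive rate over genuine photos. The counts are hypothetical, chosen only to reproduce the reported percentages.

```python
# Illustrative arithmetic behind the reported 99.6% / 1% figures.
synthetic_total = 10_000    # hypothetical GAN-generated images tested
synthetic_flagged = 9_960   # correctly flagged as fake

real_total = 10_000         # hypothetical genuine photos tested
real_flagged = 100          # genuine photos wrongly flagged as fake

true_positive_rate = synthetic_flagged / synthetic_total   # 0.996
false_positive_rate = real_flagged / real_total            # 0.01

print(f"Detection rate:      {true_positive_rate:.1%}")   # 99.6%
print(f"False positive rate: {false_positive_rate:.1%}")  # 1.0%
```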
The introduction of this AI tool is a step in the right direction, and more such tools are expected to follow to keep users safe on social media platforms.