DIGITAL LIFE

How to Learn to Spot Flaws in AI-Created Faces
Human faces have always been one of the most powerful signs of trustworthiness. But in the age of artificial intelligence, this intuition is being put to the test. Fake profiles, digital scams, and invented identities rely on increasingly realistic images that are difficult to distinguish from real ones. Now, British research suggests something surprising: you don't need to be an expert or spend hours training. A few well-directed minutes are enough to spot what previously went unnoticed.
In recent years, AI image generation tools have evolved rapidly. Software capable of creating hyper-realistic faces is available with just a few clicks, allowing anyone to produce convincing images, even without technical knowledge. This has amplified a silent problem: the growing difficulty in differentiating real faces from artificial ones.
Researchers from four universities in the United Kingdom decided to investigate the extent to which ordinary people can make this distinction—and, above all, whether this ability can be improved quickly. To do this, they gathered more than 600 volunteers and tested their ability to identify real human faces and images generated by one of the most advanced systems available at the time.
The initial results revealed a clear limitation. Even individuals with good natural facial recognition skills distinguished real from artificial faces correctly less than half the time. Participants with typical abilities scored even lower, showing how easily AI can deceive the human eye.
The unexpected impact of just five minutes of guidance

The most revealing part of the study came later. The volunteers underwent a short training session, lasting approximately five minutes, focused on teaching where artificial intelligence still tends to "go wrong." Nothing complex or technical: just visual guidance and practical examples.
After this brief training, the results changed markedly. People with advanced skills began to correctly identify most of the artificial faces. Average participants also showed a significant leap in accuracy, drastically reducing their number of errors.
The secret was not learning algorithms or understanding how AI works, but changing where the eyes focus. The training taught people to pay attention to specific details that machines still struggle to reproduce perfectly: small inconsistencies that, once noticed, become difficult to ignore.
The details that reveal an artificial face

Among the main signs pointed out by the researchers are slightly misaligned teeth, unnatural hairlines, ears with strange shapes, or accessories that don't make anatomical sense. In many cases, the face seems "too perfect" as a whole, but fails in isolated details.
These errors often go unnoticed at a quick glance, especially on social media, where images are consumed in seconds. The training showed that slowing down observation and knowing exactly where to look makes all the difference.
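For readers who want to turn these cues into a habit, the sketch below expresses them as a simple self-check script. It is only an illustration of the checklist idea: the cue wording and the one-cue threshold are assumptions made here, not part of the researchers' training material.

```python
# Illustrative sketch only: a manual checklist based on the cues the article lists
# (teeth, hairline, ears, accessories, an overall "too perfect" look). The exact cue
# wording and the flagging threshold are assumptions, not the study's method.

CUES = [
    "Teeth slightly misaligned, blurred, or oddly repeated",
    "Hairline looks painted on, or stray hairs dissolve into the background",
    "Ears differ in shape or height, or earrings merge into the skin",
    "Accessories (glasses, jewelry) are asymmetric or anatomically impossible",
    "Skin looks airbrushed and flawless while small details break down",
]

def review_face() -> None:
    """Walk through the checklist and report how many cues were observed."""
    hits = 0
    for cue in CUES:
        answer = input(f"{cue}? (y/n) ").strip().lower()
        if answer == "y":
            hits += 1
    # The threshold is arbitrary: even a single strong cue can justify suspicion.
    if hits >= 1:
        print(f"{hits} cue(s) observed: treat this face as possibly AI-generated.")
    else:
        print("No obvious cues found, but the absence of flaws does not prove the face is real.")

if __name__ == "__main__":
    review_face()
```

The point of the script is not automation but deliberate attention: forcing a pause over each cue mirrors the study's advice to slow down and know exactly where to look.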
According to the study's authors, this type of guidance is becoming increasingly urgent. Computer-generated faces are already used to create fake profiles, deceive identity verification systems, and lend credibility to online scams. The more realistic these images become, the greater the risk for ordinary users.
Digital security begins with the gaze

The researchers emphasize that this is not a problem restricted to technology experts. On the contrary: anyone who uses social media, messaging apps, or online services is potentially exposed. Therefore, simple and quick training methods can have a direct impact on everyday digital security.
The study also suggests that giving this type of guidance to people who already have high natural facial recognition skills can be especially effective in critical contexts, such as investigations, identity verification, and fraud prevention.
As artificial intelligence continues to advance, the race is no longer just technological but also cognitive. Learning to distrust what seems too real can become an essential skill. And, as the research shows, sometimes all the brain needs is five minutes to start seeing the digital world with different eyes.
mundophone