Artificial intelligence (AI) is rapidly advancing our ability to synthesise highly realistic human faces. A striking new study published in Psychological Science reveals that people actually perceive certain AI-generated faces as more real than genuine human faces. This effect, termed “AI hyperrealism”, has profound implications for how we relate to technology and each other.
Amy Dawel, PhD, the senior author of the study from Australian National University, highlights a critical concern: “If White AI faces are consistently perceived as more realistic, this technology could have serious implications for people of colour by ultimately reinforcing racial biases online. This problem is already apparent in the current AI technologies that are being used to create professional-looking headshots. When used for people of colour, the AI is altering their skin and eye colour to those of White people.”
The study found that AI-generated images of White individuals tend to be rated as more human-seeming than photographs of real White people, an effect that did not appear for AI-generated images of other races. The reasons behind this racial disparity require further investigation, but they may relate to the data used to train face-generation algorithms.
Dawel further explains the paradoxical confidence in AI face identification: “Concerningly, people who thought that the AI faces were real most often were paradoxically the most confident their judgements were correct. This means people who are mistaking AI imposters for real people don’t know they are being tricked.”
“It turns out that there are still physical differences between AI and human faces, but people tend to misinterpret them. For example, White AI faces tend to be more in proportion, and people mistake this as a sign of humanness. However, we can’t rely on these physical cues for long. AI technology is advancing so quickly that the differences between AI and human faces will probably disappear soon,” she adds.
“AI technology can’t become sectioned off so that only tech companies know what’s going on behind the scenes,” Dawel says, emphasising the need for transparency in AI development. “There needs to be greater transparency around AI so researchers and civil society can identify issues before they become a major problem.”
“Given that humans can no longer detect AI faces, society needs tools that can accurately identify AI imposters,” she points out.
“Educating people about the perceived realism of AI faces could help make the public appropriately sceptical about the images they’re seeing online,” Dawel suggests.
AI hyperrealism is a term coined by the researchers to describe the phenomenon where AI faces, especially those of White individuals, are judged as more human-like than real human faces. This counterintuitive finding is pivotal in understanding how we perceive and interact with AI-generated images.
In the first experiment, 124 adults were asked to judge whether the faces they were shown were human or AI-generated. The results were startling: White AI faces were more often judged as human than photographs of actual human faces were. The experiment also revealed a counterintuitive pattern: the participants who made the most errors were also the most confident in their judgements, a pattern the researchers liken to the Dunning-Kruger effect.
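To make that overconfidence pattern concrete, here is a minimal sketch of how a relationship between judgement accuracy and self-rated confidence could be examined. This is not the authors’ analysis code: the data are simulated purely for illustration, and the variable names are assumptions.

```python
import numpy as np

# Hypothetical per-participant results from a human-vs-AI judgement task:
# each participant has an accuracy (proportion correct) and a mean
# self-reported confidence rating (0-100). Values are simulated, not real data.
rng = np.random.default_rng(0)
n_participants = 124
accuracy = rng.uniform(0.3, 0.8, n_participants)
# Simulate overconfidence among low scorers: confidence falls as accuracy rises.
confidence = 90 - 60 * accuracy + rng.normal(0, 5, n_participants)

# A negative accuracy-confidence correlation would mean the least accurate
# judges were the most confident.
r = np.corrcoef(accuracy, confidence)[0, 1]
print(f"accuracy-confidence correlation: {r:.2f}")

# Compare mean confidence in the least vs. most accurate quartiles.
order = np.argsort(accuracy)
quarter = n_participants // 4
low, high = order[:quarter], order[-quarter:]
print(f"mean confidence, least accurate quartile: {confidence[low].mean():.1f}")
print(f"mean confidence, most accurate quartile:  {confidence[high].mean():.1f}")
```

In this simulated example the correlation comes out negative, which is the kind of signature the study describes: the people most likely to be fooled are also the least likely to suspect it.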
The second part of the study involved 610 adults and focused on identifying key visual attributes that lead to AI hyperrealism. The researchers used face-space theory and qualitative participant reports to identify these attributes. They discovered that AI faces are perceived as more average, familiar, and attractive but less memorable than human faces, contributing to their hyperrealism.
An intriguing aspect of this study was the use of machine learning to classify AI and human faces based on human-perceived attributes. This approach not only demonstrated high accuracy but also provided valuable insights into the psychological processes involved in distinguishing AI from human faces.
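As an illustration only, the sketch below shows one common way such a classifier could be built: a logistic regression over human-rated attributes. The attribute names, labels, and data here are simulated assumptions, not the study’s dataset or pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_faces = 200

# Illustrative per-face ratings (e.g., averaged across raters) for perceived
# attributes such as averageness, familiarity, attractiveness, memorability.
attributes = ["averageness", "familiarity", "attractiveness", "memorability"]
X = rng.normal(size=(n_faces, len(attributes)))
y = rng.integers(0, 2, n_faces)  # 1 = AI-generated, 0 = human photo (simulated labels)

# Loosely mimic the reported pattern: AI faces rated as more average and familiar.
X[y == 1, 0] += 0.8
X[y == 1, 1] += 0.5

# Cross-validated accuracy estimates how well perceived attributes alone
# separate AI faces from human photos.
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")

# Coefficients indicate which perceived attributes are most diagnostic.
clf.fit(X, y)
print(dict(zip(attributes, clf.coef_[0].round(2))))
```

The appeal of this kind of approach is interpretability: the model’s coefficients point to which human-perceived attributes carry the signal, which is what lets the researchers connect classification performance back to psychological theory.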
The findings from this study have profound implications. They show that AI technology can now mimic human features so closely that synthetic faces are sometimes perceived as more real than photographs of actual people. This raises significant ethical and societal questions, especially given the racial bias observed, in which White AI faces were more likely to be perceived as real. This bias underscores the need for more ethical and inclusive practices in AI development.
This research also exemplifies how psychological theory can inform our understanding of AI outputs. By applying frameworks such as face-space theory to AI-generated images, the study offers a new way to think about how people perceive, use, and understand AI.