“Only slightly better than a coin flip” — most people can be fooled by AI images. Think you can do better?
A study of over 287,000 image guesses reveals just how easily AI visuals fool the human brain.

Identifying AI-generated or modified images will become increasingly important as tools to create content become more accessible. Years ago, it took an expert a significant amount of time and effort to create a convincing fake. Now, that can be done with a few keystrokes or clicks.
A recent study by researchers from the Microsoft AI for Good Lab reports that people's ability to identify AI images is "only slightly higher than flipping a coin." Participants in a game that challenged them to identify AI images had an overall success rate of only 62%.
In August 2024, Microsoft shared a "Real or Not" quiz that challenged people to identify AI-generated or modified images. That game was used as a basis for the study.
Approximately 287,000 image evaluations took place, and over 12,500 people participated.
"Generative AI is evolving fast and new or updated generators are unveiled frequently, showing even more realistic output," concluded the study.
"It is fair to assume that our results likely overestimate nowadays people’s ability to distinguish AI-generated images from real ones."
The study also found that AI detection tools are more reliable than humans at identifying AI images, but the team emphasized that automated tools will also make mistakes.
Identifying AI images
The results of the study suggest people are better at identifying humans than landscapes. The success rate of identifying people was around 65% while photographs of nature were only correctly identified 59% of the time.
The researchers suggest those results could stem from humans' strong ability to recognize faces. Our brains are wired to spot faces, so the connection seems likely.
A recent study out of the University of Surrey discusses how human brains are "drawn to and spot faces everywhere."
Participants saw a similar success rate when looking at both real and AI-generated images (62%) and when focusing only on AI-generated ones (63%).
Several of the best AI image generators were used to create the images presented to quiz takers. Pictures created by a Generative Adversarial Network (GAN) fooled people most often, with a success rate of just 55%.
The researchers emphasized that the game was not designed to compare the photorealism of images created by different models.
"We should not assume that a model architecture is responsible for the aesthetic of its output, the training data is," said the paper. "The model architecture only determines how successful a model is at mimicking a training set."
The images that caused the lowest success rate included elements that look unnatural but were genuine. For example, the lighting in an image may seem "off" at first glance, but unique lighting conditions caused the effect, not AI.
The team behind the study is developing its own AI detector, which is said to have a success rate of over 95% on both real and AI-generated images. It will be interesting to see if AI detectors can outpace the AI tools they're designed to detect.
Take the quiz
The Real or Not quiz is still live, meaning anyone can take it. I confess I scored a below-average 47% when trying to identify images that were created or altered by AI.
In some cases, there were clear signs of AI use, such as artifacts that stuck out or objects that were cut off or incomplete. But in many cases I genuinely could not tell if an image was real or fake.
I suppose I shouldn't be embarrassed, considering thousands of people played the same game and were only slightly better than a coin flip at identifying AI images.
After you take the quiz, please share your score and experience in the comments below. I'd love to know if a tech-savvy audience is better or worse at identifying AI images.

Sean Endicott is a tech journalist at Windows Central, specializing in Windows, Microsoft software, AI, and PCs. He's covered major launches, from Windows 10 and 11 to the rise of AI tools like ChatGPT. Sean's journey began with the Lumia 930, leading to strong ties with app developers. Outside writing, he coaches American football, utilizing Microsoft services to manage his team. He studied broadcast journalism at Nottingham Trent University and is active on X @SeanEndicott_ and Threads @sean_endicott_.