A new study from biometric security firm iProov has revealed a startling truth: 99.9% of people can’t reliably identify deepfakes.

The research, which tested 2,000 consumers in the UK and US, asked participants to distinguish between real and AI-generated images and videos. Despite being primed to look for fakes, only 0.1% of respondents got everything right. The results paint a troubling picture of just how easy it is to fool the human eye and how vulnerable society is to AI-driven deception.

Deepfake blind spot: A growing threat

While awareness of deepfakes is rising, many people remain dangerously overconfident in their ability to detect them. More than 60% of participants believed they could spot a fake, even as they consistently failed the test. Younger adults (18–34) were particularly prone to misplaced confidence, while older generations were more likely to be unaware of deepfakes altogether. Nearly 40% of people aged 65 and over had never even heard of them.

The study also revealed that videos are significantly harder to detect than images. Participants were 36% more likely to misidentify an AI-generated video than a fake image, showing just how convincing deepfake video has become. This raises major concerns about video-based fraud, from impersonation scams to AI-generated misinformation campaigns.

The deepfake misinformation machine

Nearly half of those surveyed (49%) said they trust social media less after learning about deepfakes, and many believe platforms like Meta and TikTok are breeding grounds for AI-generated deception. Despite this concern, just one in five consumers would report a suspected deepfake if they encountered one online.

Deepfake technology has advanced at an alarming rate, with iProov’s 2024 Threat Intelligence Report showing a staggering 704% increase in AI-driven face-swapping scams over the past year. Cybercriminals are already exploiting this technology for fraud, using deepfakes to bypass identity verification systems, open fake accounts, and steal personal information.

The implications stretch far beyond financial crime. From political misinformation to AI-generated “evidence” in legal cases, deepfakes have the power to manipulate public perception on a massive scale. As the line between real and fake continues to blur, the risks to security, democracy, and trust in digital content grow ever more severe.

Possible solutions

Experts agree that human detection alone is no longer enough. With AI-generated content becoming indistinguishable from reality, the fight against deepfakes must rely on advanced biometric verification and AI-driven detection tools. iProov advocates for liveness detection technology—systems that verify not just whether an image or video looks real, but whether it is genuinely a live human interaction happening in the moment.
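To make the idea concrete, here is a minimal, hypothetical sketch of a challenge-response liveness check in Python. This is not iProov's actual system: the random nonce, the 10-second time limit, and the stubbed frame analysis are illustrative assumptions, but they show the general principle of verifying a live interaction rather than judging whether footage merely looks real.

```python
import hmac
import secrets
import time

# Hypothetical sketch of an active (challenge-response) liveness check.
# Not iProov's implementation; it only illustrates the idea described
# above: proving the capture is a live interaction happening right now,
# not a replayed image or a pre-rendered deepfake video.

CHALLENGE_TTL_SECONDS = 10  # the response must arrive while the challenge is fresh


def issue_challenge() -> dict:
    """Server side: generate an unpredictable, single-use challenge.
    In a real system this might be a sequence of screen colours or
    prompted movements; here it is simply a random nonce."""
    return {"nonce": secrets.token_hex(16), "issued_at": time.time()}


def verify_response(challenge: dict, response_nonce: str,
                    frames_show_live_subject: bool) -> bool:
    """Server side: accept only if the response echoes the exact
    challenge, arrives before the deadline, and the captured frames
    pass a (stubbed) analysis for signs of a live subject."""
    fresh = time.time() - challenge["issued_at"] <= CHALLENGE_TTL_SECONDS
    matches = hmac.compare_digest(challenge["nonce"], response_nonce)
    return fresh and matches and frames_show_live_subject


# Example flow: a genuine live session echoes the nonce and passes;
# a pre-generated video cannot know the nonce in advance and fails.
challenge = issue_challenge()
print(verify_response(challenge, challenge["nonce"], True))        # True
print(verify_response(challenge, secrets.token_hex(16), True))     # False
```

The design point is that the challenge is unpredictable and short-lived, so content generated before the session begins cannot answer it, however photorealistic it looks.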

Andrew Bud, founder and CEO of iProov, said: “It’s down to technology companies to protect their customers by implementing robust security measures. Using facial biometrics with liveness provides a trustworthy authentication factor and prioritises both security and individual control, ensuring that organisations and users can keep pace and remain protected from these evolving threats.”
