The Deepfake You Never Need to Make: How the Liar's Dividend Is Destroying Trust in Evidence
The real crisis is not synthetic media. It is the collapse of evidential confidence.
You do not need to create a deepfake to benefit from one. You just need deepfakes to exist.
That dynamic, which researchers call the "liar's dividend", is doing more damage to public discourse than any individual piece of synthetic media. And the data now backs this up in ways that should concern anyone who cares about truth, accountability, or democratic governance.
A study published in the American Political Science Review, involving more than 15,000 adults, found that falsely dismissing damaging information as misinformation did more to protect a politician's support than apologising or staying silent. Simply saying "that's a deepfake" works as a defence, even when the evidence is real. The strategy has a structural limitation: it is largely ineffective against video evidence, which means it operates primarily by muddying text- and audio-based evidence. But as synthetic video improves, that limitation is eroding fast.
This is the crisis I want to talk about. Not the fakes themselves, but the generalised collapse of confidence in evidence of any kind.
The numbers are worse than you think
Let me put some figures on the table.
A meta-analysis covering 56 papers and 86,155 participants found that human detection accuracy for synthetic media is 55.54% overall. That is barely above flipping a coin. Audio fares slightly better at 62%. Video sits at 57%. Images at 53%. Text at 52%.
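To see what a 55% verdict is actually worth, here is a minimal Bayesian sketch. The figures are the meta-analysis's; the symmetric-accuracy assumption and the framing are mine, added purely for illustration.

```python
# What is a 55%-accurate judgment worth? A minimal Bayesian sketch using
# the meta-analysis figures quoted above. Assumes, for simplicity, that
# accuracy is symmetric across real and fake content (the meta-analysis
# reports a single overall figure, so this is an illustrative assumption).

def posterior_fake(prior_fake: float, accuracy: float) -> float:
    """P(content is fake | a viewer judged it fake)."""
    true_flag = accuracy * prior_fake                # fake, correctly flagged
    false_flag = (1 - accuracy) * (1 - prior_fake)   # real, wrongly flagged
    return true_flag / (true_flag + false_flag)

for medium, acc in [("overall", 0.5554), ("audio", 0.62), ("video", 0.57),
                    ("images", 0.53), ("text", 0.52)]:
    print(f"{medium:8s} 50/50 prior -> {posterior_fake(0.5, acc):.1%} "
          f"confidence after a 'fake' verdict")
```

With an even prior, confidence after the verdict simply equals the raw accuracy: someone who declares a clip fake should, on this evidence alone, be only about 55% sure.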
We are, collectively, terrible at spotting fakes. And we know it.
According to iProov, only 0.1% of people correctly identified all the fake and real media in their testing. Deloitte reports that half of respondents are more sceptical of all media than they were a year ago. Some 68% are concerned about deception, and 59% say they struggle to tell the difference between real and synthetic content.
This is not just a detection problem. It is a trust problem. When people know fakes are pervasive and know they cannot reliably spot them, they do not become better consumers of information. They become more suspicious of everything, including things that are true.
Impostor bias: when scepticism becomes the weapon
There is a useful term for this phenomenon: impostor bias. Deepfake awareness spreads doubt far beyond the content that is actually faked. Every video, every audio clip, every photograph now carries an asterisk. The question is no longer "is this real?" but "can anyone prove this is real?"
That inversion matters enormously.
Consider what happened in Ireland during the 2025 presidential election. A deepfake video of candidate Catherine Connolly announcing her withdrawal from the race circulated online, reaching 30,000 views. It took Meta 12 hours to remove it. No enforcement action was taken before the election concluded.
The damage was not just the 30,000 people who saw a fake video. It was the precedent. Every future Irish election now carries the possibility that any inconvenient piece of evidence, any damaging recording, any whistleblower testimony, can be waved away as AI-generated. The liar's dividend compounds over time.
The detection arms race is already lost
I keep hearing that better detection tools will solve this. The evidence says otherwise.
The Columbia Journalism Review has documented that generative AI is advancing faster than the tools designed to detect it. OpenAI's own detection system achieves 98.8% accuracy on content generated by DALL-E 3. Against content from other tools, it manages 5 to 10%.
Read that again. The best-funded AI lab on the planet built a detector that works brilliantly on its own outputs and fails almost completely on everything else. This is not a minor gap. It is a structural limitation of the detection-based approach.
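A toy sketch makes the structural point concrete. This is not OpenAI's classifier, and the feature distributions are invented; it only shows why a detector trained on one generator's statistical fingerprint says nothing about generators it never saw.

```python
# Toy illustration of detector failure out of distribution. All numbers
# are invented; this is not any real detector. A classifier trained to
# separate "real" content from generator A's fakes learns A's
# fingerprint, and nothing about generator B's.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, d = 2000, 8

real       = rng.normal( 0.0, 1.0, (n, d))  # residual stats of real images
gen_a_fake = rng.normal( 1.5, 1.0, (n, d))  # generator A's fingerprint
gen_b_fake = rng.normal(-1.5, 1.0, (n, d))  # generator B's, unseen in training

X = np.vstack([real, gen_a_fake])
y = np.array([0] * n + [1] * n)             # 0 = real, 1 = fake
clf = LogisticRegression(max_iter=1000).fit(X, y)

print(f"generator A fakes flagged: {clf.predict(gen_a_fake).mean():.1%}")
print(f"generator B fakes flagged: {clf.predict(gen_b_fake).mean():.1%}")
# A is flagged almost every time; B almost never, because B's fingerprint
# lands on the "real" side of the boundary the model learned from A.
```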
The C2PA content credentials standard, backed by Adobe, OpenAI, Google, Samsung, and others, takes a different approach. Rather than detecting fakes, it provides provenance information for authentic content, a kind of digital chain of custody. The idea is sound. The execution has problems.
Midjourney, one of the most popular image generation tools, does not embed C2PA credentials. Social media platforms routinely strip metadata from uploaded content, including provenance information. And the absence of a credential does not prove content is fake; it might just mean it was taken on an older phone or shared through a platform that strips metadata.
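Metadata stripping is trivial to reproduce. The sketch below uses Pillow and EXIF as a stand-in; C2PA manifests are carried in their own embedded segments, but a naive decode-and-re-encode of the kind upload pipelines perform discards them just the same. The filenames are placeholders.

```python
# How an upload pipeline loses provenance: decode the pixels, re-encode
# the file. EXIF stands in here for any metadata-borne credential;
# "photo.jpg" is a placeholder for any JPEG with metadata attached.
from PIL import Image

original = Image.open("photo.jpg")
print("EXIF bytes before:", len(original.info.get("exif", b"")))

# Re-encode without passing the metadata through, as many platforms do.
original.save("reuploaded.jpg", quality=85)

reuploaded = Image.open("reuploaded.jpg")
print("EXIF bytes after: ", len(reuploaded.info.get("exif", b"")))  # 0
# The pixels survive the round trip; the credential does not.
```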
Provenance is a better architectural bet than detection. But it requires universal adoption, and universal adoption requires every platform, every device manufacturer, and every social media company to agree on standards and enforce them consistently. I would not hold my breath.
The scale is staggering
The numbers on synthetic media incidents are accelerating in a way that makes the policy response look almost absurd.
In Q3 2025 alone, there were 2,031 verified deepfake incidents, a 312% year-on-year increase. Voice cloning scams rose 1,600% in Q1 2025. Financial losses from deepfake-related fraud exceeded $3 billion in the United States between January and September 2025, with projections reaching $40 billion globally by 2027.
And in 2026, voice cloning crossed what researchers are calling the "indistinguishability threshold", the point at which cloned voices are reliably indistinguishable from real ones by human listeners.
That threshold matters because voice has historically been one of our most trusted forms of evidence. We trust phone calls. We trust voice messages. We trust recordings. That trust is now fundamentally misplaced.
The policy response is structurally delayed
The EU AI Act's Article 50, which requires machine-readable labelling of AI-generated content, comes into force on 2 August 2026. Fines are substantial: breaches of the transparency obligations can draw up to 15 million euros or 3% of global turnover.
On paper, this looks serious. In practice, there are problems.
The Code of Practice accompanying the regulation is voluntary. Private individuals are exempt. And as already noted, platforms strip metadata. A labelling requirement is only as strong as the infrastructure that preserves those labels through the distribution chain. Right now, that infrastructure does not exist.
There is also a jurisdictional question. Deepfakes do not respect borders. A fake generated in a country with no regulation, distributed through a platform headquartered in another, targeting voters in a third, falls into a gap between regulatory regimes that no single piece of legislation can close.
I do not say this to argue against regulation. The EU AI Act is better than nothing, and the signalling effect matters. But anyone who thinks Article 50 will solve the deepfake problem is underestimating the speed and scale of what is happening.
Who benefits from the erosion of trust
This is the question that does not get asked enough. The liar's dividend is not distributed evenly. It disproportionately benefits those with existing power.
A politician caught on tape saying something indefensible can now claim the recording is fake. A corporation confronted with internal documents can question their authenticity. An abuser presented with evidence can sow doubt about whether it is real.
The people who lose are the ones who rely on evidence to hold power accountable: journalists, whistleblowers, abuse survivors, citizens trying to make informed decisions. These groups already face barriers to being believed. Deepfakes have made those barriers higher.
This is not a theoretical concern. It is playing out right now, in courtrooms, newsrooms, and election campaigns. Every time a public figure dismisses inconvenient evidence as "probably AI-generated", the cost of proving truth goes up. And that cost falls hardest on people with the fewest resources.
What can actually be done
I wish I had a clean set of solutions. I do not. But there are things that would help.
Invest in provenance infrastructure, not just detection. C2PA and similar standards are the right direction. They need to become mandatory, not voluntary. Platforms should be required to preserve and display provenance metadata rather than stripping it. (A sketch of what such a manifest has to guarantee follows this list.)
Treat the liar's dividend as the primary threat. Policy conversations fixate on the production of fakes. The larger problem is the strategic use of deepfake plausibility to discredit real evidence. Legal and institutional frameworks need to account for this.
Build media literacy that goes beyond "spot the fake". Teaching people to identify deepfakes is a losing strategy when detection accuracy is 55%. Teaching people how to evaluate sources, check provenance, and understand the incentives behind claims of fakery is more durable.
Support open-source verification tools. Proprietary detection systems that only work on their own outputs are not a public good. Open-source, community-audited tools for content verification deserve public funding and institutional support.
Hold platforms accountable for distribution speed. Twelve hours to remove a deepfake of a presidential candidate is not acceptable. Platforms that profit from engagement should bear the cost of rapid verification during sensitive periods like elections.
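To make the first recommendation concrete: the guarantee provenance infrastructure has to provide is a signed claim binding content bytes to an origin, checkable by anyone downstream. Real C2PA manifests use certificate chains inside embedded JUMBF containers; the HMAC shared secret and the names below are simplifications to keep the sketch self-contained.

```python
# A minimal sketch of a provenance manifest: a signed claim binding
# content bytes to an origin. Real C2PA uses X.509 certificate chains,
# not a shared secret; HMAC is used here only to keep the example
# self-contained. All names and values are illustrative.
import hashlib, hmac, json

SIGNING_KEY = b"origin-device-secret"  # stand-in for a proper key pair

def make_manifest(content: bytes, origin: str) -> dict:
    claim = {"sha256": hashlib.sha256(content).hexdigest(), "origin": origin}
    payload = json.dumps(claim, sort_keys=True).encode()
    sig = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "signature": sig}

def verify(content: bytes, manifest: dict) -> bool:
    payload = json.dumps(manifest["claim"], sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(manifest["signature"], expected)
            and manifest["claim"]["sha256"] == hashlib.sha256(content).hexdigest())

photo = b"...raw image bytes..."
m = make_manifest(photo, origin="camera-serial-1234")
print(verify(photo, m))          # True: content is untouched
print(verify(photo + b"x", m))   # False: any edit breaks the binding
```

The hard part is not the cryptography. It is getting every device, tool, and platform to create, carry, and check these claims, which is exactly where adoption stalls today.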
None of these are sufficient on their own. Together, they represent a more honest approach than waiting for a detection tool that will reliably tell us what is real.
The deeper problem
The deepfake crisis is, at its core, an epistemic crisis. It is about what counts as evidence, who gets to challenge evidence, and what happens when the very concept of proof becomes negotiable.
We have spent centuries building institutions and norms around the idea that evidence matters. Courts, journalism, science, democratic accountability: all depend on the assumption that reality can be documented and that documentation can be trusted.
Synthetic media does not just produce convincing fakes. It introduces a permanent, universal reason to doubt anything documented. That doubt is the product. The fakes are just the delivery mechanism.
The technology to create a deepfake you cannot disprove already exists. The harder question, the one I keep coming back to, is whether we can rebuild the social infrastructure of trust fast enough to matter. Because right now, the people with the most to gain from a world where nothing can be proven are winning.