The Liar's Dividend: How AI Made Reality Optional

On March 16, 2026, President Trump accused Iran of using artificial intelligence to fabricate war footage. He claimed Iran had shown "kamikaze boats" that do not exist, used AI to depict a successful attack on the USS Abraham Lincoln, and posted "totally AI generated" images of 250,000 Iranians at a rally to support Supreme Leader Mojtaba Khamenei, claiming the event "never took place."

Reuters verified images from the Iraqi port of Basra showing explosive-laden Iranian boats attacking two fuel tankers, killing at least one crew member. Several pro-government demonstrations have occurred in Iran since the war began. Multiple news organizations, including Reuters, have published photographs of crowds in Tehran.

The claims were false. But that is not the point. AI has given every actor with power a permanent escape hatch from reality. The mere existence of AI-generated content means anything inconvenient can be dismissed as fake. You no longer need to disprove something. You just need to call it AI.

The Liar's Dividend

Chesney and Citron identified this mechanism in 2019 and named it the liar's dividend: as the public becomes more aware that AI can fabricate realistic content, liars gain a new defense. They can dismiss authentic evidence by claiming it was generated. The technology doesn't even need to be used. Its existence is sufficient. The possibility of fabrication becomes indistinguishable from the fact of fabrication in the mind of anyone who wants to believe the content is false.

This is the new operational reality.

Days before Trump's claims, Israeli Prime Minister Benjamin Netanyahu posted a video address to his country. Within hours, accounts with ties to Iran dismissed it as an AI-generated deepfake. Users claimed to spot a sixth finger on Netanyahu's hand, a once-common tell for AI fabrication. Fact-checkers found no such artifact. It didn't matter. The post claiming six fingers was seen more than two million times on X.

Netanyahu, apparently recognizing the absurdity, posted a follow-up video from a coffee shop, holding up his hands and flashing five fingers. A new kind of proof of life for an era where reality itself requires authentication.

Two different governments. Two different directions. Both exploiting the same mechanism. Iran-linked accounts used AI doubt to dismiss real footage as fake. The U.S. president used AI doubt to dismiss real events as fabricated. The technology became the alibi for both sides simultaneously.

The Confirmation Engine

Here is where the architecture becomes self-reinforcing. AI did not create political bias. People have always filtered information through the beliefs they already hold. What AI has done is provide a universal, technologically plausible justification for rejecting anything that contradicts those beliefs. Before AI, dismissing inconvenient evidence required an alternative explanation. A conspiracy. A misinterpretation. Something that could itself be challenged.

Now the dismissal requires only two words: "AI generated."

If you oppose the war and see footage of Iranian civilian casualties, you believe it. If you support the war and see the same footage, it's AI. If you support Iran and see Netanyahu speaking, he's a deepfake. If you oppose Iran, the rally footage is fabricated. The content is identical. The evaluation is entirely determined by prior commitment.

This is not a failure of media literacy. It is a structural condition. The information architecture now permits every individual to construct a version of reality that confirms what they already believe, and to reject contradictory evidence with a technologically credible explanation that requires no further proof.

Quattrociocchi, Capraro, and Perc (2025) described a related condition they call epistemia: a state where linguistic plausibility substitutes for epistemic evaluation. AI generates outputs so fluent and confident that users experience the feeling of knowing something without having done the cognitive labor of actually evaluating it. The liar's dividend is epistemia's mirror image. Instead of accepting false content because it sounds true, people reject true content because AI makes falseness plausible.

Both erode the same capacity for independent judgment. One replaces it with algorithmic output. The other weaponizes algorithmic possibility to nullify evidence entirely.

The Exhaustion Function

Alberto Fittarelli of the Citizen Lab at the University of Toronto identified the operational consequence: "Verifying everything is incredibly exhausting, and not everyone can afford doing it."

This exhaustion is not a flaw. It is the intended feature.

When every piece of evidence can be plausibly dismissed as AI, the cost of maintaining an accurate picture of reality rises for individuals and falls for institutions with the resources to manufacture doubt. Governments, corporations, and state media can dismiss footage at scale. Individual citizens must verify each claim independently, with tools most people do not have and time most people cannot spare.

The asymmetry is structural. The liar has one move: "It's AI." The person seeking truth has to prove authenticity for every single piece of evidence, every single time. This is unsustainable. And the predictable outcome is not that people get better at verification. It is that people stop trying.

Meta's Oversight Board acknowledged this problem last week, noting that AI-generated deception was circulating during the Iran conflict and calling for better identification of fabricated content. But identification alone cannot solve a problem where the mechanism of doubt is not the content itself, but the possibility that any content might be fabricated.

The Architecture of Optional Reality

This is the convergence point of everything the Cognitive Privacy Project has documented.

Cognitive offloading erodes the capacity to evaluate information independently. Epistemia replaces evaluation with plausibility-matching. Algorithmic recommendation systems sort populations into informational environments optimized for engagement, not accuracy. And the liar's dividend provides the final layer: a permanent, technologically credible mechanism for rejecting any evidence that contradicts the beliefs those systems have reinforced.

Each layer depends on the others. Remove independent judgment and people cannot verify claims. Remove epistemic labor and people accept what sounds right. Remove informational diversity and people only encounter confirming evidence. Add the liar's dividend and even contradictory evidence that penetrates the filter can be dismissed.

The result is a population for whom reality is not discovered. It is selected.

This is not a future risk. It is the operational environment of the Iran war. It is governments on both sides dismissing verified footage as AI fabrication. It is millions of people choosing which reality to inhabit based on which narrative they entered the conflict already believing.

The question the Cognitive Privacy Project has asked since its founding is whether we will establish protective boundaries around the cognitive processes that make human agency possible. The Iran war is demonstrating, in real time, what happens when those boundaries do not exist.

Reality is not supposed to be optional. The architecture we have built has made it so.

References

Cai, K. (2026, March 15). Trump accuses Iran of using AI to spread disinformation. Reuters. https://www.reuters.com/business/media-telecom/trump-accuses-iran-using-ai-spread-disinformation-2026-03-16/

Chesney, B., & Citron, D. K. (2019). Deep fakes: A looming challenge for privacy, democracy, and national security. California Law Review, 107(6), 1753–1820.

Quattrociocchi, W., Capraro, V., & Perc, M. (2025). Epistemological fault lines between human and artificial intelligence. arXiv preprint, arXiv:2512.19466. https://doi.org/10.48550/arXiv.2512.19466

Thompson, S. A., & Hsu, T. (2026, March 17). Netanyahu posts 'proof of life' video as A.I. sows doubts about what's real. The New York Times. https://www.nytimes.com/2026/03/17/technology/netanyahu-ai-video-iran-israel.html


Timothy Cook is Director of The Cognitive Privacy Project and author of the "Algorithmic Mind" column at Psychology Today. He is Securiti Certified in AI Security & Governance.

Contact: timothy@cognitiveprivacyproject.org | Web: cognitiveprivacyproject.org

© 2026 Timothy Cook / The Cognitive Privacy Project. All rights reserved. Licensed under CC BY-NC-ND 4.0. You may share this work with attribution. Commercial use and derivatives require written permission.
