The First Week I Fell for Political Deepfakes Twice (That I Know Of)

For the past decade or so, social epistemologists, among others, have been warning and theorizing about the impending risks of political deepfake images and videos. Thus, I expected the day would come when I would fall for such things.

But I suppose I always vaguely envisioned that I would first be fooled by, or at least unsure about, something of great importance. Perhaps voice cloning technology would be used to release a fake speech from a world leader. Or maybe deepfake video technology would be used to falsely depict a candidate for high political office in a career-ending compromising situation.

I was, in some sense, prepared for such a day. What I wasn’t prepared for was the utter banality of the first political deepfakes that I would discover I had fallen for. Nor was I prepared for the happenstance way in which I (belatedly) managed to figure out they were deepfakes. As someone who works in social epistemology and the philosophy of free speech, I think it is worth reflecting on how deepfakes are actually being deployed and what the upshots might be for the dissemination of knowledge and the future of public discourse.

Both deepfakes showed up in my social media feeds in December 2025. The first appeared to be a photo of Trump walking down a hallway using a walker while men in suits watched from behind. The caption (taken from a post on X) read “BREAKING: an image has leaked showing Trump using a walker moments after he signed an executive order banning states from regulating AI.”

I remember seeing the image, briefly thinking about the hypocrisy of Trump criticizing Biden’s health, and continuing to scroll. I didn’t really think much of it. After all, there have been plenty of credible reports in the last year about outward signs that Trump’s health is less than excellent, such as large bruises on his hand and his swollen ankles. The image didn’t strike me as out of keeping with those other reports and images. Thus, I was tricked. But the trick was pretty mundane, and it lacked any discernible impact on my behavior.

Just days later, I scrolled past another deepfake image on social media. This one purported to show a grainy photo of JD Vance yelling at someone who, from the back, plausibly looked like his wife, Usha. For a moment, I fell for this one too. At least two reasons help explain why. First, rumors were already circulating online around that time about friction in the Vances’ marriage. Second, I think poorly of JD Vance and his character, so I’m no doubt predisposed to believe things that reinforce that assessment.

But this time, after I had scrolled on, I got to thinking more. It was the graininess of the image that nagged at me. A grainy image like that seemed way too easy to fake. So I decided to run a Google search to see if I could verify the image. Almost immediately, it became apparent that it was a deepfake. Snopes and other outlets had reported the image as fake, and Vance had responded to the viral deepfake on X: “I always wear an undershirt when I go out in public to have a fight loudly with my wife.”

It was the detail about the white undershirt that really stuck out to me. In hindsight, it should have been immediately obvious that the image was fake. Given the scrutiny that comes with being Vice President of the United States, it’s highly unlikely Vance would have been spotted in public at what looked like an upscale restaurant wearing just a plain white undershirt.

I felt a little silly that I’d fallen for the deepfake image, even if it was only for a moment. And it got me thinking. What else might I have fallen for recently?

It was at that point that I decided to see if I could verify whether the image of Trump using the walker was real or a deepfake. After all, it would be pretty ironic (and useful political commentary) for a convincing deepfake image of Trump to be disseminated right after he supposedly “signed an executive order banning states from regulating AI.” (Trump really did sign an executive order in December 2025 plausibly fitting that description.) All it took was a quick Google search and a Snopes report to determine that the image of Trump using the walker was a deepfake too.

Falling for these relatively low impact deepfakes taught me a few things.

First, I had been on guard, in some sense, for the wrong thing. I was ready for the day when a purportedly important proclamation was challenged as a deepfake. I was less ready for the infiltration of my feed by deepfake images that resonate with the kind of partisan political messaging I’ve been receiving for years.

Second, I received verification of something that social scientists have been telling us for a while: falsehoods often travel faster and farther than the truth. I never saw anything in my feed debunking either of the deepfakes I happened to identify. I didn’t even remember where I first saw them, so there was no targeted correction I could make by informing the sources. (Nor is it obvious to me that the sources would have even cared.)

Third, I probably don’t have a good sense of how many times I’ve actually been tricked by deepfakes so far. There’s simply too much content that I gloss over quickly rather than thoughtfully. I expect the same is likely true for many others.

Fourth, while I earlier described these deepfakes as low impact, it would be a mistake to think they had no impact. It’s hard to tell what contribution they might have made to my overall impression of Trump and Vance alongside the other information I’ve received.

All this suggests that trying to maintain a productive and informative virtual public square is a task that will require work on multiple fronts. As an individual, I should take care to be reflective and critical about what is presented to me as information or knowledge. But actions on the individual level like this alone likely won’t suffice. We also need to think about and work to create the kind of information environment that will help us collectively meet our goals.

Featured image created using ChatGPT (do with that what you will).

Mark Satta

Mark Satta is an Associate Professor of Philosophy, Linguistics, and Law at Wayne State University in Detroit, Michigan. His research interests include epistemology, philosophy of language, philosophy of law, ethics, and social and political philosophy, broadly construed.
