
How to Spot Fake AI Video Content

Video used to be the gold standard. Photos could be edited. Text could be faked. But video? Video was real.

That's no longer true. Deepfake technology has matured to the point where a realistic video of someone saying something they never said can be created in under twenty minutes for trivial cost. In 2024, a political consultant admitted to paying $150 for a deepfake robocall impersonating Joe Biden, urging people not to vote in New Hampshire's Democratic primary. It took less than twenty minutes to produce, and he claimed it earned him five million dollars' worth of exposure.

That's where we are. Here's what to look for.

Watch the Face

The face is where deepfakes most often give themselves away, and also where AI is improving fastest. So watch carefully, and watch for specifics.

Eyes. Blinking is one of the oldest tells. Early deepfakes blinked too little or not at all. Modern ones have improved, but the rhythm can still feel slightly mechanical. Watch for blinking that happens at unnaturally regular intervals, or eyes that don't quite react to what's being said.
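If you want to see what "unnaturally regular" means in practice, the idea can be made concrete with a little arithmetic. The sketch below is purely illustrative, not a detection tool: it assumes you have rough blink timestamps (say, jotted down while watching a clip) and measures how much the gaps between blinks vary. Human blinking is uneven; near-identical gaps are the mechanical rhythm described above. The 0.2 threshold is an arbitrary assumption for the example, not a validated cutoff.

```python
# Illustrative sketch: is a blink rhythm suspiciously regular?
# Human blinking is irregular; metronome-like gaps are a (weak) flag.
# The 0.2 threshold is an arbitrary assumption, not a validated cutoff.

def blink_regularity(blink_times):
    """Given blink timestamps in seconds, return the coefficient of
    variation of the gaps between blinks (std dev / mean).
    Lower values mean a more mechanical rhythm."""
    gaps = [b - a for a, b in zip(blink_times, blink_times[1:])]
    mean = sum(gaps) / len(gaps)
    var = sum((g - mean) ** 2 for g in gaps) / len(gaps)
    return (var ** 0.5) / mean

human = [0.0, 2.1, 6.8, 8.0, 13.5, 15.2]    # uneven gaps
suspect = [0.0, 3.0, 6.1, 9.0, 12.0, 15.1]  # near-identical gaps

print(blink_regularity(human))    # well above 0.2: irregular, human-like
print(blink_regularity(suspect))  # well below 0.2: mechanical rhythm
```

Real detection systems do something far more sophisticated, but the intuition is the same: the tell isn't any single blink, it's the pattern.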

Lip sync. When we talk, our whole face moves, not just our lips. Cheeks lift, the jaw shifts, micro-expressions flicker around the eyes. AI still struggles to replicate all of this simultaneously. Watch the face as a whole while someone speaks, not just the mouth. If the lips are moving but the rest of the face feels static, that's a flag. According to researchers studying AI video tools, mouth movement remains one of the hardest problems for AI to fully solve.

Facial symmetry. Human faces are slightly asymmetrical. AI-generated faces often trend toward uncanny perfection or get the asymmetry wrong. If a face looks oddly smooth or too proportional, slow down.

Skin and hair texture. Deepfakes can struggle with the edges of hair, the texture of skin in motion, and the fine details around the ears and jawline. These areas are worth examining, especially in high-resolution video.

Watch the Body and Background

Hands. AI has gotten better at generating hands, but in video, hands in motion still trip up the models. Look for fingers that pass through objects, or movement that doesn't follow the physics of weight and momentum. Watch the hands.

Lighting consistency. In real video, lighting is continuous and follows physics. In deepfakes, lighting can flicker, cast shadows from illogical directions, or shift between cuts in ways that don't make sense. Carnegie Mellon researchers specifically flag inconsistent lighting as one of the more reliable deepfake tells.

Background details. AI-generated backgrounds often contain text that's garbled or slightly wrong. Look at signs, screens, posters, anything with words. If the text is blurry or nonsensical, the video is likely synthetic.

Physics. Does the way objects and people move follow the rules of the physical world? AI video frequently gets gravity, momentum, and motion wrong in subtle ways. A person who floats slightly when they should land. Hair that moves independently of the wind. These small physics errors are giveaways.

Listen to the Audio

Voice cadence. AI-generated speech tends to have slightly unnatural pacing, particularly at the ends of sentences. The rhythm of how a person stresses syllables and trails off is hard to replicate. If speech sounds polished but slightly robotic, trust that instinct.

Background audio. Authentic video has ambient sound that's consistent and realistic. AI-generated or heavily manipulated audio often has an oddly clean quality, like the background has been removed and replaced.

Audio-video sync. At higher quality this is increasingly hard to spot, but if the audio seems even slightly mismatched with the lip movements, especially if the mismatch appears in some parts of the video but not others, that inconsistency is a signal.

Check the Source Before You Share

MIT's Media Lab has been researching deepfake detection for years and emphasises this consistently: no single visual tell is definitive. The more reliable approach is to verify the source.

Did a verified account post it? Is it being shared by accounts you recognise and trust? Is the content being reported by other outlets independently? Is there an original source you can trace it back to?

Shocking video content, particularly of public figures, should always clear these questions before you share it. The more emotionally charged the content, the more likely it's been designed to short-circuit your scepticism.

The Uncomfortable Truth

Deepfake fraud caused over $200 million in losses in the first quarter of 2025 alone. A teacher in the UK was driven into hiding in January 2025 after a deepfake video falsely showed her making racist remarks. The harm is real, it's personal, and it's accelerating.

The tells listed above will help, for now. But the technology is improving faster than most people's ability to spot it. What catches deepfakes today may not catch them in six months.

The only long-term defence isn't better detection skills. It's platforms that verify the humans posting content in the first place. Unverified video from an unverified account should be treated with scepticism by default. That's not paranoia. At this point, it's just sense.

Hi Friction doesn't solve the deepfake problem across the whole internet. But every video posted here comes from a verified human. That's a start.