How to identify AI-generated videos online

Sorry to disappoint, but if you're looking for a quick list of foolproof ways for detecting AI-generated videos, you're not going to find it here. Gone are the days of AI Will Smith grotesquely eating spaghetti. Yes, there are some tells, but AI video makers are getting better all the time, and the latest tools can create convincing, photorealistic videos with a few clicks.

Right now, AI-generated videos are still a relatively nascent modality compared to AI-generated text, images, and audio, because getting all the details right is a challenge that requires a lot of high-quality data. "But there's no fundamental obstacle to getting higher quality data," only labor-intensive work, said Siwei Lyu, a professor of computer science and engineering at the University at Buffalo, SUNY.

In the past six months, AI video generators have become so good at creating realistic videos that they often dupe the casual scroller. Telltale artifacts that used to give the game away, such as morphing faces and shape-shifting objects, are seen far less frequently. There's not much fakery in evidence in the viral AI-generated videos of the emotional support kangaroo, bunnies on a trampoline, or street interviews made with Google's Veo 3 model (which can generate sound with videos).

The key to identifying AI-generated videos, as with any AI modality, lies in AI literacy. "Understanding that [AI technologies] are growing and having that core idea of 'something I'm seeing could be generated by AI,' is more important than, say, individual cues," said Lyu, who is the director of UB's Media Forensic Lab. 

Navigating the AI slop-infested web requires using your online savvy and good judgment to recognize when something might be off. It's your best defense against being duped by AI deepfakes, disinformation, or just low-quality junk. It's a hard skill to develop, because every aspect of the online world fights against it in a bid for your attention. But the good news is, it's possible to fine-tune your AI detection instincts.

"By studying [AI-generated images], we think people can improve their AI literacy," said Negar Kamali, an AI research scientist at Northwestern University's Kellogg School of Management, who co-authored a guide to identifying AI-generated images. "Even if I don't see any artifacts [indicating AI-generation], my brain immediately thinks, 'Oh, something is off,'" added Kamali, who has studied thousands of AI-generated images. "Even if I don't find the artifact, I cannot say for sure that it's real, and that's what we want."

What to look out for: Imposter videos vs. text-to-video clips

Before we get into identifying AI-generated videos, let's distinguish the different types. They generally fall into two categories: imposter videos and videos generated from scratch by a text-to-video diffusion model.

Imposter videos are AI-edited videos built on two techniques: face swapping, where a person's entire face is swapped out for someone else's (usually a celebrity's or politician's) and made to say something fake, and lip syncing, where a person's mouth is subtly manipulated to match different audio.

Imposter videos: why regulators are cracking down

Imposter videos are generally pretty convincing; the technology has been around longer, and they build off of existing footage instead of generating something from scratch. Remember those Tom Cruise deepfake videos from a few years ago that went viral for being so convincing? They worked because the creator, Chris Ume, looked a lot like Tom Cruise, worked with a professional Tom Cruise impersonator, and did lots of meticulous editing, according to an interview Ume gave to The Verge.

These days, an abundance of apps can accomplish the same thing, and some can even — terrifyingly — clone a person's voice from a short sound bite the creator finds online.

That said, there are some things to look for if you suspect an AI video deepfake. First of all, look at the format of the video. AI video deepfakes are typically "shot" in a talking-head format, where you can just see the heads and shoulders of the speaker, with their arms out of view (more on that in a minute). 

To identify face swaps, look for flaws or artifacts around the boundaries of the face. "You typically see artifacts when the head moves obliquely to camera," said digital forensics expert and UC Berkeley Professor of Computer Science Hany Farid. As for the arms and hands, "If the hand moves, or something occludes the face, [the image] will glitch a little bit," Farid continued. And watch the arms and body for natural movements. "If all you're seeing is this," — on our Zoom call, Farid keeps his arms stiff and by his sides — "and the person's not moving at all, it's fake." 

If you suspect a lip sync, focus your attention on the subject's mouth — especially the teeth. With fakes, "We have seen people who have irregularly shaped teeth," or the number of teeth change throughout the video, said Lyu. Another strange sign to look out for is "wobbling of the lower half" of the face, said Lyu. "There's a technical procedure where you have to exactly match that person's face," he said. "As I'm talking, I'm moving my face a lot, and that alignment, if you got just a little bit of imprecision there, human eyes are able to tell." This gives the bottom half of the face a more liquid, rubbery effect.

When it comes to AI deepfakes, Aruna Sankaranarayanan, a research assistant at the MIT Computer Science and Artificial Intelligence Laboratory, says her biggest concern isn't deepfakes of the world's most famous politicians, like Donald Trump or Joe Biden, but of important figures who may not be as well known. "Fabrication coming from them, distorting certain facts, when you don't know what they look like or sound like most of the time, that's really hard to disprove," said Sankaranarayanan, whose work focuses on political deepfakes. Again, this is where AI literacy comes into play; videos like these require some research to verify or debunk.

In April 2025, Congress passed the Take It Down Act, making it a federal crime to post or share nonconsensual intimate imagery. Another bill, the NO FAKES Act, is making its way through the Senate; it aims to provide legal protections against unauthorized AI-generated replicas.

How to spot text-to-video clips

While regulators are cracking down on imposter videos, text-to-video generators have exploded in popularity. You can now generate AI videos directly within ChatGPT and Google Gemini. And Luma, Kling, and Freepik are just a few of the other easy-access video generators that have proliferated online.

With a short text description, you can generate any kind of video your imagination dreams up. The majority of AI-generated videos shared online fall into the category of, "Hey, look what I can do with this cool new technology." This can range from the absurd, like a cat jumping off an Olympic diving board, to the downright misleading, like fake videos of hurricane damage. But all of it contributes to a confusing, dystopian experience, where it's harder and harder to separate AI-generated fiction from reality.

What's more, many accounts circulating AI-generated videos are profiting from the clickbait by deliberately deceiving users. On TikTok, it's practically impossible to know whether that creator selling you the latest skincare product is AI-generated or not. AI-generated videos made with TikTok's tools are automatically disclosed, but that doesn't stop users from uploading AI-generated or edited videos made with tools outside of the platform.

You can try looking for context clues, the experts say. Farid said to look out for "temporal inconsistencies," such as "the building added a story, or the car changed colors, things that are physically not possible." "And often it's away from the center of attention where that's happening," he said. So, home in on the background details. You might see unnaturally smooth or warped objects, or a person's size change as they walk around a building, said Lyu.
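The "temporal inconsistency" idea can even be roughed out in code. The sketch below is a toy heuristic, not a forensic tool: it compares per-channel color histograms of consecutive video frames (represented as NumPy arrays) and flags frames where the color distribution jumps sharply, the kind of thing that happens when a background car abruptly changes color. The function names and the threshold value are illustrative assumptions, not part of any expert's method.

```python
import numpy as np

def channel_histogram(frame: np.ndarray, bins: int = 16) -> np.ndarray:
    """Concatenated per-channel color histogram, normalized to sum to 1."""
    hists = [np.histogram(frame[..., c], bins=bins, range=(0, 256))[0]
             for c in range(frame.shape[-1])]
    h = np.concatenate(hists).astype(float)
    return h / h.sum()

def flag_abrupt_changes(frames, threshold: float = 0.25) -> list[int]:
    """Return indices of frames whose color distribution jumps
    relative to the previous frame (L1 histogram distance above
    threshold). A hit only means 'look here', not 'this is AI'."""
    flagged = []
    prev = channel_histogram(frames[0])
    for i, frame in enumerate(frames[1:], start=1):
        cur = channel_histogram(frame)
        if np.abs(cur - prev).sum() > threshold:
            flagged.append(i)
        prev = cur
    return flagged
```

Real videos have cuts and lighting changes that would also trip a detector this crude, which is exactly why the experts emphasize judgment over any single automated cue.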

Kamali says to look for "sociocultural implausibilities," context clues where the reality of the situation doesn't seem plausible. "You don't immediately see the telltales, but you feel that something is off — like an image of Biden and Obama wearing pink suits," or the Pope in a Balenciaga puffer jacket.

The artifacts may change, but good judgment remains.

But relying too much on certain cues to verify whether a video is AI-generated could get you into trouble. 

Lyu's 2018 paper, which showed that AI-generated videos could be detected because the subjects didn't blink properly, was widely publicized in the AI community. As a result, people started looking for eye-blinking defects, but as the technology progressed, blinking became more natural. "People started to think if there's a good eye blinking, it must not be a deepfake and that's the danger," said Lyu. "We actually want to raise awareness but not latch on particular artifacts, because the artifacts are going to be amended." 

Building the awareness that something might be AI-generated will "trigger a whole sequence of action," said Lyu. "Check: who's sharing this? Is this person reliable? Are there any other sources corroborating the same story, and has this been verified by some other means? I think those are the most effective countermeasures for deepfakes."

For Farid, identifying AI-generated videos and misleading deepfakes starts with where you source your information. Take the AI-generated images that circulated on social media in the aftermath of Hurricane Helene and Hurricane Milton. Most of them were pretty obviously fake, but they still had an emotional effect on people. "Even when these things are not very good, it doesn't mean that they don't penetrate, it doesn't mean that it doesn't sort of impact the way people absorb information," he said.

Be cautious about getting your news from social media. "If the image feels like clickbait, it is clickbait," said Farid, before adding that it all comes down to media literacy. Think about who posted the video and why it was created. "You can't just look at something on Twitter and be like, 'Oh, that must be true, let me share it.'" 

If you're suspicious about AI-generated content, check other sources to see if they're also sharing it, and if it all looks the same. As Lyu says, "a deepfake only looks real from one angle." Search for other angles of the instance in question. Farid recommends sites like Snopes and Politifact, which debunk misinformation and disinformation. As we all continue to navigate the rapidly changing AI landscape, it's going to be crucial to do the work — and trust your gut.

How are AI companies labeling AI-generated videos?

Some AI companies, including Google and OpenAI, have ways of labeling their AI-generated videos as such. Google embeds an invisible watermark called SynthID in every video generated by Veo. After the launch of Veo 3 caused a wave of concern, the company also added a visible watermark labeling videos as AI-generated.

OpenAI, Adobe, and other companies label their AI-generated videos and images with invisible watermarks using a technical standard developed by the nonprofit Coalition for Content Provenance and Authenticity (C2PA).

While visible watermarks may seem like an obvious solution, they can also be easily removed. And there's the question of whether they even matter. A study from Stanford University's Institute for Human-Centered AI (HAI) recently found visible labels indicating AI-generated content "may not change its persuasiveness." After all, we're used to all sorts of meaningless logos on viral videos; it's easy to visually tune them out.

Invisible watermarks, on the other hand, are baked into the metadata. This makes them harder to remove and easier to track.
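To make "baked into the metadata" concrete, here is a minimal sketch of a first-pass check for a C2PA Content Credentials manifest, which is typically stored in a JUMBF box labeled "c2pa" inside the media file. This is an illustration under stated assumptions, not a verification tool: finding the bytes only suggests a manifest is present, it does not validate any cryptographic signature, and their absence proves nothing, since metadata can be stripped. Real verification requires proper C2PA tooling.

```python
def might_have_c2pa_manifest(path: str) -> bool:
    """Crude first-pass check: scan a media file's raw bytes for the
    'c2pa' label that Content Credentials manifests typically carry.
    A True result only hints that a manifest may be embedded; it says
    nothing about whether the manifest is authentic or intact."""
    with open(path, "rb") as f:
        data = f.read()
    return b"c2pa" in data
```

A positive hit should be followed up with a tool that actually parses and validates the manifest, such as the open-source utilities published by the C2PA coalition.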

Standards like C2PA are a step in the right direction, but right now, it's up to the companies to voluntarily adhere to these standards. Perhaps one day, those standards will be enforced by regulators. In the meantime, our best bets are still sound judgment and strong media literacy.
