As AI-generated videos spread rapidly across social media platforms, cybersecurity researchers are urging the public to remain cautious, warning that increasingly sophisticated forgeries are now being used for scams, misinformation, and impersonation.
Researchers at the Cybersecurity Center of Taiwan’s Institute for Information Industry say AI-generated fake videos generally fall into four main categories and can often be detected through basic verification methods: checking platform disclosure labels, running reverse image searches, and assessing whether a video’s overall context is credible.
The alert comes as AI-generated videos are no longer limited to celebrity impersonation scams. In recent cases, virtual doctors and fabricated experts have been used to circulate misleading health and financial information.
Dai Yu-chen, a researcher at the institute, noted that the growth of fake videos closely follows financial incentives.
“Wherever there is profit to be made, AI-generated deception tends to appear,” he observed.

Four common forms of AI-generated fake videos
According to Dai, modern AI systems can now perform face swapping, lip synchronization, and fully synthetic video generation. With just a single photograph and a short audio clip, AI tools can model a person’s facial muscle movements and generate a convincing video of them speaking.
Based on long-term monitoring, the institute has identified four major types of AI fake videos currently circulating online.
The first category consists of celebrity deepfake pornography, among the earliest and most widely recognized uses of AI video manipulation. These videos digitally transplant the faces of public figures onto unrelated footage without consent.
The second category involves celebrity impersonation scams. Criminal groups frequently generate videos of well-known financial commentators or medical professionals, using advanced lip-sync techniques to make it appear they are promoting investment schemes or selling products. Victims are often directed to messaging groups or online storefronts as a result.
The third category includes AI “avatar” content farms. Dai pointed to YouTube channels hosted by supposed experts who do not exist in real life. Unlike traditional content farms that rely on mass-produced articles, these operations use AI-generated virtual presenters to produce videos automatically.
“As long as there is a script, videos can be generated at scale,” Dai explained, adding that upload frequency is often extremely high.
Many of these clips last only a few seconds but are looped seamlessly to form longer videos. While the avatars may appear natural at first glance, repeated gestures often emerge at regular intervals.
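For technically inclined readers, the looping pattern Dai describes can be tested for directly. The sketch below is one illustrative way to do so, not a tool the institute uses: it perceptually hashes sampled frames and checks whether near-identical frames recur at a fixed interval. The file name “suspect.mp4” is hypothetical, and the approach assumes the open-source opencv-python, Pillow, and imagehash packages.

```python
# Illustrative sketch: flag looped AI "avatar" clips by perceptually hashing
# sampled frames and checking whether near-identical frames recur at a fixed
# period. "suspect.mp4" is a hypothetical file name.
import cv2
import imagehash
from PIL import Image

def frame_hashes(path, step=15):
    """Perceptual-hash every `step`-th frame of the video at `path`."""
    cap = cv2.VideoCapture(path)
    hashes, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            hashes.append(imagehash.phash(Image.fromarray(rgb)))
        idx += 1
    cap.release()
    return hashes

def recurring_period(hashes, max_dist=4):
    """Return the smallest lag at which most sampled frames repeat."""
    for lag in range(1, len(hashes) // 2):
        matches = sum(
            (hashes[i] - hashes[i + lag]) <= max_dist  # Hamming distance
            for i in range(len(hashes) - lag)
        )
        if matches >= 0.8 * (len(hashes) - lag):  # illustrative threshold
            return lag
    return None

lag = recurring_period(frame_hashes("suspect.mp4"))
if lag is not None:
    print(f"Frames repeat every ~{lag} samples - possible seamless loop")
```

Perceptual hashes tolerate the compression noise of re-uploaded clips, which is why they are used here rather than exact pixel comparison; the 80 percent cutoff is an arbitrary value chosen for the example.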
The fourth category involves the redistribution of AI-generated videos originally created using specialized tools such as OpenAI’s Sora. These clips, often exaggerated or surreal, are typically stripped of their original watermarks before being reposted across social media platforms.
Viewers can sometimes identify such videos by looking for blurred edges, distorted text, or subtle visual inconsistencies.

Practical ways to verify suspicious videos
As AI-generated videos become more realistic, researchers stress that viewers can still reduce the risk of being misled by applying several basic checks.
One approach is to look for AI disclosure labels. Major platforms increasingly require creators to indicate whether content has been generated or heavily altered by AI. On YouTube, for example, such disclosures appear in the expanded video description, noting whether audio or visuals were digitally created or significantly modified.
Another method is reverse image searching. Dai described this as one of the most effective ways to uncover questionable content. By capturing a screenshot and searching it online, viewers can determine whether the person shown actually exists or whether the image appears repeatedly across videos from the same source.
He cited the case of a fabricated doctor persona whose image led only to a single cluster of videos, with no verifiable professional background—an immediate warning sign.
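The reverse search itself can also be scripted. As a purely illustrative sketch, and not the institute’s workflow, the snippet below sends a captured frame to Google Cloud Vision’s web-detection feature, one of several programmatic reverse image search options. It assumes configured Google Cloud credentials, the google-cloud-vision package, and a hypothetical screenshot file.

```python
# Illustrative sketch: automate the screenshot reverse-search Dai describes
# using Google Cloud Vision web detection. "screenshot.png" is a hypothetical
# frame capture; any reverse image search engine would serve the same purpose.
from google.cloud import vision

client = vision.ImageAnnotatorClient()
with open("screenshot.png", "rb") as f:
    image = vision.Image(content=f.read())

response = client.web_detection(image=image)
web = response.web_detection

# Pages that embed the same or a matching image elsewhere on the web.
for page in web.pages_with_matching_images:
    print("page:", page.url)
for match in web.full_matching_images:
    print("full match:", match.url)
```

If every returned URL traces back to a single channel or cluster of uploads, with no independent professional footprint, that matches the warning sign Dai describes.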
A third method involves evaluating whether the people, events, timing, and setting presented in a video logically align.
Pang Tsai-wei, another researcher at the institute, noted that fabricated videos are often built on scenarios that could never occur naturally, which is precisely why manipulation was needed to produce them.
“For instance, when a news anchor appears to be delivering a report but is actually promoting a medical product, the scenario itself is implausible,” Pang said.
He added that AI systems still struggle to render perfectly straight lines, suggesting viewers zoom in on text or edges to check for subtle distortions.
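Pang’s straight-line test can likewise be approximated in a few lines of code. The following sketch is an illustration rather than an established detector: it uses OpenCV’s Canny edge detector and a probabilistic Hough transform to measure how much of a frame’s edge content is explained by straight segments. The file name and all thresholds are assumptions chosen for the example.

```python
# Illustrative sketch of the "zoom in on the lines" check: detect edges,
# fit straight segments, and report what fraction of edge pixels those
# segments account for. Thresholds are illustrative guesses.
import cv2
import numpy as np

frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical capture
edges = cv2.Canny(frame, 50, 150)

lines = cv2.HoughLinesP(
    edges, rho=1, theta=np.pi / 180, threshold=80,
    minLineLength=40, maxLineGap=5,
)

edge_pixels = int(np.count_nonzero(edges))
line_pixels = 0
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        line_pixels += int(np.hypot(x2 - x1, y2 - y1))

ratio = line_pixels / max(edge_pixels, 1)
print(f"straight-line coverage of edges: {ratio:.2f}")
```

The score only means something comparatively: frames of signage, buildings, or on-screen text from genuine footage should explain noticeably more of their edges with straight segments than the subtly wobbly lines AI generators still tend to produce.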

Visual details still offer clues
Researchers also outlined specific visual indicators that may signal AI manipulation.
At the individual level, warning signs include unnatural facial features, abnormal blinking, blurred teeth or fingers, rough facial outlines, and mismatches between expression and vocal tone.
At the environmental and technical level, viewers should watch for inconsistencies between audio and subtitles, unrealistic lighting or shadows, odd reflections, spelling errors, distorted spatial layouts, unnatural camera movements, compression artifacts, delayed shadows, or objects that appear without explanation.
Dai acknowledged that as AI technology advances, visible flaws will become increasingly difficult to spot. Even so, cultivating skepticism and routinely applying basic checks can significantly reduce the risk of deception.