A video reportedly made with Kling AI’s 2.6 Motion Control took social media by storm this week: in the viral clip, Brazilian content creator Eder Xavier appears to flawlessly swap his face and body with those of Stranger Things actors Millie Bobby Brown, David Harbour, and Finn Wolfhard.
The videos have spread widely across social platforms and have been viewed more than 14 million times on X, with additional versions posted since. The clips have also drawn the attention of technologists, including a16z partner Justine Moore, who shared the video from Xavier’s Instagram account.
“We’re not prepared for how quickly production pipelines are going to change with AI,” Moore wrote. “Some of the latest video models have immediate implications for Hollywood. Endless character swaps at a negligible cost.”
As image and video generation tools continue to improve, with newer models like Kling, Google’s Veo 3.1 and Nano Banana, FaceFusion, and OpenAI’s Sora 2 expanding access to high-quality synthetic media, researchers warn that the techniques seen in the viral clips are likely to spread quickly beyond isolated demos.
A slippery slope
While viewers marveled at the quality of the body-swapping videos, experts warn that the technique will inevitably become a tool for impersonation scams.
“The floodgates are open. It’s never been easier to steal an individual's digital likeness—their voice, their face—and now, bring it to life with a single image. No one is safe,” Emmanuelle Saliba, Chief Investigative Officer at cybersecurity firm GetReal Security, told Decrypt.
“We will start seeing systemic abuse at every scale, from one-to-one social engineering to coordinated disinformation campaigns to direct attacks on critical businesses and institutions,” she said.
According to Saliba, the viral videos featuring Stranger Things actors show how thin the guardrails against such abuse currently are.
“For a few dollars, anyone can now generate full-body videos of a politician, celebrity, CEO, or private individual using a single image,” she said. “There’s no default protection of a person’s digital likeness. No identity assurance.”
For Yu Chen, a professor of electrical and computer engineering at Binghamton University, full-body character swapping goes beyond the face-only manipulation used in earlier deepfake tools and introduces new challenges.
“Full-body character swapping represents a significant escalation in synthetic media capabilities,” Chen told Decrypt. “These systems must simultaneously handle pose estimation, skeletal tracking, clothing and texture transfer, and natural movement synthesis across the entire human form.”
Along with Stranger Things, creators also posted videos of a body-swapped Leonardo DiCaprio from the film The Wolf of Wall Street.
“Earlier deepfake technologies operated primarily within a constrained manipulation space, focusing on facial region replacement while leaving the rest of the frame largely untouched,” Chen said. “Detection methods could exploit boundary inconsistencies between the synthetic face and the original body, as well as temporal artifacts when head movements didn't align naturally with body motion.”
“While financial fraud and impersonation scams remain concerns, several other misuse vectors warrant attention,” Chen continued. “Non-consensual intimate imagery represents the most immediate harm vector, as these tools lower the technical barrier for creating synthetic explicit content featuring real individuals.”
Other threats Saliba and Chen highlighted include political disinformation and corporate espionage: scammers impersonating employees or CEOs, releasing fabricated “leaked” clips, bypassing controls, and harvesting credentials through attacks in which “a believable person on video lowers suspicion long enough to gain access inside a critical business,” Saliba said.
It's unclear how studios or the actors portrayed in the videos will respond, but Chen said that, because the clips rely on publicly available AI models, developers play a crucial role in implementing safeguards.
Still, responsibility, he said, should be shared across platforms, policymakers, and end users, as placing it solely on developers may prove unworkable and stifle beneficial uses.
As these tools spread, Chen said researchers should prioritize detection models that identify intrinsic statistical signatures of synthetic content rather than relying on easily stripped metadata.
“Platforms should invest in both automated detection pipelines and human review capacity, while developing clear escalation procedures for high-stakes content involving public figures or potential fraud,” he said, adding that policymakers should focus on establishing clear liability frameworks and mandating disclosure requirements.
“The rapid democratization of these capabilities means that response frameworks developed today will be tested at scale within months, not years,” Chen said.