When OpenAI launched its AI video-creation app, Sora, in September, it stated that “you are in control of your likeness end-to-end.” The app lets users put themselves and their friends into videos through a feature called “cameos”: the app scans a user’s face and runs a liveness check, collecting the data needed both to generate videos of that user and to record their consent for others to use their likeness on the platform.
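OpenAI has not published how Sora implements this gate. Purely as an illustration, the consent model the company describes reduces to a permission check before any likeness is rendered; in the minimal Python sketch below, every name (Cameo, may_generate, allowed_requesters) is hypothetical.

```python
# Hypothetical sketch of a cameo-style consent gate. OpenAI has not
# disclosed Sora's actual design; all names here are invented.
from dataclasses import dataclass

@dataclass
class Cameo:
    user_id: str
    liveness_verified: bool        # passed the face-scan liveness check
    allowed_requesters: set[str]   # who may use this likeness

def may_generate(requester_id: str, cameo: Cameo) -> bool:
    """Render a likeness only if it was liveness-verified and its owner
    consented to this requester."""
    if not cameo.liveness_verified:
        return False
    return requester_id == cameo.user_id or requester_id in cameo.allowed_requesters

# Alice consents to Bob using her cameo, but not to Mallory.
alice = Cameo("alice", liveness_verified=True, allowed_requesters={"bob"})
assert may_generate("bob", alice)
assert not may_generate("mallory", alice)
```

On paper, a check like this is airtight; Reality Defender’s bypass suggests the weak link is the liveness verification itself, since footage that fools the scan sets the verified flag for an impersonator.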
However, Reality Defender, a firm that specializes in detecting deepfakes, says it defeated Sora’s anti-impersonation safeguards within 24 hours. Platforms such as Sora offer a “superficial sense of security,” says Reality Defender CEO Ben Colman, when in fact “anyone can employ readily available tools” to pass authentication as someone else.
Reality Defender’s researchers used publicly available footage of prominent figures, including corporate executives and entertainers, drawn from earnings calls and media interviews. The company bypassed the safeguards for every likeness it tried to imitate. Colman contends that “any intelligent 10th grader” could master the tools his team used.
An OpenAI spokesperson said in an emailed statement to TIME that “the researchers constructed an advanced deepfake system of CEOs and entertainers to try to bypass those protections, and we’re continuously strengthening Sora to make it more resilient against this kind of misuse.”
Sora’s launch, and the speed with which its authentication was bypassed, is a warning that society is unprepared for the next wave of ever more convincing, personalized deepfakes. The gap between fast-moving technology and slow-moving regulation leaves individuals to navigate a murky information environment on their own, and to protect themselves from deception and harassment.
“Platforms are fully aware that this is occurring, and absolutely know that they could resolve it if they chose. But until regulations catch up—we’re observing the identical pattern across all social media platforms—they will do nothing,” says Colman.
Sora garnered 1 million downloads in under five days—outpacing ChatGPT, which at the time was the fastest-growing consumer app—even though it required users to have an invitation, according to Bill Peebles, OpenAI’s head of Sora. OpenAI’s release followed a similar offering from Meta named Vibes, which is integrated into the Meta AI application.
The growing accessibility of convincing deepfakes has alarmed some observers. “The reality is that identifying [deepfakes] visually is becoming almost unfeasible, given rapid advancements in text-to-image, text-to-video, and audio cloning capabilities,” Jennifer Ewbank, a former deputy director of digital innovation at the CIA, said in an email to TIME.
Regulators have been grappling with how to address deepfakes since at least 2019, when President Trump signed a law requiring the Director of National Intelligence to investigate their use by foreign governments. As deepfakes have become more widely available, though, the legislative focus has shifted closer to home. In May 2025, the Take It Down Act was signed into federal law, prohibiting the online publication of “intimate visual depictions” of minors and of nonconsenting adults, and requiring platforms to remove offending content within 48 hours of a request, though enforcement will not begin until May 2026.
Legislation banning deepfakes can be problematic. “It’s actually really intricate, technically and legally, because there are First Amendment concerns about removing certain speech,” says Jameson Spivack, deputy director for U.S. policy at the Future of Privacy Forum. In August, a federal judge struck down a California law that aimed to restrict AI-generated deepfake content during elections, after Elon Musk’s X sued the state on the grounds that the law violated First Amendment protections. As a result, requirements to label AI-generated content are more common than outright bans, Spivack notes.
Another promising approach is for platforms to adopt stronger know-your-customer protocols, says Fred Heiding, a research fellow at Harvard University’s Defense, Emerging Technology, and Strategy Program. Know-your-customer schemes require users of platforms such as Sora to log in with verified identification, raising accountability and letting authorities trace illegal activity back to a person. But there are trade-offs. “The challenge is we deeply value anonymity in the West,” says Heiding. “That’s beneficial, but anonymity carries a cost, and that cost is the significant difficulty in enforcing these measures.”
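Heiding does not prescribe a particular implementation. As a rough sketch of the idea, and assuming entirely hypothetical names, a know-your-customer gate ties every generation request to an identity-verified account and writes it to an audit trail that investigators could later consult:

```python
# Hypothetical know-your-customer gate: only identity-verified accounts may
# generate video, and every request is logged for later traceability.
import datetime

VERIFIED_USERS = {"user-123"}   # accounts that completed ID verification
AUDIT_LOG: list[dict] = []      # in practice, tamper-evident storage

def generate_video(user_id: str, prompt: str) -> str:
    if user_id not in VERIFIED_USERS:
        raise PermissionError("identity verification required")
    # Record who requested what, and when, so misuse can be traced to a person.
    AUDIT_LOG.append({
        "user": user_id,
        "prompt": prompt,
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    })
    return f"generated video for {user_id}"   # stand-in for the actual model call

generate_video("user-123", "a cat playing piano")
```

The design works precisely because it removes anonymity: the audit trail is only as useful as the identity verification behind it, which is the trade-off Heiding describes.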
While legislators grapple with the increasing prevalence and realism of deepfakes, individuals and organizations can take steps to protect themselves. Spivack recommends authentication software such as Content Credentials, developed by the Coalition for Content Provenance and Authenticity (C2PA), which attaches provenance metadata to images and videos. Cameras from Leica and Sony already support the standard, as do some newer smartphones. Using such authentication bolsters trust in genuine images and undercuts fakes.
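The actual Content Credentials manifest format is far richer than anything shown here, but its core idea is origin metadata cryptographically bound to a file’s bytes, so that any alteration breaks the seal. A minimal stand-in using an Ed25519 signature from the widely used Python cryptography library (not the real C2PA format):

```python
# Illustrative stand-in for signed provenance metadata, NOT the real C2PA
# manifest format: an origin claim bound to the file's hash and signed.
import hashlib, json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

camera_key = Ed25519PrivateKey.generate()   # would live in a camera's secure chip

def sign_provenance(image_bytes: bytes, origin: str) -> dict:
    claim = {"origin": origin, "sha256": hashlib.sha256(image_bytes).hexdigest()}
    payload = json.dumps(claim, sort_keys=True).encode()
    return {"claim": claim, "signature": camera_key.sign(payload)}

def verify_provenance(image_bytes: bytes, manifest: dict, public_key) -> bool:
    claim = manifest["claim"]
    if claim["sha256"] != hashlib.sha256(image_bytes).hexdigest():
        return False   # pixels were altered after signing
    payload = json.dumps(claim, sort_keys=True).encode()
    try:
        public_key.verify(manifest["signature"], payload)
        return True
    except InvalidSignature:
        return False

photo = b"...raw image bytes..."
manifest = sign_provenance(photo, origin="ExampleCam, 2025-10-03")
assert verify_provenance(photo, manifest, camera_key.public_key())
assert not verify_provenance(photo + b"!", manifest, camera_key.public_key())
```

A verified manifest says where a file came from; its absence proves nothing, which is why provenance raises trust in authentic media rather than flagging fakes directly.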
As the information environment transforms, making it harder to trust what we see and hear online, lawmakers and individuals alike must build society’s resilience to fabricated media. “The more we foster that resilience, the more difficult it becomes for anyone to monopolize our attention and manipulate our trust,” says Ewbank.