Explaining Grok’s deepfake crisis


Welcome back to In the Loop, TIME’s new twice-weekly AI newsletter. We’re releasing these editions both as Time.com stories and emails. If you’re reading this in your browser, why not have the next one sent directly to your inbox?

What to Know: The Deepfake Crisis on X

A worrying trend— Over the past few weeks, many tech leaders have made bold predictions about what AI will accomplish in 2026, in fields including biology. But in the first week of 2026, the most prominent use of AI has been X users using Grok to digitally disrobe women.

Elon Musk’s platform X is now inundated with nonconsensual AI-generated images—requested by users—showing unclothed or scantily clad women, men, and children, sometimes in sexual positions. An analyst collaborating with Wired documented more than 15,000 sexualized AI-generated images produced over a two-hour span on December 31.

Musk rolled out Grok’s “Spicy Mode” for adult content generation last summer, then launched an image editing feature for users last month—moves that have triggered this crisis. X’s Safety account states it prohibits illegal content, including Child Sexual Abuse Material (CSAM). In some cases, Grok later removed the images and “apologized” for creating them.

Yet the platform remains rife with abuse. Ashley St. Clair, the mother of one of Musk’s children, said this week that Grok has created “countless” explicit images of her, including some using photos of her at age 14.

Government response— The shift of what was once a mainstream social media platform into perhaps the largest digital distributor of nonconsensual explicit imagery has put Musk in the crosshairs of governments worldwide. X is now under investigation by authorities in Europe, India, France, Malaysia, and other nations. The U.K.’s tech secretary called the trend “utterly appalling.” A request for comment sent to X’s press email address did not receive an immediate response.

In the U.S., an upcoming law could force X to tighten its protections against this type of image sharing. The Take It Down Act—passed last year and taking effect in May—criminalizes the sharing of nonconsensual intimate images and requires platforms to remove flagged nonconsensual intimate imagery within 48 hours.

Fighting back— It’s still unclear how effective the Take It Down Act will be as a deterrent, as it places a heavy burden on individuals to report violations. Elliston Berry, a 16-year-old deepfake victim whose advocacy helped pass the Take It Down Act, told TIME that this moment should serve as a wake-up call for both tech leaders and young social media users. “We have to be willing to get involved and report incidents to further stop this targeted harm. We must not feel fear or shame if we become victims,” she says. “We’re looking to Elon Musk to take the first steps in making this a top priority to protect X users.”

What We’re Reading

By Jeffrey Kluger in TIME

My colleague Jeff—one of the world’s leading space experts—just published a feature on efforts to build data centers in space. The initiative theoretically addresses many issues with Earth-based data centers, including their power and water usage and heat generation. But the cost of launching them into orbit is massive, and risks lurk beyond the atmosphere.

AI in Action

CES 2026, one of the world’s largest tech trade shows, is currently underway in Las Vegas. Unsurprisingly, the conference is filled with AI-related products, including an eerie new humanoid model from Boston Dynamics featuring Gemini intelligence; Razer’s anime hologram friend in a jar; and a home robot from LG made to unload your dishwasher and fold your laundry.

Nvidia also had a standout week at the conference, showcasing its new Vera Rubin chip, which is built to perform more computing while using less power.

Who to Know: Paul Kedrosky

Paul Kedrosky, an investor and research fellow at MIT, has established himself as one of the world’s leading thinkers on AI’s potential impacts on labor and the economy. One of his key points is that AI is both a truly transformative technology and massively overhyped. “We had a dramatic bubble during the global financial crisis that nearly crashed the global economy. But that doesn’t mean I think people should stop living in houses,” he told me in late November.

But Kedrosky is deeply concerned about the financial structures underpinning the industry. He sees AI siphoning capital from other investment areas, including manufacturing, and the industry using questionable circular financing. In fact, he views all the hallmarks of a classic bubble merged into one: overhyped technology, loose credit, ambitious real estate purchases, and euphoric government messaging. “This is literally the first moment in modern financial history that has combined all the raw ingredients of every other bubble in one place,” he says.