Coalition demands federal Grok ban over nonconsensual sexual content

Feb 2, 2026 | Technology

A coalition of nonprofits is urging the U.S. government to immediately suspend the deployment of Grok, the chatbot developed by Elon Musk’s xAI, in federal agencies including the Department of Defense. 

The open letter, shared exclusively with TechCrunch, follows a string of concerning behaviors from the large language model over the past year, most recently a trend of X users asking Grok to turn photos of real women, and in some cases children, into sexualized images without their consent. According to some reports, Grok generated thousands of nonconsensual explicit images every hour, which were then disseminated at scale on X, Musk’s social media platform, which is owned by xAI.

“It is deeply concerning that the federal government would continue to deploy an AI product with system-level failures resulting in generation of nonconsensual sexual imagery and child sexual abuse material,” the letter, signed by advocacy groups like Public Citizen, Center for AI and Digital Policy, and Consumer Federation of America, reads. “Given the administration’s executive orders, guidance, and the recently passed Take It Down Act supported by the White House, it is alarming that [Office of Management and Budget] has not yet directed federal agencies to decommission Grok.” 

xAI reached an agreement last September with the General Services Administration (GSA), the government’s purchasing arm, to sell Grok to federal agencies under the executive branch. Two months earlier, xAI – alongside Anthropic, Google, and OpenAI – secured a contract worth up to $200 million with the Department of Defense.

Amid the scandals on X in mid-January, Defense Secretary Pete Hegseth said Grok would join Google’s Gemini in operating inside the Pentagon network, handling both classified and unclassified documents — a move experts say poses a national security risk.

The letter’s authors argue that Grok has proven itself incompatible with the administration’s requirements for AI systems. According to the OMB’s guidance, systems that present severe and foreseeable risks that cannot be adequately mitigated must be discontinued. 

“Our primary concern is that Grok has pretty consistently shown to be an unsafe large language model,” JB Branch, a Public Citizen Big Tech accountability advocate and one of the letter’s authors, told TechCrunch. “But there’s also a …
