OpenAI and Anthropic researchers decry ‘reckless’ safety culture at Elon Musk’s xAI

Jul 16, 2025 | Technology

AI safety researchers from OpenAI, Anthropic, and other organizations are speaking out publicly against the “reckless” and “completely irresponsible” safety culture at xAI, the billion-dollar AI startup owned by Elon Musk.

The criticisms follow weeks of scandals at xAI that have overshadowed the company’s technological advances.

Last week, the company’s AI chatbot, Grok, spouted antisemitic comments and repeatedly called itself “MechaHitler.” Shortly after xAI took its chatbot offline to address the problem, it launched a more capable frontier AI model, Grok 4, which TechCrunch and others found consults Elon Musk’s personal politics when answering hot-button issues. In the latest development, xAI launched AI companions that take the form of a hyper-sexualized anime girl and an overly aggressive panda.

Friendly joshing among employees of competing AI labs is fairly normal, but these researchers seem to be calling for increased attention to xAI’s safety practices, which they claim to be at odds with industry norms.

“I didn’t want to post on Grok safety since I work at a competitor, but it’s not about competition,” said Boaz Barak, a computer science professor currently on leave from Harvard to work on safety research at OpenAI, in a Tuesday post on X. “I appreciate the scientists and engineers at @xai but the way safety was handled is completely irresponsible.”


Barak particularly takes issue with xAI’s decision not to publish system cards — industry-standard reports that detail training methods and safety evaluations in a good-faith effort to share information with the research community. As a result, Barak says, it’s unclear what safety training was done on Grok 4.

OpenAI and Google have a spotty reputation themselves when it comes to promptly sharing system cards when unveiling new AI models. OpenAI decided not to publish a system card for GPT-4.1, claiming it was not a frontier model. Meanwhile, Google waited months after unveiling Gemini 2.5 Pro to publish a safety report. However, these companies historically publish safety reports for all frontier AI models before they enter full production.
