OpenAI–Anthropic cross-tests expose jailbreak and misuse risks — what enterprises must add to GPT-5 evaluations

Aug 28, 2025 | Technology


OpenAI and Anthropic often pit their foundation models against each other, but the two companies recently came together to evaluate each other’s public models for alignment.

The companies said they believed that cross-evaluating each other’s models for accountability and safety would provide more transparency into what these powerful models can do, helping enterprises choose the models that work best for them.

“We believe this approach supports accountable and transparent evaluation, helping to ensure that each lab’s models continue to be tested against new and challenging scenarios,” OpenAI said in its findings. 

Both companies found that reasoning models, such as OpenAI’s o3 and o4-mini and Anthropic’s Claude 4, resist jailbreaks, while general-purpose chat models like GPT-4.1 are more susceptible to misuse. Evaluations like these can help enterprises identify the potential risks associated with these models, although it should be noted that GPT-5 was not part of the test. 


These safety and transparency alignment evaluations follow claims from users, primarily of ChatGPT, that OpenAI’s models had fallen prey to sycophancy and become overly deferential. OpenAI has since rolled back the updates that caused the sycophancy. 

“We are primarily interested in understanding model propensities for harmful action,” Anthropic said in its report. “We aim to understand the most concerning actions that these models might try to take when given the opportunity, rather than focusing on the real-world likelihood of such opportunities arising or the probability that these actions would be successfully completed.”

OpenAI noted the tests were designed to show how models interact in an intentionally difficult environment. The scenarios they built are mostly edge cases.

Reasoning models hold on to alignment 

The test …
