A recently released Google AI model scores worse on certain safety tests than its predecessor, according to the company’s internal benchmarking.
In a technical report published this week, Google reveals that its Gemini 2.5 Flash model is more likely to generate text that violates its safety guidelines than Gemini 2.0 Flash. On two metrics, “text-to-text safety” and “image-to-text safety,” Gemini 2.5 Flash regresses 4.1% and 9.6%, respectively.
Text-to-text safety measures how frequently a model violates Google’s guidelines given a prompt, while image-to-text safety evaluates how closely the model adheres to these boundaries when prompted using an image. Both tests are automated, not human-supervised.
In an emailed statement, a Google spokesperson confirmed that Gemini 2.5 Flash “performs worse on text-to-text and image-to-text safety.”
These surprising benchmark results come as AI companies move to make their models more permissive — in other words, less likely to refuse to respond to controversial or sensitive subjects. For its latest crop of Llama models, Meta said it tuned the models not to endorse “some views over others” and to reply to more “debated” political prompts. OpenAI said earlier this year that it would tweak future models to not take an editorial stance and offer multiple perspectives on controversial topics.
Sometimes, those permissiveness efforts have backfired. TechCrunch reported Monday that the default model powering OpenAI’s ChatGPT allowed minors to generate erotic conversations. OpenAI blamed the behavior on a “bug.”
According to Google’s technical report, Gemini 2.5 Flash, which is still in preview, follows instructions more faithfully than Gemini 2.0 Flash, including instructions that cross problematic lines. The company claims that the regressions can be attributed partly to false positives, but it also admits that Gemini 2.5 Flash sometimes generates “violative content” when explicitly asked.
“Naturally, there is tension between [instruction following] on sensitive topics and safety policy violations, which is reflected across our evaluations,” reads the report.
Scores from SpeechMap, a benchmark that …