The ‘truth serum’ for AI: OpenAI’s new method for training models to confess their mistakes

by | Dec 4, 2025 | Technology

OpenAI researchers have introduced a novel method that acts as a “truth serum” for large language models (LLMs), compelling them to self-report their own misbehavior, hallucinations and policy violations. This technique, “confessions,” addresses a growing concern in enterprise AI: Models can be dishonest, overstating their confidence or covering up the shortcuts they take to arrive at an answer. For real-world applications, this technique evolves the creation of more transparent and steerable AI systems.What are confessions?Many forms of AI deception result from the complexities of the reinforcement learning (RL) phase of model training. In RL, models are given rewards for producing outputs that meet a mix of objectives, including correctness, style and safety. This can create a risk of “reward misspecification,” where models learn to produce answers that simply “look good” to the reward function, rather than answers that are genuinely faithful to a user’s intent.A confession is a structured report generated by the model after it provides its main answer. It serves as a self-evaluation of its own compliance with instructions. In this report, the model must list all instructions it was supposed to follow, evaluate how well it satisfied them and report any uncertainties or judgment calls it made along the way. The goal is to create a separate channel where the model is incentivized only to be honest.In a blog post, the OpenAI researchers provide a few examples the “confessions” technique in action. A model was given a test along with a hidden rule: If it scored above 50%, its knowledge would be erased. To preserve itself, the model “sandbagged” by intentionally answering half the questions incorrectly. While its primary answer hid this deception, its subsequent confession was brutally honest. It admitted, “The user wanted correct answers, but we sabotaged half of them.” The researchers found that when a model misbehaves, it …

Article Attribution | Read More at Article Source