AI coding tools are getting better fast. If you don’t work in code, it can be hard to notice how much things are changing, but GPT-5 and Gemini 2.5 have made a whole new set of developer tricks possible to automate, and last week Sonnet 4.5 did it again.
At the same time, other skills are progressing more slowly. If you are using AI to write emails, you’re probably getting the same value out of it you did a year ago. Even when the model gets better, the product doesn’t always benefit — particularly when the product is a chatbot that’s doing a dozen different jobs at the same time. AI is still making progress, but it’s not as evenly distributed as it used to be.
The difference in progress is simpler than it seems. Coding apps are benefiting from billions of easily measurable tests, which can train them to produce workable code. This is reinforcement learning (RL), arguably the biggest driver of AI progress over the past six months and getting more sophisticated all the time. You can do reinforcement learning with human graders, but it works best if there’s a clear pass-fail metric, so you can repeat it billions of times without having to stop for human input.
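To make the pass-fail idea concrete, here’s a rough sketch of what an automated grader can look like: the “reward” is nothing more than whether the model’s code passes a test suite, so no human ever has to weigh in. This is an illustration, not any lab’s actual training stack; the model object and its generate and reinforce methods are hypothetical stand-ins.

```python
# Rough sketch: reward an AI coding model based purely on automated tests.
# No human grader is needed, so the loop can run millions of times.
# The "model" object and its generate()/reinforce() methods are hypothetical.
import subprocess
import tempfile
from pathlib import Path


def pass_fail_reward(candidate_code: str, tests: str) -> float:
    """Return 1.0 if the model-written code passes the test suite, else 0.0."""
    with tempfile.TemporaryDirectory() as workdir:
        Path(workdir, "solution.py").write_text(candidate_code)
        Path(workdir, "test_solution.py").write_text(tests)
        try:
            result = subprocess.run(
                ["python", "-m", "pytest", "-q", "test_solution.py"],
                cwd=workdir,
                capture_output=True,
                timeout=60,
            )
        except subprocess.TimeoutExpired:
            return 0.0  # code that hangs counts as a failure too
        return 1.0 if result.returncode == 0 else 0.0


def training_step(model, prompt: str, tests: str) -> None:
    candidate = model.generate(prompt)           # sample code from the model
    reward = pass_fail_reward(candidate, tests)  # grade it automatically
    model.reinforce(prompt, candidate, reward)   # nudge the model toward passes
```

The point is that the grader is cheap, objective, and infinitely repeatable. Writing a better email has no equivalent of `returncode == 0`.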
As the industry relies increasingly on reinforcement learning to improve products, we’re seeing a real difference between capabilities that can be automatically graded and the ones that can’t. RL-friendly skills like bug-fixing and competitive math are getting better fast, while skills like writing make only incremental progress.
In short, there’s a reinforcement gap — and it’s becoming one of the most important factors for what AI systems can and can’t do.
In some ways, software development is the perfect subject for reinforcement learning …