The enterprise risk nobody is modeling: AI is replacing the very experts it needs to learn from

May 16, 2026 | Technology

For AI systems to keep improving in knowledge work, they need either a reliable mechanism for autonomous self-improvement or human evaluators capable of catching errors and generating high-quality feedback. The industry has invested enormously in the first. It is giving almost no thought to what is happening to the second.

I'd argue that we need to treat the human evaluation problem with just as much rigor and investment as we put into building the model capabilities themselves. New-grad hiring at major tech companies has dropped by half since 2019. Document review, first-pass research, data cleaning, code review: models handle these now. The economists tracking this call it displacement. The companies doing it call it efficiency. Neither is focused on the future problem.

Why self-improvement has limits in knowledge work

The obvious pushback is reinforcement learning (RL). AlphaZero learned Go, chess, and shogi at superhuman levels without human data and generated novel strategies in the process. Move 37 in AlphaGo's 2016 match against Lee Sedol, a move professionals said they would never have played, didn't come from human annotation. It emerged from AI self-play.

What enables this is the stability of the environment. Move 37 is a novel move within the fixed state space of Go. The rules are complete, unambiguous, and permanent. More importantly, the reward signal is perfect: win or lose, delivered immediately, with no room for interpretation. The system always knows whether a move was good because the game eventually ends with a clear result.

Knowledge work has neither of those properties. The rules in any professional domain are dynamic, continuously rewritten by the humans operating in them. New laws get passed. New financial instruments are invented. A legal strategy that worked in 2022 may fail in a jurisdiction that has since changed its interpretation. Whether a medical diagnosis was right may not be known for years.
Without a stable environment and an unambiguous reward signal, you cannot close the loop. You need humans in the ev …
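The closed-loop condition described above can be made concrete with a toy sketch. This is purely illustrative (the "game," the target value, and all function names are invented here, not from any real RL library): self-play improves only because every episode ends with an unambiguous terminal reward, whereas a knowledge-work task offers no such signal to reinforce against.

```python
import random

def self_play_improve(episodes=10_000):
    """Toy self-play loop on a fixed-rules 'game': guess a hidden number.
    Every episode terminates with a definite reward (+1 win, -1 loss),
    so the policy can improve with no human feedback at all."""
    counts = {g: 1 for g in range(10)}  # pseudo-counts over possible guesses
    for _ in range(episodes):
        # Sample a guess in proportion to how often it has won before.
        guess = random.choices(list(counts), weights=list(counts.values()))[0]
        reward = 1 if guess == 7 else -1  # unambiguous, immediate terminal reward
        if reward > 0:
            counts[guess] += 1  # reinforce winning actions
    # The loop closes: the policy concentrates on the winning action.
    return max(counts, key=counts.get)

def knowledge_work_reward(action):
    """Contrast case: in knowledge work the 'reward' is delayed and
    ambiguous (whether a diagnosis or legal strategy was right may not
    be known for years), so there is nothing to reinforce against."""
    return None  # no unambiguous terminal outcome; the loop cannot close
```

The point of the sketch is the asymmetry: the first loop converges because the environment supplies its own verdict every episode; the second function has no verdict to supply, which is exactly where human evaluators re-enter the picture.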
