Amazon Web Services on Wednesday introduced Kiro powers, a system that allows software developers to give their AI coding assistants instant, specialized expertise in specific tools and workflows — addressing what the company calls a fundamental bottleneck in how artificial intelligence agents operate today.

AWS made the announcement at its annual re:Invent conference in Las Vegas. The capability marks a departure from how most AI coding tools work today. Typically, these tools load every possible capability into memory upfront — a process that burns through computational resources and can overwhelm the AI with irrelevant information. Kiro powers takes the opposite approach, activating specialized knowledge only at the moment a developer actually needs it.

"Our goal is to give the agent specialized context so it can reach the right outcome faster — and in a way that also reduces cost," said Deepak Singh, Vice President of Developer Agents and Experiences at Amazon, in an exclusive interview with VentureBeat.

The launch includes powers from nine providers: Datadog, Dynatrace, Figma, Neon, Netlify, Postman, Stripe, Supabase, and AWS's own services. Developers can also create and share their own powers with the community.

Why AI coding assistants choke when developers connect too many tools

To understand why Kiro powers matters, it helps to understand a growing tension in the AI development tool market.

Modern AI coding assistants rely on something called the Model Context Protocol, or MCP, to connect with external tools and services. When a developer wants their AI assistant to work with Stripe for payments, Figma for design, and Supabase for databases, they connect MCP servers for each service.

The problem: each connection loads dozens of tool definitions into the AI's working memory before it writes a single line of code. According to AWS documentation, connecting just five MCP servers can consume more than 50,000 tokens — roughly 40 percent of an AI model's context window — before the developer even types their first request.

Developers have grown increasingly vocal about this issue. Many complain that they don't want to burn through their token allocations just to have an AI agent figure …
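To make that arithmetic concrete, the following is a minimal, hypothetical sketch (not Kiro's or MCP's actual API) comparing the eager approach, where every connected server's tool definitions sit in context up front, with on-demand activation of only what a task needs. The server names, per-server token counts, and the 128,000-token context window are illustrative assumptions chosen to line up with the figures AWS cites.

```python
# Illustrative sketch only: not Kiro's or MCP's real API. It models the
# token-budget arithmetic described above, comparing eager loading of every
# connected server's tool definitions against on-demand activation.

from dataclasses import dataclass


@dataclass
class McpServer:
    name: str
    tool_definition_tokens: int  # tokens its tool schemas add to the prompt


# Hypothetical figures roughly matching the article's example: five servers,
# ~50,000 tokens of definitions, against an assumed 128k-token context window.
SERVERS = [
    McpServer("stripe", 11_000),
    McpServer("figma", 9_500),
    McpServer("supabase", 10_500),
    McpServer("datadog", 9_000),
    McpServer("postman", 10_000),
]
CONTEXT_WINDOW = 128_000


def eager_overhead(servers):
    """Eager loading: every server's definitions occupy context up front."""
    return sum(s.tool_definition_tokens for s in servers)


def on_demand_overhead(servers, needed):
    """On-demand activation: only the servers the current task touches load."""
    return sum(s.tool_definition_tokens for s in servers if s.name in needed)


if __name__ == "__main__":
    eager = eager_overhead(SERVERS)
    lazy = on_demand_overhead(SERVERS, needed={"stripe"})
    print(f"Eager:     {eager:,} tokens ({eager / CONTEXT_WINDOW:.0%} of context)")
    print(f"On demand: {lazy:,} tokens ({lazy / CONTEXT_WINDOW:.0%} of context)")
```

With these assumed numbers, the eager path consumes about 50,000 tokens, roughly 40 percent of the assumed window, before any work begins, while the on-demand path loads only the single server the task actually requires.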