AWS is betting that bringing its Automated Reasoning Checks feature on Bedrock to general availability will give more enterprises and regulated industries the confidence to use and deploy more AI applications and agents.
It also hopes that introducing methods like automated reasoning, which uses math-based validation to determine ground truth, will ease enterprises into neurosymbolic AI, a step the company believes will be the next major advancement in AI and its biggest differentiator.
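To make the idea concrete, here is a minimal sketch of what math-based validation can look like, using the open-source Z3 SMT solver rather than AWS's internal tooling; the policy rule, variable names, and claim below are invented for illustration.

```python
# Illustrative sketch only -- not AWS's implementation. A policy rule is encoded as
# a logical constraint, and a claim extracted from a model's answer is checked for
# consistency with that rule using the Z3 SMT solver (pip install z3-solver).
from z3 import Int, Bool, Solver, Implies, Not, And, unsat

tenure_months = Int("tenure_months")       # fact mentioned in the model's answer
remote_eligible = Bool("remote_eligible")  # conclusion the model asserts

# Hypothetical policy rule: fewer than 12 months of tenure means no remote eligibility.
policy = Implies(tenure_months < 12, Not(remote_eligible))

# Hypothetical claim from the model: a 6-month employee is eligible for remote work.
claim = And(tenure_months == 6, remote_eligible)

solver = Solver()
solver.add(policy, claim)

if solver.check() == unsat:
    print("Claim contradicts the encoded policy -- flag as a likely hallucination.")
else:
    print("Claim is consistent with the encoded policy.")
```

Because the verdict follows from a logical check rather than another model's judgment, the same inputs always yield the same answer, which is what distinguishes this kind of validation from purely statistical guardrails.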
Automated Reasoning Checks enable enterprise users to verify the accuracy of responses and detect model hallucinations. AWS unveiled Automated Reasoning Checks on Bedrock during its annual re:Invent conference in December, claiming the feature can catch nearly 100% of all hallucinations. A limited number of users could access the feature through Amazon Bedrock Guardrails, where organizations can set responsible AI policies.
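For orientation, a rough sketch of attaching a guardrail at inference time through the boto3 Converse API follows; the guardrail ID, version, and model ID are placeholders, and any Automated Reasoning policy is assumed to have already been configured on the guardrail itself.

```python
# Hedged sketch: calling a Bedrock model with a pre-configured guardrail attached.
# Responses are evaluated against whatever policies (including Automated Reasoning
# checks) were defined on that guardrail. IDs below are placeholders.
import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock_runtime.converse(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # example model ID
    messages=[
        {"role": "user", "content": [{"text": "Is a 6-month employee eligible for remote work?"}]}
    ],
    guardrailConfig={
        "guardrailIdentifier": "YOUR_GUARDRAIL_ID",  # placeholder
        "guardrailVersion": "1",                     # placeholder
        "trace": "enabled",                          # include details on why content was flagged
    },
)

print(response["output"]["message"]["content"][0]["text"])
```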
Byron Cook, distinguished scientist and vice president at AWS’s Automated Reasoning Group, told VentureBeat in an interview that the preview rollout proved systems like this work in an enterprise setting and helped organizations understand the value of AI that mixes symbolic, structured thinking with the neural-network nature of generative AI.
“There’s this notion of neurosymbolic AI, that’s the sort of moniker under which you might call automated reasoning,” Cook said. “The rise of interest in neurosymbolic AI caused people, while they were using the tool, to realize how important this work was.”
Cook said that some customers allowed AWS to review their data and the documents used to annotate answers as right or wrong, and found that the tool’s output performed on par with humans who had a copy of the rule book in front of them. He added that what counts as true or correct can often be subject to interpretation; automated reasoning doesn’t have quite the same issue.
“It was really amazing! It was amazing to have people with logic backgrounds be in an internal communication channel arguing about what is true or not, and in five or six messages point to the tool and re …