AI skeptics aren’t the only ones warning users not to blindly trust model outputs — the AI companies say so themselves in their terms of service.
Take Microsoft, which is currently focused on getting corporate customers to pay for Copilot. But it’s also been getting dinged on social media over Copilot’s terms of use, which appear to have been last updated on October 24, 2025.
“Copilot is for entertainment purposes only,” the company warned. “It can make mistakes, and it may not work as intended. Don’t rely on Copilot for important advice. Use Copilot at your own risk.”
A Microsoft spokesperson told PCMag that the company will be updating what it described as “legacy language.”
“As the product has evolved, that language is no longer reflective of how Copilot is used today and will be altered with our next update,” the spokesperson said.
Tom’s Hardware noted that Microsoft isn’t the only company using this kind of disclaimer for AI. OpenAI and xAI both caution users against relying on their output as “the truth” (to quote xAI) or as “a sole source of truth or factual information” (OpenAI).