Perceptron stuns with high-performing video analysis AI model Mk1, 80-90% cheaper than Anthropic, OpenAI & Google

by | May 12, 2026 | Technology

AI that can see and understand what’s happening in a video — especially a live feed — is understandably an attractive product for many enterprises and organizations. Beyond acting as a security “watchdog” over sites and facilities, such an AI model could clip out the most exciting parts of marketing videos and repurpose them for social media, identify inconsistencies and gaffes in videos and flag them for removal, and analyze the body language and actions of participants in controlled studies or candidates applying for new roles. While some AI models offer this type of functionality today, it’s far from a mainstream capability.

The two-year-old startup Perceptron Inc. is seeking to change all that. Today, it announced the release of its flagship proprietary video analysis reasoning model, Mk1 (short for “Mark One”), at a cost — $0.15 per million input tokens / $1.50 per million output tokens through its application programming interface (API) — that comes in about 80-90% less than leading proprietary rivals, namely Anthropic’s Claude Sonnet 4.5, OpenAI’s GPT-5, and Google’s Gemini 3.1 Pro.

Led by co-founder and CEO Armen Aghajanyan, formerly of Meta FAIR and Microsoft, the company spent 16 months developing a “multi-modal recipe” from the ground up to address the complexities of the physical world. This launch signals a new era in which models are expected to understand cause-and-effect, object dynamics, and the laws of physics with the same fluency they once applied to grammar.

Interested users and potential enterprise customers can try it out for themselves on a public demo site from Perceptron here.

Performance across spatial …
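To make the pricing gap concrete, here is a minimal sketch that estimates per-request API cost from the rates quoted above ($0.15 per million input tokens, $1.50 per million output tokens). The token counts and the hypothetical rival rate used for comparison are illustrative assumptions, not figures from Perceptron or any other vendor.

```python
def request_cost(input_tokens: int, output_tokens: int,
                 in_rate: float = 0.15, out_rate: float = 1.50) -> float:
    """Estimate the dollar cost of one API call.

    Rates are expressed in dollars per million tokens, matching the
    Mk1 pricing quoted in the article ($0.15 in / $1.50 out).
    """
    return (input_tokens / 1_000_000) * in_rate + (output_tokens / 1_000_000) * out_rate


# Example: a video-analysis request with 200k input tokens (frames +
# prompt) and 5k output tokens of analysis. Token counts are made up.
mk1_cost = request_cost(200_000, 5_000)

# Hypothetical rival priced ~10x higher on both rates, purely to
# illustrate the claimed 80-90% savings; not a real vendor's price list.
rival_cost = request_cost(200_000, 5_000, in_rate=1.50, out_rate=15.00)

savings = 1 - mk1_cost / rival_cost
print(f"Mk1: ${mk1_cost:.4f}, rival: ${rival_cost:.4f}, savings: {savings:.0%}")
```

Under these assumed numbers the Mk1 call costs $0.0375 versus $0.375, a 90% reduction, which is the upper end of the range the company claims.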
