Major AI providers like OpenAI, Google and xAI have all launched AI agents that conduct exhaustive or “deep” research across the web on behalf of users. These agents spend minutes at a time compiling extensively cited white papers and reports that, at their best, are ready to circulate to colleagues, customers and business partners without any human editing or reworking.
But out of the box, they all share a significant limitation: they can only search the web and its many public-facing websites, not an enterprise customer’s internal databases and knowledge graphs. The exception is when the enterprise or its consultants take the time to build a retrieval-augmented generation (RAG) pipeline using something like OpenAI’s Responses API, but that requires a fair amount of time, expense and developer expertise to set up.
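As a rough illustration of what such a pipeline involves, the sketch below assembles a request that grounds a research question in internal documents via the file-search tool of OpenAI’s Responses API. The model name, vector store ID and question are placeholders, and this is an assumption-laden sketch of the general approach, not AlphaSense’s implementation:

```python
# Sketch: grounding a research query in internal documents using the
# file_search tool of OpenAI's Responses API. The enterprise would first
# upload its documents to a vector store; the ID below is a placeholder.
def build_deep_research_request(question: str, vector_store_id: str) -> dict:
    """Assemble a Responses API payload for a RAG-style query."""
    return {
        "model": "gpt-4o",  # any Responses-API-capable model
        "input": question,
        "tools": [
            {
                "type": "file_search",
                "vector_store_ids": [vector_store_id],
            }
        ],
    }

payload = build_deep_research_request(
    "Summarize Q3 revenue drivers from our internal reports.",
    "vs_internal_kb",  # placeholder vector store ID
)
# With an OpenAI client and API key configured, the payload would be
# passed along as client.responses.create(**payload).
```

The point of the sketch is how much wiring sits outside the model call itself: documents must be ingested into a vector store, kept up to date, and referenced per request, which is the setup burden the article describes.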
Now AlphaSense, an early AI platform for market intelligence, is trying to go enterprises one better, particularly those in financial services and other large organizations (it counts 85% of the S&P 100 among its customers).
Today the company announced its own “Deep Research,” an autonomous AI agent designed to automate complex research workflows. It extends across the web, AlphaSense’s catalog of continuously updated, non-public proprietary data sources (such as Goldman Sachs and Morgan Stanley research reports), and whatever internal data the enterprise customer chooses to connect to the platform.
Now available to all AlphaSense users, the tool helps generate detailed analytical outputs in a fraction of the time traditional methods require.
“Deep Research is our first autonomous agent that conducts research in the platform on behalf of the user—reducing tasks that once took days or weeks to just minutes,” said Chris Ackerson, Senior Vice President of Product at AlphaSense, in an exclusive interview with VentureBeat.
Underlying model architecture and performance optimization
To power its AI tools — including Deep Research — AlphaSense relies on a flexible architecture built around a dynamic suite of large language models.
Rather than committing to a single provider, the company selects models based on performance benchmarks, use case fit, and ongoing developments in the LLM ecosystem.
Currently, AlphaSense draws on three primary model families: Anthropic, accessed via AWS Bedrock, for advanced reasoning and agentic workflows; Google Gemini, valued …