Zain Asgar, a Stanford adjunct professor and founder with a successful exit behind him, just raised an $80 million Series A for a startup that tackles the AI inference bottleneck problem in an astute way. The round was led by Menlo Ventures.
The company, Gimlet Labs, has built what it claims is the first and only “multi-silicon inference cloud”: software that lets a single AI workload run simultaneously across diverse types of hardware. It can split an AI app’s work across traditional CPUs, AI-tuned GPUs, and high-memory systems.
“We basically run across whatever different hardware that’s available,” Asgar told TechCrunch.
A single agent may chain together multiple steps, and each “requires different hardware: Inference is compute-bound; decode is memory-bound; and tool calls are network-bound,” writes lead investor Tim Tully of Menlo in a blog post about the funding.
No chip yet does it all, but as new hardware gets rolled out, and aging GPUs get redeployed, “the multi-silicon fleet is ready — it’s just missing the software layer to make it work.” That’s what Tully believes Gimlet Labs offers.
If the current deploy-more-compute trend continues, McKinsey estimates data center spending will tally nearly $7 trillion by 2030. Yet Asgar says apps use the hardware already deployed only “somewhere between 15 to 30 percent” of the time.
“Another way to think about this: you’re wasting hundreds of billions of dollars because you’re just leaving idle resources,” he said. “Our goal was basically to try to figure out how you can get AI workloads to be 10x more efficient than ever, today.”
So he and his cofounders, Michelle Nguyen, Omid Azizi, and Natalie Serrino, set about building orchestration software that slices up agentic workloads so they can be simultaneously spread across all kinds of hardware.
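The idea can be illustrated with a small sketch. This is purely hypothetical code, not Gimlet Labs' actual software: it routes each stage of an agent's workflow to a hardware pool matching the stage's dominant resource need, following the compute-bound/memory-bound/network-bound split Tully describes. All stage names, pool names, and the mapping itself are illustrative assumptions.

```python
# Hypothetical sketch, NOT Gimlet Labs' implementation: route each stage
# of an agentic workload to the hardware pool best suited to its bottleneck.

# Resource each stage is bound by (per the article: inference is
# compute-bound, decode is memory-bound, tool calls are network-bound).
STAGE_PROFILES = {
    "inference": "compute",
    "decode": "memory",
    "tool_call": "network",
}

# Illustrative hardware pools keyed by the resource they serve best.
HARDWARE_POOLS = {
    "compute": "gpu-cluster",       # AI-tuned GPUs
    "memory": "high-memory-nodes",  # high-memory systems
    "network": "cpu-fleet",         # commodity CPUs
}

def schedule(stages):
    """Map each stage of an agent's chain to a suitable hardware pool."""
    return [(stage, HARDWARE_POOLS[STAGE_PROFILES[stage]]) for stage in stages]

if __name__ == "__main__":
    # An agent chaining an inference step, token decoding, and a tool call.
    for stage, pool in schedule(["inference", "decode", "tool_call"]):
        print(f"{stage} -> {pool}")
```

The real system presumably makes these placement decisions dynamically based on fleet utilization; the static lookup table here only conveys the shape of the idea.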
Gimlet Labs claims it reliably speeds AI inference up b …