Subtle Computing’s voice isolation models help computers understand you in noisy environments

Nov 6, 2025 | Technology

California-based startup Subtle Computing is tackling the problem of capturing people’s voices in noisy environments with its own voice isolation models — a technology that could benefit voice-based AI products and services.

Consumer apps using voice AI are seeing tremendous growth today. AI meeting notetakers like Granola, Fireflies, Fathom, and Read AI have drawn attention from both users and investors. Established companies like OpenAI, ClickUp, and Notion have integrated voice transcription into their products. App makers like Wispr Flow and Willow are working on voice dictation. Then there are hardware companies like Plaud and Sandbar that build devices to transcribe your voice and then use AI for insight generation and interaction.

One of the challenges for these companies is capturing users’ voices in any kind of environment, such as loud cafes or offices.

To address this, Subtle Computing developed an end-to-end voice isolation model that can understand what you are saying even in noisy environments. Co-founder Tyler Chen said that many companies are working on voice understanding. He noted that device manufacturers sometimes send the audio to the cloud to get a clean output, but that approach isn't efficient.

The startup trains models tailored to the acoustics of a particular device and adapted to the user's voice, rather than training one model that works across all devices.

“What we found is that when we preserve the acoustic characteristics of a device, we get an order of magnitude better performance than generic solutions. This also means we can give personalized solutions to the user,” Chen said.


The company was founded by Tyler Chen, David Harrison, Savannah Cofer, and Jackie Yang, who met at Stanford. Chen, Harrison, and Yang were pursuing their PhDs while Cofer was doing an MBA. They came together in Steve Blank's Lean Launchpad course, where they worked on alternative interfaces for computing and started building Subtle Computing.


“As we are interacting more with AI, we are moving towards a future where we talk with our devices,” Chen said. “But the obvious question is how much our devices understand us, the users, in all the environments where we work day to day. Be it a super loud coffee shop or a shared office where there are other people around you, and you might be talking about something private — voice doesn’t work that way today,” he added.

On some devices, the startup said it can run a standalone voice isolation model that is only a few megabytes in size and has about 100ms of latency. For other devices, the company can also run a separate model that transcribes the isolated voice and outputs text. Chen said that thanks to the isolation model, the company's transcription model understands users better and, in turn, produces a more accurate transcript.
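To make that two-stage design concrete, here is a minimal sketch of an on-device pipeline in which a small isolation model cleans roughly 100ms audio chunks before a separate transcription model turns the cleaned stream into text. The function names and interfaces are hypothetical placeholders for illustration, not Subtle Computing's actual models or API.

```python
# Illustrative two-stage pipeline: isolate the speaker's voice per ~100ms
# chunk, then transcribe the cleaned audio. The "models" here are stubs.
import numpy as np

SAMPLE_RATE = 16_000
CHUNK_MS = 100                                   # per-chunk latency budget
CHUNK_SAMPLES = SAMPLE_RATE * CHUNK_MS // 1000


def isolate_voice(chunk: np.ndarray) -> np.ndarray:
    """Stand-in for a few-megabyte on-device isolation model.
    Here it only peak-normalizes the chunk; a real model would suppress
    background noise while preserving the target speaker."""
    peak = float(np.max(np.abs(chunk)))
    return chunk / peak if peak > 0 else chunk


def transcribe(chunks: list[np.ndarray]) -> str:
    """Stand-in for the separate transcription model that consumes the
    isolated audio stream and returns text."""
    return f"<transcript of {len(chunks)} cleaned chunks>"


def run_pipeline(audio: np.ndarray) -> str:
    """Feed cleaned chunks from the isolation stage into transcription."""
    cleaned = []
    for start in range(0, len(audio), CHUNK_SAMPLES):
        cleaned.append(isolate_voice(audio[start:start + CHUNK_SAMPLES]))
    return transcribe(cleaned)


if __name__ == "__main__":
    # One second of synthetic noisy audio stands in for microphone input.
    noisy = np.random.default_rng(0).normal(size=SAMPLE_RATE).astype(np.float32)
    print(run_pipeline(noisy))
```

The point of the structure is the one the article describes: because the transcription stage only ever sees audio that has already been cleaned on the device, it can produce a more accurate transcript without sending raw audio to the cloud.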

Subtle Computing said that Qualcomm has selected the startup as a member of its voi …
