As enterprises continue to explore ways to optimize how they handle varied workloads in the data center and at the edge, a new startup, Ubitium, has emerged from stealth with an interesting, cost-saving computing approach: universal processing.
Led by semiconductor industry veterans, the startup has developed a microprocessor architecture that consolidates all processing tasks – from AI inference to general-purpose computing – into a single versatile chip.
This, Ubitium says, has the potential to transform how enterprises approach computing, saving them the hassle of relying on different types of processors and processor cores for different specialized workloads. It also announced $3.7 million in funding from multiple venture capital firms.
Ubitium said it is currently focused on developing universal chips that could optimize computing for edge or embedded devices, helping enterprises cut deployment costs by up to 100x. However, it emphasized that the architecture is highly scalable and could also be used in data centers in the future.
It is going up against some established names in the edge AI compute space, such as Nvidia with its Jetson line of chips and SiMa.ai with its Modalix family, showing how the race to create AI-specific processors is moving down the funnel from large data centers to more discrete devices and workloads.
Why an all-in-one chip?
Today, when it comes to powering an edge or embedded system, organizations rely on system-on-chips (SoCs) integrating multiple specialized processing units — CPUs for general tasks, GPUs for graphics and parallel processing, NPUs for accelerated AI workloads, DSPs for signal processing and FPGAs for customizable hardware functions. These integrated units work in conjunction to ensure that the device delivers the expected performance. A good example is smartphones, which often pair NPUs with other processors for efficient on-device AI processing while maintaining low power consumption.
While the approach does the job, it comes at the expense of increased hardware and software complexity and higher manufacturing costs — making adoption difficult for enterprises. On top of that, when there’s a patchwork of components on the stack, underutilization of resources can become a major issue. Essentially, when the devic …