Presented by Avalon Holographics

The pace of AI continues to be staggering. From simple pattern recognition systems to large language models (LLMs), and now as we move into the era of physical AI, the power of these systems continues to improve our lives. But humans always need to be in the loop. We need to see the data, interact with it, and identify the simulation-to-reality gaps; we need to help these systems help us. Spatial computing has traditionally been the realm of human understanding; we now share this space with AI. Understanding the different ways humans should interact with 3D data helps guide the choice of medium where we can get the best from AI.

1. The 2D screen: the precision desktop

The 2D screen has been the reliable workhorse since spatial computing began, and it remains the primary interface, with most professional work still happening there. For a developer training a model or a single user doing 3D modeling, the 2D screen serves the individual contributor well. However, a 2D screen forces a "3D-to-2D" mental translation: the user must hold the model in their mind while rotating, zooming, and interacting with one specific corner of the spatial world. That cognitive load forces the brain to work overtime to build and maintain the mental model.

2. VR: the immersive workspace

VR offers the first jump beyond 2D. By completely immersing yourself in the 3D world, you gain a capability that is acces …