PRESS RELEASE

New Categories of High-Performance AI PCs Are Here to Do What Data Centers Can’t

Analyst(s): Olivier Blanchard
Publication Date: June 24, 2025

Futurum Highlights How Desktop-Sized AI Supercomputers Will Change the Face of Secure, Local Compute for Organizations

Olivier Blanchard, Research Director at The Futurum Group, zooms in on new types of AI PCs dubbed “AI developer workstations.” He explains how these new high-performance AI PCs, which pack AI workload capabilities generally only available in data centers into desktop and deskside form factors, are operational and ROI accelerators for any organization currently building, training, testing, and fine-tuning AI solutions such as chatbots, assistants, and agents.

Key Points:

  • While mainstream AI PCs are designed to enhance traditional PC workloads with a mix of productivity- and security-boosting AI-enabled features, a more high-performance category of AI PC is beginning to find its way to the market. These high-performance AI PCs, which are designed to provide advanced levels of AI-accelerated compute performance for professionals such as AI developers, AI engineers, and data scientists, feature far greater capabilities and compute power. Rather than leaning on power-efficient NPUs or balanced NPU-GPU coordination to optimize resources for AI-enhanced mainstream workloads, high-performance AI PCs prioritize maximum local system performance for the most demanding model training and AI inference workloads.
  • High-performance AI PCs are still a nascent category within the segment, with offerings spanning, on the one hand, notebook, desktop, and deskside form factors, and on the other, a rich range of AI-accelerated compute capability tiers aimed at addressing the needs (and budgets) of every type of likely use case for high-performance professional AI PCs.
  • In recent months, NVIDIA, AMD, Intel, and Qualcomm, along with their PC OEM partners, have introduced the first wave of credible high-performance AI PCs targeting AI developers, software engineers, and data scientists operating across enterprise and SMB markets. While these systems’ performance characteristics vary, their overall value proposition is simple: Moving complex AI workloads from the cloud to the edge, whether motivated by security needs, operational efficiency goals, cost reduction targets, other operational reasons, or all of the above, doesn’t have to be difficult.
  • These new high-performance AI PCs leverage a broad range of AI-enabled platforms and system configurations to serve the use cases they are designed for, from NVIDIA’s GB10 and GB300 AI “superchips” and discrete Qualcomm AI 100 NPUs to NVIDIA’s RTX workstation GPUs. This report will help articulate a framework for this emerging segment and clarify the key differences between these types of systems relative to the use cases they aim to serve.

Overview:

Figure 1: The Four Categories of High-Performance AI PCs

AI-enabled PCs, or “AI PCs” for short, are generally considered the next evolution of the PC and are expected to replace the vast majority of pre-NPU PCs within the next few years, with upward of 93% of new PCs shipped in 2030 expected to be AI-enabled. Futurum’s research supports this projection: 89% of enterprise IT decision-makers (ITDMs) already share that expectation.

While these mainstream AI PCs are designed to enhance traditional PC workloads with a mix of productivity- and security-boosting AI-enabled features, a more high-performance category of AI PC is beginning to find its way to the market. Unlike their more mainstream counterparts, these new high-performance AI PCs are designed to provide far more advanced levels of AI-accelerated compute performance for professionals such as AI developers, AI engineers, and data scientists. Therefore, rather than leaning on power-efficient NPUs or balanced NPU-GPU coordination to optimize resources for AI-enhanced mainstream workloads and deliver all-day battery life, high-performance AI PCs prioritize maximum local system performance for demanding model training and AI inference workloads that have, until now, been handled exclusively in data centers and the cloud.

In essence, high-performance AI PCs can be best understood as AI servers packed into PC form factors. Broadly, they are specialized computers designed to handle computationally intensive tasks common to AI, machine learning (ML), and deep learning (DL). They currently appear in three familiar PC form factors, namely notebook, desktop, and deskside, and span a range of AI-accelerated compute capability tiers designed to address the workload needs and budgets of every type of organization and user.

Because the high-performance AI PC is still a nascent category within the PC segment, its nomenclature remains a bit fluid. We have also heard these systems referred to as “AI developer PCs” and “AI developer workstations,” but we caution that this nomenclature may be too narrow for their effective operational range. As we see today’s market opportunity, high-performance AI PCs are not limited to AI developer use cases. They are also relevant to creators in media and entertainment, financial institutions, independent software vendors (ISVs), specialized services providers, software developers, architects and engineers, and data scientists. More to the point, we anticipate that the use cases for high-performance AI PCs will grow, not shrink, suggesting that an open nomenclature, at least for now, may be preferable to an unnecessarily restrictive one.

At the 2025 Computex conference in Taiwan, just a few weeks ago, Acer, ASUS, Dell Technologies, GIGABYTE, HP, Lenovo, and MSI introduced two new types of high-performance AI PCs dubbed “personal AI supercomputers” powered by NVIDIA silicon: DGX Spark and DGX Station. The pitch behind these PCs was that they would “empower a global ecosystem of developers, data scientists, and researchers” in enterprises, government agencies, startups, research institutions, and other organizations needing the performance and capabilities of an AI server but in a desktop form factor.

The compact DGX Spark comes equipped with the NVIDIA GB10 Grace Blackwell Superchip and fifth-generation Tensor Cores to deliver up to 1 petaflop of AI compute at FP4 precision (with 128GB of unified memory). Developers can also easily export models from DGX Spark to NVIDIA DGX Cloud (or any equivalent AI-friendly cloud or data center infrastructure).

On the other hand, the much larger DGX Station is powered by the NVIDIA GB300 Grace Blackwell Ultra Desktop Superchip to deliver a far more potent 20 petaflops of AI performance (with 784GB of unified system memory). Fittingly, DGX Station also comes with NVIDIA’s ConnectX-8 SuperNIC, which supports networking speeds of up to 800Gb/s, not only for high-speed connectivity but also for potential multi-station scaling scenarios. DGX Station notably supports NVIDIA Multi-Instance GPU (MIG) technology, allowing it to be partitioned into as many as seven instances, each with its own compute cores, caches, and high-bandwidth memory.
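To make the Multi-Instance GPU concept concrete, here is a minimal sketch using the standard nvidia-smi MIG commands available on MIG-capable NVIDIA GPUs today. Whether the GB300 DGX Station exposes the same profiles and workflow is our assumption rather than a vendor statement.

# A minimal sketch: enumerating MIG profiles and instances with nvidia-smi.
# Assumes a MIG-capable NVIDIA GPU and driver; GB300-specific profile names
# are not confirmed here.
import subprocess

def run(cmd: list[str]) -> str:
    # Run a command and return its stdout, raising on failure.
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

# List the GPU instance profiles the card supports (names and sizes vary by GPU).
print(run(["nvidia-smi", "mig", "-lgip"]))

# List any GPU instances that have already been created.
print(run(["nvidia-smi", "mig", "-lgi"]))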

Because both systems use the NVIDIA DGX operating system, come with the latest NVIDIA AI software stack, and make it easy to access NVIDIA NIM microservices and NVIDIA Blueprints, they essentially echo the types of cloud-based AI factory environments that AI developers are already familiar with. Their compatibility with common developer tools such as PyTorch, Ollama, and Jupyter allows developers to securely build, test, and fine-tune models and perform inference on DGX Spark before deploying them to a cloud environment.
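For a sense of what this local-first workflow looks like, the sketch below uses plain PyTorch to run a toy fine-tuning loop entirely on-device and save a checkpoint for later cloud deployment. The model and data are stand-ins of our own invention; on a DGX system the same loop would simply target the Blackwell GPU.

# A minimal local fine-tuning sketch in PyTorch (toy model and data).
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"

# Stand-in for a real model you would load locally (e.g., via PyTorch or Ollama).
model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10)).to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

# Toy fine-tuning steps on random data, entirely on the local machine.
x = torch.randn(32, 512, device=device)
y = torch.randint(0, 10, (32,), device=device)
for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()

# Checkpoint locally; pushing this artifact to DGX Cloud (or any other cloud)
# is a separate, provider-specific step.
torch.save(model.state_dict(), "finetuned_checkpoint.pt")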

Dell Technologies delivered perhaps the best-articulated versions of both systems at its concurrent Dell Technologies World 2025 conference: the Dell Pro Max with GB10 and the Dell Pro Max with GB300. HP, for its part, introduced the GB10-powered HP ZGX Nano AI Station.

As a quick reference, NVIDIA’s GB10 Grace Blackwell Superchip delivers 1,000 on-system TOPS. While significantly less capable than the much larger GB300-powered DGX Stations, GB10-powered DGX Spark systems nonetheless deliver roughly 20x the NPU TOPS currently available on Copilot+ PCs. This allows these Mac Mini-sized systems to handle popular open-weight models of up to 200 billion parameters (such as Meta’s Llama 3, Google’s Gemma, Mixtral 8x22B, BLOOM, and distilled versions of DeepSeek R1) completely locally.
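A quick back-of-the-envelope calculation, based on our assumption of 4-bit quantized weights, shows why a model of roughly 200 billion parameters can fit within DGX Spark’s 128GB of unified memory:

# Rough memory math for running a ~200B-parameter model locally (an
# assumption-laden sketch, not vendor-published math).
params = 200e9            # 200 billion parameters
bytes_per_param = 0.5     # 4-bit weights (e.g., FP4/INT4) = half a byte each
weights_gb = params * bytes_per_param / 1e9
print(f"Quantized weights: ~{weights_gb:.0f} GB of 128 GB unified memory")
# Result: ~100 GB, leaving headroom for activations and the KV cache. At FP16
# (2 bytes per parameter), the same model would need ~400 GB and would not fit.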

This capability alone sets these high-performance, professional-grade AI PCs apart from even the most premium versions of the more mainstream AI PCs currently driving the Windows PC refresh cycle: These are not merely more powerful desktop versions of Windows 11 AI PCs. They are designed specifically to handle the training, fine-tuning, and testing of some of the most popular AI models on the market.

Along similar lines, Dell Technologies also introduced a high-performance AI PC in a notebook form factor at its 2025 Dell Technologies World event. Powered by a Qualcomm AI 100 inference card acting as a discrete NPU, the Dell Pro Max Plus mobile workstation aims to balance high-performance AI PC specs for AI developers, software engineers, and data scientists with the portability of a notebook form factor and maximum performance per watt. The inference card’s 32 dedicated AI cores and 64GB of dedicated LPDDR4x memory help this mobile workstation deliver up to 450 TOPS of 8-bit integer (INT8) AI compute and roughly 400 teraFLOPS (0.4 petaFLOPS) of FP16 compute.

These systems can run AI models ranging from 30 billion to 109 billion parameters directly on-device. This means they are well-suited to developing, testing, and refining models such as Meta’s Llama 3.3 and Llama 4 Scout, Cohere’s Command R, AI21 Labs’ Jamba, Google’s Gemma 3, Mistral AI’s Mistral Small, and Databricks’ DBRX. The AI 100’s dedicated AI cores are especially useful for accelerating inference, with 8-bit (INT8) quantization reducing these models’ memory footprints and keeping them responsive in real time.
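To illustrate the memory-footprint arithmetic behind INT8 quantization, the sketch below applies PyTorch’s dynamic quantization to a toy linear stack and compares on-disk sizes. Production flows for the Qualcomm AI 100 go through Qualcomm’s own toolchain; this example only demonstrates the roughly 4x FP32-to-INT8 shrinkage.

# A minimal sketch of INT8 quantization shrinking a model's footprint.
import os
import torch
import torch.nn as nn

model = nn.Sequential(*[nn.Linear(1024, 1024) for _ in range(8)])

# Quantize the Linear layers' weights to 8-bit integers.
quantized = torch.ao.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

def saved_size_mb(m: nn.Module, path: str = "tmp.pt") -> float:
    # Serialize the model and measure the file size in megabytes.
    torch.save(m.state_dict(), path)
    size = os.path.getsize(path) / 1e6
    os.remove(path)
    return size

print(f"FP32: {saved_size_mb(model):.1f} MB, INT8: {saved_size_mb(quantized):.1f} MB")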

Another category of high-performance AI PCs represents a more natural, linear evolution of traditional high-performance desktops than the evolutionary leap that NVIDIA’s GB10 and GB300 systems (and Qualcomm AI 100 systems) represent. This category of high-performance AI PC builds on generations of PCs used by engineers, architects and designers, data scientists, VFX and animation professionals, and researchers needing to work with massive datasets and complex computational workloads that would overwhelm more mainstream desktops. Adding powerful AI capabilities to this category updates established, trusted systems for the age of AI, making the refresh cycle simpler for existing users. These systems also have the advantage of being extremely customizable, making them adaptable to a broader variety of environments, use cases, and budgets than their comparatively more rigid GB10, GB300, and AI 100 counterparts.

Base specs for these systems essentially match the system TOPS and petaflops found in GB10 systems, but because they are upward-configurable, those figures can be exceeded by a factor of 4x if need be. In other words, their versatility lets these systems fill a critical gap between GB10 and GB300 systems for use cases that require more horsepower than a GB10 system but nowhere near the capabilities, let alone the cost, of a GB300 system. We would also caution that these systems tend to appeal to a broader range of users and may not be as focused on AI model development, training, and fine-tuning as the more purpose-built systems powered by NVIDIA GB10 and GB300 and Qualcomm AI 100 parts. They can handle these tasks but may be more likely to be used in applications with extremely high render requirements (such as VFX and animation, engineering and design, architecture and construction, and complex system modeling and management).

The full report is available via subscription to Futurum Intelligence’s Intelligent Devices IQ service—click here for inquiry and access.

Futurum clients can read more in the Futurum Intelligence Platform, and non-clients can learn more here: Intelligent Devices Practice.

About the Futurum Intelligent Devices Practice

The Futurum Intelligent Devices Practice provides actionable, objective insights for market leaders and their teams so they can respond to emerging opportunities and innovate. Public access to our coverage can be seen here. Follow news and updates from the Futurum Practice on LinkedIn and X. Visit the Futurum Newsroom for more information and insights.

Author Information

Olivier Blanchard

Olivier Blanchard is Research Director, Intelligent Devices. He covers edge semiconductors and intelligent AI-capable devices for Futurum. In addition to having co-authored several books about digital transformation and AI with Futurum Group CEO Daniel Newman, Blanchard brings considerable experience demystifying new and emerging technologies, advising clients on how best to future-proof their organizations, and helping maximize the positive impacts of technology disruption while mitigating their potentially negative effects. Follow his extended analysis on X and LinkedIn.
