Analyst(s): Richard Gordon
Publication Date: April 14, 2025
Google Cloud has introduced Axion, a custom-built processor based on Arm’s Neoverse V2 architecture, engineered to support high-throughput, energy-efficient cloud computing. Axion powers Google’s C4A VM family and is available across services including Compute Engine, GKE, AlloyDB, Cloud SQL, Dataproc, and Batch.
What is Covered in this Article:
- Google Cloud launches Axion, its first custom Arm Neoverse V2-based CPU
- Benchmark results show 40–212% performance gains over x86 CPUs across varied workloads
- AI inference, HPC simulations, web services, and databases show strong throughput advantages
- Customers across multiple industries are already deploying Axion-based C4A VMs
- New cloud migration resources aim to accelerate enterprise-scale transitions
The News: Google Cloud has unveiled Axion, its first custom-built Arm-based CPU, designed in collaboration with Arm and built on the Neoverse V2 platform. Optimized for real-world cloud workloads, Axion is now available in Google Cloud’s C4A VM instances and powers services including Google Kubernetes Engine, Cloud SQL, AlloyDB, Dataproc, and Batch.
Early benchmarks show Axion achieving substantial performance gains compared to x86-based instances across multiple workload types, including AI inference, HPC simulations, and general-purpose compute. Customers such as Spotify, Databricks, MongoDB, and Palo Alto Networks are seeing measurable scalability, performance, and TCO gains.
Google Axion Challenges x86 in Cloud Workloads with Arm-Based Advantage
Analyst Take: Google Axion enters the market with validated performance gains against x86 incumbents across diverse real-world workloads. By leveraging Arm’s Neoverse V2 cores, Axion targets compute density, throughput, and energy efficiency – characteristics critical to AI, HPC, and cloud-native services. The results from Google and Arm’s testing offer compelling comparisons highlighting Axion’s architectural competitiveness, particularly for enterprises looking to optimize cost and performance simultaneously.
AI Inference Workloads Show Material Throughput Gains
In Retrieval-Augmented Generation (RAG) tasks using Llama 3.1, Axion-based instances demonstrated consistent, repeatable throughput improvements. On text generation and prompt encoding tasks, c4a-standard (48 vCPU) VMs matched or exceeded +150% per-vCPU performance gains over Intel Emerald Rapids and AMD Genoa. These tests measured throughput in tokens per second at FP32 precision, and the results show that Axion’s performance advantage is not marginal – it is structurally significant for inference-heavy workloads that prioritize throughput at scale.
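The per-vCPU comparisons above come down to a simple normalization: divide each instance’s throughput by its vCPU count before comparing. A minimal sketch of that calculation (the tokens/sec figures below are hypothetical placeholders, not Google’s or Arm’s benchmark data):

```python
# Compute the per-vCPU throughput gain of one VM over another.
# All numeric inputs below are hypothetical, not published benchmark results.

def per_vcpu_gain(tps_a: float, vcpus_a: int, tps_b: float, vcpus_b: int) -> float:
    """Return the percentage gain of VM A over VM B, normalized per vCPU."""
    a = tps_a / vcpus_a  # tokens/sec per vCPU on VM A
    b = tps_b / vcpus_b  # tokens/sec per vCPU on VM B
    return (a / b - 1.0) * 100.0

# Example: a 48-vCPU instance at 1,200 tokens/sec vs. another 48-vCPU
# instance at 480 tokens/sec works out to a +150% per-vCPU gain.
gain = per_vcpu_gain(1200, 48, 480, 48)
print(f"+{gain:.0f}%")  # -> +150%
```

Normalizing per vCPU matters because the compared instance families do not always offer identical vCPU counts; it is the form in which the gains quoted in this article are expressed.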
HPC Simulations Benefit from Core-Level Efficiency
Crash and impact simulation workloads using Altair OpenRadioss show Axion outperforming x86 CPUs across multiple test models. For instance, simulations involving 10 million 2D+3D elements (Ford Taurus T10M) or vertical impact models (SkyCAB FH Aachen) saw performance gains ranging from +7% to +24% on a per-vCPU basis. Axion’s architecture, particularly its memory bandwidth per vCPU and single-threaded performance, enables it to sustain demanding simulation loads without relying on simultaneous multithreading. This positions Axion as a practical alternative for physics-based simulation workflows common in the automotive and aerospace industries.
Web and Database Services Show Consistent Latency and Throughput Benefits
In general-purpose benchmarks, Axion-based c4a-standard and c4a-highmem VMs showed strong performance gains in NGINX and PostgreSQL workloads. With four vCPUs, Axion improved NGINX request throughput by +40% over Intel and +194% over AMD. For PostgreSQL, running on 8 vCPUs with 256 clients, Axion showed +40–46% gains in orders per minute. These figures suggest Axion is well-suited for transaction-heavy applications and services requiring consistent latency under load, including SaaS frontends, e-commerce platforms, and microservices-based databases.
Ecosystem Readiness and Migration Infrastructure Are Expanding
Google and Arm have launched a Cloud Migration Resource Hub to support the adoption of Axion-based C4A VMs. With over 100 curated learning paths and an expanding ISV list—including IBM Instana, Couchbase, Verve, and Applause—the ecosystem provides early guidance on compatibility and performance tuning. Titanium, the microcontroller-based offload system underpinning each Axion instance, supports lifecycle management and infrastructure resilience. While the toolchain and migration documentation are in place, long-term adoption will depend on sustained support across open-source and enterprise software vendors.
What to Watch:
- Developers must ensure workload compatibility when migrating from x86 to Arm-based Axion
- Enterprise buyers will evaluate how Axion’s performance translates into real-world cost and latency gains
- Competing vendors like AMD and Intel may respond with optimizations to retain market share in HPC and AI
- Success depends on Axion sustaining momentum across diverse workloads beyond early adopters
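On the compatibility point above, the first step in any x86-to-Arm migration is confirming which CPU architecture a runtime or build target actually reports. A minimal sketch of such a check using only the Python standard library (the alias sets and function name are illustrative, not from any Google or Arm tooling):

```python
# First-pass architecture check when moving workloads between x86 and
# Arm-based (e.g. Axion C4A) VMs. Aliases cover common platform spellings.
import platform

ARM64_ALIASES = {"aarch64", "arm64"}
X86_64_ALIASES = {"x86_64", "amd64"}

def host_arch_family() -> str:
    """Classify the current machine as 'arm64', 'x86_64', or 'other'."""
    machine = platform.machine().lower()
    if machine in ARM64_ALIASES:
        return "arm64"
    if machine in X86_64_ALIASES:
        return "x86_64"
    return "other"

print(host_arch_family())
```

In practice, a migration audit goes well beyond this: teams also need to verify native dependencies, compiled extensions, and container images (for example, publishing multi-architecture images) before cutting over to Arm-based instances.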
See the full blog post by Arm’s Bhumik Patel on Google Axion and its cloud impact.
Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.
Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.
Other insights from The Futurum Group:
Arm Reports Record Q3 Revenue as Armv9 and CSS Adoption Accelerates
Arm Team-Up Aims to Grow Revenues from Software-Defined Vehicles
How Arm is Powering the Next Generation of AI-Enabled Vehicles – Six Five On The Road at CES 2025
Author Information
Richard Gordon is Vice President & Practice Lead, Semiconductors for The Futurum Group. He has been involved in the semiconductor industry for more than 30 years, first in engineering and then in technology and market research, industry analysis, and business advisory.
For many years, Richard led Gartner's Semiconductor and Electronics practice, building a 20-person global team covering all aspects of semiconductor industry research, from manufacturing to chip markets and end applications. Having served on Gartner's Senior Research Board and as Gartner's Chief Forecaster, Richard has extensive experience in developing and implementing methodologies for market sizing, share, and forecasting to deliver data, analysis, and insights about the competitive landscape, technology roadmaps, and market growth drivers.
Richard is a sought-after technology industry analyst, both as a trusted advisor to clients and as an expert commentator speaking at industry events and appearing on live TV shows such as CNBC.