Google Axion Challenges x86 in Cloud Workloads with Arm-Based Advantage

Analyst(s): Richard Gordon
Publication Date: April 14, 2025

Google Cloud has introduced Axion, a custom-built processor based on Arm’s Neoverse V2 architecture, engineered to support high-throughput, energy-efficient cloud computing. Axion powers Google’s C4A VM family and is available across services, including Compute Engine, GKE, AlloyDB, Cloud SQL, Dataproc, and Batch.

What is Covered in this Article:

  • Google Cloud launches Axion, its first custom Arm Neoverse V2-based CPU
  • Benchmark results show 40–212% performance gains over x86 CPUs across varied workloads
  • AI inference, HPC simulations, web services, and databases show strong throughput advantages
  • Customers across multiple industries are already deploying Axion-based C4A VMs
  • New cloud migration resources aim to accelerate enterprise-scale transitions

The News: Google Cloud has unveiled Axion, its first custom-built Arm-based CPU designed in collaboration with Arm and built on the Neoverse V2 platform. Optimized for real-world cloud workloads, Axion is now available in Google Cloud’s C4A VM instances and powers services, including Google Kubernetes Engine, Cloud SQL, AlloyDB, Dataproc, and Batch.

Early benchmarks show Axion achieving substantial performance gains compared to x86-based instances across multiple workload types, including AI inference, HPC simulations, and general-purpose compute. Customers such as Spotify, Databricks, MongoDB, and Palo Alto Networks are seeing measurable scalability, performance, and TCO gains.

Analyst Take: Google Axion enters the market with validated performance gains against x86 incumbents across diverse real-world workloads. By leveraging Arm’s Neoverse V2 cores, Axion targets compute density, throughput, and energy efficiency – characteristics critical to AI, HPC, and cloud-native services. The results from Google and Arm’s testing offer compelling comparisons highlighting Axion’s architectural competitiveness, particularly for enterprises looking to optimize cost and performance simultaneously.

AI Inference Workloads Show Material Throughput Gains

In Retrieval-Augmented Generation (RAG) tasks using Llama 3.1, Axion-based instances demonstrated consistent, repeatable throughput improvements. On text generation and prompt encoding tasks, c4a-standard (48 vCPU) VMs delivered per-vCPU performance gains of 150% or more over Intel Emerald Rapids and AMD Genoa. These tests measured tokens per second at FP32 precision, and the results show that Axion’s performance advantage is not marginal – it is structurally significant for inference-heavy workloads that prioritize throughput at scale.
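As a rough illustration of how the per-vCPU figures above are derived (this is simple normalization arithmetic, not Google’s published benchmarking methodology), the comparison reduces to tokens per second divided by vCPU count. The throughput numbers in the example are hypothetical placeholders:

```python
# Hypothetical per-vCPU throughput comparison for an LLM inference benchmark.
# The tokens/sec figures below are illustrative placeholders, NOT published data.

def per_vcpu_gain(axion_tps: float, axion_vcpus: int,
                  x86_tps: float, x86_vcpus: int) -> float:
    """Return the percentage gain of Axion over an x86 baseline,
    normalized to tokens per second per vCPU."""
    axion_per_vcpu = axion_tps / axion_vcpus
    x86_per_vcpu = x86_tps / x86_vcpus
    return (axion_per_vcpu / x86_per_vcpu - 1.0) * 100.0

# Example: 48-vCPU instances on both sides, Axion at 250 tok/s vs. x86 at 100 tok/s
gain = per_vcpu_gain(250.0, 48, 100.0, 48)
print(f"Axion per-vCPU gain: {gain:.0f}%")  # 150% with these placeholder numbers
```

Normalizing per vCPU matters because instance families differ in core counts; it is what makes cross-architecture comparisons like the ones cited here meaningful.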

HPC Simulations Benefit from Core-Level Efficiency

Crash and impact simulation workloads using Altair OpenRadioss show Axion outperforming x86 CPUs across multiple test models. For instance, simulations involving 10 million 2D+3D elements (Ford Taurus T10M) or vertical impact models (SkyCAB FH Aachen) saw performance gains ranging from +7% to +24% on a per-vCPU basis. Axion’s architecture – particularly its memory bandwidth per vCPU and single-threaded performance – enables it to sustain demanding simulation loads without relying on simultaneous multithreading. This positions Axion as a practical alternative for physics-based simulation workflows common in the automotive and aerospace industries.

Web and Database Services Show Consistent Latency and Throughput Benefits

In general-purpose benchmarks, Axion-based c4a-standard and c4a-highmem VMs showed strong performance gains in NGINX and PostgreSQL workloads. With four vCPUs, Axion improved NGINX request throughput by +40% over Intel and +194% over AMD. For PostgreSQL, running on 8 vCPUs with 256 clients, Axion showed +40–46% gains in orders per minute. These figures suggest Axion is well-suited for transaction-heavy applications and services requiring consistent latency under load, including SaaS frontends, e-commerce platforms, and microservices-based databases.

Ecosystem Readiness and Migration Infrastructure Are Expanding

Google and Arm have launched a Cloud Migration Resource Hub to support the adoption of Axion-based C4A VMs. With over 100 curated learning paths and an expanding ISV list—including IBM Instana, Couchbase, Verve, and Applause—the ecosystem provides early guidance on compatibility and performance tuning. Titanium, the microcontroller-based offload system underpinning each Axion instance, supports lifecycle management and infrastructure resilience. While the toolchain and migration documentation are in place, long-term adoption will depend on sustained support across open-source and enterprise software vendors.

What to Watch:

  • Developers must ensure workload compatibility when migrating from x86 to Arm-based Axion
  • Enterprise buyers will evaluate how Axion’s performance translates into real-world cost and latency gains
  • Competing vendors like AMD and Intel may respond with optimizations to retain market share in HPC and AI
  • Success depends on Axion sustaining momentum across diverse workloads beyond early adopters
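On the first point above – workload compatibility – migrations from x86 to Arm typically begin by gating builds and deployments on the target CPU architecture, since native binaries and container images must be rebuilt for arm64. A minimal sketch of such a check (a hypothetical helper, not a Google-provided tool):

```python
# Minimal architecture check a migration script might run before deploying
# an arm64-only or x86_64-only artifact. Hypothetical helper, not a Google tool.
import platform

ARM64_ALIASES = {"aarch64", "arm64"}   # Linux reports aarch64; macOS reports arm64
X86_64_ALIASES = {"x86_64", "amd64"}

def target_arch() -> str:
    """Classify the local machine as 'arm64', 'x86_64', or 'unknown'."""
    m = platform.machine().lower()
    if m in ARM64_ALIASES:
        return "arm64"
    if m in X86_64_ALIASES:
        return "x86_64"
    return "unknown"

print(f"Detected architecture: {target_arch()}")
```

In practice, teams pair a check like this with multi-architecture container builds so the same deployment pipeline can serve both x86 and Axion-based C4A instances.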

See the full blog post by Arm’s Bhumik Patel on Google Axion and its cloud impact.

Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.

Other insights from The Futurum Group:

Arm Reports Record Q3 Revenue as Armv9 and CSS Adoption Accelerates

Arm Team-Up Aims to Grow Revenues from Software-Defined Vehicles

How Arm is Powering the Next Generation of AI-Enabled Vehicles – Six Five On The Road at CES 2025

Author Information

Richard Gordon

Richard is a sought-after technology industry analyst, both as a trusted advisor to clients and as an expert commentator speaking at industry events and appearing live on TV networks such as CNBC.
