Google Delivers Titanium Hardware Offload for Efficient Performance

The News: During the Google Cloud Platform presentations at Cloud Field Day 20, Jeff Welsh presented capabilities for running enterprise workloads on GCP and the capabilities of the Titanium architecture to deliver hardware offload for efficient performance. Watch the presentation here.

Analyst Take: The Google Cloud Platform presentations at Cloud Field Day 20 reinforced my understanding that hyperscale cloud platforms use specialized hardware for specialized tasks. There was a time when on-premises vendors said that software running on identical, commodity x86 virtualization hosts was the only conceivable way to operate business IT. The reality is that specialized hardware has always been used both in cloud platforms and on-premises.

Jeff Welsh presented a collection of capabilities for running enterprise applications on the Google Cloud Platform. The Titanium element caught my attention as it provides hardware offload for efficient performance. The visible part of Titanium is the Infrastructure Processing Unit (IPU), an add-in card co-developed with Intel that resides in some of the newest VM families on GCP. Jeff talked about Titanium as more than the IPU; Titanium is an updated technical infrastructure that underpins GCP, with hardware offloading for efficient performance central to its innovation. Titanium's ability to offload tasks to Borg, Google's internal scheduler, particularly intrigues me.

CPUs for Business, Offload Infrastructure

Where does the offloading end? Why don't we offload everything? This is where the difference between business IT and cloud-scale IT is apparent. Unique business code differentiates businesses and usually requires the flexibility of a general-purpose CPU. Cloud platforms are built to let tenants focus on the code that is unique to their business by delivering common infrastructure components. The cloud platform is shared by all tenants and operates at a vastly larger scale than any one tenant business. Cloud platforms also have full control of their platform and its code but little control of the tenant's business code. Hardware offload provides efficient performance in the cloud infrastructure, while general-purpose CPUs provide flexibility for unique business applications. Importantly, offloading the infrastructure tasks leaves more CPU performance for the business applications. The result is better application performance without any effort by the tenant, a significant benefit for bringing enterprise applications to the Google Cloud Platform.

Titanium to Offload More

The idea of hardware offload is not new; TCP offload engines have existed in network cards since the 1990s. The capabilities of offload cards have steadily increased and, in the last few years, have seen huge development. The Titanium IPU has all the NIC offloading features, plus security offloads, including a root of trust for the system boot. Storage offloading is also present, delivering up to 650K IOPS and eliminating IO wait times. IO wait time is often a silent killer of application performance, leaving a CPU idle while it awaits a response from storage. Offloading the storage function frees the CPU from waiting, allowing more time to run unique business code. I mentioned before the ability of Titanium to offload compute tasks to Google's Borg scheduler. Jeff explained the current state, where Borg is used as part of the storage offload. I am interested in a future where Titanium and Borg are used to offload queries to data services such as Bigtable, further reducing the CPU load for complex applications.
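The IO-wait effect described above can be illustrated with a toy sketch. This is not Titanium or GCP code; it simply simulates a blocking storage request (a `time.sleep` stand-in) and compute-bound business code, then compares running them back-to-back versus overlapped. On GCP, the overlap is achieved by the IPU handling storage in hardware rather than by a thread, but the arithmetic is the same: work that proceeds while IO is in flight returns CPU time to the application.

```python
import threading
import time

def fake_storage_read(delay=0.2):
    """Simulate a blocking storage request (stand-in for real disk IO)."""
    time.sleep(delay)  # in the synchronous case, the CPU sits idle here

def business_work(iterations=500_000):
    """Stand-in for the tenant's compute-bound business code."""
    total = 0
    for i in range(iterations):
        total += i * i
    return total

# Synchronous: the CPU waits for storage before doing business work.
start = time.perf_counter()
fake_storage_read()
business_work()
sync_elapsed = time.perf_counter() - start

# Overlapped: the storage request proceeds elsewhere (here, a thread;
# on Titanium, the IPU) while the CPU runs business work.
start = time.perf_counter()
reader = threading.Thread(target=fake_storage_read)
reader.start()
business_work()
reader.join()
overlap_elapsed = time.perf_counter() - start

print(f"synchronous: {sync_elapsed:.3f}s, overlapped: {overlap_elapsed:.3f}s")
```

The overlapped run finishes in roughly the time of the slower of the two tasks rather than their sum, which is the benefit the tenant receives without any code changes when the platform does the offloading.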

Where can I get Titanium?

The Titanium-enabled functions require an IPU in the physical hosts, so they cannot simply be switched on or off for existing VMs; they require new servers and therefore new machine shapes. Jeff discussed the new C3 metal shapes as general-purpose bare metal with hardware offload for efficient performance. For VM options, there are C4 high-performance and N4 cost/performance-optimized shapes, which again include hardware offload for efficient performance. I expect most new shapes to be Titanium-enabled over time, as the benefits to both GCP and tenants are enormous.

Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.

Other Insights from The Futurum Group:

Broadcom and Google Cloud’s FinOpsX 2024 Announcements: A Deep Dive

Google Cloud AI Impact to Application Modernization | DevOps Dialogues: Insights & Innovations

Google I/O 2024

Author Information

Alastair has made a twenty-year career out of helping people understand complex IT infrastructure and how to build solutions that fulfil business needs. Much of his career has included teaching official training courses for vendors, including HPE, VMware, and AWS. Alastair has written hundreds of analyst articles and papers exploring products and topics around on-premises infrastructure and virtualization and getting the most out of public cloud and hybrid infrastructure. Alastair has also been involved in community-driven, practitioner-led education through the vBrownBag podcast and the vBrownBag TechTalks.
