Analyst(s): Nick Patience, Keith Kirkpatrick, Mitch Ashley, Alex Smith, Keith Townsend
Publication Date: April 14, 2025
What is Covered in this Article:
- Google introduced the Agent2Agent (A2A) protocol, a vendor-agnostic standard for multi-agent communication supported by over 50 partners, enabling secure collaboration across diverse platforms
- New application-centric cloud tools, including App Hub, Application Design Center, and Cloud Hub, provide comprehensive visibility and management of service dependencies
- The Ironwood TPU, Google’s seventh-generation tensor processor, delivers 42.5 exaflops of computing power optimized for AI inference
- Google Gemini is coming to your premises: Google Distributed Cloud for on-premises deployments, including air-gapped environments for industries with sensitive data requirements
- Growth of the Google Cloud Marketplace & new dev tools: The AI Agent Development Kit (ADK) and Firebase Studio previews offer frameworks for building multi-agent systems and agentic application development
The News: Google’s Next 2025 conference delivered a substantial array of AI and cloud announcements that aim to reshape enterprise technology stacks. Google launched the event at the Las Vegas Sphere with a powerful demonstration of generative AI, in which Google and Warner Bros. used AI to enhance The Wizard of Oz for the Sphere’s massive screen. More broadly applicable announcements included the Agent2Agent protocol alongside application management tools, development frameworks, and next-generation hardware – all designed to expand Google’s footprint in the competitive AI landscape. While the ambitious roadmap is packed with promising capabilities, many offerings remain in preview as Google works to transform its technical innovations into must-have business solutions. The success of these initiatives will ultimately depend on how effectively they deliver the practical AI capabilities that organizations increasingly demand.
The event was held against the backdrop of multiple tariff announcements (and retreats from them) by the US administration, which caused considerable market volatility. Tariffs are relevant to Google’s global infrastructure strategy because they could disrupt the hardware supply chain for its data centers and chips. Meanwhile, Google’s customers outside the US – and not just within the EU – are now very interested in data sovereignty and regional compliance amid trade tensions, so Google’s pivot to supporting on-premises and multi-cloud deployments could be seen as a hedge against potential regulatory or trade barriers that might limit cross-border data flows.
Agents Unleashed
Google made several announcements that will impact the company’s applications and overall enterprise software. The company announced its new Agent2Agent (A2A) protocol, an open standard that enables AI agents to communicate regardless of the framework or vendor platform on which they are built. A2A launches with support from more than 50 partners, including large SaaS vendors such as Salesforce, SAP, and ServiceNow, as well as technology partners and integrators.
Ultimately, AI agents will only deliver outsized benefits when they can handle complex, multi-step, and cross-organizational workflows without human intervention. Creating customized integrations for every potential application, data source, or agentic framework would be too time-consuming and expensive for any vendor or end-user organization to undertake.
The A2A protocol is designed to allow AI agents to communicate with each other, securely exchange information, and coordinate actions on top of various enterprise platforms or applications with little or no custom integration work required. As agents are increasingly used to handle complex workflows that incorporate real-time data and dependencies across organizational boundaries, the value of the A2A protocol will likely become even more apparent. You can read more about Google’s database announcements as they relate to agentic AI here.
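As a rough illustration of the pattern A2A standardizes, the public draft of the protocol has agents advertise their capabilities in a JSON “agent card” that other agents can discover and use to route tasks. The sketch below is illustrative only: the field names follow the published draft loosely, and the agent, endpoint, and skill names are hypothetical.

```python
import json

# Illustrative "agent card": a JSON document an A2A agent publishes so that
# agents built on other frameworks or vendor platforms can discover what it
# does and how to reach it. Field names are loosely modeled on the public
# draft and are not authoritative; the agent and endpoint are hypothetical.
agent_card = {
    "name": "expense-approval-agent",
    "description": "Approves or escalates expense reports",
    "url": "https://agents.example.com/expense",   # hypothetical endpoint
    "capabilities": {"streaming": True},
    "skills": [
        {"id": "approve_expense",
         "description": "Review an expense report and approve or escalate"}
    ],
}

# A client agent would fetch the card, pick a skill, and send a task.
# Here we just build a minimal task request to show the shape of the exchange.
task_request = {
    "taskId": "task-001",
    "skillId": agent_card["skills"][0]["id"],
    "message": {
        "role": "user",
        "parts": [{"text": "Expense report #4521, $312 travel"}],
    },
}

print(json.dumps(task_request, indent=2))
```

The point is not the specific fields but the division of labor: discovery and task exchange happen over a shared wire format, so neither side needs a custom integration for the other’s agent framework.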
Applications
Google also highlighted enhancements to its Customer Engagement Suite, which includes a contact-center-as-a-service offering (based on what was called Contact Center AI) and conversational AI solutions designed to enhance human-agent interactions. The company announced a new console for the Customer Engagement Suite that lets users quickly create self-service AI agents: users describe in natural language how the agent should resolve a query and point it to relevant knowledge and data sources, which reduces hallucination and improves response relevance and accuracy.
Perhaps most interestingly, Google Customer Engagement Suite agents will be able to leverage the latest Gemini image, voice, audio, and video models, which enable greater levels of engagement and functionality for both self-service use cases and human-assist scenarios. According to Google product executives, the goal is to leverage organizational data to create the most timely and personalized experiences for customers while closely mimicking the natural flow of human-to-human conversations. By creating a more natural agentic experience for customers, Google says, it frees humans to focus on delivering the best experiences possible in situations where human interaction is most valued. These enhancements should help organizations meet efficiency and productivity goals while improving the overall end-customer experience.
Chips and Infrastructure
Google announced Ironwood, its seventh-generation Tensor Processing Unit (TPU), specifically engineered for the demands of AI inference. Described as Google’s most powerful and energy-efficient TPU yet, Ironwood aims to power the next wave of AI applications in which agents proactively deliver insights. Ironwood scales up to 9,216 chips per pod, delivering 42.5 exaflops of computing power. Each chip carries 192 GB of High Bandwidth Memory (HBM), and peak FLOPs per chip are more than an order of magnitude greater than those of its sixth-generation predecessor. Google is pitching Ironwood as the first TPU for the ‘age of inference,’ which mainly amounts to better power efficiency and networking throughput. Improvements to Google’s Hypercomputer stack include the introduction of Pathways – the distributed runtime developed by Google DeepMind for its own training and inference infrastructure – on Google Cloud for the first time.
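The pod-level figure is consistent with straightforward multiplication. Assuming a per-chip peak of roughly 4,614 teraflops – a number taken from Google’s launch materials and treated here as approximate – 9,216 chips land almost exactly on the quoted 42.5 exaflops:

```python
# Sanity-check the pod-scale figure from per-chip numbers.
# The per-chip peak (~4,614 TFLOPs) is an assumption taken from Google's
# launch materials; treat it as approximate.
chips_per_pod = 9_216
per_chip_flops = 4_614e12               # ~4,614 teraflops per chip
pod_flops = chips_per_pod * per_chip_flops

print(f"{pod_flops / 1e18:.1f} exaflops")  # → 42.5 exaflops
```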
Another first was customer access to Google’s backbone network via Google Cloud WAN. The network comprises more than two million miles of fiber, 202 points of presence, and 33 subsea cables. Google’s own testing showed up to a 40% reduction in latency compared with the public internet, and cost savings of the same percentage over a commercial WAN alternative.
Google’s Gemini for Google Distributed Cloud (GDC) brings AI capabilities to customer-controlled environments, supporting cloud-connected and air-gapped deployments. This offering targets industries with sensitive data or real-time processing needs like government, healthcare, finance, biotech, and retail. For organizations with strict data residency requirements, Gemini on GDC can be deployed through pre-engineered appliances or on customer-approved hardware, typically Nvidia systems. The air-gapped setup runs Google’s control plane, models, and edge services without external connections while still providing access to Google Cloud-native development tools within the cluster.
By offering the full Gemini model for on-premises deployment, the company appears to be leading the industry, as other hyperscalers currently tend to focus on hybrid or edge-based approaches for AI deployments. Performance benefits include reduced latency for real-time applications and local inferencing capabilities, eliminating the need to transfer large data volumes to the cloud. Deployment options range from small servers to full data center racks, with pricing based on a predictable subscription model rather than usage-based billing.
Application-Centric Cloud
Google announced a suite of tools to streamline application development and infrastructure management. Google App Hub provides an application-centric view of all the services and workloads an application depends on. Organizations can see, across Google Cloud, which services support their applications and workloads in production, which of those are mission-critical, and what infrastructure underpins each application.
Application Design Center provides a visual canvas for creating and managing application templates and configuring them for deployment; templates are automatically registered in App Hub. Cloud Hub provides health monitoring and troubleshooting, resource optimization, maintenance, quotas and reservations, and support cases.
Google introduced two new observability features: Application Monitoring, which automatically tags telemetry (logs, metrics, alerts, traces, and dashboards), and Cost Explorer, which provides granular visibility into application costs and utilization metrics. All of the application-centric cloud services mentioned here were introduced in public preview.
AI In Development
Also launched at Next 2025, the AI Agent Development Kit (ADK) preview is a new open-source framework for building multi-agent systems with hierarchical and parallel workflows, including planning capabilities that break work down into smaller sub-tasks. Using ADK, developers can define AI agents’ behavior, orchestration, and tool usage. ADK will also manage short-term state and memory across multiple sessions. Complementing ADK, Google announced Agent Garden, a repository of pre-built agents, tool libraries, and integration connectors.
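ADK was in preview at the time of writing, so rather than reproduce its API, the sketch below shows the general pattern the kit targets: a planner agent decomposing a task into sub-tasks, dispatching them to worker agents, and retaining results in short-term memory. All class and method names here are illustrative stand-ins, not ADK’s actual classes.

```python
from dataclasses import dataclass, field

@dataclass
class WorkerAgent:
    """A worker that handles one kind of sub-task. Illustrative only;
    a real agent would call a model and tools here."""
    name: str

    def run(self, subtask: str) -> str:
        return f"{self.name} completed: {subtask}"

@dataclass
class PlannerAgent:
    """Breaks a task into sub-tasks and delegates them hierarchically."""
    workers: dict = field(default_factory=dict)
    memory: list = field(default_factory=list)   # short-term state across steps

    def plan(self, task: str) -> list:
        # Trivial stand-in for model-driven planning.
        return [f"research: {task}", f"draft: {task}"]

    def run(self, task: str) -> list:
        results = []
        for subtask in self.plan(task):
            kind = subtask.split(":")[0]
            result = self.workers[kind].run(subtask)
            self.memory.append(result)   # an ADK-style kit persists this state
            results.append(result)
        return results

planner = PlannerAgent(workers={
    "research": WorkerAgent("researcher"),
    "draft": WorkerAgent("writer"),
})
outputs = planner.run("Q2 earnings summary")
print(outputs)
```

The framework’s value is in replacing the hard-coded `plan` and dispatch logic above with model-driven planning, orchestration primitives, and managed memory.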
The Google Firebase Studio preview provides an agentic approach to prototyping and building production cloud applications. From natural language input or pre-defined templates, Firebase Studio performs the multi-step process of generating the full code for an application, creating and running tests, and debugging and fixing errors until it reaches a fully working application. Agentically created applications can be taken from prototype to production, including frontend, backend, web, and mobile code. Firebase App Hosting and Firebase Data Connect reached general availability. Also announced in preview is Google’s App Testing agent, which uses Gemini to generate, manage, and execute test cases from natural language input and codebase analysis.
Model Rivalry
Google’s Gemini Pro 2.5 model was announced a couple of weeks before Next and is available in public preview on Vertex AI. Its performance places it just slightly ahead of Meta’s Llama 4 model in the respected Chatbot Arena benchmark. Incidentally, Meta chose the Saturday before Next to make its own announcement, and Google had Llama 4 available in its Model Garden later the same day. Google reports 40% growth in Gemini model usage on Vertex AI since last year’s Next. Although its coding capabilities have grabbed headlines, Gemini’s native multimodality was highlighted most at Next. Google executives we spoke with said maintaining Gemini Pro 2.5’s competitive pricing against commercial rivals is important, and its 2 million token-long context window remains a differentiator.
A growing Google Cloud Ecosystem
For any major technology vendor, it is not just a case of what innovations it brings to market but also how it expands its relationships with the wider technology community. The previously mentioned Agent2Agent protocol, for example, is one of the key launches from the ecosystem team at Google Cloud. Still, the primary vehicle for Google’s engagement with the ecosystem to date is the Google Cloud Marketplace, which has become a multi-billion-dollar business in terms of third-party revenue flowing through it. The largest partner on the platform, Palo Alto Networks, announced at Google Next that it has surpassed $1.5 billion in cumulative revenue through the Google Cloud Marketplace, making it one of the flagship ISV partners. Overall, the marketplace saw over 170% growth in gross transactions and over 2,000 new product listings. Google Cloud continues to invest in the offering, including a recently launched fee structure that decreases with volume, replacing the previous 3% flat fee.
Google also highlighted its recently launched AI Agent Marketplace, a standalone category within the wider Marketplace offering that currently carries over 130 AI agent listings. Interestingly, Google Cloud has seen greater participation in this space from the services community (particularly global systems integrators) than from traditional ISVs – an early signal that this community is rapidly exploring ways to stay top of mind as service offerings start to be augmented by AI agents.
Google Cloud Next 2025: The Yellow Brick Road to AI Transformation
Analyst Take: In its second Las Vegas iteration, Google Cloud Next crammed 32,000 attendees into the Mandalay Bay convention center. The main themes laid out by Google Cloud CEO Thomas Kurian were an AI-optimized platform, open and multi-cloud support, and interoperability. That is a shift from the ‘put Google at the heart of your business, and we’ll transform it’ story of recent years, but we feel it better reflects reality, especially the multi-cloud part. Add in the announcement that Google Gemini models are available on-premises (see infrastructure section), and you have a company capable of delivering AI solutions wherever they are needed.
AI agents were, predictably enough, to the fore. Google called Agentspace its fastest-growing enterprise product ever, with hundreds of thousands of user licenses already sold, even though it was only announced in December 2024 and is not yet generally available. The Agent2Agent protocol (see Agents section) is a laudable aim, but AWS and Microsoft were absent at the time of the announcement, and launching it at Google Cloud’s own conference means it will be seen as a Google initiative, no matter how open it turns out to be.
Google is simplifying and presenting some of its offerings with an application-centric approach, as the Application Design Center, App Hub, and Cloud Hub announcements made clear. The approach includes predefined templates to help users construct and configure the technology platform for development and operations. We have seen this approach from hyperscalers before, such as AWS’ CloudFormation templates. While templates may provide a guide to which services to use and how to configure them, they still do not address the overall complexity and the long list of services customers must comprehend to meet their needs.
L. Frank Baum’s original story, The Wizard of Oz, published in 1900 during William McKinley’s presidency, served as an allegory for the populist movements and gold-standard debates of that era. Its technological reimagining arrives during another presidency marked by populist rhetoric and protectionist trade policies – with President Trump’s admiration for McKinley and similar affinity for tariffs suggesting that America’s economic debates may be traveling full circle, even as the yellow brick road now leads to a 360-degree digital spectacle.
What to Watch:
- Despite AWS and Microsoft’s absence from the Agent2Agent protocol announcement, watch whether this Google initiative can truly establish industry-wide adoption beyond its initial 50 partners. On the flip side, SaaS providers like Salesforce, ServiceNow, and SAP joining the A2A protocol creates a precedent for how enterprise applications will integrate with AI agents.
- Google Agentspace’s rapid growth bears monitoring as it approaches general availability, potentially validating market demand for multi-agent collaboration tools.
- The on-premises Gemini deployment could significantly impact competing hyperscalers’ strategies, potentially forcing similar offerings from AWS and Azure.
- Application-centric tools may simplify Google Cloud adoption, but observe whether they meaningfully reduce the complexity barrier for new customers. Cloud-native application management tools from VMware (part of Broadcom) and Red Hat may need to evolve to compete with Google’s application-centric approach, especially if Google can successfully reduce the complexity of managing multi-cloud environments.
- Traditional Global System Integrators’ strong participation in Google’s AI Agent Marketplace signals a potential shift in service delivery models. Watch how firms like Accenture, Deloitte, and others evolve their offerings as AI agents begin handling workflows previously requiring custom integration work.
You can read all the announcements from Google Cloud Next 2025 on Google Cloud’s blog.
Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.
Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.
Other insights from The Futurum Group:
At Google Cloud Next, Google Brings Its Databases to Bear on Agentic AI Opportunity
Why Organizations Are Switching to Google Cloud
Google Cloud’s AI-First Vision: Empowering Businesses for the Generative AI Era
Will Google’s AI Enhancements Help Drive Greater User Adoption?
Image Credit: Google Cloud
Author Information
Nick is VP and Practice Lead for AI at The Futurum Group. Nick is a thought leader on the development, deployment and adoption of AI - an area he has been researching for 25 years. Prior to Futurum, Nick was a Managing Analyst with S&P Global Market Intelligence, with responsibility for 451 Research’s coverage of Data, AI, Analytics, Information Security and Risk. Nick became part of S&P Global through its 2019 acquisition of 451 Research, a pioneering analyst firm Nick co-founded in 1999. He is a sought-after speaker and advisor, known for his expertise in the drivers of AI adoption, industry use cases, and the infrastructure behind its development and deployment. Nick also spent three years as a product marketing lead at Recommind (now part of OpenText), a machine learning-driven eDiscovery software company. Nick is based in London.
Keith has over 25 years of experience in research, marketing, and consulting-based fields.
He has authored in-depth reports and market forecast studies covering artificial intelligence, biometrics, data analytics, robotics, high performance computing, and quantum computing, with a specific focus on the use of these technologies within large enterprise organizations and SMBs. He has also established strong working relationships with the international technology vendor community and is a frequent speaker at industry conferences and events.
In his career as a financial and technology journalist he has written for national and trade publications, including BusinessWeek, CNBC.com, Investment Dealers’ Digest, The Red Herring, The Communications of the ACM, and Mobile Computing & Communications, among others.
He is a member of the Association of Independent Information Professionals (AIIP).
Keith holds dual Bachelor of Arts degrees in Magazine Journalism and Sociology from Syracuse University.
Mitch Ashley is VP and Practice Lead of DevOps and Application Development for The Futurum Group. Mitch has more than 30 years of experience as an entrepreneur, industry analyst, product development leader, and IT leader, with expertise in software engineering, cybersecurity, DevOps, DevSecOps, cloud, and AI. As an entrepreneur, CTO, CIO, and head of engineering, Mitch led the creation of award-winning cybersecurity products utilized in the private and public sectors, including the U.S. Department of Defense and all military branches. Mitch also led managed PKI services for the broadband, Wi-Fi, IoT, energy management, and 5G industries, product certification test labs, an online SaaS (93m transactions annually), and the development of video-on-demand and Internet cable services and a national broadband network.
Mitch shares his experiences as an analyst, keynote and conference speaker, panelist, host, moderator, and expert interviewer discussing CIO/CTO leadership, product and software development, DevOps, DevSecOps, containerization, container orchestration, AI/ML/GenAI, platform engineering, SRE, and cybersecurity. He publishes his research on FuturumGroup.com and TechstrongResearch.com/resources. He hosts multiple award-winning video and podcast series, including DevOps Unbound, CISO Talk, and Techstrong Gang.
Alex is Vice President & Practice Lead, Channels & Go-to-Market at the Futurum Group. He is responsible for establishing and maintaining the Channels Research program as part of the overall Futurum GTM and Channels Practice. This includes overseeing the channel data rollout in the Futurum Intelligence Platform, primary research activities such as research boards and surveys, delivering thought-leading research reports, and advising clients on their indirect go-to-market strategies. Alex also supports the overall operations of the Futurum Research Business Unit, including P&L segmentation, sales and marketing alignment, and budget planning.
Prior to joining Futurum, Alex was VP of Channels & Enterprise Research at Canalys where he led a multi-million dollar research organization with more than 20 analysts. He played an integral role in helping the Canalys research organization migrate into Omdia after having been acquired in 2023. He is an accomplished research leader, as well as an expert in indirect go-to-market strategies. He has delivered numerous keynotes at partner-facing conferences.
Alex is based in Portland, Oregon, but has lived in numerous places, including California, Canada, Saudi Arabia, Thailand, and the UK. He has a Bachelor in Commerce and Finance Major from Dalhousie University, Halifax Canada.
Keith Townsend is a technology management consultant with more than 20 years of related experience in designing, implementing, and managing data center technologies. His areas of expertise include virtualization, networking, and storage solutions for Fortune 500 organizations. He holds a BA in computing and an MS in information technology from DePaul University. He is the President of the CTO Advisor, part of The Futurum Group.