Analyst(s): Tom Hollingsworth
Publication Date: April 28, 2026
AI networking is pushing the boundaries of data center resources. To provide more resilience and work within constraints such as power and cooling, networking teams must look to scale-across designs that tie buildings and sites together in the same network fabric. Future planning must include this pillar of infrastructure design to keep pace with developments in other areas of AI build-outs.
Key Points:
- Limited power and cooling availability accounts for 22.5% of reported constraints on AI data center build-outs.
- Traditional InfiniBand transport architecture is designed for single-site deployments.
- Data center operators must balance constraints with the capabilities of infrastructure to meet rising demand.
Overview:
AI data centers are facing power and cooling constraints. A total of 22.5% of companies report that they cannot build out fast enough due to a lack of these resources. Companies without the capacity to bring new GPUs online are concerned about capital sitting idle in unopened boxes. Scale-across networking designs offer a path forward for operators that need to add infrastructure but worry about geographic challenges.
InfiniBand and Ethernet: InfiniBand is the dominant interconnect for GPU networking today, with 46.6% of the market, but it was built for single data center designs. Ethernet networking for AI data centers gives operators a way to build scale-across networks that tie buildings together on a campus or across a wider area.
Optical Networking Evolution: Current optical networking modules are power-hungry and rob data centers of critical power and cooling capacity. Newer technologies such as co-packaged optics (CPO) and linear pluggable optics (LPO) offer increased performance with lower power draw.
Clouds Need Scale Too: The scale-across dilemma is not limited to customers with on-premises workloads. Cloud providers apply the same concepts in their availability zone offerings. Providers such as Equinix see scale-across networking as a way to build out more capacity in the near term while capturing business from customers that want to distribute workloads across multiple locations for performance and stability.
Conclusion:
Scale-across networking is the key to quickly bringing new GPU resources online even when data center infrastructure is constrained at a single site. Proper network engineering frees up power and cooling capacity, allowing for higher resource utilization. Adopting new technologies such as AI Ethernet networking and evolving optics means organizations can use that spare capacity to drive AI workloads that produce value for everyone.
The full report, “Scale-Across AI Networking: The Third Dimension of AI Infrastructure Design,” is available via subscription to Futurum Intelligence’s IQ service; contact Futurum for inquiry and access.
About Futurum Intelligence for Market Leaders
Futurum Intelligence’s IQ service provides actionable insight from analysts, reports, and interactive visualization datasets, helping leaders drive their organizations through transformation and business growth. Subscribers can log into the platform at https://app.futurumgroup.com/, and non-subscribers can find additional information at Futurum Intelligence.
Follow news and updates from Futurum on X and LinkedIn using #Futurum. Visit the Futurum Newsroom for more information and insights.
