Futurum Research 2025

Key Issues & Predictions

Welcome to
Futurum’s 2025 Key Issues & Predictions Report

As we reach the midpoint of 2025, one thing remains clear: disruption hasn’t slowed; it’s accelerating. Organizations that anticipated the pace of change earlier this year are already reaping the rewards. But for those still catching up, the time to accelerate your efforts is now.

At Futurum Research, we aim to help leaders decode complexity, navigate volatility, and turn emerging trends into tangible advantages. In the first half of this year, we’ve seen our predictions begin to play out in real time – from the rise of agentic AI transforming digital labor to cloud marketplaces redefining go-to-market strategies, and AI-powered PCs reshaping workplace productivity.

The next six months will demand more agility, boldness, and strategic clarity. Business models across many industries are being tested, customer expectations continue to climb, and the pressure to align technology, talent, and transformation has never been greater.

This isn’t just about seeing the future; it’s about looking for the Signals to help you build a sustainable competitive advantage.

I encourage you to revisit the updated predictions outlined in this report and ask:

What’s shifted, and how will that impact my current growth strategy? What’s emerging faster than expected, and what impact is that having? And what investments are showing tangible results?

As we move into the second half of the year, now is the time to recommit to growth, innovation, and building experiences that elevate both customer success and employee engagement.

Here’s to shaping what’s next, together.

Tiffani Bova

Chief Strategy and Research Officer
The Futurum Group

AI Platforms: Agentic AI’s Takeover of Enterprise Workflows

Prediction:

By the end of 2025, the impact of agentic AI on enterprise software will be marked by strategic adoption rather than just widespread mediation. At least 15-20% of routine tasks within specific corporate functions – notably HR and customer service – will be initiated and managed by AI agents, with software firms leading the way in adoption. This will catalyze a significant evolution in software licensing, with a clear trend toward outcome-based and consumption-driven pricing models over traditional per-seat licenses.

“The conversation for CIOs has shifted from ‘if’ to ‘how.’ In 2025, we see a strategic imperative to build an agent-based AI strategy. The focus is less on the novelty of a single agent performing a task and more on creating a cohesive fabric of specialized agents that can automate entire business functions. This is the year enterprise agentic AI moves from the lab to the core of strategic IT planning.”

Nick Patience

Vice President & Practice Lead
AI Platforms

Predictions:

The acceleration of this trend has become more nuanced and is now driven by three evolved factors:

  • Maturation of Foundational Models: The first half of 2025 saw the launch and enterprise-grade hardening of next-generation large language models (LLMs, including OpenAI’s GPT-4.5 and Google’s Gemini 2.5). These models possess vastly improved multi-step reasoning and reliability, moving agents from promising prototypes to dependable digital coworkers capable of handling complex, multi-system business processes with greater autonomy.
  • From API Availability to AI-Centric Integration: The critical mass of available APIs has been reached. The new focus for H2 2025 is on AI-centric integration frameworks and a move toward more flexible, AI-integrated APIs. This enables agents to connect to various systems and do so with a deeper understanding of the underlying data and business context.
  • SaaS Consolidation as an AI Catalyst: While typical large companies still juggle more than a hundred SaaS applications, a clear trend of consolidation has emerged. This is not a headwind for agentic AI, but a catalyst. CFOs and CIOs are now prioritizing platforms with strong, embedded AI and agentic capabilities, using AI as a key criterion to decide which applications to keep, ultimately creating a more streamlined, agent-ready software stack.

These factors are already showing up in concrete use cases:

  • Cross-Platform Business Process Automation: An HR agent now executes the majority of the employee onboarding process autonomously. It not only creates accounts and schedules orientations but also intelligently provisions role-specific software licenses by interfacing with procurement systems, cross-referencing security clearances with IT compliance platforms, and confirming task completion, only flagging exceptions for human review.
  • Intelligent Resource Optimization: An agent continuously monitors enterprise resource planning (ERP) and supply chain platforms, demonstrating tangible ROI. It now autonomously adjusts inventory parameters based on real-time sales data from e-commerce platforms. It even predicts potential disruptions by analyzing logistics provider data, moving beyond simple reordering to proactive supply chain management.
  • Integrated Customer Experience Management: A customer service agent is now the primary orchestrator for issue resolution. It autonomously identifies high-value customers with urgent issues via CRM data, retrieves their interaction history from support platforms, analyzes sentiment from social media, and presents a complete, actionable brief to a human agent, often with a recommended solution already drafted.
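The exception-flagging pattern running through these use cases can be sketched in a few lines; every step, function, and system name below is a hypothetical illustration of the control flow, not a real product API:

```python
# Hypothetical sketch of the onboarding workflow: the agent runs each routine
# step autonomously and flags only exceptions for human review. All step and
# system names are illustrative, not a real product API.

def run_onboarding(employee, systems):
    """Execute onboarding steps; return (completed steps, flagged exceptions)."""
    completed, exceptions = [], []
    for step in ("create_account", "schedule_orientation", "provision_licenses"):
        handler = systems.get(step)
        result = handler(employee) if handler else None
        if result == "ok":
            completed.append(step)
        else:
            # Anything the agent cannot resolve autonomously goes to a human
            exceptions.append((step, result or "no handler"))
    return completed, exceptions

# Example: a compliance check blocks license provisioning, so only that
# step is escalated while the rest completes without human involvement.
systems = {
    "create_account": lambda e: "ok",
    "schedule_orientation": lambda e: "ok",
    "provision_licenses": lambda e: "security clearance pending",
}
done, flagged = run_onboarding({"name": "A. Example"}, systems)
```

The point of the sketch is the shape of the workflow: routine steps complete without a human in the loop, and the agent surfaces only the exception.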

Cybersecurity & Resilience: Agentic AI Is a Focus for Cybersecurity Teams

Prediction:

As we reach the halfway mark for 2025, our initial prediction about increased efforts around agentic AI in cybersecurity was accurate: we saw and continue to see significant activity around agentic AI in security, both in terms of “securing agentic” and “using agentic for security.” We expect increased adoption of agentic workflows in security operations teams, particularly as large vendors fine-tune their agentic offerings.

“The broad evolution of and interest in agentic AI is extremely important to cybersecurity teams. As a new technology, the adoption of agentic AI across the organization means that security teams must quickly understand the technology, analyze the impact it may have on security posture, determine how to secure it, and implement these changes while supporting innovation and experimentation. That same technology, however, can potentially be a boon to security teams themselves, as they use it selectively to assist with well-defined security tasks.”

Fernando Montenegro

Vice President & Practice Lead
Cybersecurity & Resilience

Predictions:
  • Widespread Popularity Within the Business: The emergence of agentic technology is of great interest to businesses seeking efficiencies across numerous processes, and technology vendors have responded in earnest, with numerous announcements in the past few months.
  • Securing Agentic AI Is a Complex Undertaking: Agentic technology includes several aspects that must be addressed, including code security, identity management, data security, and more. The emergence of protocols such as Model Context Protocol (MCP) and Agent2Agent (A2A) brings new challenges for security teams to tackle.
  • Technology Can Be Applied to Security Use Cases: On the flip side, agentic technology has increasingly been seen as well-suited for well-defined use cases in cybersecurity, including scenarios in application security, security operations, and more.
  • Enrichment for Security Events and Alerts: Agents can be beneficial in aggregating information from multiple sources based on a deeper understanding of the underlying content. This can be applied to time-sensitive investigations where security analysts must understand the context of possible incidents. Releases from security operations vendors, including but not limited to CrowdStrike, SentinelOne, Microsoft, Cisco, Palo Alto Networks, Trend Micro, and others, evidence this.
  • Better Event Triage: Agentic AI is proving particularly useful in scenarios where the domain model (the description of the context the agent needs to understand) is relatively well-defined. This works well in many security operations scenarios, where automated triage can help reduce the analyst workload. A good example is Microsoft’s release of a suite of agents that, among other things, automatically handle lower-level email alerts.
  • Scale Up Threat Hunting: Many security teams proactively look for signs of potential intrusion through threat hunting, but this can be a time-consuming activity requiring deep domain knowledge. Agentic technology can potentially assist here by offloading well-defined tasks from the human threat hunters. The recent advancements with the use of MCP servers connecting agents to existing security tools can be particularly useful here.
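As an illustration of the well-defined triage scenarios described above, the routing logic can be sketched as a simple scoring function; the signals, weights, and threshold below are invented for this example:

```python
# Illustrative triage sketch: when the domain model is well-defined, alerts
# can be scored on a few known signals and low-severity items routed to an
# agent queue. Signals, weights, and the threshold are invented assumptions.

AGENT_THRESHOLD = 50  # alerts scoring below this are handled autonomously

def triage(alert):
    """Score an alert on simple domain signals; route to analyst or agent."""
    score = 0
    score += 40 if alert.get("asset_critical") else 0
    score += 30 if alert.get("known_bad_indicator") else 0
    score += 20 if alert.get("repeat_offender") else 0
    return "analyst" if score >= AGENT_THRESHOLD else "agent"

alerts = [
    {"id": 1, "asset_critical": True, "known_bad_indicator": True},
    {"id": 2, "repeat_offender": True},  # e.g., a routine lower-level email alert
]
routing = {a["id"]: triage(a) for a in alerts}
```

A real deployment would draw these signals from detection platforms rather than hand-coded fields, but the division of labor is the same: the agent absorbs the well-defined, low-severity volume while analysts keep the high-stakes decisions.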

Data Intelligence, Analytics, & Infrastructure: Data as the Gateway to True AI Value

Prediction:

By the end of 2025, the primary bottleneck for scaling enterprise AI will shift from model development to data readiness. In response, most enterprises will focus on deploying an “AI-Powered Data Control Plane” to automate data discovery, preparation, and governance, making it the most critical investment for unlocking value from AI workloads.

“We’re seeing a critical pivot from experimenting with AI to industrializing it, and this has exposed a massive gap in data readiness. The winning strategy in 2025 won’t be about building or integrating the best AI models but creating the best data to feed those models at scale and without risk.”

Brad Shimmin

Vice President & Practice Lead, Data Intelligence, Analytics, & Infrastructure

Predictions:

Emerging enterprise concerns over data quality and the maturation of data engineering platform capabilities drive this trend.

  • Generative AI Paradox Created an Urgent Crisis: Companies have invested in AI initiatives only to find that their underlying data is a liability, riddled with quality issues, bias, and inconsistencies. This has elevated the need for “AI-ready” data from a technical problem to a C-suite imperative.
  • Data Infrastructure Wars Are Over: The pragmatic realization that neither a centralized data fabric nor a decentralized data mesh is a silver bullet has led to complex hybrid environments. This new reality demands an intelligent abstraction layer to orchestrate and govern data across these disparate architectures without disruption.
  • Metadata Management Has Evolved: Metadata management has evolved from a passive cataloging exercise into an active, AI-driven system of governance. Modern data catalogs are no longer just maps of data; they are dynamic engines of meaning that can continuously analyze, classify, and even remediate data, forming the foundational technology for a data control plane (e.g., a semantic layer).

These capabilities are surfacing in practical applications:

  • Automated “AI-Readiness” Assessment and Remediation: A data control plane continuously inventories data assets across the enterprise’s lakes, warehouses, lakehouses, and applications. Leveraging AI, these solutions can identify and score data for quality, bias, and compliance readiness before it is used to train or feed context to generative AI models, triggering cleansing and enrichment pipelines to remediate issues automatically.
  • Dynamic Governance for AI Agents: Autonomous AI agents optimizing workflows, such as a supply chain, require access to real-time data from partner ERPs, logistics platforms, and internal financial systems. Accessed via popular protocols such as Anthropic’s MCP, a data control plane can act as an intelligent gatekeeper, interpreting the agent’s request and dynamically creating a secure and compliant data “product” on the fly with only the necessary attributes, all while enforcing access policies and dissolving the connection once the task is complete.
  • Data Product Lifecycle Automation: A product team wants to launch a new curated dataset on customer behavior as an internal “data product.” The control plane can automate the entire value chain for this product, helping consumers discover and assemble the relevant data points. For example, practitioners can enhance data by leveraging generative AI to create business-friendly descriptions and documentation for complex schemas and data models. They can then automatically apply masking policies to protect personally identifiable information (PII) and finally publish the certified data product to an internal marketplace with defined service level agreements (SLAs).
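A minimal sketch of the automated “AI-readiness” assessment described above might look as follows; the field names, threshold, and scoring rule are illustrative assumptions, not any specific vendor’s method:

```python
# Hedged sketch of an automated "AI-readiness" check: score records for
# completeness and trigger remediation below a threshold. Field names,
# threshold, and the scoring rule are illustrative assumptions.

def readiness_score(rows, required_fields):
    """Fraction of rows with every required field present and non-empty."""
    if not rows:
        return 0.0
    complete = sum(
        all(r.get(f) not in (None, "") for f in required_fields) for r in rows
    )
    return complete / len(rows)

def assess(rows, required_fields, threshold=0.9):
    score = readiness_score(rows, required_fields)
    # Below threshold, a real control plane would kick off cleansing pipelines
    return {"score": score, "action": "pass" if score >= threshold else "remediate"}

records = [
    {"customer_id": "c1", "region": "EU"},
    {"customer_id": "c2", "region": ""},  # incomplete record lowers the score
]
result = assess(records, ["customer_id", "region"])
```

Production systems would add bias and compliance checks alongside completeness, but the gating logic is the same: score first, remediate automatically, and only then let the data feed a model.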

Digital Leadership & CIO: AI’s Growing Pains Force CIOs to Reinvent Cloud, Security, and Operational Strategies

Prediction:

Enterprise IT in 2025 is entering a period of profound reinvention as CIOs push AI beyond isolated functions, wrestle with the immature foundations of agentic AI, and escalate efforts to counter suddenly looming post-quantum threats. Futurum’s Q2 2025 CIO Insights survey reveals 89% of IT leaders driving AI for strategic transformation, 80% elevating quantum-resilient security to board-level priority, and 71% rethinking optimal cloud environments under intensifying AI workload pressures.

“As AI becomes integral to business strategy, CIOs are being forced to reconsider how and where it’s optimal to deploy compute resources. The need for low latency, cost efficiency, and compliance in AI applications is driving a rapid shift toward hybrid and multi-cloud strategies. For IT leaders, this means 2025 will be a pivotal year for a comprehensive realignment of their infrastructure with the realities of the AI era.”

Dion Hinchcliffe

Vice President & Practice Lead
Digital Leadership & CIO

Predictions:

Three primary reasons are driving this change.

  • Legacy Architectures Fall Behind. The rise of generative AI and large-scale machine learning models has introduced unprecedented compute and storage requirements that legacy architectures cannot support.
  • Balancing Cloud Deployments. Organizations are seeking to balance the flexibility of public clouds with the control and cost predictability of private or hybrid cloud environments.
  • Compliance & Data Sovereignty. Increased awareness of data sovereignty and compliance needs is driving CIOs to redesign their cloud strategies with AI in mind.

These pressures translate into several emerging use cases:

  • AI-Optimized Data Centers: Enterprises are deploying on-premises GPU-based architectures to support cost-effective training and inferencing workloads while also maintaining data control.
  • Making AI Affordable: Making AI workloads inexpensive enough to operate, especially for complex knowledge work in price-sensitive industries like healthcare and insurance, so that ROI requirements can be met.
  • Dynamic Cloud Bursting: Leveraging hybrid cloud environments to seamlessly scale AI workloads to public clouds during peak demands.
  • AI-Enhanced Business Resiliency and Disaster Recovery: Implementing AI-driven predictive analytics to optimize failure resistance, operational failover, and recovery processes across multi-cloud architectures.
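The dynamic cloud-bursting pattern above can be sketched as a simple placement policy; the capacity figures and workload names below are hypothetical:

```python
# Sketch of a dynamic cloud-bursting policy: fill on-premises GPU capacity
# first, then burst overflow workloads to public cloud during peak demand.
# Capacity and workload figures are hypothetical.

ON_PREM_GPU_HOURS = 100  # assumed on-prem capacity for this scheduling window

def place_workloads(demands):
    """Assign each workload's GPU-hours on-prem until capacity runs out."""
    remaining = ON_PREM_GPU_HOURS
    placement = {}
    for name, hours in demands:
        if hours <= remaining:
            placement[name] = "on-prem"
            remaining -= hours
        else:
            placement[name] = "public-cloud"  # burst past local capacity
    return placement

plan = place_workloads([("training", 80), ("inference", 30), ("batch-eval", 15)])
```

Real schedulers weigh latency, egress cost, and data-sovereignty constraints rather than raw hours alone, but the hybrid principle is the same: predictable cost on-premises, elastic overflow in the public cloud.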

Ecosystems, Channels, & Marketplaces: Cloud Marketplaces Accelerate Agentic AI Adoption

Prediction:

Cloud marketplaces will become as significant a go-to-market (GTM) channel for Independent Software Vendors (ISVs) as traditional distribution is for commercial hardware. Over $300 billion of committed cloud spending will continue to help fuel this engine.

“Every vendor is trying to figure out their marketplace strategy, which ones to prioritize, how to operationalize it, and how to bring traditional partners on that journey. The most successful vendors in the marketplace will be the ones that understand how to include service delivery partners as part of their marketplace strategy.”

Alex Smith

Vice President & Practice Lead
Ecosystems, Channels, & Marketplaces

Predictions:
  • First, marketplace fees have gradually been coming down. When marketplaces first came on the scene, fees were north of 20%, making them a very expensive proposition. Now, they are at about 3% as standard, and in some cases, as low as 1.5%. At this price point, marketplaces are as cost competitive as traditional distribution channels, and leave more room in the margin stack for ecosystem partners to take part. This fee reduction comes as a result of increased volume in marketplace activity, as well as the underlying goal that hyperscalers have: driving more infrastructure consumption.
  • Second, cloud commits across the major hyperscalers continue to surge. As the cloud becomes increasingly pivotal to enterprises the world over, companies are increasingly entering into long-term contracts with hyperscalers that ensure the best pricing and guaranteed resource availability. As of Q3 2024, cloud committed spending across the leading three hyperscalers surged to $393 billion (representing nearly 30% growth year-on-year). Certain portions of this commitment can be utilized on third-party products on the cloud marketplace, leading to a ready-made marketplace economy for ISVs to tap into.
  • Third, the hyperscalers have all launched programs that allow their partners to participate in the cloud marketplace. These ‘Private Offer’ programs enable partners to create custom offers for their customers via the marketplace. This could include pricing, bundling as well as their own value-add services. These programs ensure that ISVs that want to participate in the cloud marketplace can still leverage their partner ecosystem, and crucially, reward them for that activity via their own partner programs. Partners will play an integral role in cloud marketplaces. AWS, which has the most mature program, has indicated that north of 30% of marketplace transactions already feature a partner as the selling agent. This number will continue to grow.
  • CrowdStrike is one of a handful of companies that has surpassed $1 billion in total sales in the AWS Marketplace. Since launching on AWS in 2017, the marketplace has been its fastest-growing route-to-market, and it also delivers a higher-than-average deal size compared with its other sales channels. CrowdStrike has over 20 integrations with AWS products, including AWS Control Tower and Amazon GuardDuty.
  • NetApp recently launched NetApp Data Infrastructure Insights on the Azure Marketplace to help customers planning an Azure migration with streamlined observability and real-time telemetry data. NetApp has leaned heavily into its hyperscaler GTM strategy. In addition to offering products on the marketplaces, it is the first vendor to offer first-party services with all three leading hyperscalers.
  • Salesforce and AWS announced a wide-reaching strategic partnership agreement in 2023. One aspect of the agreement was the availability of select Salesforce products in the AWS marketplace for the first time, including Data Cloud, Service Cloud, Sales Cloud, Industry Clouds, Tableau, MuleSoft, Platform, and Heroku. In its Q3 2024 earnings, Salesforce highlighted AWS as a key growth driver with transactions doubling quarter-over-quarter and 10 deals exceeding $1 million in sales.
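The margin arithmetic behind the fee trend above can be made concrete with a quick calculation; the deal value and 10% partner margin are hypothetical, while the 20% and 3% fee levels come from the figures cited above:

```python
# Margin arithmetic on a hypothetical $100,000 deal, comparing the ~20% fee
# of early marketplaces with today's ~3% standard. The deal value and the
# 10% partner margin are illustrative assumptions.

def isv_net(deal_value, marketplace_fee, partner_margin=0.10):
    """Net revenue to the ISV after the marketplace fee and partner margin."""
    return deal_value - deal_value * marketplace_fee - deal_value * partner_margin

legacy = isv_net(100_000, 0.20)   # early marketplace economics
current = isv_net(100_000, 0.03)  # today's standard fee
improvement = current - legacy    # extra margin available in the stack
```

At today’s fee levels, roughly 17 points of margin move back into the stack on this hypothetical deal, which is precisely what leaves room for service delivery partners to participate.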

Enterprise Software & Digital Workflows: Agentic AI Becomes Table Stakes

Prediction:

Generative AI-powered features will enter widespread use in 2025, thereby requiring significant shifts in pricing models, with seat-license models being supplanted by consumption-based and outcome-based approaches. A 2024 Futurum Intelligence survey of 895 decision-makers and influencers found that 40% of respondents were paying for software on a consumption-based pricing model, and 15% were using an outcome-based model.

“As vendors continue to roll out new and enhanced versions of AI agents, consumption-, interaction-, and outcome-based pricing models are quickly becoming the most common approaches for linking the benefits of AI with the cost of the resource. This will be increasingly important to CEOs who need to justify their investment into AI, and particularly agentic AI systems. However, vendors need to ensure that any pricing model deployed – as well as any ROI promises made – clearly lays out all ancillary costs and restrictions so customers are able to make an accurate assessment of whether the pricing model works for their business and use cases.”

Keith Kirkpatrick

Research Director & Practice Lead
Enterprise Software & Digital Workflows

Predictions:

Three primary reasons are driving this change.

  • Both vendors and enterprise customers are realizing that AI is enabling work to be completed more quickly and efficiently than ever before, and that paying for a full-seat license is both inefficient and lacks a direct connection to business results.
  • Moreover, the results of a January 2025 study conducted by Kearney and The Futurum Group of more than 200 CEOs operating globally across diverse industries such as finance, manufacturing, retail, and healthcare with more than $1 billion USD in annual revenue found that incumbent organizations (those with operating histories of 10+ years) are more focused on aligning AI initiatives with recognized industry standards (49%) and achieving tangible ROI (49%).
  • As AI agents proliferate, we expect a strong shift to outcome-based pricing models in 2025, as these CEOs are prioritizing tangible and visible ROI from their AI investments. This outcome-based approach to pricing ensures that customers are not paying for software that is not delivering promised results, which can be contrasted with a consumption-based model that does not incorporate any type of ROI guarantee.

  • Zendesk has announced the use of outcome-based pricing with its AI agents, under which customers will only pay for successful interactions, based on agreed-upon interaction metrics.
  • Workhuman has fully shifted to an ROI guarantee model, under which larger overarching metrics, such as employee engagement or retention, are used to assess whether Workhuman has delivered on its promises and will be compensated for the use of its platform.
  • Salesforce is offering interaction-based pricing starting at $2 per interaction, with volume discounts available as use ramps up. The company’s goal is to ensure that customers see a direct link between AI usage and cost, instead of a more opaque flat fee per user model.
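The economics of these models can be compared with a back-of-the-envelope calculation; the $2 per-interaction rate is the starting rate noted above, while the per-seat price and usage volumes are hypothetical:

```python
# Back-of-the-envelope comparison of per-seat vs. interaction-based pricing.
# The $2 per-interaction rate is the published starting rate cited above;
# the per-seat price and usage volumes are hypothetical assumptions.

SEAT_PRICE = 60        # assumed monthly cost per seat (illustrative)
PER_INTERACTION = 2.0  # starting rate per interaction

def monthly_cost(seats, interactions, model):
    """Cost under a seat license vs. a consumption (per-interaction) model."""
    if model == "seat":
        return seats * SEAT_PRICE
    return interactions * PER_INTERACTION  # scales with actual usage

# At these rates, 100 seats break even at 3,000 interactions per month
consumption = monthly_cost(100, 1_500, "interaction")
seat_based = monthly_cost(100, 1_500, "seat")
```

The break-even point is what makes this transparent to buyers: below it, consumption pricing is cheaper; above it, the vendor must demonstrate that the extra interactions are producing outcomes worth paying for.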

Intelligent Devices: AI PCs Continue to Drive PC Market Refresh While On-Device AI Capabilities Redefine UX Across Key Device Categories

Prediction:

AI-capable PCs (PCs equipped with an NPU and capable of running some AI training and inference workloads locally) will come to represent at least 40% of new PC shipments by the end of 2025.

“The AI PC is, first and foremost, a radically better PC than pre-AI PCs. It is tangibly faster, more powerful, more capable and more useful. The all-day battery life alone is such a radical system improvement that even without its AI capabilities, it would be worth the upgrade. But perhaps more importantly in the long term, the AI PC also lays the necessary foundation for the next generation of software experience, which will be dominated by agentic AI. As agentic AI begins to insert itself into every application, from search, system management and security to productivity and creativity software, users in both the consumer and the commercial segments will need PCs designed securely to handle agentic AI workloads both in the cloud and locally, in order to take full advantage of the coming disruption/opportunity.”

Olivier Blanchard

Research Director & Practice Lead
Intelligent Devices

Predictions:

Three primary reasons are driving this change.

  • NPU for PCs: The introduction of NPUs into device system architectures, which includes PCs, is enabling devices to perform previously energy-intensive tasks far more efficiently than they could with traditional CPUs and GPUs. This new capability unlocks next-gen AI training and inference capabilities directly on the device, which in turn creates entirely new horizons of added utility for users and their organizations. NPU-equipped PCs also happen to deliver vastly superior performance per watt to their predecessors, translating into all-day (and even multi-day) battery life to users.
  • OEM Commitment to the Transition: Every major PC OEM is fully committed to this market transition, with aggressive competition between silicon vendors Qualcomm, AMD, and Intel accelerating performance improvements at both the processor and system levels. NVIDIA is also rumored to enter the market within 6-12 months. The PC ecosystem is moving forward, not backwards. AI PCs are already beginning to replace soon-to-be-obsolete traditional pre-AI PCs.
  • PC Refresh Cycle: The end of support for Windows 10 (slated for October 2025) will also help drive the PC refresh cycle toward AI PCs and accelerate their adoption in the commercial segment.

As AI-capable PCs are an evolution of pre-AI PCs, all previous use cases for PCs still apply. However, new use cases have already begun and will continue to emerge.

  • Moving some AI Processing from the Cloud to Devices to expand the reach of AI beyond the data center. As large language models and large mixed models (multimodal AI) become more efficient, and AI PC systems become more capable, AI PCs will accelerate the expansion of AI workloads from the cloud to AI-enabled devices. Many of the large language models trained in the cloud a year ago can already be trained directly on-device today. As that trend continues, organizations will increasingly be able to train, test and fine-tune many of these models securely, onsite and at a fraction of the cost they would have otherwise incurred. Additionally, AI PCs allow pre-trained models to be quickly and securely customized by organizations locally rather than in the cloud.

  • Agentic AI in the PC. As agentic AI begins to transform the way users interface with apps and software, AI-capable PCs will be uniquely positioned to deliver secure, local, highly individualized on-device agentic AI experiences to users, concurrent with more general-use cloud-based agentic AI experiences. Use case examples range from AI agents drafting email responses, managing calendars, and performing complex searches in seconds to reducing the time it takes to design a presentation, report, or proposal from hours to minutes.

  • All Day & Multi-Day Battery Life. PCs capable of delivering all-day and multi-day battery life even in thin-and-light form factors will also transform the way users work and play with their PCs, not only in hybrid and remote work scenarios but at the office as well, with notebook PCs becoming far easier to carry between meetings.

Semiconductors, Supply Chain, and Emerging Tech: Advanced Packaging and HBM Capacity Remain Key Bottlenecks for Global Compute

Prediction:

In 2H 2025, advanced packaging (AP) and HBM capacity are expected to remain critical bottlenecks for global compute deployment, driven by accelerating AI inference workloads and the growing adoption of evolving LLMs with multimodal capabilities. The technology advancement of AP and HBM and their adoption should be closely watched, as the two will underpin the future development of semiconductors used in AI servers and smartphones.

Predictions:

As Moore’s Law has slowed down, leading chipmakers are aiming to leverage various types of advanced packaging technology to improve the overall performance of semiconductors further, making advanced packaging an increasingly vital front of technology.

  • Memory Is Bedrock for AI Accelerators: Memory bandwidth is vital because model training is often bandwidth-constrained rather than purely compute-constrained. The attention mechanism in the transformer model must store and compute the relationships between all tokens, so memory requirements grow quadratically with sequence length. Memory is an even bigger constraint during inference, due to the need to handle longer context windows and an enlarged key-value cache (KV cache) in the transformer model; memory consumption for the KV cache grows linearly with the number of tokens. To that end, HBM has become the essential component for AI, offering higher data-transfer speeds and lower power consumption than traditional DRAM products.

  • Advanced Packaging Pushes Beyond Moore’s Law: Advanced packaging has become an essential technology in the AI hardware supply chain, especially as Moore’s Law has slowed down in recent years. Chipmakers are turning to advanced packaging as a new solution to sustain performance improvements. By integrating the compute die, memory, and packaging substrate more closely, advanced packaging enables better power efficiency, higher performance, and faster data transfer between components. This is an ongoing industry shift and a key trend to follow.

  • Supply Constraint Remains in AI Compute: Compute demand has surged since the introduction of ChatGPT in late 2022, and the acceleration of AI inference workloads has driven a second wave of growth. Today, we assess that the global compute landscape remains supply-constrained, driven not only by extraordinary demand for AI chips globally but also by structural bottlenecks across the supply chain. These include the complexity and capacity constraints of advanced packaging and HBM, along with technical difficulties and slower production ramp-ups in downstream assembly and integration. While compute supply–demand dynamics may ease over time, we believe the possibility of supply constraints in compute deployment will continue in 2H 2025.

  • HBM for GPUs and ASICs: HBM remains the critical memory technology powering nearly every major AI accelerator globally, from NVIDIA’s GB200 to Google’s TPUv6p. The fifth-generation HBM3E 12-hi is expected to be the dominant product through the remainder of 2025 and into 1H 2026. Looking ahead, SK Hynix will likely first introduce HBM4 (6th gen HBM) in the market, powering the next generation of AI chips, including NVIDIA’s VR200 and AMD’s MI400X.
  • Advanced Packaging (AP): Advanced packaging technologies are indispensable for all AI chips, just like HBM. Advanced packaging for AI accelerators, such as TSMC’s CoWoS-S, CoWoS-R, and CoWoS-L, plays a critical role not only in packaging the compute die, HBM, I/O die, and other necessary components into a single AI chip but also in further improving the performance of an AI accelerator. Demand for CoWoS-L has increased due to NVIDIA’s transition from Hopper GPUs to Blackwell, which relies on CoWoS-L instead of CoWoS-S. Another packaging technology to watch is WMCM (Wafer-Level Multi-Chip Module), expected to power Apple’s next-generation A20 SoC for the iPhone 18, which will come into play in 2026.

  • Co-Packaged Optics (CPO): CPO represents an advanced form of heterogeneous integration that combines optical components and semiconductor devices within a single package, aiming to overcome the performance and power limitations of high-bandwidth data center applications. While the industry continues to debate the commercial viability and timing of CPO adoption, the growing need for high-speed, low-power interconnect solutions makes its eventual deployment increasingly necessary. The earliest realistic timeline for CPO to enter the market is around late 2026 to 2027. One potential real-world application of CPO could be NVIDIA’s Rubin-series GPUs, which are expected in 2026, and the following Feynman product line.
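The linear KV-cache growth noted above can be made concrete with a back-of-the-envelope estimate; the model dimensions below are illustrative and do not describe any specific model or accelerator:

```python
# KV-cache memory grows linearly with sequence length: each token stores one
# key and one value vector per layer. All dimensions below are illustrative
# and do not describe any specific model or accelerator.

def kv_cache_bytes(layers, heads, head_dim, seq_len, bytes_per_elem=2):
    """2x covers key + value; 2 bytes per element assumes fp16 storage."""
    return 2 * layers * heads * head_dim * seq_len * bytes_per_elem

# Doubling the context window doubles the KV-cache footprint
short_ctx = kv_cache_bytes(layers=32, heads=32, head_dim=128, seq_len=4096)
long_ctx = kv_cache_bytes(layers=32, heads=32, head_dim=128, seq_len=8192)
gib = short_ctx / 2**30  # ~2 GiB per sequence at these dimensions
```

Even at these modest, assumed dimensions, a single long-context sequence consumes gigabytes of fast memory, which is why HBM capacity, not raw compute, so often sets the ceiling on inference throughput.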

Software Lifecycle Engineering: Developers Engage AI to Augment Work

Prediction:

By the end of 2025, over 70% of software developers will use generative AI to augment development work, including code generation, completion, review, analysis, bug fixing, or unit tests. Of developers and DevOps engineers, 20% will use AI and open standards to automate tasks and basic workflows as the use of agentic AI expands to platform engineering.

“Generative AI is becoming as essential to the software development process as IDEs, testing, and DevOps automation tools. Engineers are learning the best uses of reasoning LLMs across a growing number of agentic, multi-step tasks. MCP servers extend the capabilities of development and automation tools vendors so they are now accessible by LLMs used in development.”

Mitch Ashley

Vice President & Practice Lead
Software Lifecycle Engineering

Predictions:
  • AI Within CLIs and IDEs: This brings generative AI directly into the workflows of developers who frequently work across multiple interfaces to perform development work, configuration, and automation.
  • Reasoning-Capable LLMs: These are incrementally capable of taking on and agentically performing core multi-step development tasks, including code pull requests, environment setup, execution of unit tests, code translation, and documentation.
  • Larger Context Windows and Reasoning LLMs: These enable the understanding of codebases and technical documentation and aid developers in working across longer conversation windows and tasks.
  • Vendor MCP Servers: These servers extend LLM access to the services and capabilities of the development and automation tools used by developers and platform engineers.
  • Agentic AI Automation of Development Tasks: These tasks include analyzing and validating requirements and product definition documents, code refactoring across multiple files, creating unit and some integration-level testing, and performing root cause analysis.
  • Automate and Monitor DevOps Tool Chains: Analyze log files and telemetry data across multiple tools and platforms to automate and monitor DevOps tool chains and critical continuous integration/continuous deployment (CI/CD) processes.
  • Analyze Full Codebases: This process identifies complex interdependencies between code, libraries, packages, and images.
  • Follow the Steps of a Code Plan: Apply multistep reasoning to ensure the proper execution of steps, effective error handling, and appropriate human intervention in cases of errors or unexpected results.
  • Treating Prompts as Code: Both prompts written by engineers and meta-prompts created by LLMs are treated as code, ensuring they are captured, tracked, tested, and version-controlled, and that their dependencies are managed, as prompts become an integral part of software development processes.
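The prompts-as-code practice above can be sketched with a content-hash versioning scheme; the template text and helper names are illustrative assumptions:

```python
# Sketch of treating prompts as code: identify each prompt template by a
# content hash so changes are tracked, diffed, and re-tested like source.
# Template text and helper names are illustrative assumptions.

import hashlib

def prompt_version(template):
    """Stable short identifier; any edit to the template changes it."""
    return hashlib.sha256(template.encode()).hexdigest()[:12]

def render(template, **params):
    return template.format(**params)

SUMMARIZE_V1 = "Summarize the following {doc_type} in three bullet points:\n{text}"

version = prompt_version(SUMMARIZE_V1)
rendered = render(SUMMARIZE_V1, doc_type="incident report", text="...")

# An edited template yields a new version, signaling an untested prompt
changed = prompt_version(SUMMARIZE_V1 + " Be concise.")
```

Pinning a version identifier to the template content lets teams gate prompt changes behind the same review and regression checks they already apply to source code.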
