
AWS re:Invent 2025: Wrestling Back AI Leadership


Analyst(s): Mitch Ashley, Fernando Montenegro, Nick Patience, Brad Shimmin, Alex Smith
Publication Date: December 5, 2025

What is Covered in this Article:

  • AWS’s strategic pivot from neutral host to full-stack AI model manufacturer with Nova and Trainium
  • Major AI infrastructure updates: Trainium3 UltraServers, Trainium4 roadmap, and AI Factories for sovereign AI
  • The evolution of agentic development: Kiro IDE, Frontier Agents, and autonomous DevOps/Security workflows
  • Security enhancements: Embedded AI governance, AWS Security Agent, and expanded Bedrock policies
  • AWS Marketplace transformation: AI-driven Agent Mode for automated procurement and discovery
  • Data infrastructure moves: S3 evolving toward a universal database within the object storage layer, and Nova Forge bringing domain data directly into Amazon Nova frontier-scale models

The Event – Major Themes & Vendor Moves: The 14th AWS re:Invent conference was held in Las Vegas this week, drawing a record 60,000 attendees over its four-day span. The overall theme was AWS reasserting itself as a full-fledged AI model player, announcing more capable second-generation Nova models, a path for enterprises to build highly customized models, and tighter coupling to Trainium economics, aiming to wrestle back the crown as not only the leading cloud hyperscaler but also the leading AI cloud platform.

On the model front, AWS announced the Nova 2 family with Nova 2 Pro for advanced reasoning, Nova 2 Omni for long-context multimodal workloads, and Nova 2 Sonic for real-time speech-to-speech, tightly aligned with Amazon Connect and contact-center automation. Amazon Nova Forge debuted as a way for enterprises to build custom frontier models on proprietary data. On infrastructure, AWS made Trainium3 UltraServers generally available, packing 144 Trainium3 chips per system and delivering up to 4.4 times more performance and 4 times better energy efficiency than the prior generation, directly targeting AI training and inference economics.

AWS also unveiled AI Factories: On-premises racks combining Trainium and Nvidia accelerators with Bedrock and AWS networking, aimed at sovereign AI and highly regulated sectors and posing a direct challenge to Dell, HPE, and Lenovo in AI data center buildouts. Additionally, AWS announced Trainium4, which will integrate Nvidia NVLink Fusion, enabling it to participate in Nvidia’s rack-scale fabric alongside Blackwell GPUs.

On the development stack, AWS advanced its tooling with several coordinated releases aimed at enabling continuous agent-driven software creation. Kiro moved to general availability as a spec-centered AI IDE with autonomous agent capabilities, multi-repo context, and native integrations with Jira, GitHub, and Slack. Frontier Agents introduced a new class of autonomous, long-running, and horizontally scalable agent execution, with prebuilt DevOps and Security agents expanding the operational surface agents can cover. AWS also introduced Transform with support for any language or stack to help teams reduce technical debt without slowing agent adoption.

In security and compliance, AWS Security Hub is now generally available with near real-time risk analytics. Amazon GuardDuty saw improvements, including broader support for ECS and EC2 environments. AWS also announced the AWS Security Agent, which is in preview, designed to autonomously manage security workflows, including reviews, vulnerability scans, code fixes, and penetration tests. Amazon Bedrock AgentCore introduced Policy and Evaluations in preview to help teams govern and monitor agent performance. Amazon API Gateway added Model Context Protocol proxy support, enabling standard REST APIs to function securely as tools for AI agents. Additionally, AWS announced a partnership with CrowdStrike for the accelerated integration of cloud environment information and previewed AWS Interconnect – multicloud for easier connectivity between AWS and other clouds, starting with Google Cloud.

Within the realm of enterprise data intelligence, analytics, and infrastructure, AWS elevated its game on several fronts. First, the company added new capabilities to its S3 storage substrate, turning it further into a de facto database. And second, the company introduced tooling designed to democratize the creation of custom frontier models with Nova Forge. What the company did not address, however, is the concern that Futurum sees as the next major battleground: the semantic layer. Regardless, AWS’s strong set of announcements at re:Invent signaled a sophisticated play on building data-centric AI, a play that refreshingly seeks to do away with unnecessary complexity and cost.

Finally, AWS Marketplace delivered arguably its most significant release cycle to date, unveiling Agent Mode as a conversational interface for tailored software discovery, Multi-Product Solutions for bundling third-party ISV software with professional services, and Express Private Offers using AI to instantly approve discounts based on pre-set parameters.


Analyst Take: AWS re:Invent 2025 signaled a shift in the company’s approach to AI. By launching the Nova 2 model family and Nova Forge, AWS is no longer content just hosting partners like Anthropic; it is aggressively positioning itself as a primary model vendor with a structural cost advantage. The strategy is vertically integrated: run Nova models on Trainium3 UltraServers at near-zero margin to undercut GPT-4o and Claude 3.5 on price, making AWS the most economic platform for scaled enterprise inference.

The introduction of AI Factories—managed racks of Trainium and Nvidia silicon deployed on-premises—is a direct strike against hardware OEMs like Dell and HPE. It turns sovereign AI from a hardware purchase into an extension of the AWS cloud operating model, locking in regulated industries that can’t move to the public cloud. Meanwhile, the move toward Frontier Agents and the Kiro IDE attempts to shift the developer experience from writing code to specifying outcomes, betting that agentic workflows will define the next decade of software creation.

Ultimately, AWS is playing to its core strength: industrializing innovation. It isn’t trying to win on model hype but on unit economics, infrastructure sovereignty, and operational scale.

Agents & AI Development

AWS used re:Invent to re-establish itself as a development platform for building software and AI agents, not just infrastructure. The company has spent years establishing the substrate for compute, storage, and data, and introducing tools such as Amazon Q Developer, AWS Transform, Bedrock AgentCore, and the Kiro AI IDE. 2025’s year-end announcements show AWS building up its development stack with agent development and operations in its sights: it wants to be the place where agentic development happens and where agent workloads operate at scale. The message is no longer about individual services. It is about creating a development environment where autonomous agents can plan, build, test, secure, and ship software inside a continuous operational loop.

AWS addressed critical challenges that any vendor must face to move AI agents beyond helpful assistants to autonomous agents that tackle work extending from hours to a day or more. AWS Frontier Agents are a new class of agents that run autonomously and independently for long periods of time and scale horizontally. While not overly touted in the announcements, AWS has clearly invested in agent memory persistence techniques to enable agents to execute more complex tasks over longer periods. Included in the announcements were AWS’s own frontier agents: the Kiro autonomous agent, AWS DevOps Agent (emphasis on Ops), and AWS Security Agent.

AWS is signaling that an agent platform must support autonomy, concurrency, and persistent state, not just task completion. It is also anchoring development around specifications instead of ad hoc prompt threads. Kiro’s spec-driven workflow creates the structure agents need to take on more meaningful work, such as feature implementation, triage, coverage improvements, and cross-tool automation without constant human correction. AWS Transform (mainframe, VMware, and .NET) and Transform Custom (no language limitations) widen the road for modernization efforts.

Together, these developments show AWS is designing for a rich agent development lifecycle. It is building an environment where agents can work continuously, where humans direct with clarity rather than micromanaging, and where the platform carries a greater share of the operational burden. This is AWS positioning itself not just as a cloud for running applications, but as a platform for creating software through agents that collaborate with developers and scale across everything the organization builds.

AI

With the Nova 2 family of models, AWS is becoming more of a direct model competitor to OpenAI, Anthropic, and Google. Nova 2 Pro is positioned as AWS’s highest-intelligence reasoning model, optimized for instruction-following, tool use, and agent workflows, and already in use by companies including Cisco, Siemens, Sumo Logic, and Trellix for applications such as threat detection and analytics. Nova 2 Omni is Amazon’s multimodal model, handling long-context multimodal inputs (hundreds of pages of text, hours of audio, long videos) in a single pass, enabling use cases such as catalog analysis, multi-document grounding, and video agents. Nova 2 Sonic is a real-time, low-latency speech-to-speech model with a 1M-token context window, multilingual voices, and bidirectional streaming; it integrates directly with the Amazon Connect automated contact center app plus Twilio, Vonage, and AudioCodes.

Because AWS runs Nova on its own Trainium3-powered Trn3 UltraServers, it can treat silicon margin as fungible and push near-zero-margin pricing on Nova tokens to undercut GPT‑4o or Claude 3.5 when necessary. If Nova reaches ‘good-enough’ parity on quality, AWS’s ability to cross-subsidize with Trainium gives it a structurally better cost position for enterprise inference than cloud rivals who must pay external model and chip vendors.
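The cross-subsidy argument is ultimately simple arithmetic: a vendor that owns its inference silicon can price below a rival and still keep a higher gross margin. A minimal sketch in Python, using purely hypothetical, illustrative figures (these are not AWS, OpenAI, or Anthropic prices):

```python
def gross_margin(price_per_1m_tokens: float, cost_per_1m_tokens: float) -> float:
    """Gross margin fraction on inference at a given list price and serving cost."""
    return (price_per_1m_tokens - cost_per_1m_tokens) / price_per_1m_tokens

# Hypothetical numbers (USD per 1M output tokens), for illustration only:
# a vendor paying external chip/model margins vs. one on in-house silicon.
rival = gross_margin(price_per_1m_tokens=10.00, cost_per_1m_tokens=7.00)  # 0.30
nova = gross_margin(price_per_1m_tokens=4.00, cost_per_1m_tokens=2.00)    # 0.50

# Lower price, yet a structurally higher margin on each token served.
assert nova > rival
```

The point of the sketch is the shape of the curve, not the numbers: every dollar of serving cost removed by vertical integration can be spent on price cuts, margin, or both.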

AWS announced its AI Factories, which are effectively AWS in a rack and a key part of its sovereign AI proposition. AI Factories are dedicated infrastructure delivered into the customer’s data center: racks combining Trainium3 and/or Nvidia Blackwell (GB300/B300) GPUs with AWS’s high-speed networking, storage, and security stack. They are fully managed by AWS, but powered, housed, and connected by the customer, meeting data residency and regulatory demands while keeping the AWS software stack (Bedrock, SageMaker) and control plane. As such, AWS AI Factories are a direct challenge to similar offerings from the likes of Dell, HPE, Lenovo, and NVIDIA (although the offering obviously relies in part on NVIDIA’s GPUs). It is a hardware deployment wrapped in an ongoing AWS service relationship.

And down at the chip and server level, AWS announced Trainium4, just as it announced Trainium3 at last year’s re:Invent. Performance-wise, AWS claimed ~6× effective performance uplift for Trainium4 at FP4 vs. Trainium3 per socket, with the potential for larger memory domains. Interestingly, Trainium4 will integrate Nvidia NVLink Fusion, enabling Trainium4 to participate in Nvidia’s rack-scale fabric alongside Blackwell GPUs and Nvidia’s CPU. We believe this is a hedge against NVIDIA dependency and a way to keep the highest-volume, most margin-sensitive workloads (inference, mid-scale training) on Trainium.

AWS’s new Trn3 UltraServers make Trainium3 the default engine for frontier-scale training and high-volume inference, despite the training-themed name. Inferentia is still positioned for classic high-volume inference, but the line is blurring as Trainium3 is explicitly marketed for both training and serving frontier-scale models. The Jeff Bezos quote, “Your margin is my opportunity,” applies here: every percentage point Trainium saves versus H100/GB200 becomes room for AWS to cut Nova/Bedrock prices or bank the margin. Trainium3 UltraServers pack 144 Trainium3 chips into a single integrated system, delivering up to 4.4x more compute performance and 4x greater energy efficiency than Trainium2 UltraServers.

Security

AWS is systematically embedding “security for AI” directly into the base stack, with Amazon Bedrock AgentCore policy controls and API Gateway’s MCP support as recent examples. This aligns with the company’s long-standing strategy of eliminating “undifferentiated heavy lifting.” By commoditizing the complex security scaffolding required for agentic workflows, AWS aims to accelerate the deployment of AI initiatives by making the underlying infrastructure inherently trustworthy.
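For context on what MCP proxying implies: the Model Context Protocol is a JSON-RPC 2.0 protocol, so a gateway fronting a REST API as an agent tool receives `tools/call` requests shaped roughly like the sketch below. The tool name and arguments here are hypothetical, not an actual API Gateway configuration:

```python
import json


def mcp_tools_call(request_id: int, tool_name: str, arguments: dict) -> dict:
    """Build an MCP tools/call request (JSON-RPC 2.0 envelope)."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }


# Hypothetical tool exposed by an MCP proxy in front of an orders REST API.
req = mcp_tools_call(1, "get_order_status", {"order_id": "A-1001"})
print(json.dumps(req, indent=2))
```

The proxy's job is to translate such envelopes into authenticated REST calls and return results in MCP's response format, which is the "undifferentiated heavy lifting" AWS is absorbing here.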

Simultaneously, the introduction of the AWS Security Agent signals a move that can potentially disrupt parts of the “AI for security” market. By automating high-value workflows, such as design reviews and penetration testing, AWS is leveraging its model capabilities to enhance operational security. However, this new “frontier agent” likely faces significant challenges ahead; while it may excel in standard scenarios, its utility will ultimately be determined by its ability to navigate the nuances of highly complex, non-standard, and often political customer environments. This will require careful positioning of the agent’s capabilities and intended scope, especially if its usage will sit at the boundary between developers and security teams.

Beyond security, AWS’s strategy reveals a growing embrace of multicloud architectures, marking a welcome evolution in its support for the “messy reality” of enterprise IT. Features like AWS Interconnect acknowledge that customers must navigate a fragmented landscape of multiple cloud providers, on-premises data centers, co-location facilities, and diverse SaaS relationships. By simplifying the interoperability between these environments, AWS is moving to support the complex, hybrid operational models that define modern organizations.

This flexibility extends to digital sovereignty with the release of AWS AI Factories and the upcoming European Sovereign Cloud (ESC). These announcements represent a significant commitment to addressing the complex legal, operational, and technological challenges associated with supporting sovereign operations. Rather than a one-size-fits-all approach, these initiatives offer the architectural elasticity customers need to balance innovation with strict regulatory and data residency mandates.

Data Intelligence, Analytics, and Infrastructure

At re:Invent 2025, AWS sent a refreshingly pragmatic message to the market, arguing that the next phase of enterprise AI will be won in the trenches of data infrastructure. To this end, the company unveiled a set of strategic initiatives aimed at evolving its foundational data service, S3, into an active engine for AI; democratizing the creation of deeply customized models with Nova Forge; and, by omission, highlighting the critical work that remains in providing business context to AI agents. Overall, these announcements reflected an engineering-led strategy, focused on equipping enterprises with the foundational tools to make agentic AI a durable reality.

The most significant shift is the fundamental re-imagining of Amazon S3, moving it from a passive storage shed to an active, queryable foundation for both analytics and AI. This is a direct play on data gravity, bringing compute capabilities to the data instead of forcing customers into costly and complex data migration projects. Now generally available, S3 Vectors establishes the first native vector search capability within a major cloud object store. This allows enterprises to index and query billions of vectors directly alongside their source documents, images, and other unstructured files. With the ability to scale up to two billion vectors per index and 10,000 indexes per bucket, it promises substantial cost reductions (AWS cited up to 90% in some scenarios) compared to dedicated vector databases. Tight integrations with Bedrock Knowledge Bases and Amazon OpenSearch Service are already in place, which will further help simplify the construction of Retrieval-Augmented Generation (RAG) and hybrid SQL/RAG search pipelines. Complementing vectors, AWS also fortified S3 Tables, its managed Apache Iceberg offering. Following rapid adoption that has already surpassed 400,000 tables, S3 Tables now has Intelligent Tiering, a feature that promises to automatically trim storage costs by as much as 80% without impacting performance. Obviously, these claims need to be proven out in the market and will vary according to customer use cases.
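To ground what a vector index like S3 Vectors does at query time, it returns the stored vectors nearest to a query embedding. A minimal, self-contained nearest-neighbor sketch in plain Python, with made-up three-dimensional embeddings standing in for real model output (no AWS calls involved):

```python
import math


def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)


def top_k(index: dict, query: list[float], k: int = 2) -> list[str]:
    """Return the k keys whose vectors are most similar to the query."""
    ranked = sorted(index.items(), key=lambda kv: cosine(kv[1], query), reverse=True)
    return [key for key, _ in ranked[:k]]


# Toy "index": document key -> embedding (hypothetical vectors).
index = {
    "doc-a": [0.9, 0.1, 0.0],
    "doc-b": [0.0, 1.0, 0.1],
    "doc-c": [0.8, 0.2, 0.1],
}
print(top_k(index, query=[1.0, 0.0, 0.0]))  # doc-a and doc-c are closest
```

At billions of vectors this brute-force scan gives way to approximate indexes, which is precisely the machinery a managed service hides; the cost argument AWS is making is about storing those vectors in object storage rather than in a dedicated database.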

Recognizing the complexities and limitations of in-prompt context engineering (e.g., semantic search) and simple model fine-tuning, AWS also introduced Amazon Nova Forge. This service provides enterprises with the tools to build their own proprietary frontier models. Nova Forge introduces the concept of “Open Training Models,” which grants customers access to various checkpoints of Nova foundation models during the training cycle. This approach directly addresses the long-acknowledged problem of catastrophic forgetting, where a model loses its core reasoning abilities when it is force-fed new, domain-specific information. By allowing customers to blend their proprietary data with AWS-curated training sets at multiple stages (from pre-training to fine-tuning), the model can deeply absorb new domain knowledge while retaining its foundational intelligence. The result is a unique model asset, which AWS refers to as a “Novella,” that can be securely deployed via Bedrock to create a defensible competitive advantage built on an organization’s unique data.
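The blending idea behind Open Training Models can be illustrated with a toy data mixer: interleave domain-specific examples into a general corpus at a fixed ratio, so the model keeps seeing foundation data while absorbing the new domain. The ratio and data below are illustrative assumptions, not Nova Forge's actual mechanics:

```python
import random


def blend(domain: list, general: list, domain_frac: float = 0.3, seed: int = 0) -> list:
    """Produce a training stream mixing domain data into a general corpus.

    Keeping domain_frac well below 1.0 is the classic mitigation for
    catastrophic forgetting: the model continues to see general data
    while new domain knowledge is introduced.
    """
    rng = random.Random(seed)
    d, g = iter(domain), iter(general)
    stream = []
    for _ in range(len(domain) + len(general)):
        try:
            stream.append(next(d) if rng.random() < domain_frac else next(g))
        except StopIteration:
            # One side is exhausted: drain whatever remains of the other.
            stream.extend(d)
            stream.extend(g)
            break
    return stream


mixed = blend([f"claims-doc-{i}" for i in range(3)],
              [f"web-doc-{i}" for i in range(7)])
assert len(mixed) == 10  # every example appears exactly once
```

Nova Forge's checkpoint access goes further than this sketch, letting customers inject data at multiple stages of the training cycle rather than only at the end, but the forgetting-mitigation intuition is the same.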

When pressed on the topic of a unified semantic layer, however, AWS leadership pointed to a constellation of existing capabilities rather than a dedicated new service. The current strategy requires customers to piece together Bedrock Knowledge Bases for unstructured context, Amazon QuickSight for business intelligence semantics, and the nascent S3 Vectors as a kind of de facto semantic representation of the data lake. This posture leaves a noticeable gap in the AWS portfolio, lacking a direct answer to the centralized, governable semantic platforms that rivals like Microsoft, Databricks, and Snowflake are making central to their enterprise AI stories.

AWS Marketplace

AWS Marketplace delivered arguably its most significant release cycle to date at re:Invent 2025, unveiling a suite of features designed to fundamentally alter how enterprise software is discovered and bought. While the headline grabbers were Nova and Kiro, the most immediate “frontier” application for many enterprises is the new Agent Mode in AWS Marketplace. AWS is effectively deploying agentic AI to disrupt the traditional, friction-heavy software procurement process.

Agent Mode appears as a conversational interface capable of ingesting a buyer’s specific business requirements (uploaded directly as documents) and responding with tailored recommendations. Rather than browsing lists, procurement teams can now task an agent to find a solution based on highly specific inputs, and the agent will parse the catalog, verify compliance data, and generate side-by-side comparisons of viable candidates. This shifts the Marketplace into the role of an active procurement consultant. By allowing the agent to handle initial vetting, requirements mapping, and the generation of purchasing proposals, AWS is targeting the weeks of manual analysis typically required for enterprise software selection. If reliable, this capability could make AWS a powerful hub for capturing demand signals and key requirements. It is a major step in the multi-year evolution of the digital buyer journey.

Other Marketplace announcements were made, including Multi-Product Solutions and Express Private Offers, which address other transactional friction points that often stall deals after discovery. The new Multi-Product Solutions allow partners to bundle third-party ISV software with their own professional services into a single listing, enabling AWS Marketplace to mimic the complex, multi-vendor reseller contracts that previously had to occur offline. Combined with Express Private Offers, which utilize AI to approve discounts based on preset parameters instantly, AWS is attempting to automate the negotiation phase, just as Agent Mode automates discovery. By injecting agents and automation into the loop, AWS captures more intent data earlier in the buying cycle, democratizing access to complex software ecosystems and allowing line-of-business owners to bypass traditional RFP bottlenecks.

What to Watch:

  • How well do the new AWS agents fare in the real world? This applies particularly to the Security Agent, as areas such as penetration testing are highly detailed and context-specific.
  • Rivals will continue to launch models with better benchmark results than Nova, but it’s not just about the models; it’s about the full stack. However, the Nova Sonic model could gain significant traction in contact center deployments.
  • Expect Google, Microsoft, and others to offer something similar to Nova Forge in fairly short order.
  • AWS AI Factories – on-premises racks combining Trainium and NVIDIA accelerators with Bedrock and AWS networking – pose a direct challenge to Dell, HPE, and Lenovo in AI data center buildouts and are a cornerstone of AWS sovereign AI play, which will be a big focus in 2026.
  • Look for the lines between S3 and AWS’s purpose-built databases to blur even further. The planned integration between Postgres (via the PGVector extension) and S3 Vectors is just the first shoe to drop. Watch for more zero-ETL integrations from services like Aurora and Redshift that begin to treat S3 Tables and S3 Vectors as first-class extensions of their own storage.
  • Regarding the semantic layer, AWS will not cede this battleground forever. Expect a major service announcement or a strategic acquisition in the semantic space within the next nine to twelve months. The question is not if AWS will act, but how. Will they attempt to build upon the existing AWS Glue Data Catalog, or will they launch a new service to compete directly with rivals?
  • Adoption of Kiro’s spec-driven workflow, which determines how much real development work agents can assume.
  • How the frontier agents (Kiro autonomous, DevOps, and Security) improve and gain adoption for operating agent-driven development and delivery.

You can read all the re:Invent announcements on AWS’s dedicated blog.

Disclosure: Futurum is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of Futurum as a whole.

Other insights from Futurum:

Amazon Q3 FY 2025 Earnings: AWS Reaccelerates, Retail and Ads Grow

Are We in a New Westphalian World Web? – Report Summary

Is Open Semantic Interchange the Treaty AI Needs to Deliver Value?

Image Credit: AWS

Author Information

Mitch Ashley is VP and Practice Lead of Software Lifecycle Engineering for The Futurum Group. Mitch has more than 30 years of experience as an entrepreneur, industry analyst, product development leader, and IT leader, with expertise in software engineering, cybersecurity, DevOps, DevSecOps, cloud, and AI. As an entrepreneur, CTO, CIO, and head of engineering, Mitch led the creation of award-winning cybersecurity products utilized in the private and public sectors, including the U.S. Department of Defense and all military branches. Mitch also led managed PKI services for the broadband, Wi-Fi, IoT, energy management, and 5G industries; product certification test labs; an online SaaS (93M transactions annually); and the development of video-on-demand and Internet cable services and a national broadband network.

Mitch shares his experiences as an analyst, keynote and conference speaker, panelist, host, moderator, and expert interviewer discussing CIO/CTO leadership, product and software development, DevOps, DevSecOps, containerization, container orchestration, AI/ML/GenAI, platform engineering, SRE, and cybersecurity. He publishes his research on futurumgroup.com and TechstrongResearch.com/resources. He hosts multiple award-winning video and podcast series, including DevOps Unbound, CISO Talk, and Techstrong Gang.

Fernando Montenegro serves as the Vice President & Practice Lead for Cybersecurity & Resilience at The Futurum Group. In this role, he leads the development and execution of the Cybersecurity research agenda, working closely with the team to drive the practice's growth. His research focuses on addressing critical topics in modern cybersecurity. These include the multifaceted role of AI in cybersecurity, strategies for managing an ever-expanding attack surface, and the evolution of cybersecurity architectures toward more platform-oriented solutions.

Before joining The Futurum Group, Fernando held senior industry analyst roles at Omdia, S&P Global, and 451 Research. His career also includes diverse roles in customer support, security, IT operations, professional services, and sales engineering. He has worked with pioneering Internet Service Providers, established security vendors, and startups across North and South America.

Fernando holds a Bachelor’s degree in Computer Science from Universidade Federal do Rio Grande do Sul in Brazil and various industry certifications. Although he is originally from Brazil, he has been based in Toronto, Canada, for many years.

Nick Patience is VP and Practice Lead for AI Platforms at The Futurum Group. Nick is a thought leader on AI development, deployment, and adoption - an area he has researched for 25 years. Before Futurum, Nick was a Managing Analyst with S&P Global Market Intelligence, responsible for 451 Research’s coverage of Data, AI, Analytics, Information Security, and Risk. Nick became part of S&P Global through its 2019 acquisition of 451 Research, a pioneering analyst firm that Nick co-founded in 1999. He is a sought-after speaker and advisor, known for his expertise in the drivers of AI adoption, industry use cases, and the infrastructure behind its development and deployment. Nick also spent three years as a product marketing lead at Recommind (now part of OpenText), a machine learning-driven eDiscovery software company. Nick is based in London.

Brad Shimmin is Vice President and Practice Lead, Data Intelligence, Analytics, & Infrastructure at Futurum. He provides strategic direction and market analysis to help organizations maximize their investments in data and analytics. Currently, Brad is focused on helping companies establish an AI-first data strategy.

With over 30 years of experience in enterprise IT and emerging technologies, Brad is a distinguished thought leader specializing in data, analytics, artificial intelligence, and enterprise software development. Consulting with Fortune 100 vendors, Brad specializes in industry thought leadership, worldwide market analysis, client development, and strategic advisory services.

Brad earned his Bachelor of Arts from Utah State University, where he graduated Magna Cum Laude. Brad lives in Longmeadow, MA, with his beautiful wife and far too many LEGO sets.

Alex is Vice President & Practice Lead, Ecosystems, Channels, & Marketplaces at the Futurum Group. He is responsible for establishing and maintaining the Channels Research program as part of the overall Futurum GTM and Channels Practice. This includes overseeing the channel data rollout in the Futurum Intelligence Platform, primary research activities such as research boards and surveys, delivering thought-leading research reports, and advising clients on their indirect go-to-market strategies. Alex also supports the overall operations of the Futurum Research Business Unit, including P&L segmentation, sales and marketing alignment, and budget planning.

Prior to joining Futurum, Alex was VP of Channels & Enterprise Research at Canalys where he led a multi-million dollar research organization with more than 20 analysts. He played an integral role in helping the Canalys research organization migrate into Omdia after having been acquired in 2023. He is an accomplished research leader, as well as an expert in indirect go-to-market strategies. He has delivered numerous keynotes at partner-facing conferences.

Alex is based in Portland, Oregon, but has lived in numerous places, including California, Canada, Saudi Arabia, Thailand, and the UK. He holds a Bachelor of Commerce with a Finance major from Dalhousie University in Halifax, Canada.

