AWS Summit New York City: AWS Forges the Enterprise-grade Pipeline for Agentic AI
Analyst(s): Nick Patience, Brad Shimmin, Alex Smith, Mitch Ashley
Publication Date: July 18, 2025
What is Covered in this Article:
- AWS introduced several new capabilities, emphasizing the breaking down of data silos between foundational storage and advanced AI services to accelerate time to value across the entire data-to-insight pipeline.
- AWS introduced Amazon Bedrock AgentCore, targeted at reducing the “undifferentiated heavy lifting” that impedes agent projects moving into production. AI agents and agentic tools are now included in the AWS Marketplace, giving developers a centralized catalog for discovering and deploying agent applications.
- AWS also made foundational data layer improvements to support AI adoption, introducing expanded Amazon S3 Metadata and S3 Vectors for efficient access to high-quality data and AI-specific workloads, and Amazon SageMaker Catalog, which serves as a central hub for semantic meaning using GenAI to enrich metadata.
The Event – Major Themes & Vendor Moves: AWS brought its AWS Summit to New York City with a vision for an AI-driven era focused on enabling enterprise-grade AI agents at scale. During his keynote, Swami Sivasubramanian, AWS’ VP of Agentic AI, stressed that the very nature of enterprise software development is fundamentally changing, with institutional knowledge increasingly finding its way into AI models capable of reasoning, planning, and acting like expert humans. The key product announcements included Amazon Bedrock AgentCore, a new AI Agents and Tools category in AWS Marketplace, and Amazon S3 Vectors.
Analyst Take: The most significant announcement at the New York AWS Summit was Amazon Bedrock AgentCore, which represents AWS’s largest infrastructure investment in agentic AI to date and competes directly with Microsoft’s Copilot Studio and Google’s Vertex AI Agent Builder. By making AgentCore framework agnostic, with support for CrewAI, LangGraph, and LlamaIndex cited, AWS is positioning itself as the enterprise’s AI infrastructure layer. On the same theme, the AWS Marketplace expansion into agents allows users to test agents from various vendors, all within the AWS environment. And because AI runs on data, abstracting away the complexities of enterprise data within AWS through its object storage layer and metadata cataloging capabilities will be crucial.
Agent Development Lifecycle: Build, Deploy, and Operate
AWS outlined the blueprint for successful agents: the model and application capabilities, tools and frameworks to build, services to deploy and operate, and the ability to discover, push, and procure. As is generally acknowledged, deploying AI agents into production is a significant challenge, as they often remain in development or as a proof of concept. AWS is tackling head-on the burden of the “undifferentiated heavy lifting” necessary to deploy and operate agents in production.
As the capabilities of agents and models increase, vendors are beginning to look beyond building agents to filling the gaps required to deploy them into production. With its Amazon Bedrock AgentCore announcement, AWS is for the first time taking a more systematic approach, one that software, DevOps, and platform engineers will recognize as the Software Development Lifecycle (SDLC) adapted to the unique needs of agents: an Agent Development Lifecycle (ADLC).
AgentCore offerings align with the “heavy lifting” roadblocks presented in the AWS Summit keynote by addressing key areas of identity, memory, runtime, gateway, and observability, with the added benefits of a code interpreter (a sandboxed runtime for agent-generated code) and a browser tool that enables agents to work within users’ browsers. What’s important to recognize is that solving these roadblocks one by one is essential, but it does not by itself make an agent development lifecycle. Software engineers understand that creating software is about creating a flow that moves work through pipelines leading to production deployment.
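To make the preceding component list more concrete, the vendor-neutral sketch below shows where memory, a tool gateway, and observability typically sit in an agent’s request loop. All class and function names are hypothetical illustrations; this is not the AgentCore API.

```python
# Conceptual, vendor-neutral sketch of where AgentCore-style concerns
# (memory, tool gateway, observability) sit in an agent loop.
# All names here are hypothetical; this is not the AgentCore API.
import logging
from dataclasses import dataclass, field

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent")  # stands in for an observability backend


@dataclass
class Memory:
    """Short-term memory: prior turns the agent can recall."""
    turns: list = field(default_factory=list)

    def recall(self, k: int = 5):
        return self.turns[-k:]

    def store(self, role: str, text: str):
        self.turns.append((role, text))


class ToolGateway:
    """Central registry that mediates (and could authorize) tool calls."""
    def __init__(self):
        self._tools = {}

    def register(self, name, fn):
        self._tools[name] = fn

    def call(self, name, **kwargs):
        log.info("tool call: %s %s", name, kwargs)  # observability hook
        return self._tools[name](**kwargs)


def run_turn(user_input: str, memory: Memory, gateway: ToolGateway) -> str:
    """One agent turn: recall context, act via the gateway, remember the result."""
    memory.store("user", user_input)
    context = memory.recall()
    # A real agent would call a model here to plan; we hard-code one tool call.
    result = gateway.call("search", query=user_input)
    answer = f"(based on {len(context)} remembered turns) {result}"
    memory.store("agent", answer)
    return answer


if __name__ == "__main__":
    gw = ToolGateway()
    gw.register("search", lambda query: f"top hit for '{query}'")
    mem = Memory()
    print(run_turn("latest S3 Vectors pricing", mem, gw))
```

The point of the sketch is the flow, not the pieces: identity, memory, gateway, and observability only pay off when they are wired into a single pipeline that carries an agent from development to production.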
AgentCore, now in preview, lays the groundwork for an AWS agent development process. AWS has the benefit of being highly proficient at creating successful software-based cloud services and products. This is vital to compete with Microsoft and Google, who have unique strengths in their cultures and processes. AWS must apply its engineering expertise to the unique AI agent pipelines and workflows, building a seamless flow for agent developers across AgentCore components that increases velocity and reduces the friction of deploying AI agents into production.
Gaps remain in AgentCore’s agent development pipeline, including security, quality, governance, revision management, and platform engineering. Futurum expects to see further progress in these areas in 2025 and 2026. The first hyperscaler to successfully deploy customer agents into production stands to gain the most in the competitive agent-based AI space.
AWS also announced Kiro, an agentic IDE that streamlines the developer experience from concept to production. It combines natural language with software engineering practices to generate scalable, maintainable code. Kiro is aimed at helping developers quickly create specs using natural language. It then uses those specs to generate tasks, identify dependencies, and link them to requirements, transforming static documents into evolving sources of truth. Additionally, Agent Hooks automates tedious tasks by triggering workflows based on file changes or manual prompts, such as updating tests or refreshing documentation.
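As a rough, hypothetical illustration of hook-style automation, the sketch below polls a spec file for changes and fires a placeholder task when it is modified. It uses only the Python standard library and does not reflect Kiro’s actual Agent Hooks mechanism or configuration.

```python
# Conceptual illustration of hook-style automation triggered by file changes.
# This is a generic polling watcher, not Kiro's actual Agent Hooks mechanism.
import os
import time

WATCHED_FILE = "api_spec.md"  # hypothetical spec file


def refresh_docs_and_tests(path: str):
    # Placeholder for the work a hook would delegate to an agent,
    # e.g. "update the tests and docs affected by this spec change".
    print(f"hook fired: regenerate tests/docs for {path}")


def watch(path: str, interval: float = 2.0):
    if not os.path.exists(path):
        open(path, "a").close()  # create the watched file if it does not exist
    last_mtime = os.stat(path).st_mtime
    while True:
        time.sleep(interval)
        mtime = os.stat(path).st_mtime
        if mtime != last_mtime:  # file changed since the last poll
            last_mtime = mtime
            refresh_docs_and_tests(path)


if __name__ == "__main__":
    watch(WATCHED_FILE)
```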
AWS Marketplace
AWS has significantly expanded its Marketplace to include AI agents and agentic tools. This strategic move aims to accelerate enterprise adoption of agentic AI by streamlining how organizations discover, procure, and implement advanced AI solutions.
The AWS Marketplace is a centralized catalog for AI agents, tools, solutions, and services from partners such as Anthropic, Salesforce, and Deloitte. Customers can deploy pre-built agents and tools on the AgentCore runtime and access a wide array of offerings, including agent tools, knowledge bases, guardrails, and professional services.
This Marketplace enables customers to quickly test and deploy AI agent solutions from various vendors, facilitating rapid production and scaling. It is designed to be a secure and governable platform that connects AI builders and buyers, simplifying the adoption and implementation of AI technologies for organizations.
AWS doubles down on metadata and semantic meaning
While AI agent tooling dominated much of AWS Summit 2025, a more foundational story unfolded in the data layer. AWS introduced capabilities that effectively transform its storage and analytics services into a more cohesive, AI-ready data plane. To that end, AWS expanded its object storage layer and metadata cataloging capabilities, directly aiming at persistent bottlenecks in AI adoption, namely unfettered access to high-quality data at scale.
This unification starts at the foundation with an update to Amazon S3 Metadata, which was initially released earlier this year. This solution now grants visibility into all S3 objects, both existing and new, via Amazon S3 Tables. Users can now use Amazon’s analytics tools (e.g., QuickSight) across S3 storage footprints to capture and inspect disparate AI assets such as video annotation information. Complementing this, the introduction of Amazon S3 Vectors shows that AWS can match rivals blow for blow in pushing AI-specific workloads down to its object layer. This object-level vector capability is gaining traction in the market, as it addresses the fact that traditional vector databases cannot efficiently handle some AI agentic workloads with rapidly updating memory and RAG application requirements.
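For readers less familiar with the workload S3 Vectors targets, the sketch below shows the basic retrieval pattern behind agent memory and RAG: cosine similarity over stored embeddings. It is a conceptual NumPy illustration only and does not use the S3 Vectors API.

```python
# Minimal illustration of the embedding-retrieval pattern behind agent memory
# and RAG workloads; conceptual only, not the S3 Vectors API.
import numpy as np

rng = np.random.default_rng(0)

# Pretend these are embeddings of stored documents / agent memories.
doc_ids = ["runbook-42", "meeting-notes", "support-ticket-7"]
doc_vectors = rng.normal(size=(3, 384))  # 384-dimensional embeddings


def top_k(query_vec: np.ndarray, vectors: np.ndarray, k: int = 2):
    """Return indices of the k most similar vectors by cosine similarity."""
    q = query_vec / np.linalg.norm(query_vec)
    v = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    scores = v @ q
    return np.argsort(scores)[::-1][:k], scores


query = rng.normal(size=384)  # embedding of the user query
idx, scores = top_k(query, doc_vectors)
for i in idx:
    print(f"{doc_ids[i]}: similarity={scores[i]:.3f}")
```

The operational challenge is doing exactly this, continuously, over billions of frequently updated embeddings, which is where pushing vector storage and search down to the object layer changes the cost and scaling calculus.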
The linchpin of this strategy, however, is Amazon SageMaker Catalog, a new offering capable of serving as a central clearinghouse for semantic meaning in the enterprise. Accessible as a complement to and extension of AWS Glue Data Catalog (initially intended to streamline data integration), SageMaker Catalog enhances data discovery by using GenAI to enrich metadata with business context. Through tight integration with Amazon QuickSight and SageMaker Unified Studio, this new catalog can help business users, data professionals, and AI agent processes find, understand, and use data consistently. This will help AWS directly address customer challenges such as data sprawl, data quality blind spots, inconsistent access policies, and regulatory compliance complexities.
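As a conceptual illustration of GenAI-driven metadata enrichment, the sketch below builds an enrichment prompt from a table’s technical metadata and attaches a stubbed model response as business context. The `describe_with_llm` helper and the data shapes are hypothetical and do not reflect SageMaker Catalog’s interface.

```python
# Conceptual sketch of GenAI metadata enrichment: technical metadata in,
# business-context description out. The LLM call is stubbed; none of this
# reflects SageMaker Catalog's actual interface.
technical_metadata = {
    "table": "ord_txn_v2",
    "columns": ["cust_id", "sku", "txn_amt_usd", "txn_ts"],
    "source": "s3://sales-lake/orders/",  # hypothetical bucket
}


def describe_with_llm(prompt: str) -> str:
    # Stand-in for a call to a generative model.
    return ("Order transactions per customer and product, with USD amounts "
            "and timestamps; suitable for revenue and churn analysis.")


prompt = (
    "Write a one-sentence business description for this table so analysts "
    f"can find it by meaning, not by name: {technical_metadata}"
)

enriched = dict(technical_metadata, business_description=describe_with_llm(prompt))
print(enriched["business_description"])
```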
Ultimately, these announcements can help customers transition from isolated AI prototypes to production-grade applications, especially where data accuracy and governance are paramount. They also demonstrate AWS’s ambition to abstract away the complexity of its own data infrastructure, allowing customers to focus less on provisioning and plumbing and more on turning data into tangible business outcomes.
Customizable Nova models
Amazon has made rapid progress with its Nova model family. Since announcing the family in December 2024, Amazon has introduced eight models in just over six months. However, until now, they have not been customizable, limiting their appeal to enterprise customers with specific data sets and needs. At the Summit, AWS announced comprehensive customization options for Nova models on SageMaker AI, covering all stages from pre-training to post-training, which unlocks entirely new use cases. Furthermore, Amazon has improved the reliability of agents with Nova Act, which enables developers to build agents that take actions in web browsers. AWS says Nova Act has achieved over 90% end-to-end task completion rates on early enterprise use cases, which AWS pitches as a step toward artificial general intelligence (AGI).
What to Watch:
- Will AWS be able to drive S3 as a native platform for vectors in support of AI agent memory and RAG applications, compared with rival vector store solutions from storage hardware and database competitors? Will this truly shift the economics of large-scale vector embedding, indexing, storage, and semantic search retrieval?
- Moving forward, AWS will need to refine how SageMaker Catalog works with or in opposition to rival catalog offerings from Databricks, Snowflake, Atlan, Alteryx, and Collibra (with which AWS currently offers bi-directional integration). Data catalogs are fast becoming a significant source of data gravity in the enterprise. AWS will need to build bridges or stake out a clear competitive and/or cooperative space across the market.
- While in preview, Amazon Bedrock AgentCore will continue to improve incrementally as customers exercise its capabilities. With multiple components in AgentCore, its general availability may take longer or may occur on a component-by-component basis. Look for customer success stories that are highlighted as one indicator of progress.
Read more about AWS’s announcements from its NYC Summit on the AWS website.
Disclosure: Futurum is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.
Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of Futurum as a whole.
Other insights from Futurum:
AWS re:Inforce 2025 – Identity, Application Security, and More
Amazon Unveils Models, Chips, and Tools at re:Invent, Boosting its AI Credentials