AWS Pushes the Agent Stack: Quick, Connect Verticals, OpenAI on Amazon Bedrock

Analyst(s): Mitch Ashley, Keith Kirkpatrick, Fernando Montenegro, Alex Smith
Publication Date: May 4, 2026

What is Covered in This Article:

  • AWS held its What’s Next with AWS event on April 28, 2026, anchoring its agent stack strategy across productivity surfaces, vertical workflows, and platform infrastructure.
  • Amazon Quick launched in preview, Connect rebranded around four vertical agentic solutions, and OpenAI models, Codex, and Managed Agents arrived on Amazon Bedrock in limited preview.
  • Bedrock Managed Agents Powered by OpenAI provides the opinionated managed runtime; AgentCore is repositioned as the open, model-agnostic orchestration layer that provides the default compute environment for Managed Agents.
  • Codex on Amazon Bedrock removes the procurement and identity friction that AWS-anchored enterprises faced when adopting OpenAI’s coding agent under Microsoft billing.
  • AWS is now contesting the agent stack across three layers simultaneously, with the control point sitting in procurement, identity, and observability rather than in any single product.

The Event — Major Themes & Vendor Moves: AWS held “What’s Next with AWS” on April 28, 2026, in San Francisco, CA, anchored by a fireside conversation with AWS CEO Matt Garman on enterprise agent adoption. The event consolidated AWS’s agentic strategy under a single message: putting agents to work across business domains. Amazon Connect was rebranded to stand for “agentic business solutions,” and announcements spanned productivity, vertical applications, and platform infrastructure.

At the productivity layer, AWS launched Amazon Quick Desktop in preview. New capabilities include Apps in Quick, social sign-in, Google Workspace connectivity, document generation, and Microsoft Office extensions. Connect expanded into three new agentic solutions: Connect Decisions for supply chain planning, Connect Talent for high-volume hiring, and Connect Health for clinical workflows. Connect Health has logged more than one million ambient documentation visits at One Medical, with five additional launch partners across health systems and EHR vendors.

OpenAI CEO Sam Altman joined the event via recording to cover the expansion of the AWS-OpenAI partnership, the latest OpenAI models, and OpenAI’s Codex coding agent. At the platform layer, AWS announced Amazon Bedrock Managed Agents Powered by OpenAI, with preview opening this week. The offering combines OpenAI’s agent harness with OpenAI frontier models on Bedrock infrastructure, framed as the opinionated counterpart to AgentCore. AgentCore was repositioned as the open, model-agnostic, framework-agnostic orchestration layer that provides the default compute environment for Managed Agents. AgentCore’s recently shipped pieces, including the Agent Registry preview launched April 9 and A2A protocol support live in AgentCore Runtime since November 2025, were positioned at the event as the discovery and interoperability foundation under the new managed runtime.

Analyst Take — Amazon Quick: AWS has launched Amazon Quick, a desktop AI assistant that promises integration across disparate enterprise applications, local files, and cloud platforms. This move challenges the fragmented AI assistant landscape and pressures Microsoft, Google, and Salesforce to accelerate their cross-platform agent strategies.

Amazon Quick’s pitch is that it can orchestrate actions and surface insights across multiple platforms, potentially reducing the friction of context-switching and shadow IT. The enterprise AI assistant market has been defined by vendor lock-in, with Microsoft Copilot and Google Gemini each optimizing for their native productivity suites.

AWS’s cross-platform approach could appeal to organizations running hybrid environments or resisting single-vendor dependency. However, AWS must deliver not just breadth of integration but depth, meaning reliable, context-rich actions across dozens of APIs.

The main challenges surrounding Quick involve balancing real autonomy with user control, securely integrating diverse data sources and identities, and creating a trusted, proactive AI companion. Additionally, there may be hurdles in explaining its role within the broader AWS agent stack and encouraging adoption among non-technical users without extensive training.

Databricks and Anthropic are also building agentic platforms, but lack AWS’s distribution muscle. That said, for Amazon Quick to attract enterprise attention, it must prove it can securely access, contextualize, and act on sensitive enterprise data without introducing new governance risks.

Amazon Connect

Connect, long known as AWS’s cloud contact center, is now a suite of four vertical agentic AI solutions: Amazon Connect Decisions (supply chain optimization), Talent (hiring), Customer (customer engagement), and Health (healthcare administration). Each offering is built to integrate with existing business workflows, aiming to minimize disruptive process overhauls while embedding Amazon’s operational expertise.

The expansion of Amazon Connect into Decisions, Talent, Customer, and Health is a direct challenge to the horizontal platform narrative that’s dominated enterprise AI for the past two years. By embedding agentic AI into vertical workflows, Amazon is betting that deep operational expertise will matter more than generic orchestration. Competitors such as Microsoft, Salesforce, and ServiceNow have all bet heavily on agentic AI, but Amazon’s approach leans on domain-specific operational knowledge and claims of seamless integration.

Amazon is promising minimal workflow disruption, but integration is where most AI projects fail. Incumbents such as ServiceNow and Salesforce have spent years building connectors, compliance frameworks, and governance controls. Amazon Connect’s new agentic AI solutions will need to prove they can slot into complex, regulated enterprise environments without introducing new risks or integration headaches.

Amazon’s vertical agentic AI solutions promise to unify fragmented business processes, but every platform vendor claims the same. The winners in this wave will be those who deliver open, interoperable agentic frameworks that empower enterprises to orchestrate across best-of-breed and platform-native tools.

Furthermore, customers are already invested in contact center suites, supply chain planning tools, and healthcare systems. While these existing tools may be suboptimal, they are entrenched. Amazon Connect will need to demonstrate how these solutions deliver outcomes that justify ripping and replacing, or deeply augmenting, those existing systems.

Bedrock Managed Agents Powered by OpenAI

AWS now ships a packaged path that combines OpenAI’s agent harness with OpenAI frontier models on Bedrock infrastructure. The managed offering handles memory, skills, identity, and compute selection, so teams no longer have to assemble those components themselves. For teams under delivery pressure, that collapses the integration work that has historically gated production deployment.

The structural move is that AWS is hosting a competitor’s inference runtime at margin-positive scale. OpenAI’s commercial relationship with Microsoft has historically routed enterprise inference through Azure. Bedrock now becomes a viable second venue, and AWS extracts the platform rent on every token.

For buyers, the opinionated path presents a choice that the open path does not. Speed and managed governance trade against model portability and the ability to substitute providers. Teams that selected AgentCore for vendor neutrality will need to defend that choice when Bedrock Managed Agents ship measurable speed advantages.

The control plane question is where this gets sharper. Bedrock Managed Agents uses AgentCore as the default environment, meaning the opinionated runtime sits atop the open orchestration layer. That architectural choice protects AWS from lock-in to a single model provider and preserves a migration path if OpenAI’s commercial terms shift.

The real test comes when pricing, rate limits, and SLA terms are made public. If OpenAI routes meaningful enterprise inference volume through AWS, the multi-cloud math changes for any vendor whose agent stack assumes Azure exclusivity. Watch the customer logos that surface during the first 90 days of the preview.

OpenAI Codex on Amazon Bedrock

Codex enters AWS one day after Microsoft and OpenAI restructured their exclusivity arrangement. Enterprise teams can now authenticate with AWS credentials, route Codex inference through Amazon Bedrock, and apply usage toward existing AWS cloud commitments. The CLI, desktop app, and VS Code extension all connect to Bedrock through the same API surface that customers already use for Claude and Nova.
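
To make the "same API surface" point concrete, here is a minimal sketch of what routing a coding prompt through Bedrock's unified Converse API looks like with boto3. The model ID below is a placeholder of our own invention, not a published Bedrock identifier for any OpenAI model; the request shape, however, is the standard one Bedrock customers already use for Claude and Nova.

```python
# Sketch: invoking a model through Amazon Bedrock's unified Converse API.
# HYPOTHETICAL_MODEL_ID is a placeholder -- actual Bedrock IDs for OpenAI
# models were not published at the time of writing.
HYPOTHETICAL_MODEL_ID = "openai.codex-v1"  # placeholder, not a real Bedrock model ID

def build_converse_request(model_id: str, prompt: str, max_tokens: int = 1024) -> dict:
    """Assemble the keyword arguments for bedrock-runtime's converse() call."""
    return {
        "modelId": model_id,
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
        "inferenceConfig": {"maxTokens": max_tokens},
    }

def invoke(prompt: str) -> str:
    """Send the prompt to Bedrock using standard AWS credentials.

    Usage accrues to the account's existing AWS billing like any other
    Bedrock invocation, which is the procurement point made above.
    """
    import boto3  # imported lazily; requires AWS credentials to actually run

    client = boto3.client("bedrock-runtime")
    resp = client.converse(**build_converse_request(HYPOTHETICAL_MODEL_ID, prompt))
    return resp["output"]["message"]["content"][0]["text"]
```

Because the request shape is model-agnostic, swapping the coding agent's backing model is a one-line change to the model ID rather than a new vendor integration.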

Procurement Is the Structural Shift

The technical integration is straightforward. The procurement consolidation changes the dynamics of enterprise adoption. Cloud commitment dollars now flow toward an OpenAI coding agent without separate vendor onboarding, security review, or financial governance setup. That removes the friction enterprise CIOs cite as the primary blocker on coding agent adoption outside the GitHub Copilot footprint.

The competitive read points squarely at GitHub Copilot’s distribution moat. GitHub still owns the code-host context Copilot exploits, and Codex on Bedrock does not erase that advantage. What it does erase is the cloud-procurement gap that previously pushed AWS-anchored teams toward Microsoft-billed tooling. Coding agent selection becomes a question of harness quality and model fit; where the bill lands stops being a deciding factor.

Verification Debt Becomes a Bedrock Workload

The verification debt question gets sharper at AWS scale. More than four million weekly Codex users are already generating code, and Bedrock now serves as a venue for that code to reach enterprise repos under AWS identity and CloudTrail. Security and platform teams gain a new control surface in the inference path. The audit trail for agent-authored code sits inside the same observability stack that governs other Bedrock workloads, which simplifies governance for AWS-standardized organizations and creates harder questions for those running coding agents outside that perimeter.

AgentCore Stakes the Open Layer Claim

AgentCore now positions itself as the open orchestration layer underneath everything AWS is doing in agents. The Agent Registry shipped in preview earlier in April; A2A support has been live in AgentCore Runtime since November; and at What’s Next, the pair was reframed as the discovery and interoperability foundation under the new OpenAI-powered managed runtime. Together, these pieces make AgentCore the closest offering yet to a fully realized agent control plane.

The registry is the consequential piece. Discovery infrastructure decides which agents enterprise teams can find, evaluate, and deploy. That control point sits at the canonical knowledge authority layer, where authoritative metadata about agents is curated, governed, and rate-limited.

The competitive read sharpens against Microsoft and Google. Microsoft Foundry and Copilot Studio have similar stack ambitions with stronger productivity-suite integration. Google Vertex AI Agent Builder has the model and infrastructure pieces with weaker enterprise distribution. AWS is now bringing matching architectural components with the inference scale and customer base that make the registry consequential.

The marketplace adjacency deserves attention. Once registries proliferate inside large enterprises, AWS controls the discovery surface for every agent published into Bedrock. Cross-registry federation and external catalog integration are on the roadmap, and if those land, the registry becomes a distribution layer adjacent to a marketplace. The economics of that shift would change the calculus for ISVs that currently treat AWS as raw infrastructure.

The architectural test for AgentCore is whether the control plane claim holds when the opinionated path ships at scale. If Bedrock Managed Agents Powered by OpenAI grows faster than open AgentCore deployments, the practical center of gravity moves toward model-specific runtimes. Platform engineering leaders should treat the registry roadmap as the leading indicator. Watch publishing controls, identity and delegation models, and observability hooks over the next two quarters.

Cybersecurity as a Foundational Component

At this week’s AWS “What’s Next” event, the spotlight was dominated by application and partnership announcements, including Amazon Quick, the expanded Connect family, and a massive OpenAI partnership. Yet, beneath the surface of these productivity announcements lies a profound, albeit implicit, cybersecurity narrative.

AWS is making it clear that as AI capabilities expand, the enterprise security perimeter must evolve to meet them. AWS is hardening its infrastructure for zero-trust inference, enforcing “zero operator access” within Amazon Bedrock: neither AWS engineers nor any other operators have SSH or console access to customer data. Hardware-level isolation comes from the well-known Nitro architecture, which separates production workloads, and all system updates require cryptographic signatures to maintain physical and architectural boundaries.

Perhaps most compelling is AWS’s deployment of automated reasoning. Moving beyond stochastic guesswork, AWS’s long-term investment in robust logic primitives compiles business and security rules into verifiable logic instructions. This allows enterprises to deterministically verify that an AI model’s output strictly followed instructions, effectively moving authorization and policy enforcement entirely outside the model.
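
The pattern described here, rules checked deterministically outside the model, can be illustrated with a toy sketch. This is emphatically not AWS’s Automated Reasoning implementation, which compiles rules into formal logic; it only shows the general shape of moving policy enforcement out of the model and into a verifiable layer that inspects the model’s proposed action.

```python
# Toy illustration: business/security rules expressed as deterministic
# predicates, evaluated against a model's proposed action *outside* the model.
# Rule names and action fields are invented for illustration.
RULES = [
    ("refund must not exceed 500 USD",
     lambda act: act.get("type") != "refund" or act.get("amount", 0) <= 500),
    ("account closure requires human approval",
     lambda act: act.get("type") != "close_account" or act.get("human_approved", False)),
]

def verify(action: dict) -> list:
    """Return the names of rules the proposed action violates.

    An empty list means the action is provably compliant with every rule,
    regardless of what the model 'intended' -- the check is deterministic.
    """
    return [name for name, check in RULES if not check(action)]
```

The key property is that authorization no longer depends on the model following its prompt: a non-compliant action is rejected by code the enterprise controls.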

Finally, the new OpenAI integration underscores the power of existing AWS security primitives. By running Bedrock Managed Agents within native AWS environments, enterprises don’t have to reinvent the wheel. These agents can operate securely behind Virtual Private Clouds (VPCs) and are governed by standard Identity and Access Management (IAM) policies and CloudTrail for rigorous organizational auditing.
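
As a concrete illustration of governing agents with standard IAM primitives, here is a sketch of a policy document that permits Bedrock invocation only for traffic arriving through a specific VPC. The `bedrock:InvokeModel` actions and the `aws:SourceVpc` condition key are standard AWS constructs (the condition applies to requests made through a VPC endpoint); the wildcard resource and the policy as a whole are illustrative, not a recommended production configuration.

```python
# Illustrative IAM policy scoping Bedrock agent invocations to one VPC.
# aws:SourceVpc is populated only for requests traversing a VPC endpoint.
import json

def agent_invoke_policy(vpc_id: str) -> str:
    """Return an IAM policy document allowing Bedrock invocation only from vpc_id."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "AllowAgentInvokeFromVpc",
            "Effect": "Allow",
            "Action": [
                "bedrock:InvokeModel",
                "bedrock:InvokeModelWithResponseStream",
            ],
            "Resource": "*",  # in practice, scope to specific model ARNs
            "Condition": {"StringEquals": {"aws:SourceVpc": vpc_id}},
        }],
    }
    return json.dumps(policy, indent=2)
```

Because every invocation under such a policy also lands in CloudTrail, the same identity layer that gates the agent produces the audit trail for it.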

While the keynote didn’t fly a “cybersecurity” banner, the subtext is undeniable: the path to enterprise AI adoption is paved with deeply embedded, mathematically verified security controls.

What to Watch:

  • Watch whether AWS extends the managed agents pattern to Anthropic and other frontier model providers in the next two quarters. A portfolio of opinionated runtimes would create internal gravity against the open AgentCore path and make the open-layer claim harder to defend operationally.
  • Watch the agent supply chain controls AWS adds to the registry over the next two quarters: signing, provenance, dependency tracking, and revocation. Without these, the registry becomes a distribution surface rather than a control plane, and enterprise security teams will treat published agents as unvetted code.
  • Watch how Codex on Bedrock affects GitHub Copilot enterprise account positions inside organizations with significant AWS spend. Whether Microsoft accelerates Copilot’s multi-cloud billing options or doubles down on Azure-anchored differentiation will signal how seriously it takes the threat.
  • Watch which Connect Health partners convert to live deployments past One Medical’s ambient documentation footprint. Healthcare AI moves only as fast as EHR vendor cooperation, and the named launch partners signal which SI accounts AWS expects to land in the next two quarters.
  • Watch whether Quick’s free social sign-in feature drives enterprise pull-through beyond IT procurement. If line-of-business buyers self-onboard at scale, AWS gains a distribution flywheel that Microsoft 365 and Google Workspace have historically dominated, and the risk of channel disintermediation for SI partners doing custom dashboard work becomes immediate.

You can read the full announcement on What’s Next with AWS 2026 on the AWS website.

Disclosure: Futurum is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.
Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of Futurum as a whole.

Other Insights From Futurum:

Futurum Agent Control Plane Framework: A Reference Model for Production AI Agents

Can NetSuite’s Agentic ERP Model Survive the SaaS ‘Apocalypse’ and Win the Next AI Platform War?

Anthropic Glasswing: AI Vulnerability Detection Has Crossed a Threshold

Marketplace Ecosystem Map: Companies Reshaping Software Buying – Report Summary

Author Information

Mitch Ashley is VP and Practice Lead of Software Lifecycle Engineering for The Futurum Group. Mitch has more than 30 years of experience as an entrepreneur, industry analyst, product development leader, and IT leader, with expertise in software engineering, cybersecurity, DevOps, DevSecOps, cloud, and AI. As an entrepreneur, CTO, CIO, and head of engineering, Mitch led the creation of award-winning cybersecurity products utilized in the private and public sectors, including the U.S. Department of Defense and all military branches. Mitch also led managed PKI services for the broadband, Wi-Fi, IoT, energy management, and 5G industries, product certification test labs, an online SaaS business (93 million transactions annually), and the development of video-on-demand and Internet cable services and a national broadband network.

Mitch shares his experiences as an analyst, keynote and conference speaker, panelist, host, moderator, and expert interviewer discussing CIO/CTO leadership, product and software development, DevOps, DevSecOps, containerization, container orchestration, AI/ML/GenAI, platform engineering, SRE, and cybersecurity. He publishes his research on futurumgroup.com and TechstrongResearch.com/resources. He hosts multiple award-winning video and podcast series, including DevOps Unbound, CISO Talk, and Techstrong Gang.

Keith Kirkpatrick is VP & Research Director, Enterprise Software & Digital Workflows for The Futurum Group. Keith has over 25 years of experience in research, marketing, and consulting-based fields.

He has authored in-depth reports and market forecast studies covering artificial intelligence, biometrics, data analytics, robotics, high performance computing, and quantum computing, with a specific focus on the use of these technologies within large enterprise organizations and SMBs. He has also established strong working relationships with the international technology vendor community and is a frequent speaker at industry conferences and events.

In his career as a financial and technology journalist he has written for national and trade publications, including BusinessWeek, CNBC.com, Investment Dealers’ Digest, The Red Herring, The Communications of the ACM, and Mobile Computing & Communications, among others.

He is a member of the Association of Independent Information Professionals (AIIP).

Keith holds dual Bachelor of Arts degrees in Magazine Journalism and Sociology from Syracuse University.

Fernando Montenegro serves as the Vice President & Practice Lead for Cybersecurity & Resilience at The Futurum Group. In this role, he leads the development and execution of the Cybersecurity research agenda, working closely with the team to drive the practice's growth. His research focuses on addressing critical topics in modern cybersecurity. These include the multifaceted role of AI in cybersecurity, strategies for managing an ever-expanding attack surface, and the evolution of cybersecurity architectures toward more platform-oriented solutions.

Before joining The Futurum Group, Fernando held senior industry analyst roles at Omdia, S&P Global, and 451 Research. His career also includes diverse roles in customer support, security, IT operations, professional services, and sales engineering. He has worked with pioneering Internet Service Providers, established security vendors, and startups across North and South America.

Fernando holds a Bachelor’s degree in Computer Science from Universidade Federal do Rio Grande do Sul in Brazil and various industry certifications. Although he is originally from Brazil, he has been based in Toronto, Canada, for many years.

Alex Smith is Vice President & Practice Lead, Ecosystems, Channels, & Marketplaces at The Futurum Group. He is responsible for establishing and maintaining the Channels Research program as part of the overall Futurum GTM and Channels Practice. This includes overseeing the channel data rollout in the Futurum Intelligence Platform, primary research activities such as research boards and surveys, delivering thought-leading research reports, and advising clients on their indirect go-to-market strategies. Alex also supports the overall operations of the Futurum Research Business Unit, including P&L segmentation, sales and marketing alignment, and budget planning.

Prior to joining Futurum, Alex was VP of Channels & Enterprise Research at Canalys, where he led a multi-million dollar research organization with more than 20 analysts. He played an integral role in helping the Canalys research organization migrate into Omdia after Canalys was acquired in 2023. He is an accomplished research leader and an expert in indirect go-to-market strategies, and has delivered numerous keynotes at partner-facing conferences.

Alex is based in Portland, Oregon, but has lived in numerous places, including California, Canada, Saudi Arabia, Thailand, and the UK. He has a Bachelor in Commerce and Finance Major from Dalhousie University, Halifax Canada.

