
Adobe Experience Platform AI Assistant Is Generally Available


The News: Adobe announced the general availability of Adobe Experience Platform AI Assistant, a conversational interface that can answer technical questions, automate tasks, simulate outcomes, and generate new audiences and journeys. Adobe Experience Platform AI Assistant is embedded within Adobe Experience Cloud applications, including Adobe Real-Time Customer Data Platform, Adobe Journey Optimizer, and Adobe Customer Journey Analytics, and is powered by generative experience models. These models capture Adobe product knowledge and insights based on an organization’s unique data, campaigns, audiences, and business goals, while also respecting brand guidelines and strict privacy controls.

You can read the press release describing the features and capabilities at Adobe’s website.


Analyst Take: Adobe Experience Platform AI Assistant, the generative AI-powered conversational interface embedded within Adobe Experience Cloud applications, is now generally available to customers. The assistant utilizes generative AI models that incorporate Adobe product knowledge and insights based on a customer’s own data, campaigns, audiences, and business goals. Further, Adobe has implemented controls that enable the assistant to adhere to brand guidelines and data privacy rules.

Key Adobe Experience Platform AI Assistant Capabilities

According to Adobe, Adobe Experience Platform AI Assistant provides several distinct capabilities designed to reduce worker time and effort by leveraging automation and generative AI.

  • Product Expertise: Adobe Experience Platform AI Assistant can perform a range of operational tasks, providing users with insights that reduce time spent on problem exploration and resolution.
  • Content Generation and Automation: Brands are able to generate everything from new customer experiences to audiences for personalization campaigns and visualizations for data analysis, including generating entire marketing assets.
  • Predictive Insights and Recommendations: Upcoming capabilities will enable teams to simulate outcomes and optimize their marketing efforts in addition to recommending the next-best action.

How Adobe Leverages Models to Provide Actions in Context

AI Assistant uses a collection of experience models, each designed to address the context of a specific AI Assistant use case and enable fast data navigation as needed. According to Adobe, the generative AI experience models incorporate three key dimensions (a minimal, hypothetical sketch of how these layers might compose follows the list):

  • Base models are foundational to AI Assistant and are applied for all customers; they include large language models (LLMs), linguistic models, and task-specific language models that understand natural language within prompts from users. Base models are grounded in Adobe data so that AI Assistant users can ask open-ended questions and get guidance and insights to help them progress through tasks and answer the questions they may have while using Adobe applications. This limits the likelihood of model hallucination or the incorporation of content not vetted by Adobe.
  • The second dimension includes custom models that are designed to augment base models and are grounded in customer data to give customer-specific context to AI Assistant. These allow customers to query their data, discover data insights, and understand trends. Custom models can also be used to power predictive use cases, such as forecasting and making recommendations relevant to a customer’s business. Adobe notes that because these models utilize customer data, they won’t be shared outside of that specific enterprise, and role-based access control limits what each user can access.
  • Finally, decisioning services are layered on top of models and data to help inform what AI Assistant should serve up to the user based on current and historical context, which could include next steps, recommendations, or a response to previous questions. The decisioning services support multi-thread or multi-turn questions, enabling support for more complex use cases.
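
Adobe has not published the internal APIs behind these layers, so the minimal Python sketch below is only an illustration of how the three dimensions described above might compose: a base layer grounded in product documentation, a customer-specific layer gated by role-based access control, and a decisioning layer that routes each turn and retains history for follow-up questions. The names and routing heuristic (UserContext, decisioning_layer, the keyword check) are hypothetical stand-ins, not Adobe’s implementation.

```python
from dataclasses import dataclass, field

@dataclass
class UserContext:
    """Hypothetical per-user context: role plus conversation history."""
    role: str
    history: list = field(default_factory=list)

def base_model_answer(prompt: str) -> str:
    """Stand-in for the foundational layer: models grounded in Adobe
    product documentation, used for open-ended product questions."""
    return f"[product guidance for: {prompt}]"

def custom_model_answer(prompt: str, ctx: UserContext) -> str:
    """Stand-in for the customer-specific layer: models grounded only in
    that enterprise's data, gated by role-based access control."""
    if ctx.role not in {"analyst", "marketer", "admin"}:
        return "[access denied by role-based access control]"
    return f"[customer-data insight for: {prompt}]"

def decisioning_layer(prompt: str, ctx: UserContext) -> str:
    """Stand-in for the decisioning services: chooses which layer to
    invoke based on the current turn and keeps history for multi-turn use."""
    needs_customer_data = any(
        keyword in prompt.lower() for keyword in ("audience", "trend", "forecast")
    )
    answer = (custom_model_answer(prompt, ctx)
              if needs_customer_data
              else base_model_answer(prompt))
    ctx.history.append((prompt, answer))  # retained for follow-up turns
    return answer

if __name__ == "__main__":
    ctx = UserContext(role="marketer")
    print(decisioning_layer("How do I create a segment?", ctx))
    print(decisioning_layer("Which audience is trending this week?", ctx))
```

The point of the sketch is the separation of concerns: the base layer never sees customer data, the customer layer is scoped to a single enterprise and a user’s role, and the decisioning layer is the only component that sees conversation history.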

Enacting Guardrails to Prevent Hallucination, Data Leakage, and Inaccurate Responses

Utilizing generative AI tools to answer questions is still novel and can be a fun way to pass the time. But the only way to truly leverage their power within an enterprise environment is to ensure they are developed with enterprise-grade trust, governance, and customer data stewardship in mind. That is the philosophy Adobe employs, which encompasses several key principles:

  • Generative AI models developed on top of customer data are truly bespoke for each customer, thereby preventing data leakage between customers.
  • No LLM is trained or fine-tuned using any of the customer interactions or customer data. Furthermore, logging for LLMs has been disabled as an extra precaution.
  • Adobe uses filters on the prompt and answer pipeline to ensure that the conversation is safe, leveraging third-party LLMs’ content filtering services to moderate sensitive or dangerous content. Adobe also stated it has developed other filters to scrub personally identifiable information (PII) and filter out sensitive inputs. As such, responses are returned to the user only if they pass both checks (a minimal illustrative sketch of such a two-stage check follows this list).
  • No third-party sources are used to provide responses back to the customer.
  • Every answer provided by AI Assistant has appropriate layers of verifiability.
  • All Adobe generative AI features go through Adobe’s AI governance process and are aligned with Adobe’s AI ethics.
  • Adobe has developed a series of internal models, operating within the Adobe ecosystem, that help with intent classification, natural-language-to-query translation, citations, and more. Keeping these models internal allows Adobe to apply controls that continuously improve the correctness of answers, while remaining transparent about the internal architecture and keeping customers informed.
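
To make the “both checks must pass” flow concrete, here is a minimal, hypothetical Python sketch of a two-stage guardrail pipeline: the prompt is scrubbed of PII and screened for sensitive inputs before generation, and the response is screened by a stand-in for a third-party moderation service before it reaches the user. The regex, blocked terms, and function names (scrub_pii, guarded_answer) are illustrative assumptions, not Adobe’s actual filters.

```python
import re

# Deliberately simplified PII pattern (emails only) for illustration;
# a production system would use far more comprehensive detection.
PII_PATTERN = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

# Hypothetical examples of sensitive inputs to reject outright.
BLOCKED_TERMS = {"password dump", "exploit"}

def scrub_pii(text: str) -> str:
    """Redact PII before the prompt reaches any model."""
    return PII_PATTERN.sub("[REDACTED]", text)

def prompt_is_safe(prompt: str) -> bool:
    """Input-side check: reject prompts containing sensitive terms."""
    return not any(term in prompt.lower() for term in BLOCKED_TERMS)

def response_is_safe(response: str) -> bool:
    """Output-side check: stand-in for a third-party content moderation
    service that flags sensitive or dangerous content."""
    return "harmful" not in response.lower()

def guarded_answer(prompt: str, generate) -> str:
    """Return a model answer only if both the prompt and the response
    pass their respective checks; otherwise refuse."""
    clean_prompt = scrub_pii(prompt)
    if not prompt_is_safe(clean_prompt):
        return "Request declined by input filter."
    response = generate(clean_prompt)
    if not response_is_safe(response):
        return "Response withheld by output filter."
    return response

if __name__ == "__main__":
    fake_llm = lambda p: f"Answer to: {p}"
    print(guarded_answer("Build a segment for users like jane@example.com", fake_llm))
```

The design choice worth noting is that the input and output checks are independent: a prompt that passes the input filter can still have its response withheld, which is what “only provided to the user if they pass both checks” implies.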

AI Safety Becoming Table Stakes

Organizations are rightly concerned about the use of generative AI in commercial settings, given the significant negative impact that an incorrect, biased, or misleading response can have on a company’s financial position or corporate reputation. Indeed, Futurum Intelligence’s survey of 1,009 decision makers conducted in mid-2023 found that the top vendor decision criteria were expertise and experience (40.2% of respondents), followed by price and contractual terms (31.3%) and data handling and privacy controls (30.3%). Adobe has clearly focused on these criteria, and its comprehensive strategy, principles, and messaging are designed to help assuage customers’ fears about generative AI.

However, while Adobe is clearly banking on these principles to set itself apart from its competitors – and it does benefit from being a leader in the space – we’re quickly seeing other participants level up their strategy and messaging around AI safety and guardrails. Essentially, we believe that a comprehensive, transparent, and thorough approach to AI model development, use, and guardrails will become table stakes over time, which is a good thing for the industry and the users of generative AI.

If Adobe wants to continue to use AI safety as a differentiator, and I believe it can, it must demonstrate consistency across complex use cases that incorporate real-time data, cross-platform workflows, and federated data.

Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.

Other Insights from The Futurum Group:

Adobe Announces Firefly Image 3 Model and Photoshop Enhancements

Adobe’s Use of Midjourney

Incorporating Generated Images into Adobe’s Firefly Model

Image Credit: Adobe

Author Information

Keith has over 25 years of experience in research, marketing, and consulting-based fields.

He has authored in-depth reports and market forecast studies covering artificial intelligence, biometrics, data analytics, robotics, high performance computing, and quantum computing, with a specific focus on the use of these technologies within large enterprise organizations and SMBs. He has also established strong working relationships with the international technology vendor community and is a frequent speaker at industry conferences and events.

In his career as a financial and technology journalist he has written for national and trade publications, including BusinessWeek, CNBC.com, Investment Dealers’ Digest, The Red Herring, The Communications of the ACM, and Mobile Computing & Communications, among others.

He is a member of the Association of Independent Information Professionals (AIIP).

Keith holds dual Bachelor of Arts degrees in Magazine Journalism and Sociology from Syracuse University.
