
watsonx.ai Leverages Foundation Models to Accelerate AI Application Development
The News: IBM announced on July 11 that it has begun to roll out IBM watsonx, the company’s enterprise-ready AI and data platform designed to help organizations accelerate and scale AI. Previewed two months ago at IBM THINK, watsonx encompasses the watsonx.ai studio for new foundation models (FMs), generative AI, and machine learning (ML), and the watsonx.data fit-for-purpose data store, built on an open lakehouse architecture, both of which are now available. The watsonx.governance toolkit, intended to help enable AI workflows built with responsibility, transparency, and explainability, is coming later in 2023. You can read the original press release at this link.
Analyst Take: IBM announced that it is making several watsonx products generally available to enterprise customers, with the goal of helping organizations accelerate the development of AI technology, which can be then deployed at scale. watsonx.ai, the enterprise studio targeted at AI builders and developers, is among the applications that are now available, and supports the training, validation, tuning, and deployment of traditional ML and generative AI capabilities, powered by FMs.

Pre-Trained Models from IBM and Hugging Face for AI Development Tasks

Today in watsonx.ai, AI builders can leverage pre-trained models from IBM and from the Hugging Face community for a range of AI development tasks, including natural language processing (NLP)-type tasks, such as question answering, content generation and summarization, text classification, and extraction. Further, watsonx.ai includes several additional capabilities designed to enable the creation and use of generative AI-based applications, including:

  • Retrieval-Augmented Generation (RAG), which allows the creation of a chatbot or a question-answering feature grounded on specific content
  • Summarization, which allows the transformation of text into overviews with key points
  • Insight extraction, which analyzes existing unstructured text and then surfaces insights in specialized domain areas
  • Content generation, which generates text for a specific purpose
  • Named Entity Recognition, which identifies and extracts essential and specific information from unstructured text
  • Classification, which allows the reading and classification of written input with zero examples
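To make the zero-shot classification idea above concrete, here is a minimal sketch of the kind of instruction-style prompt a foundation model can classify with, using no worked examples. The labels, template, and `build_zero_shot_prompt` helper are illustrative assumptions for this article, not IBM's watsonx.ai API.

```python
# Illustrative sketch: a zero-shot classification prompt.
# The model receives only an instruction and the input text -- no examples --
# and is asked to emit one of the candidate labels.

LABELS = ["complaint", "praise", "question"]  # hypothetical label set

def build_zero_shot_prompt(text: str, labels: list[str]) -> str:
    """Assemble an instruction-style prompt with zero worked examples."""
    label_list = ", ".join(labels)
    return (
        f"Classify the following message into one of: {label_list}.\n"
        f"Message: {text}\n"
        "Label:"
    )

print(build_zero_shot_prompt("My order arrived broken.", LABELS))
```

A hosted foundation model would then complete the prompt with one of the listed labels, which is what allows classification "with zero examples."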

In future releases, IBM says it will provide access to a larger variety of IBM-trained proprietary foundation models for efficient domain and task specialization.

Foundation Models Allow Enterprises to Jump-Start the Development Process

The use of foundation models allows organizations to create AI-driven applications more quickly than the traditional method of training each model to handle a specific task, because the process starts with a model that has already been pre-trained. With traditional model development, the process begins with the initial collection of data, followed by additional curation, cleaning, and labeling of that data for a specific task. To handle a different task, the same process must be run again, resulting in a significant time and resource commitment, given the need to label anywhere from thousands to millions of data points to ensure the model can be properly trained.

Automating Prompt Engineering Can Speed Up the Tuning Process

The use of FMs allows organizations to accelerate application development, because the models can often be used right out of the box. To provide more domain functionality, the model can (and should) be tweaked or tuned to suit a specific use case with as few as a hundred or a thousand additional data labels, which is far less time-consuming and expensive than developing another model from scratch. This enables faster time-to-value and, in many cases, the realization of a better ROI.

To tune a model, model builders often use prompt engineering, a process by which prompts are used to refine the model to format or structure its output based on the input provided via a prompt. A prompt engineer would input text with examples of how the model should respond, based on certain inputs, and over time, the model will be tuned to respond in the desired way.
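The few-shot pattern described above can be sketched as follows: worked input/output pairs are prepended to the new input so the model infers the desired response format. The example reviews, labels, and `build_few_shot_prompt` helper are illustrative assumptions, not a watsonx.ai interface.

```python
# Illustrative sketch of few-shot prompt engineering: prepend worked
# input/output pairs, then leave the final answer slot blank for the
# model to complete in the same format.

EXAMPLES = [  # hypothetical demonstration pairs
    ("The battery died after one day.", "negative"),
    ("Setup took two minutes. Love it!", "positive"),
]

def build_few_shot_prompt(examples: list[tuple[str, str]], new_input: str) -> str:
    """Format demonstration pairs, then append the unanswered query."""
    shots = "\n".join(f"Review: {t}\nSentiment: {s}" for t, s in examples)
    return f"{shots}\nReview: {new_input}\nSentiment:"

print(build_few_shot_prompt(EXAMPLES, "It broke on arrival."))
```

With zero pairs this reduces to zero-shot prompting; with one pair, one-shot -- the same spectrum the Prompt Lab exposes.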

However, for large enterprises with hundreds of tasks or applications, tuning is a better approach. Model tuning automates the creation of prompts, so the prompt engineering process no longer needs to be done by hand, enabling a more cost-effective way to further refine the model’s output as the number of tasks, functions, or variables increases. The watsonx.ai studio includes the Prompt Lab, in which users can experiment with zero-shot, one-shot, or few-shot prompting to support a range of NLP-type tasks, including question answering, content generation and summarization, text classification, and extraction.
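To illustrate why automating prompt creation matters at enterprise scale, the sketch below generates a prompt per task from a shared template instead of hand-writing each one. The task names, instructions, and `prompts_for` helper are hypothetical; real prompt tuning goes further by learning soft prompts from labeled data rather than assembling text templates.

```python
# Illustrative sketch: generating prompts programmatically across many
# tasks, the kind of repetitive hand-authoring that prompt tuning
# automates for enterprises with hundreds of tasks or applications.

TASKS = {  # hypothetical task catalog
    "summarization": "Summarize the text in one sentence.",
    "extraction": "List the named entities in the text.",
    "classification": "Label the text as positive or negative.",
}

def prompts_for(text: str) -> dict[str, str]:
    """Build one ready-to-send prompt per registered task."""
    return {
        task: f"{instruction}\nText: {text}\nAnswer:"
        for task, instruction in TASKS.items()
    }

for task, prompt in prompts_for("IBM announced watsonx on July 11.").items():
    print(f"--- {task} ---\n{prompt}")
```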

Later in the year, future watsonx.ai releases will add prompt-tuning and fine-tuning capabilities as part of a Tuning Studio, to help tune FMs with labeled data for better performance and accuracy.

Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually, based on data and other information that might have been provided for validation, and are not those of The Futurum Group as a whole.

Other insights from The Futurum Group:

IBM watsonx.data is Now Generally Available as Part of Major watsonx Announcement

IBM Brings Advanced Feature Set to New FlashSystem 5045

IBM’s Opportunity for Generative AI in the Enterprise

Author Information

Keith Kirkpatrick is VP & Research Director, Enterprise Software & Digital Workflows for The Futurum Group. Keith has over 25 years of experience in research, marketing, and consulting-based fields.

He has authored in-depth reports and market forecast studies covering artificial intelligence, biometrics, data analytics, robotics, high performance computing, and quantum computing, with a specific focus on the use of these technologies within large enterprise organizations and SMBs. He has also established strong working relationships with the international technology vendor community and is a frequent speaker at industry conferences and events.

In his career as a financial and technology journalist, he has written for national and trade publications, including BusinessWeek, CNBC.com, Investment Dealers’ Digest, The Red Herring, The Communications of the ACM, and Mobile Computing & Communications, among others.

He is a member of the Association of Independent Information Professionals (AIIP).

Keith holds dual Bachelor of Arts degrees in Magazine Journalism and Sociology from Syracuse University.
