Top Trends In AI This Week: October 9, 2023

Introduction: Generative AI is widely considered the fastest-moving technology innovation in history. It has captured the imagination of consumers and enterprises across the globe, spawning remarkable innovation and, along with it, a rapidly mutating market ecosystem. Generative AI has also generated a copious amount of news and hype. To avoid AI FOMO and find the right path, the wise will pay attention to trends rather than be distracted by every announcement and news byte.

Here are the top trends in AI this week:

Large Language Model and Foundation Model Mania Continues

The Trend: Large language models (LLMs) and foundation models continue to mutate. New players keep entering the market, and alliances are forming between model vendors and other players.

  • Mistral AI offers free LLM: Mistral’s 7B model is generally available under a freemium business model. Mistral says its “small” LLM offers similar capabilities to Llama 2 at lower compute costs. (A minimal loading sketch appears after this list.)
  • Researchers launch Phi-1.5, an LLM that is cheaper and faster: “By generating curated, high quality, synthetic data using existing LLMs (in this case, OpenAI’s ChatGPT) and training a new model on this, the researchers are able to achieve results comparable to leading LLMs at a fraction of the cost and training time.” (A sketch of this synthetic-data pattern appears after this list.)
  • Imbue claims to build LLMs that robustly reason: “We believe reasoning is the primary blocker to effective AI agents,” Imbue wrote in a blog post. “Robust reasoning is necessary for effective action. It involves the ability to deal with uncertainty, to know when to change our approach, to ask questions and gather new information, to play out scenarios and make decisions, to make and discard hypotheses and generally to deal with the complicated, hard-to-predict nature of the real world.”
  • AWS, Microsoft, and Google Cloud: Tying Up LLMs: On September 25, Amazon announced what it is calling a strategic collaboration with foundation model player Anthropic. Anthropic will use Amazon Web Services (AWS) as its primary cloud provider; employ AWS Trainium and Inferentia chips to build, train, and deploy future foundation models; and provide Amazon Bedrock customers with access to future generations of its foundation models. Amazon will invest up to $4 billion in Anthropic, take a minority ownership position in the company, and gain access to Anthropic models to incorporate into Amazon projects. Amazon’s $4 billion investment in Anthropic follows Microsoft’s $10 billion investment in OpenAI, announced in January. (A minimal Bedrock invocation sketch appears after this list.)
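
On the Mistral item, here is a minimal sketch of loading the publicly released checkpoint with Hugging Face transformers. The model ID "mistralai/Mistral-7B-v0.1" is the public checkpoint; the prompt, precision, and generation settings are illustrative assumptions, and running this requires a GPU with sufficient memory.

```python
# Minimal sketch: load the public Mistral 7B checkpoint and generate text.
# Requires the transformers and accelerate packages and a suitably large GPU.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-v0.1"  # public checkpoint on Hugging Face
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to reduce memory; an assumption
    device_map="auto",          # let accelerate place layers on available devices
)

# Illustrative prompt; generation settings are assumptions, not vendor guidance.
inputs = tokenizer("The top AI trend this week is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```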
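
The Phi-1.5 item describes a pattern (generate curated synthetic data with an existing LLM, then train a smaller model on it) that can be sketched in a few lines. The sketch below uses OpenAI's chat API as the "existing LLM"; the model name, topics, prompts, and output file are illustrative assumptions, not the researchers' actual pipeline.

```python
# Minimal sketch of the synthetic-data pattern: prompt an existing LLM for
# curated, textbook-style examples and save them for a later fine-tuning run.
# Model name, topics, prompts, and file name are illustrative assumptions.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

topics = ["sorting algorithms", "unit conversion", "basic probability"]
examples = []
for topic in topics:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # stand-in for the "existing LLM"
        messages=[
            {"role": "system",
             "content": "Write one short, textbook-quality exercise with a worked solution."},
            {"role": "user", "content": f"Topic: {topic}"},
        ],
    )
    examples.append({"topic": topic, "text": response.choices[0].message.content})

# Persist as JSONL, a format a downstream fine-tuning job could consume.
with open("synthetic_train.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```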
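
On the AWS-Anthropic tie-up, Bedrock exposes Anthropic models through a standard invoke call. A minimal sketch follows, assuming the AWS account has been granted access to an Anthropic model; the region, model ID, prompt, and token limit are illustrative.

```python
# Minimal sketch: invoke an Anthropic model through Amazon Bedrock with boto3.
# Assumes AWS credentials are configured and the account has model access;
# region, model ID, prompt, and max_tokens_to_sample are illustrative.
import json
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

body = json.dumps({
    # Claude's text-completions format expects the Human/Assistant framing.
    "prompt": "\n\nHuman: Summarize this week's AI news in one sentence.\n\nAssistant:",
    "max_tokens_to_sample": 200,
})

response = client.invoke_model(
    modelId="anthropic.claude-v2",
    body=body,
    contentType="application/json",
    accept="application/json",
)
result = json.loads(response["body"].read())
print(result["completion"])
```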

Analyst Take: LLMs and foundation models remain the center of the generative AI universe. There is enormous potential in these models, yet challenges persist; consequently, the sector is completely unsettled. The models are mutating and evolving: they are becoming “smaller” so they run more efficiently, and more narrowly focused so they produce more accurate, more secure results. More providers, with both open source and private options, are entering the marketplace. Now is the time for enterprises to explore, sandbox, and secure deals for LLM access, but to remain as flexible as possible; locking in long term to any LLM at this point would be shortsighted.

Enterprise DIY Dilemma

The Trend: The classic dilemma for enterprises is deciding between outsourcing and building and maintaining systems in-house. The AI lifecycle presents this dilemma in a particularly acute form.

Analyst Take: Although LLMs and foundation models are still mutating, once the market normalizes they will not be a core differentiator. Enterprises are realizing the core differentiator will be the ability to leverage their proprietary data using foundation models.

To do so, enterprises need access to their data, and they must decide how to run AI compute against that data. AI compute, spanning model training and model inference, has become among the heaviest of compute workloads. So, where is that workload best handled: in house or in the cloud? Handling it in house typically requires on-premises data center and supercomputing capabilities, including graphics processing units (GPUs). Chips to run AI, particularly GPUs, are in very short supply.

Aside from compute workloads, more enterprises are thinking about their data management and data governance: where to store data, how to federate it, and what data is leverageable for AI and what is not.

With all this in mind, enterprises are contemplating where to run their AI workloads and where to keep their data. Arguments are emerging that some enterprises will be able to run and manage all of this on premises more economically than they could by leveraging cloud compute and cloud data storage. At this point, it is difficult to say whether AI will stymie the overall broad market trend that has shifted toward cloud computing. Regardless, enterprises are weighing the economics and impact of how much AI they will house.
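
The on-premises-versus-cloud economics argument comes down to simple arithmetic on amortized hardware cost and utilization. The back-of-envelope sketch below makes that trade-off concrete; every figure in it (purchase price, depreciation window, overhead multiplier, cloud rate) is an illustrative assumption, not a quoted price.

```python
# Back-of-envelope comparison: amortized on-prem GPU cost vs. renting by the hour.
# All figures are illustrative assumptions, not quoted prices.
ONPREM_GPU_COST = 30_000.0   # purchase price per GPU (USD), assumed
LIFETIME_YEARS = 3           # depreciation window, assumed
OVERHEAD = 1.5               # multiplier for power, cooling, and staff, assumed
CLOUD_RATE = 4.00            # cloud price per GPU-hour (USD), assumed

hours = LIFETIME_YEARS * 365 * 24
onprem_per_hour = ONPREM_GPU_COST * OVERHEAD / hours
print(f"On-prem cost at full utilization: ${onprem_per_hour:.2f}/GPU-hour")

# The decision hinges on utilization: idle on-prem hardware still costs money.
for utilization in (1.0, 0.5, 0.25):
    effective = onprem_per_hour / utilization
    winner = "on-prem" if effective < CLOUD_RATE else "cloud"
    print(f"At {utilization:.0%} utilization: ${effective:.2f}/GPU-hour -> {winner} wins")
```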

Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually, based on data and other information that might have been provided for validation, and are not those of The Futurum Group as a whole.

Other insights from The Futurum Group:

Top Trends in AI This Week: August 25, 2023

AWS, Microsoft, and Google Cloud: Tying Up LLMs

Generative AI War? ChatGPT Rival Anthropic Gains Allies, Investors

Author Information

Mark comes to The Futurum Group from Omdia’s Artificial Intelligence practice, where his focus was on natural language and AI use cases.

Previously, Mark worked as a consultant and analyst providing custom and syndicated qualitative market analysis, with an emphasis on mobile technology, identifying trends and opportunities for companies like Syniverse and ABI Research. He has been cited by international media outlets including CNBC, The Wall Street Journal, Bloomberg Businessweek, and CNET. Based in Tampa, Florida, Mark is a veteran market research analyst with 25 years of experience interpreting the technology business, and he holds a Bachelor of Science from the University of Florida.
