Adults in the Generative AI Rumpus Room: The Best of 2023 | The AI Moment, Episode 8

On this episode of The AI Moment, I discuss the Best of the Generative AI Rumpus Room in 2023 – the top adult initiatives, trends and companies for the year.

Generative AI has spawned incredible innovation and along with it a mutating market ecosystem. It has also caused a copious amount of FOMO, missteps, and false starts. It is a rumpus room with a lot of “kids” going wild. The rumpus room needs adults. Guidance through the generative AI minefield will come from thoughtful organizations who do not panic, who understand the fundamentals of AI, and who manage risk. These organizations are what we at The Futurum Group call Adults In The AI Rumpus Room. Since August 21, The Futurum Group has published 10 Adults in the Generative AI Rumpus Room notes, highlighting 30 different adult initiatives from a total of 21 different companies.

I can draw some conclusions from that body of work to deliver what we see as the best of the Adults in the Generative AI Rumpus Room for 2023, including:

  • Top “adult” initiatives: Initiatives that had the greatest impact in providing adult leadership in generative AI in 2023.
  • Top “adult” trends: In reviewing the collective initiatives, what trends emerged in 2023?
  • Top “adults” in 2023: Which companies were the most “adult” in 2023, based on the number of initiatives?

Watch the video below, and be sure to subscribe to our YouTube channel, so you never miss an episode.

Listen to the audio here:

Or grab the audio on your favorite podcast platform below:

Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this webcast.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.

Transcript:

Mark Beccue: Hello, I’m Mark Beccue, Research Director for AI with The Futurum Group. Welcome to The AI Moment, our weekly podcast that explores the latest developments in enterprise AI. The pace of change and innovation in AI is dizzying and unprecedented. I’ve been covering AI since 2016, and I’ve never seen anything like what we’ve experienced since ChatGPT launched in November of last year and kickstarted generative AI. With The AI Moment podcast, we try to distill the mountain of information, separate the real from the hype, and provide you with sure-handed analysis about where the AI market will go. In each episode, we’ll dive deep into the latest trends and technologies that are shaping the AI landscape, from discussions about the latest advancements in AI technology and parsing the mutating vendor landscape, to AI regulations, ethics, risk management, and more. We’ll cover a lot.

Today, we have one subject we’re going to work with. I’m going to go through a segment that, if you watch the show or read some of my stuff, you’ll know I call the adults in the AI rumpus room. And what we’re going to do is look at the best of 2023. So, let’s go ahead and get started. Let me set the stage a little bit for adults in the rumpus room. As we mentioned, it’s been moving fast this year with generative AI, and it’s really considered possibly the fastest-moving technology innovation in history. It captured the imaginations of consumers and enterprises all across the globe. And there’s been a lot of innovation in a mutating ecosystem, which is normal in tech and innovation. But it’s also caused a lot of FOMO, a lot of missteps, a lot of false starts. These are, again, classic signs of technology disruption: there’s lots of innovation, but there’s also lots of mistakes. It’s a rumpus room with a lot of kids going wild, and the rumpus room needs adults. What we mean by that is we think about guidance through generative AI, the minefield that it is, and that guidance comes from thoughtful organizations who don’t panic, who understand the fundamentals of AI, and who are good at managing AI risk.

So, that sets the stage. The adults in the AI rumpus room is us looking at companies that do that. And I’ve been doing that since about… When was it? August. Over the last four months, since August 21st, when we first published our adults in the generative AI rumpus room note, we’ve done 10 research notes. I went back and looked through what was in them, and here are some highlights. I highlighted 30 different initiatives from a total of 21 different companies. If you look at that body of work, we can draw some conclusions from it. To me, it gives us a look at what was the best of the adults in the generative AI rumpus room for 2023. So there are three categories we’re going to talk about today.

The first one we’re going to talk about is the top adult initiatives. What I mean by that are initiatives that we noted that had the greatest impact in providing adult leadership in generative AI during the year. The second category is going to be the top adult trends: if you looked at the collective initiatives we watched, what were the trends? And finally, we came up with a top adult. We’re going to name who was the most adult in 2023, and we’re going to do that based on the number of initiatives that we noted. I’ll get to that when we get to that part.

So, let’s start with the top initiatives segment. Like I said, there were 30 initiatives we’ve written about since August that were having this positive impact, calming the generative AI market. But to me, there were seven that made the biggest difference, and I’m going to list them. They’re not in any particular order, so there’s no top to bottom; they’re all about the same. Here’s the list, and we’ll go through the details. They are: Salesforce’s commitment to trusted use of AI. IBM’s commitment to list source data for its new Granite AI models. AI2’s debut of an open data set for AI training. The Data Provenance Initiative’s launch of audit tools for AI data sets. Google’s Sensitive Data Protection service and what it can do around generative AI. IBM watsonx.governance, which launched and is taking on AI risk management. And related to that, Guardrails for Amazon Bedrock, a similar type of product that helps companies work in responsible AI.

So, let’s go through those details, right? I named seven. First, Salesforce. At Dreamforce, back in September, Marc Benioff, the CEO, introduced what he called the company’s tenets of trusted, ethical, and humane AI, and emphasized how Salesforce was going to commit to the trusted use of AI. The tenets are: one, “Your data isn’t our product.” That’s Salesforce saying that. Two, “You control access to your data.” Three, “We prioritize accurate, verifiable results.” Four, “Our product policies protect human rights.” Five, “We advance responsible AI globally.” And six, “Transparency builds trust.” This was presented almost at the beginning of his keynote. And when he went through his keynote, he spoke over and over again about AI and, in the same sentence, trust. I’m looking at what he actually said. He says, “You cannot do AI without being able to trust what it does.”

The company is, I would say, liberally using this word trust in describing AI capabilities. And paraphrasing, he said, “We have an incredible opportunity in AI, and how we do it matters. We have to do it right and we have to do it responsibly.” Now, I wrote this in a note, and there are links to what that keynote looked like; you can go into our show notes and see those links. But I think this is a really adult move, and one of the key ones of the year, because it was really the latest in a string of responsible actions that Salesforce has taken with AI. There’s been a lot of public conversation about regulation when it comes to privacy, bias, toxicity, hallucinations, transparency, explainability, consumer rights, IP and copyright laws, and consumer protection, and about how all those regulations would squash innovation, business growth, and a U.S. competitive advantage. Well, to me, that discussion maybe should take a different arc. The conversation is less about what we specifically should do about government regulation and more about what companies can do to act responsibly with AI right now. A process like that calms the nerves of all parties: the AI users, the companies that leverage AI, and the government.

So, what’s missing in the equation is representation from a company that’s on the frontline of live AI application, one that is taking the risk of leveraging AI and reaping some reward. Washington needs to hear about that experience and what it has taught a company: that implementing the pillars of responsible AI (responsible, accountable, transparent, empowering, and inclusive, which are Salesforce’s trusted AI principles) is really good business discipline. And it mitigates AI risk for both the company offering the services and its customers. So, to me, Salesforce is living what enterprises need to see in how to incorporate AI responsibly. I thought it was a significant move when they were talking about this and really moving it forward at Dreamforce. So, that’s number one.

The second is IBM’s commitment to list the source data for their new Granite AI models. In early September, IBM had a bunch of things going on around watsonx, and that included the launch of their AI models called Granite. The Granite series is, as the press release says, “Designed to support enterprise NLP tasks such as summarization, content generation, insight extraction.” The models were made available in September. What was interesting about this launch by IBM, and I’m going to read a little bit of what they said in their press release because it’s very important: IBM plans to “Provide a list of the sources of data as well as a description of the data processing and filtering steps that were performed to produce the training data for the Granite series of models.” So, that’s the quote.

So, there’s growing momentum for this approach to transparency within AI models. A lot of proprietary LLMs, I mean, almost all of them, really refuse to divulge their training data sources for various reasons. And I’m including open source models like Llama. Most commonly, the vendors do that because they see the data source as competitive IP. What’s interesting and different about IBM is they’re positioning their models as less secret sauce; the proprietary value that IBM’s bringing is their stack. The value is in watsonx, in their platform, the complete chain that they have to offer. So it frees IBM up to offer models that really do meet best practices for transparency. I think that was a really great move, and it’s a trend I think we’ll see a lot more of over the next year.

Number three is AI2’s debut of an open data set for AI training. In August, AI2, the Allen Institute for AI, announced that they had made available a data set called Dolma. It’s a data set of 3 trillion tokens. That’s a big data set; it’s the largest open data set to date. Dolma is the data set for their planned open source LLM called OLMo; that’s what OLMo is based on. Nearly all the data sets on which current LLMs are trained are private, like we just said when we talked about IBM. So, I think this is an adult move, because LLMs have been built on data sets that are private, and the data is typically scraped without permission from publicly available data on the web. The major challenges of LLM outputs, like we mentioned before, are bias, toxicity, inaccuracy, and hallucination, and one way to address those issues is for those that use LLMs to be able to trace them back to the data source. Open data sets provide that opportunity. Again, this is playing on the same theme that we saw with IBM: an open data set, where they’re going to tell you what’s in it. So it’s good, it’s open.
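
Because Dolma is openly published, anyone can actually inspect the corpus. Here is a minimal sketch of peeking at it in Python, assuming the allenai/dolma dataset card on Hugging Face and the Hugging Face datasets library; the exact config and field names come from that card, so treat them as assumptions.

```python
from datasets import load_dataset

# Stream the corpus rather than downloading ~3 trillion tokens to disk;
# the config/version names on the allenai/dolma card may vary, so check
# the card before running.
dolma = load_dataset("allenai/dolma", split="train", streaming=True)

# Peek at a few documents to see what an open, traceable training corpus
# contains (the "source" and "text" fields are assumptions from the card).
for i, doc in enumerate(dolma):
    print(doc.get("source"), str(doc.get("text", ""))[:80])
    if i == 2:
        break
```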

Along these lines, and related, is the fourth piece: the Data Provenance Initiative. They’ve launched audit tools for data sets; see the theme we’ve got going here with data sets. This was in October. This is a newly formed, research-led group called the Data Provenance Initiative. They published a paper and some data that will enable organizations to audit the AI data sets they use to train their LLMs. And this is what the abstract sounds like. Again, I’m going to read this to you because it’s their words, but it says this: “The race to train language models on vast, diverse, and inconsistently documented data sets has raised pressing concerns about the legal and ethical risks.” Again, same story we’ve heard.

“We convene a multidisciplinary effort between legal and machine learning experts to systematically audit and trace 1,800-plus fine-tuning data sets. Our landscape analysis highlights the sharp divides in composition and focus of commercially open versus closed data sets, with closed data sets monopolizing important categories: lower-resource languages, more creative tasks, richer topic variety…” I’m going to skip that part. But there were problems, right? I’ll let you read it in my reports. “This points to a deepening divide in the types of data that are made available under different license conditions, and heightened implications for jurisdictional legal interpretations of copyright and fair use of that data and those outputs.” Right?

So, “We also observe the frequent miscategorization of licenses on widely used data set hosting sites.” And they go on to talk about that. “This points to a crisis in misattribution and informed use of the most popular data sets driving many recent breakthroughs.” So, they’re looking at inaccuracies and all the problems we have with these data sets. “As a contribution to ongoing improvement in data set transparency and responsible use, we release our entire audit,” so the audit’s there for everybody to see, “with an interactive UI, the Data Provenance Explorer, which allows practitioners to trace and filter on data provenance for the most popular open source fine-tuning data collections.” I know that’s a mouthful. But basically, they did a study of 1,800-plus fine-tuning data sets and published the results of that audit. So it’s an audit tool that is really useful for folks to look at.
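
To make the idea concrete, here is a hypothetical sketch of the kind of license audit the Data Provenance Explorer enables: filtering fine-tuning collections by their license terms. The record schema and license lists are illustrative inventions, not the Initiative’s actual format.

```python
# Hypothetical provenance records; the field names are illustrative,
# not the Data Provenance Initiative's actual schema.
COMMERCIALLY_USABLE = {"apache-2.0", "mit", "cc-by-4.0"}

records = [
    {"name": "dataset_a", "license": "apache-2.0", "languages": ["en"]},
    {"name": "dataset_b", "license": "cc-by-nc-4.0", "languages": ["en", "fr"]},
    {"name": "dataset_c", "license": "mit", "languages": ["sw"]},
]

def audit_licenses(recs):
    """Split records into commercially usable vs. restricted fine-tuning sets."""
    usable = [r for r in recs if r["license"].lower() in COMMERCIALLY_USABLE]
    restricted = [r for r in recs if r["license"].lower() not in COMMERCIALLY_USABLE]
    return usable, restricted

usable, restricted = audit_licenses(records)
print(f"{len(usable)} usable / {len(restricted)} restricted for commercial use")
```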

And then, they give you a way of gauging this. So, this is a huge adult move; like we said, again, theme. There’s an old adage, garbage in, garbage out, and it applies to large data sets used to train AI. When we get to where we’re using better and smaller data sets, we believe that will lead to better LLM outcomes, more accuracy. This explorer tool gives enterprises the opportunity to review all these different models and their approaches, look under the hood at the data, see how they’re constructed, and be better informed about the models they might choose to use. I think this is a first step, and we’ll see more and more audits of these data sets. Very important, and it’s a huge leap forward and an adult move for AI. Right? We’re going to go fast here, because we’ve been moving slow. Three more to go.

The next one is Google, which had an announcement around their Sensitive Data Protection service that was interesting. In October, they published a post that described this product, the Sensitive Data Protection service, and how it can be used to secure generative AI workloads. I’m going to read a little bit. According to the post, “Generative AI requires data in order to tune and extend for specific business needs. However, one concern that organizations have is how to reduce the risk of customizing and training models with their own data that may include sensitive elements such as personal information (PI) or personally identifiable information (PII). And often, this personal data is surrounded by context that the model needs so it can function properly.”

So, what the service does, continuing from the post, is let companies or organizations use Google Cloud’s Sensitive Data Protection “to add additional layers of data protection throughout the lifecycle of a generative AI model, from training to tuning to inference. And the early adoption of these protection techniques can help ensure that your model workloads are safer, more compliant, and reduce the risk of wasted cost on having to retrain or retune later.” This is a great and significant piece, I think. When you want to unlock the proprietary power of an LLM, enterprises have to leverage their own data, right? That’s what they have to do. But most have hesitated for fear that that proprietary data will be used to train the models, or that PII will be exposed. In other words, it’s hard to trust that they can use an LLM with their own data due to the security issues. Data security is a major issue for generative AI, and this is a tool that allows enterprises to feel more confident in the way they can leverage that data against AI foundation models.
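
As a rough illustration of the pattern the post describes, here is a sketch using the google-cloud-dlp Python client to de-identify PII in a record before it reaches a training or tuning pipeline. The project ID and infoType choices are placeholders, and this shows one common usage of the client, not Google’s prescribed generative AI workflow.

```python
from google.cloud import dlp_v2

def scrub_training_record(project_id: str, text: str) -> str:
    """Replace detected PII with its infoType label, e.g. [EMAIL_ADDRESS]."""
    client = dlp_v2.DlpServiceClient()
    response = client.deidentify_content(
        request={
            "parent": f"projects/{project_id}",
            # Which kinds of sensitive elements to look for (placeholders).
            "inspect_config": {
                "info_types": [{"name": "EMAIL_ADDRESS"}, {"name": "PHONE_NUMBER"}]
            },
            "deidentify_config": {
                "info_type_transformations": {
                    "transformations": [
                        # Swap each finding for its infoType label so the model
                        # keeps surrounding context without the raw identifier.
                        {"primitive_transformation": {"replace_with_info_type_config": {}}}
                    ]
                }
            },
            "item": {"value": text},
        }
    )
    return response.item.value

# e.g., scrub_training_record("my-project", "Reach Jane at jane@example.com")
```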

All right, two more big ones and then we’ll go to our next category. These last two are tied together. IBM is in here again with a product called watsonx.governance. In July, IBM announced the launch of the watsonx series of services, and governance was one of them. It’s designed to help enterprises direct, monitor, and manage AI activities in the organization, and it was slated to be available before the end of the year; I think it just went GA. So, let’s look at what the issue is, right? Organizations typically build AI models without clarity; they don’t really monitor or catalog them. The governance product automates and consolidates tools, applications, and platforms. If something changes in a model, all the information is automatically collected for an audit trail through testing and diagnostics. So, some discipline is put into model governance.

Now, there are other pieces to it. When you’re looking at the risks around AI, you really need to look at them case by case, and the emerging best practice is for organizations to build an AI ethics governance framework. That means you establish ethics committees to oversee these things, and that approach is very manual. watsonx.governance helps address this: it automates workflows so that you can better detect fairness issues, bias, and drift within the models. And it’s got an automated testing piece that helps ensure compliance. If an enterprise puts in standards and policies, they can plug those in, and it can watch them throughout the lifecycle. Very useful piece.
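
IBM hasn’t published watsonx.governance’s internals here, so as a generic illustration (not IBM’s API) of one automated check such tools run, here is a small Population Stability Index calculation, a common way to flag drift between training-time and production distributions.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI over binned distributions; > 0.2 is a common drift-alarm threshold."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Clip to avoid log(0) on empty bins.
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
train_scores = rng.normal(0.0, 1.0, 10_000)  # distribution at training time
live_scores = rng.normal(0.4, 1.2, 10_000)   # shifted production distribution
print(f"PSI = {population_stability_index(train_scores, live_scores):.3f}")
```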

There’s another piece that I’m going to skip, but it basically looks at regulations and helps companies think about those as well. So this is an adult move, a good one for the year, because AI risk management is foundational and critical to operationalizing AI. Enterprises will learn this either the hard way, by ignoring it, or an easier way, by embracing methodical AI risk management practices. IBM is in a really great position to help enterprises navigate risk management with this tool. Its strengths are the vision and the blueprint for automating model management, plus the audit trail we talked about. Automating those workflows to better detect fairness issues and the like is good. So it’s very useful, it’s a good product, and a nice adult move.

The last one is Guardrails for Amazon Bedrock. This is a similar product; it’s similar in that it helps build guardrails, obviously, it’s called Guardrails, for responsible AI. At re:Invent, back in November, AWS launched this product. It basically allows Amazon Bedrock users to define denied topics and put in content filters to remove undesirable and harmful content from interactions between their applications and the end users. The details look like this: there’s a control piece for denied topics, which you can configure with a natural language command. Users write a short natural language description to define a set of topics that are undesirable in the context of their application; they plug that in, and it will seek those out. And the content filters are another good control.

So, users can configure thresholds to filter out harmful content across the hate, insults, sexual, and violence categories. A lot of foundation models have some of these protections built in to prevent the generation of undesirable responses. But the Guardrails product gives users additional controls to filter to the desired degree based on the company’s use cases and responsible AI policy, as the sketch below suggests. So it adds a custom layer to things. I just think that, again, it’s an adult thing; the product itself is really a reflection of some careful thinking on the part of AWS about responsible use of AI. And the idea that you can do prevention and be proactive is an approach that is unique at this point. I believe Microsoft and Google will soon add similar features to their AI development platforms. But either way you look at it, it is the mark of AI leadership, and it’s just another signal that AWS understands generative AI. So, it’s a good move.
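
For a sense of what that configuration looks like in practice, here is a hedged sketch using boto3, assuming the create_guardrail API shape AWS later documented for the service; the topic definition, filter strengths, and messages are illustrative placeholders, not a recommended policy.

```python
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

guardrail = bedrock.create_guardrail(
    name="support-app-guardrail",
    description="Blocks off-topic financial advice and filters harmful content",
    # Denied topics are defined in plain natural language, as described above.
    topicPolicyConfig={
        "topicsConfig": [
            {
                "name": "investment-advice",
                "definition": "Recommendations about specific stocks, funds, "
                              "or other investment products.",
                "type": "DENY",
            }
        ]
    },
    # Configurable thresholds per harmful-content category.
    contentPolicyConfig={
        "filtersConfig": [
            {"type": "HATE", "inputStrength": "HIGH", "outputStrength": "HIGH"},
            {"type": "INSULTS", "inputStrength": "HIGH", "outputStrength": "HIGH"},
            {"type": "SEXUAL", "inputStrength": "HIGH", "outputStrength": "HIGH"},
            {"type": "VIOLENCE", "inputStrength": "MEDIUM", "outputStrength": "MEDIUM"},
        ]
    },
    blockedInputMessaging="Sorry, I can't help with that topic.",
    blockedOutputsMessaging="Sorry, I can't provide that response.",
)
print(guardrail["guardrailId"])
```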

All right. Those are our top adult initiatives for the year. Real quick, we’ll do the last pieces. The trends, and you saw how this rolled out while we were talking about the seven key initiatives, are really two. One is building data set transparency. Notice that three of those top seven initiatives, IBM’s commitment to list source data for the Granite models, AI2’s open Dolma data set, and the Data Provenance Initiative’s audit tools, are all focused on building better, more responsible LLMs. Because the major challenges with outputs are bias, toxicity, inaccuracy, and hallucination, the way to address them is for those that use LLMs to be able to trace those issues back to the data source. That’s going to be the biggest thing that solves the challenges with LLMs. So, this is our top trend in adult moves for 2023, and I fully expect it to carry into 2024.

The other thing I think we saw as an adult trend was the emergence of commercial AI risk management tools: IBM watsonx.governance and Guardrails for Amazon Bedrock. I think we’re going to see more and more of that. It’s interesting and innovative that you have companies trying to help enterprises automate those types of systems, and I just think we’re going to see a lot more of it going forward. Okay. So those are your top two trends. All right, now for the drum roll. The top adults of 2023. This is very subjective, I’m not saying it’s not. You could judge who was the most adult by the importance of a particular initiative, or you could look at the volume of initiatives we covered and recognized them for. I thought, for the purposes of this little exercise, that volume of initiatives was a fair way to look at it. Who did we write about the most?

Under that criteria, who did we mention the most in our adults in the rumpus room series? The number one company, by far, was Google. Google is the top adult in the AI rumpus room for 2023. I’ll tell you what it was: 6 of the 30 initiatives we wrote about this year were Google initiatives, 20% of the total. And the work spanned a range of different areas. These are the names the way we wrote them out: Google Android introduced AI safeguard policies for Google Play apps. They offered IP indemnification for generative AI. There was the Sensitive Data Protection service, which we mentioned. Google Cloud launched what’s called SynthID, an AI watermarking tool. YouTube enlisted UMG artists to tinker in the YouTube Music AI Incubator, to think proactively about how AI gets used in that content. And they did a lot of work around advancing generative AI search. So six different initiatives puts them at the top of the pile.

If you look at who was next, it was IBM, with 3 of our 30 initiatives. We’ve talked about two of them: the data sources for the Granite models and watsonx.governance. The other was that they also launched IP indemnification for generative AI outputs. So, IBM’s number two. There was a tie at number three, with two initiatives each: AWS and Anthropic. You can read about those, like I said, in our research notes on adults in the AI rumpus room.

So, that’s it. In concluding the year, I’d say that all the organizations that stepped up to lead as adults in the generative AI rumpus room in 2023 should be commended. We’re likely through the most unruly year for generative AI; I don’t think 2024 is going to be as unruly, and it should calm down some. But we’re going to need more adults to step up, show the way, and instill calm and order in the generative AI market. So that is our show for today. I hope you enjoyed it. Thanks for joining me here on The AI Moment. Be sure to subscribe, rate, and review the podcast on your preferred platform, and we’ll see you next time.

Other insights from The Futurum Group:

The AI Moment, Episode 7 – Top AI Trends for 2024

The AI Moment, Episode 6 – On Device AI part 2

The AI Moment, Episode 5 – AI Chip Trends, RAG vs. Fine-Tuning, AI2

Author Information

Mark comes to The Futurum Group from Omdia’s Artificial Intelligence practice, where his focus was on natural language and AI use cases.

Previously, Mark worked as a consultant and analyst providing custom and syndicated qualitative market analysis, with an emphasis on mobile technology and on identifying trends and opportunities for companies like Syniverse and ABI Research. He has been cited by international media outlets including CNBC, The Wall Street Journal, Bloomberg Businessweek, and CNET. Based in Tampa, Florida, Mark is a veteran market research analyst with 25 years of experience interpreting the technology business, and holds a Bachelor of Science from the University of Florida.
