AI Innovation at SxSW 2024: Better Open or Closed?

The Six Five team discusses AI Innovation at SxSW 2024: Better Open or Closed?

If you are interested in watching the full episode you can check it out here.

Disclaimer: The Six Five Webcast is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors and we ask that you do not treat us as such.

Transcript:

Patrick Moorhead: Yeah. Let’s move to something related. It’s a take-off of what we discussed last week. We talked about OpenAI, about whether it’s open or closed and which is better. And I think profit versus nonprofit, this is a little bit of a derivation of that, but a lot of the conversations at South by Southwest this week were exactly what you might expect: is AI innovation better closed or open? I participated in a speaking event and I also attended a related panel, led off by IBM and Meta, with the CEO of the Partnership on AI and the co-founder and executive chairman of Anyscale. And by the way, he’s the CTO of Databricks as well. And I really appreciated how… You know I’m not an intellectual. I’m just going to throw it out there. You’re probably not surprised, but I do like a good debate, right?

Because there are a lot of people who might say, “Closed is actually good.” It’s better security, it’s better safety, less prompt manipulation. And gosh, shouldn’t the companies that spent $100 million to train an LLM be paid back for all their investments? So I liked it. On stage they actually red-teamed it. And one other thing I liked was a provocative question: “Hey, can you really scale AI outside of the largest tech companies?” Dario just got a huge shit-eating grin on his face, the biggest smile, and talked about how you can disaggregate innovation. People outside of the largest 10 companies are actually smart enough to be able to do this. A good analogy was Linux, right? Related to the safety and security and can-you-scale questions, which is, I don’t know if you remember this, Dan, but probably in the ’90s when Unix went to Linux, there was this debate on, is Linux safe?

And Microsoft was the biggest company that said it’s not safe. Only Windows is safe, and very managed Unix distributions. And here we are today, the lingua franca of the edge and also the data center is Linux, and there are so many people banging on Linux that, again, nothing is impenetrable, but it is very much a secure operating system. And the list of companies who can innovate off of it is pretty big. And what I liked about the panel is that this wasn’t a big commercial for the AI Alliance, which is spearheaded by IBM and Meta with 200 members. It was more of an intellectual conversation. I had the chance to interview Dario and Ion Stoica from Anyscale and Databricks, and really had a good conversation diving in, and asked them some really tough questions like, “Hey, this is great. I saw the press release, but what have you guys done lately?” So as soon as we publish this video in a few days, I urge you to go in and check it out.

Daniel Newman: Oh, yeah. This is a big topic, Pat, and I know you did it through the lens of quote/unquote open versus closed, but this is a permutation of the Sam Altman and Elon Musk debate that’s going on. And the debate really is, one, who owns this? Two, the commercialization of it. Three, the safety of it. Four, should any one company be given a stark advantage through ownership and licensing of a platform? And then five, and they talked about this a lot on one of the pods you and I like, the All-In pod, what is the legality of building an architecture like this? So I like what you covered. I wasn’t at the session, so I can’t speak to the content of the session itself, Pat, but what I can say is that I personally think what’s going on right now is we need more open than closed.

And that’s not to say that there isn’t a business in closed, but the problem right now is we’re running into all these issues with LLMs, and the lack of transparency is gating us from understanding where we’re going wrong. And this is a snowball. How quickly does it snowball out of control? Whether it’s, like we talked about, historical accuracy, or what information is fed in. We’re living in a world now where… I forget what school it is. It was one of the Ivies or Duke or one of those schools that basically said they’re getting rid of essays now, college entrance essays, because kids are writing them with generative AI, and the admissions counselors can no longer tell what’s real and what’s not. And by the way, I realize I’m on a tangent from everything you talked about, but this is really important.

The world is effectively changing, and we have already entered a world with social media where people are basically ingesting data that is not always fact-based, but we are interpreting it as fact. Now you have a system that is already interpreting data based on a set of algorithms and is spinning out an output that is rooted in bias, because the algorithms all have bias. We won’t say which way or what it is. We don’t know, because it lacks transparency. So an open ecosystem at least forces a higher level of transparency. And this goes back to why Musk is having a fight with Sam after putting $100 million into OpenAI: at least with Grok, he’s publishing everything. He’s open sourcing, and plans to publish, everything that’s in the algorithm. And he’s done it with X, formerly Twitter, by the way. He’s published the algorithm. Now, you have to have some technical chops to know what you’re looking at, but if you actually have technical chops, you can understand how it works.

The bottom line is opaqueness is going to be problematic, especially when it comes to building these models and when it comes to their outputs. And you have to assume that if we’re going to get the productivity gains out of all of this technology, it means we are going to be limited in how much we can scrutinize the outputs of these AI systems. Meaning you don’t get 10x or 100x productivity on outbound sales emails without basically assuming that the outputs are correct. We now have people building content and sending it out to customers based upon an algorithm, with very little fact checking and accuracy checking, because that’s the only way you get the productivity. Otherwise, you’re just QAing all day long, which, again, won’t get you there. So I didn’t see your session, but this is a huge topic. I love it. I’d love to talk more about it. I realize I derailed a little bit from your specific panel, but Pat-

Patrick Moorhead: Oh, Dan, this wasn’t about shilling the panel. This was about-

Daniel Newman: I didn’t say you shilled it. I just meant I couldn’t speak to it.

Patrick Moorhead: Yeah, you weren’t there. You didn’t see it, and I’m not asking you to do that. But it was more on the topic, which you did. So thank you.

Author Information

Daniel is the CEO of The Futurum Group. Living his life at the intersection of people and technology, Daniel works with the world’s largest technology brands exploring Digital Transformation and how it is influencing the enterprise.

From the leading edge of AI to global technology policy, Daniel makes the connections between business, people and tech that are required for companies to benefit most from their technology investments. Daniel is a top 5 globally ranked industry analyst and his ideas are regularly cited or shared in television appearances by CNBC, Bloomberg, Wall Street Journal and hundreds of other sites around the world.

A 7x best-selling author, his most recent book is “Human/Machine.” Daniel is also a Forbes and MarketWatch (Dow Jones) contributor.

An MBA and former graduate adjunct faculty member, Daniel is an Austin, Texas transplant after 40 years in Chicago. His speaking takes him around the world each year as he shares his vision of the role technology will play in our future.
