
The Six Five Connected with Diana Blass: Regulating AI — Fight to Balance Innovation with Risk

This inaugural episode of The Six Five — Connected with Diana Blass dives into the regulatory environment that surrounds AI following the quick rise of ChatGPT and other forms of generative AI, technologies that have opened up fresh questions around copyright and IP law, data privacy, cybersecurity, and national security. Many agree that changes to current laws are necessary, but the efforts to make them could have major impacts.

It’s a global race in AI innovation. Who will win? Get Connected on the latest in AI with Edoardo Romeo, Josh Davies, and Daniel Newman.

Be sure to subscribe to The Six Five Webcast, so you never miss an episode.

Watch the full episode here:

Or listen to the full audio here:

Disclaimer: The Six Five webcast is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded, and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors, and we ask that you do not treat us as such.

Transcript:

Diana Blass: The quick rise of ChatGPT has put a spotlight on some of our worst fears related to AI, fears seemingly confirmed by the CEO behind the solution, Sam Altman.

Sam Altman: I think if this technology goes wrong, it can go quite wrong.

Diana Blass: Altman, the CEO of OpenAI, appeared on Capitol Hill on May 16th, the beginning of what will become a series of hearings on how to regulate the technology. The latest took place on June 7th, a day before a group of senators introduced a bill to create a new office that would analyze the US’s strengths in emerging technologies like AI.
Lawmakers have rushed to act as ChatGPT attracts hundreds of millions of monthly users, and that’s just one form of generative AI. The market has seen a number of startups emerge with little oversight of data protection, impacts on national security, and the potential harm to users, concerns that the EU has been voicing for months.

Josh Davies: That’s the thing with AI and automation. They don’t demand higher wages, they don’t demand better working conditions, and they’re really quick.

Diana Blass: Italy went so far as to temporarily ban the solution in March.

Edoardo Romeo: There is a trade-off: transparency of an algorithm and massive usage of data are not on the same line.

Diana Blass: Many of us here in the US thought that was extreme, but look at us now.

Sen. Thom Tillis: I, for one, feel like we have to have certainty and clarity, or we really run the risk of having a gap. Maybe our competitive advantage today will not be the same.

Sen. Richard Blumenthal: Let me ask any of you whether you think a pause on AI, I know it’s a somewhat simplistic question, is a good idea and even doable.

Diana Blass: The US is the leader in AI development, but will that remain as we look to balance innovation with risk?

Daniel Newman: The largest technology companies will keep building, they’ll commercialize, they’ll put offers into market, and then it’ll be very hard to put the genie back in the bottle.

Diana Blass: Hi everyone. I’m so glad you’re here with me on the first episode of Connected, a series that dives into the most buzzed-about topics in tech with a little bit of storytelling and a lot of interviews.

I think we can all agree that generative AI has topped the conversation lately. And it’s fascinating to think about the speed by which that conversation has changed. When I first began producing this episode, Italy had temporarily banned ChatGPT and many other EU nations considered doing the same.

Flash forward about a month, and the US is holding similar discussions on how to manage and develop AI, as well as the patents involved in its creation, regulations that experts say must be updated or we put national security at risk. Listen to this.

Rama Elluru: We are in a race to develop the future of AI, a race we are in with China. This innovation competition will shape the world’s future. The nations that hold the leadership and dominant market share in the combination of emerging technologies will be able to reinforce their societies and their economies, and importantly, assert geopolitical influence.

Diana Blass: There is no word yet on what AI regulations could look like, but some warn of a new agency, an “FDA for algorithms,” as described by The Federalist. And we may see something like that come out of the bill introduced in the Senate on June 8th, which is said to have a high likelihood of passing.
It creates an office with experts from various agencies and the private sector to keep tabs on US development of the technology. There’s also talk of nutrition labels, details that provide transparency into the algorithms, but that’s difficult as AI systems become more advanced. Listen to what Professor Gary Marcus of NYU had to say during that Senate hearing.

Gary Marcus: The biggest scientific challenge in understanding these models is how they generalize. What do they memorize and what new things do they do? The more that there is in the dataset, for example, the thing that you want to test accuracy on, the less you can get a proper read on that.

So it’s important, first of all, that scientists be part of that process and, second, that we have much greater transparency about what actually goes into these systems. If we don’t know what’s in them, then we don’t know exactly how well they’re doing when we give them something new, and we don’t know how good a benchmark that will be for something that’s entirely novel.
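
Marcus is describing what researchers call benchmark contamination: if the test questions already sit in a model’s training data, accuracy scores measure memorization rather than generalization. Here is a minimal sketch of the kind of overlap check that the transparency he calls for would enable; the strings and the 8-gram threshold are illustrative assumptions, not any lab’s actual procedure:

```python
# Minimal sketch of a benchmark-contamination check: flag test items
# that share a long word n-gram with the training corpus, since those
# may be memorized rather than genuinely solved.
# File contents and the 8-gram threshold are illustrative assumptions.

def ngrams(text: str, n: int = 8) -> set[tuple[str, ...]]:
    """Return the set of word n-grams in a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def contamination_rate(train_corpus: str, test_items: list[str]) -> float:
    """Fraction of test items sharing at least one 8-gram with training data."""
    train_grams = ngrams(train_corpus)
    flagged = sum(1 for item in test_items if ngrams(item) & train_grams)
    return flagged / len(test_items) if test_items else 0.0

if __name__ == "__main__":
    train = "the quick brown fox jumps over the lazy dog near the riverbank today"
    tests = [
        "the quick brown fox jumps over the lazy dog near the riverbank today",  # memorized
        "a completely novel question the model has never seen before in training data",
    ]
    print(f"contamination rate: {contamination_rate(train, tests):.0%}")
```

Without visibility into the training set, as Marcus notes, even this simple check is impossible to run.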

Diana Blass: It’s clear that lawmakers are concerned about the responsible development of AI as it relates to the sharing of data, but they’re also very concerned over the threat of China. The theft of American intellectual property has become a critical national security issue. In 2019 and 2020, China surpassed the United States in international patent filings. That means they have control over the framework for those innovations and a heavy hand in building industries of the future.

US patent laws weren’t developed to account for AI, a space where an invention doesn’t exactly have a human inventor. A change to patent eligibility laws would protect AI innovations, according to experts. Earlier I chatted with Futurum Network CEO and Six Five podcast co-host Daniel Newman for more perspective.

Daniel Newman: The US and China as the world’s two largest economies are going to absolutely be battling it out for not only generative AI but for general AI and for dominance in that marketplace. Everything from national defense to global technology leadership, the two economies are always in a position to try to leapfrog one another.

With recent export controls, the US is limiting EUV technology from ASML as well as certain leading-edge processes from fabless manufacturers, which definitely puts China at a pretty significant disadvantage in getting the most advanced GPU and CPU systems for training and inference. So you can expect that China wants to play, and you can expect that China has plans to play in the generative AI space, and they certainly will be able to develop a large language model, train it, and utilize it.

But as for the big business and enterprise opportunities, companies like Nvidia, Microsoft, Google, and then of course AMD, Intel, Broadcom, and all these software providers, Salesforce and Adobe, they’re all playing in this space. They’re able to take advantage of having access to the most leading-edge semiconductors and all of the resources here in the US, plus a very lightly regulated industry at this current juncture.

Diana Blass: Well, it’s interesting to hear that the export restrictions have been effective. Now, you mentioned all these different companies here that are developing generative AI solutions and I can’t help but think that it’s like the Metaverse all over again. We heard so many companies announcing metaverse innovations and then it just stopped. I’m wondering what’s happened to all these metaverse offices that we heard about. But do you think the same is true with generative AI? Will it sustain its momentum in the long run?

Daniel Newman: Well, I think we have kind of two concurrent cycles. You’ve got the kind of market hype cycle, which is maybe a little bit ahead of itself. We saw Nvidia’s recent earnings and the $4 billion up guide. And this is clearly indicative that every hyperscaler and large enterprise is trying to get their hands on every GPU that is available in the marketplace. That’s very good for a small subset of companies that are on the front end of sort of the picks and axes for building out an AI future.

At the same time, I think it’s very safe to say that enterprises are going to be able to be more efficient, and that is everything from the do-more-with-less, which means doing the same thing with 50%, 60%, 70% fewer resources, to being able to reduce monotonous back-office roles, or to automate things like factories and warehouses to lower costs and increase productivity and uptime.

That stuff is all very real. I think at the other end of the spectrum too, it’s going to be about how much can we accelerate production. So rather than saying, can we do it for 50% less, it’s how much more could we do if we invest the same? I think that’s how we actually get meaningful productivity gains.

Think of something as simple as what a lot of people are talking about: writing articles with ChatGPT. Well, if you’re able to write an article accurately, factually, without hallucinations, and you can do it in a few seconds using good information and a little bit of editing and hopefully a little bit of your own personality, what if you could do 10 or 100 times more in the same time it used to take someone like an analyst or a journalist to write? You could create huge swaths of increased productivity in marketing and human resources and operations and sales driving out proposals. So that’s pretty exciting as well.

Diana Blass: All right, well it’s nice to hear some positive analysis on the arrival of AI. Speaking of jobs, a new report has found that nearly 4,000 of the 80,000 job cuts in May were a result of AI. Lawmakers and experts on Capitol Hill acknowledged the threat but also added this.

Gary Marcus: When we look back at the AI of today 20 years from now, we’ll be like, “Wow, that stuff was really unreliable. It couldn’t really do planning, which is an important technical aspect. Its reasoning abilities were limited.” But when we get to AGI, artificial general intelligence, maybe let’s say in 50 years, that really is going to have, I think, profound effects on labor.

Diana Blass: So maybe that’s why the AI threat taken most seriously at the moment centers upon the sharing and use of data. While here in the US, the concern is centered around national security, the EU remains focused on data privacy as seen in Italy’s temporary ban and a ChatGPT task force set up by EU regulators.

Italy has allowed the service to resume in its country after OpenAI modified ChatGPT’s platform to allow for privacy controls. Those controls let users forbid the platform from using their data to train its models, and added a tool to verify users’ ages. But some experts wonder if that’s even possible with generative AI, and they also wonder whether generative AI and GDPR can coexist.

I got perspective from Edrom CEO Edoardo Romeo. He’s based in Italy and active in its tech scene.

Hi Edoardo, thanks for your time. Now, earlier you explained to me that a key difference between the US and Europe is that in the EU, GDPR centers around citizen data, whereas in the US data privacy laws focus on company data. So in that case, I’m curious to learn how GDPR regulations impact business operations. Will the changes that we’ve seen regarding ChatGPT impact business? Will it be a threat to businesses in Europe?

Edoardo Romeo: I don’t know. In Europe at the moment, the use of artificial intelligence is more related to business-to-business, the B2B models: in public administration, in the big banks and insurance companies, in the big telcos, and so on. The tool needs to have behind it a huge quantity of data to be predictive and to give correct answers. So I mean, we have to wait for the decision that will be taken, and the trade-off between the company, the lawyers engaged by the company, and the authority.

Diana Blass: Okay, well, there are also reports that question if it’s even possible to go back and delete data from ChatGPT, which is interesting considering the concessions that OpenAI has made in Italy, allowing users to submit requests to have their data removed. So are regulators lenient when it comes to defining the principles outlined in GDPR?

Edoardo Romeo: I think there is a possibility to have a conversation with the GDPR authority. They usually intervene, but they are flexible in understanding what there is behind the data. I was talking with Alec Ross three weeks ago, and, for example, his opinion is that technology will change the way people approach business. But in the past, technology helped companies to be more competitive, and it’s absolutely not true that there is a risk of increasing unemployment. So I mean, it’s part of the path of the new technology, and it will be very important that-

Diana Blass: Now, as the world moves to regulate AI systems and their use of data, bad actors have already taken advantage. I mean, who’s surprised, right? And experts warn against mutating malware that uses ChatGPT.

Cybersecurity hasn’t been as widely discussed as data privacy and IP, but it’s a concern that could warrant regulation and should be top of mind for all users. At least that’s what our next guest says. Here’s Josh Davies with the cybersecurity company Fortra.

Josh Davies: With ChatGPT, it’s kind of interesting. The angle I’ve been looking at is, first of all, actually attacking ChatGPT itself. There are two ways you could do this. One is maybe the more traditional one, which is looking at the underlying technology that it runs on. So what servers, what operating systems is it running on? What open source modules do they have out there?

Recently, I believe in March, there was a compromise, or rather a bug: due to their use of Redis, an open source in-memory database, they accidentally sent the wrong responses to the wrong users, which meant that people were getting data that wasn’t theirs, and there was some leakage of usernames and the last few digits of card numbers. So that’s one thing.

What if an attacker was to, like they can with any organization, get into OpenAI’s environment and sit there quietly? If you’ve got millions of people using this across the world, that’s the perfect candidate for a supply chain compromise. If you compromise OpenAI, you can then use the legitimate, trusted delivery method that people use to interact with ChatGPT to push out further compromises.

Maybe you get them to download some malware. Take the people who use ChatGPT to write code for themselves: maybe you start giving them code that has vulnerabilities in it, or back doors for your threat actor organization to stroll in through. Very much like the SolarWinds supply chain compromise, I could really see how that could be repeated with something like ChatGPT, because you’re using code that you should validate and check, but you might not have the skills to, or you might get complacent, and you might put something in there that is just going to allow the attacker to walk in.

And for someone like myself who does a lot of detection and response, that’s very difficult to detect and respond to, because you’ve basically installed a legitimate door that I can’t really flag as an attack. I can only hopefully catch the next actions they might take.
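
One baseline form of the validation Josh describes, confirming that an artifact delivered through a trusted channel matches what the publisher actually released, is checksum verification. A minimal sketch, with a hypothetical file name and a placeholder digest:

```python
# Minimal sketch of a baseline supply-chain check: verify a downloaded
# artifact against a digest published out-of-band, so a tampered file
# arriving through a "trusted" channel is caught before installation.
# The file name and expected digest below are hypothetical placeholders.
import hashlib
import sys

EXPECTED_SHA256 = "d2c1...replace-with-the-vendor-published-digest"

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):  # stream large files
            h.update(chunk)
    return h.hexdigest()

if __name__ == "__main__":
    digest = sha256_of("vendor-update.tar.gz")  # hypothetical artifact
    if digest != EXPECTED_SHA256:
        sys.exit("checksum mismatch: do not install this artifact")
    print("checksum verified")
```

A checksum only helps if the expected digest is published through a separate channel, and SolarWinds showed that a compromised build pipeline can produce validly signed malicious artifacts, so reviewing generated code before running it still matters.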

Diana Blass: How can these AI systems like ChatGPT speed up the development of malware?

Josh Davies: We talked about how it could be used to write code and write business programs. You can also use it to write malware for you. Years ago there was a lot of talk and trouble around polymorphic malware, which basically mutates: the malware mutates and evolves.

COVID-19 is a great example of this. We saw Delta, Omicron, and so on: it mutated, it behaved slightly differently, but ultimately it still gave you COVID, and it had different success rates against different vaccines.

That’s exactly what polymorphic malware does. If you’re using traditional blocks that look for signatures or hashes, and the malware looks the same every time, you know it’s bad: once you learn about it, you tell the machine to block and stop it every time it sees it. Whereas if it’s able to morph, it will change superficial elements of itself so that it looks and smells different to the preventive controls and also the detection controls, and it has a greater chance of success.
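
To make the mechanics concrete: hash-based blocking matches exact file contents, so any superficial mutation yields a new hash and a miss. A minimal sketch, using an inert placeholder string in place of real malware:

```python
# Minimal sketch of why hash-based blocking fails against polymorphic
# malware: any superficial, behavior-preserving change to the payload
# produces a new hash, so a blocklist of known-bad hashes stops matching.
# The "payload" here is an inert placeholder string, not real malware.
import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

known_bad_hashes = set()

original = b"PAYLOAD: placeholder-sample v1"
known_bad_hashes.add(sha256(original))        # defender learns this sample

mutated = original + b" // junk-comment-42"   # trivial superficial change

print(sha256(original) in known_bad_hashes)   # True  -> blocked
print(sha256(mutated) in known_bad_hashes)    # False -> slips past the blocklist
```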

Previously, that’s been done by people actually writing those mutation routines themselves. But if you use something like ChatGPT, you could say, “Hey, every time, write me a new piece of malware that does this,” and it will look slightly different each time, with much lower overhead on the person, who no longer has to write it themselves. You don’t want to reinvent the wheel, but ChatGPT is quite happy to reinvent the wheel, because that’s the thing with AI and automation: they don’t demand higher wages, they don’t demand better working conditions, and they’re really quick. So I think that’s going to speed up malware development.

But I would stress that at this point, in my opinion, it still requires someone who is able to write malware. They have to vet it, and they have to understand complex malware. It’s just going to make them more effective. If you are what we call a script kiddie, somebody who doesn’t really know how to hack but knows how to point, click, and run exploits, it’s not yet at a stage where it’s able to write complex malware for them. But that’s why we’re being kind of cautious about this, because who knows where it’ll be in a year or two’s time.

Diana Blass: Now, I understand the EU is working on the AI Act, which would cover some cybersecurity concerns. At a high level, do you think regulation is needed for AI? Do you welcome the AI Act?

Josh Davies: This is a novel type of technology that requires new legislation: things it should never be allowed to do that we should completely restrict, things where we should demand more scrutiny when it has a significant or critical function, and then also letting people mess around with it when it’s got no real implications.

That’s what I think the AI Act is looking to identify and create some frameworks around. But yeah, we’ll see. Bureaucrats can take a while with these things, so we need to make sure that we don’t just wait for it to come into force and then act. Companies that have a forward-thinking attitude while they’re developing these tools will be way ahead of the game when the regulation is enforced.

Diana Blass: Okay. Lastly, we’ve seen some companies flat out ban their employees from using ChatGPT, Samsung being one of them. This is due to concerns that employees were feeding ChatGPT sensitive corporate information. Is that the right move? Should companies just be banning it?

Josh Davies: I think that’s probably a bit extreme. I talked about how we don’t want security to be a blocker that stops you from using the latest technology. The most secure network is a computer in a dark room with no connection to the internet. So yeah, make sure that you have a policy in place. Use it, because it’s a great tool, but just use it responsibly.

I think maybe as we go further, there might be safer ways to interact with it. I was exploring some options where you can get more of a closed version of something like ChatGPT, where you host it in your own environment, it’s maybe just your company’s data then, and it’s just being accessed by your users. That could be a good way to control it.

But that would be very expensive. These kinds of AI algorithms and LLMs, large language models, require a lot of compute power and a lot of development. So that’s probably an extreme example, closer to sitting in a dark room without any internet access. But it would probably be the most secure way.

If you’re going to make something like a ChatGPT-style LLM a cornerstone of how you do business, maybe look to incorporate it into your own environment rather than using this very public one, which is accessible to and used by everybody. Undoubtedly, the data is going to be pooled with everyone else’s data, albeit anonymized, but only to a certain extent, because they want to use it to inform the learning and make it a better tool. So really, we have yet to see something like that be compromised and what the impacts could be. But potentially, as you said, it’s quite scary.
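
The “closed version” Josh describes usually means serving a model inside your own network and pointing internal tools at it. A minimal sketch of what the client side might look like, assuming a self-hosted inference server that exposes an OpenAI-compatible chat endpoint; the localhost URL, route, and model name are illustrative assumptions, not a specific product:

```python
# Minimal sketch of querying a model served inside your own environment
# instead of a public API, so prompts and company data never leave the
# network. Endpoint URL and model name are hypothetical placeholders.
import json
import urllib.request

def ask_internal_llm(prompt: str) -> str:
    req = urllib.request.Request(
        "http://localhost:8000/v1/chat/completions",  # hypothetical internal endpoint
        data=json.dumps({
            "model": "internal-model",                # hypothetical model name
            "messages": [{"role": "user", "content": prompt}],
        }).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask_internal_llm("Summarize our Q2 incident report."))
```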

Diana Blass: That’s a wrap on interviews for our first episode here, and a lot to think about. We’re seeing a massive build-out and excitement around AI. Really, that’s because generative AI has democratized software in a way that we haven’t seen before. You don’t need to be a tech expert to play with it, and that’s pretty cool.

As you’ll learn in our next episode, it has the power to level the playing field as users create images and text from scratch. But of course, that only opens up questions around copyright, deception, and as we learned in this episode, potentially cybersecurity and national security risks.

It’ll be interesting to see what happens next. In the meantime, I’d love to hear your thoughts on this episode. Do you think regulations could help or hurt AI innovation? Let me know in the comments below and subscribe today so that you can stay connected with the latest. I’m Diana Blass, signing off with Six Five Media.

Author Information

Diana Blass

Diana Blass is a journalist with a background in technology news and analysis. Her work has appeared on Fox Television Stations, The Discovery Channel, CRN, Light Reading, and other Informa-owned media brands. In addition to her work at The Six Five, she manages Diana Blass Productions, where she develops and produces digital documentaries, podcasts, and commercials for media and corporate brands.
