Intel and Hugging Face Discuss Compute and Ethical Issues Associated with Generative AI

The News: Last week, Hugging Face, the company behind the leading open source machine learning libraries, reported that Intel's AI hardware accelerators outperform any GPU on the market. This is critical, as generative AI tools, which continue to grow in popularity, require more compute than other AI use cases. Thought leaders from Intel and Hugging Face joined me for a discussion of the compute and ethical issues associated with the use and growth of generative AI. Read the full release and watch the video interview here.

Analyst Take: It was an honor to sit down with Kavitha Prasad, Intel's VP and GM of the Datacenter AI, Cloud Execution and Strategy Group; Lama Nachman, Intel Fellow and director of the Intelligent Systems Research Lab; and Jeff Boudier, Product Director at Hugging Face, to discuss some of the most pressing issues surrounding an increasingly trending topic: generative AI.

For those unfamiliar, Intel and Hugging Face partnered last summer to help democratize hardware acceleration for machine learning. Hugging Face is a tech company committed to making good machine learning more widely available and to maximizing its positive impact on society and business. Intel is helping it do so by powering that acceleration on the Hugging Face hub with its Intel Xeon Scalable CPU platform. On the hub, users can build, train, and deploy open source ML models and grow the space thanks to Intel's sponsorship. Both companies have valuable insights into the compute and ethical issues associated with AI.

As Kavitha pointed out, AI isn't new, and neither are the issues associated with it. What makes it feel new is that the move from predictive to generative AI has made it easier for just about anyone to grasp AI's capabilities. That, in turn, means more and more users and developers are jumping into the space, all while trying to sort out the ethical implications of making generative AI so widely available. It's a huge issue, and one with no clear resolution.

Perhaps the only thing that is clear about generative AI: the amount of compute needed to run its complex models will make it a very expensive proposition for businesses, especially as those models grow more complex. For instance, it's estimated that it could cost OpenAI $40 million to process the millions of different prompts fed into its ChatGPT software in a single month. It's also estimated that the new Bing AI chatbot, powered by OpenAI's ChatGPT, will need a minimum of $4 billion in infrastructure to answer users' queries. As these technologies are initially rolled out, the costs (free subscriptions, trials, and the like) will be somewhat murky. Ultimately, however, it will be the enterprise that bears that cost and determines whether generative AI is "worth it."

Another cost we need to consider as generative AI becomes more popular: environmental cost. As demand for generative AI increases, so will the cloud and data center capacity needed to power it. All of that requires energy. And at some point, we will need to determine whether the value we get from generative AI is worth the impact it could, and will, have on the environment.

Still, even with all of the gray areas, the one thing that kept me excited about these advancements in generative AI was our guests' shared commitment to advancing and sustaining AI in an ethical way. For its part, Hugging Face has more than 150,000 AI models available free online to ensure open and democratic access to the technology. And Intel has focused on ensuring that transparency is part of every step in the "AI supply chain." This means that as developers continue to build upon one another's code, it will always be clear how each previous step was trained, which biases it might carry, and how it may need to be retrained or untrained to move forward sustainably.

Unlocking the full potential of generative AI will take an entire ecosystem of partners. It’s enlightening and reassuring to see that integrity and value are top of mind for powerhouses like Intel and Hugging Face as we move forward.

Disclosure: Futurum Research is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of Futurum Research as a whole.

Other insights from Futurum Research:

Adobe Announces Generative AI Innovations Across Adobe Experience Cloud at Adobe Summit

Generative AI from IBM Comes to Golf at the Masters Tournament

Microsoft’s AI Approach Dissected: What It Plans for AI

Author Information

Daniel is the CEO of The Futurum Group. Living his life at the intersection of people and technology, Daniel works with the world’s largest technology brands exploring Digital Transformation and how it is influencing the enterprise.

From the leading edge of AI to global technology policy, Daniel makes the connections between business, people, and tech that are required for companies to benefit most from their technology investments. Daniel is a top 5 globally ranked industry analyst, and his ideas are regularly cited or shared in media outlets including CNBC, Bloomberg, the Wall Street Journal, and hundreds of other sites around the world.

A 7x best-selling author whose most recent book is "Human/Machine," Daniel is also a Forbes and MarketWatch (Dow Jones) contributor.

An MBA and former graduate adjunct faculty member, Daniel is an Austin, Texas transplant after 40 years in Chicago. His speaking takes him around the world each year as he shares his vision of the role technology will play in our future.
