Introduction
The News: On September 19, 2023, as part of a wide range of announcements for Intel Innovation 2023, Intel moved the Intel Developer Cloud initiative from beta into general availability. According to Intel, the Intel Developer Cloud “helps developers accelerate AI using the latest Intel hardware and software innovations – including Intel Gaudi2 processors for deep learning – and provides access to the latest Intel hardware platforms, such as the 5th Gen Intel® Xeon® Scalable processors and Intel® Data Center GPU Max Series 1100 and 1550. When using the Intel Developer Cloud, developers can build, test and optimize AI and HPC applications. They can also run small- to large-scale AI training, model optimization and inference workloads that deploy with performance and efficiency. Intel Developer Cloud is based on an open software foundation with oneAPI – an open multiarchitecture, multivendor programming model – to provide hardware choice and freedom from proprietary programming models to support accelerated computing and code reuse and portability.”
You can read the Intel Innovation 2023 press release here.
You can read the Intel Developer Cloud beta launch blog post here.
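To make the portability claim in Intel's announcement concrete, here is a minimal, illustrative sketch of what device-agnostic oneAPI code looks like, written in SYCL, the open C++ programming model at the core of oneAPI. This is not Intel-provided code; it is a sketch that assumes a SYCL 2020 compiler such as Intel's oneAPI DPC++ compiler (icpx -fsycl). The same source can target a CPU, an Intel Data Center GPU Max, or another supported backend without modification.

// vector_add.cpp – minimal SYCL sketch (compile with: icpx -fsycl vector_add.cpp)
#include <sycl/sycl.hpp>
#include <iostream>
#include <vector>

int main() {
    const size_t n = 1024;
    std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);

    // The default selector picks the best available device at runtime –
    // CPU, GPU, or accelerator – with no source changes required.
    sycl::queue q{sycl::default_selector_v};
    std::cout << "Running on: "
              << q.get_device().get_info<sycl::info::device::name>() << "\n";

    { // buffer scope: destructors copy results back to the host vectors
        sycl::buffer bufA{a}, bufB{b}, bufC{c};
        q.submit([&](sycl::handler& h) {
            sycl::accessor A{bufA, h, sycl::read_only};
            sycl::accessor B{bufB, h, sycl::read_only};
            sycl::accessor C{bufC, h, sycl::write_only};
            // Element-wise vector addition, executed on whichever
            // device the queue selected.
            h.parallel_for(sycl::range<1>{n},
                           [=](sycl::id<1> i) { C[i] = A[i] + B[i]; });
        });
    }

    std::cout << "c[0] = " << c[0] << "\n"; // expect 3
    return 0;
}

The runtime device selection is the mechanism behind the "hardware choice" language in the announcement: the kernel itself carries no vendor- or device-specific code, which is what allows it to be reused across the Xeon and GPU Max platforms offered on Intel Developer Cloud.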
Interview with Intel
Last week, I was talking with Argilla, an innovative AI startup that makes an open source feedback platform for large language models (LLMs), and in the conversation, the company mentioned that it uses the Intel Developer Cloud and is part of Intel’s Project Liftoff. I became interested in finding out more about how a cutting-edge AI startup became involved with the world’s best-known chip manufacturer.
I spoke with Abhijit Lele, Product Manager for Intel, about what is going on with the Intel Developer Cloud now, a mere four months after it moved into general availability (GA). What I learned is that Intel Developer Cloud is quickly emerging as an incubator for startups aiming to become players within the AI technology stack, a speedy feedback loop for Intel AI solutions, and a growing option for enterprises to run AI workloads in these times of limited capacity and constrained AI compute hardware resources.
Here is some of my conversation with Lele and some of my takeaways.
Q: Intel Developer Cloud has been GA for three and a half months. Describe what has been going on.
Lele: The activity and the payoffs are beyond our expectations.
First, feedback for our newest pre-production hardware and software, which were going on prior to GA has been really helpful and I think it has contributed to our ability to nail things down for the range of platforms we’re putting out. The key to that is in the past we really only had the option to send test pre-production hardware out to customers’ sites and wait for the feedback. That typically meant additional time spent in installs, etc. Intel Developer Cloud also allows us to scale much wider given that it’s a lot more efficient. In the past, we simply couldn’t afford to ship pre-prod hardware to a large number of customers. Now we can give them short-term access and cycle through a large number of customers quickly. With Intel Developer Cloud, customers have immediate access and we can see performance, and get immediate feedback – win-win.
A second payoff, for us and our customers alike, has been the ability for customers to try out our new AI chips, particularly the Gaudi family, without having to commit to buying them first.
Analyst Take: This is a shrewd strategy on Intel’s part. When it comes to AI workloads, so much emphasis over the past year has been on the need for proven GPUs, driving outsized demand for – and limiting the supply of – NVIDIA GPUs. Intel, in this sense, was an unproven commodity. Getting its new Gaudi AI accelerators into the hands of prospective customers not only fine-tuned the product but also built customer confidence in buying Gaudi and other AI chips from Intel.
Q: This next payoff is something I wouldn’t have necessarily expected.
Lele: I agree that parts of it we didn’t expect, and the size of the response has been unexpected. The third payoff is that we have been able to help enterprise customers meet their AI compute workload needs. Most of the enterprises I’m speaking of run private clouds or data centers. They and their infrastructure partners haven’t been able to source enough of the AI chips they need to run all of the AI workloads they want to run. Intel Developer Cloud has attracted more than 40 enterprises that are now paying for AI deployments on Intel Developer Cloud. We didn’t necessarily expect that big of a response. It goes to show the enterprise appetite and the market demand for AI compute. But the response is due to more than just the limited capacity to run AI workloads; it is also due to the appealing price/performance benefits of Gaudi2 compute through Intel Developer Cloud – particularly as we lean into these high-performing, AI-specific chips. Then, on top of the price/performance, the software package umbrella – runtimes, etc. – aligns nicely and is integrated to the point that customers have commented that their time to market is much faster than they expected.
The fourth payoff keys on your original curiosity about AI start-ups. As of now, more than 50 AI start-ups are paying for AI deployments on Intel Developer Cloud. They benefit from the price/performance and speed to market; we benefit from the chance to understand and work with a lot of innovators in the AI space.
Analyst Take: Intel is famously known for silicon. Staying ahead by anticipating compute needs is notoriously difficult to do, as the 2023 scramble around generative AI compute workloads showed. R&D and production cycles for computer chips are long. But what if that cycle could be changed a bit? Shortened? What happens when disruptions like generative AI create unforeseen market barriers and opportunities? Intel Developer Cloud is a shrewd initiative for Intel on many fronts, giving the company a much quicker way to iterate on and test chip designs with both traditional enterprise customers and cutting-edge startups. At the same time, Intel is taking advantage of its own cloud infrastructure to better serve a market that desperately needs to move quickly on disruptive generative AI during 2024 and perhaps beyond.
Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.
Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.
Other Insights from The Futurum Group:
AI Compute Relief? Intel Gaudi 2 Databricks Testing Indicates Yes
Intel Gaudi2: A CPU Alternative to GPUs in the AI War?
Intel AI Everywhere: Ambitious Vision for the Tech Titan
Author Information
Mark comes to The Futurum Group from Omdia’s Artificial Intelligence practice, where his focus was on natural language and AI use cases.
Previously, Mark worked as a consultant and analyst providing custom and syndicated qualitative market analysis with an emphasis on mobile technology, identifying trends and opportunities for companies like Syniverse and ABI Research. He has been cited by international media outlets including CNBC, The Wall Street Journal, Bloomberg Businessweek, and CNET. Based in Tampa, Florida, Mark is a veteran market research analyst with 25 years of experience interpreting the technology business and holds a Bachelor of Science from the University of Florida.