AI in Context: UXL to Be an Open-Source Alternative to NVIDIA’s CUDA?

The News: Though announced in September 2023, the UXL Foundation has been getting more attention lately as it plans to provide an open-source alternative to NVIDIA’s CUDA graphics processing unit (GPU) coding platform. Visit the Unified Acceleration Foundation (UXL) website and read its announcement press release and the recent Reuters article.

Analyst Take: Here’s a tried-and-true strategy: Let a vendor define a proprietary software product or offering, allow the vendor to refine the software’s features over several years and releases, and then get together with some of your industry friends to create an open-source alternative. You know there is a market need, and you can use scary phrases like “avoid vendor lock-in.” Leverage everyone’s contributions to reduce your development costs. Provide a pathway from the original vendor’s platform and APIs to your new open version. Work hard to make that software run best on your platform and support it well. Become the new market leader.

Vendors are attempting this strategy again via the UXL Foundation versus NVIDIA and its CUDA platform and APIs for programming GPUs. It could succeed if the participants invest aggressively, have a realistic timeline for success, court NVIDIA and new developers, and don’t start an API war with NVIDIA that frustrates customers.

GPUs: From Games to AI

Imagine you are playing a video game and walking through a forest. Your visual perspective changes as you move, and objects such as trees appear closer or farther away. As you turn your virtual head, objects rotate, colors change, and the lighting smoothly transitions to your new point of view. Nature and physics are not causing these changes, but your brain still processes them as at least partially life-like.

Your computer hardware, notably a GPU, does the mathematical calculations before your display shows you the evolving images. This math is usually floating-point-intensive, operating on millions of numbers with fractional parts, such as π. Linear algebra and trigonometry handle the rotations and transformations of the objects. Moreover, it is visually unacceptable to finish one little area of the scene and only then move on to another. The GPU therefore processes computations in parallel, working seamlessly with your main central processing unit (CPU) cores, memory, and storage. Speed is critical; for this reason, GPUs belong to a class of processors called accelerators.
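To make the linear algebra concrete, here is a minimal, CPU-only Python sketch of the kind of rotation a graphics pipeline performs. The point coordinates and angle are invented for the example; the key observation is that each point is transformed independently, which is exactly why a GPU can process millions of them in parallel.

```python
import math

def rotate_2d(points, angle_rad):
    """Rotate a list of (x, y) points about the origin.

    Each point is independent of the others, which is why a GPU
    can apply the same transformation to millions of vertices
    simultaneously.
    """
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    # Standard 2x2 rotation matrix applied to every point:
    # [x'] = [c -s][x]
    # [y']   [s  c][y]
    return [(c * x - s * y, s * x + c * y) for (x, y) in points]

# Rotating the point (1, 0) by 90 degrees lands on (0, 1).
corners = [(1.0, 0.0), (0.0, 1.0)]
rotated = rotate_2d(corners, math.pi / 2)
```

A real renderer uses 4x4 matrices over 3D homogeneous coordinates, but the structure is the same: one small matrix multiply per vertex, repeated at enormous scale.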

We need the same fast parallel math for many AI applications, including machine learning (ML), deep neural networks, and large language models (LLMs). Hence, NVIDIA, a company that produced GPUs for video graphics, became highly successful as AI applications exploded over the past several years. Kudos to NVIDIA for its success and innovation. The company is not the only GPU provider, however.

What Is CUDA?

Great hardware needs equally great software to control it. In 2006, NVIDIA introduced the CUDA platform and APIs to program its GPUs and use them alongside other computer hardware. CUDA originally stood for Compute Unified Device Architecture but goes by the acronym alone today. CUDA contains high-performance mathematical libraries, including functions for matrix and vector calculations, random number generation, and Fourier transforms. As of this writing, CUDA 12.4 is the latest release.
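CUDA's core programming model assigns each element of a computation to its own GPU thread. As a rough, CPU-only Python analogy (this is not NVIDIA's actual API; the function names are invented for illustration), a CUDA-style vector addition looks like a per-index "kernel" that a grid of threads would execute simultaneously:

```python
def add_kernel(i, a, b, out):
    """Per-element 'kernel': in CUDA, each GPU thread runs this body
    once, with i derived from its block and thread indices."""
    out[i] = a[i] + b[i]

def launch(kernel, n, *args):
    """Stand-in for a CUDA kernel launch: a GPU would run all n
    invocations in parallel; here we simply loop sequentially."""
    for i in range(n):
        kernel(i, *args)

a = [1.0, 2.0, 3.0]
b = [10.0, 20.0, 30.0]
out = [0.0] * 3
launch(add_kernel, 3, a, b, out)  # out becomes [11.0, 22.0, 33.0]
```

The design point is that the programmer writes the work for one element, and the platform maps that work onto thousands of hardware threads; CUDA's libraries package common patterns such as matrix multiplication so developers rarely write such kernels by hand.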

Developers admire CUDA’s programming power and the robust ecosystem NVIDIA has built around the platform. By 2020, NVIDIA reported 2 million registered CUDA developers. In 2023, NVIDIA announced that the number of registered developers had doubled to 4 million, no doubt because of the rise of AI applications.

On two occasions in my professional career as an IT industry executive, analysts suggested that we follow the CUDA model to build and promote a development environment and create an ecosystem of users. This aspect of CUDA has worked very well in many ways.

CUDA is proprietary to NVIDIA and supports its hardware. Developers have made some efforts to port the interfaces to other hardware, but other GPU vendors have not broadly adopted them.

CUDA Sets the Stage for Open Source

If I were an accelerator chip maker and saw CUDA’s proprietary success on NVIDIA’s chips, I would be very tempted to repeat the formula. Since NVIDIA has an 18-year and 12-version head start, it would be challenging to build a large ecosystem around only my hardware and the software that controls it. However, suppose I were to get together with other accelerator chip makers and integrators. In that case, we might be able to produce a new software development kit, necessarily open source, that runs on all our hardware.

The advantages of this approach include:

  • Shared cost of development and freedom of action
  • Expertise from many technologists with varied experiences
  • An ecosystem that is a superset of the sum of the individual company ecosystems
  • No proprietary software except at the lowest level close to the hardware

Possible disadvantages include:

  • Loss of control over the direction of the project
  • Slowness due to cross-company architectural disagreements
  • A new codebase that runs on all the hardware platforms, but not well
  • Migration of users away from my platform because the code is portable

The free and open-source communities have successfully worked through these issues many times over the past two decades. In particular, Linux is free software; dozens of companies support and use it, generating billions of dollars of commercial revenue. None of these disadvantages need be a showstopper in 2024 and beyond.

In September 2023, Arm, Fujitsu, Google, Imagination Technologies, Intel, Qualcomm, Samsung, and VMware announced the Unified Acceleration Foundation (UXL) with three goals:

  • “Build a multi-architecture multi-vendor software ecosystem for all accelerators.”
  • “Unify the heterogeneous compute ecosystem around open standards.”
  • “Build on and expand open-source projects for accelerated computing.”

Each hardware vendor must ensure the architecture supports code running optimally on its platform. If the code runs poorly on a given platform, the fault lies with that vendor: providing such support is a tenet of success in an open-source project.

The Quantum Connection

CUDA Quantum is a component of CUDA that controls quantum computing hardware and simulators and integrates them with classical processors such as NVIDIA’s GPUs. The partnership program has dozens of companies as members. Though the press releases keep coming, it seems more newsworthy today if a quantum company is not an NVIDIA partner than if it becomes one.

Does CUDA Quantum run the risk of some new open-source effort to compete with it? Yes, because such a thing could always happen, but no, because CUDA Quantum is already open source under the Apache License Version 2.0 on GitHub.

Moreover, before NVIDIA developed and released CUDA Quantum, there were already multi-backend open-source quantum software development kits such as IBM’s Qiskit and Google’s Cirq.

There is less pressure to apply the UXL strategy against CUDA Quantum, though its interaction model is asymmetrical: NVIDIA GPUs on one side, quantum systems from many vendors on the other. UXL and others should not ignore this situation, but they have higher priorities in the near future.

Some Hard Questions

Just because some big names have gotten behind UXL, success is not guaranteed. Here are some questions that UXL and the industry must answer or otherwise resolve:

  • Will the foundation provide first-class support for NVIDIA hardware as it is ubiquitous in AI and HPC data centers?
  • Will NVIDIA itself join the effort?
  • Are the foundation members willing to support this work for the next one to two decades?
  • Will Apple, AMD, IBM, and Microsoft join UXL? Who else?
  • What open standards will be used or developed for the effort?
  • What is the timeline for measuring success, and what are the metrics?
  • Is this effort 5 years too late to overcome NVIDIA’s and CUDA’s lead?
  • Does the existence of UXL forestall any future government antitrust efforts against NVIDIA if its market share continues to expand quickly?

Key Takeaway: Competitors offering an open-source alternative to successful proprietary software should surprise no one. However, NVIDIA’s lead with CUDA may be too large for UXL to overcome.

I believe UXL has a 50% chance of success. It could be the tortoise to NVIDIA’s hare, but the participants must expand their roster, invest heavily in the code, and build a better developer ecosystem than NVIDIA’s. Any pause or delay will allow NVIDIA to continue its surge. NVIDIA’s hardware and software are very, very good. UXL has a considerable challenge ahead of it and must adopt the best open-source governance and development practices.

Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author is a former IBM employee and holds an equity position in the company. The author holds small equity positions in Arm and Google. The author does not hold any equity positions with any other company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.

Other Insights from The Futurum Group:

Quantum in Context: Microsoft & Quantinuum Create Real Logical Qubits

Quantum in Context: A Qubit Primer

AI in Context: Remarks on the NVIDIA GTC 2024 GenAI and Ethics Panel

Author Information

Dr. Bob Sutor has been a technical leader and executive in the IT industry for over 40 years. Bob’s industry role is to advance quantum and AI technologies by building strong business, partner, technical, and educational ecosystems. The singular goal is to evolve quantum and AI to help solve some of the critical computational problems facing society today. Bob is widely quoted in the press, delivers conference keynotes, and works with industry analysts and investors to accelerate understanding and adoption of quantum technologies.

Bob is the Vice President and Practice Lead for Emerging Technologies at The Futurum Group. He helps clients understand sophisticated technologies in order to make the best use of them for success in their organizations and industries. He is also an Adjunct Professor in the Department of Computer Science and Engineering at the University at Buffalo, New York, USA.

More than two decades of Bob’s career were spent in IBM Research in New York. During his time there, he worked on or led efforts in symbolic mathematical computation, optimization, AI, blockchain, and quantum computing. He was also an executive on the software side of the IBM business in areas including middleware, software on Linux, mobile, open source, and emerging industry standards. He was the Vice President of Corporate Development and, later, Chief Quantum Advocate, at Infleqtion, a quantum computing and quantum sensing company based in Boulder, Colorado, USA.

Bob is a theoretical mathematician by training, has a Ph.D. from Princeton University, and an undergraduate degree from Harvard College.

He’s the author of a book about quantum computing called Dancing with Qubits, published in 2019, with the second edition scheduled for release in April 2024. He is also the author of the 2021 book Dancing with Python, an introduction to Python coding for classical and quantum computing.

Areas in which he’s worked: quantum computing, AI, blockchain, mathematics and mathematical software, Linux, open source, standards management, product management and marketing, computer algebra, and web standards.

