The News: Researchers at the University of California, Berkeley and IBM Quantum have demonstrated that new error mitigation techniques, applied during the simulation of an Ising model (a standard physics model of interacting spins whose simulation grows more complex as the system evolves), could generate accurate results on a 127-qubit IBM Eagle quantum computer when benchmarked against results from a classical supercomputer, even as the calculation grew more complex. The research, published in the journal Nature, shows that the quantum computer eventually reached a level of complexity in the calculation that the supercomputer could not match, and based on the accuracy of the results up to that point, the researchers have a high degree of confidence that the results that could not be benchmarked were also accurate.
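For readers unfamiliar with the model, a standard transverse-field form of the Ising Hamiltonian describes spins on a lattice coupled to their neighbors with strength $J$ and subject to a transverse field $h$ (a textbook form shown for context; the exact variant simulated is detailed in the Nature paper):

$$H = -J \sum_{\langle i,j \rangle} Z_i Z_j + h \sum_i X_i$$

Simulating how such a system evolves in time is what rapidly outgrows classical resources as the number of spins and time steps increases.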
Quantum computers are error-prone, due in part to their extreme sensitivity to the surrounding environment. To deliver reliable results, a computer must fix errors faster than they accumulate, which to date has been beyond even the most advanced machines. This research centers on how to deal with errors in quantum calculations, which has been a key issue with today’s noisy, intermediate-scale quantum (NISQ) era computers. The term, coined in 2018 by noted quantum researcher John Preskill, refers to the performance limitations imposed by error rates and the number of physical or logical qubits that can be used in a quantum computer.
You can see a summary of the research in the IBM Press Release.
‘Noisy’ Quantum Computers May Provide Value Before the Era of Fault Tolerance
Analyst Take: Researchers at IBM Quantum and the University of California, Berkeley collaborated on a project in which they took turns running increasingly complex quantum computations on both quantum and classical hardware, with the goal of assessing whether today’s noisy, error-prone quantum computers are useful for producing accurate results for certain kinds of problems. Youngseok Kim and Andrew Eddins, scientists with IBM Quantum, would test a calculation on the 127-qubit IBM Quantum Eagle processor, while UC Berkeley’s Sajant Anand would attempt the same calculation using state-of-the-art classical approximation methods on supercomputers located at Lawrence Berkeley National Lab (LBNL) and Purdue University.
According to the researchers, Eagle returned correct answers every time the results could be verified, and based on those results, they extrapolated that the quantum computer was still returning accurate answers even for problems that challenged state-of-the-art classical simulation methods. The researchers believe these performance results are evidence that noisy quantum computers will be able to provide value sooner than expected, thanks to a combination of advances in IBM Quantum hardware and the development of new error mitigation methods.
Mitigating Errors via Pulse Stretching
Today’s quantum computers are noisy, meaning that unwanted disturbances corrupt qubit states and operations, often due to environmental factors such as temperature, signal crosstalk, quantum decoherence, and implementation errors in quantum gates. The researchers realized they could amplify the effects of this noise using the same machinery used to control qubits, via a technique called pulse stretching: when they increased the amount of time it took to run individual operations on each qubit, the noise scaled by the same factor.
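A simple way to picture this relationship (an illustrative exponential-decay model, not the paper’s calibrated noise model): if an observable’s noisy expectation value decays with accumulated noise strength $\lambda$, then stretching every pulse by a factor $c$ scales the accumulated noise to $c\lambda$:

$$\langle O \rangle_c \approx \langle O \rangle_{\mathrm{ideal}}\, e^{-c\lambda}$$

Measuring $\langle O \rangle_c$ at several stretch factors $c \geq 1$ is what makes the extrapolation described below possible.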
Over the past several years, the team worked on enhancing the scale and quality of its hardware and developed ways to amplify the noise with greater control, with the goal of estimating expectation values accurately enough to be useful for actual applications.
That work led to the technique and paper published earlier in 2023 on Probabilistic Error Cancellation (PEC). The researchers realized they could assume a basic noise model based on their knowledge of the machine, then learn certain parameters to create a representative model of the noise. By repeating the same quantum computation many times, they could study the effect of inserting new gates to either nullify or amplify the noise. This model scaled reasonably with the size of the quantum processor, allowing the researchers to model the noise of a large quantum processor.
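A minimal sketch of the idea behind PEC, in Python, assuming a single qubit and a simple depolarizing noise channel of known strength (the toy model and parameter values here are assumptions for illustration; IBM’s method learns a sparse Pauli noise model across the full processor):

```python
import numpy as np

# Toy probabilistic error cancellation (PEC) on one qubit. The noisy
# "gate" is a depolarizing channel of known strength p; its inverse is a
# quasi-probability mixture of Pauli corrections, one with negative weight.

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
PAULIS = [I, X, Y, Z]

p = 0.1  # assumed depolarizing strength (learned from the device in practice)

def depolarize(rho):
    """Noisy channel: E(rho) = (1-p) rho + (p/3)(X rho X + Y rho Y + Z rho Z)."""
    return (1 - p) * rho + (p / 3) * sum(P @ rho @ P for P in PAULIS[1:])

# The channel shrinks the X, Y, Z components of the state by f = 1 - 4p/3;
# its inverse stretches them by s = 1/f. Written as a Pauli channel, the
# inverse has quasi-probabilities q: they sum to 1, but three are negative.
s = 1 / (1 - 4 * p / 3)
q = np.array([(3 * s + 1) / 4, (1 - s) / 4, (1 - s) / 4, (1 - s) / 4])
gamma = np.abs(q).sum()  # sampling overhead

rho0 = 0.5 * (I + X)     # prepare |+><+|, so the ideal <X> is exactly 1

rng = np.random.default_rng(0)
estimates = []
for _ in range(100_000):
    k = rng.choice(4, p=np.abs(q) / gamma)        # sample a correction Pauli
    rho = PAULIS[k] @ depolarize(rho0) @ PAULIS[k]
    value = np.trace(X @ rho).real                # <X> for this sampled circuit
    estimates.append(gamma * np.sign(q[k]) * value)

print("noisy <X>:", round(np.trace(X @ depolarize(rho0)).real, 4))  # ~0.867
print("PEC   <X>:", round(float(np.mean(estimates)), 4))            # ~1.0
```

The negative quasi-probabilities are what make the method “probabilistic”: corrected circuits are sampled at random and their outcomes reweighted by the sign, at the cost of a sampling overhead that grows with gamma.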
Building on this noise model, the researchers found they could manipulate and amplify the noise in a controlled, accurate way, and then apply classical post-processing to extrapolate back to what the calculation would look like without noise, using a method called Zero Noise Extrapolation (ZNE). Through this kind of error mitigation, the researchers found they could produce certain kinds of accurate calculations before the era of full-scale error correction, even with noisy quantum computers. And these calculations could be useful in allowing quantum computers to address certain real-world problems.
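A minimal ZNE sketch in Python, continuing the exponential-decay picture above (the decay model, noise rate, and “true” value are assumptions for illustration; the actual experiment amplifies noise using the learned noise model):

```python
import numpy as np

# Toy zero-noise extrapolation (ZNE): measure an expectation value at
# several amplified noise levels (stretch factors c >= 1), fit a decay
# model, and evaluate the fit at c = 0, the zero-noise limit.

rng = np.random.default_rng(1)
stretch = np.array([1.0, 1.5, 2.0, 2.5])  # noise amplification factors
lam, ideal = 0.3, 0.75                    # assumed noise rate and true value

# Simulated "measurements": exponential decay plus a little shot noise.
measured = ideal * np.exp(-lam * stretch) + rng.normal(0, 0.002, stretch.size)

# Exponential fit: log<O> = log(ideal) - lam * c, so the intercept at
# c = 0 recovers the noiseless value.
slope, intercept = np.polyfit(stretch, np.log(measured), 1)
zne_estimate = np.exp(intercept)

print(f"noisiest measurement : {measured[-1]:.4f}")  # ~0.35
print(f"ZNE estimate (c -> 0): {zne_estimate:.4f}")  # ~0.75
print(f"true noiseless value : {ideal:.4f}")
```

In practice the choice of extrapolation model matters, which is why the researchers’ confidence rested on benchmarking the mitigated results against exact classical answers wherever those were available.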
Demonstrated and Measured Success with Processing a Real-World Problem
The results of the research are exciting for the quantum community, as it was previously thought that NISQ-era quantum computers would not be accurate enough to address real-world problems and help usher in the era of quantum advantage, which is commonly defined as the demonstrated and measured ability to process a real-world problem faster on a quantum computer than on a classical computer. Even without reaching that milestone, quantum computers can be viewed as viable tools that offer value beyond classical machines, such as using a quantum computer to verify classical algorithms.
According to the researchers, the sheer size of the circuits (127 qubits running 60 steps’ worth of quantum gates) makes them some of the largest and most complex ever run successfully. However, the results should be tempered with a dose of reality: the researchers are not claiming that the specific calculation tested on quantum computers exceeds the abilities of classical computers, as other specialized classical methods may soon return correct answers for the calculation they tested.
The real value lies in the fact that the concurrent testing of quantum computers running a complex circuit and classical computers verifying the quantum results will improve both computational domains, while providing users with increasing confidence in the abilities of near-term quantum computers.
Results Indicate Potential Near-Term Use of Quantum Computers
The positive results from this research have led IBM to announce that by the end of 2023, its fleet of quantum systems running on the cloud and onsite at partner locations will be powered by the hardware used to perform this breakthrough, essentially quantum processors with at least 127 qubits. IBM will also initiate further collaboration through commitments to four emerging working groups, comprising leading research institutions, companies, and universities, that will explore quantum’s value across healthcare and life sciences, high energy physics, high performance computing, and optimization.
While this research announcement is positive, achieving quantum advantage will still require significant time, resources, and effort. That said, breakthroughs such as the one announced by the IBM research team help ensure that the technology does not enter a prolonged “quantum winter,” in which innovation slows or stops, chilling the environment for continued investment and testing of new quantum techniques and hardware.
Further, this announcement helps reinforce IBM’s position as a clear leader in the quantum hardware category, given the proposed expansion of the IBM Eagle-class quantum computers’ footprint across its cloud infrastructure and partner sites. While quantum computers are still nascent, the ability to mitigate quantum noise may be a catalyst for other quantum technology breakthroughs.
Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.
Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.
Other insights from The Futurum Group:
IBM Think 2023: IBM Charts the Course to Quantum Safe Computing
The Six Five In the Booth: Securing Data with IBM’s Quantum-safe Cryptography Algorithms
How IBM and Vodafone are Working to Create a Quantum-Safe World for Telcos
Author Information
Keith has over 25 years of experience in research, marketing, and consulting-based fields.
He has authored in-depth reports and market forecast studies covering artificial intelligence, biometrics, data analytics, robotics, high performance computing, and quantum computing, with a specific focus on the use of these technologies within large enterprise organizations and SMBs. He has also established strong working relationships with the international technology vendor community and is a frequent speaker at industry conferences and events.
In his career as a financial and technology journalist he has written for national and trade publications, including BusinessWeek, CNBC.com, Investment Dealers’ Digest, The Red Herring, The Communications of the ACM, and Mobile Computing & Communications, among others.
He is a member of the Association of Independent Information Professionals (AIIP).
Keith holds dual Bachelor of Arts degrees in Magazine Journalism and Sociology from Syracuse University.