by Adriaan Rol
Last week Google and collaborators published a paper in which they claim to have achieved Quantum Supremacy, one of the major milestones in quantum computing. The idea of quantum supremacy is to use a programmable quantum device to perform a task that is out of reach for any classical computer. Google claims to have solved a problem in seconds that would take tens of thousands of years on a state-of-the-art supercomputer. The quantum supremacy experiment has been a long-standing milestone in the field of quantum computation, and as such, skepticism has arisen; soon after publication of the article, a group at IBM Research challenged the results.
Rather than joining in on the controversy of whether or not Google has really achieved quantum supremacy, I want to focus on some more basic questions: what is quantum supremacy, how does one demonstrate it, and why is this such an important milestone?
At its core the idea of quantum supremacy is quite straightforward: use a quantum computational device to perform a task that cannot be performed on a classical computer. Because the task itself does not need to be useful in any way, it can be tailor-made to take maximal advantage of the strengths of the quantum computer while containing as little structure as possible. The latter makes it harder to simulate classically. The simplest such experiment is to sample outcomes from random circuits that contain a lot of entangling gates, so that non-classical states are created. However, simply doing random things that are hard to simulate is not enough; after all, scattering laser light off a dust particle creates a similarly hard-to-simulate interference pattern, but we would not call that a computation. The quantum computational device must involve some degree of programmability, otherwise it is just an elaborate physics experiment that is hard to simulate.
The experiment is performed on a device containing a grid of 53 superconducting qubits that operate in the circuit-based paradigm for quantum computing. A key feature of these types of devices is that they are fully programmable; one can in principle execute any quantum circuit on them. The computational task is to determine the distribution of outcome probabilities one gets after applying a pseudo-random circuit to the system. The pseudo-random circuits are composed of multiple layers, each performing random single-qubit gates on all qubits simultaneously followed by a specific two-qubit gate on specific pairs of qubits.
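To make this layered structure concrete, here is a minimal statevector sketch of such a pseudo-random circuit on a handful of qubits. This is illustrative code, not Google's: it alternates layers of random single-qubit gates with CZ gates on neighbouring pairs (the paper uses a different two-qubit gate and a 2D grid), then samples bitstrings from the resulting state.

```python
import numpy as np

rng = np.random.default_rng(0)
n_qubits, n_cycles = 4, 6

def random_su2(rng):
    # Random 2x2 unitary via QR decomposition of a random complex matrix
    z = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def apply_1q(state, gate, target, n):
    # Reshape so the target qubit is its own axis, then contract with the gate
    state = state.reshape([2] * n)
    state = np.moveaxis(state, target, 0)
    state = np.tensordot(gate, state, axes=([1], [0]))
    state = np.moveaxis(state, 0, target)
    return state.reshape(-1)

def apply_cz(state, a, b, n):
    # CZ flips the sign of every amplitude where both qubits are |1>
    state = state.reshape([2] * n)
    idx = [slice(None)] * n
    idx[a], idx[b] = 1, 1
    state[tuple(idx)] *= -1
    return state.reshape(-1)

state = np.zeros(2 ** n_qubits, dtype=complex)
state[0] = 1.0  # start in |00...0>
for _ in range(n_cycles):
    for q in range(n_qubits):  # layer of random single-qubit gates
        state = apply_1q(state, random_su2(rng), q, n_qubits)
    for a in range(0, n_qubits - 1, 2):  # layer of entangling gates
        state = apply_cz(state, a, a + 1, n_qubits)

probs = np.abs(state) ** 2
samples = rng.choice(2 ** n_qubits, size=10, p=probs)
```

The classical cost of this simulation is the reason the experiment works as a supremacy demonstration: the statevector has 2^n complex amplitudes, so at 53 qubits this direct approach needs petabytes of memory.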
The moment we have a device capable of achieving quantum supremacy, a very interesting problem arises: if we cannot solve the chosen problem classically, how do we verify that the quantum computer is working and not just producing random garbage bits as output? After all, the solution to this problem cannot easily be verified, as it can be in, for example, prime factorization. The key to this problem lies in a combination of control circuits and a way to quantify how well the bit outcomes of the computational device correspond to the expected outcomes. For highly entangled states the distribution of bit outcomes is not uniformly random but rather follows a specific distribution. Because of the nature of highly entangled states, even a single error in the circuit results in a very different distribution. The deviation of the observed distribution from the expected one therefore forms a good means of quantifying circuit performance. This is expressed as the so-called linear cross-entropy benchmarking fidelity FXEB.
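A small numerical sketch can show how FXEB separates a working device from a garbage one. It uses the definition FXEB = 2^n · ⟨P(x_i)⟩ − 1, where P(x_i) is the ideal probability of each measured bitstring; the Porter-Thomas-distributed "ideal" probabilities are modelled here by the squared amplitudes of a random complex vector (all names are illustrative).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 12
dim = 2 ** n

# Output probabilities of a random highly entangled state follow the
# Porter-Thomas distribution; a random complex vector models this well.
amps = rng.normal(size=dim) + 1j * rng.normal(size=dim)
p_ideal = np.abs(amps) ** 2
p_ideal /= p_ideal.sum()

def f_xeb(samples, p_ideal, dim):
    # Linear cross-entropy benchmarking fidelity: 2^n * <P(x_i)> - 1
    return dim * p_ideal[samples].mean() - 1

good = rng.choice(dim, size=50_000, p=p_ideal)   # a perfect device
bad = rng.integers(dim, size=50_000)             # uniform garbage bits

print(f_xeb(good, p_ideal, dim))  # close to 1
print(f_xeb(bad, p_ideal, dim))   # close to 0
```

The intuition: a working device preferentially outputs the bitstrings the ideal distribution makes likely, so the average ideal probability of its samples is roughly twice the uniform value 1/2^n, giving FXEB ≈ 1; a broken device samples uniformly and gives FXEB ≈ 0.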
Because evaluating FXEB still requires comparing to the expected distribution, and thus simulating the circuit, the authors needed to play some tricks in order to trust the outcomes in the supremacy regime. The authors performed three different variants of the pseudo-random circuits: "patch" circuits, in which a slice of two-qubit gates is removed to create two non-interacting patches of qubits; "elided" circuits, where only a fraction of the gates between the patches is removed; and full "verification" circuits, in which all gates are present. Both the "patch" and "elided" circuits are designed to reduce the amount of entanglement involved, so that it remains feasible to simulate the experiments and thus determine FXEB. In addition to these three types of circuits, the Google team performed a careful characterization of all operations (single- and two-qubit gates, readout) to be able to predict FXEB as they increased the number of qubits.
By performing the three variants of these pseudo-random circuits while involving more and more qubits and increasing the number of cycles, it was possible to determine FXEB while gradually increasing the complexity of the experiments. In the key figure of the Quantum Supremacy paper the authors showed the measured FXEB for all circuit variants slowly decreasing with increasing qubit count. More importantly, however, the performance of all three variants agreed almost perfectly while also corresponding well to a prediction based on the characterized error rates of the operations. This last bit is important because it means that the performance of the simpler, "easy" to simulate, circuits is a good predictor of the performance of the full circuits. Once the performance was pushed into the supremacy regime, it was no longer possible to simulate the full circuits, and hence not possible to determine FXEB. However, the control circuits showed the same trend in performance continuing as expected. Because the performance of the control circuits corresponded so well to the full circuits, it is reasonable to state that the full circuit was performing as expected, thereby demonstrating quantum supremacy.
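The error-rate-based prediction mentioned above can be sketched as a back-of-the-envelope calculation: if each operation fails independently, the circuit fidelity is roughly the product of the individual success probabilities. The error rates and gate counts below are illustrative, of the same order as those reported in the paper, not the exact published numbers.

```python
# Illustrative per-operation error rates (single-qubit gate, two-qubit gate,
# readout) and a rough gate count for a 53-qubit, 20-cycle circuit.
e_1q, e_2q, e_read = 0.0016, 0.0062, 0.038
n_qubits, n_cycles = 53, 20
n_1q = n_qubits * n_cycles         # one single-qubit gate per qubit per cycle
n_2q = (n_qubits // 2) * n_cycles  # roughly one two-qubit gate per pair per cycle

# Independent-error model: fidelity is the product of success probabilities
f_pred = ((1 - e_1q) ** n_1q) * ((1 - e_2q) ** n_2q) * ((1 - e_read) ** n_qubits)
print(f"predicted F_XEB ~ {f_pred:.4f}")
```

Even with per-gate error rates well below a percent, the predicted FXEB for the full circuit ends up on the order of a tenth of a percent, which is why measuring it requires sampling millions of bitstrings.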
Now that we understand what was done to demonstrate Quantum Supremacy, we can briefly discuss what it means. Let's get the first big misconception out of the way: if you have come this far, it should be pretty obvious that the Quantum Supremacy demonstration by itself is not very useful. None of the promises of quantum computers are achieved: factoring large numbers into primes, simulating big molecules and quantum phenomena, etc. One might then question why this experiment is widely compared to the Wright brothers' first flight. Although the Wright brothers were not the first to build an experimental aircraft, nor was their first flight (12 seconds, 37 meters) particularly useful, they are credited with the first controlled, sustained flight of a powered, heavier-than-air aircraft. In a similar spirit, Google is not the first to build a prototype quantum computer, but this is the first demonstration of a quantum computer performing a task that is extremely difficult for classical computers. Although useful applications of quantum computers may still be many years away, and classical computers still overpower the upcoming quantum hardware for useful applications, the quantum supremacy experiment puts quantum computation on a different level than classical computation. And this gives hope in this technology for years to come.
Footnotes and References
 If you are interested in the nuances between Google’s claim and IBM’s counter claim I recommend this excellent blog post by Scott Aaronson
 Actually 54, but one qubit is broken.
 Subject of course to practical constraints such as the connectivity, native gate set, number of qubits and error rates.
 The Porter-Thomas distribution
 With the exception of being used as a fancy random number generator
Adriaan is an experimental physicist in Leo DiCarlo’s group where they are working on quantum computing using superconducting transmon qubits. He enjoys trying to find the (abstract) essence of things while at the same time being able to experiment to see if his ideas actually work. Whenever he is not in the lab you’ll probably find him on the water or enjoying an overpriced coffee at his favourite coffee place.
Any views or opinions represented in this blog are personal and belong solely to the author and do not represent those of people, institutions or organizations that the author may or may not be associated with in professional or personal capacity, unless explicitly stated.