Quantum error correction
Quantum error correction is used in quantum computing to protect quantum information from errors due to decoherence and other quantum noise. It is essential for achieving fault-tolerant quantum computation that can deal not only with noise on stored quantum information, but also with faulty quantum gates, faulty state preparation, and faulty measurements.
Classical error correction employs redundancy. The simplest approach is to store the information multiple times and, if the copies are later found to disagree, take a majority vote. For example, suppose we copy a bit three times, and a noisy error corrupts the three-bit state so that one bit is equal to zero but the other two are equal to one. Assuming that errors are independent and each occurs with some probability p < 1/2, it is most likely that only a single-bit error occurred and the transmitted message was three ones: the probability of a particular single flip, p(1-p)^2, exceeds that of the corresponding double flip, p^2(1-p). A double-bit error is possible, in which case the transmitted message was three zeros, but this outcome is less likely than the one above.
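A minimal sketch of this classical repetition scheme in Python (illustrative, not part of the article; the function names are hypothetical). It checks that majority voting lowers the error rate from p to roughly 3p^2(1-p) + p^3:

```python
# Three-bit repetition code with independent bit-flip noise of probability p.
import random

def encode(bit):
    """Store the bit three times."""
    return [bit, bit, bit]

def noisy_channel(bits, p):
    """Flip each bit independently with probability p."""
    return [b ^ (random.random() < p) for b in bits]

def decode(bits):
    """Majority vote: any single flipped bit is corrected."""
    return int(sum(bits) >= 2)

p, trials = 0.1, 100_000
errors = sum(decode(noisy_channel(encode(0), p)) != 0 for _ in range(trials))
print(errors / trials)   # close to 3*p**2*(1-p) + p**3 = 0.028, well below p = 0.1
```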
Copying quantum information is not possible due to the no-cloning theorem. This theorem appears to present an obstacle to formulating a theory of quantum error correction. But it is possible to spread the information of one qubit onto a highly entangled state of several (physical) qubits. Peter Shor first discovered this method of formulating a quantum error correcting code, storing the information of one qubit onto a highly entangled state of nine qubits. A quantum error correcting code protects quantum information against errors of a limited form.
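The smallest illustration of this idea is the three-qubit bit-flip code, one of the building blocks of Shor's nine-qubit code. The following sketch (illustrative, using plain state vectors) shows that encoding is entanglement rather than copying: the logical state is a|000> + b|111>, not three copies of a|0> + b|1>:

```python
# Encoding one qubit into an entangled three-qubit state (bit-flip code).
import numpy as np

def encode_bitflip(alpha, beta):
    """Map a|0> + b|1> to the logical state a|000> + b|111>."""
    state = np.zeros(8, dtype=complex)   # basis ordered |000>, |001>, ..., |111>
    state[0b000] = alpha
    state[0b111] = beta
    return state

logical = encode_bitflip(0.6, 0.8)       # any normalised amplitudes a, b
print(np.nonzero(logical)[0])            # -> [0 7]: only |000> and |111> appear
```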
Classical error correcting codes use a syndrome measurement to diagnose which error corrupts an encoded state; the error is then reversed by applying a corrective operation based on the syndrome. Quantum error correction also employs syndrome measurements: a multi-qubit measurement is performed that does not disturb the quantum information in the encoded state but retrieves information about the error. A syndrome measurement can determine whether a qubit has been corrupted, and if so, which one. What is more, the outcome of this operation (the syndrome) tells us not only which physical qubit was affected, but also in which of several possible ways it was affected. The latter is counter-intuitive at first sight: since noise is arbitrary, how can the effect of noise be one of only a few distinct possibilities? In most codes, the effect is either a bit flip, or a sign (phase) flip, or both, corresponding to the Pauli matrices X, Z, and Y. The reason is that the syndrome measurement has the projective effect of a quantum measurement. So even if the error due to the noise was arbitrary, it can be expressed as a superposition of basis operations, the error basis (here given by the Pauli matrices and the identity). The syndrome measurement "forces" the qubit to "decide" for a certain specific "Pauli error" to "have happened", and the syndrome tells us which, so that the same Pauli operator can be applied again to the corrupted qubit to revert the effect of the error.
The syndrome measurement tells us as much as possible about the error that has happened, but nothing at all about the value that is stored in the logical qubit—as otherwise the measurement would destroy any quantum superposition of this logical qubit with other qubits in the quantum computer.
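Continuing the bit-flip code from above, a sketch of syndrome extraction (illustrative only): the checks Z1·Z2 and Z2·Z3 commute with the encoded information, so their ±1 values identify which qubit, if any, was flipped without revealing the amplitudes a and b. A real device measures these checks projectively using ancilla qubits; here we simply read off the expectation values, which are exactly ±1 for the states involved:

```python
# Syndrome extraction and correction for the three-qubit bit-flip code.
import numpy as np

I = np.eye(2); X = np.array([[0, 1], [1, 0]]); Z = np.diag([1.0, -1.0])

def kron(*ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

Z1Z2, Z2Z3 = kron(Z, Z, I), kron(I, Z, Z)
corrections = {(+1, +1): kron(I, I, I),   # no error
               (-1, +1): kron(X, I, I),   # qubit 1 flipped
               (-1, -1): kron(I, X, I),   # qubit 2 flipped
               (+1, -1): kron(I, I, X)}   # qubit 3 flipped

a, b = 0.6, 0.8
logical = np.zeros(8); logical[0b000], logical[0b111] = a, b
corrupted = kron(I, X, I) @ logical       # a bit flip hits the middle qubit

syndrome = (int(round(corrupted @ Z1Z2 @ corrupted)),
            int(round(corrupted @ Z2Z3 @ corrupted)))
recovered = corrections[syndrome] @ corrupted
print(syndrome, np.allclose(recovered, logical))   # (-1, -1) True
```

For an arbitrary error, say a small unwanted rotation of one qubit, the same measurement would project the state onto one of these four outcomes; this is the discretization of errors described above.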
Models
Over time, researchers have come up with several codes:
- Peter Shor's 9-qubit-code, a.k.a. the Shor code, encodes 1 logical qubit in 9 physical qubits and can correct for arbitrary errors in a single qubit.
- Andrew Steane found a code which does the same with 7 instead of 9 qubits, see Steane code.
- Raymond Laflamme found a class of 5-qubit codes that do the same and also have the property of being fault-tolerant.
- A generalisation of this concept is the class of CSS codes, named for their inventors A. R. Calderbank, Peter Shor and Andrew Steane. According to the quantum Hamming bound, encoding a single logical qubit and providing for arbitrary error correction in a single qubit requires a minimum of 5 physical qubits.
- A more general class of codes (encompassing the former) are the stabilizer codes discovered by Daniel Gottesman ([1]), and by A. R. Calderbank, Eric Rains, Peter Shor, and N. J. A. Sloane ([2], [3]); these are also called additive codes. A small illustration of the stabilizer formalism is sketched after this list.
- A newer idea is Alexei Kitaev's topological quantum codes and the more general idea of a topological quantum computer.
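As an illustration of the stabilizer-code idea (a sketch, not from the article), the five-qubit code mentioned above can be specified by four commuting Pauli-string generators, the cyclic shifts of XZZXI. The check below verifies the defining commutation property by brute force:

```python
# The four stabilizer generators of the five-qubit code must commute pairwise.
from itertools import combinations
import numpy as np

PAULI = {"I": np.eye(2),
         "X": np.array([[0, 1], [1, 0]]),
         "Z": np.diag([1.0, -1.0])}

def operator(pauli_string):
    """Build the 2^n x 2^n matrix for a Pauli string such as 'XZZXI'."""
    out = np.array([[1.0]])
    for p in pauli_string:
        out = np.kron(out, PAULI[p])
    return out

generators = ["XZZXI", "IXZZX", "XIXZZ", "ZXIXZ"]   # cyclic shifts of XZZXI
for g, h in combinations(generators, 2):
    G, H = operator(g), operator(h)
    assert np.allclose(G @ H, H @ G), (g, h)
print("all generators commute")
```

The code space is the joint +1 eigenspace of these generators; four independent checks on five qubits leave 2^5 / 2^4 = 2 dimensions, so exactly one logical qubit is encoded, consistent with the quantum Hamming bound mentioned above.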
That these codes indeed allow quantum computations of arbitrary length is the content of the threshold theorem, found by Michael Ben-Or and Dorit Aharonov. It asserts that all errors can be corrected by concatenating quantum codes such as the CSS codes, that is, by re-encoding each logical qubit by the same code again, and so on, on logarithmically many levels, provided the error rate of individual quantum gates is below a certain threshold; otherwise, the attempts to measure the syndrome and correct the errors would introduce more new errors than they correct.
As of late 2004, estimates for this threshold indicate that it could be as high as 1-3% [4], provided that there are sufficiently many qubits available.
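A rough, illustrative model of why such a threshold exists (an assumption for this sketch, not a result stated in the article): with concatenation, each extra level of encoding roughly squares the ratio of the error rate to the threshold, so a physical rate p below an assumed threshold p_th is suppressed doubly exponentially in the number of levels, while a rate above it only gets worse:

```python
# Rough concatenation model: logical rate ~ p_th * (p / p_th) ** (2 ** levels).
p_th = 0.01                      # assumed threshold, in the 1-3% range quoted above

def logical_error_rate(p, levels):
    return min(1.0, p_th * (p / p_th) ** (2 ** levels))

for p in (0.005, 0.02):          # one physical rate below, one above the threshold
    rates = [logical_error_rate(p, k) for k in range(4)]
    print(p, ["%.1e" % r for r in rates])
# Below the threshold the logical rate plummets with each level of encoding;
# above the threshold, concatenation only makes things worse.
```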