A binary digit, characterized as 0 or 1, is used to represent information in classical computers. A binary digit can represent at most one bit of Shannon information, where a bit is the basic unit of information. In this article, however, the word bit is synonymous with binary digit.
In classical computer technologies, a processed bit is implemented by one of two levels of low DC voltage; while switching from one of these levels to the other, the signal must pass through a so-called forbidden zone as quickly as possible, because electrical voltage cannot change from one level to the other instantaneously.
There are two possible outcomes for the measurement of a qubit, usually taken to be "0" and "1", like the values of a bit or binary digit. However, whereas the state of a bit can only be either 0 or 1, the general state of a qubit according to quantum mechanics can be a coherent superposition of both.[2] Moreover, whereas a measurement of a classical bit does not disturb its state, a measurement of a qubit destroys its coherence and irrevocably disturbs the superposition state. It is possible to fully encode one bit in one qubit. However, a qubit can hold more information, e.g. up to two classical bits using superdense coding.
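The following minimal Python sketch (an assumed illustration, not drawn from a particular quantum library) represents a single-qubit state as a length-2 complex vector and simulates a measurement: outcome probabilities follow the Born rule, and the post-measurement state is a basis state, so the coherent superposition is destroyed as described above. The helper name `measure` is chosen here purely for illustration.

```python
import numpy as np

def measure(state, rng=None):
    """Simulate a projective measurement of one qubit in the computational basis."""
    if rng is None:
        rng = np.random.default_rng()
    probs = np.abs(state) ** 2          # Born rule: probabilities |a|^2 and |b|^2
    probs = probs / probs.sum()         # guard against floating-point rounding
    outcome = rng.choice([0, 1], p=probs)
    collapsed = np.zeros(2, dtype=complex)
    collapsed[outcome] = 1.0            # the coherent superposition is destroyed
    return outcome, collapsed

# Equal superposition (|0> + |1>)/sqrt(2): each outcome occurs with probability 1/2.
psi = np.array([1, 1], dtype=complex) / np.sqrt(2)
outcome, psi_after = measure(psi)
print(outcome, psi_after)               # e.g. 0 and [1.+0.j 0.+0.j]; the superposition is gone
```

Repeating the measurement on the collapsed state always returns the same outcome, which is the sense in which the measurement irrevocably disturbs the original superposition.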
For a system of n components, a complete description of its state in classical physics requires only n bits, whereas in quantum physics it requires 2^n − 1 complex numbers.
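A back-of-the-envelope calculation (an assumed illustration, not part of the original text) makes the contrast concrete: n classical bits are described by n binary values, while a general pure state of n qubits is stored as a vector of 2^n complex amplitudes, of which 2^n − 1 remain independent once normalization and global phase are fixed.

```python
# Assumed illustration: memory needed to store the full state vector of n qubits
# as complex128 amplitudes (16 bytes each), compared with n classical bits.
for n in (1, 2, 10, 30):
    amplitudes = 2 ** n                 # complex amplitudes in the state vector
    mib = amplitudes * 16 / 2 ** 20     # 16 bytes per complex128 amplitude
    print(f"n = {n:2d}: {n} classical bits vs {amplitudes:,} complex amplitudes (~{mib:,.1f} MiB)")
```

Already at n = 30 the state vector occupies on the order of 16 GiB, which is why classically simulating even modest numbers of qubits quickly becomes impractical.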