Reversible computing
Reversible computing includes any computational process that is (at least to some close approximation) reversible, i.e., time-invertible, meaning that a time-reversed version of the process could exist within the same general dynamical framework as the original process. For example, within the context of deterministic dynamical frameworks, a necessary condition for reversibility is that the transition function mapping states to their successors at a given later time should be one-to-one.
Probably the largest motivation for the study of hardware and software technologies aimed at actually implementing reversible computing is that they offer the only potential way to improve the energy efficiency of computers beyond the fundamental von Neumann-Landauer limit [1] of kT ln 2 energy dissipated per irreversible bit operation, where k is Boltzmann's constant of about 1.38 × 10⁻²³ J/K, and T is the temperature of the environment into which unwanted entropy will be expelled.
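As a rough numerical illustration, the following sketch (assuming a room-temperature environment of T = 300 K, a figure not given in the text) evaluates the bound:

```python
import math

k = 1.380649e-23   # Boltzmann's constant, J/K
T = 300.0          # assumed environment temperature, K (room temperature)

# Minimum energy dissipated per irreversible bit operation (the VNL bound)
E_min = k * T * math.log(2)

print(f"kT ln 2 at {T:.0f} K = {E_min:.2e} J")   # about 2.87e-21 J (~0.018 eV)
```

At room temperature this comes to only a few zeptojoules per bit, far below what present-day logic gates dissipate, which is why the bound matters only as a long-term limit.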
There are two major, closely related types of reversibility that are of particular interest for this purpose: physical reversibility and logical reversibility. A process is said to be physically reversible if it results in no increase in physical entropy; it is isentropic. Although in practice no nonstationary physical process can be exactly physically reversible or isentropic, there is no known limit to the closeness with which we can approach perfect reversibility in systems that are sufficiently well isolated from interactions with unknown external environments, when the laws of physics describing the system's evolution are precisely known.
As was first argued by Rolf Landauer of IBM [2], in order for a computational process to be physically reversible, it must also be logically reversible. This statement is called Landauer's Principle. A discrete, deterministic computational process is said to be logically reversible if the transition function that maps old computational states to new ones is a one-to-one function.
For computational processes that are nondeterministic (in the sense of being probabilistic or random), the relation between old and new states is not a single-valued function, and the requirement needed to obtain physical reversibility becomes a slightly weaker condition, namely that the size of a given ensemble of possible initial computational states does not decrease, on average, as the computation proceeds forwards.
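To make these two conditions concrete, here is a minimal sketch in Python (the one-bit NOT, the destructive AND, and the erase/coin-flip operations are illustrative choices, not taken from the literature) that tests a deterministic transition function for one-to-one-ness and compares ensemble sizes for two nondeterministic operations:

```python
from itertools import product

def is_logically_reversible(transition, states):
    """A deterministic map is logically reversible iff it is one-to-one,
    i.e. no two distinct old states are mapped to the same new state."""
    images = [transition(s) for s in states]
    return len(set(images)) == len(images)

# All states of a two-bit machine
states = list(product([0, 1], repeat=2))

def not_first(s):
    # Invert the first bit: one-to-one, hence logically reversible
    return (1 - s[0], s[1])

def and_overwrite(s):
    # Overwrite the first bit with (a AND b): many-to-one, hence irreversible
    return (s[0] & s[1], s[1])

print(is_logically_reversible(not_first, states))      # True
print(is_logically_reversible(and_overwrite, states))  # False

# Nondeterministic case: the ensemble of possible states must not shrink.
ensemble = set(states)
after_erase = {(0, b) for (_, b) in ensemble}                 # shrinks 4 -> 2: forbidden
after_coin = {(c, b) for (_, b) in ensemble for c in (0, 1)}  # stays at 4: allowed
print(len(ensemble), len(after_erase), len(after_coin))       # 4 2 4
```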
More on Landauer's principle
Landauer's principle can be understood to be a simple logical consequence of the second law of thermodynamics — which states that the entropy of a closed system cannot decrease — together with the definition of thermodynamic temperature. For, if the number of possible logical states of a computation were to decrease as the computation proceeded forward (logical irreversibility), this would constitute a forbidden decrease of entropy, unless the number of possible physical states corresponding to each logical state were to simultaneously increase by at least a compensating amount, so that the total number of possible physical states was no smaller than originally (total entropy has not decreased).
Yet an increase in the number of physical states corresponding to each logical state means that for an observer who is keeping track of the logical state of the system but not the physical state (for example an "observer" consisting of the computer itself), the number of possible physical states has increased; in other words, entropy has increased from the point of view of this observer.

The maximum entropy of a bounded physical system is finite. (If the holographic principle is correct, then physical systems with finite surface area have a finite maximum entropy; but regardless of the truth of the holographic principle, quantum field theory dictates that the entropy of systems with finite radius and energy is finite.) So, to avoid reaching this maximum over the course of an extended computation, entropy must eventually be expelled to an outside environment at some given temperature T, requiring that energy E = ST be emitted into that environment if the amount of added entropy is S. For a computational operation in which 1 bit of logical information is lost, the amount of entropy generated is at least k ln 2, and so the energy that must eventually be emitted to the environment is E ≥ kT ln 2.
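The counting argument can be restated compactly, assuming a uniform distribution over W possible states (a sketch of the bookkeeping, not a rigorous proof):

```latex
\begin{aligned}
  S &= k \ln W
    && \text{Boltzmann entropy of $W$ equiprobable states} \\
  \Delta S_{\text{logical}} &= k \ln \tfrac{W}{2} - k \ln W = -k \ln 2
    && \text{one bit of logical state is lost} \\
  \Delta S_{\text{env}} &\ge k \ln 2
    && \text{second law: total entropy cannot decrease} \\
  E &= T \, \Delta S_{\text{env}} \ge kT \ln 2
    && \text{heat expelled to environment at temperature $T$}
\end{aligned}
```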
This expression for the minimum energy dissipation from a logically irreversible binary operation was first suggested by John von Neumann [1], but it was first rigorously justified (and with important limits to its applicability stated) by Landauer. For this reason, it is sometimes referred to as being simply the Landauer bound or limit, but if we wish to be more generous to its originator, we may prefer to refer to it as the von Neumann-Landauer (VNL) bound instead.
The reversibility of physics and reversible computing
Landauer's principle (and indeed, the second law of thermodynamics itself) can also be understood to be a direct logical consequence of the underlying reversibility of physics, as is reflected in the general Hamiltonian formulation of mechanics, and in the unitary time-evolution operator of quantum mechanics more specifically.
In the context of reversible physics, the phenomenon of entropy increase (and the observed arrow of time) can be understood to be consequences of the fact that our evolved predictive capabilities are rather limited, and cannot keep perfect track of the exact reversible evolution of complex physical systems, especially since these systems are never perfectly isolated from an unknown external environment, and even the laws of physics themselves are still not known with complete precision. Thus, we (and physical observers generally) always accumulate some uncertainty about the state of physical systems, even if the system's true underlying dynamics is a perfectly reversible one that is subject to no entropy increase if viewed from a hypothetical omniscient perspective in which the dynamical laws are precisely known.
The implementation of reversible computing thus amounts to learning how to characterize and control the physical dynamics of mechanisms to carry out desired computational operations so precisely that we accumulate a negligible total amount of uncertainty regarding the complete physical state of the mechanism for each logic operation that is performed. In other words, we would need to precisely track the state of the active energy that is involved in carrying out computational operations within the machine, and design the machine in such a way that the majority of this energy is recovered in an organized form that can be reused for subsequent operations, rather than being permitted to dissipate into the form of heat.
Although achieving this goal presents a significant challenge for the design, manufacturing, and characterization of ultra-precise new physical mechanisms for computing, there is at present no fundamental reason to think that this goal cannot eventually be accomplished, allowing us to someday build computers that generate much less than 1 bit's worth of physical entropy (and dissipate much less than kT ln 2 energy to heat) for each useful logical operation that they carry out internally.
Much of the research that has been done in reversible computing was motivated by the first seminal paper on the topic [3][4], published by Charles H. Bennett of IBM Research in 1973. Today, the field has a substantial body of academic literature behind it. A wide variety of reversible device concepts, logic gates, electronic circuits, processor architectures, programming languages, and application algorithms have been designed and analyzed by physicists, electrical engineers, and computer scientists.
This field of research awaits the detailed development of a high-quality, cost-effective, nearly reversible logic device technology, one that includes highly energy-efficient clocking and synchronization mechanisms. Solid engineering progress of this sort will be needed before the large body of theoretical research on reversible computing can find practical application in enabling real computer technology to circumvent the various near-term barriers to its energy efficiency, including the von Neumann-Landauer bound, which, as a consequence of the second law of thermodynamics, can only be circumvented by the use of logically reversible computing.
Practical logical reversibility
Any computational algorithm can be made logically reversible as long as a complete history of every irreversible operation is maintained. (This is seen in every software application that offers a multi-level "Undo" feature.) Such a history grows without bound, however, so it is limited by available memory. To partially circumvent that limitation, snapshots of the complete computational state can be saved at intervals, providing the ability to jump further backwards in time, although not smoothly. (This approach is adopted by archival "backup" systems.)
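A minimal sketch of this history-based technique (the `ReversibleRegister` class and its method names are hypothetical, invented here for illustration):

```python
class ReversibleRegister:
    """An integer register whose destructive writes can be undone,
    because each one saves the value it would otherwise erase."""

    def __init__(self, value=0):
        self.value = value
        self.history = []    # one saved value per irreversible overwrite
        self.snapshots = {}  # history length -> saved value, for coarse jumps

    def overwrite(self, new_value):
        # A destructive write is irreversible on its own, so record the
        # old value before discarding it.
        self.history.append(self.value)
        self.value = new_value

    def undo(self):
        # Reverse the most recent overwrite by restoring its saved value.
        self.value = self.history.pop()

    def snapshot(self):
        # Save the current state so we can later jump straight back here
        # without undoing one step at a time (though not "smoothly").
        self.snapshots[len(self.history)] = self.value

    def restore(self, step):
        # Jump back to a snapshot, discarding the history after it.
        self.value = self.snapshots[step]
        del self.history[step:]

r = ReversibleRegister(7)
r.overwrite(42)
r.overwrite(99)
r.undo()
print(r.value)  # 42
r.undo()
print(r.value)  # 7
```

Note that `overwrite` and `undo` run different code paths, which is one reason reversal need not take the same time as the forward step, as the next paragraph observes.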
Reversible computing of this kind is usually not symmetrical in time. (Reversing an operation may take more time, or less, because the underlying mechanism is different.)
See also
- Reversible dynamics
- Maximum entropy thermodynamics -- on the uncertainty interpretation of the second law of thermodynamics
- Reversible process
- Toffoli gate
- Fredkin gate
- Quantum computing
- Billiard-Ball Computer
- 3 input universal logic gates
- Undo
References
1. J. von Neumann, Theory of Self-Reproducing Automata, Univ. of Illinois Press, 1966.
2. R. Landauer, "Irreversibility and heat generation in the computing process," IBM Journal of Research and Development, vol. 5, pp. 183–191, 1961.
3. C. H. Bennett, "Logical reversibility of computation," IBM Journal of Research and Development, vol. 17, no. 6, pp. 525–532, 1973.
4. C. H. Bennett, "The Thermodynamics of Computation -- A Review," International Journal of Theoretical Physics, vol. 21, no. 12, pp. 905–940, 1982.