Computronium

From Wikipedia, the free encyclopedia

In futurism, computronium refers to a hypothetical material engineered to maximize its use as a computing substrate. While futurists usually use the term for hypothetical materials engineered at the molecular, atomic, or subatomic level by some advanced form of nanotechnology, it can also be applied both to contemporary computing materials and to constructs of theoretical physics that are unlikely ever to be practical to build.

Many futurists speculate about futures where demand for computing power grows to the point where very large amounts of computronium are desired. Examples of applications include Jupiter Brains, planet-sized constructs made of computronium, and Matrioshka Brains, concentric Dyson spheres designed to extract all possible energy from the host star for use towards computation.

Conventional integrated circuits

Contemporary integrated circuits can be considered a form of computronium. The density and speed of integrated circuit computing elements have increased roughly exponentially for several decades, following a trend described by Moore's Law. While it is generally accepted that this exponential improvement will eventually end, it is unclear exactly how dense and fast integrated circuits will have become by the time that point is reached. Working devices have been demonstrated that were fabricated with a MOSFET channel length of 6.3 nanometres using conventional semiconductor materials, and devices have been built that used carbon nanotubes as MOSFET gates, giving a channel length of approximately 1 nanometre.
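
The channel lengths quoted above imply a simple scaling arithmetic. The following is a minimal sketch; the assumption that areal device density scales as the inverse square of the linear feature size is an illustrative simplification, not a figure from the article.

    # Rough scaling arithmetic for the two channel lengths quoted above.
    # Assumption (illustrative only): areal device density scales as the
    # inverse square of the linear feature size; real layouts are more complex.
    import math

    conventional_nm = 6.3   # demonstrated MOSFET channel length, conventional materials
    nanotube_nm = 1.0       # approximate channel length with carbon-nanotube gates

    halvings = math.log2(conventional_nm / nanotube_nm)
    density_factor = (conventional_nm / nanotube_nm) ** 2

    print(f"linear shrink: {conventional_nm / nanotube_nm:.1f}x "
          f"(~{halvings:.1f} halvings of feature size)")
    print(f"idealized areal density gain: ~{density_factor:.0f}x")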

The ultimate density and computing power of integrated circuits are limited primarily by power dissipation. Conventional integrated circuits and their anticipated descendants therefore provide a form of high-density but power-hungry computronium whose performance is ultimately bounded by heat dissipation.

For more details, see low-power electronics.

Molecular nanotechnology

Many futurists postulate some form of advanced nanotechnology able to mass-manufacture matter that is structured at the molecular or atomic level. This would allow the creation of new types of computing device, and hence computronium that is even better optimized for several of the design constraints outlined above. In particular, computronium produced by molecular nanotechnology would be far lighter and more compact for a given computational or data storage capacity than computronium produced by bulk fabrication methods. Additionally, depending on implementation specifics, molecular-scale devices might well perform computation more quickly than bulk-fabricated devices, though in practice their speed would likely again be limited by heat dissipation, so the final speed relationship is unclear.

Two types of molecular-scale computing substrate are potentially practical: those based on electrical signals, and those that use the position and motion of mechanical components to represent data and to perform computation. Electrical computation would be performed in a manner reasonably similar to the implementation of circuits today; arguably it has already been demonstrated, in the form of field-effect transistors that use carbon nanotubes as gates. Carbon nanotubes also function acceptably as wires for conducting signals in nanoscale circuits, providing a reasonable basis for the construction of molecular-level circuits. The drawback to using electrical signalling in molecular-scale circuits is the same as for bulk-fabricated electronic circuits: heat dissipation. It is therefore reasonable to conclude that computronium based on electrical devices fabricated at a molecular scale would offer only a modest improvement in computing speed over the best devices that bulk-fabrication techniques have to offer. An improvement would still be present, if for no other reason than a better-tailored surface-to-volume ratio and improved heat conduction in an appropriately constructed substrate.
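
The surface-to-volume argument at the end of the previous paragraph can be made concrete with a rough estimate. This is a minimal sketch; the cubic geometry and the example sizes are illustrative assumptions rather than figures from the article.

    # Illustrative surface-to-volume estimate: for a cube of side L, dissipated
    # power grows with volume (L**3) while removable heat grows with surface
    # area (L**2), so the surface/volume ratio -- and hence coolability per
    # unit of dissipation -- improves as 1/L as the structure is subdivided.
    def surface_to_volume(side_m: float) -> float:
        """Surface area / volume for a cube of the given side length (1/m)."""
        return 6.0 * side_m ** 2 / side_m ** 3   # = 6 / side_m

    for side in (1e-2, 1e-6, 1e-9):  # 1 cm block, 1 micron cell, 1 nm cell
        print(f"side = {side:g} m -> surface/volume = {surface_to_volume(side):.3g} 1/m")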

Molecular-scale computronium based on mechanical devices could function by mechanisms similar to the rod logic proposed by Eric Drexler. While these devices also suffer from thermally induced noise, their energy-dissipation mechanisms are sufficiently different from those of electrical devices that they may suffer somewhat less from the heating problems described above.
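
The sensitivity to thermally induced noise mentioned above is commonly quantified by comparing a switching element's energy barrier to kT. The following is a minimal sketch under the assumption of a simple Boltzmann-factor error model; the example barrier heights are illustrative choices, not figures from the article.

    # Illustrative thermal-error estimate: the probability of a spurious,
    # thermally activated state change is roughly exp(-E_barrier / kT) per
    # attempt, so reliable logic needs barriers of many tens of kT.
    import math

    K_B = 1.380649e-23           # Boltzmann constant, J/K
    T = 300.0                    # room temperature, K

    for barrier_kt in (10, 40, 100):              # barrier height in units of kT
        p_error = math.exp(-barrier_kt)           # Boltzmann factor exp(-E/kT)
        barrier_j = barrier_kt * K_B * T          # same barrier expressed in joules
        print(f"barrier = {barrier_kt:>3} kT ({barrier_j:.2e} J) "
              f"-> per-attempt error ~ {p_error:.1e}")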

DNA computing might be another candidate for molecular computation, or at the very least data storage.

Lastly, the option of optical computing mechanisms exists for molecular-scale computing devices. This is not listed among the potentially practical options for two reasons. Firstly, while the size of the active optical components may be very small, the size of the communications network must be comparable to the wavelength of the light used as a carrier. While waveguide techniques can ameliorate this constraint to some degree, they fall far short of producing the device densities possible using electrical or mechanical methods. Secondly, there is a very strong relation between the speed of computation, the wavelength of the light used as a carrier, and the amount of energy required to store a bit of information. This arises from the fact that, in order for the frequency of a light pulse to be well-defined, the duration of the pulse must be comparable to (or longer than) the period of oscillation of the photons acting as the carrier. For visible light, this is on the order of femtoseconds, while it is in principle possible for electronic devices to operate much more quickly. Shortening this signalling time requires a shorter wavelength, which means more energy per photon. Even disregarding energy concerns, an upper limit to photon energy of at most a few eV is imposed by the requirement that the carrier photons not destroy molecular bonds in the substrate; in practice, an engineered optical computer would use photon energies in the 0.1 eV range or lower, to make it easier to tailor molecular energy levels to interact with the carrier photons. Furthermore, the energy required to encode one bit of information is approximately that needed to produce 2-4 photons of the carrier frequency. This energy is relatively large compared to the amount needed to encode information using mechanical devices, though it is still comparable both to the amount needed to cause a state change in an electrical device and to the magnitude of thermal noise at room temperature. In summary, molecular-scale computronium based on optical principles would be quite mass-efficient, but quite bulky and slow compared to electronic or mechanical implementations.
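
The figures in the preceding paragraph follow from elementary photon relations. The sketch below is illustrative; the 500 nm example wavelength and the three-photon midpoint of the "2-4 photons" range are assumptions chosen for the calculation, not values from the article.

    # Illustrative photon arithmetic for the optical-computing discussion above:
    # optical period (pulse-duration floor), photon energy, and the energy of a
    # few carrier photons compared with thermal noise at room temperature.
    H = 6.62607015e-34           # Planck constant, J*s
    C = 2.99792458e8             # speed of light, m/s
    EV = 1.602176634e-19         # joules per electronvolt
    K_B = 1.380649e-23           # Boltzmann constant, J/K

    wavelength = 500e-9          # example visible-light carrier, m (illustrative)
    period = wavelength / C                      # duration of one optical cycle
    photon_ev = H * C / wavelength / EV          # photon energy in eV

    kT_ev = K_B * 300.0 / EV                     # thermal energy at 300 K, in eV
    bit_ev = 3 * 0.1                             # ~3 photons of a 0.1 eV carrier

    print(f"optical period at 500 nm : {period:.2e} s (femtosecond scale)")
    print(f"photon energy at 500 nm  : {photon_ev:.2f} eV")
    print(f"kT at 300 K              : {kT_ev:.3f} eV")
    print(f"~3 photons at 0.1 eV     : {bit_ev:.2f} eV per stored bit")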

Excited atoms and nuclei

The discussion of computronium based on nanotechnology assumes that the minimum scale for storing one bit of information is one atom or one electron. This constraint turns out not to hold: because the electrons of an atom can be excited into any of a very large number of states, a single atom can store more than one bit of information. Manipulation of atomic structure in this manner is sometimes referred to as picotechnology by futurists, though the name is something of a misnomer, as most readily accessible excited atomic states useful for data storage produce orbitals that are much larger than even the nanometre scale.
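
The claim that a single atom can hold more than one bit follows from simple state counting. The sketch below is illustrative; the example level counts are assumptions, not figures from the article.

    # Illustrative state counting: an atom with N reliably distinguishable
    # excited states can encode log2(N) bits, so many accessible states per
    # atom mean more than one bit per atom.
    import math

    for n_states in (2, 16, 256, 4096):
        bits = math.log2(n_states)
        print(f"{n_states:>5} distinguishable states -> {bits:.0f} bits per atom")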

The most promising means of using individual atoms for data storage found to date is the technique of using light impulses at carefully tuned frequencies to perturb the state of Rydberg atoms. Data storage of several hundred bits in a single atom has been demonstrated by this method. It is likewise possible in principle to manipulate the state of electrons in the Fermi gas within a bounded conductor (such as a quantum dot) to store information in the same way, though this involves the use of many atoms. In both cases, the apparatus must be kept very cold, so that thermal effects do not perturb the data storage state (the energy levels are very finely spaced). In the case of Rydberg atom data storage, the atoms used for storage must additionally be suspended in some form of atom trap to prevent interactions with normal matter from perturbing the excited atoms' states.
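
The remark that the energy levels are very finely spaced can be made quantitative with the hydrogen-like Rydberg formula. The following is a minimal sketch; the principal quantum number n = 50 is an illustrative choice, not a value from the article.

    # Illustrative Rydberg-level spacing: E_n = -13.6 eV / n**2, so adjacent
    # levels near principal quantum number n are separated by roughly
    # 2 * 13.6 eV / n**3, which for large n is far below kT at room temperature.
    RYDBERG_EV = 13.605693      # hydrogen ground-state binding energy, eV
    K_B_EV = 8.617333e-5        # Boltzmann constant, eV/K

    n = 50                                          # example Rydberg state
    spacing_ev = 2 * RYDBERG_EV / n**3              # approximate level spacing
    kT_room_ev = K_B_EV * 300.0                     # thermal energy at 300 K

    print(f"level spacing near n={n}      : {spacing_ev:.2e} eV")
    print(f"kT at 300 K                  : {kT_room_ev:.2e} eV")
    print(f"temperature where kT ~ spacing: {spacing_ev / K_B_EV:.2f} K")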

Due to susceptibility to thermal noise and to apparatus considerations, this is not expected to become a widely used method of data storage, except possibly in situations requiring low mass above all other design considerations. Additionally, because the spacings between energy levels are very small, transitions between them tend to be very slow, resulting in very slow data access compared to other forms of data storage. It is unclear how, if at all, atoms excited in this manner could be used as a computing mechanism, as opposed to a data storage mechanism.

In principle, similar techniques can be used to store information encoded as excited states of atomic nuclei. Manipulation of the atomic nucleus in this manner is sometimes referred to as femtotechnology by futurists. It is unlikely that this mechanism will ever be practical, due to the very high energies involved in nuclear state transitions (requiring equipment capable of processing gamma rays efficiently and without degradation), and because the behavior of the strong nuclear force makes excited nuclear states less stable and permits only a finite number of excited states below the binding energy of the nucleus (unlike the in-principle infinite number of bound states in an atom).

Limits to computation

Main article: Limits to computation

There are several physical and practical limits to the amount of computation or data storage that can be performed with a given amount of mass, volume, or energy:

  • The Bekenstein bound limits the amount of information that can be stored within a spherical volume to the entropy of a black hole with the same surface area.
  • The temperature of the cosmic microwave background radiation gives a practical lower limit on the energy consumed per state change of approximately 4kT, where T is the temperature of the background (about 3 kelvins) and k is the Boltzmann constant. While a device could be cooled to operate below this temperature, the energy expended on cooling would offset the benefit of the lower operating temperature. (A rough numerical illustration of both limits follows this list.)
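
The following is a minimal numerical sketch of the two limits above. The 1 m radius and 1 kg mass-energy used for the Bekenstein bound are arbitrary illustrative choices; the 3 K figure is the approximate background temperature mentioned in the list.

    # Illustrative evaluation of the two limits above.
    import math

    HBAR = 1.054571817e-34   # reduced Planck constant, J*s
    C = 2.99792458e8         # speed of light, m/s
    K_B = 1.380649e-23       # Boltzmann constant, J/K

    # Bekenstein bound in bits: I <= 2*pi*R*E / (hbar * c * ln 2)
    radius = 1.0                      # m (illustrative)
    energy = 1.0 * C**2               # J, mass-energy of 1 kg (illustrative)
    bekenstein_bits = 2 * math.pi * radius * energy / (HBAR * C * math.log(2))

    # Practical energy floor per state change: ~4 kT at the background temperature
    T_CMB = 3.0                       # K
    energy_floor = 4 * K_B * T_CMB    # J

    print(f"Bekenstein bound (R=1 m, 1 kg): ~{bekenstein_bits:.1e} bits")
    print(f"4kT at {T_CMB} K              : ~{energy_floor:.1e} J per state change")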

Several methods have been proposed for producing computing devices or data storage devices that approach physical and practical limits:

  • A Matrioshka Brain is a set of concentric Dyson spheres that attempts to capture as much usable energy as possible from the host star, to make it available for computation.
  • A cold degenerate star could conceivably be used as a giant data storage device, by carefully perturbing it into various excited states, in the same manner as an atom or quantum well used for these purposes. Such a star would have to be artificially constructed, as no naturally occurring degenerate star will cool to such a temperature for an extremely long time. It is also possible that nucleons on the surface of neutron stars could form complex "molecules"[1], which some have suggested might be used for computing purposes[2], creating a type of computronium based on femtotechnology that would be faster and denser than computronium based on nanotechnology. (The novel Revelation Space by Alastair Reynolds uses a neutron star in this fashion.)
  • It may be possible to use a black hole as a data storage and/or computing device, if a practical mechanism for extracting the contained information can be found; such extraction may in principle be possible (Stephen Hawking's proposed resolution to the black hole information paradox). This would achieve a storage density exactly equal to the Bekenstein bound. The scientist Seth Lloyd calculated the computational abilities of an "ultimate laptop" formed by compressing a kilogram of matter into a black hole of radius 1.485 × 10^-27 metres, concluding that it would last only about 10^-19 seconds before evaporating due to Hawking radiation, but that during this brief time it could compute at a rate of about 5 × 10^50 operations per second, ultimately performing about 10^32 operations on 10^31 bits. Lloyd notes that "Interestingly, although this hypothetical computation is performed at ultra-high densities and speeds, the total number of bits available to be processed is not far from the number available to current computers operating in more familiar surroundings"[3]. (A rough numerical check of two of these figures follows this list.)
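
The following is a rough back-of-the-envelope check of two of the figures quoted for Lloyd's black-hole computer, under the assumption that the computation rate is set by the Margolus-Levitin bound of 2E/(πħ) operations per second for total energy E; the evaporation lifetime and total bit count are not re-derived here.

    # Illustrative check of two figures quoted above for a 1 kg black-hole
    # computer: its Schwarzschild radius and its Margolus-Levitin-limited
    # operation rate 2E / (pi * hbar), with E the total mass-energy.
    import math

    G = 6.67430e-11          # gravitational constant, m^3 kg^-1 s^-2
    C = 2.99792458e8         # speed of light, m/s
    HBAR = 1.054571817e-34   # reduced Planck constant, J*s

    mass = 1.0                               # kg
    radius = 2 * G * mass / C**2             # Schwarzschild radius, m
    energy = mass * C**2                     # total mass-energy, J
    ops_per_second = 2 * energy / (math.pi * HBAR)

    print(f"Schwarzschild radius of 1 kg : {radius:.3e} m")
    print(f"Margolus-Levitin rate        : {ops_per_second:.1e} ops/s")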

All of these methods are hypothetical, and none of them are expected to be practical in the near future.

References

External links
