A New Kind of Science

A New Kind of Science is a controversial book by Stephen Wolfram, published in 2002. It contains an empirical and systematic study of computational models such as cellular automata. Wolfram calls these models simple programs and argues that the scientific philosophy and methods appropriate for the study of simple programs are relevant to other fields of science. The book is available online (see links below).

Computation and its implications

The thesis of A New Kind of Science is twofold: that the nature of computation must be explored experimentally, and that the results of these experiments have great relevance to understanding the natural world.

Since its crystallization in the 1930s, computation has been approached primarily from two traditions: engineering, which seeks to build practical systems using computation; and mathematics, which seeks to prove theorems about computation.

Wolfram describes himself as introducing a third major tradition, which is the systematic, empirical investigation of computational systems for their own sake. This is where the "New" and "Science" parts of the book's title originate.

However, in proceeding with a scientific investigation of computational systems, Wolfram eventually concluded that an entirely new method was needed, because traditional mathematics failed to describe the complexity seen in these systems in any meaningful way. Through a combination of experiment and theoretical argument, the book introduces a method that Wolfram argues is the most realistic way to make scientific progress with computational systems.

This difference in method casts A New Kind of Science as a "kind" of science, and allows its principles to be potentially applicable in a wide range of fields.

Simple programs

The basic subject of Wolfram's "new kind of science" is the study of simple abstract rules — essentially, elementary computer programs. In almost any class of computational system, one very quickly finds instances of great complexity among its simplest cases. This seems to be true regardless of the components of the system and the details of its setup. Systems explored in the book include cellular automata in 1, 2 and 3 dimensions, mobile automata, Turing machines in 1 and 2 dimensions, several varieties of substitution and network systems, primitive recursive functions, nested recursive functions, combinators, tag systems, register machines, reversal-addition and a number of other systems.

For a program to qualify as simple, there are several benchmarks:

  1. Its operation can be completely explained by a simple graphical illustration.
  2. It can be completely explained in a few sentences of human language.
  3. It can be implemented in a computer language using just a few lines of code (see the sketch after this list).
  4. The number of its possible variations is small enough so that all of them can be computed.
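
As an illustration of the third benchmark, the following is a minimal sketch in Python (not taken from the book) of an elementary cellular automaton, rule 110, one of the simple programs studied throughout the book; the width, number of steps, and ASCII rendering are arbitrary choices.

    # Rule 110 elementary cellular automaton, started from a single black cell.
    # The rule number's eight binary digits give the new cell value for each of
    # the eight possible (left, center, right) neighborhoods.
    RULE, WIDTH, STEPS = 110, 64, 32

    row = [0] * WIDTH
    row[WIDTH // 2] = 1

    for _ in range(STEPS):
        print("".join("#" if cell else "." for cell in row))
        row = [(RULE >> (4 * row[(i - 1) % WIDTH]
                         + 2 * row[i]
                         + row[(i + 1) % WIDTH])) & 1
               for i in range(WIDTH)]

Despite fitting in roughly a dozen lines, rule 110 produces an intricate, non-repetitive pattern from this single-cell start, and it is one of the rules the book examines in detail.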

Generally, simple programs tend to have a very simple abstract framework; cellular automata, Turing machines, and combinators are examples of such frameworks. Note that some cellular automata have quite complicated detailed rules and therefore do not qualify as simple programs. It is also possible to invent new frameworks, particularly to capture the operation of natural systems.

The remarkable feature of simple programs is that a significant percentage of them are capable of producing great complexity. Simply enumerating all possible variations of almost any class of programs quickly leads one to examples that do unexpected and interesting things.

This leads to the question: if the program is so simple, where does the complexity come from? In a sense, there is not enough room in the program's definition to directly encode all the things the program can do. Therefore, simple programs can be seen as a minimal example of emergence.

A logical deduction from this phenomenon is that if the details of the program's rules have little direct relationship to its behavior, then it is very difficult to directly engineer a simple program to perform a specific behavior. An alternative approach is to try to engineer a simple overall computational framework, and then do a brute-force search through all of the possible components for the best match.
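
The following is a hedged sketch of this search strategy; the framework (the 256 elementary cellular automaton rules) and the target behavior (a final density of black cells close to one half) are illustrative choices, not taken from the book.

    def evolve(rule, width=101, steps=100):
        """Run an elementary cellular automaton from a single black cell."""
        row = [0] * width
        row[width // 2] = 1
        for _ in range(steps):
            row = [(rule >> (4 * row[(i - 1) % width]
                             + 2 * row[i]
                             + row[(i + 1) % width])) & 1
                   for i in range(width)]
        return row

    def mismatch(rule):
        """How far the final density of black cells is from the target of 0.5."""
        final = evolve(rule)
        return abs(sum(final) / len(final) - 0.5)

    # Brute-force search: score every rule in the framework and keep the best match.
    best = min(range(256), key=mismatch)
    print("best-matching rule:", best, "mismatch:", mismatch(best))

Nothing about the winning rule is designed; it is simply found by exhaustively scoring every member of a small, completely enumerable framework.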

Simple programs are capable of a remarkable range of behavior. Some have been proven to be universal computers. Others exhibit properties familiar from traditional science, such as thermodynamic behavior, continuum behavior, conserved quantities, percolation, sensitive dependence on initial conditions, and others. They have been used as models of traffic, material fracture, crystal growth, biological growth, and various sociological, geological and ecological phenomena.

Another feature of simple programs is that making them more complicated seems to have little effect on their overall complexity. A New Kind of Science argues that this is evidence that simple programs are enough to capture the essence of almost any complex system.

Mapping and mining the computational universe

In order to study simple rules and their often complex behavior, Wolfram believes it is necessary to systematically explore all of these computational systems and document what they do. He believes this study should become a new branch of science, like physics or chemistry. The basic goal of this field is to understand and characterize the computational universe using experimental methods.

The proposed new branch of scientific exploration admits many different forms of scientific production. For instance, qualitative classifications like those found in biology are often the results of initial forays into the computational jungle. On the other hand, explicit proofs that certain systems compute this or that function are also admissible.

There are also some forms of production that are in some ways unique to this field of study. One example is the discovery of computational mechanisms that emerge in different systems but in strikingly different forms. Another kind of production involves the creation of programs for the analysis of computational systems, for in the NKS framework these should themselves be simple programs, subject to the same goals and methodology.

An extension of this idea is that the human mind is itself a computational system, and hence providing it with raw data in as effective a way as possible is crucial to research. Wolfram believes that programs and their analysis should be visualized as directly as possible, and exhaustively examined by the thousands or more.

Since this new field concerns abstract rules, it can in principle address issues relevant to other fields of science. However, in general Wolfram's idea is that novel ideas and mechanisms can be discovered in the computational universe — where they can be witnessed in their clearest forms — and then other fields can pick and choose among these discoveries for those they find relevant.

Systematic abstract science

While Wolfram promotes the study of simple programs as a scientific discipline in its own right, he also insists that its methodology will revolutionize essentially every field of science.

The basis for his claim is that the study of simple programs is the most minimal possible form of science, which is equally grounded in both abstraction and empirical experimentation. Every aspect of the methodology advocated in NKS is optimized to make experimentation as direct, easy, and meaningful as possible — while maximizing the chances that the experiment will do something unexpected.

Just as NKS allows computational mechanisms to be studied in their cleanest forms, Wolfram believes the process of doing NKS captures the essence of the process of doing science, and allows that process's strengths and shortcomings to be directly revealed.

Wolfram believes that the computational realities of the universe make science hard for fundamental reasons. But he also argues that by understanding the importance of these realities, we can learn to leverage them in our favor. For instance, instead of reverse engineering our theories from observation, we can simply enumerate systems and then try to match them to the behaviors we observe.

A major theme of NKS style research is investigating the structure of the possibility space. Wolfram feels that science is far too ad hoc, in part because the models used are too complicated and/or unnecessarily organized around the limited primitives of traditional mathematics. Wolfram advocates using models whose variations are enumerable and whose consequences are straightforward to compute and analyze.

Philosophical underpinnings

Wolfram believes that one of his achievements lies not just in exclaiming, "computation is important!", but in providing a coherent system of ideas that justifies computation as an organizing principle of science.

For instance, Wolfram's concept of computational irreducibility (that some complex computations cannot be short-cut or "reduced"; cf. NP-hardness) is ultimately the reason why computational models of nature must be considered in addition to traditional mathematical models.

Likewise, his idea of intrinsic randomness generation (that natural systems can generate their own randomness, rather than relying on chaos theory or stochastic perturbations) implies that explicit computational models may in some cases provide richer and more accurate models of random-looking systems.
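
Wolfram's standard example of intrinsic randomness generation is the center column of the rule 30 cellular automaton, which is produced by a deterministic rule yet passes many statistical tests of randomness and has been used as a pseudorandom number source in Mathematica. The following is a minimal sketch; the number of bits requested and the padding strategy are arbitrary choices.

    def rule30_center_bits(n_bits):
        # Make the row wide enough that the growing pattern never reaches
        # the edges, so the center column is unaffected by wrap-around.
        width = 2 * n_bits + 1
        row = [0] * width
        row[width // 2] = 1
        bits = []
        for _ in range(n_bits):
            bits.append(row[width // 2])
            row = [(30 >> (4 * row[(i - 1) % width]
                           + 2 * row[i]
                           + row[(i + 1) % width])) & 1
                   for i in range(width)]
        return bits

    bits = rule30_center_bits(256)
    print("".join(map(str, bits)))
    print("fraction of ones:", sum(bits) / len(bits))

The apparent randomness here comes entirely from the rule itself, with no external noise source, which is exactly the point of the concept.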

Based on his experimental results, Wolfram has developed the Principle of Computational Equivalence (see below), which asserts that almost all processes that are not obviously simple are of equivalent sophistication. From this seemingly vague single principle Wolfram draws a broad array of concrete deductions that reinforce many aspects of his theory.

Possibly the most important among these is an explanation as to why we experience randomness and complexity: often, the systems we analyze are just as sophisticated as we are. Thus, complexity is not a special quality of systems, like the concept of "heat", but simply a label for all systems whose computations are sophisticated. Understanding this makes the "normal science" of the NKS paradigm possible.

At the deepest level, Wolfram believes that, like many of the most important scientific ideas, the Principle allows science to be more general by pointing out new ways in which humans are not special. In recent times it has been thought that the complexity of human intelligence makes us special, but the Principle asserts otherwise. In a sense, many of Wolfram's ideas are based on understanding the scientific process, including the human mind, as operating within the same universe it studies, rather than somehow being outside it.

Principle of computational equivalence

The principle states that systems found in the natural world can perform computations up to a maximal ("universal") level of computational power. Most systems can attain this level.

Such systems can, in principle, compute the same things as a computer. Computation is therefore simply a question of translating inputs and outputs from one system to another. Consequently, most systems are computationally equivalent. Proposed examples of such systems are the workings of the human brain and the evolution of weather systems.

Applications and results

There are a vast number of specific results and ideas in the NKS book, and they can be organized into several themes.

One common theme of examples and applications is demonstrating how little it takes to achieve interesting behavior, and how the proper methodology can discover these cases.

First, there are perhaps several dozen cases where the NKS book introduces the simplest known system in some class that has a particular characteristic. Some examples include the first primitive recursive function that results in complexity, the smallest universal Turing machine, and the shortest axiom for propositional calculus. In a similar vein, Wolfram also demonstrates a large number of minimal examples of how simple programs exhibit phenomena familiar from traditional science, such as phase transitions, conserved quantities, continuum behavior, and thermodynamics. Simple computational models of natural systems like shell growth, fluid turbulence, and phyllotaxis are a final category of applications that fall within this theme.

Another common theme is taking facts about the computational universe as a whole and using them to reason about fields in a holistic way. For instance, Wolfram discusses how facts about the computational universe inform evolutionary theory, SETI, free will, computational complexity theory, and philosophical fields like ontology, epistemology, and even postmodernism.

A particularly intriguing suggestion is that the theory of computational irreducibility may provide a resolution to the existence of free will in a nominally deterministic universe. Wolfram posits that if the computational process in the brain of a being with free will is complex enough that, by the principle of computational irreducibility, it cannot be captured by any simpler computation, then even though the process is deterministic, there is no better way to determine the being's will than, in essence, to run the experiment and let the being decide.

The book also contains a vast number of individual results, both experimental and analytic, about what particular automata compute or what their characteristics are, using various methods of analysis.

Reception

NKS received unprecedented media publicity for a scientific book, generating scores of articles in such publications as The New York Times, Newsweek, Wired, and The Economist. It was a best-seller and won numerous awards.

NKS was reviewed in a large range of scientific journals, and several themes emerged. On the positive side, many reviewers enjoyed the quality of the book's production and the clear way Wolfram presented many ideas. Many reviewers, even those who raised other criticisms, found aspects of the book interesting and thought-provoking. On the negative side, many reviewers criticized Wolfram for his lack of modesty, poor editing, lack of mathematical rigor, and the lack of immediate utility of his ideas. Concerning the ultimate importance of the book, a common attitude was either skepticism or "wait and see". Detailed criticisms are outlined below.

Many reviewers and the media focused on the use of simple programs, and cellular automata in particular, to model nature, rather than the more fundamental idea of systematically exploring the universe of simple programs.

Criticism of NKS

The book has attracted several types of criticism.

Scientific philosophy

A key tenet of NKS is that the simpler the system, the more likely it is that a version of it will recur in a wide variety of more complicated contexts. Therefore, NKS argues that systematically exploring the space of simple programs will lead to a base of reusable knowledge.

However, many scientists believe that of all possible parameters, only some actually occur in the universe, and that, for instance, of all possible variations of an equation, most will be essentially meaningless. NKS has also been criticized for asserting that the behavior of simple systems is somehow representative of all systems.

Methodology

A common criticism of NKS is that it does not follow established scientific methodology. NKS does not establish rigorous mathematical definitions, nor does it attempt to prove theorems.

Along these lines, NKS has also been criticized for being heavily visual, with much information conveyed by pictures that do not have formal meaning.

It has also been criticized for not engaging with modern research in the field of complexity, particularly work that has studied complexity from a rigorous mathematical perspective.

Critics also note that none of the book's contents were published in peer-reviewed journals, the standard method for distributing new results, and complain that Wolfram insufficiently credited other scientists whose work he builds on. (Wolfram relegates all discussion of other people to his lengthy endnotes, so no one is directly credited in the text. His critics argue that even the endnotes are misleading, glossing over many relevant discoveries and thus making Wolfram's work seem more novel than it is.)

Utility

NKS has been criticized for not providing specific breakthroughs that would be immediately applicable to ongoing scientific research.

There has also been criticism, implicit and explicit, that the study of simple programs has little connection to the physical universe, and hence is of limited value.

Principle of computational equivalence

The Principle of Computational Equivalence (PCE) has been criticized for being vague and unmathematical, and for not making directly verifiable predictions.

It has also been criticized for being contrary to the spirit of research in mathematical logic and computational complexity theory, which seek to make fine-grained distinctions between levels of computational sophistication.

Others suggest it is little more than a rechristening of the Church-Turing thesis.

The fundamental theory

Wolfram's speculations about a direction towards a fundamental theory of physics have been criticized as vague and obsolete. It has also been claimed that his model is ruled out by Bell's theorem.

Natural selection

Wolfram's claim that natural selection is not the fundamental cause of complexity in biology has led some to state that Wolfram does not understand the theory of evolution. A common sentiment is that NKS may explain features like the forms of organisms, but does not explain their functional complexity.

Originality and self-image

NKS has been heavily criticized as not being original or important enough to justify its title and claims.

Edward Fredkin and Konrad Zuse pioneered the idea of a computable universe, and specifically the idea of the universe as a cellular automaton. It has been claimed that NKS presents these ideas as its own. Juergen Schmidhuber has been a particularly vocal critic in this regard, adding the charge that his own work on Turing machine-computable physics was appropriated without attribution.

It has been claimed that the core idea that very simple rules often generate great complexity is already an understood and established idea in science, particularly in chaos and complexity research.

The authoritative manner in which NKS presents a vast number of examples and arguments has been criticized as leading the reader to believe that each of these ideas was original to Wolfram. In particular, the most substantial new technical result presented in the book, that the rule 110 cellular automaton is Turing complete, was not proven by Wolfram, but by his research assistant, Matthew Cook.

Some have argued that the use of computer simulation is ubiquitous, and instead of starting a paradigm shift NKS just adds justification to a paradigm shift that has already occurred.

See also

External links

Official site
Reviews and overviews