Hypercomputation
Hypercomputation or super-Turing computation refers to models of computation that go beyond Turing computability. This includes various hypothetical methods for the computation of non-Turing-computable functions.
The term "super-Turing computation" emerged in the early 1990s, with at least two independent sources noted in the literature. It appeared in talks, the PhD thesis,[1] and even earlier technical reports of Hava Siegelmann since the early 1990s for a very particular theory (described below) and became a subfield of computation since her 1995 Science paper.[2] It also appears in a 1990 talk[3] and 1991 technical report[4] by Mike Stannett, who also published a theoretical discussion of an X-machine-based "super-Turing machine" in March 1990.[5]
The term "hypercomputation" was introduced in 1999 by Jack Copeland and Diane Proudfoot.[6]
The terms are not quite synonymous: in Siegelmann's usage, "super-Turing computation" usually implies that the proposed model is thought to exist in nature (through biology and/or physics) and may thus be physically realizable, whereas "hypercomputation" can refer to a purely mathematical or philosophical idea.
History
A computational model going beyond Turing machines was introduced by Alan Turing in his 1938 PhD dissertation Systems of Logic Based on Ordinals.[7] This paper investigated mathematical systems in which an oracle was available, which could compute a single arbitrary (non-recursive) function from naturals to naturals. He used this device to prove that even in those more powerful systems, undecidability is still present. Turing's oracle machines are mathematical abstractions, and are not physically realizable.[8]
Hypercomputation and the Church–Turing thesis
The Church–Turing thesis states that any algorithmically computable function can be computed by a Turing machine. Hypercomputers compute functions that a Turing machine cannot, and which are therefore not computable in the Church–Turing sense.
An example of a problem a Turing machine cannot solve is the halting problem. A Turing machine cannot decide if an arbitrary program halts or runs forever. Some proposed hypercomputers could simulate the program for an infinite number of steps and tell the user whether the program halted. In particular, Siegelmann showed in her 1993 PhD thesis,[1] and later in her 1998 book,[9] that the halting problem of Turing machines can be solved with analog recurrent neural networks.
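The impossibility result behind this can be stated as a short diagonal argument. Below is a minimal sketch in Python, assuming a hypothetical decider `halts(prog, data)`; the names and framing are illustrative, and no such total function can actually be implemented on a Turing-equivalent machine.

```python
def halts(prog, data):
    """Hypothetical total decider: return True iff prog(data) eventually halts.
    Assumed to exist only for the sake of contradiction; no Turing machine
    (and hence no ordinary program) can implement it."""
    raise NotImplementedError

def diagonal(prog):
    """If prog would halt when run on its own source, loop forever; else halt."""
    if halts(prog, prog):
        while True:
            pass
    return "halted"

# Asking whether diagonal halts on itself is contradictory either way:
# if halts(diagonal, diagonal) were True, diagonal(diagonal) would loop forever;
# if it were False, diagonal(diagonal) would halt. So `halts` cannot be an
# ordinary program, and a machine that computes it is, by definition, a
# hypercomputer.
```

A hypercomputer, in this sense, would be a device that supplies something like `halts` from outside the Turing model.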
Super-Turing Computation and the Super-Turing thesis
In the early 1990s, Hava Siegelmann and Eduardo Sontag proved that an artificial recurrent neural network (ARNN) could perform beyond the Turing limit (hypercomputation).[10][11]
Siegelmann and colleagues revealed a hierarchy of well-established computational classes that starts at the Turing machine and ascends to full super-Turing power.[12] Siegelmann described in various publications several ways to reach super-Turing computation. She first showed the existence of super-Turing computation via analog neural networks with fixed (non-learning), unbounded weights, and then proved super-Turing power in a variety of more realizable machines: neural networks with small-precision weights that derive their power from learning in the analog domain, from stochasticity,[9] from evolving over time,[13] or from using data from the environment, as well as systems that behave as Turing machines at every computable step and change between steps by one of the above methods. See also work on verifying properties of neural networks.
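In the Siegelmann–Sontag model underlying these results, each neuron applies a saturated-linear activation to an affine combination of the current state and the input; with rational weights such a network stays within Turing power, while arbitrary real weights give the super-Turing power described above. Below is a minimal sketch of one update step; the network size and weight values are illustrative assumptions, not taken from the cited papers.

```python
import numpy as np

def saturated_linear(z):
    """sigma(z) = 0 for z < 0, z for 0 <= z <= 1, and 1 for z > 1."""
    return np.clip(z, 0.0, 1.0)

def arnn_step(x, u, A, B, c):
    """One update of an analog recurrent neural network:
        x(t+1) = sigma(A @ x(t) + B @ u(t) + c)
    Rational entries in A, B, c keep the network within Turing power;
    the super-Turing results rely on allowing arbitrary real entries."""
    return saturated_linear(A @ x + B @ u + c)

# Illustrative 3-neuron network driven by a single binary input line.
A = np.array([[0.50, 0.25, 0.00],
              [0.00, 0.50, 0.25],
              [0.25, 0.00, 0.50]])
B = np.array([[1.0], [0.0], [0.0]])
c = np.zeros(3)

x = np.zeros(3)
for u_t in ([1.0], [0.0], [1.0]):
    x = arnn_step(x, np.array(u_t), A, B, c)
print(x)  # network state after three input symbols
```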
Siegelmann's Science paper demonstrated a first step toward realizing this kind of computation,[2] and efforts by physicists and engineers to build such systems continue. Siegelmann's more recent research shows that the state space of super-Turing computation is uncountable while that of Turing machines is only countable, giving a new understanding of the Turing test as separating spaces of different size.[14]
The theory led to a better understanding of neural networks (the foundation of deep learning) and supported applications in sensitive areas such as radar registration and the control of nuclear plants.[15][16][17][18] Siegelmann and colleagues went on to create a complexity theory for continuous-time and physical systems.[19][20] As a particular example, Siegelmann's group analyzed linear programming and other computer-science problems, showing that analog computers can solve some of these faster than discrete-time computers.[21][22]
A more recent publication argues that Alan Turing had himself been searching for super-Turing computation based on brain principles, and that the directions he suggested for finding it match Siegelmann's theories.[23] Siegelmann and Sontag proposed a new computational hypothesis: analog, learning, and evolving systems are constrained by super-Turing computational power.
Other Hypercomputer proposals
- A Turing machine that can complete infinitely many steps. Simply being able to run for an unbounded number of steps does not suffice. One mathematical model is the Zeno machine (inspired by Zeno's paradox). The Zeno machine performs its first computation step in (say) 1 minute, the second step in ½ minute, the third step in ¼ minute, etc. By summing 1 + ½ + ¼ + ... (a geometric series) we see that the machine performs infinitely many steps in a total of 2 minutes. According to Shagrir, Zeno machines introduce physical paradoxes, and their state is logically defined only on the half-open interval [0, 2), so it is undefined exactly 2 minutes after the beginning of the computation.[24]
- Turing's original oracle machines, defined by Turing in 1939.
- In the mid-1960s, E. Mark Gold and Hilary Putnam independently proposed models of inductive inference (the "limiting recursive functionals"[25] and "trial-and-error predicates",[26] respectively). These models enable some nonrecursive sets of numbers or languages (including all recursively enumerable sets of languages) to be "learned in the limit"; whereas, by definition, only recursive sets of numbers or languages could be identified by a Turing machine. While the machine will stabilize to the correct answer on any learnable set in some finite time, it can only identify the answer as correct if the set is recursive; otherwise, correctness is established only by running the machine forever and noting that it never revises its answer (a sketch of this guess-and-revise behaviour appears after this list). Putnam identified this new interpretation as the class of "empirical" predicates, stating: "if we always 'posit' that the most recently generated answer is correct, we will make a finite number of mistakes, but we will eventually get the correct answer. (Note, however, that even if we have gotten to the correct answer (the end of the finite sequence) we are never sure that we have the correct answer.)"[26] L. K. Schubert's 1974 paper "Iterated Limiting Recursion and the Program Minimization Problem"[27] studied the effects of iterating the limiting procedure; this allows any arithmetic predicate to be computed. Schubert wrote, "Intuitively, iterated limiting identification might be regarded as higher-order inductive inference performed collectively by an ever-growing community of lower order inductive inference machines."
- A real computer (a sort of idealized analog computer) can perform hypercomputation[28] if physics admits general real variables (not just computable reals), and these are in some way "harnessable" for computation. This might require quite bizarre laws of physics (for example, a measurable physical constant with an oracular value, such as Chaitin's constant), and would at minimum require the ability to measure a real-valued physical value to arbitrary precision despite thermal noise and quantum effects.
- A proposed technique known as fair nondeterminism or unbounded nondeterminism may allow the computation of noncomputable functions.[29] There is dispute in the literature over whether this technique is coherent, and whether it actually allows noncomputable functions to be "computed".
- It seems natural that the possibility of time travel (existence of closed timelike curves (CTCs)) would make hypercomputation possible by itself. However, this is not so, since a CTC does not provide (by itself) the unbounded amount of storage that an infinite computation would require. Nevertheless, there are spacetimes in which the CTC region can be used for relativistic hypercomputation.[30] Access to a CTC may allow the rapid solution of PSPACE-complete problems, a complexity class which, while Turing-decidable, is generally considered computationally intractable.[31][32]
- According to a 1992 paper,[33] a computer operating in a Malament-Hogarth spacetime or in orbit around a rotating black hole[34] could theoretically perform non-Turing computations.[35][36]
- The infinite time Turing machine is a generalization of the Zeno machine, that can perform infinitely long computations whose steps are enumerated by potentially transfinite ordinal numbers. It models an otherwise-ordinary Turing machine for which non-halting computations are completed by entering a special state reserved for reaching a limit ordinal and to which the results of the preceding infinite computation are available.[37]
- Jan van Leeuwen and Jiří Wiedermann wrote a paper[38] suggesting that the Internet should be modeled as a nonuniform computing system equipped with an advice function representing the ability of computers to be upgraded.
- A symbol sequence is computable in the limit if there is a finite, possibly non-halting program on a universal Turing machine that incrementally outputs every symbol of the sequence. This includes the dyadic expansion of π and of every other computable real, but still excludes all noncomputable reals. Traditional Turing machines cannot edit their previous outputs; generalized Turing machines, as defined by Jürgen Schmidhuber, can. He defines the constructively describable symbol sequences as those that have a finite, non-halting program running on a generalized Turing machine, such that any output symbol eventually converges; that is, it does not change any more after some finite initial time interval. Due to limitations first exhibited by Kurt Gödel (1931), it may be impossible to predict the convergence time itself by a halting program, otherwise the halting problem could be solved. Schmidhuber ([39][40]) uses this approach to define the set of formally describable or constructively computable universes or constructive theories of everything. Generalized Turing machines can solve the halting problem by evaluating a Specker sequence.
- A quantum mechanical system which somehow uses an infinite superposition of states to compute a non-computable function.[41] This is not possible using the standard qubit-model quantum computer, because it is proven that a regular quantum computer is PSPACE-reducible (a quantum computer running in polynomial time can be simulated by a classical computer running in polynomial space).[42]
- In 1970, E.S. Santos defined a class of fuzzy logic-based "fuzzy algorithms" and "fuzzy Turing machines".[43] Subsequently, L. Biacino and G. Gerla showed that such a definition would allow the computation of nonrecursive languages; they suggested an alternative set of definitions without this difficulty.[44] Jiří Wiedermann analyzed the capabilities of Santos' original proposal in 2004.[45]
- Dmytro Taranovsky has proposed a finitistic model of traditionally non-finitistic branches of analysis, built around a Turing machine equipped with a rapidly increasing function as its oracle. By this and more complicated models he was able to give an interpretation of second-order arithmetic.[46]
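The Gold–Putnam "learning in the limit" idea from the list above can be made concrete with the halting problem: keep simulating for longer and longer, and keep announcing a revisable guess. The guess changes at most once and stabilizes on the correct answer, but no finite amount of computation certifies that the current guess is final. The sketch below is illustrative; programs are modelled as Python generator factories, and the helper names are assumptions rather than anything from the cited papers.

```python
from itertools import count, islice

def run_for_steps(make_prog, steps):
    """Simulate a program (modelled as a generator factory) for at most
    `steps` steps and report whether it halted within that budget.
    This part is ordinary, Turing-computable step-by-step simulation."""
    gen = make_prog()
    for _ in range(steps):
        try:
            next(gen)
        except StopIteration:
            return True   # halted within the budget
    return False          # still running after `steps` steps

def halts_in_the_limit(make_prog):
    """Trial-and-error ('limiting') predicate for 'the program halts'.
    Yields an infinite sequence of revisable guesses that changes at most
    once and eventually stabilizes on the correct answer -- but no finite
    prefix ever certifies that the current guess is final."""
    for steps in count(1):
        yield run_for_steps(make_prog, steps)

# Example: a toy program that halts after a few steps.
def short_prog():
    for _ in range(5):
        yield

guesses = halts_in_the_limit(short_prog)
print(list(islice(guesses, 8)))  # e.g. [False, False, False, False, False, True, True, True]
```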
Analysis of capabilities
Many hypercomputation proposals amount to alternative ways to read an oracle or advice function embedded into an otherwise classical machine. Others allow access to some higher level of the arithmetical hierarchy. For example, supertasking Turing machines, under the usual assumptions, would be able to compute any predicate in the truth-table degree containing Σ⁰₁ or Π⁰₁. Limiting recursion, by contrast, can compute any predicate or function in the corresponding Turing degree, which is known to be Δ⁰₂. Gold further showed that limiting partial recursion would allow the computation of precisely the Σ⁰₂ predicates.
Model | Computable predicates | Notes | Refs |
---|---|---|---|
supertasking | tt(Σ⁰₁, Π⁰₁) | dependent on outside observer | [47] |
limiting/trial-and-error | Δ⁰₂ | | [25] |
iterated limiting (k times) | Δ⁰ₖ₊₁ | | [27] |
Blum–Shub–Smale machine | | incomparable with traditional computable real functions | [48] |
Malament–Hogarth spacetime | HYP | dependent on spacetime structure | [49] |
analog recurrent neural network | | f is an advice function giving connection weights; size is bounded by runtime | [2][50] |
infinite time Turing machine | | | [51] |
classical fuzzy Turing machine | | for any computable t-norm | [45] |
increasing function oracle | | for the one-sequence model; are r.e. | [46] |
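The class symbols used in the paragraph and table above (Σ⁰₁, Π⁰₁, Δ⁰₂, and so on) are levels of the arithmetical hierarchy. For reference, the standard definitions can be written as follows; this is general notation, not specific to any one of the cited models.

```latex
% Levels of the arithmetical hierarchy referred to above.
% By the Shoenfield limit lemma, the limit-computable ("trial and error")
% predicates are exactly the \Delta^0_2 ones.
\begin{align*}
  \Sigma^0_1 &= \{\, S \mid n \in S \iff \exists k\; R(n,k) \text{ for some recursive } R \,\}
      && \text{(recursively enumerable predicates)} \\
  \Pi^0_1    &= \{\, S \mid n \in S \iff \forall k\; R(n,k) \text{ for some recursive } R \,\}
      && \text{(their complements)} \\
  \Delta^0_m &= \Sigma^0_m \cap \Pi^0_m
\end{align*}
```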
Taxonomy of "super-recursive" computation methodologies
Mark Burgin has collected a list of what he calls "super-recursive algorithms" (from Burgin 2005: 132):
- limiting recursive functions and limiting partial recursive functions (E. M. Gold[25])
- trial and error predicates (Hilary Putnam[26])
- inductive inference machines (Carl Herbert Smith)
- inductive Turing machines (one of Burgin's own models)
- limit Turing machines (another of Burgin's models)
- trial-and-error machines (Ja. Hintikka and A. Mutanen[52])
- general Turing machines (J. Schmidhuber[40])
- Internet machines (van Leeuwen, J. and Wiedermann, J.[38])
- evolutionary computers, which use DNA to produce the value of a function (Darko Roglic[53])
- fuzzy computation (Jiří Wiedermann[45])
- evolutionary Turing machines (Eugene Eberbach[54])
In the same book, he also presents a list of "algorithmic schemes":
- Turing machines with arbitrary oracles (Alan Turing)
- transrecursive operators (Borodyanskii and Burgin[55])
- machines that compute with real numbers (L. Blum, F. Cucker, M. Shub, and S. Smale)
- Static neural networks based on real weights, or equivalently neural networks with finite-precision weights but with asynchronous update, a stochastic coin, or evolving or learning weights and/or structure (Hava Siegelmann)
Criticism
Martin Davis, in his writings on hypercomputation,[56][57] refers to this subject as "a myth" and offers counter-arguments to the physical realizability of hypercomputation. As for its theory, he argues against the claim that this is a new field founded in the 1990s; this point of view relies on the history of computability theory (degrees of unsolvability, computability over functions, real numbers and ordinals), as also mentioned above. He remarks that all of hypercomputation is trivial in the sense that "if non computable inputs are permitted then non computable outputs are attainable." It is widely held that this criticism addresses the earliest mathematical and philosophical suggestions and ignores many of the newer proposals, which are not subject to it.
Andrew Hodges wrote a critical commentary[58] on Copeland and Proudfoot's article.[6]
References
- 1 2 Hava Siegelmann (Oct 1993). Foundations of Recurrent Neural Networks (Ph.D. thesis). Rutgers, The State University of New Jersey.
- 1 2 3 H.T. Siegelmann (Apr 1995). "Computation Beyond the Turing Limit" (PDF). Science 268 (5210): 545–548. doi:10.1126/science.268.5210.545. PMID 17756722.
- ↑ Mike Stannett, Super-Turing Computation. Seminar presentation (scans of original slides), Department of Computer Science, University of Sheffield, 1990.
- ↑ Mike Stannett, An Introduction to post-Newtonian and super-Turing computation. Technical Report CS-91-02, Department of Computer Science, University of Sheffield, 1991.
- ↑ Mike Stannett, 1990, "X-machines and the halting problem: Building a super-Turing machine", Formal Aspects of Computing, Volume 2, Issue 1, pp. 331–341. http://link.springer.com/article/10.1007%2FBF01888233
- 1 2 Copeland and Proudfoot, Alan Turing's forgotten ideas in computer science. Scientific American, April 1999
- ↑ Alan Turing, 1939, "Systems of Logic Based on Ordinals", Proceedings of the London Mathematical Society, Series 2, Volume 45, pp. 161–228.
- ↑ "Let us suppose that we are supplied with some unspecified means of solving number-theoretic problems; a kind of oracle as it were. We shall not go any further into the nature of this oracle apart from saying that it cannot be a machine" (Undecidable p. 167, a reprint of Turing's paper Systems of Logic Based On Ordinals)
- 1 2 H.T. Siegelmann, "Neural Networks and Analog Computation: Beyond the Turing Limit", Birkhauser, Boston, December 1998
- ↑ H.T. Siegelmann and E.D. Sontag (1995). "Computational Power of Neural Networks" (PDF). Journal of Computer and System Sciences 50 (1): 132–150. doi:10.1006/jcss.1995.1013. (Described as the most fundamental theorem about neural networks in Simon Haykin's book on neural networks.)
- ↑ H.T. Siegelmann and E.D. Sontag (1994). "Analog Computation via Neural Networks" (PDF). Theoretical Computer Science 131: 331–360. doi:10.1016/0304-3975(94)90178-3. (Described as the fundamental theorem differentiating neural networks from classical computers in Simon Haykin's book on neural networks; widely cited in the field and in the media.)
- ↑ J.L. Balcázar, R. Gavaldà and H.T. Siegelmann (Jul 1997). "Computational Power of Neural Networks: A Characterization in Terms of Kolmogorov Complexity" (PDF). IEEE Transactions on Information Theory 43 (4): 1175–1183. doi:10.1109/18.605580.
- ↑ J. Cabessa and H. T. Siegelmann (2014). "The Super-Turing Computational Power of Plastic Recurrent Neural Networks" (PDF). International Journal of Neural Systems 24 (8).
- ↑ J. Cabessa and H.T. Siegelmann (Apr 2012). "The Computational Power of Interactive Recurrent Neural Networks" (PDF). Neural Computation 24 (4): 996–1019. doi:10.1162/neco_a_00263.
- ↑ J.P. Neto, H.T. Siegelmann, and J.F. Costa, “Implementation of Programming Languages with Neural Nets,” International Journal of Computing Anticipatory Systems 1, 1997: 201-208
- ↑ J. Kilian and H.T. Siegelmann, “The Dynamic Universality of Sigmoidal Neural Networks,” Information and Computation 128(1), 1996: 45-56.
- ↑ H.T. Siegelmann, B.G. Horne, and C.L.Giles, “Computational Capabilities of Recurrent NARX Neural Networks,” IEEE Transaction on Systems, Man and Cybernetics – part B: Cybernetics 27(2), 1997: 208-215.
- ↑ H.T. Siegelmann, E. Nissan, and A. Galperin, “A Novel Neural/Symbolic Hybrid Approach to Heuristically Optimized Fuel Allocation and Automated Revision of Heuristics in Nuclear Engineering,” Advances in Engineering Software 28(9), 1997: 581-592.
- ↑ H.T. Siegelmann, A. Ben-Hur and S. Fishman, “Computational Complexity for Continuous Time Dynamics,” Physical Review Letters, 83(7), 1999: 1463-1466.
- ↑ H.T. Siegelmann and S. Fishman, “Computation by Dynamical Systems,” Physica D 120, 1998 (1-2): 214-235.
- ↑ A. Ben-Hur, J. Feinberg, S. Fishman and H. T. Siegelmann, “Random matrix theory for the analysis of the performance of an analog computer: a scaling theory,” Physics Letters A. 323(3-4), March 2004: 204-209.
- ↑ A. Ben-Hur, J. Feinberg, S. Fishman and H. T. Siegelmann, “Probabilistic analysis of a differential equation for linear programming,” Journal of Complexity 19(4), August 2003: 474-510.
- ↑ H. T. Siegelmann, “Turing on Super-Turing and Adaptivity,” Progress in Biophysics & Molecular Biology, Sep 2013, 113(1): 117–126. doi:10.1016/j.pbiomolbio.2013.03.013.
- ↑ These models have been independently developed by many different authors, including Hermann Weyl (1927). Philosophie der Mathematik und Naturwissenschaft.; the model is discussed in Shagrir, O. (June 2004). "Super-tasks, accelerating Turing machines and uncomputability" (PDF). Theor. Comput. Sci. 317: 105–114. doi:10.1016/j.tcs.2003.12.007. and in Petrus H. Potgieter (July 2006). "Zeno machines and hypercomputation". Theoretical Computer Science 358 (1): 23–33. doi:10.1016/j.tcs.2005.11.040.
- 1 2 3 E. M. Gold (1965). "Limiting Recursion". Journal of Symbolic Logic 30 (1): 28–48. doi:10.2307/2270580. JSTOR 2270580., E. Mark Gold (1967). "Language identification in the limit". Information and Control 10 (5): 447–474. doi:10.1016/S0019-9958(67)91165-5.
- 1 2 3 Hilary Putnam (1965). "Trial and Error Predicates and the Solution to a Problem of Mostowksi". Journal of Symbolic Logic 30 (1): 49–57. doi:10.2307/2270581. JSTOR 2270581.
- 1 2 L. K. Schubert (July 1974). "Iterated Limiting Recursion and the Program Minimization Problem". Journal of the ACM 21 (3): 436–445. doi:10.1145/321832.321841.
- ↑ Arnold Schönhage, "On the power of random access machines", in Proc. Intl. Colloquium on Automata, Languages, and Programming (ICALP), pages 520-529, 1979. Source of citation: Scott Aaronson, "NP-complete Problems and Physical Reality" p. 12
- ↑ Edith Spaan, Leen Torenvliet and Peter van Emde Boas (1989). "Nondeterminism, Fairness and a Fundamental Analogy". EATCS bulletin 37: 186–193.
- ↑ Hajnal Andréka, István Németi and Gergely Székely, Closed Timelike Curves in Relativistic Computation Parallel Process. Lett. 22, 1240010 (2012).
- ↑ Todd A. Brun, Computers with closed timelike curves can solve hard problems, Found.Phys.Lett. 16 (2003) 245-253.
- ↑ S. Aaronson and J. Watrous. Closed Timelike Curves Make Quantum and Classical Computing Equivalent
- ↑ Hogarth, M., 1992, 'Does General Relativity Allow an Observer to View an Eternity in a Finite Time?', Foundations of Physics Letters, 5, 173–181.
- ↑ István Neméti; Hajnal Andréka (2006). "Can General Relativistic Computers Break the Turing Barrier?". Logical Approaches to Computational Barriers, Second Conference on Computability in Europe, CiE 2006, Swansea, UK, June 30-July 5, 2006. Proceedings. Lecture Notes in Computer Science 3988. Springer. doi:10.1007/11780342.
- ↑ Etesi, G., and Nemeti, I., 2002 'Non-Turing computations via Malament-Hogarth space-times', Int.J.Theor.Phys. 41 (2002) 341–370, Non-Turing Computations via Malament-Hogarth Space-Times:.
- ↑ Earman, J. and Norton, J., 1993, 'Forever is a Day: Supertasks in Pitowsky and Malament-Hogarth Spacetimes', Philosophy of Science, 5, 22–42.
- ↑ Joel David Hamkins and Andy Lewis, Infinite time Turing machines, Journal of Symbolic Logic, 65(2):567-604, 2000.
- 1 2 Jan van Leeuwen; Jiří Wiedermann (September 2000). "On Algorithms and Interaction". MFCS '00: Proceedings of the 25th International Symposium on Mathematical Foundations of Computer Science. Springer-Verlag.
- ↑ Jürgen Schmidhuber (2000). "Algorithmic Theories of Everything". arXiv:quant-ph/0011122. Sections in: "Hierarchies of Generalized Kolmogorov Complexities and Nonenumerable Universal Measures Computable in the Limit", International Journal of Foundations of Computer Science 13 (4): 587–612, 2002. Section 6 in: "The Speed Prior: A New Simplicity Measure Yielding Near-Optimal Computable Predictions", in J. Kivinen and R. H. Sloan, editors, Proceedings of the 15th Annual Conference on Computational Learning Theory (COLT 2002), Sydney, Australia, Lecture Notes in Artificial Intelligence, pages 216–228. Springer, 2002.
- 1 2 J. Schmidhuber (2002). "Hierarchies of generalized Kolmogorov complexities and nonenumerable universal measures computable in the limit". International Journal of Foundations of Computer Science 13 (4): 587–612. doi:10.1142/S0129054102001291.
- ↑ There have been some claims to this effect; see Tien Kieu (2003). "Quantum Algorithm for the Hilbert's Tenth Problem". Int. J. Theor. Phys. 42 (7): 1461–1478. arXiv:quant-ph/0110136. doi:10.1023/A:1025780028846. or M. Ziegler (2005). "Computational Power of Infinite Quantum Parallelism". International Journal of Theoretical Physics 44 (11): 2059–2071. doi:10.1007/s10773-005-8984-0. and the ensuing literature. For a retort see Warren D. Smith. "Three counterexamples refuting Kieu's plan for "quantum adiabatic hypercomputation"; and some uncomputable quantum mechanical tasks". Applied Mathematics and Computation 178 (1): 184–193. doi:10.1016/j.amc.2005.09.078..
- ↑ Bernstein and Vazirani, Quantum complexity theory, SIAM Journal on Computing, 26(5):1411-1473, 1997.
- ↑ Santos, Eugene S. (1970). "Fuzzy Algorithms". Information and Control 17 (4): 326–339. doi:10.1016/S0019-9958(70)80032-8.
- ↑ Biacino, L.; Gerla, G. (2002). "Fuzzy logic, continuity and effectiveness". Archive for Mathematical Logic 41 (7): 643–667. doi:10.1007/s001530100128. ISSN 0933-5846.
- 1 2 3 Wiedermann, Jiří (2004). "Characterizing the super-Turing computing power and efficiency of classical fuzzy Turing machines". Theor. Comput. Sci. 317 (1–3): 61–69. doi:10.1016/j.tcs.2003.12.004.
- 1 2 Dmytro Taranovsky (July 17, 2005). "Finitism and Hypercomputation". Retrieved Apr 26, 2011.
- ↑ Petrus H. Potgieter (July 2006). "Zeno machines and hypercomputation". Theoretical Computer Science 358 (1): 23–33. doi:10.1016/j.tcs.2005.11.040.
- ↑ Lenore Blum, Felipe Cucker, Michael Shub, and Stephen Smale. Complexity and Real Computation. ISBN 0-387-98281-7.
- ↑ P.D. Welch (10 Sep 2006). "The extent of computation in Malament–Hogarth spacetimes". British Journal for the Philosophy of Science 59: 659–674. arXiv:gr-qc/0609035.
- ↑ Hava Siegelmann; Eduardo Sontag (1994). "Analog Computation via Neural Networks". Theoretical Computer Science 131 (2): 331–360. doi:10.1016/0304-3975(94)90178-3.
- ↑ Joel David Hamkins; Andy Lewis (2000). "Infinite Time Turing machines". Journal of Symbolic Logic 65 (2): 567–604. doi:10.2307/2586556.
- ↑ Hintikka, Ja; Mutanen, A. (1998). "An Alternative Concept of Computability". Language, Truth, and Logic in Mathematics. Dordrecht. pp. 174–188.
- ↑ Darko Roglic (24 Jul 2007). "The universal evolutionary computer based on super-recursive algorithms of evolvability". arXiv:0708.2686 [cs.NE].
- ↑ Eugene Eberbach (2002). "On expressiveness of evolutionary computation: is EC algorithmic?". Computational Intelligence, WCCI 1: 564–569. doi:10.1109/CEC.2002.1006988.
- ↑ Borodyanskii, Yu M; Burgin, M. S. (1994). "Operations and compositions in transrecursive operators". Cybernetics and Systems Analysis 30 (4): 473–478. doi:10.1007/BF02366556.
- ↑ Davis, Martin (2006). "Why there is no such discipline as hypercomputation". Applied Mathematics and Computation 178 (1): 4–7. doi:10.1016/j.amc.2005.09.066.
- ↑ Davis, Martin (2004). "The Myth of Hypercomputation". Alan Turing: Life and Legacy of a Great Thinker. Springer.
- ↑ Andrew Hodges. "The Professors and the Brainstorms". The Alan Turing Home Page. Retrieved 23 September 2011.
Further reading
- Mike Stannett (1990). "X-machines and the halting problem: Building a super-Turing machine". Formal Aspects of Computing 2 (1): 331–341. doi:10.1007/BF01888233.
- Turing, Alan (1939). "Systems of logic based on ordinals". Proceedings of the London Mathematical Society 45: 161–228. doi:10.1112/plms/s2-45.1.161.
- Mike Stannett, The case for hypercomputation, Applied Mathematics and Computation, Volume 178, Issue 1, 1 July 2006, Pages 8–24, Special Issue on Hypercomputation
- Keith Douglas. Super-Turing Computation: a Case Study Analysis (PDF), M.S. Thesis, Carnegie Mellon University, 2003.
- L. Blum, F. Cucker, M. Shub, S. Smale, Complexity and Real Computation, Springer-Verlag 1997. General development of complexity theory for abstract machines that compute on real numbers instead of bits.
- On the computational power of neural nets
- Toby Ord. Hypercomputation: Computing more than the Turing machine can compute: A survey article on various forms of hypercomputation.
- Apostolos Syropoulos (2008), Hypercomputation: Computing Beyond the Church-Turing Barrier (preview), Springer. ISBN 978-0-387-30886-9
- Burgin, M. S. (1983) Inductive Turing Machines, Notices of the Academy of Sciences of the USSR, v. 270, No. 6, pp. 1289–1293
- Mark Burgin (2005), Super-recursive algorithms, Monographs in computer science, Springer. ISBN 0-387-95569-0
- Cockshott, P. and Michaelson, G. Are there new Models of Computation? Reply to Wegner and Eberbach, The computer Journal, 2007
- Cooper, S. B. (2006). "Definability as hypercomputational effect" (PDF). Applied Mathematics and Computation 178: 72–82. doi:10.1016/j.amc.2005.09.072.
- Cooper, S. B.; Odifreddi, P. (2003). "Incomputability in Nature". In S. B. Cooper and S. S. Goncharov. Computability and Models: Perspectives East and West (PDF). Plenum Publishers, New York, Boston, Dordrecht, London, Moscow. pp. 137–160.
- Copeland, J. (2002) Hypercomputation, Minds and machines, v. 12, pp. 461–502
- Martin Davis (2006), "The Church–Turing Thesis: Consensus and opposition". Proceedings, Computability in Europe 2006. Lecture Notes in Computer Science 3988, pp. 125–132.
- Hagar, A. and Korolev, A., Quantum Hypercomputation—Hype or Computation?, (2007)
- Rogers, H. (1987) Theory of Recursive Functions and Effective Computability, MIT Press, Cambridge Massachusetts
- Volkmar Putz and Karl Svozil, Can a computer be "pushed" to perform faster-than-light?, (2010)
External links
- Hypercomputation Research Network
- Hypercomputation
- Toby Ord, Hypercomputation: computing more than the Turing machine
- Toby Ord, The many forms of hypercomputation
- Paolo Cotogno, Hypercomputation and the Physical Church-Turing thesis
- Gualtiero Piccinini, Computation in Physical Systems
- H.T. Siegelmann's publications page at University of Massachusetts Amherst