Talk:Super-recursive algorithm

Articles for deletion: This article was nominated for deletion on 25 May 2008. The result of the discussion was no consensus.

Are the claims made in this article accepted by the scientific community?

This article seems to be based mainly on the work of Mark Burgin, UCLA, which appeared as a Springer book. While this may look like a clear case of verified science, where only notability could be a concern, the sensational claim made in this article, and the unusual fact that a refutation of Burgin's book appeared in the Bulletin of Symbolic Logic, strongly suggest the opposite; i.e. we need to verify that the claims are accepted by the scientific community before presenting them as facts. (See WP:NPOV.) I currently do not have a strong opinion on this topic, but I will make myself acquainted with it.

In any case I would advise the new editor who has contributed this article and linked to it from various places (I reverted all links but one), to familiarise themselves with our policies and guidelines such as WP:FRINGE and WP:COI, to see if they might apply here, and, as the case may be, to prepare for arguing the opposite. --Hans Adler (talk) 18:00, 7 February 2008 (UTC)


Taking into account your comments above, I have changed "church-turing thesis is refuted" to "church-turing thesis is claimed to be refuted." in all the articles in which this phrase now appears. Kizeral (talk) 18:08, 7 February 2008 (UTC)

I'd recommend attributing the claim to a specific person and citing a relevant, reliable source for the existence of the claim. Michael Slone (talk) 18:26, 7 February 2008 (UTC)

With regards to notability, the Wikipedia guidelines say that even a debunking or disparaging in a serious journal counts towards notability, and you did give a reference to the Bulletin of Symbolic Logic. Kizeral (talk) 18:24, 7 February 2008 (UTC)

See my comments on the Talk page of the Church–Turing thesis. Nobody seriously regards Burgin's work as even coming close to a refutation of the thesis. In any case, this article should be merged into hypercomputation, and the only fair way to present it is as a fringe theory with no serious backing in the scientific community. Pascal.Tesson (talk) 19:39, 7 February 2008 (UTC)
My first thought was also to merge with hypercomputer, but after reflection I think that is not the best solution. My impression is that this is a particular type of hypercomputer that will take a full article to explain. It is important, though, to use due weight in describing philosophical claims about the Church-Turing thesis. Martin Davis has published a review of Burgin's book and a separate paper on Burgin's book and other notions of hypercomputing; these would be good sources to use to document the position held by the vast majority of computability theorists. — Carl (CBM · talk) 19:57, 7 February 2008 (UTC)

Just from the description I would guess that this hypercomputation model is equivalent to Zeno machines, or perhaps slightly weaker. I think the article should clarify the connection. --Hans Adler (talk) 21:01, 7 February 2008 (UTC)

Yes, I agree. This means someone needs to pull Burgin's book from the library. Kizeral - can you add the explanation to the article? — Carl (CBM · talk) 21:06, 7 February 2008 (UTC)

More 0s than 1s

There was a claim that the algorithm article discusses an algorithm to tell whether there are more 0s than 1s in an infinite binary sequence. There is no such algorithm (in the article or not, it simply doesn't exist). So I removed that claim from this article. — Carl (CBM · talk) —Preceding comment was added at 23:27, 28 February 2008 (UTC)

There are no Turing machine algorithms that can do this for arbitrary sequences. However, there are things like quantum computers and variants thereof, which are "programmable", and can thus have "algorithms", and are not isomorphic to Turing machines. Some of these can make decisions about infinite sequences ... no surprise, as they are performing calculations with infinite-precision real numbers ... but I have no clue if this has any bearing on what this article is supposed to be about. linas (talk) 05:21, 29 February 2008 (UTC)
The ordinary formulations of "quantum computer" don't compute anything more than Turing machines; they only compute faster. Even accepting that "has more 0s than 1s" can be given any specific meaning, which seems unlikely, I don't believe that there is any algorithm that could decide it. Like many people in computability theory, I have a specific meaning for the word algorithm: it's a deterministic, finitistic, discrete process that terminates after a finite number of steps with the correct answer. We don't call any possible process an algorithm; at least one necessary property is that a human being could simulate the algorithm given nothing but pencil and paper. — Carl (CBM · talk) 14:52, 29 February 2008 (UTC)
It turns out this was actually added to the algorithm article at some point; see talk:algorithm for that. — Carl (CBM · talk) 15:30, 29 February 2008 (UTC)

Claims about Davis' arguments

Having read Davis' articles on Burgin's work, I don't find the following text accurate, so I am moving it here for further discussion.

To support this claim, Davis gives two arguments. First, what bothers Martin is that there is no bound on the time necessary to obtain the result. The second problem he sees is that in order for a computational result to be useful one must be able to at least recognize that it is indeed the result sought. The first argument is also a problem for Turing machines, as for them there is likewise no bound on the time necessary to obtain the result. The second argument involves not the problem of what is computable but of what is useful. However, different super-recursive algorithms are successfully used in practice.

— Carl (CBM · talk) 23:30, 28 February 2008 (UTC)

Ungrounded claims and false information

I was really surprised reading this discussion. It looks improbable that serious scientists, logicians and mathematicians can make such ungrounded claims and disseminate false information. However, this happens. To prove this, I will analyze what some of the discussion participants are writing.

Hans Adler writes (18:00, 7 February 2008) "that a refutation of Burgin's book appeared in the Bulletin of Symbolic Logic, ..."

If anybody, including Hans Adler, reads this short review that appeared in the Bulletin of Symbolic Logic, they will see that there is not a single negative word in the whole review, let alone any "refutation". The author starts his review with the statement:

"This is a truly extraordinary book."

and concludes with the citation from the book:

"In concluding, let the author have the last word: “Such misunderstanding of modern computers that restricts them to the conditions of the Church-Turing thesis is not unique and similar ‘blindness’ is not new in society. Thus, people thought for thousands of years that the Sun rotated around the Earth and only in the 16th century Copernicus proved different.”

It would be interesting to know whether at least one of the discussion participants has read this short review, let alone the book that is being discussed.

In addition, Hans Adler tries to use Wikipedia policies, WP:NPOV, WP:FRINGE, and WP:COI, to support his attack but misrepresents these policies. Indeed, a neutral point of view (NPOV) means representing fairly, and as far as possible without bias, all significant views that have been published by reliable sources (http://en.wikipedia.org/wiki/Wikipedia:Neutral_point_of_view). Hans Adler himself violates this policy, because his attempt to refute the article "Super-recursive algorithm" contradicts NPOV: the article contains significant views (he himself writes about the sensational claim made in the discussed book) that have been published by reliable sources. By the way, this claim was made by many researchers, although it looks like Hans Adler does not know this.

Hans Adler also misrepresents and misuses WP:FRINGE (http://en.wikipedia.org/wiki/Wikipedia:Fringe_theories#Sourcing_and_attribution and http://en.wikipedia.org/wiki/Wikipedia:RS), which states:

"Wikipedia relies heavily upon the established literature created by scientists, scholars and researchers around the world. Items that fit this criterion are usually considered reliable. However, they may be outdated by more recent research, or controversial in the sense that there are alternative theories. Wikipedia articles should cover all major scholarly interpretations of a topic, and all major and significant-minority views based on material that has been vetted by the scholarly community is regarded as reliable; this means published in peer-reviewed sources, and reviewed and judged acceptable scholarship by the academic journals."

If Hans Adler looked into the publications in the area of super-recursive algorithms, he would see that the topic completely satisfies these conditions. To make this claim grounded, a new list of references has been added to the article.

Thus, Hans Adler's suggestion is in conflict with WP:COI because Wikipedians must place the interests of the encyclopedia first and any editor who gives priority to outside interests may be subject to a conflict of interest (http://en.wikipedia.org/wiki/Wikipedia:Conflict_of_interest).

The main problem for Hans Adler is that the book, which is used as the main reference, presents a new theory (the theory of super-recursive algorithms). We all know that understanding a new theory demands much more effort than understanding a new advance in what we already know. People have attacked new theories because they were not able to understand them and did not want to make the effort. There are many examples of this situation in the history of science (e.g., Maxwell's electrodynamics) and mathematics (non-Euclidean geometries).

Very strange ideas were suggested by Pascal.Tesson, who wrote that the article "Super-recursive algorithm" should be merged into hypercomputation. Indeed, this is the same as suggesting that the article "Algorithm" should be merged into the article "Computation". Now even undergraduates know that algorithm and computation, and respectively super-recursive algorithm and hypercomputation, are essentially different (although related) concepts.

The position of Carl looks much more grounded when he writes (19:57, 7 February 2008):

Martin Davis has published a review of Burgin's book and a separate paper on Burgin's book and other notions of hypercomputing; these would be good sources to use to document the position held by the vast majority of computability theorists.

To do this it is necessary, at least, to read both sources, the review and the paper. First, there is not a single negative word in the whole review by Davis. Second, when we look at the arguments presented in the paper by Davis, we see that they are aimed not at the theory of super-recursive algorithms, but at the realizability and usability of this theory. Besides, these arguments can be easily refuted. Indeed, Davis gives two arguments. In the first one, he is concerned that there is no bound on the time necessary to obtain the result (Davis 2006, p. 128). The second argument is that in order for a computational result to be useful one must be able to at least recognize that it is indeed the result sought (Davis 2006, p. 128). The first argument is also a problem for Turing machines, as for them there is likewise no bound on the time necessary to obtain the result. However, as we all know, Turing machines are accepted as the most relevant model for algorithms. The second argument involves not the problem of what is computable but of what is useful. However, different super-recursive algorithms are successfully used in practice, and this refutes the second argument.

When Hans Adler writes (21:01, 7 February 2008) "that this hypercomputation model is equivalent to Zeno machines, or perhaps slightly weaker", he only increases the confusion and demonstrates poor understanding, because the words "hypercomputation model" do not point to any particular model.

In this context, the suggestion of Carl (21:06, 7 February 2008) to read the book "Super-recursive Algorithms", or at least to look into it, is very relevant and important, as it is necessary to have a grounded opinion when writing an article for an encyclopedia or discussing such an article.

However, when Carl writes (23:27, 28 February 2008):

"There was a claim that the algorithm article discusses an algorithm to tell whether there are more 0s than 1s in an infinite binary sequence. There is no such algorithm (in the article or not, it simply doesn't exist). So I removed that claim from this article."

He violates the principle of neutrality, which is a cornerstone of Wikipedia, because in the section "Termination" of the article "Algorithm" it is written (http://en.wikipedia.org/wiki/Algorithm):

"In the case of non-halting computation method (calculation procedure) success can no longer be defined in terms of halting with a meaningful output. Instead, terms of success that allow for unbounded output sequences must be defined. For example, an algorithm that verifies if there are more zeros than ones in an infinite random binary sequence must run forever to be effective. If it is implemented correctly, however, the algorithm's output will be useful: for as long as it examines the sequence, the algorithm will give a positive response while the number of examined zeros outnumber the ones, and a negative response otherwise. Success for this algorithm could then be defined as eventually outputting only positive responses if there are actually more zeros than ones in the sequence, and in any other case outputting any mixture of positive and negative responses."

Perhaps the article on algorithm is incorrect? Although it is interesting and worthwhile to study algorithms that run forever, to claim that the output of a digit-counting algo after a finite amount of time is "useful" is ... misleading, and seems to be a willful perversion of what the idea of infinity is all about. (And there are no "clever" algos, since there are more reals than rationals.) linas (talk) 05:52, 29 February 2008 (UTC)
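For concreteness, the limiting procedure quoted above can be sketched in a few lines of Python. This is only an illustration (the bit list and function names are invented here, not taken from the Algorithm article or from Burgin): each individual verdict is produced after finitely many steps, but no verdict is ever announced as final.

    # Sketch of the quoted "more zeros than ones" limiting procedure:
    # read one bit at a time and keep emitting a revisable verdict.
    def zeros_vs_ones_verdicts(bits):
        """Yield True after each prefix in which zeros outnumber ones, else False."""
        zeros = ones = 0
        for b in bits:
            if b == 0:
                zeros += 1
            else:
                ones += 1
            yield zeros > ones  # current, revisable guess; never marked "final"

    # Hypothetical finite prefix of an infinite sequence:
    prefix = [0, 1, 0, 0, 1, 0]
    print(list(zeros_vs_ones_verdicts(prefix)))  # [True, False, True, True, True, True]

Success in the quoted sense means the verdicts eventually stabilize; the procedure itself never halts and never signals that stabilization has occurred.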


In addition, when Carl writes (23:30, 28 February 2008) that he removed the explanation of the arguments given by Davis from the article "Super-recursive algorithm", it is possible to argue that this explanation is necessary to correctly present Davis' arguments. Otherwise, the reader of the article will have misleading information, while Wikipedians in general, and authors of articles in particular, must place the interests of the encyclopedia first and not give misleading information. The arguments of Davis are taken as direct citations from his paper, and the explanation of their invalidity is based on known and published facts.


With respect to all discussion participants, (Multipundit (talk) 04:51, 29 February 2008 (UTC))

I disagree with Multipundit's comment on so many levels that I find it hard to pick some of them out for a response. Perhaps most importantly, I had, and still have, an open mind about Burgin's claims and certainly not a strong anti-Burgin opinion, as Multipundit seems to suspect. However, in the few hours that I spent researching this topic, I have seen a combination of things that can be conceived as sensationally strong claims, unconvincing or merely philosophical arguments, and an apparent tendency to isolate the subject from existing well-established terminology. I cannot tell how much of this is due to Burgin himself, and how much of it is just due to some of his followers who may not be professional mathematicians or theoretical computer scientists. In any case the combination raises red flags, and there is no doubt for me that we need to figure out whether we have a situation here that is comparable to cold fusion.
Perhaps my choice of the word "refutation" to characterise Martin Davis' review of Burgin's book was not ideal, but I don't think I have ever read a similarly critical review in the BSL. If I were a native speaker I might have used the word "slating", which I just learned from a dictionary. A review that starts with the ambivalent sentence "This is a truly extraordinary book" and does not contain a single unambiguously positive word about the book was very likely not meant to be a positive one. Many statements are guarded with phrases such as:
"But he assures us that"
"In the author's brave new world"
"Among the author's astonishing predictions is the following"
"Burgin finds the key to what he regards as"
Moreover, it seems obvious to me that the following sentences must be read as ironical:
"Mathematicians can eagerly await the proofs of the Goldbach Conjecture and the Riemann Hypothesis that this amazing technology promises!"
"Burgin's inductive Turing machines generalize this notion extending their reach through the entire arithmetic hierarchy. It is a pity that he doesn't stop to tell us how he imagines this power could be realized in real-world physical devices."
One general problem here seems to be that the peer review in the subject is for hard mathematical statements, while Burgin's sensational claim is no such thing. It looks like he wants to initiate a paradigm shift from algorithms to his generalised algorithms. This could be justified by claiming that some new generation of real-life computers will be able to run generalised algorithms in such a way that they will give us definite results after running for a finite amount of time, or by saying that we should just stop worrying whether the result we get from an algorithm is the definite one. In the latter case, if we want to decide a mathematical problem such as the existence of twin primes we should just start a computer now, which perhaps will say no for the first few years, until in 2038, say, after several hardware upgrades, it might eventually say yes. Too bad for the poor mathematician who published a paper in 2037 based on the negative answer. No, that's not a refutation of the Church-Turing thesis, and neither are speculations that some computers in the future might be fundamentally more powerful.
There are similar fundamental problems with the traditional notions of algorithms, but that doesn't mean it's a good idea to add new problems.
I am not saying that it's impossible that I got something wrong here, but this is how things look to me now.
Multipundit, if you want me to take you seriously, I suggest that you look at this paragraph that you wrote:
"Thus, Hans Adler's suggestion is in conflict with WP:COI because Wikipedians must place the interests of the encyclopedia first and any editor who gives priority to outside interests may be subject to a conflict of interest"
A quick glance at WP:AGF should be enough to convince you that in the interest of a constructive working atmosphere it would be a good idea for you to strike that paragraph or to present some strong arguments for this accusation. Another link that you might find illuminating is WP:CRYSTAL. --Hans Adler (talk) 11:16, 29 February 2008 (UTC)
I rephrased the arguments using more quotes from Davis. Davis is not concerned at all with the fact that a program can run an arbitrary finite time before halting; rather, he is concerned with the fact that a limit computation never halts, and it is never known when the output is correct. Davis specifically reminds us that part of the definition of an algorithm is that we can tell what the output is meant to be. — Carl (CBM · talk) 15:12, 29 February 2008 (UTC)
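To make the point about limit computations concrete, here is a minimal Python sketch (my own illustration, not a construction taken from Burgin or Davis) of a trial-and-error "decision" of halting in the limit. The helper run_for_steps is hypothetical: it simulates a program for n steps and reports whether it has halted by then. The guesses eventually stabilize on the correct answer, but no stage ever announces that the current guess is final, which is exactly the feature Davis objects to.

    # Limiting / trial-and-error "decision" of halting: emit a guess at each
    # stage; the guesses stabilize, but stabilization is never announced.
    def halting_guesses(program, inp, run_for_steps):
        n = 0
        while True:
            n += 1
            yield run_for_steps(program, inp, n)  # False until (if ever) it halts

    # Toy stand-in: a "program" encoded as the number of steps it runs before halting.
    def toy_runner(program, inp, n):
        return program <= n

    gen = halting_guesses(5, None, toy_runner)
    print([next(gen) for _ in range(8)])  # [False, False, False, False, True, True, True, True]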

I have found three more reviews of Burgin's book on the Web. They show that my understanding that the review of Davis is a positive review is grounded. With respect, Multipundit (talk) 02:07, 3 April 2008 (UTC)

I looked at two of these - the one in Computing Reviews I can't seem to get online here. The Zentralblatt "review" is heavily copied from the back of the book; it's not much of a review at all. The review by Smith does point out the "controversy" around Burgin's claims, but avoids taking a stand either way, as is common for reviews.
I don't see any way that Davis' review can be read as positive. Ordinary reviews in the Bulletin of Symbolic Logic don't mock the book they are reviewing, and the sarcasm in Davis' review is especially pointed.
I also found a review in MathSciNet [1] by José Costa. I'll quote the end of his review here.
"To sum up, my answer to whether to recommend this book is yes and no. The book provides valuable information (although some historical information is also inaccurate), but it also contains misleading and confusing mathematics that cannot be recommended for students."
— Carl (CBM · talk) 13:23, 3 April 2008 (UTC)

Strange remarks

I find the second sentence deeply disturbing: The term was introduced by Mark Burgin to more adequately describe what contemporary computers are doing. In the strict, technical sense, modern computers are just plain Turing machines, so the claim that modern computers are more than "just plain Turing machines" is nonsense. Yet, in a hand-wavy kind of way, I can perhaps see some inkling of truth: for example, quantum computers, or generalizations thereof (topological/geometric computers), can compute things that Turing machines cannot (because they can sum infinite sequences, which Turing machines cannot). Insofar as we use floating-point on ordinary computers to approximate these things, we are "using contemporary computers to do these fancier things" ... but this is a subtle point. Trotting it out, raw, naked, in the second sentence is asking for trouble. linas (talk) 05:12, 29 February 2008 (UTC)

Perhaps "The term was introduced by Mark Burgin, who believes it more adequately describes what contemporary computers are doing."? But I agree that contemporary computers are not doing super-recursive algorithms, and the distinction between a program the repeatedly asks for input and produces an answer, versus a program that asks for input but never produces an answer, is significant. — Carl (CBM · talk) 15:10, 29 February 2008 (UTC)

Notability / verifiability / accuracy

This is certainly a controversial subject, so I'll try to be careful here. I like the idea of having a page for this subject, since there appears to be borderline notability: in addition to the articles and a book from the author and a short refutation by Davis, there are mentions/cites of other super-recursive proponents. But the article must be clear that the concept is at best protomathematics, and that it is not widely accepted.

The most important point for me, though, is that the article should inform the reader sufficiently to understand the debate, as it were, over the validity/applicability of the concept. I feel it does not meet this standard now. In reading the article, I was not sure precisely what a super-recursive algorithm (SRA) was. If SRAs are just supertasks, then the various claims made of them are true, of course, but I would hope that a book devoted to them would have more to say than a one-line proof. Ideally I would have in the article (with examples):

  • A sound definition of SRAs. (A super-recursive algorithm is a Turing machine that has oracle access to an infinite number of Turing machines with fast arithmetic subroutines.)
  • Claimed theorems about SRAs. (Super-recursive algorithms can solve the Collatz conjecture by directly checking cases.)
  • Claims about applicability to the real world. (A super-recursive algorithm can be built, disproving the Church-Turing thesis.)

CRGreathouse (t | c) 05:18, 29 February 2008 (UTC)

I completely agree that a good definition of super-recursive algorithm is needed. I will look for Burgin's book in the library this afternoon, which should help with that.
This is not actually a controversial subject, in the ordinary sense of the word. It's only "controversial" in the wikipedia sense. There is no serious discussion within the computability community that the Church-Turing thesis has been (or even could be) overturned. See for example this paper by Soare [2] (esp. p. 13, second para, "Some have cast doubt..."). Of course this article is not the place to write original arguments for the pros and cons of Burgin's claims. We can simply state what Burgin has claimed, point out that his claims don't have acceptance in the field, and use Davis as an example of published criticism. — Carl (CBM · talk) 15:26, 29 February 2008 (UTC)
P.S. In particular, I recommend reading that paper by Soare for its explanation of a current viewpoint of computability theorists about the Church-Turing thesis. Even if a physical device could be created to solve the Halting problem, this would not invalidate the Church-Turing thesis, unless the device were algorithmic in nature. — Carl (CBM · talk) 15:34, 29 February 2008 (UTC)

Definition

I got Burgin's book from the library, so I can give some more clear definitions. According to Burgin (p. 108):

  • "Any class of algorithms that is more powerful than the class of recursive algorithms (such as Turing machines) is called a class of superrecursive algorithms."
  • "Hypercomputation is a computational process (including processes of input and output) that cannot be realized by recursive algorithms."

Note that these definitions employ a nonstandard definition of "algorithm", since the ordinary definition of algorithm coincides with what Burgin calls "recursive algorithms". Burgin defines (p. 24):

"An algorithm is an unambiguous (definite) and adequately simple to follow (effective) prescription (organized set of instructions/rules) for deriving necessary results from given inputs (initial conditions)."

Burgin continues by explaining that "adequate" means that a person can follow the instructions. Burgin also (apparently) relaxes the ordinary criteria on the output of an algorithm being returned in a finite amount of time. — Carl (CBM · talk) 16:17, 29 February 2008 (UTC)

Burgin relaxes a lot more than that. He claims that Tandem computers are hyper-recursive. Haven't heard of Tandem? I don't blame you. Tandem was one of the first commercialized fault-tolerant computer systems, and there might be a few Tandem systems still running in the world, but the company itself is long gone (swallowed by Compaq, which in turn got swallowed by HP). What makes Tandem computers hyper-recursive? For Burgin, it's this simple: They don't stop. (And in fact, IIRC, "NonStop" was some kind of trademark for Tandem.) What he means, really (or should mean, if he knows anything) is that Tandem computers were fault-tolerant.
OK, now, the problem with this application of hyper-recursive is very simple: there was nothing happening inside a Tandem NonStop configuration that you couldn't model with a Turing machine. It had a proprietary architecture, but it was still your basic von Neumann architecture in a dual processor configuration. And, as anyone who has taken a basic computer theory course knows, you can emulate two Turing machines operating in parallel on one Turing machine. Tandem Nonstop computers didn't stop, and Turing computability is determined by whether an effective procedure halts? Ask me if I care! Tandem computers spit out answers and kept going; you can model this by just defining a Turing machine computation as having the same program but a different starting (resuming) point and a different initial (checkpointed) state. So, even though Tandem computers were reducible to Turing machines, and therefore couldn't compute anything more than Turing machines could, Tandem computers could compute more than a Turing machine? They can? They can't? Which is it, Burgin?
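For readers who haven't seen the interleaving trick mentioned above, here is a minimal sketch in Python. It is a generic illustration (the step interface is invented here), not a model of Tandem's actual architecture: one sequential loop alternates single steps of two independent machines, so anything the "parallel" pair computes, a single machine computes as well.

    # Dovetailing: one sequential loop simulates two machines by alternating
    # single steps. `step` advances a machine and reports whether it halted.
    def run_two_in_parallel(m1, m2, step, max_steps=10000):
        halted1 = halted2 = False
        for _ in range(max_steps):
            if not halted1:
                m1, halted1 = step(m1)
            if not halted2:
                m2, halted2 = step(m2)
            if halted1 and halted2:
                break
        return m1, m2

    # Toy machine: a counter that "halts" when it reaches its target.
    def step(machine):
        count, target = machine
        count += 1
        return (count, target), count >= target

    print(run_two_in_parallel((0, 3), (0, 5), step))  # ((3, 3), (5, 5))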
I know this is mindboggling for anyone who has studied computer science at any significant level of formality, but that's just how fuzzy and bogus (not to mention dated) Burgin's Super-recursive algorithms is. Here, I'll give you the precise passage on this point:
Large applications involving database transactions such as automatic teller machines (ATM), credit card transactions, and securities transactions demand distribute computer systems that have mean time failures [sic] measured in years. In response to these needs, NonStop Tandem computer systems have been created. Working in this nonstop, super-recursive mode, Tandem computers have achieved mean time to failures measured in tens of years. (p.109)
Yes - in Burgin's mind, this somehow makes Tandem computers something computationally more powerful than Turing machines. Gee, I wonder why nobody used Tandem computers to solve any problems of exponential time complexity (for Turing machines) in sub-exponential time? (Yes, I can guess the punchline: nobody stuck around long enough to get the answers!)
After I saw this, and looked around at the skimpy reviews, and noted the heavy sarcasm of the only review that discusses the Springer monograph at any significant length, it dawned on me: we shouldn't be arguing about how to develop the subject in this article. The subject is laughable vapor. We should be talking about either deletion of the article, or (if it clears the "notability" bar somehow) how to include criticisms of the subject so that Wikipedia will be doing what an encyclopedia does about any subject: give curious readers at least a summary of what's known about the subject. From what I can see, there are only those two cases.
(1) There's nothing to know here because there's no actual knowledge, if anything it's mostly just misinformation (ergo, no article). Or,
(2) people should be able to find out that there really is nothing to know here -- i.e., that this is an article about some fringe science that somehow cleared the bar for Wikipedia notability. Yakushima (talk) 13:02, 24 May 2008 (UTC)

ref for computational schemas

Does anyone have a reference for the "computational schemas"? They aren't in the index of Burgin 2005. — Carl (CBM · talk) 16:47, 29 February 2008 (UTC)

lead sentence

In the lead sentence, "...more powerful than Turing machines, that is, computes more..." should be rewritten, as I'll try to explain. The examples of "super-recursive" have no links or particulars; however, one example, a limit of a sequence of Turing machine computations, seems sufficiently well defined by itself. However, it is unclear in what sense a limit of an infinite sequence of computations is a computation; since there is a minimum time cost to any computation, the time cost of an infinite sequence would not converge; so in this example it would seem unreasonable to say that the limit process computes more than a Turing machine. The lead sentence should not suggest the possibility of constructing a working machine by some design that is better than current machines; I doubt that is the purport of Burgin's work. Better would be to talk about maps to solution spaces (as opposed to computable maps). There is no reason for a very theoretical process to be couched in lay terms. I doubt that currently it can be, without misleading. Pete St.John (talk) 18:13, 4 March 2008 (UTC)

I just want to note this online reference, a fairly short paper that seems to address this topic. Hasn't clarified anything for me, though. Pete St.John (talk) 18:28, 4 March 2008 (UTC)
The issue is that Burgin does claim that those limiting processes are a sort of computation - and he expands his definition of the term algorithm to accommodate them. He claims that an inductive Turing machine can "solve" the halting problem, and uses this as an argument against the Church-Turing thesis; see that section in this article. — Carl (CBM · talk) 19:43, 4 March 2008 (UTC)
This is sorta like Assuming Good Faith (on the part of Burgin); the article, and maybe even Burgin's original writing, does seem lame to me for the evident reasons. However, I'm sure that something makes a great deal of sense; something I would phrase like "A functor in Burgin Space (of which Algorithms are a subspace closed under Composition) maps problems into Burgin Space Two (a subspace of which is solution space)". Instead of saying that an algorithm (with a new definition of "algorithm") is computable (with a new definition of "computable") I'd merely be more specific. I had this problem at the Wolfram (2,3) article, because the PR language from Wolfram Research was too general, leading everyone to "refute" the paper (which may not have been worded great, either, but which extends all the definitions and may contain true theorems). Of course we don't mind "generalizing so that Church-Turing does not apply" but "refuting Church-Turing" is wrong. If we can't find the former type of exposition then we should probably AfD this article, you think? But I expect that there is good work underlying Burgin (and maybe some language barrier too) Pete St.John (talk) 19:58, 4 March 2008 (UTC)

removed paragraph about "better way"

I don't think this sentence strikes the right tone:

Thus, a better way to present new advancements in computability theory is to say that Burgin and other researcher extended the definition of the term algorithm to a broader domain where the Church-Turing thesis is not true.

If someone has said this in print, we can attribute it to them; but to simply claim as fact that this is a "better" way seems somewhat biased. If included, it would also require more explanation, because the "Church-Turing thesis" being described is not the actual Church-Turing thesis, but some other claim. The actual thesis isn't ordinarily considered to be domain dependent. — Carl (CBM · talk) 12:07, 6 March 2008 (UTC)

Finding the true meaning

As our discussion continues, it brings good news and bad news. I would like to start with the good news. First, our discussion is becoming more scientific because at least one participant (Carl) looked into the book that is discussed, and another participant (CRGreathouse) correctly wrote (29 February 2008) that before discussing concepts, it's necessary to precisely define these concepts. All this helps to make our discussion more scientific. As the methodology of science teaches, and as scientists know from their own experience, science has developed its own criteria for evaluating what people say or write. Namely, all statements or claims have to be based on facts and sound logic. What contradicts facts and/or logic is not scientific. To prove or disprove something, it's necessary to give valid arguments based on facts. Unfortunately, these scientific criteria are often violated in scientific discussions. For example, mocking an opponent is acceptable for a lawyer or a politician, but not for a scientist. It is known that the first version of the periodic table of chemical elements was discovered by the British chemist Newlands. However, when he presented his discovery to the Royal Society, he was so severely mocked that he stopped working in this area.

Second, the article "Super-recursive algorithm" is becoming more organized and better structured and gives much more information to the reader, and there is a reasonable suggestion from Pete St.John (4 March 2008) to change the statement "refuting Church-Turing" to the statement "generalizing so that Church-Turing does not apply".

Third, Hans Adler (29 February 2008) acknowledged that the word "refutation" cannot be used to characterise Martin Davis' review, because a refutation demands arguments based on facts that prove that something is not true, while the review provides no arguments but gives only a reflection of some chosen parts of the book "Super-recursive Algorithms."

The bad news is that the participants of the discussion still have a lot of misconceptions and misunderstandings; the topic of the discussion is still too fuzzy; and many claims made by participants lack grounded arguments. For instance, participants of this discussion confuse the mathematical theories presented in the book "Super-recursive Algorithms" with practical implications based on some of these theories and on known empirical facts about computers. In addition, it's necessary to understand that the theory of super-recursive algorithms consists of many subtheories: the theory of inductive Turing machines and inductive computation, the theory of limiting recursive functions, the theory of analogue computations, the theory of neural networks that work with all real numbers, and the theory of Turing machines with oracles, to mention but a few. Even if one of these theories is criticized, it does not mean that the whole theory of super-recursive algorithms is criticized. To think so is the same as to think that if some inconsistent mathematical theory is criticized, then the whole of mathematics is criticized, or that if some ungrounded physical theory is criticized, then the whole of physics is criticized.

For instance, Carl (29 February 2008) mentions a paper by Soare [3] (esp. p. 13, second para, "Some have cast doubt..."). Let us take the whole statement of Soare. He writes:

"Some have cast doubt on Turing's Thesis on the grounds that there might be physical or biological processes which may produce, say, the characteristic function of the halting problem. It is possible that these may exist (although there is presently no evidence) but if so, this will have absolutely no effect on Turing's Thesis because they will not be algorithmic or mechanical procedures ...".

This argument of Soare cannot be applied to what is written in the book "Super-recursive Algorithms" because the book gives an algorithmic or mechanical procedure in the form of an inductive Turing machine. Even Davis (in his article "The Church–Turing Thesis: Consensus and opposition") does not dispute that these machines model (perform) computations when he writes (Davis 2006, p. 128): "in order for a computational result to be useful one must be able to at least recognize that it is indeed the result sought."

What worries Davis is that computations controlled by such algorithms will never be useful. It is possible to give valid arguments that such computations will be useful, but even if we agree with his assertion, as scientists we have to build mathematical models for such objects because they exist. Distant stars are not useful in the pragmatic sense, but they exist, and physics includes them in models of the universe, while physicists study these stars by means of theoretical models. In a similar way, computer scientists have to model and study all phenomena that exist in the field of computing. That is why it is not a good situation that, as Carl writes (29 February 2008), there is no SERIOUS discussion within the computability community that the Church-Turing thesis has been (or even could be) overturned.

Such mathematical theories as the theory of inductive Turing machines and inductive computation, the theory of limiting recursive functions, and the theory of Turing machines with oracles have never been criticized for their mathematical content. What irritates many researchers who have based their work on the CTT are the implications, connected to some of these theories, that the CTT is not true.

Actually, everything in these statements about the validity of the CTT depends on the definition of algorithm that is used. If we define an algorithm to be the same as a Turing machine, then it becomes impossible to disprove the CTT. However, if we consider the algorithm as a technical or even natural phenomenon, then the CTT becomes a scientific hypothesis, which can be proved or disproved.

Hans Adler (18:00, 7 February 2008) made the most extended contribution to our discussion. Let's analyze what he wrote and what arguments he gives to support his claims. He correctly observes that the sentence "This is a truly extraordinary book" is ambivalent. To eliminate this ambivalence it's necessary to analyze what is written after this sentence. If we read the review, we see that after this sentence the reviewer explains what he saw in the book he is reviewing. For instance, the next sentence states that the author of the book notes the well-known fact that the majority of program properties related to their output are undecidable by modern computers when they work in the recursive mode. There is nothing extraordinary in this. Then the reviewer states that the author of the book assures us that super-recursive algorithms can solve many such problems for ordinary programs and contemporary computers. Is this true or not? As is written in the book, the limiting recursive and limiting partial recursive functions introduced by the mathematician M. Gold are particular cases of super-recursive algorithms. It was proved long ago (more than forty years ago) that these functions can compute and decide sets that are not computable by Turing machines. Since then, nobody has disproved these results. So, according to mathematical criteria, these results are true. Consequently, the citation from the book is also true. By the way, later the reviewer writes that it is known that not all sets accepted by the limiting partial recursive functions of Gold and the “trial and error” predicates of Putnam are accepted by Turing machines.

Hans Adler is concerned with expressions like "But he assures us that", "Among the author's astonishing predictions is the following", and "Burgin finds a key to what he regards as". The first expression simply shows that the reviewer wants to express the opinion of the author. The second expression explains why the reviewer thinks that the book is extraordinary. To better understand the third expression, it's necessary either to consider the whole sentence of which this expression is only a part or to read the book.

In any case, the point is that to really understand what contemporary computers are doing, we need to take into consideration that there are non-terminating computations that nevertheless give results. Thus, it is necessary to build mathematical models of how to rigorously define results for non-terminating computations. There are several solutions to this problem. One of these solutions was embodied in (represented by) such models as the limiting recursive functions and limiting partial recursive functions of Gold, the “trial and error” predicates of Putnam, the inductive Turing machines of Burgin, and the general Turing machines of Schmidhuber. So, actually, the opinion of the reviewer that "Burgin finds a key to" this problem is inexact, because Gold and Putnam found this "key" much earlier. What the reviewer correctly writes, and what Hans Adler reiterates (as if it could be something negative), is that "Burgin’s inductive Turing machines generalize this notion extending their reach through the entire arithmetic hierarchy." Is this statement in favor of the book, or does it contain some negative implications? We know that the approach of Gold and Putnam gave birth to such a respected direction in computer science and artificial intelligence as inductive inference.
Many mathematicians and computer scientists, including such well-known researchers as Leonard Adleman, Manuel Blum, Lenore Blum and Carl Smith, contributed to this area. Thus, to say that a book is a further development of a respected direction in computer science and artificial intelligence is definitely a positive statement about this book.

Hans Adler gives one more citation from the review by Davis, namely that "mathematicians can eagerly await the proofs of the Goldbach Conjecture and the Riemann Hypothesis that this amazing technology promises!" This is really a strange statement because, as we know, it is currently mathematicians who prove theorems, not machines. Even if computers are heavily used for some proofs, such as that of the Four Color Theorem, people do these proofs as they write the programs for those computers. I took the book "Super-recursive Algorithms" and tried to find a prediction that computers will prove theorems while mathematicians wait for their results, but was not able to find one.

However, in general, the review of Davis describes the content of the book. For instance, when he writes that “neural nets” with arbitrary real numbers as weights "not only accept uncomputable sets, in fact they accept all sets of strings on a given alphabet", he simply reiterates Corollary 4.2.9 (Burgin 2005, p. 151), which can be easily derived from Lemma 4.2.1 (Burgin 2005, p. 149) proved in the book.

When Davis writes, "Burgin's inductive Turing machines generalize this notion extending their reach through the entire arithmetic hierarchy. It is a pity that he doesn't stop to tell us how he imagines this power could be realized in real-world physical devices," this only states that the book "Super-recursive Algorithms" is a theoretical monograph and not an instruction manual for some device. It might be very useful if we could find in the book how to design useful super-recursive algorithms, but theory is not about "know how" but about "know what." Maybe "know how" will be the topic of another book in this area. However, some statements from the review can be misinterpreted without further explanation. For instance, Davis writes, "Burgin suggests the example of a PC being used as a terminal to access a supercomputer with the supercomputer playing the role of oracle." To understand this sentence, we need to know what an oracle is. I have not found what is written in the book "Super-recursive Algorithms" about oracles. I will look more, but for now I would like to share with the other participants of this discussion what the methodology and history of science teach us. The lesson we can take is that for science (and mathematics), it's more efficient and productive to ask not "What is some concept, say A?" but "How are we to understand the concept A?" Here is one example from the history of mathematics.

When mathematicians started to solve more and more complex algebraic equations and found some formulas for their solutions, they encountered a strange phenomenon. Expressions that had negative numbers under the square root appeared in the process of solving equations. At that time, this made no sense. All mathematicians knew that the square of any number could not be negative. So, at first, mathematicians forbade the use of these unusual expressions. (In a similar way, some people now try to forbid the use of super-recursive algorithms.) Then mathematicians started using these unusual expressions only as formal constructions that had no real meaning in mathematics. Only when some mathematicians asked the question "What can be a number?" were imaginary and complex numbers invented. As we know, complex numbers are now very useful in mathematics and beyond, in spite of the fact that some of them are called imaginary. Those who don't know mathematics might interpret the expression "imaginary numbers" as irony, but we know that it is only a mathematical term. So, to discern irony from a literal statement of some fact, it's necessary to have enough knowledge.

Now let's look at what an oracle can be and try not to ascribe extraordinary features to it. We come to the conclusion that any system B can be an oracle for another system A when B has, or can get, more knowledge than A. For instance, the Internet is now an oracle for many people. When I read the "talk" of Hans Adler and saw the acronyms WP:NPOV, WP:FRINGE and WP:COI, which I did not know, I went to Wikipedia as an oracle and found their meaning. When we want to know more about super-recursive algorithms, we go to such oracles as the book "Super-recursive Algorithms" and reviews of this book.

The example from the book cited in the review by Davis does not mean that the supercomputer is a super-recursive oracle. A supercomputer can be an oracle for a PC even if both of these machines are equivalent in their potential computing power to some Turing machines. From the point of view of recursion theory, such an oracle makes no sense, as it does not make the PC super-recursive. However, from the point of view of real life in general and computational practice in particular, such an oracle can be very useful to the user of that PC.

This shows that recursion theory, which studies classes of algorithms rather than individual algorithms, is too coarse and can lead to wrong conclusions in many practical situations. As S writes, the theory of computation was eventually substituted by recursion theory. In recursion theory, the Church-Turing Thesis is an implicit axiom. However, computer science is not pure mathematics but a technical science, and all laws of science are only well-grounded hypotheses. The Church-Turing Thesis is a law of contemporary computer science. However, according to Popper, to be a scientific law a statement needs to allow empirical refutation. Thus, the Church-Turing Thesis cannot be refuted in recursion theory, where it is an axiom. However, in computability theory, which is a part of computer science, this Thesis allows empirical refutation.

Hans Adler also writes (18:00, 7 February 2008) about philosophical arguments with respect to the current status of the Church-Turing Thesis. As I can see from the new version of the article "Super-recursive Algorithm", people who support assertions from the book "Super-recursive Algorithms" are all computer scientists and not philosophers. Besides, the book itself contains mathematical proofs and not only general reasoning.

I would like also to explain to linas, who writes (29 February 2008) that "In the strict, technical sense, modern computers are just plain Turing machines …", and to many other people who believe this, that it is a misconception. As is correctly written in the Wikipedia article "Turing machine": "… any particular program running on a particular machine is in fact nothing but a deterministic finite automaton, since the machine it runs on can only be in finitely many configurations. Turing machines would actually only be equivalent to a machine that had an unlimited amount of storage space." Nevertheless, the Turing machine was one of the best (if not the best) models of computers. However, times are changing. Information technology is developing and we need new models. We can find a variety of such models in computer science. We only have to understand which models are better in which situations, which models are irrelevant now and have to wait for future computers, which models can pave the way to future computers, and which models are only mathematical abstractions that provide for a good mathematical theory.

With respect to all discussion participants, Multipundit (talk) 00:14, 7 March 2008 (UTC)

Multipundit, that was an impressive effort: some good points, some good prose. But frankly I didn't read all of it, and I conjecture that most of us will not. Wiki is more like modern business email: if more than one point is made, both will be ignored. That kills me every time; constraining myself to compound sentences on merely finitely many points is unnatural to me, so I'm sympathetic. Pete St.John (talk) 16:50, 7 March 2008 (UTC)
Multipundit writes: "Now let's look what can be an oracle and try not to ascribe to an oracle some extraordinary features." Actually, ascribing extraordinary features to an oracle is the whole point of oracles. See Oracle machine if you're unacquainted with this sense of the term. (You should be acquainted with it, if you've studied computing theory formally. I haven't so much as cracked a book on computing theory in over 25 years, but I did take an undergrad course in it when I was a CS major at UC Berkeley, and then graded homework for that same course, for a few semesters.) Yakushima (talk) 05:30, 24 May 2008 (UTC)

critique by Prof Pratt

I asked about this subject at Prof Pratt's userpage (we have some prior history on account of the Wolfram (2,3) "Turing Machine" controversy) and he replied at length there. It's quite a severe, but informative, critique of Burgin's work and I commend it to interested editors. Pete St.John (talk) 17:40, 7 March 2008 (UTC)

An exhibition of fallacies

In his contribution to the discussion, Pete St.John (7 March 2008) writes about "Pratt's critique of Burgin's work". I looked at this critique. It looks like one of those scams that we receive by e-mail, where people write about non-existing things to achieve their not-so-good goals. There are several arguments to support this conclusion. First argument: Pratt does not know anything about Burgin's work. He shows complete ignorance of this area. An example of this is his completely incorrect statement that "super-recursive algorithms" is the "name for computability classes above Turing degree 0 such as the arithmetic and analytic hierarchies, which have been studied for many decades." Pratt was not even able to understand what he was reading in the article "Super-recursive algorithm", where many examples of super-recursive algorithms that have nothing to do with the arithmetic and analytic hierarchies are given. Thus, Pratt uses the Straw Man fallacy [3] to make an impression on the reader.

Second argument: not a single scientific or mathematical argument is given in the so-called "critique". As a result, Pratt's "critique" is a vivid example of the fallacy [1] of personal attack [2].

Third argument: instead of writing something (anything!) about Burgin’s work, Pratt attacks the work of Siegelmann. This is an example of the Red Herring fallacy [5].

Thus, it's easy to see that the so-called "Pratt's critique" is an exhibition of different fallacies and does not help to find the true meaning but tries to distort the truth. I will not continue my arguments, as Pete St.John (7 March 2008) is right in writing that if more than one point is made, both will be ignored.


[1] A fallacy is an error in reasoning.

[2] A personal attack fallacy is committed when a person substitutes abusive remarks for evidence when attacking another person's claim or claims.

[3] The Straw Man fallacy is committed when a person simply ignores a person's actual position and substitutes a distorted, exaggerated or misrepresented version of that position.

[4] The Appeal to Ridicule is a fallacy in which ridicule or mockery is substituted for evidence in an "argument."

[5] A Red Herring is a fallacy in which an irrelevant topic is presented in order to divert attention from the original issue.

With respect to all discussion participants, Multipundit (talk) 23:00, 7 March 2008 (UTC)


Multipundit, the user Pratt is (acknowledged to be) the logician Pratt; so an ad hominem debate will not go well. However, we can (here at least) apply actual logic. Please pick one specific point and I'll try to address it. We won't (I don't think) improve the article with a broad-axe, but with a scalpel.
I concede that Prof Pratt's tone is not especially politic; that may be why he posted to his talk, instead of here. So I'll take it on myself, if I can, to address particulars from that view. Meanwhile, we should note that evidently Burgin is not universally (at least) endorsed by the formal logic community; so this material at least should be presented as iconoclastic, if not mere bunk. Pete St.John (talk) 18:21, 8 March 2008 (UTC)

Grounded scientific arguments

PeterStJohn (8 March 2008) asks me to pick one specific point related to what Pratt wrote on his web site. The most important point is the scientific validity of Pratt's claims. Here we are discussing an area of computer science. Thus, the author of any claims must give scientific arguments that support these claims. Let's look at what scientific arguments related to the work of Burgin are given by Pratt. The only scientific argument is:

“As best I can tell it seems to be Burgin's made-up name [super-recursive algorithm] for computability classes above Turing degree 0 such as the arithmetic and analytic hierarchies, which have been studied for many decades.”

This statement is completely incorrect because in the article (and I am not even speaking about the book in question) there are various examples of super-recursive algorithms that have nothing to do with the arithmetic and analytic hierarchies. In addition, these algorithms (from the article "Super-recursive algorithm") have not been studied for many decades, in contrast to what is written.

First, it would help if there were (are?) citations to use of the term "super-recursive algorithm" from elsewhere. Has anyone else adopted the term, and if so, what formal definition do they give for it? Second, Pratt didn't say "exclusively arithmetic and analytic" but "such as..." instead. I'm sure we'd all be interested in something specific which is new. Pratt's impression seems, to me, to be that anything new and correct is lost among what is old or incorrect. Pete St.John (talk) 03:41, 10 March 2008 (UTC)

When I looked at the next comment of Pratt (8 March 2008), I found that he himself admits:

“ I don't know about the book …”

Yes, he corrected that himself. He read some of Burgin's papers and has not read this particular book. Judging from his apparent impression of the papers, I would not expect him to read the book. Pete St.John (talk) 03:41, 10 March 2008 (UTC)

I don't know whether it's decent, or consistent with the rules of a scientific discussion, to write about a subject without knowing that subject. It would be interesting to know the opinions of other participants of this discussion on this issue.

With respect to all discussion participants, Multipundit (talk) 00:39, 10 March 2008 (UTC)

It isn't so easy to extrude single, definite, isolatable points from such a complex topic; I appreciate your trying. My replies are embedded above. Pete St.John (talk) 03:41, 10 March 2008 (UTC)

I have read what PeterStJohn wrote and I will answer his claims later, because I am more interested in finding the truth than in social controversies, but I would appreciate it if nobody breaks up what I wrote.

With respect to all discussion participants, Multipundit (talk) 00:15, 13 March 2008 (UTC)

It's not easy to respond to a long, complex argument; but easier to address single specifics, one at a time. Not that I'm any paragon of "Accurate, Brief, and Concise" myself. Pete St.John (talk) 02:50, 14 March 2008 (UTC)

[edit] References for computational schemas

Carl (29 February 2008) asked for a reference for the "computational schemas." I have searched the book "Super-recursive Algorithms" and found the terms algorithmic scheme (pp. 113 and 115) and computational scheme (p. 121). It looks like they have the same meaning. As far as I can understand, the term algorithmic scheme is much more general than the term super-recursive algorithm, and Burgin argues (on p. 115) that it's necessary to make a clear distinction between super-recursive algorithms and algorithmic schemes that are not algorithms. That's why I changed the term computational schemata to the term algorithmic scheme in the article "Super-recursive algorithm".


With respect to all discussion participants, Multipundit (talk) 00:49, 13 March 2008 (UTC)

[edit] Three logical mistakes in one sentence

Pete St.John (10 March 2008) drew our attention to the explanation of Pratt, writing that "Pratt didn't say "exclusively arithmetic and analytic" but "such as ..." instead." This explanation does not help to make what Pratt wrote more consistent. Even with this explanation, Pratt's sentence under consideration contains three logical and a couple of factual mistakes. Namely:

1. It is claimed that super-recursive algorithm is a name for computability classes above Turing degree 0 such as the arithmetic and analytic hierarchies. However, the arithmetic and analytic hierarchies consist of sets (Rogers 1987). Sets are not algorithms, and to assert that an algorithm is a name for a class of sets distorts our understanding of algorithms. To say so is the same as to say that a plane is a name for the passengers it carries. Thus, we have here a logical fallacy called False Analogy [2].

2. It is also incorrect to write "such as" (even in the new meaning) because there are super-recursive algorithms that have nothing to do with the arithmetic and analytic hierarchies. This is a logical fallacy called Suppressed Evidence [3], because the article "Super-recursive algorithm" gives evidence of super-recursive algorithms that have nothing to do with these hierarchies.

3. In addition, the argument that “the arithmetic and analytic hierarchies … have been studied for many decades” has nothing to do with the topic of our discussion. Thus, this is a logical fallacy called Red Herring [1].

It is interesting to find a person who, as Pete St.John (8 March 2008) writes, is "acknowledged to be the logician" and yet makes three logical mistakes in one sentence.

[1] A fallacy is an error in reasoning. A Red Herring is a fallacy in which an irrelevant topic is presented in order to divert attention from the original issue.

[2] A False Analogy is a fallacy in which an analogy is irrelevant.

[3] Suppressed Evidence is a fallacy of intentionally failing to use information suspected of being relevant and significant.

With respect to all discussion participants, Multipundit (talk) 21:47, 13 March 2008 (UTC)

I am not sure that you are addressing the point that everybody else here seems to be concerned about, which is Burgin's claim that he has refuted the Church-Turing thesis. It has "near-universal acceptance", and therefore any such claim should come with very good arguments. As far as I can tell, Burgin's arguments seem to be: 1. redefining the word "algorithm", and 2. justifying this by physical speculations that most people consider unconvincing. If this impression is wrong, the best response is to tell us about the real, convincing arguments and prove to us that Burgin is not just performing silly word games.
I don't think the fine points that you are addressing are relevant to this, and it seems unfair to hold Vaughan's comments to a different standard than the one for which they are intended: Informal discussion about the legitimacy of Burgin's claim. --Hans Adler (talk) 00:44, 14 March 2008 (UTC)


Regarding point 1, "algorithms are sets". I certainly wouldn't say that myself, but I'm not a logician. Cantor himself said that sets are their cardinalities, that is, M = |M|. A logician friend tried to explain that to me; I would have trouble using such language. However, sets are equivalent to their cardinalities :-) and algorithms are sets of ordered pairs of commands and line numbers (say). Everyone knows algorithms are trees. So I wouldn't object to Vaughan's phrasing on that ground alone. Pete St.John (talk) 02:56, 14 March 2008 (UTC)
When making precise comparisons between classes of entities it is customary to use precisely defined entities. "Set" is a precisely defined concept, as is "program" for any specific programming language. In contrast "algorithm" is a vague term for "class of programs that do essentially the same thing." There are many notions of "essentially the same thing," but the one that is assumed by and is sufficient for the statement of the Church-Turing thesis is "accept the same set of inputs."
There are various finer notions. One much-studied notion is based on David Park's idea of bisimulation as strengthened by Robin Milner, as the finest of various branching-time distinctions. Another is based on the distinctions between true concurrency and interleaving concurrency drawn by partial-order and higher-dimensional models of computation, both of which I have written many papers on between 1980 and the present. Yet another is based on the concept of identification in the limit, studied by E.M. Gold and others in the 1960s and rediscovered independently by Burgin in 1983 under the name "inductive Turing machine"; see M. Burgin, Inductive Turing machines, Notices Acad. Sci. USSR 270 (1983) 1289-1293.
All these refinements of the general notion of computational equivalence naturally lead to new computation classes. Rob van Glabbeek's Ph.D. thesis "Comparative Concurrency Semantics and Refinement of Actions" (Vrije Universiteit te Amsterdam, 1990) identifies close to 40 of them based on various refinements of branching time and true concurrency. However, none of these new classes impact the Church-Turing thesis, which is formulated for the acceptance-based notion of equivalence. Any discussion of violations of the thesis needs to be conducted in terms of sets, the natural semantics that goes with the thesis. Other distinctions drawn by more refined notions of "algorithm", while of interest in their own right, have no bearing on the thesis. --Vaughan Pratt (talk) 03:17, 17 March 2008 (UTC)
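For concreteness, here is a minimal illustrative sketch (in Python; an assumed example, not taken from any of the cited works) of the acceptance-based equivalence described above: two programs that accept exactly the same set of inputs, and so count as "the same" for the purposes of the Church-Turing thesis, even though finer notions such as step counts or branching structure distinguish them.

# Both functions accept exactly the even integers, so they are
# acceptance-equivalent, although their internal behaviour (number of
# steps, control structure) is observably different.
def accepts_even_v1(n: int) -> bool:
    return n % 2 == 0                # a single arithmetic test

def accepts_even_v2(n: int) -> bool:
    m = abs(n)
    while m >= 2:                    # repeated subtraction: many more steps
        m -= 2
    return m == 0

# Same accepted set on a sample range of inputs:
assert all(accepts_even_v1(k) == accepts_even_v2(k) for k in range(-50, 51))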

[edit] What is our discussion about, or "silly word games"

Hans Adler suggests that our discussion is about "the point that everybody else here seems to be concerned about, which is Burgin's claim that he has refuted the Church-Turing thesis." I tried to find the assertion "I have refuted the Church-Turing thesis" in Burgin's book "Super-recursive algorithms" and was not able to do so. Then I thought maybe the book is too long and it's difficult to check all statements in it. So I searched those texts of Burgin that I was able to find on the web. I also did not find such a claim. Then I thought maybe Davis, who reviewed the book, was able to find such a claim. So I reread his review (it's easy to do this because it's only two pages) and also did not find such a claim. I understand that absence of a proof is not a proof of absence, but if Hans suggests discussing somebody's claim, it's necessary to give a citation showing that this claim was really made. Otherwise, we'll discuss something that does not exist and, as Hans calls it, we'll be "just performing silly word games." I admire Hans for coining this expression "silly word games."

Here I analyze only one assertion from the new contributions. Later I will analyze other assertions.

With respect, Multipundit (talk) 02:10, 21 March 2008 (UTC)

I'll quote the relevant part of Burgin's book when I get back to the office tomorrow. Actually, if you look at Davis' review, he specifically quotes Burgin as saying "Recent discoveries... allow researchers to overcome limitations of Church-Turing thesis." — Carl (CBM · talk) 02:15, 21 March 2008 (UTC)
I am not really interested in Burgin's work, but I am interested in maintaining a high standard in Wikipedia. My impression (mentioned on talk pages, but never in articles!) that Burgin claims to have refuted the CTT was based on statements of two people who seemed to have read his works: Davis and you. Now it may very well be that you have learned some things since you made this edit, but it certainly comes as a surprise to me that you now seem to be saying that Burgin never claimed that. Can you explain what is going on? I am genuinely puzzled. --Hans Adler (talk) 02:34, 21 March 2008 (UTC)
Hans is right in writing that I have learned something by participating in this discussion. In particular, I learned that Burgin did not claim that he has refuted the Church-Turing thesis, because neither I nor such experienced WIKIPEDIAns as Carl and Hans were able to find the statement "I have refuted the Church-Turing thesis" in Burgin's works and in reviews of these works. With respect, Multipundit (talk) 21:28, 27 March 2008 (UTC)
Actually, I did find them and posted them on Talk:Algorithm (this edit). This is one reason why it's bad to split discussions over numerous pages.
Anyway, Burgin states, "This brings us to a new vision of the Church-Turing thesis. While in general this thesis has been disproved by the invention of different classes of superrecursive algorithms, ..." (p. 46) The interesting thing is that although Burgin states that the thesis has been disproved, he does seem to avoid giving a good justification of why it is disproved. Maybe he just takes it as a given that others will agree with him. — Carl (CBM · talk) 21:35, 27 March 2008 (UTC)
This citation is not evidence that Burgin claimed that he refuted the Church-Turing thesis. It's only a statement that he agrees with those who think so. I will look at what justification for this is given in the book, but as for "a good justification," I am not sure that it's possible to find a justification that would be acceptable to all people. For instance, even in this discussion Carl (27 March 2008) assumes that there is a standard definition of algorithm and takes the one given by Knuth as a model. At the same time, Pratt (17 March 2008) writes that "algorithm" is a vague term. With respect, Multipundit (talk) 00:17, 28 March 2008 (UTC)
I think you're splitting hairs in asking whether Burgin thinks he has refuted the thesis, or just agrees with people who think he has refuted the thesis, or only says the thesis is refuted by superrecursive algorithms (a term he invented) but doesn't want to take credit for it. In any case - Burgin claims the thesis is refuted somehow.
V. Pratt is of course correct that "algorithm" isn't a formally defined term. On the other hand, there is very strong agreement about what an algorithm for computing a number-theoretic function is. It's hard for me to think of any other non-formally-defined term for which such widespread agreement could be found.
I have the impression that you are playing your cards excessively close to the chest about your actual knowledge of these things. Rather than nitpicking what others have said, could you explain what position you think Burgin has, or what you wish were included in the article that isn't included at the moment? — Carl (CBM · talk) 01:48, 28 March 2008 (UTC)


I also want to say that I disagree with the claim that the Church-Turing thesis is in any significant way contingent on the definition of algorithm. The thesis is ordinarily construed as a statement about a particular type of algorithm, for example the type described by Knuth, for computing a number-theoretic function. As a rule of thumb: if an interpretation that makes the Church-Turing thesis trivially false would have been well known in the 1960s, but experts in the field don't even mention it when discussing the thesis in their texts, we can view that interpretation as lacking plausibility. — Carl (CBM · talk) 22:15, 27 March 2008 (UTC)

[edit] What is the difference between sets and algorithms?

It was interesting to analyze the contribution of Pete St.John (14 March 2008). At first, he wrote that he certainly wouldn't say himself that "algorithms are sets" and then he stated that he's not a logician. It would be interesting to find a logician who would deliberately say such a strange thing.

Then Pete St.John claimed that Cantor himself said that sets are their cardinalities, that is, M = |M|. It would be interesting if Pete St.John would give an exact reference to the paper of Cantor in which this assertion was published. Nevertheless, Pete St.John expresses a good intuition in writing "I would have trouble using such language." Indeed, we'll have trouble if we say, as Pete St.John did, that "sets are equivalent to their cardinalities", because it is the same as to say that Pete St.John is equivalent to Pratt. Here is a simple proof of this assertion. The set consisting of Pete St.John has cardinality 1 and the set consisting of Pratt has cardinality 1. Both these sets, the set consisting of Pete St.John and the set consisting of Pratt, are equivalent to the same cardinality, so the first set is equivalent to the second set. Consequently, Pete St.John is equivalent to Pratt.

It would be interesting to read comments of other participants of this discussion on the following assertion of Pete St.John:

Everyone knows algorithms are trees. 

Here, following the advice of Hans, I analyze only one assertion from the new contributions. Later I will analyze other assertions and other contributions.

With respect, Multipundit (talk) 03:37, 24 March 2008 (UTC)

Thank you for keeping to one topic at a time. That gives me the chance to remind you to attack arguments, not the editors who make them. See the short essay WP:APR, which explains part of the problem. --Hans Adler (talk) 17:11, 24 March 2008 (UTC)
It's surely fair to ask for the source of "M = |M|". A difficulty is that I use pipe (on my keyboard shift-backslash) and I don't know how PDF interprets the vertical bar for cardinality, so grep isn't working for me. I did find this paraphrase from the intro of an English translation here in pdf:
...the cardinal number of an aggregate M is the general concept under which fall all aggregates equivalent to M...
which can be taken to mean that a cardinality is an equivalence class of sets (under the relation of bijection, equinumerosity), so in some sense a number (the cardinality) is a set (the equivalence class). But I hope I run into that specific line M = |M|; it was striking (to me). I don't want to defend it, and I don't mean to impugn Cantor; he can't have meant that a set of three lions is the same as a set of three wolves; more likely, that specifically the set {0, {0}, {0, {0}}} is three.
Regarding "everyone knows algoritms are trees" I maybe should have said "algorithms are equivalence classes of programs, and programs are trees". Pete St.John (talk) 18:04, 25 March 2008 (UTC)
Afterword: I found a line at the translation cited above; the PDF doesn't copy the special characters directly to wiki, but it's line (9) on page 88 of the first translated paper, roughly: M is equivalent to the cardinality of M, which looks like M ~ |M| with the tilde symbol but overbars for cardinality instead of the vertical pipes. His definitions for what I'm calling "equivalence" and "cardinality" are in somewhat old-fashioned language; somehow M (with two over-bars) is an aggregate that represents the cardinality of M (e.g. maybe the {0, {0}, ...} above) and the tilde definitely is a bijection (he doesn't use that word though). Pete St.John (talk) 18:19, 25 March 2008 (UTC)
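For readers trying to reconstruct the notation, and taking the description above at face value (tilde for bijective equivalence, a double overbar for the cardinal number), Cantor's line (9) would read in modern LaTeX roughly as

$M \sim \overline{\overline{M}}$

i.e., every aggregate M is equinumerous with the aggregate Cantor uses to represent its cardinal number; this is a weaker statement than the literal identity M = |M|.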
Equivalence being a weaker relation than isomorphism, I can't say I have much of a problem with the proposition that the singleton set {Pete St. John} is equivalent to the singleton set {Pratt}. As far as I'm concerned these two sets are in bijection, which is to say they are isomorphic. This is true of any two singleton sets, that is, sets each consisting of one element. Hence a fortiori they are equivalent.
By the same token {Burgin} and {Multipundit} are equivalent, in fact isomorphic. What I'd like to know is whether they are the same set. One reason I don't contribute much to this page is that I don't see the point of arguing with people who aren't willing to reveal their true names. --Vaughan Pratt (talk) 09:19, 9 April 2008 (UTC)

[edit] More logical fallacies

After analyzing Pete St.John's contribution, it was interesting to analyze what Pratt wrote (17 March 2008). He tries to defend his strange statement that algorithms are sets with a much more sophisticated explanation than the argumentation of Pete St.John. However, all this sophistication does not prove his statement, because all the arguments are unrelated to his assertion. Indeed, at first Pratt writes that ""set" is a precisely defined concept, as is "program" for any specific programming language. In contrast "algorithm" is a vague term …" Contrary to this, the definition from the WIKIPEDIA article "Set" states:

At the beginning of his Beiträge zur Begründung der transfiniten Mengenlehre, Georg Cantor, the principal creator of set theory, gave the following definition of a set: By a "set" we mean any collection M into a whole of definite, distinct objects m (which are called the "elements" of M) of our perception [Anschauung] or of our thought.

Only a very biased person could believe that this definition is more precise than existing definitions of algorithm.

However, what's much more important is that this argument has nothing to do with the statement that algorithms are sets. This argument neither proves nor disproves that statement. All the given examples, starting with bisimulation, support this conclusion. Thus, we once more see a red herring.

In the list of fallacies, it is explained that a red herring is a smelly fish that would distract even a bloodhound. It is also a logical fallacy in which an irrelevant topic is presented in order to divert attention from the original issue.

I will not go over other logical fallacies in the contribution of Pratt here because, as Hans wisely noted, nobody will read a long text. Nevertheless, it would be useful to check whether Pratt's assertion about identification in the limit and inductive Turing machines is correct, because this assertion is related to the essence of our discussion.

Pratt writes about "the concept of identification in the limit, studied by E.M. Gold and others in the 1960s and rediscovered independently by Burgin in 1983 under the name "inductive Turing machine."" I was not able to find the paper cited by Pratt on the web, so I looked into the book "Super-recursive algorithms" to find what inductive Turing machines are and how they are related to identification in the limit. In the book, it is proved that inductive Turing machines can do much more than identification in the limit can do. Then I decided to look into the review of Davis and found that he writes that Burgin's inductive Turing machines generalize the notion of the "trial and error" predicates of Gold and Putnam, extending their reach through the entire arithmetic hierarchy. Thus, Pratt's assertion disagrees not only with the real situation but also with what his friend Davis writes.

With respect, Multipundit (talk) 00:31, 28 March 2008 (UTC)

Where did I say algorithms are sets? And why are you resorting to a 19th century definition to argue that "set" is not well-defined in the 21st century? A lot of work went into the definition of "set" during the early 20th century.
"Algorithm" as a class of programs with the same behavior is a vague term until one defines "same behavior" precisely, for which many definitions exist. One must specify that equivalence relation before making claims about classes of algorithms. --Vaughan Pratt (talk) 03:13, 13 April 2008 (UTC)

[edit] references section is questionable

The vast majority of the references don't use the term "super-recursive algorithm" at all. In fact, it seems to be an idiosyncratic term used exclusively by Mark Burgin, and the claim that what these other authors are doing amounts to something that should be described with that term is attributable solely to Burgin. So there is basically one source for this article, despite the lengthy list of references. Most of the rest more properly describe hypercomputation. --Delirium (talk) 05:49, 11 April 2008 (UTC)

By all means, feel free to prune the references to those that actually mention "super-recursive algorithms" or (possibly) that Burgin explicitly mentions. — Carl (CBM · talk) 11:20, 11 April 2008 (UTC)
I can agree that for those who don't understand what super-recursive algorithms are and what hypercomputation is, the references section is questionable. However, anybody with sufficient knowledge and brains can check that all works listed in the references section are about some kind of super-recursive algorithms. As for hypercomputation, it is necessary to understand how super-recursive algorithms are related to hypercomputation before making any assertions. To conclude, I don't know what Burgin explicitly mentions, but it's sufficient (for a learned and intelligent person) to check definitions to understand the true situation. With surprise, Multipundit (talk) 22:42, 15 April 2008 (UTC)

[edit] More ungrounded claims

The claims in the previous contribution to our discussion are completely ungrounded. First, there is a field in computer science to which all references but three (namely, Axt, Kleene and Kosovsky) belong. The only name for this field is "super-recursive algorithms" and to my knowledge there is no other name for this field. Thus, all these references support the neutrality of the article because they show that the article reflects research of many authors. Second, all references support the factual accuracy of the text because any intelligent (!) reader can read a corresponding publication and see whether the text reflects what is published. The assertion that some of the references are also related to hypercomputation is correct, but the implication that the area of super-recursive algorithms coincides with the area of hypercomputation is incorrect. To oppose super-recursive algorithms to hypercomputation is the same as to assert that computation and algorithms form the same area and that we need only one of these two terms, namely, the term "computation". However, it is true that super-recursive algorithms and hypercomputation have common areas. This situation is not so rare. For instance, the theory of algorithms belongs both to computer science and to mathematics. So, it would be more efficient not to oppose super-recursive algorithms and hypercomputation but to find what they have in common. It might be helpful for the further development of computer science.

Taking only one point at a time, as Pete St.John suggested, it's possible to conclude that the ungrounded claims of the user Delirium only demonstrate that the article conforms to the conditions of neutrality and factual accuracy.

With respect, Multipundit (talk) 01:54, 17 April 2008 (UTC)

The issue with articles on new topics, like this, is always that there is a lack of reliable published sources in the area. Of all the sources that are listed on the article, how many of them actually use the term super-recursive algorithm? — Carl (CBM · talk) 02:32, 18 April 2008 (UTC)
As we all know, encyclopedias in general and WIKIPEDIA in particular have articles about concepts and not about words. Euclid never used the term algorithm, but now we speak about the Euclidean algorithm. Newton never used the terms differentiation, integration, differential calculus and integral calculus. However, every educated person knows that he was one of the creators of the differential calculus and the integral calculus. In a similar way, if a paper or book studies a construction which is a super-recursive algorithm, then this paper or book is relevant for this topic if we want to have good articles and not, as Hans called it, "silly word games." With respect, Multipundit (talk) 00:50, 24 April 2008 (UTC)
The difference here is that everyone knows what differentiation is, but there is no widely used meaning for super-recursive algorithms. Only Burgin seems to use the term, and since his definition starts by redefining algorithm to mean something different than what is ordinarily called an algorithm, it's very difficult for me to tell what is meant to be a super-recursive algorithm. For example, is this a super-recursive algorithm?
Inputs: an index i of a computable function and a number e.
Step 1. If φ_i(e) is defined, output 1 and then halt.
Step 2. Output 0 and then halt.
I don't see that Burgin's definition is precise enough to answer that question. Certainly it isn't an algorithm in the ordinary sense. — Carl (CBM · talk) 01:05, 24 April 2008 (UTC)
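To make explicit where the non-effectiveness lives, here is a hypothetical Python rendering of the procedure just described (an assumed illustration, not anything from Burgin's book). The only non-mechanical part is the oracle call in Step 1; by the unsolvability of the halting problem no Turing-computable implementation of that oracle exists, which is exactly why the procedure is not an algorithm in the ordinary sense.

# "is_defined" is a hypothetical oracle deciding whether the i-th computable
# function is defined (halts) on input e; no Turing machine can implement it.
def described_procedure(i, e, is_defined):
    if is_defined(i, e):   # Step 1: requires answering a halting question
        return 1           # output 1 and then halt
    return 0               # Step 2: output 0 and then halt

Whether a procedure of this shape falls under Burgin's definition of a super-recursive algorithm is exactly the question being asked here.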
Carl, in his contribution, attracted our attention to some important issues.

First, is it possible for the term super-recursive algorithm to be precise if the term algorithm is not precise? We can assume that Carl also thinks that the term algorithm is not precise, because he did not object when Pratt wrote that "algorithm" is a vague term. Second, the definition of super-recursive algorithm given in the article is sufficiently (relative to the definition of algorithm) precise, but it is not constructive. However, we use the concepts of a total Turing machine and a total recursive function although these terms are also not constructive, because we can't tell in general whether a given Turing machine computes a total function or not. With respect, Multipundit (talk) 01:36, 24 April 2008 (UTC)

You haven't addressed my question. The ordinary definition of algorithm is sufficiently well understood that I can say without doubt the process I describe above is not an algorithm. But is it a super-recursive algorithm? — Carl (CBM · talk) 01:38, 24 April 2008 (UTC)
Have you found what you describe above in the book "Super-recursive algorithms"? Multipundit (talk) 02:24, 24 April 2008 (UTC)
No, but I have looked more than once. Burgin has several terms that I think are undefined - like "computational process" - and other terms that he doesn't use the ordinary meaning for - like "algorithm". So it is very difficult to tell what he means when he defines super-recursive algorithms. — Carl (CBM · talk) 02:36, 24 April 2008 (UTC)

[edit] What are the relations between hypercomputation and super-recursive algorithms?

It would be useful if somebody explained what relations exist between hypercomputation and super-recursive algorithms. To write that these concepts are closely related is not enough. It would be helpful to describe what these concepts have in common and what the differences between them are. Such a description might enhance both articles, "Hypercomputation" and "Super-recursive algorithms".

With respect, Multipundit (talk) 01:17, 24 April 2008 (UTC)

Yes, that would be helpful. Do you know if there is any published source that gives that analysis? Unfortunately, Burgin doesn't seem to give enough information. Based on Burgin, def 4.1.2 (p. 108), it appears that the procedure I describe above is a "hypercomputation". — Carl (CBM · talk) 01:37, 24 April 2008 (UTC)

Yes, the description would help enhance both articles. Hopefully an adequate source can be found.--DavidD4scnrt (talk) 09:52, 25 April 2008 (UTC)

I searched the book "Super-recursive algorithms" and three survey articles in this and related areas (Burgin, M. and Klinger, A., Three Aspects of Super-recursive Algorithms and Hypercomputation or Finding Black Swans, Theoretical Computer Science, v. 317; Toby Ord, Hypercomputation: Computing more than the Turing machine can compute, arXiv.org; and Bruno Loff and Felix Costa, Five views of hypercomputation) for relations between hypercomputation and super-recursive algorithms. However, in spite of the optimistic assertion of DavidD4scnrt, I was not able to find a grounded explanation of such relations. We can try to elaborate such an explanation ourselves, but it looks like there are no published sources that give one. With respect, Multipundit (talk) 00:56, 1 May 2008 (UTC)

[edit] Eh, what?

What is a super-recursive algorithm? Saying it is an algorithm which is not general recursive doesn't help - when we say algorithm, we mean something which is recursive. Could the article explain what an algorithm means in this context? It's not clear. It sort of hints it has something to do with not having to stop to produce a result. But still, could the article explain what counts as a result then? Maybe an example? --192.75.48.150 (talk) 15:59, 30 April 2008 (UTC)

See the discussion above under "More ungrounded claims". Basically we have one editor who created this article and thinks it's important, while everybody else is asking themselves the same question as you. Apparently the one proponent of these "algorithms" doesn't really know either, but isn't too concerned about the problem of finding out what they are. Perhaps I could take superrecursive algorithms a bit more seriously if this situation didn't remind me so much of a certain Monty Python sketch. [4] --Hans Adler (talk) 19:50, 30 April 2008 (UTC)
Okay, I've looked at the example given (basically: f(n,m) = 1 if recursive function n terminates on input m, 0 if not). It's a bad one because it doesn't even look like an algorithm at all and so is not compelling. Now I do understand that the example was produced by "the other side", but I do notice no answer was forthcoming as to whether it was even superrecursive! What, in the name of the good lord that created the integers, is going on here? Can we get someone who thinks this is important to:
  1. Add a concrete example of a super-recursive algorithm to the article.
  2. Explain in the article why that example is not general recursive.
  3. Explain in the article in what sense Burgin, or anyone else, would consider that example to be something that "contemporary computers are doing" (phrase taken from the second sentence of the current version of this article).
I take it that #3 is the whole reason we ought to care -- unlike hypercomputation, which is not tied to any notion of concrete algorithm in particular, super-recursion is something that some (sane) people believe contemporary computers are doing, if we just take a different perspective on "termination". Right? --192.75.48.150 (talk) 20:54, 30 April 2008 (UTC)
If we continue our discussion on the level of "eh" and "meh", it will be "kitchen talk" and not a scientific discussion. To keep this discussion on a scientific level, it's necessary to have some knowledge in the area of algorithms and computation, as well as to be able to read what was already written. Namely, in the article "Super-recursive algorithm," there are several examples of super-recursive algorithms. So, it's necessary to read the initial papers in which these models were defined, or to read the book "Super-recursive algorithms," and only then to ask questions. Otherwise, all such talks will be silly word games in the terminology of Hans. However, the suggestion of the User 192.75.48.150 to explain in the article what counts as a result looks very reasonable. I will think about how to do this. With surprise, Multipundit (talk) 01:18, 1 May 2008 (UTC)
First, the point of this Talk page is not to have a scientific discussion; it's to discuss what to do with the article. For example, it might be improved by ... adding examples?
If by "article" you mean "the Wikipedia article", I see no "examples of super-recursive algorithms" there. I see ways of classifying algorithms, no actual example algorithms. I think you'll agree that the distinction between "instance of class x" and "class x" is not "playing silly word games." One could, I suppose, parry the request for an example by saying that the field of super-recursive functions is concerned with exploring the properties of abstractions that can't be explicated fully and concretely -- as one might say of irrational numbers because you can't ever finish writing the number itself. However, when I notice paper titles mentioning the vaunted practical significance of super-recursive functions in supercomputing, it strikes me that somebody, somewhere, is likely to want to write some actual code someday, and in that case, you'd need an explicit algorithm that could be written down. So: example, please? Yakushima (talk) 05:43, 24 May 2008 (UTC)

[edit] This is inexpressibly sad

"Inductive Turing machines" are cited in the article as capable of super-recursive algorithm behavior. In the course of trying to find out what (if anything) they are, I ran across some very disturbing things.

Burgin argues (with a co-author) that a very simple piece of "hardware" added to Turing machines (yielding an "inductive Turing machine") makes them more powerful than ordinary Turing machines. The hardware modification is equivalent to having an LED on the front panel of some hardware realization, telling you whether the machine has halted yet. This is from Burgin and Klinger's "Experience, generations, and limits in machine learning", Theoretical Computer Science 317 (2004) 71-91.

"To show that inductive Turing machines are more powerful than ordinary Turing

machines, we need to find a problem that no ordinary Turing machine can solve and to explain how some inductive Turing machine solves this problem. To do this, consider the halting problem for an arbitrary Turing machine. It was proved unsolvable for Turing machines. Today it is one of the most popular unsolvable problems in the theory of algorithms.

"However, there is an inductive Turing machine M that solves this problem. This

machine M contains a universal Turing machine U as a subroutine. Given a word u and description D(T) of a Turing machine T, machine M uses machine U to simulate T with the input u. While U simulates T, machine M produces 0 on the output tape. If machine U stops, and this means that T halts being applied to u, machine M produces 1 on the output tape. According to the de4nition, the result of M is equal to 1 when T halts and the result of M is equal to 0 when T never halts. In such a way, M solves the halting problem.

"So, even the simplest inductive Turing machines are more powerful than conventional

Turing machines. At the same time, the development of their structure allowed inductive Turing machines to achieve much higher computing power than have the simplest inductive Turing machines described above. This contrasts to such a property of a conventional Turing machine that by changing its structure, we cannot get greater computing power."

Yeah, guys, but ... you don't have to "develop" the structure of a conventional Turing machine to get what you're talking about. You just watch the proposed Turing machine to see if it has halted yet. Do you object that this implies at least sporadic observation, to see if there's an answer yet? So what? So do regular Turing machines. You haven't "solved the halting problem", because that problem is defined as specifying an algorithm that itself always halts with an answer. "Inductive Turing machines" are architecturally no different from regular ones, for all purposes practical and theoretical. Therefore, what you can compute with them is also no different. (In this same paper, Burgin and Klinger prove that all Turing machines can be modeled with "inductive Turing machines"; they don't ask the obvious next question, "can all inductive Turing machines be modeled with regular ones?" The answer would have been rather embarrassing for them, I suppose.)
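To make the point concrete, here is a minimal, runnable Python sketch (an assumed model, not Burgin's machine) in which a computation is represented as a generator of steps. The watcher emits 0 after every observed step and switches to 1 only if the simulated computation halts; within any finite observation window, a trailing run of 0s never certifies non-halting.

def watch(computation, budget):
    """Yield the watcher's current output (0 = not seen to halt, 1 = halted)
    after each observed step, up to a finite observation budget."""
    for _ in range(budget):
        try:
            next(computation)        # advance the simulated computation one step
        except StopIteration:
            yield 1                  # the computation halted: output stabilises at 1
            return
        yield 0                      # still 0 after this step
    # Within any finite budget, a trailing run of 0s is indistinguishable
    # from a run that would eventually switch to 1.

def halting_program():
    for i in range(1000):
        yield i                      # halts after 1000 steps

def looping_program():
    while True:
        yield None                   # never halts

print(list(watch(halting_program(), 2000))[-1])   # prints 1
print(list(watch(looping_program(), 2000))[-1])   # prints 0, but proves nothing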

How did this flagrantly silly argument ever make it into a peer-reviewed journal? Well, a big part of the answer is: it didn't. The guest editors of this issue of Theoretical Computer Science were also the co-authors of this paper. This is not what is usually meant by "peer review".

This Wikipedia article should cite independent, reliable sources to support notability. "Independent" means, at the very least, "not written or co-authored by the progenitor of the topic". "Reliable", in a scientific context, means "peer reviewed in a mainstream (not fringe) journal." I'm starting to think that we may never turn up any work that meets these basic criteria when it comes to this supposed field of "super-recursive algorithms", not to mention "inductive Turing machines". (Now, inductive inference machines, that's a real topic.) Burgin wrote a monograph? It got published by Springer? Almost meaningless: it wasn't peer reviewed. Parts of it don't even seem to be copy-edited.

While I'm here, I think it's worth noting something else I find inexpressibly sad: Burgin's current (2008-05-26) CV mentions that he's a "Member of the Research Board of Advisors, American Biographical Institute." Yes, please click on that link. Pretty shocking, eh?

It all makes me wish that somebody would research, write and publish an exposé in a reliable source, so that this can all be pointed out on Wikipedia as a notable case of pseudo-science emerging in the mainstream -- after all, that kind of thing does happen now and again. Then at least we could be sure there's really a topic here. Yakushima (talk) 09:32, 26 May 2008 (UTC)

Let's look at what Yakushima writes and try to understand how much incorrect information and nonsense this writing contains. For now, only two examples are considered: one of incorrect information and one of nonsense. Yakushima writes, "Burgin argues (with a co-author) that a very simple piece of "hardware" added to Turing machines (yielding an "inductive Turing machine") makes them more powerful than ordinary Turing machines." However, his citation from the paper disproves this statement, because this citation shows that even the simplest inductive Turing machines can do more than any Turing machine can do. The problem with Yakushima is that he is unable to understand this.

Here is the second example. After explaining that he can't understand this, Yakushima suggests that we "just watch the proposed Turing machine to see if it has halted yet." It would be interesting to know how Yakushima will watch when the Turing machine needs to do 10^100 steps to halt. Even if the time for each step were equal to the time light needs to go from the nucleus of a hydrogen atom to the electron of this atom, the time to do all the steps would be larger than the time of the existence of our universe (according to contemporary physical theories). It's possible to read about this, for example, in the book Birkhoff, G. and Bartee, T.C., "Modern Applied Algebra," McGraw-Hill, 1970. Thus, the necessary time to do this is much longer than the island Yakushima has existed, and it would be interesting to know how the person Yakushima will perform such an observation. With interest, Multipundit (talk) 21:03, 30 May 2008 (UTC)


[edit] This is not only sad but also disastrous

Yes, this is not only sad but also disastrous if, by appealing to false information, spreading disinformation, and using such logical fallacies as the "red herring", ignorance imposes its low level of knowledge and understanding on WIKIPEDIA. Yes, ignorance does not need scientific criteria and arguments, but attacks using ungrounded claims and accusations.

With a surprise, Multipundit (talk) 21:39, 28 May 2008 (UTC)

Produce one unambiguously independent, unambiguously peer-reviewed article that treats the subject in any significant detail. Yakushima (talk) 14:05, 31 May 2008 (UTC)

Some time ago, the article "Super-recursive algorithm" was vandalized by the insertion of irrelevant and incorrect data. While what's relevant and what's irrelevant is a matter of opinion, incorrect data can be easily verified and corrected. I hope that some person competent in this area will make the necessary corrections. With respect, Multipundit (talk) 00:54, 31 May 2008 (UTC)

Be specific. What was inaccurate? Yakushima (talk) 14:05, 31 May 2008 (UTC)

[edit] Can all people understand the difference between finite and infinite?

To my surprise, I have found that some people, such as Pratt, cannot understand the difference between finite and infinite sets.

Let's look at what the following claim of Pratt implies. He writes (27 May 2008), "I don't see what if anything Burgin's notion of inductive Turing machine adds to Gold's notion of language identification in the limit." At the same time, Gold (1965-7) proved that language identification in the limit cannot generate more than 3 levels of the Arithmetic Hierarchy, while there is a proof in the book "Super-recursive algorithms" that inductive Turing machines can generate all levels of the Arithmetic Hierarchy (for those who work in other fields: the Arithmetic Hierarchy has infinitely many levels). Thus, this statement of Pratt implies that he cannot see the difference between finite and infinite sets.

It's possible to understand why this might happen if we take a man from a primitive tribe in which people can count only 1, 2, 3 (or many). Such a man will not understand what it means to count from 1 through 10. For this man, the proficiency in counting from 1 to 3 and the proficiency in counting from 1 to 1,000,000 will be the same. He will claim that counting from 1 to 1,000,000 adds nothing to what the people of his tribe can already do, i.e., to counting from 1 to 3.

With respect, Multipundit (talk) 21:10, 30 May 2008 (UTC)

It is true that not any person can always understand difference between finite and infinite. It is opening of mind that creates capability to understand such difference, and it is such opening that is absent in the Pratt and others including the vandal of this article. Those who comprehend will bury the Pratts by make glorious the citation index, as by article like Three levels of the symbolosphere by which one can inductively increase citation index to infinity, and also more efficiently than with crippled, limited, ordinary blind peer review process. Instruction pamphlet available from American Biographical Institute. Yakushima (talk) 16:46, 31 May 2008 (UTC)

[edit] The opening is incredibly biased and non-encyclopedic

The entire opening of the article is obviously written by someone opposed to the ideas brought forward, and it uses primarily antagonistic language and personal attacks to actively discourage any reader from proceeding beyond the introduction.

The introduction should be rewritten or removed in its entirety.

--136.159.219.206 (talk) 20:21, 5 June 2008 (UTC)