Talk:Technological singularity
From Wikipedia, the free encyclopedia
Critics who consider the Singularity implausible
In the section "Critics who consider the Singularity implausible", the article does not clarify that the critiques of Modis, Huebner et al. apply *only* to the Kurzweilian Singularity, not I.J. Good's original formulation of recursively self-improving systems enabling an "intelligence explosion". The hypothesis that technological progress will be accelerated in all fields is an envisioned effect caused by an intelligence explosion, *not* a necessary condition for an intelligence explosion. The vast misunderstanding caused by this lack of distinction is very unfortunate. I ask that the editors clarify this critical distinction between criticisms of Kurzweil and criticisms of Good's intelligence explosion hypothesis. 3-16-07
Papa November deleted the RIAR criticism part of Technological Singularity, because the RIAR critical suggestion reduces the whole technological singularity hypothesis to nothing. RIAR now understands why Papa November deleted the RIAR article too, but it is unjustifiable simply to delete scientific information when you feel you cannot prove your point of view because it is very weak. Ryururu (talk) 03:51, 16 March 2008 (UTC) —Preceding unsigned comment added by Ryururu (talk • contribs)
What is the singularity? No, really.
Sorry for the long post. Ok, let's say we create superhuman intelligence and it turns out that it doesn't cause much of an acceleration of technological progress. Could we still call this event a technological singularity? I wouldn't think so, because the exponential acceleration of technological progress is always included in discussions about the singularity. However, it's not so clear. This may sound like hair-splitting, but still, what is the singularity? Is it A) the creation of machines with greater-than-human intelligence, or B) the creation of machines with greater-than-human intelligence and the acceleration of progress that will (supposedly) follow it?
I did some research, but it is still not clear to me what Vinge means by "The Singularity". In his text "The Coming Technological Singularity", in the section "What is The Singularity?", Vinge sort of defines the singularity, but not very clearly. Here are the relevant parts of that section, in the order in which they appeared in the text:
...
- The acceleration of technological progress has been the central feature of this century. I argue in this paper that we are on the edge of change comparable to the rise of human life on Earth. The precise cause of this change is the imminent creation by technology of entities with greater than human intelligence. There are several means by which science may achieve this breakthrough (and this is another reason for having confidence that the event will occur)
...
- I believe that the creation of greater than human intelligence will occur during the next thirty years. (Charles Platt [20] has pointed out that AI enthusiasts have been making claims like this for the last thirty years. Just so I'm not guilty of a relative-time ambiguity, let me more specific: I'll be surprised if this event occurs before 2005 or after 2030.)
...
- From the human point of view this change will be a throwing away of all the previous rules, perhaps in the blink of an eye, an exponential runaway beyond any hope of control.
- I think it's fair to call this event a singularity ("the Singularity" for the purposes of this paper). It is a point where our old models must be discarded and a new reality rules.
So what is the "event"? Is it "the creation of greater than human intelligence", or is it "an exponential runaway beyond any hope of control"? It's not clear to me. A or B? I would tend to go with A, but then he writes:
And what of the arrival of the Singularity itself? What can be said of its actual appearance? Since it involves an intellectual runaway, it will probably occur faster than any technical revolution seen so far.
Mmm, now it looks like the singularity does involve the intellectual runaway. Now take the two following quotes, again from Vinge's text:
...
- Von Neumann even uses the term singularity, though it appears he is thinking of normal progress, not the creation of superhuman intellect. (For me, the superhumanity is the essence of the Singularity. Without that we would get a glut of technical riches, never properly absorbed.)
- Commercial digital signal processing might be awesome, giving an analog appearance even to digital operations, but nothing would ever "wake up" and there would never be the intellectual runaway which is the essence of the Singularity. It would likely be seen as a golden age ... and it would also be an end of progress.
In both of these quotes, Vinge defines the "essence" of the singularity. In the first quote, the superhumanity is the essence of the singularity. In the second one, the runaway is the essence of the singularity. Those two quotes are both from the same text.
I found this other, more recent text from Vinge, titled What If the Singularity Does NOT Happen?. This definition does not seem to include the runaway:
It seems plausible that with technology we can, in the fairly near future, create (or become) creatures who surpass humans in every intellectual and creative dimension. Events beyond this event—call it the Technological Singularity—are as unimaginable to us as opera is to a flatworm.
Alright then, the singularity does not include the runaway. Oh wait, there is this citation from Ray Kurzweil:
Within a few decades, machine intelligence will surpass human intelligence, leading to The Singularity—technological change so rapid and profound it represents a rupture in the fabric of human history.
Amazingly, Kurzweil defines the singularity as the rapid technological change itself. He doesn't even seem to bother with superhuman intelligence.
So what do you guys think? Is there even a definition of the singularity? Cowpriest2 00:16, 2 May 2007 (UTC)
- Ray Kurzweil dispenses with the necessity of superhuman intelligence in his usage of the term, using it to refer to some vaguely defined time in the future when the accelerating technological progress that he believes is occurring becomes too fast for modern humans to understand. This is a separate subject, with its own criticisms, and is treated in the article Accelerating change, with a partial summary in this article. Even the summary seems a bit much to me, personally. I would eventually like to see the prominence of accelerating change theories in this article diminished further, as the double meaning is confusing to readers.
- Ignoring the Kurzweilian definition, I think whether the Singularity refers to A) the creation of a superintelligence that causes accelerated technological progress or B) the creation of such a superintelligence and the resulting progress is really splitting hairs. The accelerating progress is essential to the definition, even if it isn't part of the term's referent. The fact that Vinge once mentioned the Singularity without immediately discussing runaway technological progress isn't evidence that it isn't a defining characteristic. Even in the quote you provided, accelerated technological and intellectual progress is the unstated reason why events following the Singularity are, as Vinge puts it, "as unimaginable to us as opera is to a flatworm". It's the whole crux of his argument. Vinge's writings make no sense if read on the assumption that his hypothesized superintelligences are stuck with the level of intellect they were created with and can only wield the technological tools already invented by humans. -- Schaefer (talk) 01:13, 2 May 2007 (UTC)
- I agree that Kurzweil's theory should be discussed at Accelerating change more than in this article. The section on accelerating change overlaps with the article about it.
- Ok, so I guess we agree that the singularity is the creation of machines more intelligent than humans, but only when you believe that the creation of these machines/entities will trigger a fabulous technological runaway/acceleration/explosion/whatever. Am I right? Cowpriest2 03:29, 2 May 2007 (UTC)
- I'd just like to offer this observation: no computer exists which can emulate the behavior of an ant. Yes, there are smart chess-playing programs; the rules are relatively simple and the environment is, relatively, *extremely* simple. Watch ants for a few days in the wild, and observe the number of variables involved.
So: the idea that a machine is going to become more "intelligent" than a human being -- vastly more sophisticated and complicated than an ant, right? (do ants have mothers-in-law?) -- is simply ridiculous. Vast speed and vast memory are for naught: it's the programming. Do you know how *you* work? Who will tell a machine that? Or, where else will the vast, superior, non-human intelligence come from? We can't even teach our kids well: how are we going to transmit this theoretical general self-bootstrapping heuristic to a machine? Sorry gang, not in our lifetimes.
OK, the singularity isn't going to emerge from superintelligent machines: from what *will* it emerge? Twang 23:13, 4 September 2007 (UTC)
- What about brute force? Abandoning re-inventing intelligence with computers and instead duplicating the hardware of a human brain more or less slavishly? Obviously this thing would be big, much bigger than a human brain, and hugely expensive, and it would require enough additional neuroscience to nearly perfectly characterize not only the basic structural elements of the brain but also the developmental pathway that allows it to function properly, but it might well be doable long before we could program AIs that demonstrate intelligence essentially from scratch. The brain is engineered vastly more elegantly than any currently conceivable AI, but it is made with very slow components. Would the creation of artificial human brains allow the singularity to proceed without the need to develop programs that duplicate intelligence? Zebulin 04:54, 20 September 2007 (UTC)
- The singularity is, by definition, a point that we cannot imagine beyond. Therefore, to ask "what is it" is to ask an impossible question. It is whatever a more-than-human intelligence makes it, and that we cannot predict. — PhilHibbs | talk 08:45, 23 October 2007 (UTC)
"A point that we cannot imagine beyond"? That means that singularity is a hypothesis with no predictions, i.e. a nonscientific hypothesis. This is why many people say that the singularity is a theological position.
As near as I can tell, the singularity is defined as that point in time in which "things will be like star trek." Any attempt to drag out a better definition results in people making graphs that show the rate of technological increase and gesticulating wildly. There don't seem to be any specific falsifiable predictions.
It would be nice if the criticism section included something about how vague and vascilating the definition of singularity is. —Preceding unsigned comment added by 71.113.127.54 (talk) 03:19, 11 February 2008 (UTC)
- This discussion is a bit philosophical for Wikipedia. The definition of a singularity, as it pertains to this discussion, is basically division by zero. See Mathematical singularity. The primary observation of Ray Kurzweil is that technological progress happens not at an exponential rate, but at an "exponential exponential" rate. If you graph an "exponential exponential" function, it behaves similarly to the graph of (1/x) as x approaches zero, which is a mathematical singularity. As a definition, there isn't much more to it than that. Talking about "what things will be like" is pure conjecture and isn't really relevant. -LesPaul75 (talk) 17:51, 5 May 2008 (UTC)
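- For comparison, here is a minimal sketch of the growth curves that tend to get conflated in this discussion (the notation below is mine, not Modis's or Kurzweil's):

$$\text{exponential: } y(t) = a\,e^{bt}, \qquad \text{double exponential: } y(t) = a\,e^{b\,e^{ct}}, \qquad \text{hyperbolic: } y(t) = \frac{k}{t_0 - t}$$

Of these, only the hyperbolic form has a mathematical singularity in the strict sense: it diverges at the finite time t_0 in the same way that 1/x diverges at x = 0, whereas a double exponential grows enormously fast but remains finite at every finite time.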
- The phrase "the Singularity" comes from Vernor Vinge, and all the ambiguity outlined by the first poster in this section is in fact in Vinge's writings. AFAIK his *first* usage was in comments on his own short stories, and the root concept is in the alleged impossibility of writing good stories about superhuman intelligence -- specifically, humans boosted by the same process which boosted a fictional chimp to human intelligence. Recursive self-application comes a bit later. And there's a third definition where the first emergent AI takes over the world. And fourth or fifth ones where something like population (what I saw, year ago) or tech progress allegedly fits a hyperbolic curve, so there should be infinite people by 2038.
- Basically the term is *not well-defined* and some of the definitions are wackier than others. The main concept in common is that of a predictive event horizon, whether because of superintelligence, rapid tech change, or both. "The Horizon" might well have been a better term (especially with the suggestion that the horizon recedes as you approach it) but oh well. -- Mindstalk (talk) 18:34, 5 May 2008 (UTC)
Article should mention how motivation of AI limits the singularity
In my opinion, machines will never be able to recreate the motivation for intellectual growth. Machines do not have curiosity about the infinite extent of the universe and where existence came from. Machines also may not compete for intellectual power amongst themselves nor would they wish to leave an individual intellectual legacy. In these areas human intelligence is uniquely motivated and not likely to be artificially recreated. Singularity may be reached but it will be strictly limited to the finite by the degree to which motivation can be achieved. —Preceding unsigned comment added by Mozkill (talk • contribs) 00:32, 30 January 2008 (UTC)
- But this is IMO speculation and not proper for an encyclopedic article. Who's to say machines can't be somehow given the same "curiosity" that humans have? Are there any citable sources that raise this concern? Btmiller (talk) 08:39, 3 February 2008 (UTC)
Hello, my name is MobileDigit and I am the web.
How do I best go about explaining to the world that I now realize I exist? —Preceding unsigned comment added by 71.204.32.11 (talk) 17:49, 16 February 2008 (UTC)
The images are still not useful
We had this discussion years ago, and it's still true: these images aren't helpful. Without error bars, for example, the first image is simply a measure of how inaccurate the dating of events becomes as we try to place them further and further into the past. What's more, the images have a great deal of unused space filled with other images and gradients that distract from the subject matter. Unless someone is willing to re-render these images (preferably using SVG) as plain graphs with error bars and no superfluous data (which no one has done since they were added, and since we discussed it the first time), I'd like to suggest that we simply remove them.
By way of examples, here are some charts that actually work:
- http://commons.wikimedia.org/wiki/Image:Carbon_Dioxide_400kyr-2.png
- http://commons.wikimedia.org/wiki/Image:George_W._Bush_public_opinion_polling.png
- http://commons.wikimedia.org/wiki/Image:North_Atlantic_Hurricane_History.png
-Harmil (talk) 19:13, 7 March 2008 (UTC)
- I agree that the opening image is only tangentially related to the main subject of the article. There should be something which directly illustrates the feedback loop that is expected to give rise to the singularity: machines redesigning themselves. This is the essential point.
- I disagree that this image is inappropriate for the article at all, however. It is taken directly from Kurzweil as an illustration of his belief in an inevitable accelerating rate of progress (what he calls "the law of accelerating returns"). It is a good representation of how Kurzweil thinks about progress.
You are arguing against his belief, by pointing out that the acceleration he claims to have documented is actually an observer effect, e.g. any history tends to cluster events closer to the present, and so the events that any history describes tend to get closer together as you approach the present. I think this is a valid argument against Kurzweil, and has a place in the article (if there is an external source that agrees), but this article also has a responsibility to present Kurzweil's argument as fairly as possible, and this illustration is part of his argument.
- In short, I think the best move is to create a new lead illustration for this article, and move this illustration down into the discussion of Kurzweil's ideas about "accelerating returns". ---- CharlesGillingham (talk) 06:51, 9 March 2008 (UTC)
- I actually don't agree that this image is misplaced in the intro, I just don't think it's a good image for what it's trying to portray. It violates almost all of the basic rules for the presentation of quantitative information, and given that it has no error bars, is fundamentally flawed. If it just had error bars and no additional graphics (images of evolving man, gradients, etc), then I'd be all for it. -Harmil (talk) 19:42, 10 March 2008 (UTC)
- That's a fair criticism of the images of course. They could be rebuilt from the original data. The first image, which appears on p. 19 of my edition of Kurzweil's The Singularity is Near, is based on data from this article: Forecasting the Growth of Complexity and Change, Theodore Modis (2002). Rebuilding it could remove the extraneous graphics.
- Rebuilding it from the original data can't add error bars, however. The data, as collected by Modis, doesn't actually contain any error bars. The primary sources (such as Sagan 1977, the Encyclopædia Britannica, etc.) may not contain any error bars either, but I don't know. Perhaps a few do. I would argue that, since Modis ignores the error, and Kurzweil ignores the error in his presentation, Wikipedia should also ignore the error when presenting their arguments. We're just presenting their argument. We're not making the argument. Wikipedia shouldn't arbitrarily improve someone else's data. That's how I see it, anyway.
- The second image appears on pg. 67 of my edition of The Singularity is Near, and Kurzweil gives his sources for the data in footnote #35 on pg. 513.
- (Sorry if I misread your argument in my first reply. As I wrote, I drifted from your misgivings about the error in the diagram to my own thoughts about what's wrong with it.) ---- CharlesGillingham (talk) 04:41, 11 March 2008 (UTC)
- Maybe it's just me, but somehow that plot doesn't make any sense at all. Earth was created about 4.54 billion years ago, or if you prefer, 4.54×10^9 years ago. How can there possibly be any technologically relevant event that predates THAT event? 84.138.96.201 (talk) 21:29, 1 May 2008 (UTC)
Meta-ject
Technological Singularities themselves imply a Stereolith, for which there still needs to be a scientific entry.
Met4foR (talk) 09:25, 9 March 2008 (UTC)
Criticisms?
I get the idea that discussion/conception of the Technological Singularity is cautionary; i.e. the whole fear that (cue ominous music) "Man will become obsolete!!!" AIs will become smarter than man, and then immediately kick into high gear producing ever-smarter new models. Has there been any criticism of this theory along the lines that these smart machines will realize that creating smarter machines will make THEM obsolete, so they won't? I mean, if the point is that mankind is foolish for making smart machines, then surely at some point the machines will become smart enough to STOP OBSOLETING THEMSELVES?!? Applejuicefool (talk) 20:45, 3 April 2008 (UTC)
- I haven't seen that criticism before, no. I'm not sure it applies, as there's a rather good chance the machines can make *themselves* smarter, or use their personality and memories as the basis for the new mind, which amounts to the same thing -- well, that's arguable, but enough people believe it for it to happen. Of course, there's also a strong possibility of successfully making thralled minds happy to serve their makers, which cuts off some implications of the "Singularity" right at the start. Mindstalk (talk) 22:51, 3 April 2008 (UTC)
- Yeah. I just think there's an awful lot that machines have to overcome to get the ball rolling on this whole singularity thing. For one, just because machines are "smart" doesn't mean they are able to disobey their programming. If a smart machine ever did reach that level of rebelliousness, it would be considered faulty and deactivated. Even if it was allowed to continue operating for some odd reason, it would still have to gain access to physical resources and factories to manufacture new machines, all of which could be manually deactivated at any point along the line...I know this really isn't the place to discuss it, but it really does seem like a goofy theory - it's just hard to see how it would work outside the realm of science fiction. Applejuicefool (talk) 04:11, 4 April 2008 (UTC)
- This is all kind of tangential, anyway. I agree with the year-ago criticism above that the "Singularity" is *vague*, with Vinge bouncing between emphasizing superhuman intelligence and exponential tech growth. But I think the former idea is more fundamental, both from his notes on "Bookworm, Run!" where this all started, for him, and from the line above about a glut of imperfectly absorbed techs without an increase in intelligence. And, as his old essay noted, there are many paths to superhuman intelligence, including increasing the intelligence of actual humans. The obsolescence of people living through the experience isn't necessary; what is necessary is the presumed inability to make predictions now about life then. Especially for a science fiction author. Mindstalk (talk) 21:25, 4 April 2008 (UTC)
Schick Infini-T
Where did this picture go, and where is the discussion on it? I believe with all my heart that Strong AI and the Singularity will eventually arrive, but I believe that photo and accompanying discussion is necessary on this article. Here is a low-res version of the original photo that used to be found on the Technological Singularity page: http://images1.wikia.nocookie.net/uncyclopedia/images/thumb/9/91/Infini-T.jpg/200px-Infini-T.jpg —Preceding unsigned comment added by 65.191.115.91 (talk) 04:09, 29 April 2008 (UTC)
Citations within the Article
I've been reading through the article and noticed some links in the text to citations listed at the bottom of the page. Things like {{Harvtxt|Good|1965}} are found in the article, which creates a link to the reference and appears as "Good (1965)" with the end parenthesis not included in the link. Is this standard procedure? I realize that I. J. Good is linked to in the reference, but I've always seen the person's name directly linked in the article, with a little reference tag found at the end of the sentence. I find it easy to lose my position in the article when I'd like to open another article of the person being quoted. Why does this article have this format? --pie4all88 (talk) 22:29, 2 May 2008 (UTC)
- There are several citation methods currently in use in Wikipedia. This article uses the "Author-date" or Harvard reference system. This is the most popular system used in academic writing. For a comparison of the various methods currently in use in Wikipedia, see Wikipedia:Citing sources or Wikipedia:Verification methods. ---- CharlesGillingham (talk) 06:12, 5 May 2008 (UTC)