Talk:History of artificial intelligence
Archives: Talk before rewrite on 24 July 2007
Complete Rewrite
I have rewritten this article from scratch. Most of the information from the original article has been moved to Timeline of artificial intelligence. A tiny bit has been lost. (My apologies to the authors of the lost text.) I have also archived the talk page; since the comments referred to the old article, I thought it would be less confusing to make a clean start on the talk page as well.
I chose as my main source Daniel Crevier's AI: The Tumultuous History of the Search for Artificial Intelligence. Hans Moravec calls this "the most complete and accurate history of AI thus far". Marvin Minsky and Roger Schank also endorse it.
I have tried to emphasize the major themes that affected the field as a whole. I tried to avoid just listing projects or people out of context -- if I mention something, I try to show how it affected the entire field of AI. In a short article, it is impossible to mention every interesting researcher, institution, success and failure: I had to choose the ones that were "most significant" or that illustrated a particular point, which means a lot of important things had to be skipped. I hope that you won't feel compelled to add paragraphs about things that I chose to edit out ... perhaps they could go in the Timeline of artificial intelligence? The article is already too long.
Having said that, I encourage everyone to check my facts and tighten my prose. I am largely unaware of what happened in AI after 1993 or so and could use some help identifying the "major themes" of the last decades (but do use references, please). If anyone has access to any illustrations, that would be appreciated as well. Tell me where it could be shortened.
CharlesGillingham 08:07, 24 July 2007 (UTC)
DARPA challenge no milestone
The section "AI in the 21st century" is currently very misleading. It claims: "On 11 May 1997, Deep Blue became the first computer system to beat a reigning world champion, Gary Kasparov. On October 8, 2005, the robot car "Stanley" drove unaided over 132 miles of desert roads to win the DARPA Grand Challenge. After almost fifty years of effort, these two milestones were finally achieved. These successes were not due to some revolutionary new paradigm..." But while the first event is worth mentioning, the heavily promoted DARPA race achieved no milestone at all. The milestones in autonomous driving were set 10 years earlier by the much more impressive car robots of Ernst Dickmanns. Compare Talk:DARPA_Grand_Challenge: In 1995 Dickmanns' VaMP car autonomously drove up to 158 km, nearly the same distance as the traffic-free 2005 DARPA Grand Challenge, but much faster, and in traffic, and without a million GPS waypoints. And all of this with much slower computers. Willingandable 15:58, 21 August 2007 (UTC)
- I think this is a good point; feel free to change it if you like. The point the paragraph is supposed to make is that AI has come a long way and that it has been a slow, difficult process. It's not important exactly which recent achievements get highlighted or how they are presented. I thought the DARPA GC was a good example since Shakey couldn't always make it across the room in 8 hours, and now these DARPA GC machines are crossing deserts. It's a striking improvement.
- Maybe the title of the paragraph should change to accommodate your sense that the DARPA GC doesn't represent a "milestone". --CharlesGillingham 19:30, 21 August 2007 (UTC)
Is the DARPA challenge an example of an advance?
I improved your sentence by replacing your DARPA example with the earlier and better one:
In other areas, such as robotics, tremendous progress has been made: for example, in 1970 the robot Shakey could not reliably cross a room in 8 hours,[1] but by 1995 the VaMP robot car of Mercedes-Benz and Ernst Dickmanns was driving on the Autobahn in traffic at up to 180 km/h.
Onetofive 15:41, 24 September 2007 (UTC)
- This is okay with me, but, to be honest, I'm a little bothered that this research is now 12 years old. The history of AI is filled with stories of projects that began with great promise and high expectations but then stalled because the technology could not be improved; there have been a lot of dead ends in the history of AI. We're now mentioning the VaMP in every place where we're trying to say that AI is successful and doesn't always hit dead ends. I guess I'm a little worried about hyping Ernst Dickmanns' work when I'm not sure it led anywhere. If the VaMP turns out to have been a dead end, then what does that say? The advantage of the DARPA Grand Challenge is that it happened this year. It's one "end" that definitely is not "dead" yet. ---- CharlesGillingham 02:22, 25 September 2007 (UTC)
- I've just read the EUREKA Prometheus Project article again and noticed that the VaMP only drove an average of 9 km at a time without human intervention. When human intervention is part of an AI project, the results can be disastrously misleading. Consider the history of machine translation. The systems that were developed in the early 60s required "post-editing" (a kind of human intervention). The ALPAC report found that this "post-editing" actually required more time than just translating the text from scratch. Basically, they showed that the system was worse than useless. This discredited the whole field and is considered one of the worst disasters in the history of AI. The point is, if the VaMP had an alert human being behind the wheel, ready to intervene, many of the hard problems were being solved by the human. The machine may have been handling only the aspects of the problem that were trivial, at least relative to the aspects that still needed to be solved.
- In contrast, the DARPA Grand Challenge machines are operating without any kind of safety net. This is much more difficult, because the system will fail when it hits the first unanticipated problem. And the history of AI teaches us that the unanticipated problems are often enormously more difficult than the anticipated ones.
- So, on the whole, I'm becoming very suspicious of the VaMP's real capabilities and how these stack up against the DARPA machines. I wish we had a source that told the whole story on this.
- Let me know what you think. ---- CharlesGillingham 02:54, 25 September 2007 (UTC)
- Let me cut and paste from the DARPA Grand Challenge article:
Five cars finished the course: Stanley, Sandstorm, H1ghlander, TerraMax, and Kat-5. It is interesting to compare them to the earlier VaMP robot car of Mercedes-Benz and Ernst Dickmanns. The VaMP was built in the 1990s as a continuation of Dickmanns' earlier work at the Universität der Bundeswehr München in Munich; the project was funded in part by the $1 billion EUREKA Prometheus Project.[2] The VaMP was able to drive in traffic among moving obstacles, automatically passing slower vehicles; the DARPA cars were not (H1ghlander was standing still when Stanley passed it in the 2005 DARPA Grand Challenge).[3] The VaMP reached speeds up to 180 km/h (111 mph); the DARPA cars were limited to top speeds of 80 km/h (50 mph). In 1995, the VaMP drove up to 158 km without human intervention on a Danish highway where most drivers adhere to the 110 km/h general speed limit and passing is rarely necessary; decisions made by the VaMP were checked for validity by a human safety pilot (the 158 km represents the longest stretch of thousands of km of test runs, and the terrain was self-selected by the VaMP team). In 2005, the DARPA cars drove 212 km (132 miles) without human intervention on the Grand Challenge course selected by the race organizers. The VaMP drove on the mostly straight Autobahn;[2] the DARPA cars drove on a variety of graded dirt roads, including narrow and steep mountain passes. The VaMP drove mostly by vision with some input provided by radar,[4][5] but without GPS navigation; the DARPA cars heavily used GPS, always driving from one waypoint to the next (the DARPA course was unrehearsed by the teams but precisely given by almost 3000 waypoints, with several waypoints per curve). The DARPA cars combined other sensor data such as LIDAR, video cameras, and inertial guidance systems for better navigation in between waypoints, where road boundary identification was sometimes harder than on the Autobahn because of the unstructured terrain (Autobahn road boundaries are engineered to be easily visually observable but are often partially hidden by trucks, etc.). The VaMP's computer processors were 1000 times slower per dollar than those used in the DARPA vehicles.[2]
- Sure, the tasks are not fully comparable, and in traffic of course you do need a safety driver for legal reasons. But the VaMP did demonstrate sustained fully autonomous driving in fast traffic, and it's pretty obvious which car represents the real breakthrough in robot cars! The DARPA race was surrounded by much more hype though. Onetofive 14:13, 25 September 2007 (UTC)
- The fact that it was a breakthrough is irrelevant. It was in the past and this paragraph is talking about where AI is now.
- There have been many breakthroughs in the past, SHRDLU for example, that didn't lead anywhere. There isn't a single concept from SHRDLU that wound up being useful to later work in natural language processing. And yet SHRDLU, from 1970 or so, may be the most successful natural language processing system ever built. Its architecture, although brilliant, turned out to be a dead end. This is part of the reason the AI winters happened.
- This paragraph is supposed to make the point that AI has made "continuous advances in all areas." If the VaMP truly is a more advanced system than Stanley, then you've just made the opposite point: you've shown that AI is not making advances. You're showing that robot car work has been stalled since 1995. I think the VaMP is a bad example of AI making "continuous advances" right up to the present day.
- I'll leave it because the point is fairly subtle, 1995 isn't that long ago, and you (and Willingandable) feel strongly about it, but please consider what I'm saying here. ---- CharlesGillingham 21:11, 25 September 2007 (UTC)
Todo
- A paragraph on Judea Pearl and the rise of mathematical methods in the 90s. How this dovetails with the intelligent agent approach to building AI systems -- "divide and conquer". How focusing on isolated problems opened more interdisciplinary connections. How isolating problems made commercial application easier. How AI has gotten more rigorous. "Victory of the neats." Done
- Did "the money pour in" England? Or Japan? Or elsewhere? Need to mention Edinburgh as a center of research in the golden years. Need good sources on this. Done ---- CharlesGillingham 03:58, 5 September 2007 (UTC)
Lead of the history of AI
- (I moved your comment here from my talk page as this is a better place to discuss)
I'm not sure I agree that artificial intelligence begins in antiquity. The term wasn't coined until 1956. Certainly there were precursors in myth and fiction, but these were usually either robots or artificial humans, not quite the same thing. I think an appropriate analogy here would be "space travel". Although the idea of "space travel" existed for centuries (and was worked out to a high level of detail by the mid-fifties), space travel begins in 1957, with Sputnik, or at least by 1961, with Yuri Gagarin. Similarly, artificial intelligence research has a very definite "birthday". (Read the paragraph on the Dartmouth conference, especially the last line.) Before that date, the closest thing is cybernetics or automata theory (the subject of Claude Shannon and John McCarthy's collaboration before 1956). These are related to AI, but aren't AI: the goals of these fields aren't really the same. My point is this: before 1956, anything you can cite is either (1) merely closely related, or (2) merely speculation.
Also, the new opening line doesn't read well to me; it feels like a digression at the very top of the article. ---- CharlesGillingham (talk) 08:27, 27 November 2007 (UTC)
- Thanks for your comment. I find it hard to agree that the field of AI began in 1956. To begin with, Turing's seminal 1950 paper predated it, and there are chess and checkers programs that date to before 1956 (see Timeline of artificial intelligence). To say that these contributions are "merely speculation" or "merely closely related" is wrong. 1956 may have been the year of the first conference and when the term was coined, but it is not the beginning of AI. The lead is inaccurate when it leads the reader to believe it all started in 1956. As for my English, I'll have another look to improve readability, but feel free to improve. Pgr94 (talk) 11:14, 27 November 2007 (UTC)
- I see what you mean about Turing's paper, and I think Shannon's paper about chess was in 1950 too. Pitts and McCulloch (1943) is also very close. I agree that they are doing what would later be called artificial intelligence research, and that they are part of the history of the idea of artificial intelligence, but I want to make a very precise point here. The old first line of this article identified it as the history of a field of study, "artificial intelligence research", which is a social construct and has a definite beginning in 1956. The new first line identifies this as the history of an idea, a "notion", which lots of individuals have had, on and off, since antiquity. This is why the new opening line seems like a digression to me. Is this article the history of a "notion" or of a "field of study"? As written, I think it is about the latter, and the first line should say so.
- Any history of science has social and technical dimensions, and so this article bounces back and forth between technical history and social history, between the history of ideas and "notions" and the history of people and institutions. But I don't like bouncing back and forth mid-sentence in the opening line. The first line needs to identify precisely what the article is about. (There may be a way to bounce backwards to AI research's precursors in the second sentence or paragraph, and still tell a good story.) ---- CharlesGillingham (talk) 18:21, 29 November 2007 (UTC)
- One more thought on your first comment: with all due respect, Computing Machinery and Intelligence is speculation. Intelligent, influential speculation but speculation nonetheless. ---- CharlesGillingham (talk) 21:20, 29 November 2007 (UTC)
- I don't see this distinction being made in articles such as History of logic, History of mathematics, History of physics, History of chemistry, History of biology. I think the lead should reflect the rest of the article, which is that there is a history of artificial intelligence prior to 1956. It's odd to say otherwise. Pgr94 (talk) 19:01, 29 November 2007 (UTC)
- They're in a section called "precursors"! As far as logic, mathematics, and physics go, they are ancient. The history of chemistry article carefully makes the distinction between chemistry and alchemy, and it's clear that the editors considered the issue. We should do the same. Also, the work of alchemists and metalsmiths produced a lot of important information that is now part of chemistry. I don't think you can say that about the precursors of AI.
- I think there's a way to weave things together in a way that reads well and tells a good story. I just want each paragraph to actually have some kind of topic. For now I've just split the two. There's a way to make this work; we just need to actually say something about the precursors to AI. ---- CharlesGillingham (talk) 21:20, 29 November 2007 (UTC)
- Okay, I've written something I like now. Sorry if I was cranky about this. I guess you were right. I just couldn't see how to make it work at first. ---- CharlesGillingham 09:20, 1 December 2007 (UTC)
Two cents
In an encyclopedia full of too many dry, disjointed articles, "History of artificial intelligence" is a breath of fresh air — thorough and lucid, a pleasure to read. Keep up the good work. Omphaloscope talk 15:28, 24 March 2008 (UTC)
Image copyright problem with Image:CoverOfWhatComputersCantDo.jpg
The image Image:CoverOfWhatComputersCantDo.jpg is used in this article under a claim of fair use, but it does not have an adequate explanation for why it meets the requirements for such images when used here. In particular, for each page the image is used on, it must have an explanation linking to that page which explains why it needs to be used on that page. Please check
- That there is a non-free use rationale on the image's description page for the use in this article.
- That this article is linked to from the image description page.
This is an automated notice by FairuseBot. For assistance on the image use policy, see Wikipedia:Media copyright questions. --04:41, 20 May 2008 (UTC)