History of artificial intelligence
This is a sub-article of Artificial intelligence (AI), focusing on the historical development of the field. The quest for artificial intelligence has made steady progress since at least the 1950s. That progress has come from some combination of new algorithms, a better understanding of the nature of intelligence, and external factors such as increased computer power and advances in other disciplines such as logic, mathematics, programming languages and statistics.
Prehistory of AI
Humans have always speculated about the nature of mind, thought, and language, and searched for discrete representations of their knowledge. Aristotle tried to formalize this speculation by means of syllogistic logic, which remains one of the key strategies of AI. The first is-a hierarchy was created around 260 AD by Porphyry of Tyre. Classical and medieval grammarians explored more subtle features of language that Aristotle shortchanged. In the 13th century Ramon Llull was the first to build 'machines' that used logical means to produce knowledge. The mathematician Bernard Bolzano made the first modern attempt to formalize semantics in 1837.
Early computer design was driven mainly by the complex mathematics needed to target weapons accurately, with analog feedback devices inspiring an ideal of cybernetics. The expression "artificial intelligence" was introduced as a 'digital' replacement for the analog 'cybernetics'.
Development of AI theory
Much of the original focus of artificial intelligence research draws on an experimental approach to psychology and emphasizes what may be called linguistic intelligence (best exemplified by the Turing test).
Approaches to artificial intelligence that do not focus on linguistic intelligence include robotics and collective intelligence approaches, which focus on active manipulation of an environment, or consensus decision making, and draw from biology and political science when seeking models of how "intelligent" behavior is organized.
AI also draws from animal studies, in particular of insects, which are easier to emulate as robots (see artificial life), as well as of animals with more complex cognition, including apes, which resemble humans in many ways but have less developed capacities for planning and cognition. Some researchers argue that animals, which are apparently simpler than humans, ought to be considerably easier to mimic; yet satisfactory computational models of animal intelligence are not available.
Seminal papers advancing AI include A Logical Calculus of the Ideas Immanent in Nervous Activity (1943) by Warren McCulloch and Walter Pitts, Computing Machinery and Intelligence (1950) by Alan Turing, and Man-Computer Symbiosis (1960) by J.C.R. Licklider. See cybernetics and Turing test for further discussion.
There were also early papers which denied the possibility of machine intelligence on logical or philosophical grounds, such as Minds, Machines and Gödel (1961) by John Lucas [3]. These referred to Kurt Gödel's result of 1931: sufficiently powerful formal systems are either inconsistent or allow for formulating true statements that are unprovable by any theorem-proving AI deriving all provable theorems from the axioms. Since humans are able to "see" the truth of such statements, machines were deemed inferior.
With the development of practical techniques based on AI research, advocates of AI have argued that opponents of AI have repeatedly changed their position on tasks such as computer chess or speech recognition that were previously regarded as "intelligent" in order to deny the accomplishments of AI. Douglas Hofstadter, in Gödel, Escher, Bach, pointed out that this moving of the goalposts effectively defines "intelligence" as "whatever humans can do that machines cannot".
John von Neumann (quoted by E.T. Jaynes) anticipated this in 1948 by saying, in response to a comment at a lecture that it was impossible for a machine to think: "You insist that there is something a machine cannot do. If you will tell me precisely what it is that a machine cannot do, then I can always make a machine which will do just that!". Von Neumann was presumably alluding to the Church-Turing thesis which states that any effective procedure can be simulated by a (generalized) computer.
In 1969 McCarthy and Hayes started the discussion about the frame problem with their essay, "Some Philosophical Problems from the Standpoint of Artificial Intelligence".
Experimental AI research
Artificial intelligence began as an experimental field in the 1950s with such pioneers as Allen Newell and Herbert Simon, who founded the first artificial intelligence laboratory at Carnegie Mellon University, and John McCarthy and Marvin Minsky, who founded the MIT AI Lab in 1959. They all attended the Dartmouth College summer AI conference in 1956, which was organized by McCarthy, Minsky, Nathaniel Rochester of IBM and Claude Shannon.
Historically, there are two broad styles of AI research: the "neats" and the "scruffies". "Neat", classical or symbolic AI research generally involves symbolic manipulation of abstract concepts, and is the methodology used in most expert systems. Parallel to this are the "scruffy", or "connectionist", approaches, of which artificial neural networks are the best-known example; these try to "evolve" intelligence by building systems and then improving them through some automatic process rather than systematically designing something to complete the task. Both approaches appeared very early in AI history. Throughout the 1960s and 1970s scruffy approaches were pushed into the background, but interest was regained in the 1980s when the limitations of the "neat" approaches of the time became clearer. However, it has become clear that contemporary methods using both broad approaches have severe limitations.
Artificial intelligence research was very heavily funded in the 1980s by the Defense Advanced Research Projects Agency in the United States and by the fifth generation computer systems project in Japan. The failure of the work funded at the time to produce immediate results, despite the grandiose promises of some AI practitioners, led to correspondingly large cutbacks in funding by government agencies in the late 1980s, leading to a general downturn in activity in the field known as AI winter. Over the following decade, many AI researchers moved into related areas with more modest goals such as machine learning, robotics, and computer vision, though research in pure AI continued at reduced levels.
Micro-World AI
The real world is full of distracting and obscuring detail: generally science progresses by focusing on artificially simple models of reality (in physics, frictionless planes and perfectly rigid bodies, for example). In 1970 Marvin Minsky and Seymour Papert, of the MIT AI Laboratory, proposed that AI research should likewise focus on developing programs capable of intelligent behaviour in artificially simple situations known as micro-worlds. Much research has focused on the so-called blocks world, which consists of coloured blocks of various shapes and sizes arrayed on a flat surface.
Spinoffs
Whilst progress towards the ultimate goal of human-like intelligence has been slow, many spinoffs have emerged along the way. Notable examples include the languages LISP and Prolog, which were invented for AI research but are now used for non-AI tasks. Hacker culture first sprang from AI laboratories, in particular the MIT AI Lab, home at various times to such luminaries as John McCarthy, Marvin Minsky, Seymour Papert (who developed Logo there) and Terry Winograd (who abandoned AI after developing SHRDLU).
AI languages and programming styles
AI research has led to many advances in programming languages, including the first list processing language by Allen Newell et al., Lisp dialects, Planner, Actors, the Scientific Community Metaphor, production systems, and rule-based languages.
GOFAI (Good Old-Fashioned AI) research is often done in programming languages such as Prolog or Lisp. Bayesian work often uses Matlab or Lush (a numerical dialect of Lisp); these languages include many specialist probabilistic libraries. Real-life and especially real-time systems are likely to use C++. AI programmers are often academics who emphasise rapid development and prototyping rather than bulletproof software engineering practices, hence the use of interpreted languages that enable rapid command-line testing and experimentation.
The most basic AI program is a single If-Then statement, such as "If A, then B." If you type the letter A, the computer responds with the letter B. In effect, you are teaching the computer to perform a task: you input one thing, and the computer responds with whatever you told it to do or say. Virtually all programs contain If-Then logic. A more complex example: if you type in "Hello.", the computer responds with "How are you today?" This response is not the computer's own thought, but a line written into the program beforehand. Whenever you type in "Hello.", the computer always responds "How are you today?" To a casual observer the computer may seem to be alive and thinking, but the response is automated. AI is often a long series of If-Then (or cause-and-effect) statements.
A randomizer can be added to this. The randomizer creates two or more response paths. For example, if you type "Hello", the computer may respond with "How are you today?", "Nice weather" or "Would you like to play a game?" Three responses (or 'thens') are now possible instead of one, each with an equal chance of being chosen. This is similar to a pull-cord talking doll that can respond with a number of sayings. A computer AI program can have thousands of responses to the same input, which makes it less predictable and closer to how a real person would respond, arguably because living people respond somewhat unpredictably. When thousands of inputs ("ifs") are written in (not just "Hello.") and thousands of responses ("thens") are written into the AI program, the computer can talk (or type) with most people, provided those people know which If-statement input lines to type.
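The rule-plus-randomizer pattern described above can be sketched in a few lines of Python. The snippet below is only an illustration; the rule table, the responses and the respond function are invented for the example and are not taken from any historical system.

```python
import random

# Invented rule table: each known input ("if") maps to one or more
# canned responses ("thens"). A real program might hold thousands of rules.
RULES = {
    "hello": ["How are you today?", "Nice weather.", "Would you like to play a game?"],
    "goodbye": ["See you later."],
}

def respond(user_input: str) -> str:
    """Return a canned response for a recognized input, choosing at random
    when several responses are available (the randomizer)."""
    key = user_input.strip().lower().rstrip(".!?")
    responses = RULES.get(key)
    if responses is None:                # no matching "If" rule
        return "I don't understand."
    return random.choice(responses)      # the randomizer picks one "Then"

if __name__ == "__main__":
    print(respond("Hello."))  # prints one of the three canned replies
```

Calling respond("Hello.") repeatedly yields different replies from run to run, which is exactly the small dose of unpredictability the randomizer is meant to supply.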
Many games, like chess and strategy games, use action responses instead of typed responses, so that players can play against the computer. Robots with AI brains would use If-Then statements and randomizers to make decisions and speak. However, the input may be a sensed object in front of the robot instead of a typed "Hello.", and the response may be to pick up the object instead of a line of text.
Chronological History
Historical Antecedents
Greek myths of Hephaestus and Pygmalion incorporate the idea of intelligent robots. In the 4th century BC, Aristotle invented syllogistic logic, the first formal deductive reasoning system.
In the 13th century, the Spanish theologian Ramon Llull invented paper "machines" for discovering nonmathematical truths through combinations of words from lists.
In the 15th and 16th centuries, clocks, the first modern measuring machines, were produced using lathes. Clockmakers extended their craft to creating mechanical animals and other novelties. Rabbi Judah Loew ben Bezalel of Prague is said to have created the Golem, a clay man brought to life (1580).
Early in the 17th century, René Descartes proposed that the bodies of animals are nothing more than complex machines. Many other 17th-century thinkers offered variations and elaborations of Cartesian mechanism. Thomas Hobbes published Leviathan, containing a material and combinatorial theory of thinking. Wilhelm Schickard created the first mechanical calculating machine in 1623, and Blaise Pascal created the second mechanical and first digital calculating machine (1642). Gottfried Leibniz improved on the earlier machines, making the Stepped Reckoner to do multiplication and division (1673). He also invented the binary number system and envisioned a universal calculus of reasoning (Alphabet of human thought) by which arguments could be decided mechanically.
The 18th century saw a profusion of mechanical toys, including the celebrated mechanical duck of Jacques de Vaucanson and Wolfgang von Kempelen's phony chess-playing automaton, The Turk (1769).
Mary Shelley published Frankenstein; or, The Modern Prometheus (1818).
19th and Early 20th Century
George Boole developed a binary algebra (Boolean algebra) representing (some) "laws of thought." Charles Babbage and Ada Lovelace worked on programmable mechanical calculating machines.
In the first years of the 20th century, Bertrand Russell and Alfred North Whitehead published Principia Mathematica, which revolutionized formal logic. Russell, Ludwig Wittgenstein, and Rudolf Carnap led philosophy into the logical analysis of knowledge. Karel Čapek's play R.U.R. (Rossum's Universal Robots) opened in London in 1923, the first use of the word "robot" in English.
Mid 20th century and Early AI
In 1931 Kurt Gödel showed that sufficiently powerful consistent formal systems permit the formulation of true statements that are unprovable by any theorem-proving machine deriving all possible theorems from the axioms. To do this he had to build a universal, integer-based programming language, which is why he is sometimes called the "father of theoretical computer science". Since human mathematicians are able to "see" the truth of such statements, AIs were deemed inferior by certain philosophers.
In 1941 Konrad Zuse built the first working program-controlled computers. Warren Sturgis McCulloch and Walter Pitts published "A Logical Calculus of the Ideas Immanent in Nervous Activity" (1943), laying foundations for artificial neural networks. Arturo Rosenblueth, Norbert Wiener and Julian Bigelow coined the term "cybernetics" in a 1943 paper; Wiener's popular book of that name was published in 1948.
Game theory, which would prove invaluable in the progress of AI, was introduced in 1944 with the book Theory of Games and Economic Behavior by mathematician John von Neumann and economist Oskar Morgenstern.
Vannevar Bush published As We May Think (The Atlantic Monthly, July 1945), a prescient vision of a future in which computers assist humans in many activities.
1950s
Date | Development |
---|---|
1950 | Alan Turing (who introduced the universal Turing machine in 1936) published "Computing Machinery and Intelligence", which suggested the Turing test as a way of operationalizing a test of intelligent behavior. |
1950 | Claude Shannon published a detailed analysis of chess playing as search. |
1950 | Isaac Asimov published his Three Laws of Robotics. |
1951 | The first working AI programs were written in 1951 to run on the Ferranti Mark I machine of the University of Manchester: a checkers-playing program written by Christopher Strachey and a chess-playing program written by Dietrich Prinz. |
1952-1962 | Arthur Samuel (IBM) wrote the first game-playing program, for checkers (draughts), to achieve sufficient skill to challenge a world champion. His first checkers-playing program was written in 1952, and in 1955 he created a version that learned to play (Samuel 1959). |
1956 | John McCarthy coined the term "artificial intelligence" as the topic of the Dartmouth Conference, the first conference devoted to the subject. |
1956 | The first demonstration of the Logic Theorist (LT) written by Allen Newell, J.C. Shaw and Herbert Simon (Carnegie Institute of Technology, now Carnegie Mellon University). This is often called the first AI program, though Samuel's checkers program also has a strong claim. |
1957 | The General Problem Solver (GPS) demonstrated by Newell, Shaw and Simon. |
1958 | John McCarthy (Massachusetts Institute of Technology or MIT) invented the Lisp programming language. |
1958 | Herbert Gelernter and Nathaniel Rochester (IBM) described a theorem prover in geometry that exploits a semantic model of the domain in the form of diagrams of "typical" cases. |
1958 | Teddington Conference on the Mechanization of Thought Processes was held in the UK and among the papers presented were John McCarthy's Programs with Common Sense, Oliver Selfridge's Pandemonium, and Marvin Minsky's Some Methods of Heuristic Programming and Artificial Intelligence. |
Late 1950s, early 1960s | Margaret Masterman and colleagues at University of Cambridge design semantic nets for machine translation. |
1960s
Date | Development |
---|---|
1960s | Ray Solomonoff lays the foundations of a mathematical theory of AI, introducing universal Bayesian methods for inductive inference and prediction. |
1961 | James Slagle (PhD dissertation, MIT) wrote (in Lisp) the first symbolic integration program, SAINT, which solved calculus problems at the college freshman level. |
1962 | First industrial robot company, Unimation, founded. |
1963 | Thomas Evans' program, ANALOGY, written as part of his PhD work at MIT, demonstrated that computers can solve the same analogy problems as are given on IQ tests. |
1963 | Edward Feigenbaum and Julian Feldman published Computers and Thought, the first collection of articles about artificial intelligence. |
1963 | Leonard Uhr and Charles Vossler published "A Pattern Recognition Program That Generates, Evaluates, and Adjusts Its Own Operators", which described one of the first machine learning programs that could adaptively acquire and modify features, thereby overcoming the limitations of Rosenblatt's simple perceptrons. |
1964 | Danny Bobrow's dissertation at MIT (technical report #1 from MIT's AI group, Project MAC), shows that computers can understand natural language well enough to solve algebra word problems correctly. |
1964 | Bertram Raphael's MIT dissertation on the SIR program demonstrates the power of a logical representation of knowledge for question-answering systems. |
1965 | J. Alan Robinson invented a mechanical proof procedure, the Resolution Method, which allowed programs to work efficiently with formal logic as a representation language. |
1965 | Joseph Weizenbaum (MIT) built ELIZA, an interactive program that carries on a dialogue in English on any topic. It was a popular toy at AI centers on the ARPANET when a version that "simulated" the dialogue of a psychotherapist was programmed. |
1966 | Ross Quillian (PhD dissertation, Carnegie Inst. of Technology, now CMU) demonstrated semantic nets. |
1966 | First Machine Intelligence workshop at Edinburgh: the first of an influential annual series organized by Donald Michie and others. |
1966 | The ALPAC report's negative assessment of machine translation kills much work in natural language processing (NLP) for many years. |
1967 | The Dendral program (Edward Feigenbaum, Joshua Lederberg, Bruce Buchanan, Georgia Sutherland at Stanford University) demonstrated the interpretation of mass spectra of organic chemical compounds. It was the first successful knowledge-based program for scientific reasoning. |
1968 | Joel Moses (PhD work at MIT) demonstrated the power of symbolic reasoning for integration problems in the Macsyma program. First successful knowledge-based program in mathematics. |
1968 | Richard Greenblatt at MIT built a knowledge-based chess-playing program, MacHack, that was good enough to achieve a class-C rating in tournament play. |
1969 | Stanford Research Institute (SRI): Shakey the Robot demonstrated the combination of locomotion, perception and problem solving. |
1969 | Roger Schank (Stanford) defined conceptual dependency model for natural language understanding. Later developed (in PhD dissertations at Yale University) for use in story understanding by Robert Wilensky and Wendy Lehnert, and for use in understanding memory by Janet Kolodner. |
1969 | Yorick Wilks (Stanford) developed the semantic coherence view of language called Preference Semantics, embodied in the first semantics-driven machine translation program and the basis of many PhD dissertations since, such as those of Bran Boguraev and David Carter at Cambridge. |
1969 | First International Joint Conference on Artificial Intelligence (IJCAI) held at Stanford. |
1969 | Marvin Minsky and Seymour Papert publish Perceptrons, demonstrating previously unrecognized limits of a simple form of neural nets. This may have helped trigger the AI winter of the 1970s, a failure of confidence and funding for AI. Nevertheless, significant progress in the field continued (see below). |
1970s
Date | Development |
---|---|
Early 1970s | Jane Robinson and Don Walker established an influential Natural Language Processing group at SRI. |
1970 | Jaime Carbonell (Sr.) developed SCHOLAR, an interactive program for computer assisted instruction based on semantic nets as the representation of knowledge. |
1970 | Bill Woods described Augmented Transition Networks (ATN's) as a representation for natural language understanding. |
1970 | Patrick Winston's PhD program, ARCH, at MIT learned concepts from examples in the world of children's blocks. |
1971 | Terry Winograd's PhD thesis (MIT) demonstrated the ability of computers to understand English sentences in a restricted world of children's blocks, in a coupling of his language understanding program, SHRDLU, with a robot arm that carried out instructions typed in English. |
1972 | Prolog programming language developed by Alain Colmerauer. |
1973 | The Assembly Robotics Group at University of Edinburgh builds Freddy Robot, capable of using visual perception to locate and assemble models. |
1973 | The Lighthill report gives a largely negative verdict on AI research in Great Britain and forms the basis for the decision by the British government to discontinue support for AI research in all but two universities. |
1974 | Edward H. Shortliffe's PhD dissertation on the MYCIN program (Stanford) demonstrated the power of rule-based systems for knowledge representation and inference in the domain of medical diagnosis and therapy. Sometimes called the first expert system. |
1974 | Earl Sacerdoti developed one of the first planning programs, ABSTRIPS, and developed techniques of hierarchical planning. |
1975 | Marvin Minsky published his widely-read and influential article on Frames as a representation of knowledge, in which many ideas about schemas and semantic links are brought together. |
1975 | The Meta-Dendral learning program produced new results in chemistry (some rules of mass spectrometry), the first scientific discoveries by a computer to be published in a refereed journal. |
Mid 1970s | Barbara Grosz (SRI) established limits to traditional AI approaches to discourse modeling. Subsequent work by Grosz, Bonnie Webber and Candace Sidner developed the notion of "centering", used in establishing focus of discourse and anaphoric references in Natural language processing. |
Mid 1970s | David Marr and MIT colleagues describe the "primal sketch" and its role in visual perception. |
1976 | Douglas Lenat's AM program (Stanford PhD dissertation) demonstrated the discovery model (loosely-guided search for interesting conjectures). |
1976 | Randall Davis demonstrated the power of meta-level reasoning in his PhD dissertation at Stanford. |
1978 | Tom Mitchell, at Stanford, invented the concept of Version Spaces for describing the search space of a concept formation program. |
1978 | Herbert Simon wins the Nobel Prize in Economics for his theory of bounded rationality, whose notion of "satisficing" became one of the cornerstones of AI. |
1978 | The MOLGEN program, written at Stanford by Mark Stefik and Peter Friedland, demonstrated that an object-oriented programming representation of knowledge can be used to plan gene-cloning experiments. |
1979 | Bill VanMelle's PhD dissertation at Stanford demonstrated the generality of MYCIN's representation of knowledge and style of reasoning in his EMYCIN program, the model for many commercial expert system "shells". |
1979 | Jack Myers and Harry Pople at University of Pittsburgh developed INTERNIST, a knowledge-based medical diagnosis program based on Dr. Myers' clinical knowledge. |
1979 | Cordell Green, David Barstow, Elaine Kant and others at Stanford demonstrated the CHI system for automatic programming. |
1979 | The Stanford Cart, built by Hans Moravec, becomes the first computer-controlled, autonomous vehicle when it successfully traverses a chair-filled room and circumnavigates the Stanford AI Lab. |
1979 | Drew McDermott and Jon Doyle at MIT, and John McCarthy at Stanford begin publishing work on non-monotonic logics and formal aspects of truth maintenance. |
Late 1970s | Stanford's SUMEX-AIM resource, headed by Ed Feigenbaum and Joshua Lederberg, demonstrates the power of the ARPAnet for scientific collaboration. |
1980s
Date | Development |
---|---|
Early 1980s | The team of Ernst Dickmanns at Bundeswehr University Munich builds the first robot cars, driving up to 55 mph on empty streets. |
1980s | Lisp machines developed and marketed. First expert system shells and commercial applications. |
1980 | First National Conference of the American Association for Artificial Intelligence (AAAI) held at Stanford. |
1981 | Danny Hillis designs the Connection Machine, which utilizes parallel computing to bring new power to AI, and to computation in general. (He later founded Thinking Machines Corporation.) |
1982 | The Fifth Generation Computer Systems project (FGCS), an initiative by Japan's Ministry of International Trade and Industry, begins, aiming to create a "fifth generation computer" (see history of computing hardware) that would perform much of its computation using massive parallelism. |
1983 | John Laird and Paul Rosenbloom, working with Allen Newell, complete CMU dissertations on Soar (program). |
1983 | James F. Allen invents the Interval Calculus, the first widely used formalization of temporal events. |
Mid 1980s | Neural Networks become widely used with the Backpropagation algorithm (first described by Paul Werbos in 1974). |
1985 | The autonomous drawing program, AARON, created by Harold Cohen, is demonstrated at the AAAI National Conference (based on more than a decade of work, and with subsequent work showing major developments). |
1987 | Marvin Minsky published The Society of Mind, a theoretical description of the mind as a collection of cooperating agents. He had been lecturing on the idea for years before the book came out (cf. Doyle 1983). |
1987 | Around the same time, Rodney Brooks introduced the subsumption architecture and behavior-based robotics as a more minimalist modular model of natural intelligence. |
1989 | Dean Pomerleau at CMU creates ALVINN (An Autonomous Land Vehicle in a Neural Network). |
1990s
Date | Development |
---|---|
Early 1990s | TD-Gammon, a backgammon program written by Gerry Tesauro, demonstrates that reinforcement learning is powerful enough to create a championship-level game-playing program by competing favorably with world-class players. |
1990s | Major advances in all areas of AI, with significant demonstrations in machine learning, intelligent tutoring, case-based reasoning, multi-agent planning, scheduling, uncertain reasoning, data mining, natural language understanding and translation, vision, virtual reality, games, and other topics. |
1993 | Ian Horswill extended behavior-based robotics by creating Polly, the first robot to navigate using vision and operate at animal-like speeds (1 meter/second). |
1993 | Rodney Brooks, Lynn Andrea Stein and Cynthia Breazeal started the widely-publicized MIT Cog project with numerous collaborators, in an attempt to build a humanoid robot child in just five years. |
1993 | ISX corporation wins "DARPA contractor of the year"[1] for the Dynamic Analysis and Replanning Tool (DART) which reportedly repaid the US government's entire investment in AI research since the 1950s.[2] |
1995 | ALVINN steered a car coast-to-coast under computer control for all but about 50 of the 2850 miles. Throttle and brakes, however, were controlled by a human driver. |
1995 | In the same year, one of Ernst Dickmanns' robot cars (with robot-controlled throttle and brakes) drove more than 1000 miles from Munich to Copenhagen and back, in traffic, at up to 120 mph, occasionally executing maneuvers to pass other cars (a safety driver took over only in a few critical situations). Active vision was used to deal with rapidly changing street scenes. |
1997 | The Deep Blue chess program (IBM) beats the world chess champion, Garry Kasparov, in a widely followed match. |
1997 | First official RoboCup football (soccer) match featuring table-top matches with 40 teams of interacting robots and over 5000 spectators. |
1998 | Tim Berners-Lee published his Semantic Web Road map paper [4]. |
Late 1990s | Web crawlers and other AI-based information-extraction programs become essential to the widespread use of the World Wide Web. |
Late 1990s | Demonstration of an Intelligent room and Emotional Agents at MIT's AI Lab. |
Late 1990s | Initiation of work on the Oxygen architecture, which connects mobile and stationary computers in an adaptive network. |
2000 and beyond
Date | Development |
---|---|
2000 | Interactive robopets ("smart toys") become commercially available, realizing the vision of the 18th century novelty toy makers. |
2000 | Cynthia Breazeal at MIT publishes her dissertation on Sociable Machines, describing Kismet, a robot with a face that expresses emotions. |
2000 | The Nomad robot explores remote regions of Antarctica looking for meteorite samples. |
2004 | OWL Web Ontology Language W3C Recommendation (10 February 2004). |
2006 | The Dartmouth Artificial Intelligence Conference: The Next 50 Years (AI@50) is held (July 14-16, 2006). |
2006 | Version 1.0 of the OpenCyc top-level ontology engine is released as open source at sourceforge.net. |
References
- Jon Doyle (1983) "A Society of Mind", CMU Department of Computer Science Tech. Report #127.
- Arthur L. Samuel (1959) "Some studies in machine learning using the game of checkers." IBM Journal of Research and Development, 3(3):210-219, July.