Artificial intelligence

From Wikipedia, the free encyclopedia

[Image: Honda's humanoid robot]

Artificial intelligence (AI) is a branch of computer science and engineering that deals with intelligent behavior, learning, and adaptation in machines. Research in AI is concerned with producing machines that automate tasks requiring intelligent behavior. Examples include control, planning and scheduling, the ability to answer diagnostic and consumer questions, and handwriting, speech, and facial recognition. As such, AI has become an engineering discipline, focused on providing solutions to real-life problems, with applications in software, traditional strategy games such as computer chess, and other video games.

For topics relating specifically to full human-like intelligence, see Strong AI.

Schools of thought

AI divides roughly into two schools of thought: Conventional AI and Computational Intelligence (CI), also sometimes referred to as Synthetic Intelligence to highlight the differences.

Conventional AI mostly involves methods now classified as machine learning, characterized by formalism and statistical analysis. This is also known as symbolic AI, logical AI, neat AI and Good Old Fashioned Artificial Intelligence (GOFAI). (See also semantics.) Methods include:

  • Expert systems: apply reasoning capabilities to reach a conclusion; an expert system can process large amounts of known information and provide conclusions based on them
  • Case-based reasoning
  • Bayesian networks
  • Behavior-based AI: a modular method of building AI systems by hand

Computational Intelligence involves iterative development or learning (e.g. parameter tuning, as in connectionist systems). Learning is based on empirical data and is associated with non-symbolic AI, scruffy AI and soft computing. Methods mainly include:

  • Neural networks: systems with very strong pattern-recognition capabilities
  • Fuzzy systems: techniques for reasoning under uncertainty, widely used in modern industrial and consumer-product control systems
  • Evolutionary computation: applies biologically inspired concepts such as populations, mutation and survival of the fittest to generate increasingly better solutions; these methods divide most notably into evolutionary algorithms (e.g. genetic algorithms) and swarm intelligence (e.g. ant algorithms)

Hybrid intelligent systems attempt to combine these two groups. Expert inference rules can be generated through neural networks, or production rules derived from statistical learning, as in ACT-R. It is thought that the human brain uses multiple techniques to both formulate and cross-check results. Thus, systems integration is seen as promising and perhaps necessary for true AI.
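
To make the contrast concrete, the following minimal Python sketch shows the kind of iterative parameter tuning typical of connectionist systems: a single perceptron learning the logical AND function from examples. The learning rate and epoch count are illustrative assumptions, not values from any particular system.

    # Minimal perceptron sketch: iterative parameter tuning on the AND function.
    # The learning rate and epoch count are illustrative assumptions.

    def train_perceptron(samples, learning_rate=0.1, epochs=20):
        """Train weights and bias for a single threshold unit."""
        weights = [0.0, 0.0]
        bias = 0.0
        for _ in range(epochs):
            for (x1, x2), target in samples:
                # Threshold activation: fire if the weighted sum exceeds zero.
                output = 1 if (weights[0] * x1 + weights[1] * x2 + bias) > 0 else 0
                error = target - output
                # Adjust each parameter in proportion to its input (the delta rule).
                weights[0] += learning_rate * error * x1
                weights[1] += learning_rate * error * x2
                bias += learning_rate * error
        return weights, bias

    # Empirical training data for logical AND: only (1, 1) maps to 1.
    and_samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    weights, bias = train_perceptron(and_samples)
    print(weights, bias)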

History

Early in the 17th century, René Descartes envisioned the bodies of animals as complex but reducible machines, thus formulating the mechanistic theory, also known as the "clockwork paradigm". Wilhelm Schickard created the first mechanical digital calculating machine in 1623, followed by machines of Blaise Pascal (1643) and Gottfried Wilhelm von Leibniz (1671), who also invented the binary system. In the 19th century, Charles Babbage and Ada Lovelace worked on programmable mechanical calculating machines.

Bertrand Russell and Alfred North Whitehead published Principia Mathematica in 1910-1913, which revolutionized formal logic. In 1931 Kurt Gödel showed that sufficiently powerful consistent formal systems contain true statements that cannot be proved by any theorem-proving machine systematically deriving all possible theorems from the axioms. In 1941 Konrad Zuse built the first working program-controlled computers. Warren McCulloch and Walter Pitts published A Logical Calculus of the Ideas Immanent in Nervous Activity (1943), laying the foundations for neural networks. Norbert Wiener's Cybernetics or Control and Communication in the Animal and the Machine (MIT Press, 1948) popularized the term "cybernetics".

1950s

The 1950s were a period of active efforts in AI. In 1950, Alan Turing introduced the "Turing test" as an operational test of intelligent behavior. The first working AI programs were written in 1951 to run on the Ferranti Mark I machine of the University of Manchester: a draughts-playing program written by Christopher Strachey and a chess-playing program written by Dietrich Prinz. John McCarthy coined the term "artificial intelligence" at the first conference devoted to the subject, in 1956. He also invented the Lisp programming language. Joseph Weizenbaum later built ELIZA (1966), a chatterbot implementing Rogerian psychotherapy. The birthdate of AI is generally considered to be July 1956, at the Dartmouth Conference, where many of these researchers met and exchanged ideas.
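
ELIZA's Rogerian style can be illustrated with a minimal sketch of its core mechanism: matching keyword patterns and reflecting the user's words back as a question. This is not Weizenbaum's actual program, only a toy reconstruction of the idea; the patterns and reflections below are invented.

    import re

    # A minimal ELIZA-style sketch: not Weizenbaum's program, just the core
    # idea of matching keyword patterns and reflecting the user's words back.
    REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}
    RULES = [
        (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
        (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
        (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
    ]

    def reflect(fragment):
        # Swap first-person words for second-person ones.
        return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

    def respond(statement):
        for pattern, template in RULES:
            match = pattern.search(statement)
            if match:
                return template.format(reflect(match.group(1)))
        return "Please go on."  # Default non-committal Rogerian prompt.

    print(respond("I feel sad about my job"))  # Why do you feel sad about your job?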

At the same time, John von Neumann, who had been hired by the RAND Corporation, developed game theory, which would prove invaluable in the progress of AI research.

1960s-1970s

During the 1960s and 1970s, Joel Moses demonstrated the power of symbolic reasoning for integration problems in the Macsyma program, the first successful knowledge-based program in mathematics. Leonard Uhr and Charles Vossler published "A Pattern Recognition Program That Generates, Evaluates, and Adjusts Its Own Operators" in 1963, which described one of the first machine learning programs that could adaptively acquire and modify features, thereby overcoming the limitations of Rosenblatt's simple perceptrons. Marvin Minsky and Seymour Papert published Perceptrons, which demonstrated the limits of simple neural nets. Alain Colmerauer developed the Prolog computer language. Ted Shortliffe demonstrated the power of rule-based systems for knowledge representation and inference in medical diagnosis and therapy, in what is sometimes called the first expert system. Hans Moravec developed the first computer-controlled vehicle to autonomously negotiate cluttered obstacle courses.
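
The rule-based approach that Shortliffe demonstrated can be sketched as forward chaining over if-then rules: the system repeatedly fires any rule whose conditions are all established facts. The rules and facts below are invented placeholders, not medical knowledge from the original system.

    # Minimal forward-chaining sketch of a rule-based system. The rules and
    # facts are invented placeholders, not knowledge from any real system.
    rules = [
        ({"fever", "rash"}, "measles_suspected"),
        ({"measles_suspected"}, "recommend_isolation"),
    ]

    def forward_chain(facts, rules):
        """Repeatedly fire any rule whose conditions are all known facts."""
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for conditions, conclusion in rules:
                if conditions <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
        return facts

    print(forward_chain({"fever", "rash"}, rules))
    # -> {'fever', 'rash', 'measles_suspected', 'recommend_isolation'}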

1980s

In the 1980s, neural networks became widely used due to the backpropagation algorithm, first described by Paul Werbos in 1974. The team of Ernst Dickmanns built the first robot cars, driving up to 55 mph on empty streets.
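
A minimal sketch of backpropagation follows, assuming a tiny 2-2-1 sigmoid network trained on XOR; the layer sizes, learning rate, epoch count and random seed are illustrative choices, and convergence depends on the initialization.

    import math
    import random

    # Minimal backpropagation sketch: a 2-2-1 sigmoid network learning XOR.
    # All hyperparameters here are illustrative assumptions.
    random.seed(0)

    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    # Hidden layer: two units, each with two input weights and a bias.
    w_hidden = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
    # Output unit: two hidden-layer weights and a bias.
    w_out = [random.uniform(-1, 1) for _ in range(3)]
    data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]
    lr = 0.5

    for _ in range(10000):
        for x, target in data:
            # Forward pass through the hidden layer and output unit.
            h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_hidden]
            out = sigmoid(w_out[0] * h[0] + w_out[1] * h[1] + w_out[2])
            # Backward pass: error gradients at the output, then the hidden layer.
            d_out = (out - target) * out * (1 - out)
            d_hid = [d_out * w_out[i] * h[i] * (1 - h[i]) for i in range(2)]
            # Gradient-descent weight updates.
            for i in range(2):
                w_out[i] -= lr * d_out * h[i]
                for j in range(2):
                    w_hidden[i][j] -= lr * d_hid[i] * x[j]
                w_hidden[i][2] -= lr * d_hid[i]
            w_out[2] -= lr * d_out

    for x, target in data:
        h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_hidden]
        print(x, target, round(sigmoid(w_out[0] * h[0] + w_out[1] * h[1] + w_out[2]), 2))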

1990s & Turn of the Century

The 1990s marked major achievements in many areas of AI and demonstrations of various applications. In 1995, one of Dickmanns' robot cars drove more than 1000 miles in traffic at up to 110 mph. Deep Blue, a chess-playing computer, beat Garry Kasparov in a famous six-game match in 1997. DARPA stated that the costs saved by implementing AI methods for scheduling units in the first Persian Gulf War had repaid the US government's entire investment in AI research since the 1950s. Honda built the first prototypes of humanoid robots like the one depicted above.

During the 1990s and 2000s, AI became increasingly influenced by probability theory and statistics. Bayesian networks are the focus of this movement, providing links to more rigorous topics in statistics and engineering, such as Markov models and Kalman filters, and bridging the divide between "neat" and "scruffy" approaches. Recent years have also seen growing interest in applying game theory to AI decision making. This new school of AI is sometimes called "machine learning". After the September 11, 2001 attacks there has been much renewed interest and funding for threat-detection AI systems, including machine vision research and data mining. However, despite the hype, excitement about Bayesian AI is perhaps now fading again, as successful Bayesian models have appeared only for small statistical tasks (such as finding principal components probabilistically) and appear to be intractable for general perception and decision making.
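
At the heart of this probabilistic school is Bayes' rule, which updates a prior belief with observed evidence; Bayesian networks chain many such updates together. A minimal sketch of a diagnostic-style posterior computation, with invented numbers:

    # Minimal sketch of Bayesian reasoning: posterior from prior and likelihoods.
    # All probabilities here are invented for illustration.
    p_disease = 0.01                 # Prior P(disease)
    p_pos_given_disease = 0.95       # Likelihood P(test positive | disease)
    p_pos_given_healthy = 0.05       # False-positive rate P(positive | no disease)

    # Total probability of a positive test, summed over both hypotheses.
    p_pos = (p_pos_given_disease * p_disease
             + p_pos_given_healthy * (1 - p_disease))

    # Bayes' rule: P(disease | positive) = P(positive | disease) P(disease) / P(positive).
    p_disease_given_pos = p_pos_given_disease * p_disease / p_pos
    print(round(p_disease_given_pos, 3))  # ~0.161: one positive test is weak evidence.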

Challenge & Prize

The DARPA Grand Challenge is a race for a $2 million prize in which cars drive themselves across several hundred miles of challenging desert terrain without any communication with humans, using GPS, computers and a sophisticated array of sensors. In 2005 the winning vehicles completed all 132 miles of the course in just under 7 hours. However, no prize money will be awarded to the winners of the 2007 race: DARPA's funds were re-allocated through a bill signed by George W. Bush in which Congress shifted the authority from DARPA to its superior, the Director of Defense Research and Engineering. [1]

In the post-dot-com-boom era, some search engine websites have sprung up that use a simple form of AI to provide answers to questions entered by the visitor. A question such as "What is the tallest building?" can be entered into the search engine's input form, and a list of answers will be returned.
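
A minimal sketch of such template-based question answering, assuming an invented fact table and question patterns:

    import re

    # Minimal sketch of template-based question answering, in the spirit of the
    # simple natural-language search described above. The fact table and the
    # patterns are invented placeholders.
    FACTS = {"tallest building": "Taipei 101", "longest river": "the Nile"}

    def answer(question):
        match = re.search(r"what is the (tallest building|longest river)",
                          question.lower())
        if match:
            return FACTS[match.group(1)]
        return "No answer found."

    print(answer("What is the tallest building?"))  # Taipei 101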

AI in Philosophy

The strong AI vs. weak AI debate ("can a man-made artifact be conscious?") is still a hot topic amongst AI philosophers. It involves the philosophy of mind and the mind-body problem. Most notably, Roger Penrose in his book The Emperor's New Mind and John Searle with his "Chinese room" thought experiment argue that true consciousness cannot be achieved by formal logic systems, while Douglas Hofstadter in Gödel, Escher, Bach and Daniel Dennett in Consciousness Explained argue in favour of functionalism. Many strong AI supporters consider artificial consciousness the holy grail of artificial intelligence. Edsger Dijkstra famously opined that the debate had little importance: "The question of whether a computer can think is no more interesting than the question of whether a submarine can swim."

Epistemology, the study of knowledge, also makes contact with AI, as engineers find themselves debating questions similar to those of philosophers about how best to represent and use knowledge and information (e.g. semantic networks).
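
A semantic network represents knowledge as labeled edges in a graph, with properties inherited along is_a links. A minimal sketch with invented facts:

    # Minimal sketch of a semantic network: knowledge as labeled graph edges.
    # The facts below are illustrative.
    network = [
        ("canary", "is_a", "bird"),
        ("bird", "is_a", "animal"),
        ("bird", "has", "wings"),
        ("canary", "can", "sing"),
    ]

    def related(node, relation):
        """Follow edges of one relation type from a node, with inheritance."""
        results = {target for source, rel, target in network
                   if source == node and rel == relation}
        # Inherit properties along is_a links (simplified spreading activation).
        for source, rel, parent in network:
            if source == node and rel == "is_a":
                results |= related(parent, relation)
        return results

    print(related("canary", "has"))   # {'wings'} -- inherited from bird
    print(related("canary", "is_a"))  # {'bird', 'animal'}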

AI in business

Banks use artificial intelligence systems to organize operations, invest in stocks, and manage properties. In August 2001, robots beat humans in a simulated financial trading competition (BBC News, 2001).[1] A medical clinic can use artificial intelligence systems to organize bed schedules, rotate staff, and provide medical information. Many practical applications depend on artificial neural networks: networks that pattern their organization in mimicry of a brain's neurons and have been found to excel at pattern recognition. Financial institutions have long used such systems to detect charges or claims outside of the norm, flagging these for human investigation. Neural networks are also being widely deployed in homeland security, speech and text recognition, medical diagnosis (such as in Concept Processing technology in EMR software), data mining, and e-mail spam filtering.
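
The fraud-flagging idea can be sketched without a neural network at all: the simplest version flags charges that fall far outside the statistical norm. A minimal z-score sketch, with an invented threshold and invented charge amounts:

    import statistics

    # Minimal sketch of flagging charges "outside the norm" with a z-score
    # test. Real deployments often use neural networks; the threshold and
    # data here are illustrative assumptions.
    def flag_outliers(amounts, threshold=3.0):
        """Return charges more than `threshold` standard deviations from the mean."""
        mean = statistics.mean(amounts)
        stdev = statistics.stdev(amounts)
        return [a for a in amounts if abs(a - mean) / stdev > threshold]

    charges = [12.5, 9.99, 15.0, 11.2, 14.8, 10.5, 950.0, 13.1]
    print(flag_outliers(charges, threshold=2.0))  # [950.0]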

Robots have also become common in many industries. They are often given jobs that are considered dangerous to humans. Robots have also proven effective in jobs that are very repetitive, where lapses in concentration may lead to mistakes or accidents, and in jobs which humans may find degrading. General Motors uses around 16,000 robots for tasks such as painting, welding, and assembly. Japan leads the world in the use of robots. In 1995, 700,000 robots were in use worldwide, over 500,000 of them in Japan (Encarta, 2006).

AI in fiction

In science fiction AI — almost always strong AI — is commonly portrayed as an upcoming power trying to overthrow human authority as in HAL 9000, Skynet, Colossus and The Matrix or as service humanoids like C-3PO, Marvin, Data, KITT and KARR, the Bicentennial Man, the Mechas in A.I., Cortana from the Halo series or Sonny in I, Robot.

A notable exception is Mike in Robert A. Heinlein's The Moon Is a Harsh Mistress: a supercomputer that becomes aware and aids in a local revolution.

The inevitability of world domination by out-of-control AI is also argued by some writers, such as Kevin Warwick. In works such as the Japanese manga Ghost in the Shell, the existence of intelligent machines questions the definition of life as organisms rather than a broader category of autonomous entities, establishing a notional concept of systemic intelligence. See the list of fictional computers and the list of fictional robots and androids.

Some fiction writers, such as Vernor Vinge and Ray Kurzweil, have also speculated that the advent of strong AI is likely to cause abrupt and dramatic societal change. The period of abrupt change is sometimes referred to as "the Singularity".

Author Frank Herbert explored the idea of a time when mankind might ban clever machines entirely. His Dune series mentions a rebellion called the Butlerian Jihad, in which mankind defeats the smart machines of the future and then imposes a death penalty against any who would again create thinking machines. The fictional Orange Catholic Bible is often quoted: "Thou shalt not make a machine in the likeness of a human mind."

See also

Applications

Typical problems to which AI methods are applied:

  • Pattern recognition (e.g. handwriting, speech, and facial recognition)
  • Control, planning and scheduling
  • Answering diagnostic and consumer questions
  • Game playing (e.g. computer chess and other video games)

Other fields in which AI methods are implemented:

  • Finance and banking
  • Medicine (diagnosis and clinic management)
  • Industrial robotics
  • Homeland security, data mining, and e-mail spam filtering

Lists of researchers, projects & publications

References

  1. "Robots beat humans in trading battle". BBC News, Business. The British Broadcasting Corporation (August 8, 2001). Retrieved on 2006-11-02.
  • Cummings, Maeve; McCubbrey, Donald J.; Pinsonneault, Alain; Donovan, Richard. Management Information Systems for the Information Age. Third Canadian Edition. Canada: McGraw-Hill, 2006.

External links
