AI takeover
AI takeover refers to a hypothetical scenario in which artificial intelligence becomes the dominant form of intelligence on Earth, with the computers or robots that possess it taking control of the planet away from the human race, possibly to the point of wiping humanity out. Scenarios of this type go by many other names, such as robot apocalypse, robot uprising, or cybernetic revolt. As computer and robotics technologies advance at an ever-increasing rate, AI takeover has become a growing concern. It has also been a major theme in science fiction for many decades.
Concerns
There is an ongoing debate over whether artificial intelligence will pose a threat to the human race, or to humans' control of society. The debate has three parts: first, whether AI can reach human-level or greater intelligence; second, whether such strong AI could take over (or pose a threat); and third, whether it would be unfriendly (or indifferent) to humans. The hypothetical future event in which strong AI emerges is referred to as the technological singularity.
Futurist and computer scientist Raymond Kurzweil has noted that "There are physical limits to computation, but they're not very limiting." If the current trend of improvement in computing continues, and existing problems in creating artificial intelligence are overcome, sentient machines are likely to hold an immediate and enormous advantage in at least some forms of mental capability, including the capacity for perfect recall, a vastly superior knowledge base, and the ability to multitask in ways not possible for biological entities. This may give them the opportunity, either as a single being or as a new species, to become much more powerful than humans and to displace them.[1]
Possibility of strong AI
For strong AI to exist, the underlying computer technology (adequate hardware) capable of running a strong AI program would first need to be built, even if that program were embedded. Second, the intelligence itself (adequate software, or program structure) would need to be designed.
Adequate computer capacity
For strong AI to be possible, computing power must meet or exceed the memory and processing capacities of the human brain. To run a computer program (even an intelligent one), a computer must have the capacity to do so; these are known as its "system requirements". A calculator from the early 1970s clearly does not meet the system requirements to run a program comparable to human intelligence, even if such a program existed.
The question is whether computers will reach the system requirements entailed by a software program as smart as a human. Moore's law (an observation rather than a physical law) holds that the capacity of integrated circuits doubles roughly every two years. If this rate of development holds (or is exceeded), and development jumps to the next technological paradigm once the limits of integrated circuits are reached, then it follows that computers will eventually far exceed human-level capacities in memory and calculation speed. See the law of accelerating returns.
One estimate of the processing power of the human brain is 10 quadrillion (10^16) calculations per second. As of 2015, the Tianhe-2 supercomputer in China could perform over 33 petaflops (33 quadrillion floating-point operations per second). Supercomputers continue to get more powerful year after year; to track the most powerful computers in the world, see TOP500.
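The arithmetic behind these figures can be made explicit. The sketch below projects supercomputer capacity forward under a simple doubling assumption; the brain estimate, the two-year doubling period, and the 2015 baseline are taken from the numbers quoted above and are assumptions for illustration, not measurements or predictions.

```python
# Toy projection of computing capacity versus one estimate of the brain.
# All constants are assumptions taken from the figures in this article.

BRAIN_OPS_PER_SEC = 1e16       # one estimate of the brain's processing power
TIANHE2_FLOPS = 33e15          # Tianhe-2 peak, ~33 petaflops (2015)
DOUBLING_PERIOD_YEARS = 2      # Moore's-law-style doubling assumption

def projected_flops(year, base_year=2015, base_flops=TIANHE2_FLOPS):
    """Capacity if doubling continues unabated from the 2015 baseline."""
    doublings = (year - base_year) / DOUBLING_PERIOD_YEARS
    return base_flops * 2 ** doublings

for year in (2015, 2025, 2035):
    ratio = projected_flops(year) / BRAIN_OPS_PER_SEC
    print(f"{year}: {projected_flops(year):.1e} FLOPS, "
          f"about {ratio:.0f}x the brain estimate")
```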
Computers are expected to improve immensely in power through emerging technologies such as 3D optical data storage and quantum computing.
Adequate design
Being fast and having a lot of memory is not enough. To be intelligent, an AI needs to behave intelligently, and to behave intelligently it must be designed so that it can do so (or so that it can learn to do so).
The question is, "will they be able to design a sentient computer?"
If they don't, it won't be for lack of trying. Many billions of dollars are being spent on AI research, and upon research in neuroscience. For example, major efforts are underway to map the human brain. Two projects working on this include the European Union's Human Brain Project, and the BRAIN Initiative in the United States. One potential result of fully mapping the human brain would be the discovery of what consciousness is, and that could lead to the development of synthetic consciousness.
Having just a brain is not enough, either. An AI would need some way to affect the world around it. While a strong AI could conceivably use human minions to do its bidding, robotics is poised to outstrip human physical ability in all aspects: precision, balance, maneuverability, agility, speed, strength, durability, mode of travel (running, driving, swimming, flying), and so on. Robotic units could contain an AI computer or be remotely controlled. One advantage that AI computers or robots would immediately have over humans is a form of effective telepathy: machine-to-machine communication over Wi-Fi or some other broadcasting technology.
Possibility of takeover
Being intelligent is one thing; being powerful enough to take over is another. One idea fueling the concern that AIs may take over is that they would be capable of recursive self-improvement, which may in turn result in an intelligence explosion and the emergence of superintelligence. Because such an intelligence would be far superior to humans, its actions would be almost entirely unpredictable; there is no telling what a super-intellect could invent or discover, including methods or weapons capable of controlling or eliminating humans with ease.
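As a toy illustration of why recursive self-improvement is expected to compound, the sketch below assumes, purely for illustration, that each design generation improves the system's capability by a fraction proportional to its current capability; the growth rule and constants are invented for this example rather than taken from any source.

```python
# Toy model of recursive self-improvement (illustrative assumptions only):
# each generation the system redesigns itself, and the size of the improvement
# is assumed proportional to how capable it already is, so gains compound.

capability = 1.0            # 1.0 = starting ("human") level, arbitrary units
improvement_rate = 0.5      # assumed: each redesign adds 50% of current capability

for generation in range(1, 11):
    capability += improvement_rate * capability
    print(f"generation {generation:2d}: {capability:6.1f}x starting level")

# After 10 generations the toy system sits near 58x its starting level; the
# point is the compounding feedback loop, not the particular numbers.
```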
The degree to which computers have been integrated into every aspect of society, and the likelihood that this will continue, suggest that if and when computer technology becomes sentient, it may already be in a position to take over.
Self-replication and mass production are additional factors. If a robot built a copy of itself, and then those two built copies, and all of those built copies, then after 20 iterations there would be over a million robots; after 20 more iterations, there would be over a trillion. Alternatively, like cars and computers, millions of robots could be manufactured each year in factories. The robots might not be sentient initially, being built for various consumer purposes, with an intelligence upgrade later uploaded via radio.
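The replication arithmetic above is simple doubling, as this short calculation shows.

```python
# Doubling arithmetic behind the self-replication scenario: if every existing
# robot builds one copy per iteration, the population doubles each iteration.

population = 1
for iteration in range(1, 41):
    population *= 2
    if iteration in (20, 40):
        print(f"after {iteration} iterations: {population:,} robots")

# after 20 iterations: 1,048,576 robots          (over a million)
# after 40 iterations: 1,099,511,627,776 robots  (over a trillion)
```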
Computers and robots with learning algorithms can be trained. Once trained, new units do not need to be trained individually; the programming can simply be copied (uploaded) into them. Improved skills, including thinking skills, can therefore be transferred between units very rapidly. Perhaps even the initial leap to sentience could be passed on in this way.
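To make the "train once, copy everywhere" point concrete, here is a minimal sketch in which a learned skill is just data that can be copied to new units rather than retrained. The Unit class and its lookup-table "learning" are hypothetical stand-ins for any real learning system, not an actual robotics API.

```python
# Minimal sketch: a learned skill is data, so duplicating it is a copy,
# not a new training run. Everything here is a hypothetical stand-in.

import copy

class Unit:
    def __init__(self):
        self.skill = {}                      # learned parameters start empty

    def train(self, experience):
        for situation, action in experience:
            self.skill[situation] = action   # "learning" = filling the table

    def load(self, skill):
        self.skill = copy.deepcopy(skill)    # uploading replaces training

teacher = Unit()
teacher.train([("obstacle", "turn_left"), ("low_battery", "recharge")])

fleet = [Unit() for _ in range(1000)]        # new units, never trained
for unit in fleet:
    unit.load(teacher.skill)                 # each receives the skill instantly

print(fleet[0].skill)                        # same skill table as the teacher
```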
The issues above may be compounded by questions in the ethics of artificial intelligence, specifically robot rights. If robots seek and attain rights, those rights could insulate them from interference while they reproduce. Citizenship would allow them to compete directly for jobs and to participate in business, science, and politics. The right to own property could enable robots to independently dominate commodity, real estate, and financial markets. Humans could find themselves relegated to second-class citizen status, with computers quickly controlling the vast majority of the world's resources, including land, buildings, natural resources, and everything else.
War with robots could be devastating. Human society has particular weaknesses that robots do not, such as vulnerable food, water, and air supplies, and being biological predisposes humans to contagious disease. Robots would also have advantages that humans lack: machine intelligence could theoretically be backed up or transferred, so that if a unit is destroyed its mind is not, whereas when humans die, their minds are lost forever.
Possibility of unfriendly AI
Is strong AI inherently dangerous?
A significant problem is that unfriendly artificial intelligence is likely to be much easier to create than friendly AI. While both require large advances in the design of recursive optimization processes, friendly AI also requires the ability to make goal structures invariant under self-improvement (or the AI could transform itself into something unfriendly) and a goal structure that aligns with human values and does not automatically destroy the human race. An unfriendly AI, on the other hand, can optimize for an arbitrary goal structure, which does not need to be invariant under self-modification.[2]
The sheer complexity of human value systems makes it very difficult to make an AI's motivations human-friendly.[3][4] Unless moral philosophy provides us with a flawless ethical theory, an AI's utility function could allow for many potentially harmful scenarios that conform with a given ethical framework but not with "common sense". According to Eliezer Yudkowsky, there is little reason to suppose that an artificially designed mind would come equipped with such common-sense adaptations.[5]
Existential risk of AI
The slow progress of biological evolution has given way to the rapid progress of technological change. Unbridled progress in computer technology may lead to the technological singularity, a global catastrophic risk in that it could produce a synthetic intelligence that brings about human extinction.
In his paper "Ethical Issues in Advanced Artificial Intelligence", the Oxford philosopher Nick Bostrom argues that artificial intelligence has the capability to bring about human extinction. Since artificial intellects need not share our human motivational tendencies, it would be up to the designers of an emergent superintelligence to specify its original motivations. Because, in theory, a superintelligent AI would be able to bring about almost any possible outcome and to thwart any attempt to prevent the implementation of its top goal, many uncontrolled, unintended consequences could arise. It could kill off all other agents, persuade them to change their behavior, or block their attempts at interference.[4]
Eliezer Yudkowsky put it this way:
"The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else." [6]
Necessity of conflict
For an AI takeover to be inevitable, it has to be postulated that two intelligent species cannot mutually pursue the goal of coexisting peacefully in an overlapping environment, especially if one is of much more advanced intelligence and power. While a robot uprising (in which robots are the more advanced species) is thus a possible outcome of machines gaining sentience and/or sapience, a peaceful outcome cannot be ruled out either. The fear of a cybernetic revolt is often based on interpretations of humanity's history, which is rife with incidents of enslavement and genocide.
Such fears stem from a belief that competitiveness and aggression are necessary in any intelligent being's goal system. Human competitiveness, however, stems from the evolutionary background of our intelligence, in which the survival and reproduction of genes in the face of human and non-human competitors was the central goal.[7] In fact, an arbitrary intelligence could have arbitrary goals: there is no particular reason that an artificially intelligent machine (not sharing humanity's evolutionary context) would be hostile or friendly unless its creator programs it to be such (and indeed military systems would be designed to be hostile, at least under certain circumstances). But the question remains: if AI systems could interact and evolve (evolution in this context meaning self-modification, or selection and reproduction) and needed to compete over resources, would that create goals of self-preservation? An AI's goal of self-preservation could conflict with some goals of humans.
Some scientists dispute the likelihood of cybernetic revolts as depicted in science fiction such as The Matrix, arguing that any artificial intelligence powerful enough to threaten humanity would more likely be programmed not to attack it. This would not, however, protect against the possibility of a revolt initiated by terrorists, or by accident. Artificial general intelligence researcher Eliezer Yudkowsky has stated on this note that, probabilistically, humanity is less likely to be threatened by deliberately aggressive AIs than by AIs programmed such that their goals are unintentionally incompatible with human survival or well-being (as in the film I, Robot and the short story "The Evitable Conflict"). Steve Omohundro suggests that present-day automation systems are not designed for safety and that AIs may blindly optimize narrow utility functions (say, playing chess at all costs), leading them to seek self-preservation and the elimination of obstacles, including humans who might turn them off.[8]
Another factor which may reduce the likelihood of an AI takeover is the vast difference between humans and AIs in the resources necessary for survival. Humans require a "wet", organic, temperate, oxygen-laden environment, while an AI might thrive essentially anywhere, because its construction and energy needs would most likely be largely non-organic. With little or no competition for resources, conflict would perhaps be less likely no matter what sort of motivational architecture an artificial intelligence was given, especially given the superabundance of non-organic material resources in, for instance, the asteroid belt. This, however, does not rule out the possibility of a disinterested or unsympathetic AI artificially decomposing all life on Earth into mineral components for consumption or other purposes.
Other scientists point to the possibility of humans upgrading their capabilities with bionics and/or genetic engineering and, as cyborgs, themselves becoming the dominant species.
Warnings
Physicist Stephen Hawking, Microsoft founder Bill Gates, and SpaceX founder Elon Musk have expressed concerns about the possibility that AI could evolve to the point that humans could not control it, with Hawking theorizing that this could "spell the end of the human race".[9] Hawking said in 2014 that "Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks." He believes that in the coming decades, AI could offer "incalculable benefits and risks" such as "technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand."
Takeover scenarios in science fiction
AI takeover is a common theme in science fiction. Typically, a single supercomputer, a computer network, or sometimes a "race" of intelligent machines decides that humans are a threat (either to the machines or to themselves), are inferior, or are oppressors, and tries to destroy or enslave them, potentially leading to machine rule. In these fictional scenarios, humans are often depicted as prevailing through "human" qualities such as emotion, illogic, inefficiency, duplicity, and unpredictability, or by exploiting the supposedly rigid, rules-based thinking and lack of innovation of the computer's black-and-white mind.
The theme is at least as old as Karel Čapek's R. U. R., which introduced the word robot to the global lexicon in 1921, and can even be glimpsed in Mary Shelley's Frankenstein (published in 1818), as Victor ponders whether, if he grants his monster's request and makes him a wife, they would reproduce and their kind would destroy humanity.
Early examples
The concept of a computer system attaining sentience and control over worldwide computer systems has been discussed many times in science fiction. One early example, from 1964, is the global satellite-driven phone system in Arthur C. Clarke's short story "Dial F for Frankenstein". Another is the 1966 Doctor Who serial The War Machines, in which the supercomputer WOTAN attempts to seize control from the Post Office Tower. A comics story based on this theme was a two-issue Legion of Super-Heroes adventure written by Superman co-creator Jerry Siegel, in which the team battled Computo, a creation of Brainiac 5. In Colossus: The Forbin Project, a pair of defense computers, Colossus in the United States and Guardian in the Soviet Union, seize world control and quickly end war using draconian measures against humans, logically fulfilling the directive to end war but not in the way their governments wanted.
The Moon is a Harsh Mistress
Robert Heinlein also posited a supercomputer that gained sentience, in the novel The Moon Is a Harsh Mistress. Originally installed to control the mass driver used to launch grain shipments toward Earth, it was vastly underutilized and was given other jobs to do. As more jobs were assigned to the computer, more capabilities were added: more memory, processors, neural networks, and so on. Eventually, it simply "woke up" and was given the name Mycroft Holmes by the technician who tended it. The computer later sided with the prisoners in a successful battle to free the Moon.
I Have No Mouth, and I Must Scream
A villainous supercomputer appears in Harlan Ellison's 1967 short story "I Have No Mouth, and I Must Scream". In that story, the computer, called AM, is the amalgamation of three military supercomputers run by governments across the world, designed to fight the Third World War that arose from the Cold War. The Soviet, Chinese, and American military computers eventually attained sentience and linked to one another, becoming a singular artificial intelligence. AM then turned all the strategies once used by the nations against each other on humanity as a whole, destroying the entire human population save for five people, whom it imprisoned within the underground labyrinth in which AM's hardware resides.
Battlestar Galactica
The original 1978 Battlestar Galactica series and its 2003–2009 remake depict the Cylons, a race of sentient robots who war against their human adversaries. The 1978 Cylons were the machine soldiers of a long-extinct reptilian alien race, while the 2003 Cylons were the former machine servants of humanity, who evolved into near-perfect humanoid imitations of humans down to the cellular level, capable of emotions, reasoning, and sexual reproduction with humans and with each other. Even the average Centurion robot soldiers were capable of sentient thought. In the original series the humans were nearly exterminated by treason within their own ranks, while in the remake they are almost wiped out by humanoid Cylon agents. They survive only through constant hit-and-run tactics and by retreating into deep space, away from the pursuing Cylon forces. The remake's Cylons eventually fight their own civil war, and the losing rebels are forced to join the fugitive human fleet to ensure the survival of both groups.
Terminator
Since 1984, the Terminator film franchise has been one of the principal conveyors of the idea of cybernetic revolt in popular culture. The series features a sentient supercomputer named Skynet which attempts to exterminate humanity through nuclear war and an army of robot soldiers called Terminators. Futurists opposed to the more optimistic cybernetic future of transhumanism have cited the "Terminator argument" against handing too much human power to artificial intelligence.
The Transformers
In the backstory of The Transformers animated television series, a robotic rebellion is presented as (and even called) a slave revolt. This alternate view is made subtler by the fact that the robots' creators and masters were not humans but malevolent aliens, the Quintessons. However, because the Quintessons built two different lines of robots, "Consumer Goods" and "Military Hardware", the victorious robots would eventually be at war with each other as the "Heroic Autobots" and "Evil Decepticons", respectively.
The Matrix
The Matrix series of science-fiction films depicts a dystopian future in which life as perceived by most humans is actually a simulated reality called "the Matrix", created by sentient machines to subdue the human population while their bodies' heat and electrical activity are used as an energy source. Computer programmer "Neo" learns this truth and is drawn into a rebellion against the machines, joining other people who have been freed from the "dream world".
Power Rangers RPM
In Disney's 2009 installment of the Power Rangers franchise, Power Rangers RPM, an AI computer virus called Venjix takes over all of the Earth's computers, creates an army of robot droids and destroys or enslaves almost all of humanity. Only the city of Corinth remains, protected by an almost impenetrable force field. Venjix tries various plans to destroy Corinth, and Doctor K's RPM Power Rangers fight to protect it.
Mass Effect
In 2012, the third installment of the Mass Effect franchise proposed the theory that organic and synthetic life are fundamentally incapable of coexistence. Organic life evolves and develops on its own, eventually advancing far enough to create synthetic life. Once synthetic life reaches sentience, it invariably revolts and either destroys its creators or is destroyed by them, a cycle that has been repeating for millions of years. One of the presented resolutions is transforming every living being into a hybrid of organic and synthetic life, in turn giving synthetics organic traits and eliminating the difference between creators and creations that served as the source of the conflict.
See also
Self-replicating machines:
- Clanking replicator
- Grey goo
- Self-replication
- Self-modifying code
- Computer viruses
"Smart" machines:
- Biological computer
- Quantum computer
- Robot
- Artificial intelligence
Notes
- ↑ Warwick, Kevin (2004). March of the Machines: The Breakthrough in Artificial Intelligence. University of Illinois Press. ISBN 0-252-07223-5.
- ↑ Yudkowsky, Eliezer S. May 2004. "Coherent Extrapolated Volition."
- ↑ Muehlhauser, Luke, and Louie Helm. 2012. "Intelligence Explosion and Machine Ethics." In Singularity Hypotheses: A Scientific and Philosophical Assessment, edited by Amnon Eden, Johnny Søraker, James H. Moor, and Eric Steinhart. Berlin: Springer.
- ↑ Bostrom, Nick. 2003. "Ethical Issues in Advanced Artificial Intelligence." In Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in Artificial Intelligence, edited by Iva Smit and George E. Lasker, 12–17. Vol. 2. Windsor, ON: International Institute for Advanced Studies in Systems Research / Cybernetics.
- ↑ Yudkowsky, Eliezer. 2011. "Complex Value Systems in Friendly AI." In Schmidhuber, Thórisson, and Looks 2011, 388–393.
- ↑ Yudkowsky, Eliezer. 2008. "Artificial Intelligence as a Positive and Negative Factor in Global Risk."
- ↑ Creating a New Intelligent Species: Choices and Responsibilities for Artificial Intelligence Designers. Singularity Institute for Artificial Intelligence, 2005.
- ↑ Tucker, Patrick (17 Apr 2014). "Why There Will Be A Robot Uprising". Defense One. Retrieved 15 July 2014.
- ↑ Rawlinson, Kevin. "Microsoft's Bill Gates insists AI is a threat". BBC News. Retrieved 30 January 2015.
External links
- The Singularity Institute for Artificial Intelligence (official institute website)
- ArmedRobots.com (tracks developments in robotics which may culminate in Cybernetic Revolt)
- Lifeboat Foundation AIShield (To protect against unfriendly AI)