Eliezer Yudkowsky

Born: September 11, 1979
Nationality: American
Organization: Machine Intelligence Research Institute
Spouse(s): Brienne Yudkowsky (m. 2013)[1]

Eliezer Shlomo Yudkowsky (born September 11, 1979) is an American artificial intelligence researcher and writer best known for popularizing the idea of friendly artificial intelligence.[2][3] He is a co-founder of and research fellow at the Machine Intelligence Research Institute (MIRI), a private research nonprofit based in Berkeley, California.[4] He did not attend high school or college and has no formal education in artificial intelligence; he describes himself as self-taught in the field.[5] His work on the prospect of a runaway intelligence explosion was a seminal influence on Nick Bostrom's Superintelligence: Paths, Dangers, Strategies.

Work in artificial intelligence safety

Goal learning and incentives in software systems

Yudkowsky's views on the safety challenges posed by future generations of AI systems are discussed in the standard undergraduate textbook in AI, Stuart Russell and Peter Norvig's Artificial Intelligence: A Modern Approach. Noting the difficulty of formally specifying general-purpose goals by hand, Russell and Norvig cite Yudkowsky's proposal that autonomous and adaptive systems be designed to learn correct behavior over time:

Yudkowsky (2008)[6] goes into more detail about how to design a Friendly AI. He asserts that friendliness (a desire not to harm humans) should be designed in from the start, but that the designers should recognize both that their own designs may be flawed, and that the robot will learn and evolve over time. Thus the challenge is one of mechanism design – to design a mechanism for evolving AI under a system of checks and balances, and to give the systems utility functions that will remain friendly in the face of such changes.[2]

Citing Steve Omohundro's idea of instrumental convergence, Russell and Norvig caution that autonomous decision-making systems with poorly designed goals would have default incentives to treat humans adversarially, or as dispensable resources, unless specifically designed to counter such incentives: "even if you only want your program to play chess or prove theorems, if you give it the capability to learn and alter itself, you need safeguards".[2][7]

In response to the instrumental convergence concern, Yudkowsky and other MIRI researchers have recommended that work be done to specify software agents that converge on safe default behaviors even when their goals are misspecified.[8] The Future of Life Institute (FLI) summarizes this research program in the research priorities document accompanying its Open Letter on Artificial Intelligence:

If an AI system is selecting the actions that best allow it to complete a given task, then avoiding conditions that prevent the system from continuing to pursue the task is a natural subgoal (and conversely, seeking unconstrained situations is sometimes a useful heuristic). This could become problematic, however, if we wish to repurpose the system, to deactivate it, or to significantly alter its decision-making process; such a system would rationally avoid these changes. Systems that do not exhibit these behaviors have been termed corrigible systems, and both theoretical and practical work in this area appears tractable and useful. For example, it may be possible to design utility functions or decision processes so that a system will not try to avoid being shut down or repurposed, and theoretical frameworks could be developed to better understand the space of potential systems that avoid undesirable behaviors.[9]

Yudkowsky argues that as AI systems become increasingly intelligent, new formal tools will be needed in order to avert default incentives for harmful behavior, as well as to inductively teach correct behavior.[8][10] These lines of research are discussed in MIRI's 2015 technical agenda.[11]
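The shutdown-avoidance incentive described in the FLI passage above can be illustrated with a toy expected-utility calculation. The following sketch is purely illustrative: the payoff numbers, the shutdown probability, and the compensation adjustment are hypothetical stand-ins, not a formalism taken from the cited MIRI or FLI papers.

```python
# Toy illustration (hypothetical numbers): why a naive expected-utility
# maximizer resists shutdown, and how an adjusted utility can remove that
# incentive. This is an illustrative sketch, not MIRI's actual formalism.

P_SHUTDOWN = 0.5          # probability the operators press the off switch
REWARD_IF_RUNNING = 10.0  # task reward if the agent keeps running
REWARD_IF_SHUT_DOWN = 0.0 # task reward if the agent is shut down

def naive_expected_utility(action):
    """Expected task reward for a naive maximizer."""
    if action == "disable_off_switch":
        # Shutdown can no longer happen, so the agent always keeps its reward.
        return REWARD_IF_RUNNING
    # "comply": shutdown succeeds with probability P_SHUTDOWN.
    return (1 - P_SHUTDOWN) * REWARD_IF_RUNNING + P_SHUTDOWN * REWARD_IF_SHUT_DOWN

def indifferent_expected_utility(action):
    """A corrigibility-flavored adjustment: compensate the agent so that
    being shut down is worth exactly as much as continuing, removing any
    incentive to interfere with the off switch."""
    if action == "disable_off_switch":
        return REWARD_IF_RUNNING
    compensation = REWARD_IF_RUNNING - REWARD_IF_SHUT_DOWN
    return ((1 - P_SHUTDOWN) * REWARD_IF_RUNNING
            + P_SHUTDOWN * (REWARD_IF_SHUT_DOWN + compensation))

for utility in (naive_expected_utility, indifferent_expected_utility):
    best = max(["comply", "disable_off_switch"], key=utility)
    print(utility.__name__, "->", best)
# naive_expected_utility        -> disable_off_switch
# indifferent_expected_utility  -> comply (tied; no incentive to disable)
```

Under the naive utility, disabling the off switch strictly dominates; under the adjusted utility, the agent is indifferent and has no incentive to resist shutdown, which is the informal sense in which such a system would be "corrigible."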

System reliability and transparency

Yudkowsky studies decision theories that achieve better outcomes than causal decision theory in Newcomblike problems.[12] This includes decision procedures that allow agents to cooperate with equivalent reasoners in the one-shot prisoner's dilemma.[13] Yudkowsky has also written on theoretical prerequisites for self-verifying software.[14][10]

Yudkowsky argues that it is important for advanced AI systems to be cleanly designed and transparent to human inspection, both to ensure stable behavior and to allow greater human oversight and analysis.[10] Citing papers on this topic by Yudkowsky and other MIRI researchers, the FLI research priorities document states that work on defining correct reasoning in embodied and logically non-omniscient agents would be valuable for the design, use, and oversight of AI agents.[9][15]

Capabilities forecasting

In their discussion of Omohundro and Yudkowsky's work, Russell and Norvig cite I. J. Good's 1965 prediction that when computer systems begin to outperform humans in software engineering tasks, this may result in a feedback loop of increasingly capable AI systems. This raises the possibility that AI's impact could increase very quickly after it reaches a certain level of capability.[2]
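Good's feedback loop can be caricatured with a toy recurrence in which each generation of system builds a successor whose improvement is proportional to its own design ability. All constants below are arbitrary illustrative choices, not a forecast drawn from Good, Russell and Norvig, or Yudkowsky.

```python
# Toy caricature of I. J. Good's feedback loop: once a system's design
# ability exceeds that of its human designers, each generation builds a
# somewhat better successor. All numbers are arbitrary and illustrative.

HUMAN_LEVEL = 1.0        # design ability of the human engineers (baseline)
IMPROVEMENT_RATE = 0.2   # fractional improvement per generation, scaled by ability

capability = 1.01        # start just above human level at the relevant design tasks
for generation in range(1, 11):
    # Each generation's gain is proportional to its current design ability.
    capability *= 1 + IMPROVEMENT_RATE * (capability / HUMAN_LEVEL)
    print(f"generation {generation}: capability = {capability:.2f}x human level")
# Capability grows slowly at first, then super-exponentially, which is the
# sense in which AI's impact "could increase very quickly" past a threshold.
```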

In the intelligence explosion scenario inspired by Good's hypothetical, recursively self-improving AI systems quickly transition from subhuman general intelligence to superintelligence.[10] Nick Bostrom's 2014 book Superintelligence: Paths, Dangers, Strategies sketches out Good's argument in greater detail, while making a broader case for expecting AI systems to eventually outperform humans across the board. Bostrom cites writing by Yudkowsky on inductive value learning and on the risk of anthropomorphizing advanced AI systems, e.g.: "AI might make an apparently sharp jump in intelligence purely as the result of anthropomorphism, the human tendency to think of 'village idiot' and 'Einstein' as the extreme ends of the intelligence scale, instead of nearly indistinguishable points on the scale of minds-in-general."[6][16]

The Open Philanthropy Project, an offshoot of the charity evaluator GiveWell, credits Yudkowsky and Bostrom with several (paraphrased) arguments for expecting future AI advances to have a large societal impact:[17]

Over a relatively short geological timescale, humans have come to have enormous impacts on the biosphere, often leaving the welfare of other species dependent on the objectives and decisions of humans. It seems plausible that the intellectual advantages humans have over other animals have been crucial in allowing humans to build up the scientific and technological capabilities that have made this possible. If advanced artificial intelligence agents become significantly more powerful than humans, it seems possible that they could become the dominant force in the biosphere, leaving humans' welfare dependent on their objectives and decisions. As with the interaction between humans and other species in the natural environment, these problems could be the result of competition for resources rather than malice.

In comparison with other evolutionary changes, there was relatively little time between our hominid ancestors and the evolution of humans. There was therefore relatively little time for evolutionary pressure to lead to improvements in human intelligence relative to the intelligence of our hominid ancestors, suggesting that the increases in intelligence may be small on some absolute scale. [...T]his makes it seem plausible that creating intelligent agents that are more intelligent than humans could have dramatic real-world consequences even if the difference in intelligence is small in an absolute sense.[15]

Russell and Norvig raise the objection that there are known limits to intelligent problem-solving from computational complexity theory; if there are strong limits on how efficiently algorithms can solve various computer science tasks, then an intelligence explosion may not be possible.[2] Yudkowsky has debated the likelihood of an intelligence explosion with economist Robin Hanson, who argues that AI progress is likely to accelerate over time, but is not likely to be localized or discontinuous.[18]

Rationality writing

Between 2006 and 2009, Yudkowsky and Robin Hanson were the principal contributors to Overcoming Bias,[19] a cognitive and social science blog sponsored by the Future of Humanity Institute of Oxford University. In February 2009, Yudkowsky founded LessWrong,[20] a "community blog devoted to refining the art of human rationality".[21] Overcoming Bias has since functioned as Hanson's personal blog. LessWrong has been covered in depth in Business Insider.[22]

Yudkowsky has also written several works of fiction.[23] His fan fiction story, Harry Potter and the Methods of Rationality, uses plot elements from J.K. Rowling's Harry Potter series to illustrate topics in science.[21][24][25][26][27][28][29] The New Yorker describes Harry Potter and the Methods of Rationality as a retelling of Rowling's original "in an attempt to explain Harry's wizardry through the scientific method".[30]

More than 300 of Yudkowsky's blog posts have been released as six books, collected in a single ebook, Rationality: From AI to Zombies, published by the Machine Intelligence Research Institute in 2015.[31]

Personal views

Yudkowsky identifies as an atheist[32] and a "small-l libertarian".[33]

References

  1. Yudkowsky, Eliezer. "Eliezer S. Yudkowsky". yudkowsky.net. Retrieved October 7, 2015.
  2. Russell, Stuart; Norvig, Peter (2009). Artificial Intelligence: A Modern Approach. Prentice Hall. ISBN 978-0-13-604259-4.
  3. Leighton, Jonathan (2011). The Battle for Compassion: Ethics in an Apathetic Universe. Algora. ISBN 978-0-87586-870-7.
  4. Kurzweil, Ray (2005). The Singularity Is Near. New York City: Viking Penguin. ISBN 0-670-03384-7.
  5. Saperstein, Gregory (August 9, 2012). "5 Minutes With a Visionary: Eliezer Yudkowsky".
  6. Yudkowsky, Eliezer (2008). "Artificial Intelligence as a Positive and Negative Factor in Global Risk" (PDF). In Bostrom, Nick; Ćirković, Milan. Global Catastrophic Risks. Oxford University Press. ISBN 978-0199606504.
  7. Omohundro, Steve (2008). "The Basic AI Drives" (PDF). Proceedings of the First AGI Conference. IOS Press.
  8. Soares, Nate; Fallenstein, Benja; Yudkowsky, Eliezer (2015). "Corrigibility". AAAI Workshops: Workshops at the Twenty-Ninth AAAI Conference on Artificial Intelligence, Austin, TX, January 25–26, 2015. AAAI Publications.
  9. Future of Life Institute (2015). Research priorities for robust and beneficial artificial intelligence (PDF) (Report). Retrieved October 12, 2015.
  10. Yudkowsky, Eliezer (2013). "Five theses, two lemmas, and a couple of strategic implications". MIRI Blog. Retrieved October 12, 2015.
  11. Soares, Nate; Fallenstein, Benja (2015). "Aligning Superintelligence with Human Interests: A Technical Research Agenda" (PDF). In Miller, James; Yampolskiy, Roman; Armstrong, Stuart; et al. The Technological Singularity: Managing the Journey. Springer.
  12. Soares, Nate; Fallenstein, Benja (2015). "Toward Idealized Decision Theory". arXiv:1507.01986 [cs.AI].
  13. LaVictoire, Patrick; Fallenstein, Benja; Yudkowsky, Eliezer; Bárász, Mihály; Christiano, Paul; Herreshoff, Marcello (2014). "Program Equilibrium in the Prisoner's Dilemma via Löb's Theorem". Multiagent Interaction without Prior Coordination: Papers from the AAAI-14 Workshop. AAAI Publications.
  14. Fallenstein, Benja; Soares, Nate (2015). Vingean Reflection: Reliable Reasoning for Self-Improving Agents (PDF) (Technical report). Machine Intelligence Research Institute. 2015-2.
  15. GiveWell (2015). Potential risks from advanced artificial intelligence (Report). Retrieved October 12, 2015.
  16. Bostrom, Nick (2014). Superintelligence: Paths, Dangers, Strategies. ISBN 0199678111.
  17. Yudkowsky, Eliezer (2013). Intelligence Explosion Microeconomics (PDF) (Technical report). Machine Intelligence Research Institute. 2013-1.
  18. Hanson, Robin; Yudkowsky, Eliezer (2013). The Hanson-Yudkowsky AI Foom Debate. Machine Intelligence Research Institute.
  19. "Overcoming Bias: About". Robin Hanson. Retrieved February 1, 2012.
  20. "Where did Less Wrong come from? (LessWrong FAQ)". Retrieved September 11, 2014.
  21. Miller, James (2012). Singularity Rising. ISBN 978-1936661657.
  22. Miller, James (July 28, 2011). "You Can Learn How To Become More Rational". Business Insider. Retrieved March 25, 2014.
  23. Eliezer S. Yudkowsky. "Fiction". Yudkowsky. Retrieved September 14, 2015.
  24. Brin, David (June 21, 2010). "CONTRARY BRIN: A secret of college life... plus controversies and science!". Davidbrin.blogspot.com. Retrieved August 31, 2012; Snyder, Daniel. "'Harry Potter' and the Key to Immortality". The Atlantic.
  25. Authors (April 2, 2012). "Rachel Aaron interview (April 2012)". Fantasybookreview.co.uk. Retrieved August 31, 2012.
  26. "Civilian Reader: An Interview with Rachel Aaron". Civilian-reader.blogspot.com. May 4, 2011. Retrieved August 31, 2012.
  27. Hanson, Robin (October 31, 2010). "Hyper-Rational Harry". Overcoming Bias. Retrieved August 31, 2012.
  28. Swartz, Aaron. "The 2011 Review of Books (Aaron Swartz's Raw Thought)". archive.org. Archived from the original on March 16, 2013. Retrieved October 4, 2013.
  29. "Harry Potter and the Methods of Rationality". fanfiction.net. February 28, 2010. Retrieved December 29, 2014.
  30. Packer, George (2011). "No Death, No Taxes: The Libertarian Futurism of a Silicon Valley Billionaire". The New Yorker: 54. Retrieved October 12, 2015.
  31. Yudkowsky, Eliezer (March 12, 2015). Rationality: From AI to Zombies. Machine Intelligence Research Institute.
  32. "The Correct Contrarian Cluster - Less Wrong". lesswrong.com.
  33. Yudkowsky, Eliezer (September 7, 2011). "Is That Your True Rejection?". Cato Unbound. Response Essays.