Machine Intelligence Research Institute

Formation: 2000
Type: Nonprofit research institute
Legal status: 501(c)(3) tax exempt charity
Purpose: Research into friendly artificial intelligence
Key people: Edwin Evans, Luke Muehlhauser, Eliezer Yudkowsky
Revenue: $1.7 million (2013)[1]
Staff: 9[1]
Website: intelligence.org
Formerly called: Singularity Institute, Singularity Institute for Artificial Intelligence

The Machine Intelligence Research Institute (MIRI) is a non-profit organization founded in 2000 to research safety issues related to the development of Strong AI. The organization advocates ideas initially put forth by I. J. Good and Vernor Vinge regarding an "intelligence explosion", or Singularity, which MIRI thinks may follow the creation of sufficiently advanced AI.[2] Research fellow Eliezer Yudkowsky coined the term Friendly AI to refer to a hypothetical superintelligent AI that has a positive impact on humanity.[3] The organization has argued that to be "Friendly" a self-improving AI needs to be constructed in a transparent, robust, and stable way.[4] MIRI hosts regular research workshops to develop the mathematical foundations for constructing Friendly AI.[5]

Luke Muehlhauser is the institute's Executive Director.[6] Inventor and futures studies author Ray Kurzweil served as one of its directors from 2007 to 2010.[7] The institute’s advisory board includes Oxford philosopher Nick Bostrom, PayPal co-founder Peter Thiel, and Foresight Institute co-founder Christine Peterson. MIRI is tax exempt under Section 501(c)(3) of the United States Internal Revenue Code, and has a Canadian branch, SIAI-CA, formed in 2004 and recognized as a Charitable Organization by the Canada Revenue Agency.

Purpose

MIRI's purpose is "to ensure that the creation of smarter-than-human intelligence has a positive impact".[8] MIRI does not itself intend to program such an AI, and its work does not involve any coding. Instead, it works on the mathematical and philosophical problems that arise when an agent has the ability to see how its own mind is constructed and to modify important parts of it. The goal is to build a framework for the creation of a Friendly AI, so that the first superintelligence is not (and does not become) an unfriendly AI.

Friendly and unfriendly AI

A friendly artificial intelligence is a hypothetical artificial general intelligence (AGI) that would have a positive rather than negative effect on humanity. An unfriendly artificial intelligence, conversely, would have an overall negative impact on humanity. This negative impact could range from the AI not accomplishing its goals quite the way we had originally intended, to the AI destroying humanity as an instrumental step to fulfilling one of its goals. According to Nick Bostrom, an artificial general intelligence will be unfriendly unless its goals are specifically designed to be aligned with human values.[9] The term was coined by Eliezer Yudkowsky[10] to discuss superintelligent artificial agents that reliably implement human values.

Key results and papers

MIRI’s research is concentrated in four areas: computational self-reflection, decision procedures, value functions, and forecasting.

Computational self-reflection

In order to modify itself, an AGI will need to reason about its own behavior and prove that the modified version will continue to optimize for the correct goals. This leads to several fundamental problems such as the Löbian obstacle, where an agent cannot prove that a more powerful version of itself is consistent within the current version’s framework.[11] MIRI aims to develop a rigorous basis for self-reflective reasoning to overcome these obstacles.
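The obstacle is rooted in Löb's theorem. In its classical formulation (stated here for context rather than taken from MIRI's papers), for any sufficiently strong formal system S with provability predicate \Box_S:

    S \vdash (\Box_S \varphi \rightarrow \varphi) \;\Longrightarrow\; S \vdash \varphi

It follows that S can prove the soundness schema \Box_S \varphi \rightarrow \varphi only for sentences \varphi it already proves, so an agent reasoning within S cannot establish that everything proved by an equally strong successor is true.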

Decision procedures

Standard decision procedures are not specified precisely enough (for example, with regard to counterfactuals) to be instantiated as algorithms.[12] They also tend to be inconsistent under reflection: an agent that initially uses causal decision theory will regret doing so, and will attempt to change its own decision procedure. MIRI has developed Timeless Decision Theory, an extension of causal decision theory that has been shown to avoid the failure modes of both causal and evidential decision theory on problems such as Newcomb’s Problem and the one-shot Prisoner’s Dilemma.[13]
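As a rough illustration of why the two classical procedures diverge, the following sketch (illustrative only; the predictor accuracy and payoffs are assumed, and this is not code from MIRI's papers) computes expected values in Newcomb's Problem, where a highly accurate predictor fills an opaque box with $1,000,000 only if it predicts the agent will take just that box, while a transparent box always holds $1,000:

# Illustrative sketch, not code from MIRI's papers: Newcomb's Problem under
# evidential conditioning versus the causal dominance argument.

ACCURACY = 0.99           # assumed accuracy of the predictor
SMALL, BIG = 1_000, 1_000_000

def edt_value(action):
    # Evidential reasoning: condition on the action, since an accurate
    # predictor has probably foreseen it.
    if action == "one-box":
        return ACCURACY * BIG            # the big box is probably full
    return ACCURACY * SMALL + (1 - ACCURACY) * (BIG + SMALL)

def cdt_value(action, p_big):
    # Causal reasoning: the boxes are already filled or not; taking both
    # adds SMALL regardless of what the predictor did.
    return p_big * BIG + (SMALL if action == "two-box" else 0)

for action in ("one-box", "two-box"):
    print(action, edt_value(action))
# EDT: one-box ~= $990,000 versus two-box ~= $11,000, so EDT one-boxes.
# CDT two-boxes for every fixed p_big, since the extra $1,000 dominates,
# yet agents that one-box walk away richer: the reflective tension above.

Timeless Decision Theory is intended to recommend the action that actually wins in such predictor problems while still behaving sensibly in ordinary causal situations.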

Value functions

Human values are complex and fragile; if even a small part of our value system were removed, the outcome could be of no value to us. For example, if the value of “desiring new experiences” were removed, we would relive a single optimized experience ad infinitum without ever getting bored. How can we ensure that an artificial agent will create a future that we desire, rather than a perverse instantiation of our instructions that misses a critical aspect of what we value? This is known as the value-loading problem (from Bostrom’s Superintelligence).[14]
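A toy model can make this fragility concrete. The sketch below is purely hypothetical (the experience names, values, and penalty are assumed; it is not drawn from MIRI's publications): deleting only the term of the value function that rewards novelty flips the optimal plan from a varied future to one repeated experience.

from itertools import product

# Hypothetical toy example: a planner chooses a sequence of experiences.
experiences = {"concert": 10, "hike": 8, "novel": 7}   # assumed base values

def value_full(history, repeat_penalty=6):
    # Intended values include a novelty term: repeated experiences are
    # worth less each additional time.
    total, seen = 0, set()
    for e in history:
        total += experiences[e] - (repeat_penalty if e in seen else 0)
        seen.add(e)
    return total

def value_no_novelty(history):
    # The same values with only the novelty term deleted.
    return sum(experiences[e] for e in history)

def best_plan(value_fn, horizon=3):
    # Exhaustively search all plans of the given length.
    return max(product(experiences, repeat=horizon), key=value_fn)

print(best_plan(value_full))        # a varied plan: ('concert', 'hike', 'novel')
print(best_plan(value_no_novelty))  # ('concert', 'concert', 'concert'), forever

The omission is small, but the optimum it produces is exactly the "same experience ad infinitum" outcome described above.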

Forecasting

In addition to mathematical research, MIRI also studies strategic questions related to AGI, such as: What can (and can’t) we predict about future AI? How can we improve our forecasting ability? Which interventions available today appear to be the most beneficial, given what little we do know?

History

In 2000, Eliezer Yudkowsky[15] and Internet entrepreneurs Brian and Sabine Atkins founded the Singularity Institute for Artificial Intelligence to "help humanity prepare for the moment when machine intelligence exceeded human intelligence".[16] At first, it operated primarily over the Internet, receiving financial contributions from transhumanists and futurists.

In 2002, it published on its website the paper Levels of Organization in General Intelligence,[17] a preprint of a book chapter later included in a compilation of general AI theories, entitled "Artificial General Intelligence" (Ben Goertzel and Cassio Pennachin, eds.). Later that year, it released its two main introductory pieces, "What is the Singularity"[18] and "Why Work Toward the Singularity".[19]

In 2003, the Institute appeared at the Foresight Senior Associates Gathering, where co-founder Eliezer Yudkowsky presented a talk titled "Foundations of Order". It also made an appearance at the Transvision 2003 conference[20] at Yale University, with a talk by institute volunteer Michael Anissimov.

In 2004, it released AsimovLaws.com,[21] a website examining AI morality in the context of the film I, Robot, starring Will Smith, which was released two days later. From July to October, the institute ran a Fellowship Challenge Grant that raised $35,000 over the course of three months. Early the next year, the Institute relocated from Atlanta, Georgia, to Silicon Valley.

In February 2006, the Institute completed a $200,000 Singularity Challenge fundraising drive,[22] in which every donation up to $100,000 was matched by Clarium Capital President, PayPal co-founder and Institute Advisor Peter Thiel.[23] The stated uses of the funds included hiring additional full-time staff, an additional full-time research fellow position, and the organization of the Singularity Summit at Stanford.

From 2009 to 2012, the Institute released about a dozen papers on subjects including machine ethics, the economic implications of AI, and decision theory.[24] Since 2009, MIRI has published seven peer-reviewed journal articles.[25]

Having previously shortened its name to simply Singularity Institute, in January 2013 it changed its name to the Machine Intelligence Research Institute in order to avoid confusion with Singularity University.[26]

Singularity Summit

In 2006, the Institute, along with the Symbolic Systems Program at Stanford, the Center for the Study of Language and Information, KurzweilAI.net, and Peter Thiel, co-sponsored the Singularity Summit at Stanford.[27] The summit took place on May 13, 2006, at Stanford University with Thiel moderating and 1,300 in attendance. The keynote speaker was Ray Kurzweil,[28] followed by eleven others: Nick Bostrom, Cory Doctorow, K. Eric Drexler, Douglas Hofstadter, Steve Jurvetson, Bill McKibben, Max More, Christine Peterson, John Smart, Sebastian Thrun, and Eliezer Yudkowsky.

The 2007 Singularity Summit took place on September 8–9, 2007, at the Palace of Fine Arts Theatre in San Francisco. A third Singularity Summit took place on October 25, 2008, at the Montgomery Theater in San Jose. The 2009 Singularity Summit took place on October 3 at the 92nd Street Y in New York City. The 2010 Summit was held on August 14–15, 2010, at the Hyatt Regency in San Francisco.[29] The 2011 Summit was held on October 16–17, 2011, at the 92nd Street Y in New York. The 2012 Singularity Summit was held on the weekend of October 13–14 at the Nob Hill Masonic Center, 1111 California Street, San Francisco.[30]

Center for Applied Rationality

In mid-2012, the Institute spun off a new organization, the Center for Applied Rationality (CFAR), whose focus is to help people apply the principles of rationality in their day-to-day lives and to research and develop debiasing techniques.[31][32] The organization is based in Berkeley, California, in the San Francisco Bay Area. CFAR develops and tests cognitive strategies and tools based on findings from cognitive science about how people form and change their beliefs. It also gives workshops that train people to internalize these strategies and apply the principles of rationality more regularly, in order to improve their reasoning and decision-making skills and achieve their goals.[33] According to its co-founder and president, Julia Galef, the term "applied" refers to a practical version of rationality in which people not only know how to be rational but also understand when being rational makes a difference.[33] Among the exercises taught in the three-day workshops are Goal Factoring, Pre-Hindsight, and Structured Procrastination.[34]

References

  1. "IRS Form 990" (PDF). Machine Intelligence Research Institute. 2013. Retrieved 17 December 2014.
  2. Intelligence Explosion Microeconomics
  3. What is Friendly AI?
  4. MIRI Overview
  5. Research workshops
  6. About Us
  7. I, Rodney Brooks, Am a Robot
  8. The Foundations of AI safety
  9. Bostrom, Nick (2014). "Is the default outcome doom?". Superintelligence: Paths, Dangers, Strategies (1st ed.). ISBN 0199678111. Proceeding from the idea of first-mover advantage, the orthogonality thesis, and the instrumental convergence thesis, we can now begin to see the outlines of an argument for fearing that a plausible default outcome of the creation of machine superintelligence is existential catastrophe.
  10. Tegmark, Max (2014). "Life, Our Universe and Everything". Our Mathematical Universe: My Quest for the Ultimate Nature of Reality (1st ed.). ISBN 9780307744258. Its owner may cede control to what Eliezer Yudkowsky terms a "Friendly AI,"...
  11. Tiling Agents for Self-Modifying AI, and the Löbian Obstacle
  12. A Comparison of Decision Algorithms on Newcomblike Problems
  13. Timeless Decision Theory
  14. Bostrom, Nick (2014). "Acquiring Values". Superintelligence: Paths, Dangers, Strategies (1st ed.). ISBN 0199678111.
  15. Scientists Fear Day Computers Become Smarter Than Humans, September 12, 2007
  16. Artificial Intelligence Conference in S.J. this week, San Jose Mercury News (CA), October 24, 2008, Business section, p. 3E
  17. Levels of Organization in General Intelligence
  18. "What is the Singularity"
  19. "Why Work Toward the Singularity"
  20. "Humanity 2.0: transhumanists believe that human nature's a phase we'll outgrow, like adolescence. Someday we'll be full-fledged adult posthumans, with physical and intellectual powers of which we can now only dream. But will progress really make perfect?"
  21. AsimovLaws.com
  22. Singularity Challenge
  23. The Singularity: Humanity's Last Invention?, Martin Kaste, National Public Radio
  24. Singularity Institute - Recent Publications
  25. "We are now the “Machine Intelligence Research Institute” (MIRI)", Luke Muehlhauser, 30 January 2013
  26. Smarter than thou?, San Francisco Chronicle, 12 May 2006
  27. Public meeting will re-examine future of artificial intelligence: Real brains are gathering in San Francisco to ponder the future of artificial intelligence, Tom Abate, SFGate.com, September 7, 2007
  28. Silicon Valley tycoon embraces sci-fi future, MSNBC Tech & Science
  29. "Singularity Summit: Logistics". SingularitySummit.com. Retrieved 2012-09-25.
  30. "July 2012 Newsletter". Singularity Institute.
  31. "About Us". Center for Applied Rationality.
  32. Stiefel, Todd; Metskas, Amanda K. (22 May 2013). "Julia Galef". The Humanist Hour. Episode 083. The Humanist. Retrieved 3 March 2015.
  33. Chen, Angela (1 January 2014). "More Rational Resolutions". The Wall Street Journal. Retrieved 5 March 2015.
