Singularity Institute for Artificial Intelligence

The Singularity Institute for Artificial Intelligence (SIAI) is a non-profit organization with the goal of developing a theory of Friendly artificial intelligence and implementing that theory as a software system. This goal is motivated by the belief that a technological singularity is likely to occur and that the outcome of such an event depends heavily on the structure of the first AI to exceed human-level intelligence. The organization was founded in 2000 and has the secondary goal of fostering broader discussion and understanding of moral, strong artificial intelligence.

The SIAI holds that Friendly AI ultimately offers better prospects for addressing major challenges facing humanity (e.g., disease, poverty, and hunger) than any other project seeking to advance the common good.

The executive director of SIAI is Tyler Emerson, its development director is Allison Taguchi, and its community director is Jeff Medina. Its researchers include Eliezer Yudkowsky, Marcello Herreshoff, and Michael Wilson. The SIAI is tax-exempt under Section 501(c)(3) of the United States Internal Revenue Code. In 2004 a Canadian branch, SIAI-CA, was formed by Michael Roy Ames to allow Canadian donors to benefit from tax relief; it is recognized as a Charitable Organization by the Canada Revenue Agency.

Founding

The Singularity Institute for AI was founded on July 22, 2000 by artificial intelligence researcher Eliezer Yudkowsky and internet entrepreneur Brian Atkins, after extended discussions on how best to approach the great risk and opportunity of smarter-than-human intelligence. Atkins and Yudkowsky met on the popular Extropy mailing list, on which they had both been participants for several years. The organization was originally incorporated in Atlanta, Georgia, where its Articles of Incorporation and Bylaws were drawn up.

Atkins and Yudkowsky both accepted the controversial thesis that creating an artificial intelligence capable of independently improving its own design (a seed AI) within the next few decades was possible, given sufficient effort and resources. Prominent thinkers supporting this position include Oxford philosopher Nick Bostrom and inventor Ray Kurzweil, both of whom joined the SIAI Advisory Board in 2004.

One of the stated purposes of the organization is to combat perceived apathy within the artificial intelligence community. The SIAI claims that the present view that AI has "failed" stems from early AI predictions that were over-optimistic, failing to anticipate the fundamental challenge posed by the complexity of human-level intellect. Consequently, it is claimed, a view has developed that general artificial intelligence is a monolithic, intractable problem, and that when it finally becomes tractable, this will somehow be "clear" from a more advanced future understanding. The SIAI asserts that such thinking is over-reactive and hurts the AI community's productivity by encouraging the idea that continuous progress on individual details will, at some unspecified point, coalesce into the end product of AI research. In opposition to this alleged apathy, SIAI proposes treating general intelligence as a concrete problem of theoretical engineering, and argues that a concerted, rigorous, and sustained effort to understand and replicate the mechanisms of intelligence, with realistic regard for the complexity and importance of the project, is more likely to succeed than the traditional "great miracle" scenario.

[edit] Early years

In 2000, around the time of the founding of the Singularity Institute, two books were released that discussed the potential and near-term (before 2040) feasibility of strong AI: Robot: Mere Machine to Transcendent Mind (ISBN 0-19-513630-6) by Carnegie Mellon roboticist Hans Moravec, and The Age of Spiritual Machines by Ray Kurzweil. The widespread popularity and success of both books contributed to the early growth and support of SIAI as an organization.

Primarily existing as an online entity, SIAI drew its main donor base from transhumanists and futurists who saw rogue AI as a greater threat to humanity than weapons of mass destruction (biological, nuclear, or nanotechnological), and who saw beneficial AI as one of the most valuable technological advances humanity could achieve. On June 15, 2001, the Singularity Institute released Creating Friendly AI: The Analysis and Design of Benevolent Goal Architectures, a book-length work by Eliezer Yudkowsky on the feasibility and details of Friendly AI, concurrently with the SIAI Guidelines on Friendly AI, analogous to the Foresight Guidelines on Nanotech Safety but addressing AI rather than nanotechnology. The response from the transhumanist and futurist communities was strong and positive; many, even skeptics of the near-term feasibility of AI, saw Yudkowsky's work as a valuable contribution to the long-term goal of constructing benevolent goal systems in self-modifying AI. Creating Friendly AI particularly emphasized that AIs would lack any complex inbuilt tendencies beyond what was painstakingly programmed into them, including human-typical tendencies such as arrogance, reactionism, competitiveness, empathy, philosophical contemplation, or even the very notion of an observer-centered goal system.

The Institute continued to grow steadily throughout the early 2000s. Concern about the Singularity and the possibility of strong AI began to emerge more strongly among supporters of the Foresight Institute, a Palo Alto-based organization focused on the transformative impact of future technologies, particularly nanotechnology. A special interest session on the Singularity was held during its Spring 2001 Senior Associates Gathering in Palo Alto. On May 3, 2001, What is Friendly AI?, a short SIAI paper discussing the topic, appeared on the high-traffic futurist website KurzweilAI.net, drawing additional attention to the Singularity Institute. On May 18, 2001, SIAI released General Intelligence and Seed AI, a short book-length document describing a starting point for a theory of seed AI.

On July 23, 2001, SIAI launched the open-source Flare Programming Language Project, described as an "annotative programming language" with features inspired by Python, Java, C++, Eiffel, Common Lisp, Scheme, Perl, Haskell, and others. The specifications were designed with the complex challenges of seed AI in mind, but the effort was quietly shelved less than a year later, when the Singularity Institute's analysts concluded that trying to invent a new programming language to tackle the problem of AI merely reflected ignorance of the theoretical foundations of the problem. Today the SIAI tentatively plans to use C++ or Java when a full-scale implementation effort is launched.
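
"Annotative" referred to the idea of attaching machine-readable metadata to program elements so that other code, including the program itself, could inspect and reason about them. The following is a minimal, hypothetical Python sketch of that general idea only; it is not Flare's actual syntax (which was XML-based), and the decorator and attribute names are invented for illustration.

    # A minimal, hypothetical sketch of the "annotative" idea: attaching
    # machine-readable metadata (annotations) to program elements so that
    # other code can inspect and reason about them. Illustration only;
    # this is Python, not Flare, and all names here are invented.

    def annotate(**metadata):
        """Attach arbitrary annotations to a function (hypothetical helper)."""
        def wrapper(fn):
            fn.flare_annotations = dict(metadata)  # invented attribute name
            return fn
        return wrapper

    @annotate(purpose="sorting",
              invariant="output is a permutation of the input")
    def sort_numbers(xs):
        return sorted(xs)

    # A tool, or a self-inspecting program, can query the annotations:
    print(sort_numbers.flare_annotations["invariant"])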

The next major publication from SIAI, Levels of Organization in General Intelligence, was released on April 7, 2002. The paper was a preprint of a book chapter to be included in an upcoming compilation of general AI theories entitled Real AI: New Approaches to Artificial General Intelligence (Ben Goertzel and Cassio Pennachin, eds.). Levels of Organization presents a more thoroughly developed version of the theory introduced in General Intelligence and Seed AI, and to date is the most detailed publicly available version of SIAI's AI theory.

The remainder of 2002 saw a number of milestones for SIAI. Christian Rovner joined as a full-time volunteer coordinator, the site design was overhauled, and several new documents were added, including "What is the Singularity" and "Why Work Toward the Singularity", SIAI's two main introductory pieces. A number of new volunteers joined SIAI, helping communicate its message to a wider audience. SIAI experienced its best year yet for funding, which had doubled every year since the organization's inception.

In 2003, the Singularity Institute made another strong showing at the Foresight Senior Associates Gathering, with Eliezer Yudkowsky giving a well-received talk, "Foundations of Order", which discussed seed AI as a new type of order-builder, intelligence building upon intelligence, distinct from emergence or biological evolution. He described humans as a peculiar example of an intelligence built by evolution: transitional entities between an era dominated by evolution and an era dominated by intelligence. An edited transcript of the talk was later published as "Why We Need Friendly AI". SIAI also made an appearance at the Transvision 2003 conference at Yale University, organized by the World Transhumanist Association, where SIAI volunteer Michael Anissimov gave the talk "Accelerating Progress and the Potential Consequences of Smarter than Human Intelligence".

Recent progress

On March 11, 2004, the Singularity Institute hired its second full-time employee, Executive Director Tyler Emerson. Before joining SIAI, Emerson worked with John Smart to co-organize the first Accelerating Change Conference, a yearly conference held at Stanford University and organized by the Acceleration Studies Foundation, a futurist organization that encourages dialogue on accelerating change and technology issues. On April 7, 2004, Michael Anissimov was named SIAI Advocacy Director, a part-time formal role. On July 14, 2004, SIAI released AsimovLaws.com, a website examining AI morality in the context of the film I, Robot, starring Will Smith and released just two days later. AsimovLaws.com was widely discussed and was linked from the popular weblog Slashdot.

In May 2004, SIAI released "Coherent Extrapolated Volition", an update to its theory of AI benevolence that described an extrapolation dynamic for turning perceptual data about human actions into beliefs meant to model people's true preferences. On June 1, 2004, British software engineer Michael Wilson was announced as an SIAI Associate Researcher, the second official researcher to join SIAI after Eliezer Yudkowsky. At the same time, SIAI released Becoming a Seed AI Programmer, a job-description document intended to attract development team members. From July to October, SIAI ran a Fellowship Challenge Grant that raised $35,000 over the course of three months. In December, SIAI began releasing a newsletter, the Singularity Update, which gives a comprehensive report of SIAI activities every three months.
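
The "Coherent Extrapolated Volition" proposal is philosophical rather than algorithmic, but its core move, inferring what people would want from what they actually do, can be illustrated with a toy example. The following Python sketch is purely illustrative and is not drawn from any SIAI publication: it estimates a preference ordering from noisy observed choices, standing in for the far harder problem of extrapolating "true" preferences from perceptual data.

    # A toy illustration, not SIAI's actual proposal: estimate an agent's
    # underlying preference ordering from noisy observations of its choices.
    # The win-rate heuristic and all names are invented for this sketch.

    from collections import defaultdict

    def infer_preferences(observed_choices):
        """Rank options by win rate over (chosen, rejected) observations."""
        wins = defaultdict(int)
        appearances = defaultdict(int)
        for chosen, rejected in observed_choices:
            wins[chosen] += 1
            appearances[chosen] += 1
            appearances[rejected] += 1
        return sorted(appearances,
                      key=lambda option: wins[option] / appearances[option],
                      reverse=True)

    # The agent "really" prefers health > wealth > fame, but sometimes
    # acts against its own preference (noise in the perceptual data).
    choices = ([("health", "wealth")] * 9 + [("wealth", "health")] +
               [("wealth", "fame")] * 8 + [("fame", "wealth")] * 2 +
               [("health", "fame")] * 10)
    print(infer_preferences(choices))  # ['health', 'wealth', 'fame']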

On October 14, 2004, SIAI announced the formation of its advisory board, with two initial members: Nick Bostrom, philosopher and director of the Future of Humanity Institute at Oxford, and Christine Peterson, co-founder and vice president of public policy of the Foresight Nanotech Institute (formerly the Foresight Institute). Later that year the prominent transhumanists Ray Kurzweil and Aubrey de Grey also joined the advisory board.

In February 2005, the Singularity Institute relocated from Atlanta, Georgia to Silicon Valley, with Tyler Emerson moving to Sunnyvale, California, and Eliezer Yudkowsky to Santa Clara, California. Silicon Valley has the highest concentration of Singularity Institute donors and supporters.

In February 2006, the Singularity Institute completed a $200,000 Singularity Challenge fundraising drive, in which every donation up to a total of $100,000 was matched by Clarium Capital president, PayPal co-founder, and SIAI advisor Peter Thiel. The stated uses of the funds included hiring additional full-time staff, funding an additional full-time research fellow position, and putting on the Singularity Summit at Stanford in May 2006.

In April 2006, the Singularity Institute hired its third full-time employee, Communications Director Carolyn L Burke, founder and CEO of Integrity Incorporated, which advises corporations on security, privacy, corporate governance, and incorporating integrity and ethics into business practices.

In May 2006, the Singularity Institute co-sponsored the Singularity Summit at Stanford with the Symbolic Systems Program at Stanford, the Center for the Study of Language and Information, and KurzweilAI.net. With 1,300 attendees, the summit was the largest gathering to date on the singularity hypothesis. Ray Kurzweil gave the keynote, followed by eleven other speakers: Nick Bostrom, Cory Doctorow, K. Eric Drexler, Douglas R. Hofstadter, Steve Jurvetson, Bill McKibben, Max More, Christine Peterson, John Smart, Sebastian Thrun, and Eliezer Yudkowsky. The main organizer was SIAI Executive Director Tyler Emerson, and the main sponsor and moderator was Peter Thiel. Extensive coverage of the conference, including audio, video, and photos, is available at the Singularity Summit website.

In May 2006, the Singularity Institute hired Marcello Herreshoff, Stanford class of 2011, as an associate researcher for at least one year; Herreshoff deferred his attendance at Stanford to work full-time with Yudkowsky on Friendly AI research.

In July 2006, Allison Taguchi was hired as Director of Development, overseeing all fund development operations for the Singularity Institute. She brings over 12 years of fund development experience at research institutes, universities, government agencies, and nonprofit organizations; organizations she has worked with include the Rushford Nanotech Laboratory, the Department of Defense, the Oakland Military Institute, and the University of Hawaii Biotech Research Center.

See also

  • People, theories and terms directly related to SIAI:
    • Eliezer Yudkowsky - co-founder of the SIAI and originator of many of its theories
    • Friendly AI - a theory originated by Eliezer Yudkowsky and advanced by the SIAI
    • Seed AI - a theory originated by Eliezer Yudkowsky and advanced by the SIAI
    • Singularitarianism - the name given to individuals, such as members of SIAI, who work to increase technical understanding of the singularity and to ensure that, if it is realized, it is realized safely

External links