Eliezer Yudkowsky

From Wikipedia, the free encyclopedia

Eliezer Yudkowsky at the 2006 Stanford Singularity Summit.
Born: 1979
Nationality: American
Fields: Artificial intelligence
Institutions: Singularity Institute for Artificial Intelligence
Known for: Seed AI, Friendly AI
Religious stance: Transhumanist Atheist

Eliezer S. Yudkowsky (born 1979) is an American artificial intelligence researcher concerned with the Singularity and an advocate of Friendly Artificial Intelligence.

Yudkowsky is a co-founder and research fellow of the Singularity Institute for Artificial Intelligence (SIAI), and the author of the SIAI publications "Creating Friendly AI" (2001) and "Levels of Organization in General Intelligence" (2002). His most recent academic contributions include two chapters in Oxford philosopher Nick Bostrom's forthcoming edited volume Global Catastrophic Risks. He is an autodidact with no formal education in artificial intelligence.

Yudkowsky's research focuses on artificial intelligence designs that enable self-understanding, self-modification, and recursive self-improvement (seed AI), and on architectures for stably benevolent motivational structures (Friendly AI). Apart from his research, Yudkowsky is notable for his explanations of technical subjects in non-academic language, particularly on rationality, as in "An Intuitive Explanation of Bayesian Reasoning". He advocates the moral philosophy of Singularitarianism.
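For context, the theorem that essay explains, Bayes' rule, can be stated as follows (the notation, with H for a hypothesis and E for the observed evidence, is illustrative and not drawn from the essay):

\[
P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)}
\]

In words, the probability assigned to a hypothesis after seeing the evidence is the prior probability P(H) reweighted by the likelihood P(E | H) and normalized by the overall probability of the evidence P(E).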

Yudkowsky is a frequent contributor to Overcoming Bias, the blog of Oxford University's Future of Humanity Institute.
