Eliezer Yudkowsky

From Wikipedia, the free encyclopedia

Eliezer Yudkowsky at the 2006 Stanford Singularity Summit.

Eliezer S. Yudkowsky (born 1979) is an American artificial intelligence researcher concerned with the Singularity and an advocate of Friendly Artificial Intelligence.

Yudkowsky is a co-founder and research fellow of the Singularity Institute for Artificial Intelligence (SIAI) and the author of the SIAI publications "Creating Friendly AI" (2001) and "Levels of Organization in General Intelligence" (2002). His most recent academic contributions include two chapters in Oxford philosopher Nick Bostrom's forthcoming edited volume Global Catastrophic Risks. Yudkowsky has no formal education in artificial intelligence.

Yudkowsky's research focuses on artificial intelligence designs that enable self-understanding, self-modification, and recursive self-improvement (seed AI), and on AI architectures with stably benevolent motivational structures (Friendly AI). Apart from his research, Yudkowsky is known for explaining technical subjects, particularly rationality, in non-academic language, as in "An Intuitive Explanation of Bayesian Reasoning". Yudkowsky advocates the moral philosophy of Singularitarianism.
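For context, the Bayesian reasoning that essay explains centers on Bayes' theorem, stated here in generic notation as an illustrative aside (the symbols are not drawn from the essay itself):

\[
P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}
\]

where H is a hypothesis and E is observed evidence: the posterior probability of the hypothesis equals the likelihood of the evidence under the hypothesis, multiplied by the prior probability of the hypothesis, and normalized by the overall probability of the evidence.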

Persondata
Name: Yudkowsky, Eliezer
Alternative names:
Short description: American artificial intelligence researcher
Date of birth: 1979
Place of birth: California
Date of death:
Place of death: