Eliezer Yudkowsky

From Wikipedia, the free encyclopedia

Eliezer Yudkowsky at the 2006 Stanford Singularity Summit.

Eliezer S. Yudkowsky is an American artificial intelligence researcher concerned with the technological singularity and an advocate of Friendly Artificial Intelligence.

Yudkowsky is a co-founder and research fellow of the Singularity Institute for Artificial Intelligence (SIAI), and the author of the SIAI publications "Levels of Organization in General Intelligence" and "Creating Friendly AI."

Yudkowsky's research focuses on artificial intelligence designs that enable self-understanding, self-modification, and recursive self-improvement (seed AI), and on architectures for stably benevolent motivational structures (Friendly AI). Apart from his research, Yudkowsky is notable for explaining technical subjects in non-academic language, particularly on the topic of rationality, as in "An Intuitive Explanation of Bayesian Reasoning."

His most recent academic contributions include two chapters in Nick Bostrom's forthcoming edited volume Global Catastrophic Risks.
