Eliezer Yudkowsky
Eliezer S. Yudkowsky is an American artificial intelligence researcher concerned with the Singularity and an advocate of Friendly Artificial Intelligence.
He is a co-founder and research fellow of the Singularity Institute for Artificial Intelligence (SIAI), and the author of the SIAI publications "Levels of Organization in General Intelligence" and "Creating Friendly AI."
Yudkowsky's research work focuses on artificial intelligence designs that enable self-understanding, self-modification, and recursive self-improvement (seed AI), and on artificial intelligence architectures for stably benevolent motivational structures (Friendly AI). Apart from his research work, Yudkowsky is notable for his explanations of technical subjects in non-academic language, particularly on rationality, such as "An Intuitive Explanation of Bayesian Reasoning."
His most recent academic contributions include two chapters in Nick Bostrom's forthcoming edited volume Global Catastrophic Risks.
See also
- Singularitarianism - a moral philosophy advanced by Yudkowsky
External links
Persondata | |
---|---|
NAME | Yudkowsky, Eliezer |
ALTERNATIVE NAMES | |
SHORT DESCRIPTION | American Artificial Intelligence researcher |
DATE OF BIRTH | |
PLACE OF BIRTH | California |
DATE OF DEATH | |
PLACE OF DEATH | |