Michael I. Jordan

Michael I. Jordan is a leading researcher in machine learning and artificial intelligence. Jordan was a prime mover behind popularising Bayesian networks in the machine learning community and is known for pointing out links between machine learning and statistics. Jordan was also prominent in the formalisation of variational methods for approximate inference and the popularisation of the expectation-maximization algorithm in machine learning.

Jordan was a student of David E. Rumelhart and a member of the PDP Group in the 1980s. During this time he developed recurrent neural networks as a cognitive model. In recent years, however, his work has been driven less by a cognitive perspective and more by the tradition of statistics.

Jordan is currently a full professor at the University of California, Berkeley, where his appointment is split between the Department of Statistics and the Department of Electrical Engineering and Computer Sciences (EECS).

Many of Jordan's former graduate students and postdocs have gone on to strongly influence the machine learning field. Zoubin Ghahramani, Tommi Jaakkola, Andrew Ng, Lawrence Saul and David Blei, all former students or postdocs of Jordan, have continued to make significant contributions to the field.

Controversies

Jordan and his school have explicitly argued against double-blind review, on the grounds that it is difficult to squeeze more theoretical papers into eight pages and that reviewers otherwise have no way to check the validity of the claims made in a paper.

External links