Robert M. French

Robert M. French is a research director at the French National Centre for Scientific Research. He is currently at the University of Burgundy in Dijon. He holds a Ph.D. from the University of Michigan, where he worked with Douglas Hofstadter on the Tabletop computational cognitive model. He specializes in cognitive science and has made an extensive study of the process of analogy-making.[1]

French is the inventor of Tabletop, a computer program that forms analogies in a microdomain consisting of everyday objects placed on a table.

He has done extensive research in artificial intelligence and has written several articles about the Turing test, proposed by Alan Turing in 1950 as a means of determining whether an advanced computer can be said to be intelligent. French was for a long time an outspoken critic of the test, which, he suggested, no computer might ever be able to pass. He now believes that the way forward in AI lies not in attempting to simulate human cognition flawlessly (i.e., to pass a Turing test) but rather in designing computers capable of developing their own abilities to understand the world, and in interacting with these machines in a meaningful way.[2]

He has published work on catastrophic forgetting in neural networks, the Turing test and foundations of cognitive science, the evolution of sex, and categorization and learning in infants, among other topics.

Early life and education

French attended Miami University from 1969 to 1972, earning a B.S. in mathematics after three years of study. From 1972 to 1974 he was at Indiana University, from which he received an M.A. in mathematics.[3]

Career

Early career and doctoral studies

From 1972 to 1974, French worked as a teaching assistant in mathematics at Indiana University. For several months in 1975, he taught mathematics at Hanover College in Hanover, Indiana.

He then moved to France, where from 1976 to 1985 he lived in Paris, working as a freelance translator and interpreter.[3] During his years there, he collaborated with a colleague, Jacqueline Henry, on the French translation of Douglas Hofstadter's bestseller Gödel, Escher, Bach.[1][3]

French returned to the U.S. in 1985 to become a graduate student in computer science at the University of Michigan, Ann Arbor, where he pursued a Ph.D. under Hofstadter in artificial intelligence/cognitive science. He completed his doctoral work in 1992, receiving a degree in computer science.

His Ph.D. dissertation was entitled Tabletop: An Emergent, Stochastic Computer Model of Analogy-Making. His thesis committee consisted of Hofstadter, John Holland, Daniel Dennett, Arthur Burks, John Laird, and Steve Lytinen.[3]

“The key notion underlying the research presented in this dissertation,” wrote French in his summary of the dissertation, “is my conviction that the cognitive mechanisms giving rise to human analogy-making form the very basis of intelligence. Our ability to perceive and create analogies is made possible by the same mechanisms that drive our ability to categorize, to generalize, and to compare different situations.”[4]

From 1985 to 1992 he was a research assistant in Computer Science at the University of Michigan, Ann Arbor. During this period he was also a Visiting Researcher at CREA, Ecole Polytechnique, Paris (1988), and a Visiting Lecturer in Computer Science at Earlham College in Richmond, Indiana (1991).

He spent several months in 1992 as a postdoctoral fellow at the Center for Research on Concepts and Cognition at Indiana University. From 1992 to 1994, he was Visiting Assistant Professor of Computer Science at Willamette University in Salem, Oregon. From 1994 to 1995, he was a Postdoctoral Fellow at the Department of Psychology at the University of Wisconsin, Madison, and a Lecturer in Cognitive Science in the Department of Educational Psychology at the same institution.[3]

Later career

From 1995 to 1998, French was a Research Scientist in the Department of Psychology at the University of Liège. From 1998 to 2000, he was an Associate Professor in Quantitative Psychology and Cognitive Science in the same department. From 2001 to 2004, he was a Professor of Quantitative Psychology and Cognitive Science in that department.

Since 2004, he has been a research director at the French National Centre for Scientific Research (CNRS).[3]

Selected Publications

Books

In his foreword to French's The Subtlety of Sameness, Daniel Dennett wrote that French “has created a model of human analogy-making that attempts to bridge the gap between classical top-down AI and more recent bottom-up approaches.” French's research, Dennett explained, “is based on the premise that human analogy-making is an extension of our constant background process of perceiving—in other words, that analogy-making and the perception of sameness are two sides of the same coin. At the heart of the author's theory and computer model of analogy-making is the idea that the building-up and the manipulation of representations are inseparable aspects of mental functioning, in contrast to traditional AI models of high-level cognitive processes, which have almost always depended on a clean separation.” Dennett maintained that “French's work is exciting not only because it reveals analogy-making to be an extension of our complex and subtle ability to perceive sameness but also because it offers a computational model of mechanisms underlying these processes. This model makes significant strides in putting into practice microlevel stochastic processing, distributed processing, simulated parallelism, and the integration of representation-building and representation-processing.”[5]

Arthur B. Markman of Columbia University, in a review for the International Journal of Neural Systems, described The Subtlety of Sameness as “fascinating.”[6]
A review in Choice said that “French reveals analogy-making to be an extension of our complex and subtle ability to perceive sameness. His computer program, Tabletop, forms analogies in a microdomain consisting of objects (utensils, cups, drinking glasses, etc.) on a table set for a meal. The theory and the program rely on the idea that stochastic choices made on the microlevel can add up to human-like robustness on a macrolevel. Thousands of program runs attempt to verify this on dozens of interrelated analogy problems in the Tabletop microworld.”[7]

Articles

References
