Stephen Grossberg
Stephen Grossberg is a cognitive scientist, neuroscientist, biomedical engineer, and neuromorphic technologist. He is the Wang Professor of Cognitive and Neural Systems and a Professor of Mathematics, Psychology, and Biomedical Engineering at Boston University.[1]
Education
Grossberg graduated from Stuyvesant High School in Manhattan. He received a B.A. from Dartmouth College, an M.S. from Stanford University, and a Ph.D. in mathematics from Rockefeller University in 1967.[1]
Research
Grossberg is a founder of the fields of computational neuroscience, connectionist cognitive science, and neuromorphic technology. His work focuses on the design principles and mechanisms that enable the behavior of individuals, or machines, to adapt autonomously in real time to unexpected environmental challenges. This research has included neural models of vision and image processing; object and event learning and pattern recognition; audition, speech, and language; cognitive information processing; reinforcement learning and cognitive-emotional interactions; autonomous navigation; adaptive sensory-motor control and robotics; self-organizing neurodynamics; mental disorders; and neural network technology. Grossberg has published seventeen books or journal special issues and over 500 research articles, and holds seven patents.
Grossberg has studied how brains give rise to minds since he took the introductory psychology course as a freshman at Dartmouth College in 1957. That course began his journey in the fields of computational neuroscience, connectionist cognitive science, and neuromorphic technology. At that time, Grossberg introduced the paradigm of using nonlinear systems of differential equations to show how brain mechanisms can give rise to behavioral functions.[2] This paradigm is helping to solve the classical mind/body problem, and it provides the basic mathematical formalism used in biological neural network research today. Of historical interest is the fact that artificial intelligence was also born at Dartmouth College, at a conference in the summer of 1956, just before Grossberg arrived there.

In 1957-1958, Grossberg discovered widely used equations for (1) short-term memory (STM), or neuronal activation (often called the Additive and Shunting models, or the Hopfield model after John Hopfield's 1984 application of the Additive model equation); (2) medium-term memory (MTM), or activity-dependent habituation (often called habituative transmitter gates, or depressing synapses after Larry Abbott's 1997 introduction of this term); and (3) long-term memory (LTM), or neuronal learning (often called gated steepest descent learning).

One variant of these learning equations, called Instar Learning, was introduced by Grossberg in 1976 into Adaptive Resonance Theory and Self-Organizing Maps for the learning of adaptive filters in these models. This learning equation was also used by Kohonen in his applications of Self-Organizing Maps starting in 1984. Another variant, called Outstar Learning, was used by Grossberg starting in 1967 for spatial pattern learning. Outstar and Instar learning were combined by Grossberg in 1976 in a three-layer network for learning multi-dimensional maps from any m-dimensional input space to any n-dimensional output space; this combination was called counterpropagation by Hecht-Nielsen in 1987.
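In schematic form, these three equation families can be written as below. This is an illustrative sketch in generic notation, not a transcription of the original papers: x_i denotes a cell's STM activity, y_i a habituative (MTM) transmitter gate, z_ji an adaptive (LTM) weight, I_i and J_i^+/J_i^- external or net excitatory/inhibitory inputs, f a signal function, and A, B, C, epsilon, delta constants.

```latex
% Illustrative sketch in generic notation (not a transcription of the original papers).

% (1) STM, Additive model: activity decays while summing weighted signals plus input
\[ \frac{dx_i}{dt} = -A_i x_i + \sum_j f_j(x_j)\, z_{ji} + I_i \]

% STM, Shunting model: excitation and inhibition receive automatic gain control,
% keeping x_i bounded between -C and B
\[ \frac{dx_i}{dt} = -A x_i + (B - x_i)\, J_i^{+} - (x_i + C)\, J_i^{-} \]

% (2) MTM, habituative transmitter gate: y_i recovers toward 1 and is depleted
% in proportion to the signal it gates
\[ \frac{dy_i}{dt} = \epsilon (1 - y_i) - \delta f(x_i)\, y_i \]

% (3) LTM, gated steepest descent (instar form): learning is gated on by the
% postsynaptic signal f(x_j) and drives z_{ji} toward the presynaptic activity x_i
\[ \frac{dz_{ji}}{dt} = f(x_j)\bigl(-z_{ji} + x_i\bigr) \]

% LTM, outstar form: learning is gated by the presynaptic source signal f(x_i),
% so the weights z_{ij} learn the spatial pattern of activities x_j across target cells
\[ \frac{dz_{ij}}{dt} = f(x_i)\bigl(-z_{ij} + x_j\bigr) \]
```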
Models that Grossberg introduced and helped to develop include:
- Foundations of neural network research: competitive learning, self-organizing maps, instars, and masking fields (for classification); outstars (for spatial pattern learning); avalanches (for serial order learning and performance); and gated dipoles (for opponent processing).
- Cognitive development, working memory, cognitive information processing, and attention: Adaptive Resonance Theory (ART), ARTMAP, STORE, CORT-X, SpaN, LIST PARSE, lisTELOS, SMART, CRIB.
- Visual perception, attention, recognition, and search: BCS/FCS, FACADE, 3D LAMINART, aFILM, LIGHTSHAFT, Motion BCS, 3D FORMOTION, MODE, VIEWNET, ARTSCAN, ARTSCENE.
- Auditory perception, speech, and language processing: SPINET, ARTSTREAM, ARTPHONE, ARTWORD, NormNet.
- Cognitive-emotional dynamics and adaptively timed behavior: CogEM, START, MOTIVATOR.
- Visual and spatial navigation: SOVEREIGN, STARS, ViSTARS, GRIDSmap, GridPlaceMap.
- Spatial and sensory-motor processing: VITE, FLETE, VITEWRITE, DIRECT, VAM, SACCART, TELOS, SAC-SPEM.
Career
Grossberg founded several institutions aimed at providing interdisciplinary training and research in the fields of computational neuroscience, connectionist cognitive science, and neuromorphic technology. In 1981, he founded the Center for Adaptive Systems at Boston University and remains its Director. In 1991, he founded the Department of Cognitive and Neural Systems at Boston University and served as its Chairman until 2007. In 2004, he founded the NSF Center of Excellence for Learning in Education, Science, and Technology (CELEST)[3] and served as its Director until 2009.[4] All of these institutions were aimed at answering two related questions: How does the brain control behavior? How can technology emulate biological intelligence? In addition, Grossberg founded and was the first President of the International Neural Network Society (INNS), which grew to 3,700 members from 49 U.S. states and 38 countries during the fourteen months of his presidency. The formation of INNS soon led to the formation of the European Neural Network Society (ENNS) and the Japanese Neural Network Society (JNNS). Grossberg also founded the INNS official journal, Neural Networks, and was its Editor-in-Chief from 1988 to 2010.[5] Neural Networks is also the archival journal of ENNS and JNNS.
Grossberg has also served on the editorial boards of more than 25 other journals, including Journal of Cognitive Neuroscience, Behavioral and Brain Sciences, Cognitive Brain Research, Cognitive Science, Neural Computation, IEEE Transactions on Neural Networks, IEEE Expert, and the International Journal of Humanoid Robotics. He has organized many conferences since the 1970s.
Awards
Grossberg won the first IEEE Neural Network Pioneer Award in 1991, the INNS Leadership Award in 1992, the Boston Computer Society Thinking Technology Award in 1992, the Information Science Award of the Association for Intelligent Machinery in 2000, the Charles River Laboratories prize of the Society for Behavioral Toxicology in 2002, and the INNS Helmholtz Award in 2003. He was elected a member of the Memory Disorders Research Society in 1990, a Fellow of the American Psychological Association in 1994, a Fellow of the Society of Experimental Psychologists in 1996, a Fellow of the American Psychological Society in 2002, an IEEE Fellow in 2005, an Inaugural Fellow of the American Educational Research Association in 2008, and an INNS Fellow in 2011.
ART theory
With Gail Carpenter, Grossberg developed adaptive resonance theory (ART). ART is a cognitive and neural theory of how the brain can quickly learn, and stably remember and recognize, objects and events in a changing world. ART proposes a solution to the stability-plasticity dilemma: how a brain or machine can learn quickly about new objects and events without just as quickly being forced to forget previously learned, but still useful, memories. ART predicts how learned top-down expectations focus attention on expected combinations of features, leading to a synchronous resonance that can drive fast learning. ART also predicts how sufficiently large mismatches between bottom-up feature patterns and top-down expectations can drive a memory search, or hypothesis testing, for recognition categories with which to better learn to classify the world. ART thus defines a type of self-organizing production system. ART has been demonstrated in practice through the ART family of classifiers (e.g., ART 1, ART 2, ART 2A, ART 3, ARTMAP, fuzzy ARTMAP, ART eMAP, distributed ARTMAP), developed with Gail Carpenter, which have been used in large-scale engineering and technology applications where fast, yet stable, incrementally learned classification and prediction are needed.
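To make the search-and-resonance cycle concrete, here is a minimal sketch of an ART 1-style procedure for binary inputs, written in Python. It is illustrative only: the function and parameter names (art1_cluster, vigilance, beta, max_categories) are introduced here for exposition, and the update rules are simplified algorithmic analogues of, not transcriptions of, the published ART 1 differential equations.

```python
import numpy as np

def art1_cluster(inputs, vigilance=0.75, max_categories=20, beta=1e-6):
    """Simplified ART 1-style clustering of binary vectors.

    An illustrative sketch of the ART search cycle (choice, vigilance
    test, reset, resonance, fast learning), not a transcription of the
    published Carpenter-Grossberg ART 1 equations.
    """
    templates = []                        # one binary template per committed category

    for x in np.asarray(inputs, dtype=bool):
        # Bottom-up choice: rank committed categories by a Weber-law choice function.
        scores = [np.sum(w & x) / (beta + np.sum(w)) for w in templates]
        order = np.argsort(scores)[::-1]

        chosen = None
        for j in order:                   # memory search: reset mismatched categories
            match = np.sum(templates[j] & x) / max(np.sum(x), 1)
            if match >= vigilance:        # vigilance test passed: resonance
                chosen = j
                break

        if chosen is None:
            if len(templates) < max_categories:
                templates.append(x.copy())   # commit a new (uncommitted) category
        else:
            templates[chosen] &= x           # fast learning: template <- template AND input

    return templates
```

In this sketch, raising the vigilance parameter forces finer, more selective categories, while lowering it yields coarser generalization, mirroring how vigilance controls the trade-off between specificity and generalization in ART.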
New computational paradigms
Grossberg has recently led the development of two computational paradigms that are relevant to biological intelligence and its applications:
Complementary Computing
What is the nature of brain specialization? Many scientists have proposed that our brains possess independent modules, as in a digital computer. The brain’s organization into distinct anatomical areas and processing streams shows that brain processing is indeed specialized. However, independent modules should be able to fully compute their particular processes on their own, and much behavioral data argue against this possibility. Complementary Computing (Grossberg, 2000,[6] 2012[7]) concerns the discovery that pairs of parallel cortical processing streams compute complementary properties in the brain. Each stream has complementary computational strengths and weaknesses, much as complementary variables do in physical principles such as the Heisenberg Uncertainty Principle. Each cortical stream can also possess multiple processing stages, which realize a hierarchical resolution of uncertainty. "Uncertainty" here means that computing one set of properties at a given stage prevents computation of a complementary set of properties at that stage. Complementary Computing proposes that the computational unit of brain processing that has behavioral significance consists of parallel interactions between complementary cortical processing streams, with multiple processing stages, that together compute complete information about a particular type of biological intelligence.
Laminar Computing
The cerebral cortex, the seat of higher intelligence in all modalities, is organized into layered circuits (often with six main layers) that undergo characteristic bottom-up, top-down, and horizontal interactions. How do specializations of this shared laminar design embody different types of biological intelligence, including vision, speech and language, and cognition? Laminar Computing proposes how this can happen (Grossberg, 1999,[8] 2012[7]). It explains how the laminar design of neocortex may realize the best properties of feedforward and feedback processing, digital and analog processing, and bottom-up data-driven processing and top-down attentive hypothesis-driven processing. Embodying such designs in VLSI chips promises to enable the development of increasingly general-purpose adaptive autonomous algorithms for multiple applications.
References
- ↑ Faculty page at Boston University
- ↑ Towards building a neural networks community
- ↑ CELEST at Boston University
- ↑ "$36.5 Million for Three Centers to Explore How Humans, Animals, and Machines Learn", National Science Foundation, cited at Newswise, September 30, 2004
- ↑ "Elsevier Announces New Co-Editor-In-Chief for Neural Networks", Elsevier, December 23, 2010
- ↑ The complementary brain: Unifying brain dynamics and modularity.
- ↑ Adaptive Resonance Theory: How a brain learns to consciously attend, learn, and recognize a changing world.
- ↑ How does the cerebral cortex work? Learning, attention and grouping by the laminar circuits of visual cortex.