Frank H. Guenther
Frank H. Guenther (born April 18, 1964, Kansas City, MO) is an American computational and cognitive neuroscientist whose research focuses on the neural computations underlying speech, including characterization of the neural bases of communication disorders and development of brain-computer interfaces for communication restoration. He is currently a professor of speech, language, and hearing sciences and biomedical engineering at Boston University.
Education
Frank Guenther received a B.S. in electrical engineering from the University of Missouri in Columbia (1986), graduating summa cum laude and ranking first overall in the College of Engineering. He received an M.S. in electrical engineering from Princeton University (1987) and a Ph.D. in cognitive and neural systems from Boston University (1993).
Professional
In 1992, Guenther joined the faculty of the Cognitive & Neural Systems Department at Boston University, receiving tenure in 1998. In 2010 he became associate director of the Graduate Program for Neuroscience and director of the computational neuroscience PhD specialization at Boston University, and he joined the Department of Speech, Language, & Hearing Sciences at BU that same year. In addition to his Boston University appointments, Guenther was a research affiliate in the Research Laboratory of Electronics at the Massachusetts Institute of Technology from 1998 to 2011, and in 2011 he became a research affiliate in the Picower Institute for Learning and Memory at MIT. Since 1998 he has been a member of the Speech and Hearing Bioscience and Technology PhD program in the Harvard–MIT Division of Health Sciences and Technology, and since 2003 he has been a visiting scientist in the Department of Radiology at Massachusetts General Hospital. Guenther has given numerous keynote and distinguished lectures worldwide and has authored over 55 refereed journal articles on the neural bases of speech and motor control as well as brain-computer interface technology.
Research
Frank Guenther’s research is aimed at uncovering the neural computations underlying the processing of speech by the human brain. He is the originator of the Directions Into Velocities of Articulators (DIVA) model, which is currently the leading model of the neural computations underlying speech production.[1][2][3][4][5] The model mathematically characterizes the computations performed by each brain region involved in speech production as well as the function of the interconnections between these regions. It has been supported by a wide range of experimental tests of its predictions, including electromagnetic articulometry studies of speech movements,[6][7][8][9][10] auditory perturbation studies in which a speaker’s feedback of his or her own speech is modified in real time,[11][12][13][14] and functional magnetic resonance imaging studies of brain activity during speech,[12][15][16][17] though some parts of the model remain to be experimentally verified. The DIVA model has been used to investigate the neural underpinnings of a number of communication disorders, including stuttering,[18][19] apraxia of speech,[20][21] and hearing-impaired speech.[8][9][10]
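The central control idea in DIVA-style models, combining a learned feedforward command with a correction driven by auditory error, can be illustrated with a minimal sketch. This is not the published model; the gain, formant values, and perturbation size below are invented for illustration, and only the first formant of a single vowel is simulated.

```python
# Illustrative sketch of feedforward + auditory-feedback control, the idea
# at the core of DIVA-style speech production models. All numbers here are
# hypothetical; the published model is far richer (multiple brain regions,
# somatosensory feedback, learned forward maps, etc.).

def control_step(target, heard, feedforward, fb_gain=0.3):
    """One control step: feedforward command plus a feedback correction
    proportional to the error between the auditory target and what was heard."""
    auditory_error = target - heard
    return feedforward + fb_gain * auditory_error

# Simulate a vowel whose first formant (F1) should be heard at 700 Hz while
# an experimenter shifts the speaker's auditory feedback upward by 100 Hz,
# as in auditory perturbation studies.
target_f1 = 700.0
produced_f1 = 500.0          # initial production
perturbation = 100.0         # feedback shift applied by the experimenter
for _ in range(50):
    heard_f1 = produced_f1 + perturbation
    produced_f1 = control_step(target_f1, heard_f1, feedforward=produced_f1)

# With feedback fully trusted, production settles where the *heard* formant
# matches the target, i.e. production shifts opposite to the perturbation.
# (Real speakers typically show only partial compensation.)
print(round(produced_f1, 1))  # settles near 600.0 Hz
```

In the loop, the error shrinks by a factor of (1 − fb_gain) each step, so the production converges to the point where the heard formant equals the target, mimicking the compensatory shifts observed in perturbation experiments.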
In addition to computational modeling and experimental research investigating the neural bases of speech, Guenther directs the Boston University Neural Prosthesis Laboratory, which develops technologies for decoding the brain signals of profoundly paralyzed individuals, particularly those suffering from locked-in syndrome, in order to control external devices such as speech synthesizers, mobile robots, and computers. Guenther’s team, in collaboration with Dr. Philip Kennedy (inventor of the neurotrophic electrode used in the study) and Dr. Jonathan Brumberg, received widespread press coverage in 2009 when it developed a brain-computer interface for real-time speech synthesis that allowed locked-in patient Erik Ramsey to produce vowel sounds.[22] He has also made headlines for his research into non-invasive brain-computer interfaces for communication.[23][24] In 2011, Guenther founded the Unlock Project, a non-profit project aimed at providing free brain-computer interface technology to patients suffering from locked-in syndrome.
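The decoding step in a formant-based speech BCI can be sketched as mapping a vector of neural firing rates to the first two formant frequencies (F1, F2), which then drive a formant synthesizer. The sketch below fits a simple linear decoder by least squares on made-up data; the channel count, data, and decoder choice are assumptions for illustration, and the published system used more sophisticated real-time decoding.

```python
import numpy as np

# Illustrative sketch of linear neural decoding for a formant-based speech
# BCI. Everything here is synthetic: 16 recording channels, 200 training
# samples, and a hidden "true" linear map standing in for the real
# rates-to-formants relationship.

rng = np.random.default_rng(0)

# Fake training data: firing rates and the formants (F1, F2) they encode,
# generated by a hidden linear map plus observation noise.
rates = rng.normal(size=(200, 16))          # 200 samples x 16 channels
true_W = rng.normal(size=(16, 2))           # hidden map to (F1, F2)
formants = rates @ true_W + rng.normal(scale=0.1, size=(200, 2))

# Fit the decoder by ordinary least squares.
W, *_ = np.linalg.lstsq(rates, formants, rcond=None)

# Decode a new firing-rate sample into formant estimates; in a running
# system these values would update a formant synthesizer every few ms.
new_rates = rng.normal(size=16)
f1_est, f2_est = new_rates @ W
```

Because the synthetic data really are linear in the rates, the recovered weights closely match the hidden map; real neural data would require regularization and continual recalibration.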
Media
Frank Guenther’s research has been covered extensively in the science and mainstream media, including television spots on CNN News,[25] PBS News Hour,[23] and Fox News;[26] articles in popular science magazines Nature News,[27] New Scientist,[28][29] Discover,[30][31] and Scientific American;[32][33][34] and mainstream media coverage in Esquire,[35] Wired,[36] The Boston Globe,[37] MSNBC,[38] and BBC News.[39]
References
1. Guenther, F.H. (1994). A neural network model of speech acquisition and motor equivalent speech production. Biological Cybernetics, 72, pp. 43-53.
2. Guenther, F.H. (1995). Speech sound acquisition, coarticulation, and rate effects in a neural network model of speech production. Psychological Review, 102, pp. 594-621.
3. Guenther, F.H., Hampson, M., and Johnson, D. (1998). A theoretical investigation of reference frames for the planning of speech movements. Psychological Review, 105, pp. 611-633.
4. Guenther, F.H., Ghosh, S.S., and Tourville, J.A. (2006). Neural modeling and imaging of the cortical interactions underlying syllable production. Brain and Language, 96, pp. 280-301.
5. Golfinopoulos, E., Tourville, J.A., and Guenther, F.H. (2010). The integration of large-scale neural network modeling and functional brain imaging in speech motor control. NeuroImage, 52, pp. 862-874.
6. Perkell, J.S., Guenther, F.H., Lane, H., Matthies, M.L., Stockmann, E., Tiede, M., and Zandipour, M. (2004). The distinctness of speakers’ productions of vowel contrasts is related to their discrimination of the contrasts. Journal of the Acoustical Society of America, 116(4) Pt. 1, pp. 2338-2344.
7. Perkell, J.S., Matthies, M.L., Tiede, M., Lane, H., Zandipour, M., Marrone, N., Stockmann, E., and Guenther, F.H. (2004). The distinctness of speakers’ /s-sh/ contrast is related to their auditory discrimination and use of an articulatory saturation effect. Journal of Speech, Language, and Hearing Research, 47, pp. 1259-1269.
8. Lane, H., Denny, M., Guenther, F.H., Matthies, M.L., Menard, L., Perkell, J.S., Stockmann, E., Tiede, M., Vick, J., and Zandipour, M. (2005). Effects of bite blocks and hearing status on vowel production. Journal of the Acoustical Society of America, 118, pp. 1636-1646.
9. Lane, H., Denny, M., Guenther, F.H., Hanson, H., Marrone, N., Matthies, M.L., Perkell, J.S., Burton, E., Tiede, M., Vick, J., and Zandipour, M. (2007). On the structure of phoneme categories in listeners with cochlear implants. Journal of Speech, Language, and Hearing Research, 50, pp. 2-14.
10. Lane, H., Matthies, M.L., Denny, M., Guenther, F.H., Perkell, J.S., Stockmann, E., Tiede, M., Vick, J., and Zandipour, M. (2007). Effects of short- and long-term changes in auditory feedback on vowel and sibilant contrasts. Journal of Speech, Language, and Hearing Research, 50, pp. 913-927.
11. Villacorta, V.M., Perkell, J.S., and Guenther, F.H. (2007). Sensorimotor adaptation to feedback perturbations of vowel acoustics and its relation to perception. Journal of the Acoustical Society of America, 122, pp. 2306-2319.
12. Tourville, J.A., Reilly, K.J., and Guenther, F.H. (2008). Neural mechanisms underlying auditory feedback control of speech. NeuroImage, 39, pp. 1429-1443.
13. Patel, R., Niziolek, C., Reilly, K.J., and Guenther, F.H. (2011). Prosodic adaptations to pitch perturbation in running speech. Journal of Speech, Language, and Hearing Research, 54, pp. 1051-1059.
14. Cai, S., Ghosh, S.S., Guenther, F.H., and Perkell, J.S. (2011). Focal manipulations of formant trajectories reveal a role of auditory feedback in the online control of both within-syllable and between-syllable speech timing. Journal of Neuroscience, 31, pp. 16483-16490.
15. Ghosh, S.S., Tourville, J.A., and Guenther, F.H. (2008). A neuroimaging study of premotor lateralization and cerebellar involvement in the production of phonemes and syllables. Journal of Speech, Language, and Hearing Research, 51, pp. 1183-1202.
16. Bohland, J.W. and Guenther, F.H. (2006). An fMRI investigation of syllable sequence production. NeuroImage, 32, pp. 821-841.
17. Peeva, M.G., Guenther, F.H., Tourville, J.A., Nieto-Castanon, A., Anton, J.-L., Nazarian, B., and Alario, F.-X. (2010). Distinct representations of phonemes, syllables, and supra-syllabic sequences in the speech production network. NeuroImage, 50, pp. 626-638.
18. Max, L., Guenther, F.H., Gracco, V.L., Ghosh, S.S., and Wallace, M.E. (2004). Unstable or insufficiently activated internal models and feedback-biased motor control as sources of dysfluency: A theoretical model of stuttering. Contemporary Issues in Communication Science and Disorders, 31, pp. 105-122.
19. Civier, O., Tasko, S.M., and Guenther, F.H. (2010). Overreliance on auditory feedback may lead to sound/syllable repetitions: Simulations of stuttering and fluency-inducing conditions with a neural model of speech production. Journal of Fluency Disorders, 35, pp. 246-279.
20. Terband, H., Maassen, B., Guenther, F.H., and Brumberg, J. (2009). Computational neural modeling of speech motor control in childhood apraxia of speech. Journal of Speech, Language, and Hearing Research, 52, pp. 1595-1609.
21. Maas, E., Mailend, M.-L., Story, B.H., and Guenther, F.H. (2011). The role of auditory feedback in apraxia of speech: Effects of feedback masking on vowel contrast. 6th International Conference on Speech Motor Control, Groningen, The Netherlands.
22. Guenther, F.H., Brumberg, J.S., Wright, E.J., Nieto-Castanon, A., Tourville, J.A., Panko, M., Law, R., Siebert, S.A., Bartels, J.L., Andreasen, D.S., Ehirim, P., Mao, H., and Kennedy, P.R. (2009). A wireless brain-machine interface for real-time speech synthesis. PLoS ONE, 4(12), e8218.
23. “Brain-Powered Technology May Help Locked-In Patients.” PBS News Hour, October 14, 2011. http://www.pbs.org/newshour/rundown/2011/10/brain-powered-technology-may-help-locked-in-patients.html
24. “Cracking the Neural Code for Speech.” Science for the Public, WGBH, February 2012. http://www.scienceforthepublic.org/?page_id=4031
25. Lee, Y.S. “Scientists seek to help 'locked-in' man speak.” CNN, 14 December 2007. http://articles.cnn.com/2007-12-14/health/locked.in_1_brain-activity-neural-systems-neural-signals?_s=PM:HEALTH
26. Underwood, C. “Brain Implants May Let ‘Locked-In’ Patients Speak.” Fox News, 23 May 2008. http://www.foxnews.com/story/0,2933,357162,00.html
27. Smith, K. “Brain implant allows mute man to speak: Patient with paralysis controls speech synthesizer with his mind.” Nature News, 21 November 2008. http://www.nature.com/news/2008/081121/full/news.2008.1247.html
28. Callaway, E. “Locked-in man controls speech synthesizer with thought.” New Scientist, 15 December 2009. http://www.newscientist.com/article/dn18293-lockedin-man-controls-speech-synthesiser-with-thought.html
29. Thomson, H. “Telepathy machine reconstructs speech from brainwaves.” New Scientist, 31 January 2012. http://www.newscientist.com/article/dn21408-telepathy-machine-reconstructs-speech-from-brainwaves.html
30. Weed, W.S. “The Biology of…Stuttering.” Discover Magazine, 1 November 2002. http://discovermagazine.com/2002/nov/featbiology
31. Baker, S. “The Rise of the Cyborgs: Melding humans and machines to help the paralyzed walk, the mute speak and the near-dead return to life.” Discover Magazine, 26 September 2008. http://discovermagazine.com/2008/oct/26-rise-of-the-cyborgs
32. Gibbs, W.W. “From Mouth to Mind: New insights into how language warps the brain.” Scientific American, 15 July 2002. http://www.scientificamerican.com/article.cfm?id=from-mouth-to-mind
33. Svoboda, E. “Avoiding the Big Choke.” Scientific American Mind, February/March 2009. http://www.nature.com/scientificamericanmind/journal/v20/n1/full/scientificamericanmind0209-36.html
34. Brown, A.S. “Putting Thoughts into Action: Implants tap the thinking brain.” Scientific American, 12 November 2008. http://www.scientificamerican.com/article.cfm?id=putting-thoughts-into-action
35. Foer, J. “The unspeakable odyssey of the motionless boy.” Esquire, 2 October 2008. http://www.esquire.com/features/unspeakable-odyssey-motionless-boy-1008
36. Keim, B. “Wireless Brain-to-Computer Connection Synthesizes Speech.” Wired, 9 December 2009. http://www.wired.com/wiredscience/2009/12/wireless-brain/
37. Rosenbaum, S.I. “Out of silence, the sounds of hope.” Boston Globe, 27 July 2008. http://www.boston.com/news/health/articles/2008/07/27/out_of_silence_the_sounds_of_hope/
38. Klotz, I. “Device turns thoughts into speech.” MSNBC, 31 December 2009. http://www.msnbc.msn.com/id/34642356/ns/technology_and_science-innovation/t/device-turns-thoughts-speech/
39. “Paralysed man’s mind is ‘read’.” BBC News, 15 November 2007. http://news.bbc.co.uk/2/hi/7094526.stm
External links
- Frank Guenther’s homepage
- The Boston University Speech Lab homepage
- The Boston University Neural Prosthesis Lab homepage
- The Boston University Graduate Program for Neuroscience
- The Boston University PhD Program in Computational Neuroscience
- The Unlock Project