Music and the brain

From Wikipedia, the free encyclopedia

Sounds and noises are separated only by the experience of the listener. In the domain of the mind subjectivity reigns, and yet attempts are still made to chip away at individual variation and quantify the actions of the brain. As an ultimately subjective experience, music combines the cognitive elements of language, tonality, emotion and rhythm to elicit responses as variable as the individuals who are listening.


Pitch

There is nothing special about resonant frequencies; they occur in all wave forms in physics, but when they are aurally perceived we call them keys or pitches. Some pitches and combinations of pitches are processed as more harmonious than others. The pitches that listeners find pleasurable recur within each octave. In Western music there are twelve chromatic pitch classes, which can be represented in many ways.
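The relationship between the twelve chromatic pitch classes and the octave can be made concrete with a short sketch. In standard equal temperament (an assumption of this example, not stated in the article) the octave is a 2:1 frequency ratio divided into twelve equal steps, with A4 conventionally tuned to 440 Hz:

```python
# Equal temperament: each of the twelve chromatic steps multiplies the
# frequency by 2**(1/12), so twelve steps exactly double it (one octave).
A4 = 440.0  # conventional concert-pitch reference, in Hz

def pitch_frequency(semitones_from_a4):
    """Frequency of the pitch n semitones above (or below) A4."""
    return A4 * 2 ** (semitones_from_a4 / 12)

print(pitch_frequency(12))            # one octave up: 880.0 Hz
print(pitch_frequency(-12))           # one octave down: 220.0 Hz
print(round(pitch_frequency(3), 2))   # C5, three semitones up: 523.25 Hz
```

The same formula also illustrates why finer-than-semitone distinctions are possible: nothing in the physics restricts the exponent to twelfths of an octave.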

Eastern music includes more notes than Western music by making finer distinctions. For instance, in Western music only a single semitone separates a C from a C sharp, but Indian music recognizes several microtonal pitches between what Western listeners call a C and a C sharp. These pitches can be arranged into chords, which occur in characteristic sequences throughout Western music. For example, after two or three often-used chords are heard in sequence, the progression can be satisfactorily resolved only by a limited number of expected chords. This has been an area of interest for many researchers, as even musical laymen can detect these chord patterns and recognize when a chord progression has not resolved "correctly". This allows researchers such as Petr Janata to examine the areas of the brain that are affected during this cognitive event.

Recognizing pitches

Within the ear there is a small membrane called the basilar membrane. When we hear a certain pitch, the corresponding part of this tonotopically organized membrane responds and sends a signal to the auditory cortex. Studies suggest that once the signal arrives, specific regions handle each band of pitch, such that the area is organized into sections of cells responsive to particular frequencies, ranging from very low to very high pitches [1]. This organization may not be stable, and the specific cells responsive to different pitches may change over days or months [2]. It has been suggested that in some people this organization is less variable, leading to perfect pitch, or the ability to recognize the musical scale label of a certain tone without hearing it in reference to other tones.
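The tonotopic layout of the basilar membrane is often summarized by Greenwood's empirical position-to-frequency map. A minimal sketch, using the commonly quoted human parameter values (an assumption for illustration, not taken from this article):

```python
# Greenwood's tonotopic map: characteristic frequency as a function of
# relative position x along the basilar membrane (0 = apex, 1 = base).
# A, a, K below are the commonly quoted human fit parameters (assumed).
A, a, K = 165.4, 2.1, 0.88

def characteristic_frequency(x):
    """Characteristic frequency (Hz) at relative position x in [0, 1]."""
    return A * (10 ** (a * x) - K)

print(round(characteristic_frequency(0.0), 1))  # apex: ~20 Hz (lowest pitches)
print(round(characteristic_frequency(1.0)))     # base: ~20.7 kHz (highest pitches)
```

The exponential form means equal distances along the membrane correspond to roughly equal musical intervals, which is one way the ear's geometry meshes with the octave-based organization of pitch.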

Rhythm

Different parts of the auditory cortex are involved in processing rhythm, specifically the belt and parabelt areas of the right hemisphere. When individuals prepare to tap out a rhythm of regular intervals (1:2 or 1:3), the "left frontal cortex, left parietal cortex, and right cerebellum are all activated" (Tramo, 2001). With more difficult ratios, such as 1:2.5, more of the cortex and cerebellum are involved. Still, the structures involved in tonal comprehension and speech are better understood than those underlying rhythm, which involve many anatomically distant structures.
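The interval ratios in these tapping tasks can be sketched directly. With a simple ratio such as 1:2, every tap onset lands on a common underlying pulse; with 1:2.5 the onsets fall off any simple grid, which gives an intuition for why such rhythms recruit more neural machinery (the function and parameter names here are illustrative, not from the studies cited):

```python
# Generate tap-onset times for pairs of alternating short/long intervals
# whose durations stand in a fixed ratio, as in the tapping tasks above.
def onset_times(short, ratio, n_pairs):
    """Onset times (s) for n_pairs of alternating short/long intervals."""
    t, times = 0.0, [0.0]
    for _ in range(n_pairs):
        for interval in (short, short * ratio):
            t += interval
            times.append(round(t, 3))
    return times

print(onset_times(0.25, 2.0, 2))  # [0.0, 0.25, 0.75, 1.0, 1.5]  — all on a 0.25 s pulse
print(onset_times(0.25, 2.5, 2))  # [0.0, 0.25, 0.875, 1.125, 1.75] — no common pulse
```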

Internal rhythms

There have been efforts to link musical rhythm to some innate biological rhythm, although they have not met with much success. One loose correlation is that the beats per minute of a song have been known to affect heart rate and fall roughly in the range of a normal human heart beat. A fast song can make the heart beat faster, while a slower-paced song can make the heart beat slower[citation needed]. It would be interesting to see whether any connections run from the auditory cortex (or anywhere in the auditory network) to the medulla to regulate heart beat, or even from the ear directly to the hypothalamus, analogous to the retino-hypothalamic tract.

Several studies, one by Charles Gray of UC Davis and another by David McCormick of Yale University School of Medicine (Schechter, 1996), have shown that the brain has its own internal rhythm. Chatter cells coordinate rhythmic firing of millions of cells in bursts of around 30–60 hertz. However, these cells most likely link anatomically distant neural structures and are unlikely to have anything to do with musical rhythm.

Tonality and emotion

The planum temporale

It has been shown that the right auditory cortex is the primary component for perceiving pitch and parts of harmony, melody and rhythm (Tramo, 2001). One study by Petr Janata found tonally sensitive areas in the medial prefrontal cortex, the cerebellum, the superior temporal sulci of both hemispheres and the superior temporal gyri (with a skew towards the right hemisphere). When unpleasant melodies are played, the posterior cingulate cortex activates, which indicates a sense of conflict or emotional pain (Tramo, 2001). The right hemisphere has also been found to be correlated with emotion and can likewise activate areas in the cingulate in times of emotional pain, specifically social rejection (Eisenberger). This evidence, along with observation, has led many music theorists, philosophers and neuroscientists to link emotion with tonality. This seems almost obvious, because the tones in music resemble the tones in human speech, which carry emotional content. The vowels in the phonemes of a song are elongated for dramatic effect, and it seems as though musical tones are simply exaggerations of normal verbal tonality.

As inviting as the proposition sounds, studies on amusia suggest at least a slight separation between speech tonality and musical tonality. Congenital amusics are individuals who are incapable of distinguishing between pitches; they are unmoved by dissonance, and a wrong key on a piano never bothers them. They cannot be taught to remember a melody or to recite a song. That said, they are still capable of hearing the tonality of speech; for example, they can tell the difference between "You speak French" and "You speak French?" when spoken. Perhaps this suggests some sort of linear organization in the right brain for comprehending tone, analogous to the left hemisphere's linguistic organization. It would be interesting to see whether amusics have a flatter affect than controls, whether right-brain-damaged patients exhibit at least a partial amusia, or whether patients with amygdala damage exhibit some form of amusia. Tonality and rhythm seem to be the most important and distinctive components of music, but lyrics play an important part too.

Linguistics and organization

Linguistic processing has generally been attributed to the left side of the brain, especially to the famous Broca's Area, and the left planum temporale within Wernicke's area.

Broca's area (blue) and Wernicke's area (green) are areas on the left side of the brain involved in speech production and linguistic comprehension, respectively

Evolutionary neurobiologists have made endocasts of the skulls of early humans and have shown that society developed alongside the lateralization of the planum temporale to the left side. This area has been implicated in musical ability, linguistic ability and word memory. Musicians have been shown to have a significantly more developed left planum temporale, and have also been shown to have greater word memory (Chan et al.). Chan's study controlled for age, grade point average and years of education, and found that when given a 16-word memory test, the musicians averaged one to two more words than their non-musical counterparts.

Development

The researcher Malyarenko played background music for a group of four-year-old preschoolers over a period of six months. The musical group had significantly greater interhemispheric activity and coherence range than the control group, and the musical four-year-olds were also found to have greater left-hemisphere intrahemispheric coherence (Strickland, 2001). Musicians were found to have more developed anterior portions of the corpus callosum in a study by Cowell et al. in 1992 (Strickland, 2001). This was confirmed by a 1995 study by Schlaug et al., who found that classical musicians between the ages of 21 and 36 have significantly greater anterior corpora callosa than a non-musical control group. Schlaug also found a strong correlation between musical exposure before the age of seven and increased size of the corpus callosum (Strickland, 2001). These fibers join the left and right hemispheres and indicate increased relaying between the two sides of the brain, suggesting a merging of the spatial, emotional and tonal processing of the right brain with the linguistic processing of the left. It has been thought that this broad relaying across many different areas of the brain contributes to music's ability to aid memory function.

Memory

Musical training has been shown to aid memory in several ways. Although the exact neural mechanism is not fully agreed upon, it may act as a neural exercise for the different parts of the brain involved in memory, or it may form neural connections to a single memory from different angles, creating multiple pathways for its recall. Altenmuller et al. studied the difference between active and passive musical instruction and found the two equally effective in the short term. Over a longer period, however, the actively taught students retained much more information than the passively taught students. The actively taught students also showed greater cerebral cortex activation, indicating that they were more effectively taught. The passively taught students did not waste their time, though; they, along with the active group, displayed the greater left-hemisphere activity typical of trained musicians (Strickland, 2001).

There is also an anecdote of a woman with chronic dementia who could not remember integral portions of her life, such as her place of birth, her place of residence for most of her life, or whether she had had a short career singing on the radio. Despite this memory loss, she could remember every song she had sung perfectly (Skloot, 2002). It has also been observed that simple melodies get "stuck" in our heads more easily than complex ones. Evolutionary biologists have theorized that simpler tunes helped the ancient profession of the bard sing and remember oral histories, and it has been shown that the more predictable the tune, the more easily it gets stuck in the head (Shouse, 2001). When subjects are asked to replay a song in their heads, the same parts of the brain light up, only more faintly, and the primary auditory cortex is less activated.

Auditory cortices

The primary auditory cortex "is thought to identify the fundamental elements of music, such as pitch and loudness" ("Music, Maestro, Please!", 2002). This makes sense, as this is the area that receives direct input from the thalamus, which relays the actual sound from the ear. If there is no input, or no reason to process pitch or loudness, there is no function for the primary auditory cortex. The secondary auditory cortex has been implicated in the processing of "harmonic, melodic and rhythmic patterns" ("Music, Maestro, Please!", 2002). The tertiary auditory cortex supposedly integrates everything into the overall experience of music ("Music, Maestro, Please!", 2002).

Illustration of the primary, secondary and tertiary auditory cortices

This aligns with studies of people remembering a song in their minds: they perceive no sound, yet experience the melody, rhythm and overall feel of the music. By this reasoning, activation of the primary auditory cortex without auditory input should cause an auditory hallucination. The prevailing belief is that the whole experience of music does terminate in the tertiary auditory cortex, which unites everything into the full experience. If so, it would be interesting to study a subject without a tertiary auditory cortex, though this would be very difficult, as the tertiary cortex is simply a ring around the secondary, which is in turn a ring around the primary auditory cortex.

The power of music should not be underestimated. It is a neural triathlon, triggering an incredible concatenation of neural events along with many parallel processes. The linguistic, emotional, rhythmic and mnemonic powers of music have been a great source of entertainment and utility in both ancient and modern human environments. There is little doubt that the neurological study of musical comprehension has only just begun.


References

  1. ^ Arlinger, S., Elberling, C., Bak, C., Kofoed, B., Lebech, J., & Saermark, K. (1982). Cortical magnetic fields evoked by frequency glides of a continuous tone. EEG & Clinical Neurophysiology, 54, 642–653.
  2. ^ Janata, P., Birk, J., Van Horn, J., Leman, M., Tillmann, B., & Bharucha, J. (2002). The cortical topography of tonal structures underlying Western music. Science, 298, 2167–2170.