Evolutionary musicology

Evolutionary musicology is a subfield of biomusicology that grounds the psychological mechanisms of music perception and production in evolutionary theory. It covers vocal communication in non-human animal species, theories of the evolution of human music, and cross-cultural human universals in musical ability and processing.

History

The origins of the field can be traced back to Charles Darwin, who wrote in The Descent of Man:

"When we treat of sexual selection we shall see that primeval man, or rather some early progenitor of man, probably first used his voice in producing true musical cadences, that is in singing, as do some of the gibbon-apes at the present day; and we may conclude from a widely-spread analogy, that this power would have been especially exerted during the courtship of the sexes,--would have expressed various emotions, such as love, jealousy, triumph,--and would have served as a challenge to rivals. It is, therefore, probable that the imitation of musical cries by articulate sounds may have given rise to words expressive of various complex emotions." [1]

This theory of a musical protolanguage has been revived and rediscovered repeatedly, often without attribution to Darwin.[2][3]

Controversies

Current debate concentrates on whether music constitutes an evolutionary adaptation or an exaptation (i.e. a by-product of evolution). In How the Mind Works, for example, Steven Pinker argues that music is merely "auditory cheesecake": it was evolutionarily adaptive to have a preference for fat and sugar, but cheesecake itself played no role in that selection process. Adaptation, on the other hand, is highlighted in hypotheses such as that of Edward Hagen and Gregory Bryant, which posits that human music evolved from animal territorial signals, eventually becoming a method of signaling a group's social cohesion to other groups for the purpose of forming beneficial multi-group alliances.[4] Part of the problem in the debate is that music, like any complex cognitive function, is not a holistic entity but rather modular:[5] perception and production of rhythm, melody, harmony, and other musical parameters may thus involve multiple cognitive functions with possibly quite distinct evolutionary histories.

Musilanguage

Musilanguage is a term coined by Steven Brown to describe the above-mentioned Darwinian theory that music and language have a common ancestor.

It is both a model of musical and linguistic evolution and a term for a particular stage in that evolution. Brown holds that both music and human language originate in what he calls the "musilanguage" stage of evolution. On this view, the structural features shared by music and language are neither the result of mere chance parallelism nor a function of one system emerging from the other; rather, "music and language are seen as reciprocal specializations of a dual-natured referential emotive communicative precursor, whereby music emphasizes sound as emotive meaning and language emphasizes sound as referential meaning."[6]

The musilanguage model is a structural model of music evolution: it views music's acoustic properties as the effects of homologous precursor functions. It can be contrasted with functional models of music evolution, which view music's innate physical properties as determined by its adaptive roles.

Musilanguage hinges on the idea that sound patterns produced by humans fall at varying points along a single spectrum of acoustic expression. At one end of the spectrum lie semanticity and lexical meaning, where completely arbitrary patterns of sound are used to convey purely symbolic meaning that lacks any emotional content; this is the "sound reference" end of the spectrum. At the other end are sound patterns that convey only emotional meaning and are devoid of conceptual and semantic reference points; this is the "sound emotion" end of the spectrum.

Both of these endpoints are theoretical: music falls nearer the "sound emotion" end of the spectrum, while human language falls nearer the "sound reference" end. Music and language often combine to exploit this spectrum in distinctive ways. Musical narratives that lack clearly defined lexical meaning, such as those of the band Sigur Rós, whose vocals are in a made-up language, fall toward the "sound emotion" end, while lexical narratives such as stories or news articles, which carry far more semantic content, fall toward the "sound reference" end. Language emphasizes sound reference and music emphasizes sound emotion, but language can no more be completely devoid of sound emotion than music can be completely devoid of sound reference. The emphasis differs between music and language, but both are evolutionary subcategories of the musilanguage stage, which intertwined sound reference and sound emotion much more tightly.
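
One way to make the spectrum concrete is to treat each utterance as carrying both a referential weight and an emotive weight, and to read its position on the spectrum from their ratio. The following Python sketch is purely illustrative; the Utterance class, the weighting scheme, and the example numbers are assumptions introduced here, not part of Brown's model.

    from dataclasses import dataclass

    @dataclass
    class Utterance:
        """A sound pattern placed on the single spectrum of acoustic
        expression: "sound reference" at one end, "sound emotion" at
        the other. Weights are hypothetical, for illustration only."""
        label: str
        reference: float  # weight on symbolic/lexical meaning
        emotion: float    # weight on emotive meaning

        def position(self) -> float:
            # 0.0 = pure sound emotion, 1.0 = pure sound reference.
            # Neither endpoint is reachable in practice: no utterance
            # is entirely devoid of either kind of meaning.
            return self.reference / (self.reference + self.emotion)

    examples = [
        Utterance("news article", reference=0.9, emotion=0.2),
        Utterance("everyday conversation", reference=0.7, emotion=0.5),
        Utterance("Sigur Rós vocals (made-up language)", reference=0.1, emotion=0.9),
        Utterance("instrumental melody", reference=0.05, emotion=0.95),
    ]

    for u in sorted(examples, key=lambda u: u.position(), reverse=True):
        print(f"{u.position():.2f}  {u.label}")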

Properties of the musilanguage stage

The musilanguage stage exhibits three properties that help make it a viable explanation for the evolution of both language and music: lexical tone, combinatorial phrase formation, and expressive phrasing mechanisms. Many of these ideas have their roots in existing phonological theory in linguistics, but phonological theory has largely neglected the strong mechanistic parallels between melody, phrasing, and rhythm in speech and music.

Lexical tone

Lexical tone refers to the use of pitch in speech as a vehicle for semantic meaning. The importance of pitch in conveying musical ideas is well known, but the linguistic importance of pitch is less obvious. Tonal languages, in which the lexical meaning of a sound depends heavily on its pitch relative to other sounds, are seen as evolutionary artifacts of musilanguage; according to Brown, the majority of the world's languages are tonal. Nontonal, or "intonation", languages, which do not depend heavily on pitch for lexical meaning, are seen as evolutionary latecomers that have discarded their dependence on tone. Intermediate cases, known as pitch-accent languages, are exemplified by Japanese, Swedish, Serbian, and Croatian; these languages exhibit some lexical dependence on tone but also depend heavily on intonation.
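
The contrast can be illustrated with the standard Mandarin example of the syllable "ma", whose lexical meaning changes with its tone. The small Python sketch below is illustrative only and is not drawn from Brown's paper.

    # Lexical tone: in Mandarin, the same syllable carries different
    # lexical meanings depending on its pitch contour (a standard
    # textbook example).
    mandarin_ma = {
        "ma1 (high level)":     "mother (妈)",
        "ma2 (rising)":         "hemp (麻)",
        "ma3 (falling-rising)": "horse (马)",
        "ma4 (falling)":        "to scold (骂)",
    }

    # In a nontonal ("intonation") language such as English, pitch
    # changes pragmatic force, not lexical meaning:
    english_really = {
        "rising pitch":  "'really?' (a question)",
        "falling pitch": "'really.' (a statement)",
    }

    for tone, meaning in mandarin_ma.items():
        print(f"{tone:22} -> {meaning}")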

Combinatorial formation

Combinatorial formation refers to the ability to form small phrases from different tonal elements. These phrases must be able to exhibit melodic, rhythmic, and semantic variation, and must be combinable with other phrases to create global melodic formulas capable of conveying emotive meaning. The analogue in modern speech is the set of rules for combining sounds into words and words into sentences; in music, the notes of different scales are combined according to their own rules to form larger musical ideas.
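
As a rough sketch of this idea, combinatorial formation can be expressed as nested composition: tonal units combine into phrases, and phrases combine into larger melodic formulas. The unit inventory, phrase length, and combination rule below are all invented for illustration and are not part of Brown's model.

    import itertools
    import random

    # Toy inventory of tonal elements: (pitch, duration) pairs drawn
    # from an arbitrary scale. Purely illustrative.
    PITCHES = ["C", "D", "E", "F", "G", "A", "B"]
    DURATIONS = ["quarter", "half"]
    UNITS = list(itertools.product(PITCHES, DURATIONS))

    def make_phrase(rng, length=4):
        """Combine tonal units into a small phrase
        (melodic/rhythmic variation)."""
        return [rng.choice(UNITS) for _ in range(length)]

    def make_formula(rng, n_phrases=2):
        """Combine phrases into a larger 'global melodic formula'."""
        return [make_phrase(rng) for _ in range(n_phrases)]

    rng = random.Random(0)  # seeded so the example is reproducible
    for i, phrase in enumerate(make_formula(rng), start=1):
        print(f"phrase {i}:", " ".join(f"{p}({d})" for p, d in phrase))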

Expressive phrasing

Expressive phrasing is the device by which expressive emphasis is added to phrases, both at a local level (individual units) and at a global level (whole phrases). This can happen in numerous ways in both speech and music, which exhibit many interesting parallels to one another. For instance, increasing the amplitude of a sound played on an instrument accents that sound in much the same way that an increase in amplitude accents a particular point a speaker is trying to make. Similarly, speaking very rapidly often creates a frenzied effect that mirrors that of a very fast (presto) musical passage.
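
A minimal sketch of the two levels of emphasis, assuming a hypothetical Note type (none of these names come from Brown's paper): a local accent raises the amplitude of one unit, while a global tempo change rescales the durations of a whole phrase.

    from dataclasses import dataclass, replace

    @dataclass(frozen=True)
    class Note:
        pitch: str
        amplitude: float   # relative loudness; 1.0 = unaccented
        duration_s: float  # duration in seconds

    def accent(note, boost=1.5):
        """Local emphasis: boost one unit's amplitude,
        like stressing a word in speech."""
        return replace(note, amplitude=note.amplitude * boost)

    def press_tempo(phrase, factor=0.5):
        """Global emphasis: shorten every duration,
        like rapid speech or a presto passage."""
        return [replace(n, duration_s=n.duration_s * factor) for n in phrase]

    phrase = [Note("C", 1.0, 0.5), Note("E", 1.0, 0.5), Note("G", 1.0, 0.5)]
    phrase[0] = accent(phrase[0])  # local: accent the first note
    phrase = press_tempo(phrase)   # global: presto-like speed-up
    for n in phrase:
        print(n)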

Taken together, these properties provide the necessary foundations for the emergence of language and music as distinct communicative media that nonetheless share the common ancestor of musilanguage, in which they combined to form a single expressive medium. Exploration of the origins of music is still in its infancy, but the concept of "musilanguage" offers a theoretical springboard for further research into the origins of music.

References

  1. ^ Darwin, Charles (1871). The Descent of Man, and Selection in Relation to Sex. <http://www.gutenberg.org/dirs/etext00/dscmn10.txt>
  2. ^ Wallin, Nils L.; Merker, Björn; Brown, Steven, eds. (2000). The Origins of Music. Cambridge, MA: MIT Press. ISBN 0-262-23206-5.
  3. ^ Mithen, Steven (2006). The Singing Neanderthals: The Origins of Music, Language, Mind and Body. Harvard University Press.
  4. ^ Hagen, Edward H.; Bryant, Gregory A. (2003). "Music and dance as a coalition signaling system". Human Nature 14 (1): 21–51. <http://itb.biologie.hu-berlin.de/~hagen/papers/music.pdf>
  5. ^ Fodor, Jerry A. (1983). Modularity of Mind: An Essay on Faculty Psychology. Cambridge, MA: MIT Press. ISBN 0-262-56025-9.
  6. ^ Brown, Steven (2000). "The 'Musilanguage' Model of Music Evolution". In Wallin, Nils L.; Merker, Björn; Brown, Steven, eds. The Origins of Music. Cambridge, MA: MIT Press, pp. 271–301. ISBN 0-262-23206-5.
