Evolutionary musicology
From Wikipedia, the free encyclopedia
Evolutionary musicology is a subfield of biomusicology, promoted in the book The Origins of Music[1] as an application of evolutionary psychology's metatheoretical approach to human music. Examples of theories in this discipline, as documented in the volume, include Geoffrey Miller's sexually selected fitness-indicator approach to human creative abilities, including music, and Steven Brown's "musilanguage" model of music–language co-evolution.
The most common position of evolutionary psychologists is to assume that a proposed cognitive adaptation is actually a by-product of other functions until evidence proves otherwise; Steven Pinker argues for this explanation of human music in How the Mind Works. A more recent hypothesis, published by Edward Hagen and Gregory Bryant, is that human music evolved as a method of signaling a group's social cohesion to other groups for the purpose of forming beneficial multi-group alliances.
Musilanguage
Musilanguage is a term coined by Steven Brown to describe the theory that music and language share a common ancestor.
It names both a model of musical and linguistic evolution and a particular stage in that evolution. Brown argues that both music and human language originated in a "musilanguage" stage of evolution. On this view, the structural features shared by music and language are neither the result of mere chance parallelism nor a function of one system emerging from the other; rather, "music and language are seen as reciprocal specializations of a dual-natured referential emotive communicative precursor, whereby music emphasizes sound as emotive meaning and language emphasizes sound as referential meaning."[2]
The musilanguage model is a structural model of music evolution, meaning that it views music's acoustic properties as effects of homologous precursor functions. This can be contrasted with functional models of music evolution, which hold that music's innate physical properties are determined by its adaptive roles.
Musilanguage hinges on the idea that sound patterns produced by humans fall at varying places on a single spectrum of acoustic expression. At one end of the spectrum, we find semanticity and lexical meaning, whereby completely arbitrary patterns of sound are used to convey a purely symbolic meaning that lacks any emotional content. This is called the "sound reference" end of the spectrum. At the other end of the spectrum are sound patterns that convey only emotional meaning and are devoid of conceptual and semantic reference points. This is the "sound emotion" side of the spectrum.
In fact, both of these endpoints are theoretical idealizations: music falls closer to the sound-emotion end of the spectrum, while human language falls closer to the sound-reference end. Music and language often combine to use this spectrum in distinctive ways. Musical narratives that lack clearly defined meaning, such as those of the band Sigur Rós, whose vocals are in a made-up language, fall toward the sound-emotion end, while lexical narratives such as stories or news articles, with their greater semantic content, fall toward the sound-reference end. Language emphasizes sound reference and music emphasizes sound emotion, but language can no more be completely devoid of sound emotion than music can be completely devoid of sound reference. The emphasis differs between music and language, but both are evolutionary descendants of the musilanguage stage, which intertwined sound reference and sound emotion much more tightly.
Properties of the Musilanguage stage
The musilanguage stage exhibits three properties that make it a viable explanation for the evolution of both language and music: lexical tone, combinatorial phrase formation, and expressive phrasing mechanisms. Many of these ideas have their roots in existing phonological theory in linguistics, but phonological theory has largely neglected the strong mechanistic parallels between melody, phrasing, and rhythm in speech and music.
Lexical tone
Lexical tone refers to the use of pitch in speech as a vehicle for semantic meaning. The importance of pitch in conveying musical ideas is well known, but its linguistic importance is less obvious. Tonal languages, in which the lexical meaning of a sound depends heavily on its pitch relative to other sounds, are seen as evolutionary artifacts of musilanguage. According to Brown, the majority of the world's languages are tonal. Nontonal, or "intonation", languages, which do not depend heavily on pitch for lexical meaning, are seen as evolutionary latecomers that have discarded their dependence on tone. Intermediate cases, known as pitch-accent languages, are exemplified by Japanese, Swedish, Serbian, and Croatian; these languages exhibit some lexical dependence on tone but also rely heavily on intonation.
Combinatorial formation
Combinatorial formation refers to the ability to form small phrases from different tonal elements. These phrases must be able to exhibit melodic, rhythmic, and semantic variation, and must be combinable with other phrases into global melodic formulas capable of conveying emotive meaning. Modern speech offers an analogue in the rules for arranging phonemes into words and words into sentences; in music, the notes of different scales are combined according to their own rules to form larger musical ideas.
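The generative power of combinatorial formation comes from the fact that even a few discrete units yield many possible phrases. The following toy sketch (purely illustrative, not part of Brown's model; the element names and function are hypothetical) enumerates the short "phrases" that a small alphabet of tonal elements can form:

```python
import itertools

# Hypothetical tonal elements standing in for the discrete units
# that combinatorial formation composes into phrases.
ELEMENTS = ["high", "mid", "low"]

def phrases(elements, length):
    """Return every ordered sequence of tonal elements of the given length."""
    return ["-".join(p) for p in itertools.product(elements, repeat=length)]

two_note = phrases(ELEMENTS, 2)
print(len(two_note))   # 3 elements in 2 slots -> 9 possible phrases
print(two_note[0])     # "high-high"
```

The point of the sketch is only that the number of distinct phrases grows exponentially with phrase length, which is what lets a small inventory of units convey a large range of melodic and semantic variation.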
Expressive phrasing
Expressive phrasing is the device by which expressive emphasis is added to phrases, at both a local level (individual units) and a global level (whole phrases). This can happen in numerous ways in both speech and music, with many striking parallels between the two. For instance, increasing the amplitude of a sound played on an instrument accents that sound in much the same way that a speaker raises the volume of his or her voice to accent a particular point. Similarly, speaking very rapidly often creates a frenzied effect that mirrors that of a very fast (presto) musical passage.
Taken together, these properties provide the necessary foundations for the emergence of language and music as distinct communicative media that nonetheless share the common ancestor of musilanguage, in which they combined to form a single expressive medium. The exploration of the origins of music is still in its infancy, but the concept of "musilanguage" offers a theoretical springboard for further investigation.
References
- ^ Wallin, Nils L.; Merker, Björn; Brown, Steven, eds. (2000). The Origins of Music. Cambridge, MA: MIT Press. ISBN 0-262-23206-5. The foundational volume of evolutionary musicology; chapters cover vocal communication in non-human animal species, theories of the evolution of human music, and cross-cultural human universals in musical ability and processing.
- ^ Brown, Steven (2000). "The 'Musilanguage' Model of Music Evolution". In Wallin, Merker, and Brown (eds.), The Origins of Music. Cambridge, MA: MIT Press, pp. 271–301. ISBN 0-262-23206-5.
- Chase, Wayne. "Did Music and Language Co-evolve? Similarities Between Music and Language", How Music Really Works. Roedy Black Publishing. ISBN 1-897311-55-9 and ISBN 1-897311-56-7.
- Ball, Philip. "Music: The international language?", New Scientist, 2005-07-09.
- Nechvatal, Tony. "Musical talk", New Scientist, 2005-08-06.
- Tagg, Philip. A Short Prehistory of Western Music. Provisional course material, W310 degree course, IPM, University of Liverpool. Retrieved December 21, 2005.
- Ruviaro, Bruno Tucunduva (2004-06-03). "The Spell of Speech, 'The Musilanguage model'".