Cognitive musicology is a branch of cognitive science concerned with computationally modeling musical knowledge with the goal of understanding both music and cognition.[1]
Cognitive musicology can be differentiated from the related fields of music cognition, music psychology, and the cognitive neuroscience of music by its methodological emphasis. Cognitive musicology uses computer modeling to study music-related knowledge representation and has roots in artificial intelligence and cognitive science. The use of computer models provides an exacting, interactive medium in which to formulate and test theories.[2]
This interdisciplinary field investigates topics such as the parallels between language and music in the brain. Biologically inspired models of computation are often included in research, such as neural networks and evolutionary programs.[3] This field seeks to model how musical knowledge is represented, stored, perceived, performed, and generated. By using a well-structured computer environment, the systematic structures of these cognitive phenomena can be investigated.[4]
The polymath Christopher Longuet-Higgins, who coined the term "cognitive science", is one of the pioneers of cognitive musicology. Among other things, he is noted for the computational implementation of an early key-finding algorithm.[5] Identifying the key is an essential aspect of perceiving tonal music, and the key-finding problem has attracted considerable attention in the psychology of music over the past several decades. Carol Krumhansl proposed an empirically grounded key-finding algorithm which bears her name.[6] Her approach is based on key-profiles which she painstakingly determined by what has come to be known as the probe-tone technique.[7] David Temperley, whose early work within the field of cognitive musicology applied dynamic programming to aspects of music cognition, has suggested a number of refinements to the Krumhansl key-finding algorithm.[8]
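To illustrate the flavor of this line of work, the following is a minimal sketch of the correlational key-finding approach associated with Krumhansl: a pitch-class histogram of a passage is compared against major and minor key-profiles for all twelve possible tonics, and the best-correlated key wins. The profile values are the commonly cited Krumhansl-Kessler probe-tone ratings; the toy histogram, the helper names, and the use of Python are illustrative assumptions rather than details of any published implementation.

```python
# A minimal sketch of the correlational key-finding approach associated with
# Krumhansl. The input is a 12-element pitch-class histogram (total duration
# per pitch class, C=0 ... B=11); the output is the best-matching key. Profile
# values are the commonly cited Krumhansl-Kessler probe-tone ratings; treat
# this as an illustration, not a reference implementation.

from math import sqrt

MAJOR_PROFILE = [6.35, 2.23, 3.48, 2.33, 4.38, 4.09,
                 2.52, 5.19, 2.39, 3.66, 2.29, 2.88]
MINOR_PROFILE = [6.33, 2.68, 3.52, 5.38, 2.60, 3.53,
                 2.54, 4.75, 3.98, 2.69, 3.34, 3.17]
PITCH_NAMES = ["C", "C#", "D", "D#", "E", "F",
               "F#", "G", "G#", "A", "A#", "B"]

def pearson(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den if den else 0.0

def find_key(pc_histogram):
    """Return (key_name, correlation) for the best-matching major or minor key."""
    best = None
    for tonic in range(12):
        for profile, mode in ((MAJOR_PROFILE, "major"), (MINOR_PROFILE, "minor")):
            # Rotate the histogram so the candidate tonic sits in position 0,
            # aligning it with the tonic-relative profile.
            rotated = [pc_histogram[(tonic + i) % 12] for i in range(12)]
            r = pearson(rotated, profile)
            if best is None or r > best[1]:
                best = (f"{PITCH_NAMES[tonic]} {mode}", r)
    return best

if __name__ == "__main__":
    # Toy histogram emphasizing the C major scale degrees.
    histogram = [4.0, 0.0, 2.0, 0.0, 2.5, 2.0, 0.0, 3.0, 0.0, 1.5, 0.0, 1.0]
    print(find_key(histogram))  # expected to pick C major
```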
Otto Laske was a champion of cognitive musicology.[9] A collection of papers that he co-edited served to heighten the visibility of cognitive musicology and to strengthen its association with AI and music.[10] The foreword of this book reprints a free-wheeling interview with Marvin Minsky, one of the founding fathers of AI, in which he discusses some of his early writings on music and the mind.[11] AI researcher turned cognitive scientist Douglas Hofstadter has also contributed a number of ideas pertaining to music from an AI perspective.[12] Musician Steve Larson, who worked for a time in Hofstadter's lab, formulated a theory of "musical forces" derived by analogy with physical forces.[13] Hofstadter[14] also weighed in on David Cope's experiments in musical intelligence,[15] which take the form of a computer program called EMI that produces music in the style of, say, Bach, or Chopin, or Cope.
Cope's programs are written in Lisp, which turns out to be a popular language for research in cognitive musicology. Desain and Honing have exploited Lisp in their efforts to tap the potential of microworld methodology in cognitive musicology research.[16] Also working in Lisp, Heinrich Taube has explored computer composition from a wide variety of perspectives.[17] There are, of course, researchers who choose to use languages other than Lisp for their research into the computational modeling of musical processes. Robert Rowe, for example, explores "machine musicianship" through C++ programming.[18] A rather different computational methodology for researching musical phenomena is the toolkit approach advocated by David Huron.[19] At a higher level of abstraction, Geraint Wiggins has investigated general properties of music knowledge representations such as structural generality and expressive completeness.[20]
Although a great deal of cognitive musicology research features symbolic computation, notable contributions have been made from biologically inspired computational paradigms. For example, Jamshed Bharucha and Peter Todd have modeled the perception of tonal music with neural networks.[21] Al Biles has applied genetic algorithms to the composition of jazz solos.[22] Numerous researchers have explored algorithmic composition grounded in a wide range of mathematical formalisms.[23][24]
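As a rough illustration of the evolutionary paradigm (and not a reconstruction of Biles's system), the toy genetic algorithm below evolves short melodic phrases against a hand-written fitness function. The pitch set, the fitness criteria, and the Python implementation are all assumptions made for the sake of the example; real systems typically rely on a human or learned critic rather than a fixed formula.

```python
# A toy genetic algorithm in the spirit of evolutionary approaches to melody
# generation. Individuals are eight-note phrases drawn from a C major
# pentatonic pitch set; the hand-written fitness function rewards stepwise
# motion and a tonic ending, standing in for a more sophisticated critic.

import random

PITCHES = [60, 62, 64, 67, 69, 72]  # MIDI pitches: C major pentatonic, C4-C5
PHRASE_LEN = 8
POP_SIZE = 40
GENERATIONS = 60

def random_phrase():
    return [random.choice(PITCHES) for _ in range(PHRASE_LEN)]

def fitness(phrase):
    """Reward small melodic intervals and ending on the tonic (C)."""
    smoothness = -sum(abs(a - b) for a, b in zip(phrase, phrase[1:]))
    cadence = 10 if phrase[-1] % 12 == 0 else 0
    return smoothness + cadence

def crossover(a, b):
    # Single-point crossover between two parent phrases.
    cut = random.randint(1, PHRASE_LEN - 1)
    return a[:cut] + b[cut:]

def mutate(phrase, rate=0.1):
    # Replace each note with a random pitch with probability `rate`.
    return [random.choice(PITCHES) if random.random() < rate else p for p in phrase]

def evolve():
    population = [random_phrase() for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        population.sort(key=fitness, reverse=True)
        parents = population[:POP_SIZE // 2]  # truncation selection
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(POP_SIZE - len(parents))]
        population = parents + children
    return max(population, key=fitness)

if __name__ == "__main__":
    print(evolve())  # e.g. a mostly stepwise phrase ending on a C
```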
Within cognitive psychology, among the most prominent researchers is Diana Deutsch, whose wide-ranging work spans studies of absolute pitch and musical illusions, the formulation of musical knowledge representations, and the relationship between music and language.[25] Equally important is Aniruddh Patel, whose work combines traditional methodologies of cognitive psychology with neuroscience. Patel is also the author of a comprehensive survey of cognitive science research on music.[26]
Perhaps the most significant contribution to viewing music from a linguistic perspective is the Generative Theory of Tonal Music (GTTM) proposed by Fred Lerdahl and Ray Jackendoff.[27][28] Although GTTM is presented at the algorithmic level of abstraction rather than the implementational level, their ideas have been realized in a number of computational projects.[29]
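To give a sense of what such a computational realization might involve, here is a minimal sketch in the spirit of GTTM's proximity-based grouping preference rules, which favor group boundaries where the time gap between note attacks is locally large. The note representation and the "locally large" test used here are simplifying assumptions and do not correspond to any particular published implementation.

```python
# A minimal sketch of how one GTTM-style grouping preference rule might be
# implemented. GTTM's proximity rules (GPR 2) prefer a group boundary where
# the interval between attacks is larger than those around it; the
# (onset, duration) note format and the comparison used here are assumptions.

def grouping_boundaries(notes):
    """notes: list of (onset, duration) in beats, sorted by onset.
    Returns indices i such that a group boundary is preferred after note i."""
    boundaries = []
    for i in range(1, len(notes) - 2):
        # Inter-onset intervals around the candidate boundary between notes i and i+1.
        prev_ioi = notes[i][0] - notes[i - 1][0]
        this_ioi = notes[i + 1][0] - notes[i][0]
        next_ioi = notes[i + 2][0] - notes[i + 1][0]
        # Prefer a boundary where the interval exceeds both of its neighbours.
        if this_ioi > prev_ioi and this_ioi > next_ioi:
            boundaries.append(i)
    return boundaries

if __name__ == "__main__":
    # Two four-note figures separated by a longer gap: boundary expected after index 3.
    melody = [(0.0, 0.5), (0.5, 0.5), (1.0, 0.5), (1.5, 0.5),
              (3.0, 0.5), (3.5, 0.5), (4.0, 0.5), (4.5, 0.5)]
    print(grouping_boundaries(melody))  # [3]
```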
Cognitive musicology falls within the realm of the generative sciences, interdisciplinary fields that seek to explain phenomena by constructing processes capable of generating them. Studying a topic from a generative perspective means showing how it can arise from explicitly specified mechanisms. By studying cognitive musicology in this way, we can potentially understand how humans think about music and how those thought processes can be modeled computationally.