Phonology (from Ancient Greek: φωνή, phōnḗ, "voice, sound" and λόγος, lógos, "word, speech, subject of discussion") is the systematic use of sound to encode meaning in any spoken human language, or the field of linguistics studying this use. Just as a language has syntax and vocabulary, it also has a phonology in the sense of a sound system. When describing the formal area of study, the term typically refers to linguistic analysis either of units beneath the word (e.g., syllable, onset and rhyme, phoneme, articulatory gestures, articulatory features, mora) or of units at all levels of language that are thought to structure sound for conveying linguistic meaning.
Phonology is thus viewed as the subfield of linguistics that deals with the sound systems of languages. Whereas phonetics concerns the physical production, acoustic transmission, and perception of the sounds of speech, phonology describes the way sounds function within a given language or across languages to encode meaning. For much of the 20th century, the term "phonology" was used as a cover term uniting phonemics and phonetics. Current phonology can interface with disciplines such as psycholinguistics and speech perception, resulting in specific areas like articulatory or laboratory phonology.
An important part of traditional forms of phonology has been studying which sounds can be grouped into distinctive units within a language; these units are known as phonemes. For example, in English, the [p] sound in pot is aspirated (pronounced [pʰ]), while the word- and syllable-final [p] in soup is not aspirated (indeed, it might be realized as a glottal stop). However, English speakers intuitively treat both sounds as variations (allophones) of the same phonological category, that is, of the phoneme /p/. Traditionally, it would be argued that if a word-initial aspirated [p] were interchanged with the word-final unaspirated [p] in soup, they would still be perceived by native speakers of English as "the same" /p/. (However, speech perception findings now put this theory in doubt.) Although some sort of "sameness" of these two sounds holds in English, it is not universal and may be absent in other languages. For example, in Thai, Hindi, and Quechua, aspiration and non-aspiration differentiate phonemes: that is, there are minimal pairs, word pairs that differ only in this feature.
In addition to the minimal units that can serve the purpose of differentiating meaning (the phonemes), phonology studies how sounds alternate, i.e. replace one another in different forms of the same morpheme (allomorphs), as well as, e.g., syllable structure, stress, accent, and intonation.
The principles of phonological theory have also been applied to the analysis of sign languages, even though the sub-lexical units are not instantiated as speech sounds. The principles of phonological analysis can be applied independently of modality because they are designed to serve as general analytical tools, not language-specific ones. It should be noted, however, that it is difficult to analyze a language phonologically without speaking it, and most phonological analysis proceeds with recourse to phonetic information.
The writing systems of some languages are based on the phonemic principle of having one letter (or combination of letters) per phoneme and vice versa. Ideally, speakers can correctly write whatever they can say, and can correctly read anything that is written. However, in English, different spellings can be used for the same phoneme (e.g., rude and food have the same vowel sound), and the same letter (or combination of letters) can represent different phonemes (e.g., the "th" consonant sounds of thin and this are different). To avoid confusion based on orthography, phonologists represent phonemes by writing them between two slashes: " / / ". On the other hand, references to variations of phonemes, or attempts at representing actual speech sounds, are usually enclosed in square brackets: " [ ] ". While the letters between slashes may be based on spelling conventions, the letters between square brackets are usually from the International Phonetic Alphabet (IPA) or some other phonetic transcription system. Additionally, angled brackets " ⟨ ⟩ " can be used to isolate the graphemes of an alphabetic writing system.
Part of the phonological study of a language involves looking at data (phonetic transcriptions of the speech of native speakers) and trying to deduce what the underlying phonemes are and what the sound inventory of the language is. Even though a language may make distinctions between a small number of phonemes, speakers actually produce many more phonetic sounds. Thus, a phoneme in a particular language can be instantiated in many ways.
Traditionally, looking for minimal pairs forms part of the research in studying the phoneme inventory of a language. A minimal pair is a pair of words from the same language that differ by only a single categorical sound and that are recognized by speakers as being two different words. When there is a minimal pair, the two sounds are said to be realizations of distinct phonemes. However, since it is often impossible to detect or agree on the existence of all the possible phonemes of a language with this method, other approaches are used as well.
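The search for minimal pairs described above can be sketched mechanically: compare every pair of transcribed words and keep those that differ in exactly one segment. The following is a minimal illustration only; the lexicon and its segment-list transcriptions are simplified stand-ins, not real phonological analyses.

```python
from itertools import combinations

def is_minimal_pair(a, b):
    """True if two transcriptions (lists of segments) differ in exactly one segment."""
    if len(a) != len(b):
        return False
    return sum(1 for x, y in zip(a, b) if x != y) == 1

def find_minimal_pairs(lexicon):
    """Scan a lexicon mapping words to segment lists for minimal pairs."""
    return [
        (w1, w2)
        for (w1, s1), (w2, s2) in combinations(sorted(lexicon.items()), 2)
        if is_minimal_pair(s1, s2)
    ]

# Toy English-like lexicon with simplified transcriptions.
lexicon = {
    "pin":  ["p", "ɪ", "n"],
    "bin":  ["b", "ɪ", "n"],
    "pit":  ["p", "ɪ", "t"],
    "spin": ["s", "p", "ɪ", "n"],
}

print(find_minimal_pairs(lexicon))  # [('bin', 'pin'), ('pin', 'pit')]
```

Note that the sketch treats "differ in one segment" purely positionally; a real analysis must also handle pairs of unequal length and decide what counts as a single categorical sound.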
If two similar sounds do not belong to separate phonemes, they are called allophones of the same underlying phoneme. For instance, voiceless stops (/p/, /t/, /k/) can be aspirated. In English, voiceless stops at the beginning of a stressed syllable (but not after /s/) are aspirated, whereas after /s/ they are not. This can be felt by holding the fingers right in front of the lips and noticing the difference in breathiness when saying pin versus spin. Since no English word begins with an unaspirated [p], aspirated [pʰ] (the [ʰ] marks aspiration) and unaspirated [p] are allophones of the same phoneme /p/ in English. This is an example of complementary distribution.
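The complementary distribution just described can be stated as a simple rule over transcriptions: aspirate a voiceless stop in the relevant onset position, but not after /s/. The sketch below uses word-initial position as a crude stand-in for "onset of a stressed syllable"; the segment representation is an assumption for illustration.

```python
def aspirate(segments):
    """Toy version of the English aspiration rule: a voiceless stop is
    realized as aspirated word-initially (standing in here for the onset
    of a stressed syllable), but not when it follows /s/."""
    voiceless_stops = {"p", "t", "k"}
    return [
        seg + "ʰ" if seg in voiceless_stops and i == 0 else seg
        for i, seg in enumerate(segments)
    ]

print(aspirate(["p", "ɪ", "n"]))       # /pɪn/ → ['pʰ', 'ɪ', 'n']
print(aspirate(["s", "p", "ɪ", "n"]))  # /spɪn/ → ['s', 'p', 'ɪ', 'n']
```

Because the two realizations never occur in the same environment, no contrast is possible, which is exactly what makes them allophones rather than separate phonemes.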
The /t/ sounds in the words tub, stub, but, butter, and button are all pronounced differently in American English, yet are all intuited to be "the same sound"; they therefore constitute another example of allophones of the same phoneme in English. However, an intuition such as this could be interpreted as a function of post-lexical recognition of the sounds. That is, all are seen as examples of English /t/ once the word itself has been recognized.
The findings and insights of speech perception and articulation research complicate this idea of interchangeable allophones being perceived as the same phoneme, however attractive it might be for linguists who wish to rely on the intuitions of native speakers. First, interchanged allophones of the same phoneme can result in unrecognizable words. Second, actual speech, even at the word level, is highly co-articulated, so it is problematic to think that one can splice words into simple segments without affecting speech perception. In other words, interchanging allophones is an appealing idea for intuitive linguistics, but it cannot transcend what co-articulation actually does to spoken sounds. Yet human speech perception is so robust and versatile (happening under various conditions) in part because it can deal with such co-articulation.
There are different methods for determining why allophones should fall categorically under a specified phoneme. Counter-intuitively, the principle of phonetic similarity is not always used. This tends to make the phoneme seem abstracted away from the phonetic realities of speech. It should be remembered that, just because allophones can be grouped under phonemes for the purpose of linguistic analysis, this does not necessarily mean that this is an actual process in the way the human brain processes a language. On the other hand, it could be pointed out that some sort of analytic notion of a language beneath the word level is usual if the language is written alphabetically. So one could also speak of a phonology of reading and writing.
The particular sounds which are phonemic in a language can change over time. At one time, [f] and [v] were allophones in English, but these later changed into separate phonemes. This is one of the main factors of historical change of languages as described in historical linguistics.
Phonology also includes topics such as phonotactics (the phonological constraints on what sounds can appear in what positions in a given language) and phonological alternation (how the pronunciation of a sound changes through the application of phonological rules, sometimes in a given order, which can be feeding or bleeding),[1] as well as prosody, the study of suprasegmentals and topics such as stress and intonation.
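Rule ordering of the kind mentioned above can be made concrete with two hypothetical rules applied in sequence: intervocalic voicing of /t/ and deletion of a word-final vowel (apocope). Both rules and the example form /pata/ are invented for illustration. Applied in one order, apocope bleeds voicing by destroying its intervocalic environment; in the other order, voicing applies first and survives (a counter-bleeding derivation).

```python
import re

# Two hypothetical rules over simple string transcriptions:
voicing = lambda s: re.sub(r"(?<=[aeiou])t(?=[aeiou])", "d", s)  # t → d between vowels
apocope = lambda s: re.sub(r"[aeiou]$", "", s)                   # delete a word-final vowel

def derive(underlying, rules):
    """Apply an ordered list of rules, SPE-style, returning the surface form."""
    form = underlying
    for rule in rules:
        form = rule(form)
    return form

word = "pata"
print(derive(word, [voicing, apocope]))  # "pad": voicing applies before its environment is lost
print(derive(word, [apocope, voicing]))  # "pat": apocope bleeds voicing
```

The same underlying form thus surfaces differently depending solely on the order in which the rules apply, which is what feeding and bleeding relationships describe.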
In ancient India, the Sanskrit grammarian Pāṇini (c. 520–460 BC), in his text on Sanskrit phonology, the Shiva Sutras, discusses something like the concepts of the phoneme, the morpheme, and the root. The Shiva Sutras describe a phonemic notational system in the fourteen initial lines of the Aṣṭādhyāyī. The notational system introduces different clusters of phonemes that serve special roles in the morphology of Sanskrit and are referred to throughout the text. Pāṇini's grammar of Sanskrit had a significant influence on Ferdinand de Saussure, the father of modern structuralism, who was a professor of Sanskrit.
The Polish scholar Jan Baudouin de Courtenay, (together with his former student Mikołaj Kruszewski) coined the word phoneme in 1876, and his work, though often unacknowledged, is considered to be the starting point of modern phonology. He worked not only on the theory of the phoneme but also on phonetic alternations (i.e., what is now called allophony and morphophonology). His influence on Ferdinand de Saussure was also significant.
Prince Nikolai Trubetzkoy's posthumously published work, the Principles of Phonology (1939), is considered the foundation of the Prague School of phonology. Directly influenced by Baudouin de Courtenay, Trubetzkoy is considered the founder of morphophonology, though morphophonology was first recognized by Baudouin de Courtenay. Trubetzkoy split phonology into phonemics and archiphonemics; the former has had more influence than the latter. Another important figure in the Prague School was Roman Jakobson, who was one of the most prominent linguists of the twentieth century.
In 1968 Noam Chomsky and Morris Halle published The Sound Pattern of English (SPE), the basis for Generative Phonology. In this view, phonological representations are sequences of segments made up of distinctive features. These features were an expansion of earlier work by Roman Jakobson, Gunnar Fant, and Morris Halle. The features describe aspects of articulation and perception, are drawn from a universally fixed set, and have the binary values + or −. There are at least two levels of representation: underlying representation and surface phonetic representation. Ordered phonological rules govern how the underlying representation is transformed into the actual pronunciation (the so-called surface form). An important consequence of the influence SPE had on phonological theory was the downplaying of the syllable and the emphasis on segments. Furthermore, the Generativists folded morphophonology into phonology, which both solved and created problems.
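The idea of segments as bundles of binary features can be sketched directly: each segment is a mapping from feature names to + or − values, and a partial feature specification picks out a natural class. The feature set and values below are a deliberately simplified, illustrative subset, not the full SPE inventory.

```python
# Segments as bundles of binary distinctive features (illustrative subset;
# True stands for + and False for −).
SEGMENTS = {
    "p": {"voice": False, "continuant": False, "labial": True},
    "b": {"voice": True,  "continuant": False, "labial": True},
    "t": {"voice": False, "continuant": False, "labial": False},
    "s": {"voice": False, "continuant": True,  "labial": False},
}

def natural_class(spec):
    """All segments matching a partial feature specification,
    e.g. {voice: False, continuant: False} ≈ [−voice, −continuant]."""
    return {
        seg for seg, feats in SEGMENTS.items()
        if all(feats[f] == v for f, v in spec.items())
    }

print(natural_class({"voice": False, "continuant": False}))  # {'p', 't'}: the voiceless stops
```

This is why feature-based rules are compact: a rule stated over [−voice, −continuant] automatically applies to every voiceless stop rather than listing each segment separately.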
Natural Phonology was a theory based on the publications of its proponent David Stampe in 1969 and (more explicitly) in 1979. In this view, phonology is based on a set of universal phonological processes which interact with one another; which ones are active and which are suppressed are language-specific. Rather than acting on segments, phonological processes act on distinctive features within prosodic groups. Prosodic groups can be as small as a part of a syllable or as large as an entire utterance. Phonological processes are unordered with respect to each other and apply simultaneously (though the output of one process may be the input to another). The second-most prominent Natural Phonologist is Stampe's wife, Patricia Donegan; there are many Natural Phonologists in Europe, though also a few others in the U.S., such as Geoffrey Nathan. The principles of Natural Phonology were extended to morphology by Wolfgang U. Dressler, who founded Natural Morphology.
In 1976 John Goldsmith introduced autosegmental phonology. Phonological phenomena are no longer seen as operating on one linear sequence of segments, called phonemes or feature combinations, but rather as involving some parallel sequences of features which reside on multiple tiers. Autosegmental phonology later evolved into Feature Geometry, which became the standard theory of representation for the theories of the organization of phonology as different as Lexical Phonology and Optimality Theory.
Government Phonology, which originated in the early 1980s as an attempt to unify theoretical notions of syntactic and phonological structures, is based on the notion that all languages necessarily follow a small set of principles and vary according to their selection of certain binary parameters. That is, all languages' phonological structures are essentially the same, but there is restricted variation that accounts for differences in surface realizations. Principles are held to be inviolable, though parameters may sometimes come into conflict. Prominent figures include Jonathan Kaye, Jean Lowenstamm, Jean-Roger Vergnaud, Monik Charette, John Harris, and many others.
In a course at the LSA summer institute in 1991, Alan Prince and Paul Smolensky developed Optimality Theory — an overall architecture for phonology according to which languages choose a pronunciation of a word that best satisfies a list of constraints which is ordered by importance: a lower-ranked constraint can be violated when the violation is necessary in order to obey a higher-ranked constraint. The approach was soon extended to morphology by John McCarthy and Alan Prince, and has become the dominant trend in phonology. Though this usually goes unacknowledged, Optimality Theory was strongly influenced by Natural Phonology; both view phonology in terms of constraints on speakers and their production, though these constraints are formalized in very different ways.
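The evaluation procedure described above, where a lower-ranked constraint may be violated to satisfy a higher-ranked one, amounts to comparing candidates by their violation profiles in rank order. The constraints and the input /tap/ below are toy inventions for illustration; constraint names loosely echo standard OT labels (NoCoda, Dep, Max).

```python
def evaluate(candidates, constraints):
    """Pick the candidate whose tuple of violation counts, ordered by
    constraint ranking, is lexicographically smallest."""
    return min(candidates, key=lambda c: tuple(con(c) for con in constraints))

# Toy constraints over string candidates for a hypothetical input /tap/:
no_coda = lambda c: 1 if c and c[-1] not in "aeiou" else 0  # penalize a final consonant
dep     = lambda c: max(0, len(c) - 3)                      # penalize inserted segments
max_io  = lambda c: max(0, 3 - len(c))                      # penalize deleted segments

candidates = ["tap", "ta", "tapa"]

print(evaluate(candidates, [no_coda, dep, max_io]))  # 'ta': deletion repairs the coda
print(evaluate(candidates, [dep, max_io, no_coda]))  # 'tap': faithfulness wins, coda tolerated
```

Re-ranking the same universal constraints yields different winners, which is how Optimality Theory models cross-linguistic variation without language-specific rules.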
Broadly speaking, Government Phonology (or its descendant, strict-CV phonology) has a greater following in the United Kingdom, whereas Optimality Theory is predominant in North America.