Content analysis

Content analysis is a research method for studying communication artifacts. Social scientists use content analysis to quantify patterns in communication. Practices and philosophies of content analysis vary between scholarly communities. They all involve systematic reading or observation of texts or artifacts which are assigned labels (sometimes called codes) to indicate the presence of interesting, meaningful patterns [1] [2]. After labeling a large set of texts, a researcher is able to statistically estimate the proportions of patterns in the texts, as well as correlations between patterns. Computers are increasingly used in content analysis. Popular qualitative data analysis programs provide efficient work-flow and data management tools for labeling. Simple computational techniques can provide descriptive data such as word frequencies and document lengths. Machine learning classifiers can greatly increase the number of texts which can be labeled, but the scientific utility of doing so is a matter of debate.
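As a minimal illustration of the simple descriptive techniques mentioned above, the following Python sketch computes word frequencies and document lengths for a tiny invented corpus; the texts and variable names are purely illustrative and not drawn from any cited study.

```python
from collections import Counter
import re

# A tiny hypothetical corpus; in practice these texts would be loaded from files.
documents = [
    "The economy is growing and unemployment is falling.",
    "Unemployment remains the main concern of voters.",
]

def tokenize(text):
    """Lowercase the text and split it into word tokens."""
    return re.findall(r"[a-z']+", text.lower())

for i, doc in enumerate(documents):
    tokens = tokenize(doc)
    freqs = Counter(tokens)
    print(f"Document {i}: length = {len(tokens)} tokens")
    print("  most common words:", freqs.most_common(3))
```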

Goals of Content Analysis


Content analysis is best understood as a broad family of techniques. Effective researchers choose techniques that best help them answer their substantive questions. That said, according to Klaus Krippendorff, six questions must be addressed in every content analysis:[3]

  1. Which data are analyzed?
  2. How are the data defined?
  3. From what population are data drawn?
  4. What is the relevant context?
  5. What are the boundaries of the analysis?
  6. What is to be measured?

The simplest and most objective form of content analysis considers unambiguous characteristics of the text such as word frequencies, the page area taken by a newspaper column, or the duration of a radio or television program. Analysis of simple word frequencies is limited because the meaning of a word depends on surrounding text. Keyword In Context (KWIC) routines address this by placing words in their textual context. This helps resolve ambiguities such as those introduced by synonyms and homonyms.
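A keyword-in-context listing can be produced with a short routine such as the Python sketch below; the tokenization, window size, and sample sentence are assumptions made for illustration rather than features of any particular KWIC tool.

```python
import re

def kwic(text, keyword, window=4):
    """Return each occurrence of `keyword` with `window` words of context on each side."""
    tokens = re.findall(r"\w+", text.lower())
    lines = []
    for i, tok in enumerate(tokens):
        if tok == keyword.lower():
            left = " ".join(tokens[max(0, i - window):i])
            right = " ".join(tokens[i + 1:i + 1 + window])
            lines.append(f"{left:>35} [{tok}] {right}")
    return lines

# The two senses of "bank" are distinguishable only from the surrounding words.
sample = "The bank raised interest rates. We walked along the river bank at dusk."
for line in kwic(sample, "bank", window=3):
    print(line)
```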

A further step in analysis is the distinction between dictionary-based (quantitative) approaches and qualitative approaches. Dictionary-based approaches set up a list of categories derived from the frequency list of words and then measure the distribution of words and their respective categories across the texts. Whereas quantitative content analysis in this way transforms observations of found categories into quantitative statistical data, qualitative content analysis focuses more on the intentionality of the text and its implications. There are strong parallels between qualitative content analysis and thematic analysis.[4]
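As a rough sketch of a dictionary-based coding step, the category dictionary and example sentence below are invented for illustration; published studies rely on validated dictionaries with far more entries.

```python
from collections import Counter
import re

# Hypothetical category dictionary mapping coding categories to word lists.
dictionary = {
    "economy": {"economy", "unemployment", "inflation", "jobs"},
    "security": {"crime", "police", "defence", "terrorism"},
}

def code_text(text):
    """Count how many tokens in `text` fall into each dictionary category."""
    tokens = re.findall(r"\w+", text.lower())
    counts = Counter()
    for category, words in dictionary.items():
        counts[category] = sum(1 for t in tokens if t in words)
    return counts

print(code_text("Unemployment and inflation dominated the debate, but crime was barely mentioned."))
# expected output: Counter({'economy': 2, 'security': 1})
```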


Computational Tools

More generally, content analysis is research using the categorization and classification of speech, written text, interviews, images, or other forms of communication. In its beginnings, using the first newspapers at the end of the 19th century, analysis was done manually by measuring the number of lines and the amount of space given to a subject. With the rise of common computing facilities such as PCs, computer-based methods of analysis are growing in popularity. Answers to open-ended questions, newspaper articles, political party manifestos, medical records, and systematic observations in experiments can all be subject to systematic analysis of textual data.

When the contents of communication are available as machine-readable text, the input can be analyzed for frequencies and coded into categories to build up inferences.

Reliability

Robert Weber notes: "To make valid inferences from the text, it is important that the classification procedure be reliable in the sense of being consistent: Different people should code the same text in the same way".[5] Validity, inter-coder reliability, and intra-coder reliability have been the subject of intense methodological research over many years.[3] Neuendorf suggests that when human coders are used in content analysis, two coders should be used. Reliability of human coding is often measured using a statistical measure of intercoder reliability, or "the amount of agreement or correspondence among two or more coders".[6]
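One widely used intercoder reliability statistic is Cohen's kappa, which corrects raw percent agreement for the agreement expected by chance. The sketch below computes both for two hypothetical coders; the labels are invented for illustration, and real projects often use dedicated reliability packages or statistics such as Krippendorff's alpha instead.

```python
from collections import Counter

# Hypothetical codes assigned to the same ten texts by two coders.
coder_a = ["pos", "pos", "neg", "neu", "pos", "neg", "neg", "pos", "neu", "pos"]
coder_b = ["pos", "neg", "neg", "neu", "pos", "neg", "pos", "pos", "neu", "pos"]

n = len(coder_a)
observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n  # raw percent agreement

# Chance agreement: probability that both coders pick the same category independently.
freq_a, freq_b = Counter(coder_a), Counter(coder_b)
expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in set(coder_a) | set(coder_b))

kappa = (observed - expected) / (1 - expected)
print(f"observed agreement = {observed:.2f}, Cohen's kappa = {kappa:.2f}")
# prints: observed agreement = 0.80, Cohen's kappa = 0.68
```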

Kinds of Text

There are five types of texts in content analysis:

  1. written text, such as books and papers
  2. oral text, such as speech and theatrical performance
  3. iconic text, such as drawings, paintings, and icons
  4. audio-visual text, such as TV programs, movies, and videos
  5. hypertexts, which are texts found on the Internet


History

Over the years, content analysis has been applied to a wide variety of fields. Hermeneutics and philology have long used content analysis to interpret sacred and profane texts and, in not a few cases, to attribute texts' authorship and authenticity.[2][3]

In recent times, particularly with the advent of mass communication, content analysis has been used increasingly to analyze and understand media content and media logic in depth. The political scientist Harold Lasswell formulated the core questions of content analysis in its early-to-mid 20th-century mainstream version: "Who says what, to whom, why, to what extent and with what effect?".[7] The strong emphasis on a quantitative approach initiated by Lasswell was carried forward by another "father" of content analysis, Bernard Berelson, who proposed a definition of content analysis that is, from this point of view, emblematic: "a research technique for the objective, systematic and quantitative description of the manifest content of communication".[8]

Quantitative content analysis has enjoyed renewed popularity in recent years thanks to technological advances and fruitful application in mass communication and personal communication research. Content analysis of the textual big data produced by new media, particularly social media and mobile devices, has become popular. These approaches take a simplified view of language that ignores the complexity of semiosis, the process by which meaning is formed out of language. Quantitative content analysts have been criticized for appealing to statistical measures to justify the objectivity and systematic nature of their methods while ignoring the limitations of their approach.

Content analysis can also be described as studying traces, which are documents from past times, and artifacts, which are non-linguistic documents. Texts are understood to be produced by communication processes in a broad sense of that phrase, often gaining meaning through abduction.[2][9]

More elaborate description

The method of content analysis enables the researcher to include large amounts of textual information and systematically identify its properties, such as the frequencies of the most used keywords, by locating the more important structures of its communication content. Such large amounts of textual information must be categorized to provide a meaningful reading of the content under scrutiny. For example, David Robertson created a coding frame for a comparison of modes of party competition between British and American parties.[10] It was developed further in 1979 by the Manifesto Research Group, which aimed at a comparative content-analytic approach to the policy positions of political parties. This group created the Manifesto Project Database.

Since the 1980s, content analysis has become an increasingly important tool in the measurement of success in public relations (notably media relations) programs and the assessment of media profiles, such as political media slant—orientation towards one of the two major parties.[11][12] In 1982, John Naisbitt published his popular Megatrends, based on content analysis in the US media. In analyses of this type, data from content analysis is usually combined with media data (circulation, readership, number of viewers and listeners, frequency of publication). It has also been used by futurists to identify trends.

The creation of coding frames is intrinsically related to a creative approach to the variables that influence textual content. In political analysis, these variables could be political scandals, the impact of public opinion polls, sudden events in external politics, inflation, and so on. Mimetic Convergence, created by Fátima Carvalho for the comparative analysis of electoral proclamations on free-to-air television, is an example of creative articulation of variables in content analysis.[13] The methodology describes the construction of party identities during long-term party competitions on TV, from a dynamic perspective, governed by the logic of the contingent. This method aims to capture the contingent logic observed in electoral campaigns by focusing on the repetition and innovation of themes sustained in party broadcasts. According to the post-structuralist perspective from which electoral competition is analysed, party identities ('the real') cannot speak without mediation, because there is no natural centre fixing the meaning of a party structure; meaning depends instead on ad hoc articulations. There is no empirical reality outside articulations of meaning. Reality is an outcome of power struggles that unify ideas of social structure as a result of contingent interventions. In Brazil, these contingent interventions have proven to be mimetic and convergent rather than divergent and polarised, being integral to the repetition of dichotomised world-views.

Mimetic Convergence aims to show the process of fixation of meaning through discursive articulations that repeat, alter and subvert the political issues that come into play. For this reason, parties are not taken as the pure expression of conflicts over the representation of interests (of different classes, religions, or ethnic groups[14][15]) but as attempts to recompose and re-articulate ideas of an absent totality around signifiers gaining positivity.

Every content analysis should set out from a hypothesis. The hypothesis of Mimetic Convergence supports the Downsian interpretation that, in general, rational voters converge towards uniform positions in most thematic dimensions. The hypothesis guiding the analysis of Mimetic Convergence between political parties' broadcasts is: 'public opinion polls on vote intention, published throughout campaigns on TV, will contribute to successive revisions of candidates' discourses'. Candidates re-orient their arguments and thematic selections in part according to the signals sent by voters. One must also consider the interference of other kinds of input on electoral propaganda, such as internal and external political crises and the arbitrary interference of private interests in the dispute. Moments of internal crisis in disputes between candidates might result from the exhaustion of a certain strategy. These moments of exhaustion might consequently precipitate an inversion in the thematic flux.

As an evaluation approach, content analysis is considered by some to be quasi-evaluation because content analysis judgements need not be based on value statements if the research objective is aimed at presenting subjective experiences. Thus, they can be based on knowledge of everyday lived experiences. Such content analyses are not evaluations. On the other hand, when content analysis judgements are based on values, such studies are evaluations.[16]

Qualitative content analysis is "a systematic, replicable technique for compressing many words of text into fewer content categories based on explicit rules of coding".[17] It often involves building and applying a "concept dictionary" or fixed vocabulary of terms on the basis of which words are extracted from the textual data for concording or statistical computation.

Uses

Holsti groups fifteen uses of content analysis into three basic categories:[18]

  1. make inferences about the antecedents of communications
  2. describe and make inferences about the characteristics of communications
  3. make inferences about the consequences of communications

He also places these uses into the context of the basic communication paradigm.

The fifteen uses are listed below in terms of their general purpose, the element of the communication paradigm to which they apply, and the general question they are intended to answer.

Uses of Content Analysis by Purpose, Communication Element, and Question

Make inferences about the antecedents of communications
  • Source (Who?)
  • Encoding process (Why?): Secure political & military intelligence; Analyse traits of individuals; Infer cultural aspects & change; Provide legal & evaluative evidence

Describe & make inferences about the characteristics of communications
  • Channel (How?): Analyse techniques of persuasion; Analyse style
  • Message (What?): Describe trends in communication content; Relate known characteristics of sources to the messages they produce; Compare communication content to standards
  • Recipient (To whom?): Relate known characteristics of audiences to the messages produced for them; Describe patterns of communication

Make inferences about the consequences of communications
  • Decoding process (With what effect?)

Note. Purpose, communication element, & question from Holsti.[18] Uses primarily from Berelson[19] as adapted by Holsti.[18]

References

  1. Hodder, I. (1994). The interpretation of documents and material culture. Thousand Oaks etc.: Sage. p. 155. ISBN 0761926879.
  2. Tipaldo, G. (2014). L'analisi del contenuto e i mass media. Bologna, IT: Il Mulino. p. 42. ISBN 978-88-15-24832-9.
  3. Krippendorff, Klaus (2004). Content Analysis: An Introduction to Its Methodology (2nd ed.). Thousand Oaks, CA: Sage. p. 413. ISBN 9780761915454.
  4. Vaismoradi, Mojtaba; Turunen, Hannele; Bondas, Terese (2013-09-01). "Content analysis and thematic analysis: Implications for conducting a qualitative descriptive study". Nursing & Health Sciences. 15 (3): 398–405. ISSN 1442-2018. doi:10.1111/nhs.12048.
  5. Weber, Robert Philip (1990). Basic Content Analysis (2nd ed.). Newbury Park, CA: Sage. p. 12. ISBN 9780803938632.
  6. Neuendorf, Kimberly A. (2002). The Content Analysis Guidebook. Thousand Oaks, CA: Sage. p. 10.
  7. Lasswell, Harold Dwight (1948). Power and Personality. New York, NY.
  8. Berelson, B. (1952). Content Analysis in Communication Research. Glencoe: Free Press. p. 18.
  9. Timmermans, Stefan; Tavory, Iddo (2012). "Theory Construction in Qualitative Research: From Grounded Theory to Abductive Analysis". Sociological Theory. 30 (3): 167–186.
  10. Robertson, David Bruce (1976). A theory of party competition. London and New York: J. Wiley. ISBN 0471727377.
  11. Gentzkow, Matthew; Shapiro, Jesse M. (2007). "What Drives Media Slant? Evidence from U.S. Daily Newspapers". Econometrica. 78 (1): 35–71.
  12. "Methods for Media Analysis". ReStore. Economic and Social Research Council. Retrieved 13 June 2013.
  13. Carvalho, Fátima Lampreia (2000). "Continuidade e Inovação: conservadorismo e política da comunicação no Brasil" [Continuity and Innovation: Conservatism and Politics of Communication in Brazil]. Journal Revista Brasileira de Ciencias Sociais. São Paulo. 15 (43): 147–162. doi:10.1590/S0102-69092000000200008. Retrieved 12 June 2013.
  14. Lipset, Seymour M.; Stein Rokkan (1967). Cleavage structures, party systems, and voter alignments: an introduction. Free Press. pp. 1–64.
  15. Lijphart, Arend (1984). Democracies: Patterns of majoritarian and consensus government in twenty-one countries. New Haven: Yale University Press. p. 229. ISBN 0300031157.
  16. Frisbie, Richard (7–11 April 1986). The use of microcomputer programs to improve the reliability and validity of content analysis in evaluation. Annual Meeting of the American Educational Research Association. San Francisco, CA.
  17. Stemler, Steve (2001). "An Overview of Content Analysis". Practical Assessment, Research & Evaluation. 7 (17). Retrieved 12 June 2013.
  18. Holsti, Ole R. (1969). Content Analysis for the Social Sciences and Humanities. Reading, MA: Addison-Wesley.
  19. Berelson, Bernard (1952). Content Analysis in Communication Research. Glencoe, Ill: Free Press.
