Content Analysis

This guide provides an introduction to content analysis, a research methodology that examines words or phrases within a wide range of texts.

  • Introduction to Content Analysis: Read about the history and uses of content analysis.
  • Conceptual Analysis: Read an overview of conceptual analysis and its associated methodology.
  • Relational Analysis: Read an overview of relational analysis and its associated methodology.
  • Commentary: Read about issues of reliability and validity with regard to content analysis as well as the advantages and disadvantages of using content analysis as a research methodology.
  • Examples: View examples of real and hypothetical studies that use content analysis.
  • Annotated Bibliography: Complete list of resources used in this guide and beyond.

An Introduction to Content Analysis

Content analysis is a research tool used to determine the presence of certain words or concepts within texts or sets of texts. Researchers quantify and analyze the presence, meanings and relationships of such words and concepts, then make inferences about the messages within the texts, the writer(s), the audience, and even the culture and time of which these are a part. Texts can be defined broadly as books, book chapters, essays, interviews, discussions, newspaper headlines and articles, historical documents, speeches, conversations, advertising, theater, or really any occurrence of communicative language. Texts in a single study may also represent a variety of different types of occurrences, such as Palmquist’s 1990 study of two composition classes, in which he analyzed student and teacher interviews, writing journals, classroom discussions and lectures, and out-of-class interaction sheets. To conduct a content analysis on any such text, the text is coded, or broken down, into manageable categories on a variety of levels—word, word sense, phrase, sentence, or theme—and then examined using one of content analysis’ basic methods: conceptual analysis or relational analysis.

A Brief History of Content Analysis

Historically, content analysis was a time-consuming process. Analysis was done manually, or slow mainframe computers were used to analyze punch cards containing data punched in by human coders. Single studies could employ thousands of these cards. Human error and time constraints made this method impractical for large texts. However, despite its impracticality, content analysis was already an often utilized research method by the 1940s. Although initially limited to studies that examined texts for the frequency of the occurrence of identified terms (word counts), by the mid-1950s researchers were already starting to consider the need for more sophisticated methods of analysis, focusing on concepts rather than simply words, and on semantic relationships rather than just presence (de Sola Pool 1959). While both traditions still continue today, content analysis now is also utilized to explore mental models, and their linguistic, affective, cognitive, social, cultural and historical significance.

Uses of Content Analysis

Perhaps because it can be applied to examine any piece of writing or occurrence of recorded communication, content analysis is currently used in a dizzying array of fields, ranging from marketing and media studies, to literature and rhetoric, ethnography and cultural studies, gender and age issues, sociology and political science, psychology and cognitive science, and many other fields of inquiry. Additionally, content analysis reflects a close relationship with socio- and psycholinguistics, and is playing an integral role in the development of artificial intelligence. The following list (adapted from Berelson, 1952) offers more possibilities for the uses of content analysis:

  • Reveal international differences in communication content
  • Detect the existence of propaganda
  • Identify the intentions, focus or communication trends of an individual, group or institution
  • Describe attitudinal and behavioral responses to communications
  • Determine psychological or emotional state of persons or groups

Types of Content Analysis

In this guide, we discuss two general categories of content analysis: conceptual analysis and relational analysis. Conceptual analysis can be thought of as establishing the existence and frequency of concepts most often represented by words or phrases in a text. For instance, say you have a hunch that your favorite poet often writes about hunger. With conceptual analysis you can determine how many times words such as hunger, hungry, famished, or starving appear in a volume of poems. In contrast, relational analysis goes one step further by examining the relationships among concepts in a text. Returning to the hunger example, with relational analysis, you could identify what other words or phrases hunger or famished appear next to and then determine what different meanings emerge as a result of these groupings.

Conceptual Analysis

Traditionally, content analysis has most often been thought of in terms of conceptual analysis. In conceptual analysis, a concept is chosen for examination, and the analysis involves quantifying and tallying its presence. Also known as thematic analysis [although this term is somewhat problematic, given its varied definitions in current literature—see Palmquist, Carley, & Dale (1997) vis-a-vis Smith (1992)], the focus here is on looking at the occurrence of selected terms within a text or texts, although the terms may be implicit as well as explicit. While explicit terms obviously are easy to identify, coding for implicit terms and deciding their level of implication is complicated by the need to base judgments on a somewhat subjective system. To attempt to limit the subjectivity, then (as well as to limit problems of reliability and validity), coding such implicit terms usually involves the use of either a specialized dictionary or contextual translation rules. And sometimes, both tools are used—a trend reflected in recent versions of the Harvard and Lasswell dictionaries.

Methods of Conceptual Analysis

Conceptual analysis begins with identifying research questions and choosing a sample or samples. Once chosen, the text must be coded into manageable content categories. The process of coding is basically one of selective reduction. By reducing the text to categories consisting of a word, set of words or phrases, the researcher can focus on, and code for, specific words or patterns that are indicative of the research question.

An example of a conceptual analysis would be to examine several Clinton speeches on health care, made during the 1992 presidential campaign, and code them for the existence of certain words. In looking at these speeches, the research question might involve examining the number of positive words used to describe Clinton’s proposed plan, and the number of negative words used to describe the current status of health care in America. The researcher would be interested only in quantifying these words, not in examining how they are related, which is a function of relational analysis. In conceptual analysis, the researcher simply wants to examine presence with respect to his/her research question, i.e. is there a stronger presence of positive or negative words used with respect to proposed or current health care plans, respectively.
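To make this concrete, here is a minimal sketch in Python of frequency coding against two category word lists. The lists themselves are illustrative assumptions, not a validated coding scheme; a real study would derive them from tested coding choices.

```python
import re
from collections import Counter

# Hypothetical category lists for the health care example; assumptions only.
POSITIVE = {"affordable", "inexpensive", "secure", "universal"}
NEGATIVE = {"costly", "failing", "broken", "bureaucratic"}

def code_speech(text):
    """Count occurrences of positive and negative category words."""
    counts = Counter(re.findall(r"[a-z']+", text.lower()))
    positive = sum(counts[w] for w in POSITIVE)
    negative = sum(counts[w] for w in NEGATIVE)
    return positive, negative

speech = "Our plan is affordable and secure; the current system is costly and broken."
print(code_speech(speech))  # (2, 2)
```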

Once the research question has been established, the researcher must make his/her coding choices with respect to the eight category coding steps indicated by Carley (1992).

Steps for Conducting Conceptual Analysis

The following discussion of steps that can be followed to code a text or set of texts during conceptual analysis uses campaign speeches made by Bill Clinton during the 1992 presidential campaign as an example. Each step is discussed in turn below:

  1. Decide the level of analysis.

First, the researcher must decide upon the level of analysis. With the health care speeches, to continue the example, the researcher must decide whether to code for a single word, such as "inexpensive," or for sets of words or phrases, such as "coverage for everyone."

  2. Decide how many concepts to code for.

The researcher must now decide how many different concepts to code for. This involves developing a pre-defined or interactive set of concepts and categories. The researcher must decide whether or not to code for every single positive or negative word that appears, or only certain ones that the researcher determines are most relevant to health care. Then, with this pre-defined set, the researcher has to determine how much flexibility he/she allows him/herself when coding. The question of whether the researcher codes only from this pre-defined set, or allows him/herself to add relevant categories not included in the set as he/she finds them in the text, must be answered. Determining a certain number and set of concepts allows a researcher to examine a text for very specific things, keeping him/her on task. But introducing a level of coding flexibility allows new, important material to be incorporated into the coding process that could have significant bearing on one’s results.

  3. Decide whether to code for existence or frequency of a concept.

After a certain number and set of concepts are chosen for coding, the researcher must answer a key question: is he/she going to code for existence or frequency? This is important, because it changes the coding process. When coding for existence, "inexpensive" would only be counted once, no matter how many times it appeared. This would be a very basic coding process and would give the researcher a very limited perspective of the text. However, the number of times "inexpensive" appears in a text might be more indicative of importance. Knowing that "inexpensive" appeared 50 times, for example, compared to 15 appearances of "coverage for everyone," might lead a researcher to interpret that Clinton is trying to sell his health care plan based more on economic benefits, not comprehensive coverage. Knowing that "inexpensive" appeared, but not that it appeared 50 times, would not allow the researcher to make this interpretation, regardless of whether it is valid or not.
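The distinction is easy to express in code; a brief sketch, using the same hypothetical term:

```python
from collections import Counter

def code_existence_and_frequency(words, term):
    counts = Counter(words)
    existence = int(counts[term] > 0)  # coded at most once per text
    frequency = counts[term]           # every occurrence counts
    return existence, frequency

words = "inexpensive care and inexpensive coverage stay inexpensive".split()
print(code_existence_and_frequency(words, "inexpensive"))  # (1, 3)
```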

  4. Decide on how you will distinguish among concepts.

The researcher must next decide on the level of generalization, i.e. whether concepts are to be coded exactly as they appear, or if they can be recorded as the same even when they appear in different forms. For example, "expensive" might also appear as "expensiveness." The researcher needs to determine if the two words mean radically different things to him/her, or if they are similar enough that they can be coded as being the same thing, i.e. "expensive words." In line with this is the need to determine the level of implication one is going to allow. This entails more than subtle differences in tense or spelling, as with "expensive" and "expensiveness." Determining the level of implication would allow the researcher to code not only for the word "expensive," but also for words that imply "expensive." This could perhaps include technical words, jargon, or political euphemism, such as "economically challenging," that the researcher decides does not merit a separate category, but is better represented under the category "expensive," due to its implicit meaning of "expensive."

  5. Develop rules for coding your texts.

After taking the generalization of concepts into consideration, a researcher will want to create translation rules that will allow him/her to streamline and organize the coding process so that he/she is coding for exactly what he/she wants to code for. Developing a set of rules helps the researcher ensure that he/she is coding things consistently throughout the text, in the same way every time. If a researcher coded "economically challenging" as a separate category from "expensive" in one paragraph, then coded it under the umbrella of "expensive" when it occurred in the next paragraph, his/her data would be invalid. The interpretations drawn from that data will subsequently be invalid as well. Translation rules protect against this and give the coding process a crucial level of consistency and coherence.
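One simple way to make such rules explicit is a lookup table applied uniformly to every occurrence; the mappings below are illustrative assumptions, not a validated dictionary.

```python
# Each surface form always translates to the same category, so a phrase like
# "economically challenging" can never drift between categories mid-text.
TRANSLATION_RULES = {
    "expensive": "expensive",
    "expensiveness": "expensive",
    "economically challenging": "expensive",  # implicit/euphemistic form
    "inexpensive": "inexpensive",
    "affordable": "inexpensive",
}

def translate(phrase):
    # Returns None for terms outside the coding scheme.
    return TRANSLATION_RULES.get(phrase.lower())

print(translate("Economically Challenging"))  # expensive
```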

  6. Decide what to do with "irrelevant" information.

The next choice a researcher must make involves irrelevant information. The researcher must decide whether irrelevant information should be ignored (as Weber, 1990, suggests), or used to reexamine and/or alter the coding scheme. In the case of this example, words like "and" and "the," as they appear by themselves, would be ignored. They add nothing to the quantification of words like "inexpensive" and "expensive" and can be disregarded without impacting the outcome of the coding.
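In code, ignoring irrelevant words amounts to filtering them out before counting; the stopword list here is a small illustrative assumption.

```python
STOPWORDS = {"and", "the", "a", "an", "of", "to"}

def relevant_tokens(words):
    """Drop tokens that add nothing to the quantification."""
    return [w for w in words if w not in STOPWORDS]

print(relevant_tokens("the plan is inexpensive and fair".split()))
# ['plan', 'is', 'inexpensive', 'fair']
```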

  7. Code the texts.

Once these choices about irrelevant information are made, the next step is to code the text. This is done either by hand, i.e. reading through the text and manually writing down concept occurrences, or through the use of various computer programs. Coding with a computer is one of contemporary conceptual analysis’ greatest assets. By inputting one’s categories, content analysis programs can easily automate the coding process and examine huge amounts of data, and a wider range of texts, quickly and efficiently. But automation is very dependent on the researcher’s preparation and category construction. When coding is done manually, a researcher can recognize errors far more easily. A computer is only a tool and can only code based on the information it is given. This problem is most apparent when coding for implicit information, where category preparation is essential for accurate coding.

  8. Analyze your results.

Once the coding is done, the researcher examines the data and attempts to draw whatever conclusions and generalizations are possible. Of course, before these can be drawn, the researcher must decide what to do with the information in the text that is not coded. One’s options include either deleting or skipping over unwanted material, or viewing all information as relevant and important and using it to reexamine, reassess and perhaps even alter one’s coding scheme. Furthermore, given that the conceptual analyst is dealing only with quantitative data, the levels of interpretation and generalizability are very limited. The researcher can only extrapolate as far as the data will allow. But it is possible to see trends, for example, that are indicative of much larger ideas. Using the example from step three, if the concept "inexpensive" appears 50 times, compared to 15 appearances of "coverage for everyone," then the researcher can pretty safely extrapolate that there does appear to be a greater emphasis on the economics of the health care plan, as opposed to its universal coverage for all Americans. It must be kept in mind that conceptual analysis, while extremely useful and effective for providing this type of information when done right, is limited by its focus and the quantitative nature of its examination. To more fully explore the relationships that exist between these concepts, one must turn to relational analysis.

Relational Analysis

Relational analysis, like conceptual analysis, begins with the act of identifying concepts present in a given text or set of texts. However, relational analysis seeks to go beyond presence by exploring the relationships between the concepts identified. Relational analysis has also been termed semantic analysis (Palmquist, Carley, & Dale, 1997). In other words, the focus of relational analysis is to look for semantic, or meaningful, relationships. Individual concepts, in and of themselves, are viewed as having no inherent meaning. Rather, meaning is a product of the relationships among concepts in a text. Carley (1992) asserts that concepts are "ideational kernels"; these kernels can be thought of as symbols which acquire meaning through their connections to other symbols.

Theoretical Influences on Relational Analysis

The kind of analysis that researchers employ will vary significantly according to their theoretical approach. Key theoretical approaches that inform content analysis include linguistics and cognitive science.

Linguistic approaches to content analysis focus analysis of texts on the level of a linguistic unit, typically single clause units. One example of this type of research is Gottschalk (1975), who developed an automated procedure which analyzes each clause in a text and assigns it a numerical score based on several emotional/psychological scales. Another technique is to code a text grammatically into clauses and parts of speech to establish a matrix representation (Carley, 1990).

Approaches that derive from cognitive science include the creation of decision maps and mental models. Decision maps attempt to represent the relationship(s) between ideas, beliefs, attitudes, and information available to an author when making a decision within a text. These relationships can be represented as logical, inferential, causal, sequential, and mathematical relationships. Typically, two of these links are compared in a single study, and are analyzed as networks. For example, Heise (1987) used logical and sequential links to examine symbolic interaction. This methodology is thought of as a more generalized cognitive mapping technique, rather than the more specific mental models approach.

Mental models are groups or networks of interrelated concepts that are thought to reflect conscious or subconscious perceptions of reality. According to cognitive scientists, internal mental structures are created as people draw inferences and gather information about the world. Mental models are a more specific approach to mapping that goes beyond extraction and comparison, because the models can be numerically and graphically analyzed. Such models rely heavily on the use of computers to help analyze and construct mapping representations. Typically, studies based on this approach follow five general steps:

  1. Identifying concepts
  2. Defining relationship types
  3. Coding the text on the basis of 1 and 2
  4. Coding the statements
  5. Graphically displaying and numerically analyzing the resulting maps

To create the model, a researcher converts a text into a map of concepts and relations; the map is then analyzed on the level of concepts and statements, where a statement consists of two concepts and their relationship. Carley (1990) asserts that this makes possible the comparison of a wide variety of maps, representing multiple sources, implicit and explicit information, as well as socially shared cognitions.
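A minimal sketch of that representation: a statement as two concepts plus their relationship, and a map as a set of statements, so that comparing maps reduces to set operations. The relation names here are illustrative assumptions.

```python
from typing import NamedTuple

class Statement(NamedTuple):
    concept_a: str
    relation: str   # e.g. "performs", "implies", "occurs-before"
    concept_b: str

# Two tiny maps extracted from different texts (hypothetical content).
map_a = {Statement("scientist", "performs", "research"),
         Statement("research", "leads-to", "discoveries")}
map_b = {Statement("scientist", "performs", "research")}

print(map_a & map_b)  # statements shared by both maps
print(map_a - map_b)  # statements unique to the first text
```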

Relational Analysis: Overview of Methods

As with other sorts of inquiry, initial choices with regard to what is being studied and/or coded for often determine the possibilities of that particular study. For relational analysis, it is important to first decide which concept type(s) will be explored in the analysis. Studies have been conducted with as few as one and as many as 500 concept categories. Obviously, too many categories may obscure your results and too few can lead to unreliable and potentially invalid conclusions. Therefore, it is important to allow the context and necessities of your research to guide your coding procedures.

The steps to relational analysis that we consider in this guide suggest some of the possible avenues available to a researcher doing content analysis. We provide an example to make the process easier to grasp. However, the choices made within the context of the example are but a few of many possibilities. The diversity of techniques available suggests that there is quite a bit of enthusiasm for this mode of research. Once a procedure is rigorously tested, it can be applied and compared across populations over time. The process of relational analysis has achieved a high degree of computer automation but still is, like most forms of research, time-consuming. Perhaps the strongest claim that can be made is that it maintains a high degree of statistical rigor without losing the richness of detail apparent in more qualitative methods.

Three Subcategories of Relational Analysis

Affect extraction: This approach provides an emotional evaluation of concepts explicit in a text. It is problematic because emotion may vary across time and populations. Nevertheless, when extended it can be a potent means of exploring the emotional/psychological state of the speaker and/or writer. Gottschalk (1995) provides an example of this type of analysis. By assigning each identified concept a numeric value on corresponding emotional/psychological scales that can then be statistically examined, Gottschalk claims that the emotional/psychological state of the speaker or writer can be ascertained via their verbal behavior.

Proximity analysis: This approach, on the other hand, is concerned with the co-occurrence of explicit concepts in the text. In this procedure, the text is defined as a string of words. A given length of words, called a window, is determined. The window is then scanned across a text to check for the co-occurrence of concepts. The result is the creation of a concept matrix. In other words, a matrix, or a group of interrelated, co-occurring concepts, might suggest a certain overall meaning. The technique is problematic because the window records only explicit concepts and treats meaning as proximal co-occurrence. Other techniques such as clustering, grouping, and scaling are also useful in proximity analysis.
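A sketch of the windowing procedure, assuming a window of five words and a small hypothetical concept set:

```python
from collections import Counter
from itertools import combinations

def cooccurrence(words, concepts, window=5):
    """Slide a fixed-size window over the text and count concept pairs."""
    counts = Counter()
    for i in range(len(words) - window + 1):
        present = sorted(set(words[i:i + window]) & concepts)
        for pair in combinations(present, 2):
            counts[pair] += 1
    return counts

words = "the plan is inexpensive and offers coverage for everyone".split()
print(cooccurrence(words, {"inexpensive", "coverage", "everyone"}))
# Counter({('coverage', 'inexpensive'): 2, ('coverage', 'everyone'): 1})
```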

Cognitive mapping: This approach is one that allows for further analysis of the results from the two previous approaches. It attempts to take the above processes one step further by representing these relationships visually for comparison. Whereas affective and proximal analysis function primarily within the preserved order of the text, cognitive mapping attempts to create a model of the overall meaning of the text. This can be represented as a graphic map that represents the relationships between concepts.

In this manner, cognitive mapping lends itself to the comparison of semantic connections across texts. This is known as map analysis, which allows for comparisons to explore "how meanings and definitions shift across people and time" (Palmquist, Carley, & Dale, 1997). Maps can depict a variety of different mental models (such as that of the text, the writer/speaker, or the social group/period), according to the focus of the researcher. This variety is indicative of the theoretical assumptions that support mapping: mental models are representations of interrelated concepts that reflect conscious or subconscious perceptions of reality; language is the key to understanding these models; and these models can be represented as networks (Carley, 1990). Given these assumptions, it’s not surprising to see how closely this technique reflects the cognitive concerns of socio- and psycholinguistics, and lends itself to the development of artificial intelligence models.

Steps for Conducting Relational Analysis

The following discussion describes the steps (or, perhaps more accurately, strategies) that can be followed to code a text or set of texts during relational analysis. These explanations are accompanied by examples of relational analysis possibilities for statements made by Bill Clinton during the 1998 hearings.

  • Identify the Question.

The question is important because it indicates where you are headed and why. Without a focused question, the concept types and options open to interpretation are limitless, and the analysis therefore difficult to complete. Possibilities for the 1998 hearings might be:

What did Bill Clinton say in the speech? OR What concrete information did he present to the public?

  • Choose a sample or samples for analysis.

Once the question has been identified, the researcher must select sections of text/speech from the hearings in which Bill Clinton may not have told the entire truth or is obviously holding back information. For relational content analysis, the primary consideration is how much information to preserve for analysis. One must be careful not to limit the results by preserving too little, but the researcher must also take special care not to take on so much that the coding process becomes too heavy and extensive to supply worthwhile results.

  • Determine the type of analysis.

Once the sample has been chosen for analysis, it is necessary to determine what type or types of relationships you would like to examine. There are different subcategories of relational analysis that can be used to examine the relationships in texts.

In this example, we will use proximity analysis because it is concerned with the co-occurrence of explicit concepts in the text. In this instance, we are not particularly interested in affect extraction because we are trying to get to the hard facts of what exactly was said, rather than determining the emotional considerations of speaker and receivers surrounding the speech, which may be unrecoverable.

Once the subcategory of analysis is chosen, the selected text must be reviewed to determine the level of analysis. The researcher must decide whether to code for a single word, such as "perhaps," or for sets of words or phrases like "I may have forgotten."

  • Reduce the text to categories and code for words or patterns.

At the simplest level, a researcher can code merely for existence. This is not to say that simplicity of procedure leads to simplistic results. Many studies have successfully employed this strategy. For example, Palmquist (1990) did not attempt to establish the relationships among concept terms in the classrooms he studied; his study did, however, look at the change in the presence of concepts over the course of the semester, comparing a map analysis from the beginning of the semester to one constructed at the end. On the other hand, the requirements of one’s specific research question may necessitate deeper levels of coding to preserve greater detail for analysis.

In relation to our extended example, the researcher might code for how often Bill Clinton used words that were ambiguous, held double meanings, or left an opening for change or "re-evaluation." The researcher might also choose to code for which of these ambiguous words he used in relation to the importance of the information directly tied to them.

  • Explore the relationships between concepts (Strength, Sign & Direction).

Once words are coded, the text can be analyzed for the relationships among the concepts set forth. There are three concepts which play a central role in exploring the relations among concepts in content analysis (a brief code sketch follows the list below).

  1. Strength of Relationship: Refers to the degree to which two or more concepts are related. These relationships are easiest to analyze, compare, and graph when all relationships between concepts are considered to be equal. However, assigning strength to relationships retains a greater degree of the detail found in the original text. Identifying strength of a relationship is key when determining whether or not words like unless, perhaps, or maybe are related to a particular section of text, phrase, or idea.
  2. Sign of a Relationship: Refers to whether the concepts are positively or negatively related. To illustrate, the concept "bear" is negatively related to the concept "stock market" in the same sense as the concept "bull" is positively related. Thus "it’s a bear market" could be coded to show a negative relationship between "bear" and "market". Another approach to coding for sign entails the creation of separate categories for binary oppositions. The above example emphasizes "bull" as the negation of "bear," but the two could instead be coded as separate categories, one positive and one negative. There has been little research to determine the benefits and liabilities of these differing strategies. Use of sign coding for relationships in regard to the hearings may be to find out whether the words under observation or in question were used adversely or in favor of the concepts (this is tricky, but important to establishing meaning).
  3. Direction of the Relationship: Refers to the type of relationship categories exhibit. Coding for this sort of information can be useful in establishing, for example, the impact of new information in a decision making process. Various types of directional relationships include "X implies Y," "X occurs before Y" and "if X then Y," or quite simply the decision whether concept X is the "prime mover" of Y or vice versa. In the case of the 1998 hearings, the researcher might note that "maybe implies doubt," "perhaps occurs before statements of clarification," and "if possibly exists, then there is room for Clinton to change his stance." In some cases, concepts can be said to be bi-directional, or having equal influence. This is equivalent to ignoring directionality. Both approaches are useful, but differ in focus. Coding all categories as bi-directional is most useful for exploratory studies where pre-coding may influence results, and is also most easily automated, or computer coded.
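As referenced above, here is a minimal sketch of how a coded relationship might carry all three properties; the field conventions and values are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Relationship:
    concept_a: str
    concept_b: str
    strength: float   # 0.0 (weak) to 1.0 (strong); fix at 1.0 to treat all as equal
    sign: int         # +1 positively related, -1 negatively related
    directed: bool    # False = bi-directional (directionality ignored)

# e.g. coding the observation "maybe implies doubt" from the hearings example
r = Relationship("maybe", "doubt", strength=0.8, sign=1, directed=True)
print(r)
```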
  • Code the relationships.

One of the main differences between conceptual analysis and relational analysis is that the statements or relationships between concepts are coded. At this point, to continue our extended example, it is important to take special care with assigning value to the relationships in an effort to determine whether the ambiguous words in Bill Clinton’s speech are just fillers, or hold information about the statements he is making.

  • Perform Statistical Analyses.

This step involves conducting statistical analyses of the data you’ve coded during your relational analysis. This may involve exploring for differences or looking for relationships among the variables you’ve identified in your study.
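For instance, a researcher might test whether concept counts differ across two speeches with a chi-square test. A sketch, assuming SciPy is available and using made-up counts:

```python
from scipy.stats import chi2_contingency

# Rows: two speeches; columns: counts of positive vs. negative concepts.
table = [[50, 15],
         [20, 35]]

chi2, p, dof, expected = chi2_contingency(table)
print(round(chi2, 2), round(p, 4))  # a small p suggests the distributions differ
```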

  • Map out the Representations.

In addition to statistical analysis, relational analysis often leads to viewing the representations of the concepts and their associations in a text (or across texts) in a graphical — or map — form. Relational analysis is also informed by a variety of different theoretical approaches: linguistic content analysis, decision mapping, and mental models.

Commentary

The authors of this guide have created the following commentaries on content analysis.

Issues of Reliability & Validity

The issues of reliability and validity are concurrent with those addressed in other research methods. The reliability of a content analysis study refers to its stability, or the tendency for coders to consistently re-code the same data in the same way over a period of time; reproducibility, or the tendency for a group of coders to classify category membership in the same way; and accuracy, or the extent to which the classification of a text corresponds to a standard or norm statistically. Gottschalk (1995) points out that the issue of reliability may be further complicated by the inescapably human nature of researchers. For this reason, he suggests that coding errors can only be minimized, and not eliminated (he shoots for 80% as an acceptable margin for reliability).
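Stability and reproducibility checks often start from simple agreement figures; a sketch of percent agreement between two coders over the same units, with made-up codes:

```python
def percent_agreement(coder_a, coder_b):
    """Share of units both coders placed in the same category."""
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

a = ["pos", "neg", "pos", "pos", "neg"]
b = ["pos", "neg", "neg", "pos", "neg"]
print(percent_agreement(a, b))  # 0.8, at Gottschalk's suggested 80% margin
```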

On the other hand, the validity of a content analysis study refers to the correspondence of the categories to the conclusions, and the generalizability of results to a theory.

The validity of categories in implicit concept analysis, in particular, is achieved by utilizing multiple classifiers to arrive at an agreed upon definition of the category. For example, a content analysis study might measure the occurrence of the concept category "communist" in presidential inaugural speeches. Using multiple classifiers, the concept category can be broadened to include synonyms such as "red," "Soviet threat," "pinkos," "godless infidels" and "Marxist sympathizers." "Communist" is held to be the explicit variable, while "red," etc. are the implicit variables.

The overarching problem of concept analysis research is the challengeable nature of conclusions reached by its inferential procedures. The question lies in what level of implication is allowable, i.e. do the conclusions follow from the data or are they explainable due to some other phenomenon? For occurrence-specific studies, for example, can the second occurrence of a word carry the same weight as the ninety-ninth? Reasonable conclusions can be drawn from substantive amounts of quantitative data, but the question of proof may still remain unanswered.

This problem is again best illustrated when one uses computer programs to conduct word counts. The problem of distinguishing between synonyms and homonyms can completely throw off one’s results, invalidating any conclusions one infers from them. The word "mine," for example, variously denotes a personal pronoun, an explosive device, and a deep hole in the ground from which ore is extracted. One may obtain an accurate count of that word’s occurrence and frequency, but not have an accurate accounting of the meaning inherent in each particular usage. For example, one may find 50 occurrences of the word "mine." But if one is looking specifically for "mine" as an explosive device, and 17 of the occurrences are actually personal pronouns, the count of 50 is misleading, and any conclusions drawn from it would be invalid.

The generalizability of one’s conclusions, then, is very dependent on how one determines concept categories, as well as on how reliable those categories are. It is imperative that one defines categories that accurately measure the idea and/or items one is seeking to measure. Akin to this is the construction of rules. Developing rules that allow one, and others, to categorize and code the same data in the same way over a period of time, referred to as stability, is essential to the success of a conceptual analysis. Reproducibility, not only of specific categories, but of general methods applied to establishing all sets of categories, makes a study, and its subsequent conclusions and results, more sound. A study which does this, i.e. in which the classification of a text corresponds to a standard or norm, is said to have accuracy.

Advantages of Content Analysis

Content analysis offers several advantages to researchers who consider using it. In particular, content analysis:

  • looks directly at communication via texts or transcripts, and hence gets at the central aspect of social interaction
  • can allow for both quantitative and qualitative operations
  • can provide valuable historical/cultural insights over time through analysis of texts
  • allows a closeness to the text, alternating between specific categories and relationships, while also statistically analyzing the coded form of the text
  • can be used to interpret texts for purposes such as the development of expert systems (since knowledge and rules can both be coded in terms of explicit statements about the relationships among concepts)
  • is an unobtrusive means of analyzing interactions
  • provides insight into complex models of human thought and language use

Disadvantages of Content Analysis

Content analysis suffers from several disadvantages, both theoretical and procedural. In particular, content analysis:

  • can be extremely time consuming
  • is subject to increased error, particularly when relational analysis is used to attain a higher level of interpretation
  • is often devoid of theoretical base, or attempts too liberally to draw meaningful inferences about the relationships and impacts implied in a study
  • is inherently reductive, particularly when dealing with complex texts
  • tends too often to simply consist of word counts
  • often disregards the context that produced the text, as well as the state of things after the text is produced
  • can be difficult to automate or computerize

Examples

The Palmquist, Carley and Dale study, a summary of "Applications of Computer-Aided Text Analysis: Analyzing Literary and Non-Literary Texts" (1997), describes two studies that were conducted using both conceptual and relational analysis. The Problematic Text for Content Analysis shows the differences in results obtained by a conceptual and a relational approach to a study.

Related Information: Example of a Problematic Text for Content Analysis

In this example, both students observed a scientist and were asked to write about the experience.

Student A: I found that scientists engage in research in order to make discoveries and generate new ideas. Such research by scientists is hard work and often involves collaboration with other scientists which leads to discoveries which make the scientists famous. Such collaboration may be informal, such as when they share new ideas over lunch, or formal, such as when they are co-authors of a paper.

Student B: It was hard work to research famous scientists engaged in collaboration and I made many informal discoveries. My research showed that scientists engaged in collaboration with other scientists are co-authors of at least one paper containing their new ideas. Some scientists make formal discoveries and have new ideas.

Content analysis coding for explicit concepts may not reveal any significant differences. For example, concepts such as "I," "scientist," "research," "hard work," "collaboration," "discoveries," and "new ideas" are explicit in both texts, occur the same number of times, and have the same emphasis. Relational analysis or cognitive mapping, however, reveals that while all concepts in the text are shared, only five statements are common to both. Analyzing these statements reveals that Student A reports on what "I" found out about "scientists," and elaborates the notion of "scientists" doing "research." Student B focuses on what "I’s" research was and sees scientists as "making discoveries" without emphasis on research.

Related Information: The Palmquist, Carley and Dale Study

Consider these two questions: How has the depiction of robots changed over more than a century’s worth of writing? And, do students and writing instructors share the same terms for describing the writing process? Although these questions seem totally unrelated, they do share a commonality: in the Palmquist, Carley & Dale study, their answers rely on computer-aided text analysis to demonstrate how different texts can be analyzed.

Literary texts

One half of the study explored the depiction of robots in 27 science fiction texts written between 1818 and 1988. After texts were divided into three historically defined groups, readers looked for how the depiction of robots changed over time. To do this, researchers had to create concept lists and relationship types, create maps using computer software (see Fig. 1), modify those maps and then ultimately analyze them. The final product of the analysis revealed that over time authors were less likely to depict robots as metallic humanoids.

Figure 1: A map representing relationships among concepts.

Non-literary texts

The second half of the study used student journals and interviews, teacher interviews, textbooks, and classroom observations as the non-literary texts from which concepts and words were taken. The purpose behind the study was to determine if, in fact, over time teachers and students would begin to share a similar vocabulary about the writing process. Again, researchers used computer software to assist in the process. This time, computers helped researchers generate a concept list based on frequently occurring words and phrases from all texts. Maps were also created and analyzed in this study (see Fig. 2).

Figure 2: Pairs of co-occurring words drawn from a source text.

Annotated Bibliography

Resources On How To Conduct Content Analysis

Beard, J., & Yaprak, A. (1989). Language implications for advertising in international markets: A model for message content and message execution. A paper presented at the 8th International Conference on Language Communication for World Business and the Professions. Ann Arbor, MI.

This report discusses the development and testing of a content analysis model for assessing advertising themes and messages aimed primarily at U.S. markets which seeks to overcome barriers in the cultural environment of international markets. Texts were categorized under 3 headings: rational, emotional, and moral. The goal here was to teach students to appreciate differences in language and culture.

Berelson, B. (1971). Content analysis in communication research. New York: Hafner Publishing Company.

While this book provides an extensive outline of the uses of content analysis, it is far more concerned with conveying a critical approach to current literature on the subject. In this respect, it assumes a bit of prior knowledge, but is still accessible through the use of concrete examples.

Budd, R. W., Thorp, R.K., & Donohew, L. (1967). Content analysis of communications. New York: Macmillan Company.

Although published in 1967, the decision of the authors to focus on recent trends in content analysis keeps their insights relevant even to modern audiences. The book focuses on specific uses and methods of content analysis with an emphasis on its potential for researching human behavior. It is also geared toward the beginning researcher and breaks down the process of designing a content analysis study into 6 steps that are outlined in successive chapters. A useful annotated bibliography is included.

Carley, K. (1992). Coding choices for textual analysis: A comparison of content analysis and map analysis. Unpublished Working Paper.

Comparison of the coding choices necessary to conceptual analysis and relational analysis, especially focusing on cognitive maps. Discusses concept coding rules needed for sufficient reliability and validity in a Content Analysis study. In addition, several pitfalls common to texts are discussed.

Carley, K. (1990). Content analysis. In R.E. Asher (Ed.), The Encyclopedia of Language and Linguistics. Edinburgh: Pergamon Press.

Quick, yet detailed, overview of the different methodological kinds of Content Analysis. Carley breaks down her paper into five sections, including: Conceptual Analysis, Procedural Analysis, Relational Analysis, Emotional Analysis and Discussion. Also included is an excellent and comprehensive Content Analysis reference list.

Carley, K. (1989). Computer analysis of qualitative data. Pittsburgh, PA: Carnegie Mellon University.

Presents graphic, illustrated representations of computer based approaches to content analysis.

Carley, K. (1992). MECA. Pittsburgh, PA: Carnegie Mellon University.

A resource guide explaining the fifteen routines that compose the Map Extraction Comparison and Analysis (MECA) software program. Lists the source file, input and output files, and the purpose for each routine.

Carney, T. F. (1972). Content analysis: A technique for systematic inference from communications. Winnipeg, Canada: University of Manitoba Press.

This book introduces and explains in detail the concept and practice of content analysis. Carney defines it; traces its history; discusses how content analysis works and its strengths and weaknesses; and explains through examples and illustrations how one goes about doing a content analysis.

de Sola Pool, I. (1959). Trends in content analysis. Urbana, Ill: University of Illinois Press.

The 1959 collection of papers begins by differentiating quantitative and qualitative approaches to content analysis, and then details facets of its uses in a wide variety of disciplines: from linguistics and folklore to biography and history. Includes a discussion on the selection of relevant methods and representational models.

Duncan, D. F. (1989). Content analysis in health education research: An introduction to purposes and methods. Health Education, 20 (7).

This article proposes using content analysis as a research technique in health education. A review of literature relating to applications of this technique and a procedure for content analysis are presented.

Gottschalk, L. A. (1995). Content analysis of verbal behavior: New findings and clinical applications. Hillsdale, NJ: Lawrence Erlbaum Associates, Inc.

This book primarily focuses on the Gottschalk-Gleser method of content analysis, and its application as a method of measuring psychological dimensions of children and adults via the content and form analysis of their verbal behavior, using the grammatical clause as the basic unit of communication for carrying semantic messages generated by speakers or writers.

Krippendorf, K. (1980). Content analysis: An introduction to its methodology. Beverly Hills, CA: Sage Publications.

This is one of the most widely quoted resources in many of the current studies of Content Analysis. Recommended as another good, basic resource, as Krippendorf presents the major issues of Content Analysis in much the same way as Weber (1975).

Moeller, L. G. (1963). An introduction to content analysis—including annotated bibliography. Iowa City: University of Iowa Press.

A good reference for basic content analysis. Discusses the options of sampling, categories, direction, measurement, and the problems of reliability and validity in setting up a content analysis. Perhaps better as a historical text due to its age.

Smith, C. P. (Ed.). (1992). Motivation and personality: Handbook of thematic content analysis. New York: Cambridge University Press.

Billed by its authors as "the first book to be devoted primarily to content analysis systems for assessment of the characteristics of individuals, groups, or historical periods from their verbal materials." The text includes manuals for using various systems, theory, and research regarding the background of systems, as well as practice materials, making the book both a reference and a handbook.

Solomon, M. (1993). Content analysis: a potent tool in the searcher’s arsenal. Database, 16(2), 62-67.

Online databases can be used to analyze data, as well as to simply retrieve it. Online-media-source content analysis represents a potent but little-used tool for the business searcher. Content analysis benchmarks useful to advertisers include prominence, offspin, sponsor affiliation, verbatims, word play, positioning and notational visibility.

Weber, R. P. (1990). Basic content analysis, second edition. Newbury Park, CA: Sage Publications.

Good introduction to Content Analysis. The first chapter presents a quick overview of Content Analysis. The second chapter discusses content classification and interpretation, including sections on reliability, validity, and the creation of coding schemes and categories. Chapter three discusses techniques of Content Analysis, using a number of tables and graphs to illustrate the techniques. Chapter four examines issues in Content Analysis, such as measurement, indication, representation and interpretation.

Examples of Content Analysis

Adams, W., & Shriebman, F. (1978). Television network news: Issues in content research. Washington, DC: George Washington University Press.

A fairly comprehensive application of content analysis to the field of television news reporting. The book’s tripartite division discusses current trends and problems with news criticism from a content analysis perspective, presents four different content analysis studies of news media, and makes recommendations for future research in the area. Worth a look by anyone interested in mass communication research.

Auter, P. J., & Moore, R. L. (1993). Buying from a friend: a content analysis of two teleshopping programs. Journalism Quarterly, 70(2), 425-437.

A preliminary study was conducted to content-analyze random samples of two teleshopping programs, using a measure of content interactivity and a locus of control message index.

Barker, S. P. (???) Fame: A content analysis study of the American film biography. Ohio State University. Thesis.

Barker examined thirty Oscar-nominated films dating from 1929 to 1979, using the O.J. Harvey Belief System and Kohlberg’s Moral Stages to determine whether cinema heroes were positive role models for fame and success or morally ambiguous celebrities. Content analysis was successful in determining several trends relative to the frequency and portrayal of women in film, the generally high ethical character of the protagonists, and the dogmatic, close-minded nature of film antagonists.

Bernstein, J. M. & Lacy, S. (1992). Contextual coverage of government by local television news. Journalism Quarterly, 69(2), 329-341.

This content analysis of 14 local television news operations in five markets looks at how local TV news shows contribute to the marketplace of ideas. Performance was measured as the allocation of stories to types of coverage that provide the context about events and issues confronting the public.

Blaikie, A. (1993). Images of age: a reflexive process. Applied Ergonomics, 24 (1), 51-58.

Content analysis of magazines provides a sharp instrument for reflecting the change in stereotypes of aging over past decades.

Craig, R. S. (1992). The effect of day part on gender portrayals in television commercials: a content analysis. Sex Roles: A Journal of Research, 26 (5-6), 197-213.

Gender portrayals in 2,209 network television commercials were content analyzed. To compare differences between three day parts, the sample was chosen from three time periods: daytime, evening prime time, and weekend afternoon sportscasts. The results indicate large and consistent differences in the way men and women are portrayed in these three day parts, with almost all comparisons reaching significance at the .05 level. Although ads in all day parts tended to portray men in stereotypical roles of authority and dominance, those on weekends tended to emphasize escape from home and family. The findings of earlier studies which did not consider day part differences may now have to be reevaluated.

Dillon, D. R. et al. (1992). Article content and authorship trends in The Reading Teacher, 1948-1991. The Reading Teacher, 45 (5), 362-368.

The authors explore changes in the focus of the journal over time.

Eberhardt, EA. (1991). The rhetorical analysis of three journal articles: The study of form, content, and ideology. Ft. Collins, CO: Colorado State University.

Eberhardt uses content analysis in this thesis paper to analyze three journal articles that reported on President Ronald Reagan’s address in which he responded to the Tower Commission report concerning the Iran-Contra Affair. The reports concentrated on three rhetorical elements: idea generation or content; linguistic style or choice of language; and the potential societal effect of both, which Eberhardt analyzes, along with the particular ideological orientation espoused by each magazine.

Ellis, B. G. & Dick, S. J. (1996). ‘Who was ‘Shadow’? The computer knows: applying grammar-program statistics in content analyses to solve mysteries about authorship. Journalism & Mass Communication Quarterly, 73(4), 947-963.

This study’s objective was to employ the statistics-documentation portion of a word-processing program’s grammar-check feature as a final, definitive, and objective tool for content analyses — used in tandem with qualitative analyses — to determine authorship. Investigators concluded there was significant evidence from both modalities to support their theory that Henry Watterson, long-time editor of the Louisville Courier-Journal, probably was the South’s famed Civil War correspondent «Shadow» and to rule out another prime suspect, John H. Linebaugh of the Memphis Daily Appeal. Until now, this Civil War mystery has never been conclusively solved, puzzling historians specializing in Confederate journalism.

Gottschalk, L. A., Stein, M. K. & Shapiro, D.H. (1997). The application of computerized content analysis in a psychiatric outpatient clinic. Journal of Clinical Psychology, 53(5), 427-442.

Twenty-five new psychiatric outpatients were clinically evaluated and were administered a brief psychological screening battery which included measurements of symptoms, personality, and cognitive function. Included in this assessment procedure were the Gottschalk-Gleser Content Analysis Scales on which scores were derived from five minute speech samples by means of an artificial intelligence-based computer program. The use of this computerized content analysis procedure for initial, rapid diagnostic neuropsychiatric appraisal is supported by this research.

Graham, J. L., Kamins, M. A., & Oetomo, D. S. (1993). Content analysis of German and Japanese advertising in print media from Indonesia, Spain, and the United States. Journal of Advertising, 22 (2), 5-16.

The authors analyze informational and emotional content in print advertisements in order to consider how home-country culture influences firms’ marketing strategies and tactics in foreign markets. Research results provided evidence contrary to the original hypothesis that home-country culture would influence ads in each of the target countries.

Herzog, A. (1973). The B.S. Factor: The theory and technique of faking it in America. New York: Simon and Schuster.

Herzog takes a look at the rhetoric of American culture using content analysis to point out discrepancies between intention and reality in American society. The study reveals, albeit in a comedic tone, how double talk and "not quite lies" are pervasive in our culture.

Horton, N. S. (1986). Young adult literature and censorship: A content analysis of seventy-eight young adult books. Denton, TX: North Texas State University.

The purpose of Horton’s content analysis was to analyze a representative seventy-eight current young adult books to determine the extent to which they contain items which are objectionable to would-be censors. Seventy-eight books were identified which fit the criteria of popularity and literary quality. Each book was analyzed for, and tallied for occurrence of, six categories, including profanity, sex, violence, parent conflict, drugs and condoned bad behavior.

Isaacs, J. S. (1984). A verbal content analysis of the early memories of psychiatric patients. Berkeley: California School of Professional Psychology.

Isaacs did a content analysis investigation on the relationship between words and phrases used in early memories and clinical diagnosis. His hypothesis was that in conveying their early memories schizophrenic patients tend to use an identifiable set of words and phrases more frequently than do nonpatients and that schizophrenic patients use these words and phrases more frequently than do patients with major affective disorders.

Jean Lee, S. K. & Hwee Hoon, T. (1993). Rhetorical vision of men and women managers in Singapore. Human Relations, 46 (4), 527-542.

A comparison of media portrayal of male and female managers’ rhetorical vision in Singapore is made. Content analysis of newspaper articles used to make this comparison also reveals the inherent conflicts that women managers have to face. Purposive and multi-stage sampling of articles are utilized.

Kaur-Kasior, S. (1987). The treatment of culture in greeting cards: A content analysis. Bowling Green, OH: Bowling Green State University.

Using six historical periods dating from 1870 to 1987, this content analysis study attempted to determine what structural/cultural aspects of American society were reflected in greeting cards. The study determined that the size of cards increased over time, included more pages, and had animals and flowers as their most dominant symbols. In addition, white was the most common color used. Due to habituation and specialization, says the author, greeting cards have become institutionalized in American culture.

Koza, J. E. (1992). The missing males and other gender-related issues in music education: A critical analysis of evidence from the Music Supervisor’s Journal, 1914-1924. Paper presented at the annual meeting of the American Educational Research Association. San Francisco.

The goal of this study was to identify all educational issues that would today be explicitly gender related and to analyze the explanations past music educators gave for the existence of gender-related problems. A content analysis of every gender-related reference was undertaken, finding that the current preoccupation with males in music education has a long history and that little has changed since the early part of this century.

Laccinole, M. D. (1982). Aging and married couples: A language content analysis of a conversational and expository speech task. Eugene, OR: University of Oregon.

Using content analysis, this paper investigated the relationship of age to the use of the grammatical categories, and described the differences in the usage of these grammatical categories in a conversation and expository speech task by fifty married couples. The subjects Laccinole used in his analysis were Caucasian, English speaking, middle class, ranged in age from 20 to 83 years, were in good health and had no history of communication disorders.

Laffal, J. (1995). A concept analysis of Jonathan Swift's 'A Tale of a Tub' and 'Gulliver's Travels.' Computers and the Humanities, 29(5), 339-362.

In this study, comparisons of concept profiles of "Tub," "Gulliver," and Swift's own contemporary texts, as well as a composite text of 18th-century writers, reveal that "Gulliver" is conceptually different from "Tub." The study also discovers that the concepts and words of these texts suggest two strands in Swift's thinking.

Lewis, S. M. (1991). Regulation from a deregulatory FCC: Avoiding discursive dissonance. Masters Thesis, Fort Collins, CO: Colorado State University.

This thesis uses content analysis to examine inconsistent statements made by the Federal Communications Commission (FCC) in its policy documents during the 1980s. Lewis analyzes positions set forth by the FCC in its policy statements and catalogues different strategies that can be used by speakers to be or to appear consistent, as well as strategies to avoid inconsistent speech or discursive dissonance.

Norton, T. L. (1987). The changing image of childhood: A content analysis of Caldecott Award books. Los Angeles: University of South Carolina.

Content analysis was conducted on 48 Caldecott Medal recipient books dating from 1938 to 1985 to determine whether they reflect the idea that the social perception of childhood has altered since the early 1960s. The results revealed an increasing "loss of childhood innocence," as well as a general sentimentality for childhood pervasive in the texts. Norton suggests further study of children's literature to confirm the validity of such findings.

O’Dell, J. W. & Weideman, D. (1993). Computer content analysis of the Schreber case. Journal of Clinical Psychology, 49(1), 120-125.

An example of the application of content analysis as a means of recreating a mental model of the psychology of an individual.

Pratt, C. A. & Pratt, C. B. (1995). Comparative content analysis of food and nutrition advertisements in Ebony, Essence, and Ladies’ Home Journal. Journal of Nutrition Education, 27(1), 11-18.

This study used content analysis to measure the frequencies and forms of food, beverage, and nutrition advertisements and their associated health-promotional messages in three U.S. consumer magazines during two 3-year periods: 1980-1982 and 1990-1992. The study showed statistically significant differences among the three magazines in both frequencies and types of major promotional messages in the advertisements. Differences between the advertisements in Ebony and Essence, the readerships of which were primarily African-American, and those found in Ladies' Home Journal were noted, as were changes between the two time periods. An interesting tie-in to ethnographic research studies?

Riffe, D., Lacy, S., & Drager, M. W. (1996). Sample size in content analysis of weekly news magazines. Journalism & Mass Communication Quarterly, 73(3), 635-645.

This study explores a variety of approaches to deciding sample size in analyzing magazine content. Having tested random samples of size six, eight, ten, twelve, fourteen, and sixteen issues, the authors show that a monthly stratified sample of twelve issues is the most efficient method for inferring to a year’s issues.

Roberts, S. K. (1987). A content analysis of how male and female protagonists in Newbery Medal and Honor books overcome conflict: Incorporating a locus of control framework. Fayetteville, AR: University of Arkansas.

The purpose of this content analysis was to analyze Newbery Medal and Honor books in order to determine how male and female protagonists were assigned behavioral traits in overcoming conflict as it relates to an internal or external locus of control schema. Roberts used all, instead of just a sample, of the fictional Newbery Medal and Honor books which met his study’s criteria. A total of 120 male and female protagonists were categorized, from Newbery books dating from 1922 to 1986.

Schneider, J. (1993). Square One TV content analysis: Final report. New York: Children’s Television Workshop.

This report summarizes the mathematical and pedagogical content of the 230 programs in the Square One TV library after five seasons of production, relating that content to the goals of the series which were to make mathematics more accessible, meaningful, and interesting to the children viewers.

Smith, T. E., Sells, S. P., & Clevenger, T. Ethnographic content analysis of couple and therapist perceptions in a reflecting team setting. The Journal of Marital and Family Therapy, 20(3), 267-286.

An ethnographic content analysis was used to examine couple and therapist perspectives about the use and value of reflecting team practice. Postsession ethnographic interviews from both couples and therapists were examined for the frequency of themes in seven categories that emerged from a previous ethnographic study of reflecting teams. Ethnographic content analysis is briefly contrasted with conventional modes of quantitative content analysis to illustrate its usefulness and rationale for discovering emergent patterns, themes, emphases, and process using both inductive and deductive methods of inquiry.

Stahl, N. A. (1987). Developing college vocabulary: A content analysis of instructional materials. Reading Research and Instruction, 26(3).

This study investigates the extent to which the content of 55 college vocabulary texts is consistent with current research and theory on vocabulary instruction. It recommends less reliance on memorization and more emphasis on deep understanding and independent vocabulary development.

Swetz, F. (1992). Fifteenth and sixteenth century arithmetic texts: What can we learn from them? Science and Education, 1 (4).

Surveys the format and content of 15th- and 16th-century arithmetic textbooks, discussing the types of problems that were most popular in these early texts, and briefly analyzes problem contents. Notes the residual educational influence of this era's arithmetical and instructional practices.

Walsh, K., et al. (1996). Management in the public sector: a content analysis of journals. Public Administration 74 (2), 315-325.

The popularity and implementation of managerial ideas from 1980 to 1992 are examined through the content of five journals focusing on local government, health, education, and social services. Contents were analyzed according to commercialism, user involvement, performance evaluation, staffing, strategy, and involvement with other organizations. Overall, local government showed the greatest involvement with commercialism, while health and social care articles were most concerned with user involvement.

For Further Reading

Abernethy, A. M., & Franke, G. R. (1996). The information content of advertising: A meta-analysis. Journal of Advertising, 25(2), 1-18.

Carley, K., & Palmquist, M. (1992). Extracting, representing and analyzing mental models. Social Forces, 70 (3), 601-636.

Fan, D. (1988). Predictions of public opinion from the mass media: Computer content analysis and mathematical modeling. New York, NY: Greenwood Press.

Franzosi, R. (1990). Computer-assisted coding of textual data: An application to semantic grammars. Sociological Methods and Research, 19(2), 225-257.

McTavish, D. G., & Pirro, E. (1990). Contextual content analysis. Quality and Quantity, 24, 245-265.

Palmquist, M. E. (1990). The lexicon of the classroom: language and learning in writing class rooms. Doctoral dissertation, Carnegie Mellon University, Pittsburgh, PA.

Palmquist, M. E., Carley, K. M., & Dale, T. A. (1997). Two applications of automated text analysis: Analyzing literary and non-literary texts. In C. Roberts (Ed.), Text Analysis for the Social Sciences: Methods for Drawing Statistical Inferences from Texts and Transcripts. Hillsdale, NJ: Lawrence Erlbaum Associates.

Roberts, C.W. (1989). Other than counting words: A linguistic approach to content analysis. Social Forces, 68, 147-177.

Issues in Content Analysis

Jolliffe, L. (1993). Yes! More content analysis! Newspaper Research Journal, 14(3-4), 93-97.

The author responds to an editorial essay by Barbara Luebke which criticizes excessive use of content analysis in newspaper content studies. The author points out the positive applications of content analysis when it is theory-based and utilized as a means of suggesting how or why the content exists, or what its effects on public attitudes or behaviors may be.

Kang, N., Kara, A., Laskey, H. A., & Seaton, F. B. (1993). A SAS MACRO for calculating intercoder agreement in content analysis. Journal of Advertising, 22(2), 17-28.

A key issue in content analysis is the level of agreement across the judgments which classify the objects or stimuli of interest. A review of articles published in the Journal of Advertising indicates that many authors are not fully utilizing recommended measures of intercoder agreement and thus may not be adequately establishing the reliability of their research. This paper presents a SAS MACRO which facilitates the computation of frequently recommended indices of intercoder agreement in content analysis.
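
To make the idea of intercoder agreement concrete, here is a minimal Python sketch (not the SAS macro itself, and with made-up category labels) of one frequently recommended index, Cohen's kappa, computed for two coders' judgments:

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa: chance-corrected agreement between two coders."""
    n = len(coder_a)
    # Observed agreement: share of items both coders labeled identically.
    p_observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected agreement: chance overlap implied by each coder's marginal rates.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical codings of the same ten ads into content categories.
coder_a = ["price", "quality", "price", "other", "quality",
           "price", "other", "quality", "price", "other"]
coder_b = ["price", "quality", "other", "other", "quality",
           "price", "other", "price", "price", "other"]
print(f"Cohen's kappa: {cohens_kappa(coder_a, coder_b):.2f}")
```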

Lacy, S. & Riffe, D. (1996). Sampling error and selecting intercoder reliability samples for nominal content categories. Journalism & Mass Communication Quarterly, 73(4), 693-704.

This study views intercoder reliability as a sampling problem. It develops a formula for generating sample sizes needed to have valid reliability estimates. It also suggests steps for reporting reliability. The resulting sample sizes will permit a known degree of confidence that the agreement in a sample of items is representative of the pattern that would occur if all content items were coded by all coders.

Riffe, D., Aust, C. F., & Lacy, S. R. (1993). The effectiveness of random, consecutive day and constructed week sampling in newspaper content analysis. Journalism Quarterly, 70 (1), 133-139.

This study compares 20 sets each of samples for four different sizes using simple random, constructed week and consecutive day samples of newspaper content. Comparisons of sample efficiency, based on the percentage of sample means in each set of 20 falling within one or two standard errors of the population mean, show the superiority of constructed week sampling.

Thomas, S. (1994). Artifactual study in the analysis of culture: A defense of content analysis in a postmodern age. Communication Research, 21(6), 683-697.

Although both modern and postmodern scholars have criticized the method of content analysis with allegations of reductionism and other epistemological limitations, it is argued here that these criticisms are ill-founded. In building an argument for the validity of content analysis, the general value of artifact or text study is first considered.

Zollars, C. (1994). The perils of periodical indexes: Some problems in constructing samples for content analysis and culture indicators research. Communication Research, 21(6), 698-714.

The author examines problems in using periodical indexes to construct research samples for content analysis and culture indicators research. Historical and idiosyncratic changes in index subject category headings and subheadings can make article headings potentially misleading indicators. Index subject categories are not necessarily invalid as a result; nevertheless, the author discusses the need to test for category longevity, coherence, and consistency over time, and suggests the use of oversampling, cross-references, and other techniques as a means of correcting and/or compensating for hidden inaccuracies in classification, and as a means of constructing purposive samples for analytic comparisons.

Citation Information

Carol Busch, Paul S. De Maret, Teresa Flynn, Rachel Kellum, Sheri Le, Brad Meyers, Matt Saunders, Robert White, and Mike Palmquist. (1994-2023). Content Analysis. The WAC Clearinghouse. Colorado State University. Available at https://wac.colostate.edu/repository/resources/writing/guides/.

Copyright Information

Copyright © 1994-2023 Colorado State University and/or this site’s authors, developers, and contributors. Some material displayed on this site is used with permission.

Content analysis is a type of qualitative research (as opposed to quantitative research) that focuses on analyzing content in various mediums, the most common of which is written words in documents.

It’s a very common technique used in academia, especially for students working on theses and dissertations, but here we’re going to talk about how companies can use qualitative content analysis to improve their processes and increase revenue.

Whether you’re new to content analysis or a seasoned professor, this article provides all you need to know about how data analysts use content analysis to improve their business. It will also help you understand the relationship between content analysis and natural language processing — what some even call natural language content analysis.

Don’t forget, you can get the free Intro to Data Analysis eBook, which will ensure you build the right practical skills for success in your analytical endeavors.

What is qualitative content analysis, and what is it used for?

Any content analysis definition must consist of at least these three things: qualitative language, themes, and quantification.

In short, content analysis is the process of examining preselected words in video, audio, or written mediums and their context to identify themes, then quantifying them for statistical analysis in order to draw conclusions. More simply, it’s counting how often you see two words close to each other.

For example, let's say I place in front of you an audio clip, an old video with a static image, and a document with lots of text but no titles or descriptions. At the start, you would have no idea what any of it was about.

Let's say you transcribe the video and audio recordings onto paper. Then you use counting software to count the top ten most used words, excluding prepositions (of, over, to, by), articles (the, a), conjunctions (and, but, or), and other common words like "very."

Your results show that the top words are "candy," "snow," "cold," and "sled." These words appear at least 25 times each, and the next most frequent word appears only 4 times. You also find that the word "sled" appears adjacent to "snow" 95% of the time that "snow" appears.
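
A minimal Python sketch of that counting procedure, using a toy transcript in place of the real recordings (the stopword list here is deliberately tiny):

```python
import re
from collections import Counter

# Toy transcript standing in for the transcribed audio, video, and document.
text = """The snow sled raced down the hill. Candy was cold, and the snow
sled flew past. It was cold, very cold, and the candy froze on the snow sled."""

# Tokenize to lowercase words, then drop prepositions, articles, and the like.
stopwords = {"the", "a", "an", "and", "or", "but", "of", "over", "to", "by",
             "on", "in", "it", "was", "very", "down", "past"}
words = re.findall(r"[a-z']+", text.lower())
content_words = [w for w in words if w not in stopwords]

# The most used words by raw frequency.
print(Counter(content_words).most_common(5))

# How often does "sled" sit immediately next to "snow"?
snow_positions = [i for i, w in enumerate(words) if w == "snow"]
adjacent = sum(1 for i in snow_positions
               if "sled" in words[max(i - 1, 0):i] + words[i + 1:i + 2])
print(f"'sled' is adjacent to 'snow' in {adjacent} of {len(snow_positions)} cases")
```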

Well, now you have performed a very elementary qualitative content analysis.

This means that you’re probably dealing with a text in which snow sleds are important. Snow sleds, thus, become a theme in these documents, which goes to the heart of qualitative content analysis.

The goal of qualitative content analysis is to organize text into a series of themes. This is opposed to quantitative content analysis, which aims to organize the text into categories.

If you’ve heard about content analysis, it was most likely in an academic setting. The term itself is common among PhD students and Masters students writing their dissertations and theses. In that context, the most common type of content analysis is document analysis.

There are many types of content analysis, including:

  • Short- and long-form survey questions
  • Focus group transcripts
  • Interview transcripts
  • Legislation
  • Journals
  • Magazines
  • Public records
  • Newspapers
  • Textbooks
  • Cookbooks
  • Comments sections
  • Messaging platforms

This list gives you an idea of the possibilities and industries in which qualitative content analysis can be applied.

For example, marketing departments or public relations groups in major corporations might collect surveys, focus group transcripts, and interviews, then hand off the information to a data analyst who performs the content analysis.

A political analysis institution or think tank might look at legislation over time to identify potential emerging themes based on their slow introduction into policy margins. Perhaps it's possible to identify certain beliefs in the Senate and House of Representatives before they enter the public discourse.

Non-governmental organizations (NGOs) might perform an analysis on public records to see how to better serve their constituents. If they have access to public records, it would be possible to identify citizen characteristics that align with their goal.

Analysis logic: inductive vs deductive

There are two types of logic we can apply to qualitative content analysis: inductive and deductive. Inductive content analysis is more of an exploratory approach. We don’t know what patterns or ideas we’ll discover, so we go in with an open mind.

On the other hand, deductive content analysis involves starting with an idea and identifying how it appears in the text. For example, we may approach legislation on wildlife by looking for rules on hunting. Perhaps we think hunting with a knife is too dangerous, and we want to identify trends in the text.

Neither one is better per se, and they each carry value in different contexts. For example, inductive content analysis is advantageous in situations where we want to identify author intent. Going in with a hypothesis can bias the way we look at the data, so the inductive method is better.

Deductive content analysis is better when we want to target a term. For example, if we want to see how important knife hunting is in the legislation, we’re doing deductive content analysis.

Measurements: idea coding vs word frequency

Two main methodologies exist for analyzing the text itself: idea coding and word frequency. Idea coding is the manual process of reading through a text and "coding" ideas in a column on the right. The reason we call this coding is because we take ideas and themes expressed in many words and turn them into one common phrase. This allows researchers to better understand how those ideas evolve. We will look at how to do this in Word below.

In short, coding in the context of qualitative content analysis follows 2 steps:

  1. Reading through the text one time
  2. Adding 2-5 word summaries each time a significant theme or idea appears

Word frequency is simply counting the number of times a word appears in a text, as well as its proximity to other words. In our "snow sled" example above, we counted the number of times a word appeared, as well as how often it appeared next to other words. There are online tools for this that we'll look at below.

In short, word frequency in the context of content analysis follows 2 steps:

  1. Decide whether you want to find a word, or just look at the most common words
  2. Use Word's Replace function for the first, or an online tool such as Text Analyzer for the second (we'll look at these in more detail below).

Many data scientists consider coding the only true form of qualitative content analysis, since word frequency comes down to counting the number of times a word appears, making it quantitative.

While there is merit to this claim, I personally do not consider word frequency a part of quantitative content analysis. The fact that we count the frequency of a word does not mean we can draw direct conclusions from it. In fact, without a researcher to provide context on the number of times a word appears, word frequency is useless. True quantitative research carries conclusive value on its own.

Measurements AND analysis logic

There are four ways to approach qualitative content analysis given our two measurement types and inductive/deductive logical approaches. You could do inductive coding, inductive word frequency, deductive coding, and deductive word frequency.

The two best pairings are inductive coding and deductive word frequency. If you want to discover what a document contains, searching for specific words will not inform you about its contents, so word frequency is less insightful for discovery.

Likewise, if you're looking for the presence of a specific idea, you do not want to code your way through the whole document just to find it, so deductive coding is not insightful. Here's a simple matrix to illustrate:

  • Inductive coding (summarizing ideas to discover): GOOD. Example: discovering author intent in a passage.
  • Deductive coding (summarizing ideas to locate): BAD. Example: coding an entire document to locate one idea.
  • Inductive word frequency (counting words to discover): OK. Example: trying to understand author intent by pulling the top 10% of words.
  • Deductive word frequency (counting words to locate): GOOD. Example: locating and comparing a specific term in a text.

Matrix of measurement types and logical approaches in content analysis

Qualitative content analysis example

We looked at a small example above, but let’s play out all of the above information in a real world example. I will post the link to the text source at the bottom of the article, but don’t look at it yet. Let’s jump in with a discovery mentality, meaning let’s use an inductive approach and code our way through each paragraph.

See note 1 at the bottom of the article for a link to the source text.

How to do qualitative content analysis

We could use word frequency analysis to find out the most common x% of words in the text (inductive word frequency), but this takes some time because we need to build a formula that excludes words that are common but that don't carry any value (a, the, but, and, etc.).

As a shortcut, you can use online tools such as Text Analyzer and WordCounter, which will give you breakdowns by phrase length (6 words, 5 words, 4 words, etc.) without excluding common terms. Here is an insightful example using our text with 7-word strings:

[Image: 7-word strings found by inductive word frequency analysis]

Perhaps more insightfully, here is a list of 5-word combinations, which are much more common:

[Image: 5-word strings found by inductive word frequency analysis]

The downside to these tools is that you cannot find 2- and 1-word strings without excluding common words. This is a limitation, but it’s unlikely that the work required to get there is worth the value it brings.

OK. Now that we’ve seen how to go about coding our text into quantifiable data, let’s look at the deductive approach and try to figure out if the text contains a single word we’re looking for. (This is my favorite.)

Deductive word frequency

We know the text now because we’ve already looked through it. It’s about the process of becoming literate, namely, the elements that impact our ability to learn to read. But we only looked at the first four sections of the article, so there’s more to explore.

Let's say we want to know how a household situation might impact a student's ability to read. Instead of coding the entire article, we can simply look for this term and its synonyms. The process for deductive word frequency is the following:

  1. Identify your term
  2. Think of all the possible synonyms
  3. Use Word's Find function to see how many times they appear
  4. If you suspect that this word often comes in connection with others, try searching for both of them

In my example, the process would be:

  1. Household
  2. Parents, parent, home, house, household situation, household influence, parental, parental situation, at home, home situation
  3. Go to "Edit > Find > Replace…". This will enable you to locate the number of instances in which your word or combinations appear. We use the Replace window instead of the simple Find bar because it allows us to visualize the information.
  4. Accounted for in possible synonyms

The results: 0! None of these words appeared in the text, so we can conclude that this text has nothing to do with a child's home life and its impact on his/her ability to learn to read.

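If you'd rather script the search than click through Word, the same process can be sketched in a few lines of Python (the passage below is a stand-in for the article, so the counts simply illustrate the idea):

```python
import re

# A toy passage standing in for the article on learning to read.
text = """Phonological awareness and classroom instruction shape how children
learn to read. Teachers model decoding strategies daily.""".lower()

# Steps 1-2: the target term and its possible synonyms.
synonyms = ["household", "parents", "parent", "home", "house",
            "parental", "at home", "home situation"]

# Step 3: count whole-word (or whole-phrase) occurrences of each term.
for term in synonyms:
    hits = re.findall(r"\b" + re.escape(term) + r"\b", text)
    print(f"{term!r}: {len(hits)}")
# Every count is 0 here, mirroring the result described above.
```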

Don’t Be Afraid of Content Analysis

Content analysis can be intimidating because it uses data analysis to quantify words. This article provides a starting point for your analysis, but to ensure you get 90% reliability in word coding, sign up to receive our eBook Beginner Content Analysis. I went from philosophy student to a data-heavy finance career, and I created it to cater to research and dissertation use cases.

Content analysis vs natural language processing

While similar, content analysis, even the deductive word frequency approach, and natural language processing (NLP) are not the same. The relationship is hierarchical. Natural language processing is a field of linguistics and data science that’s concerned with understanding the meaning behind language.

On the other hand, content analysis is a branch of natural language processing that focuses on the methodologies we discussed above: discovery-style coding (sometimes called "tokenization") and word frequency (sometimes called the "bag of words" technique).

For example, we would use natural language processing to quantify huge amounts of linguistic information, turn it into row-and-column data, and run tests on it. NLP is incredibly complex in the details, which is why it’s nearly impossible to provide a synopsis or example technique here (we’ll provide them in coursework on AnalystAnswers.com). However, content analysis only focuses on a few manual techniques.
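
To make the "bag of words" idea concrete, here is a toy sketch using scikit-learn's CountVectorizer (my choice of library for illustration; the author doesn't prescribe one):

```python
from sklearn.feature_extraction.text import CountVectorizer

# Three toy "documents".
docs = ["the cat sat on the mat",
        "the dog sat on the log",
        "cats and dogs and mats"]

# Bag of words: tokenize each document and count word occurrences,
# discarding word order entirely.
vectorizer = CountVectorizer()
matrix = vectorizer.fit_transform(docs)

print(vectorizer.get_feature_names_out())  # the learned vocabulary
print(matrix.toarray())                    # rows = documents, columns = counts
```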

Content analysis in marketing

Content analysis in marketing is the use of content analysis to improve marketing reach and conversions. It has grown in importance over the past ten years. As digital platforms become more central to our understanding of and interaction with others, we use them more.

We write out ideas and small texts. We post our thoughts on Facebook and Twitter, and we write blog posts like this one. But we also post videos on YouTube and express ourselves in podcasts.

All of these mediums contain valuable information about who we are and what we might want to buy. A good marketer aims to leverage this information in four ways:

  1. Collect the data
  2. Analyze the data
  3. Modify his/her marketing messaging to better serve the consumer
  4. Pretend, with bots or employees, to be a consumer and craft messages that influence potential buyers

The challenge for marketers doing this is getting the rights to access this data. Indeed, data privacy laws have gone into effect in the European Union (the General Data Protection Regulation, or GDPR) as well as in Brazil (the General Data Protection Law, or LGPD).

Content analysis vs narrative analysis

Content analysis is concerned with themes and ideas, whereas narrative analysis is concerned with the stories people express about themselves or others. Narrative analysis uses the same tools as content analysis, namely coding (or tokenization) and word frequency, but its focus is on narrative relationship rather than themes. This is easier to understand with an example. Let’s look at how we might code the following paragraph from the two perspectives:

I do not like green eggs and ham. I do not like them, Sam-I-Am. I do not like them here or there. I do not like them anywhere!

Content analysis: the ideas expressed include green eggs and ham; the narrator does not like them.

Narrative analysis: the narrator speaks in the first person. He has a relationship with Sam-I-Am. He orients himself with regard to time and space. He does not like green eggs and ham, and may be willing to act on that feeling.

Content analysis vs document analysis

Content analysis and document analysis are very similar, which explains why many people use them interchangeably. The core difference is that content analysis examines all mediums in which words appear, whereas document analysis only examines written documents.

For example, if I want to carry out content analysis on a master's thesis in education, I would consult documents, videos, and audio files. I may transcribe the video and audio files into a document, but I wouldn't exclude them from the beginning.

On the other hand, if I want to carry out document analysis on a master’s thesis, I would only use documents, excluding the other mediums from the start. The methodology is the same, but the scope is different. This dichotomy also explains why most academic researchers performing qualitative content analysis refer to the process as “document analysis.” They rarely look at other mediums.

Content Gap Analysis

Content gap analysis is a term common in the field of content marketing, but it applies to the analytical fields as well. In a sentence, content gap analysis is the process of examining a document or text and identifying the missing pieces, or "gap," that it needs to be complete.

As you can imagine, a content marketer uses gap analysis to determine how to improve blog content. An analyst uses it for other reasons. For example, he/she may have a standard for documents that merit analysis. If a document does not meet the criteria, it must be rejected until it’s improved.

The key message here is that content gap analysis is not content analysis. It's a way of measuring how far an underperforming document is from an acceptable one. It is sometimes, but not always, used in a qualitative content analysis context.

  1. Link to Source Text

In our big data era, content analysis software programs (also called document analysis tools or text mining software) are more crucial than ever.

They help you examine almost any type of unstructured text data, such as business documents, emails, social media, chats, comments, news, blogs, competitor websites, marketing survey questions, customer feedback, product reviews, call center transcripts, and even scientific documents.

They come in many forms: premium or free online text analysis tools, and on-premise document analysis software for Mac and Windows, among others.

On this page, we have collected 10 of the top software tools for content analysis to help you gain valuable insights from large amounts of unstructured data.

1. WordStat

WordStat is a flexible and very easy-to-use content analysis and text mining software tool for handling large amounts of data. It helps you to quickly extract themes, patterns, and trends and analyze unstructured and structured data from many types of documents.

You can use WordStat for a wide variety of content examination activities, such as analysis of interview or focus group transcripts, competitive website analysis, business intelligence, and information extraction from customer complaints.

In addition, WordStat can analyze news coverage or scientific literature, and can help you with fraud detection, patent analysis, and authorship attribution.

Key Benefits and Features:

  • Text processing capabilities.
  • Integrated exploratory text mining and visualization tools such as clustering, proximity plots, and more.
  • Relates unstructured to structured data such as dates, numbers or categorical data for identifying temporal trends.
  • Univariate keyword frequency analysis.
  • Keyword retrieval function and keyword co-occurrence analysis.
  • Analysis of case or document similarity.
  • Automated text classification and many others.

Operating Systems: Microsoft Windows XP, 2000, Vista, Windows 7, 8 and 10, Mac OS, Linux.

Website: https://provalisresearch.com/

2. Lexalytics

When it comes to the best content analysis software and tools, Lexalytics definitely has a top place here.

Lexalytics makes cloud and on-premise text and sentiment analysis solutions that help you to transform customers’ thoughts and conversations into valuable and actionable insights.

Lexalytics’ products SaaS Semantria and on-premise Salience are integrated into platforms for social media monitoring, survey analysis, reputation management, market research, and many other content analysis activities.

Key Benefits and Features:

  • Named entity extraction for identifying named text figures such as people, places, and brands, specific abbreviations, street addresses, phone numbers and whatever else you want.
  • Themes to deal with multiple-meaning words.
  • Categories.
  • Intentions such as Buy, Sell, Recommend, and Quit.
  • Sentiment analysis to show you how consumers feel about their subject.
  • Natural language processing engine.

Salience is available for Microsoft Windows and Linux servers. Semantria API is for analyzing documents in the cloud.

Website: https://www.lexalytics.com

3. DiscoverText

DiscoverText takes an innovative approach, combining adaptive software algorithms with human-based coding for conducting large-scale analyses.

The software can merge unstructured data from different sources. Examples of unstructured data are different documents and text files, open-ended answers on surveys, emails, and other offline and online text sources.

With one platform, you can maintain your data quality metrics and capture, filter, de-duplicate, search, cluster, human-code, and machine-classify large numbers of small text units.

Key Benefits and Features:

  • Advanced, multi-faceted power tool for text analysis.
  • Schedule repeat fetches of live feeds via API.
  • Classification via automation and manual training.
  • Attach memos to documents and datasets.
  • Redact sensitive information.
  • Connect and work with the team via your browser.
  • Generate high-level summary.
  • Build topic models to automate.
  • Enjoy a cloud-hosted application.
  • Share projects, re-use models, and update results.

Website: https://discovertext.com

4. RapidMiner Text Extension

If you are searching for the best free content analysis software, the RapidMiner Text Extension is worth considering. It is an extension of the popular free and open-source data science software platform, RapidMiner.

The RapidMiner Text Extension has it all for statistical text analysis and natural language processing (NLP). It allows you to load texts from a variety of data sources and documents, transform them, and effortlessly analyze your content.

Key Benefits and Features:

  • Supports many text formats including plain text, HTML, or PDF.
  • Provides standard filters for tokenization, stemming, stop word filtering, and n-gram generation (see the sketch after this list).
  • Document class that stores the whole documents in combination with additional meta information.
  • Statistical text analysis.
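
For readers who prefer code to a GUI, here is roughly what those standard preprocessing steps look like in Python with the NLTK library (shown only for comparison; it is separate from RapidMiner):

```python
import nltk
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
from nltk.tokenize import wordpunct_tokenize
from nltk.util import ngrams

nltk.download("stopwords", quiet=True)  # one-time corpus download

text = "Content analysis tools extract recurring themes from large document sets."

tokens = [t.lower() for t in wordpunct_tokenize(text) if t.isalpha()]  # tokenization
filtered = [t for t in tokens if t not in stopwords.words("english")]  # stop word filtering
stemmed = [PorterStemmer().stem(t) for t in filtered]                  # stemming
bigrams = list(ngrams(stemmed, 2))                                     # n-gram generation

print(stemmed)
print(bigrams)
```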

You have to be a member of the RapidMiner community to use their Text Extension. You can join the community and use all the RapidMiner extensions for free.

Website: https://marketplace.rapidminer.com/

5. SAS Text Miner

No doubt that SAS is one of the biggest names in the business intelligence and data mining industry. Their Text Miner product can bring you fast and deep insight from a wide variety of unstructured documents and other data types.

The software allows you to easily analyze text data from the web, books, comment fields, and many other text sources. The tool automatically reads text data and delivers advanced analysis algorithms to help you spot trends and seize new opportunities.

Key Benefits and Features:

  • High-performance text mining to help you quickly evaluate a large number of document collections.
  • Processing interface conforms to Windows accessibility standards.
  • Automatic Boolean rule generation to easily classify content.
  • Term profiling and trending.
  • Document theme discovery.
  • Native support for multiple languages.
  • Determine what’s hot and what’s not.

Website: https://www.sas.com

6. Text2data

Text2data is a great product for those who are looking for affordable text mining and content analysis software.

This is an NLP (natural language processing) and deep learning platform with proprietary algorithms that provides cloud-based text analytics services for understanding your customers better.

No matter if you want to analyze social media, emails, free-text survey questions, or other sources of information about your customers' experiences, the system has you covered.

Key Benefits and Features:

  • Sentiment analysis to identify and extract subjective information in a text document.
  • Text/ Document summarization.
  • Document classification based on pre-trained data models.
  • Entity extraction, such as the names of persons, organizations, locations, etc.
  • Theme discovery that can significantly improve inference of the document's context.
  • Keyword analysis of the documents and assigning the sentiment score to them.
  • Citation detection.
  • Slang detection.

Website: http://text2data.org/

7. Etuma

Etuma Text Analysis is one of the top content analysis software tools, able to turn your open-ended customer feedback into actionable information.

Etuma serves a wide variety of business domains such as customer experience management, competitor analysis, market analysis and intelligence, contact centers, sentiment analysis, voice of the customer (research surveys), social media monitoring, employee engagement, and chat analysis.

Etuma discovers in real-time what your customers like and dislike about your products. Etuma is a SaaS product that runs in the cloud.

Key Benefits and Features:

  • Multi-language: understands several languages and delivers results in one language.
  • Automatic software with no human work. It uses Natural Language Processing (NLP) and Artificial Intelligence.
  • Consistent – relevant industry-specific categorization.
  • Discovers in real-time what your customers like and dislike.
  • Can analyze any type of feedback.
  • Easy to integrate: the tool has connectors to a variety of survey, contact center, and customer experience management platforms such as Salesforce, Zendesk, and others.
  • A large number of text analysis and text mining functions, such as syntactic parsing, word counts, tokenization, word stemming, end-of-sentence detection, part-of-speech (POS) identification, spelling correction, synonyms, semantic topics, sentiment detection, and more.

Website: www.etuma.com

8. Luminoso

Luminoso is a great text mining and analysis tool that deals with large amounts of unstructured data to enable you to quickly surface key topics, identify the conversations that matter, tap into customer service, and track trends over time.

This easy-to-use tool allows you to quickly explore and analyze data like open-ended survey responses, product reviews, and call center transcripts.

The software uses the latest methods in artificial intelligence and natural language understanding to help you in a variety of business challenges like understanding key drivers behind NPS scores and classifying customer support tickets.

Key Benefits and Features:

  • Quickly uncover high-value insights in your text data.
  • Can natively analyze data in 13 languages, including English, Arabic, Chinese, Spanish, and Russian.
  • Within minutes, Luminoso Text Analytics identifies key topics, meaningful connections, and trends.
  • Latest methods in artificial intelligence and natural language understanding.
  • Flexible outputs and intuitive dashboards that make reporting and sharing easy.
  • Analyzing contact center data.
  • Uncover connections between employees’ engagement levels and customers’ satisfaction.
  • Understand caregiving needs by analyzing text data from online communities.

Website: https://luminoso.com

9. Pingar DiscoveryOne

Pingar DiscoveryOne is content analysis software designed to let enterprises discover opportunities and risks hidden in the text of web and corporate data.

It can help you reduce the cost of document storage and security. This powerful text analytics engine is able to show you relevant summaries of articles, feeds from social channels, reports, or even inbound customer email complaints.

Pingar DiscoveryOne aims to indicate trends, topics, and issues exposed in different types of documents, posts, articles, and emails.

Key Benefits and Features:

  • Content Intelligence solution that quickly identifies key market data and identifies trends over time.
  • Based on machine learning and can be uniquely adapted to any industry.
  • Can constantly crawl the internet in order to find relevant information for monitoring competitors, emerging trends and analyzing target markets.
  • Improves search, enables defensible deletion, and identifies document security risks.
  • Shows reports with graphs displaying important relationships.
  • Combines social media, news, blogs, forums, chats, emails, and documents into the same analysis.
  • Can read your documents and categorize them according to your needs.
  • Can detect networks, relationships, and their importance.
  • Media and sentiment analysis, and more.

Website: http://pingar.com/

10. MeaningCloud

MeaningCloud is one of the easiest and most affordable content analysis software tools; it can extract meaning from any text source: social conversations, posts, emails, articles, business documents, and more.

You can analyze customer feedback through email, social media, call center, surveys, tweets and comments and any other communication channel. You can handle your content publishing, social media analysis, document coding, and management.

MeaningCloud is an in-cloud solution without a need to install it on your computer.

Key Benefits and Features:

  • Combines advanced features such as feature-level sentiment analysis and social media language processing.
  • Very easy to use and integrate.
  • Multiple languages – you can analyze contents in English, Spanish, French, Portuguese or Italian.
  • Affordable – you pay only for what you use.
  • The tool automatically classifies documents of any type (medical records, notes, claims, etc.).
  • Very customizable content analysis software.
  • Voice of the Customer analysis, customer experience management, content publishing, and monetization.

Conclusion:

Picking the best content analysis software and text data mining tools for your needs isn’t an easy process.

Although there are plenty of good options available on the market that combine advanced technologies (such as artificial intelligence and natural language understanding), there may be no single perfect solution.

Unstructured data is one of the most powerful assets available, capable of bringing hidden and even unexpected value to your business.

Before choosing your text analysis software solution, you should examine your content analysis needs first. Depending on your business size, it might be a significant research project.

A comprehensive analysis of text data adds a high level of intelligence and gives you decision support at any scale, opportunities, competitive advantages, and increased work efficiency.

So, take your time to examine your needs and define the software features you need and then go for the solution.

What are your suggestions for good software for content analysis and text data mining tools? Share your thoughts and experience in the comment area below.

Published on July 18, 2019 by Amy Luo. Revised on December 5, 2022.

Content analysis is a research method used to identify patterns in recorded communication. To conduct content analysis, you systematically collect data from a set of texts, which can be written, oral, or visual:

  • Books, newspapers and magazines
  • Speeches and interviews
  • Web content and social media posts
  • Photographs and films

Content analysis can be both quantitative (focused on counting and measuring) and qualitative (focused on interpreting and understanding). In both types, you categorize or “code” words, themes, and concepts within the texts and then analyze the results.

What is content analysis used for?

Researchers use content analysis to find out about the purposes, messages, and effects of communication content. They can also make inferences about the producers and audience of the texts they analyze.

Content analysis can be used to quantify the occurrence of certain words, phrases, subjects or concepts in a set of historical or contemporary texts.

Quantitative content analysis example

To research the importance of employment issues in political campaigns, you could analyze campaign speeches for the frequency of terms such as unemployment, jobs, and work and use statistical analysis to find differences over time or between candidates.
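
As a rough sketch of such a count in Python (the speech snippets here are invented):

```python
import re
from collections import Counter

# Invented snippets of two candidates' campaign speeches.
speeches = {
    "candidate_a": "Jobs, jobs, jobs. We will fight unemployment with work programs.",
    "candidate_b": "Our schools need reform, and working families need security.",
}

employment_terms = {"unemployment", "jobs", "work"}

for candidate, speech in speeches.items():
    words = re.findall(r"[a-z]+", speech.lower())
    # Exact word matches only; stemming would also catch forms like "working".
    counts = Counter(w for w in words if w in employment_terms)
    print(candidate, dict(counts))
```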

In addition, content analysis can be used to make qualitative inferences by analyzing the meaning and semantic relationship of words and concepts.

Qualitative content analysis example

To gain a more qualitative understanding of employment issues in political campaigns, you could locate the word unemployment in speeches, identify what other words or phrases appear next to it (such as economy, inequality or laziness), and analyze the meanings of these relationships to better understand the intentions and targets of different campaigns.
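
A sketch of that kind of collocation check in Python, again with an invented snippet:

```python
import re
from collections import Counter

speech = """Unemployment is the product of a weak economy, and unemployment
deepens inequality. We will not blame laziness; we will fix the economy."""

words = re.findall(r"[a-z]+", speech.lower())
window = 3  # how many words on each side of the keyword to inspect

# Tally the words that appear near each occurrence of "unemployment".
neighbors = Counter()
for i, word in enumerate(words):
    if word == "unemployment":
        neighbors.update(words[max(i - window, 0):i] + words[i + 1:i + window + 1])

print(neighbors.most_common(5))
```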

Because content analysis can be applied to a broad range of texts, it is used in a variety of fields, including marketing, media studies, anthropology, cognitive science, psychology, and many social science disciplines. It has various possible goals:

  • Finding correlations and patterns in how concepts are communicated
  • Understanding the intentions of an individual, group or institution
  • Identifying propaganda and bias in communication
  • Revealing differences in communication in different contexts
  • Analyzing the consequences of communication content, such as the flow of information or audience responses

Advantages of content analysis

  • Unobtrusive data collection

You can analyze communication and social interaction without the direct involvement of participants, so your presence as a researcher doesn’t influence the results.

  • Transparent and replicable

When done well, content analysis follows a systematic procedure that can easily be replicated by other researchers, yielding results with high reliability.

  • Highly flexible

You can conduct content analysis at any time, in any location, and at low cost – all you need is access to the appropriate sources.

Disadvantages of content analysis

    • Reductive

    Focusing on words or phrases in isolation can sometimes be overly reductive, disregarding context, nuance, and ambiguous meanings.

    • Subjective

    Content analysis almost always involves some level of subjective interpretation, which can affect the reliability and validity of the results and conclusions, leading to various types of research bias and cognitive bias.

    • Time intensive

    Manually coding large volumes of text is extremely time-consuming, and it can be difficult to automate effectively.

    How to conduct content analysis

    If you want to use content analysis in your research, you need to start with a clear, direct research question.

    Example research question for content analysis

    Is there a difference in how the US media represents younger politicians compared to older ones in terms of trustworthiness?

    Next, you follow these five steps.

    1. Select the content you will analyze

    Based on your research question, choose the texts that you will analyze. You need to decide:

    • The medium (e.g. newspapers, speeches or websites) and genre (e.g. opinion pieces, political campaign speeches, or marketing copy)
    • The inclusion and exclusion criteria (e.g. newspaper articles that mention a particular event, speeches by a certain politician, or websites selling a specific type of product)
    • The parameters in terms of date range, location, etc.

    If there are only a small number of texts that meet your criteria, you might analyze all of them. If there is a large volume of texts, you can select a sample.

    To research media representations of younger and older politicians, you decide to analyze news articles and opinion pieces in print newspapers between 2017–2019. Because this is a very large volume of content, you choose three major national newspapers and sample only Monday and Friday editions.

    2. Define the units and categories of analysis

    Next, you need to determine the level at which you will analyze your chosen texts. This means defining:

    • The unit(s) of meaning that will be coded. For example, are you going to record the frequency of individual words and phrases, the characteristics of people who produced or appear in the texts, the presence and positioning of images, or the treatment of themes and concepts?
    • The set of categories that you will use for coding. Categories can be objective characteristics (e.g. aged 30-40, lawyer, parent) or more conceptual (e.g. trustworthy, corrupt, conservative, family oriented).

    Your units of analysis are the politicians who appear in each article and the words and phrases that are used to describe them. Based on your research question, you have to categorize based on age and the concept of trustworthiness. To get more detailed data, you also code for other categories such as their political party and the marital status of each politician mentioned.

    3. Develop a set of rules for coding

    Coding involves organizing the units of meaning into the previously defined categories. Especially with more conceptual categories, it’s important to clearly define the rules for what will and won’t be included to ensure that all texts are coded consistently.

    Coding rules are especially important if multiple researchers are involved, but even if you’re coding all of the text by yourself, recording the rules makes your method more transparent and reliable.

    In considering the category “younger politician,” you decide which titles will be coded with this category (senator, governor, counselor, mayor). With “trustworthy”, you decide which specific words or phrases related to trustworthiness (e.g. honest and reliable) will be coded in this category.

    4. Code the text according to the rules

    You go through each text and record all relevant data in the appropriate categories. This can be done manually or aided with computer programs, such as QSR NVivo, Atlas.ti and Diction, which can help speed up the process of counting and categorizing words and phrases.

    Following your coding rules, you examine each newspaper article in your sample. You record the characteristics of each politician mentioned, along with all words and phrases related to trustworthiness that are used to describe them.
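
The coding rules themselves are qualitative decisions, but applying them can be partly automated. A minimal sketch in Python, with hypothetical keyword lists standing in for real coding rules:

```python
# Hypothetical coding rules: each category maps to the words and phrases
# that the rules say belong to it.
coding_rules = {
    "trustworthy": ["honest", "reliable", "trustworthy", "principled"],
    "untrustworthy": ["corrupt", "dishonest", "unreliable"],
}

def code_text(text, rules):
    """Tally how many coded terms from each category appear in the text."""
    lowered = text.lower()
    # Note: substring counting is crude ("honestly" counts as "honest");
    # real projects refine this with word boundaries or manual review.
    return {category: sum(lowered.count(term) for term in terms)
            for category, terms in rules.items()}

article = ("The honest young senator gave a reliable account, "
           "while her corrupt rival dodged questions.")
print(code_text(article, coding_rules))
```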

    5. Analyze the results and draw conclusions

    Once coding is complete, the collected data is examined to find patterns and draw conclusions in response to your research question. You might use statistical analysis to find correlations or trends, discuss your interpretations of what the results mean, and make inferences about the creators, context and audience of the texts.

    Let’s say the results reveal that words and phrases related to trustworthiness appeared in the same sentence as an older politician more frequently than they did in the same sentence as a younger politician. From these results, you conclude that national newspapers present older politicians as more trustworthy than younger politicians, and infer that this might have an effect on readers’ perceptions of younger people in politics.
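
For the statistical step, a standard test such as chi-square can check whether a difference like this is significant. A sketch with invented counts, using SciPy:

```python
from scipy.stats import chi2_contingency

# Invented counts of sentences mentioning each group of politicians,
# split by whether a trustworthiness term appeared in the same sentence.
#              with term, without term
older_row   = [120, 380]
younger_row = [60, 440]

chi2, p_value, dof, expected = chi2_contingency([older_row, younger_row])
print(f"chi-square = {chi2:.2f}, p = {p_value:.4f}")
```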

    Cite this Scribbr article

    If you want to cite this source, you can copy and paste the citation below.

    Luo, A. (2022, December 05). Content Analysis | Guide, Methods & Examples. Scribbr. Retrieved April 13, 2023, from https://www.scribbr.com/methodology/content-analysis/


    Content analysis is a research method used to identify specific patterns of words and concepts within a text or set of documents.

    Content analysis has been in practice since the early 1940s. At first, it was limited to examining texts manually, or through slow mainframe computers, to check the frequency of specific terms and words. Later, in the 1950s, content analysis incorporated more sophisticated means of analysis that focused on concepts and semantic relationships as well.

    Nowadays, with the revolution in technology, content analysis is used to analyze various aspects of content, exploring mental models alongside the cognitive, linguistic, cultural, social, and historical significance of content.

    In this post, we will dive deep into the world of content analysis and understand how it plays a crucial role in finding out the statistical impact of a particular kind of content. So, without any further ado, let us get started right away.

    Introduction to Content Analysis

    Content Analysis is a quantitative as well as a qualitative method that offers a more objective evaluation of the content.

    It will certainly be more accurate than a comparison based on the impressions of any listener, and it is more effective than a review or evaluation. You will find the essential numbers and percentages to gauge the performance of your content.

    You can use content analysis to remove subjectivity from your summaries and to simplify the detection of trends in your niche.

    To conduct content analysis, you generally collect data from oral, visual, or written texts. This kind of data is found in books, magazines, newspapers, speeches, films, interviews, photographs, social media, the web, and various other sorts of content.

    All in all, content analysis is an expert-level technique that helps in finding out the purposes, effects, and messages of any form of communication content.

    Let us now have a look at the types of texts in content analysis:

    • Written texts, for instance papers and books
    • Oral texts, for instance theatrical performances and speeches
    • Iconic texts, for instance paintings, drawings, and icons
    • Audio-visual texts, for instance TV programs, videos, and movies
    • Hypertexts, such as texts found on the Internet

    Questions that every Content Analysis should Answer

    Klaus Krippendorff suggests six questions that should be addressed by every content analysis:

    • Which data or information are investigated or analysed?
    • How are the data and related information defined or characterised?
    • From what population are the data drawn?
    • What is the relevant context of the content?
    • What are the limits of your content analysis?
    • What is to be measured with the content analysis?

    Types of Content Analysis

    As discussed above, quantitative and qualitative are the two forms of content analysis, and understanding their differences will help you appreciate the significance of each:

    1. Quantitative Content Analysis

    You can use it by focusing on counting and measuring the occurrence of specific phrases, words, concepts, and subjects. For instance, if you are performing content analysis on a speech about employment issues, terms such as jobs, unemployment, and work will be counted and analyzed.

    2. Qualitative Content Analysis

    This kind of content analysis focuses on the interpretation and understanding of a particular type of content. For instance, if we perform qualitative analysis on the aforementioned employment-issues speech example, you will look for the term unemployment and other terms (inequality, economy, etc.) next to it. Then, you should analyze the relationships among these terms to gauge the intentions and semantic relations of these terms and concepts in the campaigns.

    These two types of content analysis can be further understood as conceptual analysis and relational analysis. Let us understand this division of content analysis as well:

    3. Conceptual Analysis

    Conceptual analysis is similar to quantitative analysis and is performed in a specific manner. In conceptual content analysis, a concept is chosen for examination, and the study involves quantifying and tallying its presence.

    4. Conceptual Content Analysis Example

    For example, say that you have the impression that your favorite author often writes about love. With conceptual analysis, you can quickly determine how many times words such as crush, fondness, liking, and adore appear in the text.
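
A quick sketch of that tally in Python, with a made-up passage (the regex also catches simple variants such as "adored"):

```python
import re

# Words taken as indicators of the "love" concept in the example above.
pattern = re.compile(r"\b(love|crush|fondness|liking|adore)\w*\b")

passage = """She adored the letters. Her fondness grew with every page,
and the old crush slowly turned into love.""".lower()

matches = pattern.findall(passage)  # returns the base word of each match
print(f"{len(matches)} occurrences of the concept: {matches}")
```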

    5. Relational Analysis

    Relational analysis, by contrast, begins with identifying the ideas already present in the given text or set of documents. It is quite similar to qualitative analysis. It deals with the examination of relationships among the concepts and terms in content.

    6. Relational Content Analysis Example

    Returning to the same 'love' example, you start by examining the relations within the content. You identify these words (such as crush, fondness, liking, adore) and then conclude what different meanings emerge from this group of words. It is then that you conclude that your favorite author writes about love very often.

    Thus, we can say that conceptual analysis focuses on the occurrence of selected terms in the text, although the terms can be implicit as well as explicit.

    Relational analysis, on the other hand, looks for semantic or meaningful relationships. Individual concepts, in and of themselves, are viewed as having no inherent significance; rather, meaning is a product of the relationships among concepts in a text.

    Why is Content Analysis important?

    Now you must be wondering: why use content analysis if it is so time-consuming?

    Researchers use content analysis to learn about the messages, purposes, and effects of communication content.

    Because it can be applied to examine any piece of writing or any occurrence of recorded communication, it is remarkably versatile, and that is the central reason researchers use it. They can also make inferences about the producers and audiences of the texts they analyze.

    Moreover, content analysis is used across a wide array of fields, ranging from gender and age issues to psychology and cognitive science, marketing and media studies, literature and rhetoric, sociology and political science, ethnography and cultural studies, and many other areas of inquiry. Content analysis also has close ties with socio- and psycholinguistics and plays an integral role in the development of artificial intelligence.

    Uses of Content Analysis

    1) You can use content analysis to make inferences about the antecedents of communication, such as:

    • Analyzing the traits of individuals
    • Inferring cultural aspects & change
    • Providing legal & evaluative evidence
    • Answering questions of disputed authorship

    2) Content analysis is also used to describe and make inferences about the characteristics of a communication, such as:

    • Describing trends in communication content
    • Associating known characteristics of sources to messages they produce
    • Comparing communication content to standards
    • Establishing the relationship of known characteristics of audiences to messages produced for them
    • Expressing different patterns of communication
    • Evaluating techniques of persuasion

    3) Content analysis can also be used to make inferences about the effects and consequences of communication, such as:

    • Measuring readability (see the sketch after this list)
    • Analyzing the flow of information
    • Assessing responses to communications
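
    For instance, readability can be estimated with the Flesch reading-ease formula. The sketch below uses a crude vowel-group heuristic to count syllables, so its scores are only approximate:

    ```python
    import re

    def syllables(word):
        """Rough syllable count: one per group of consecutive vowels."""
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    def flesch_reading_ease(text):
        """Flesch score: 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)."""
        sentences = max(1, len(re.findall(r"[.!?]+", text)))
        words = re.findall(r"[A-Za-z']+", text)
        syls = sum(syllables(w) for w in words)
        return 206.835 - 1.015 * (len(words) / sentences) - 84.6 * (syls / len(words))

    print(round(flesch_reading_ease("The cat sat on the mat. It was warm."), 1))
    ```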

    Advantages

    Because content analysis is used across a wide range of fields and a broad range of texts, from marketing to the social sciences, it can serve many possible goals. The main ones are:

    • Determining the psychological and emotional state of a person and understanding their intentions
    • Revealing differences in communication across different contexts
    • Finding correlations and patterns in how concepts are conveyed to different types of target audiences
    • Revealing international differences in communication content
    • Detecting the presence of propaganda and bias in communication
    • Describing attitudinal and behavioral responses to communications, and many more

    Content analysis allows us to analyze communication and social interaction without the direct involvement of the participants.

    It follows a systematic procedure that can easily be reproduced by other researchers, generating results with high reliability. It can also be conducted at any time and in any location at little cost; all you need is access to the appropriate sources.

    Disadvantages 

    All coins are two-faced; likewise, content analysis has certain disadvantages as well as advantages.

    We have already covered the benefits; now for the downsides.

    Content analysis can be reductive: its focus on words or phrases in isolation can oversimplify, especially when dealing with complex texts.

    It is also subjective: it almost always involves some level of personal interpretation, which in turn affects the reliability and validity of the results and conclusions. And content analysis is time-intensive: manually coding large quantities of content is extremely time-consuming, and it can be difficult to automate or computerize effectively.

    It is also prone to error, particularly when relational analysis is used to attain a higher level of interpretation. It often lacks a theoretical base, or attempts too liberally to draw meaningful inferences about the relationships and impacts implied in a study, and it tends to reduce texts to mere word counts.

    6 Steps to Conduct Content Analysis

    The next question, then, is how to conduct such an analysis:

    1) To use content analysis in your research, start with a clear, direct research question; identifying the problem is the first and foremost step. Once that is done, select the content you will analyze.

    2) Choose a sample for analysis. In this step, look for a medium, such as newspapers or speeches, from which you will take your content. Setting parameters such as location and date range is all part of selecting the material.

    3) Next, determine the type of analysis. Then reduce the text to categories, code it, and define the units; in other words, determine the level of analysis for the chosen text.

    4) Decide the units of meaning that will be coded. For example, you might record the frequency of the words that appear most often in the text, its themes and concepts, or the presence and positioning of images. Also decide the set of categories you will use for coding: objective characteristics such as female, mother, or lawyer, or conceptual ones such as family-oriented, trustworthy, or corrupt.

    5) Next, develop a set of coding rules and code the text according to them (or, for relational analysis, code the relationships). Coding involves organizing the units of meaning into the previously defined categories.

    Coding rules are essential, especially when multiple researchers are involved, and even if you are coding all the text yourself, they make the work more transparent and reliable. You code the text and record all the data by category. This can be done manually, but it can also be computerized to make counting and categorizing words and phrases much faster, as in the sketch below.
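
    As a minimal illustration of computerized coding, the snippet below assigns each unit of text to every category whose keywords it contains; the categories and keyword lists are hypothetical:

    ```python
    import re

    # Hypothetical coding scheme: category -> keywords that signal it.
    CODING_SCHEME = {
        "family-oriented": {"family", "children", "home", "mother"},
        "trustworthy": {"honest", "reliable", "integrity"},
        "corrupt": {"bribe", "scandal", "fraud"},
    }

    def code_unit(unit):
        """Return every category whose keywords appear in this unit of text."""
        words = set(re.findall(r"[a-z']+", unit.lower()))
        return [cat for cat, keys in CODING_SCHEME.items() if words & keys]

    units = ["She spoke about family and children.",
             "The report alleged a bribe and a scandal."]
    for u in units:
        print(u, "->", code_unit(u))
    ```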

    6) Lastly, map out the representation, analyze the results, and draw conclusions. Once coding is complete, the collected data are examined to find patterns and to draw conclusions in response to the research question with which you started.
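
    A small sketch of this final step, assuming `tallies` holds hypothetical category counts produced by the coding step above:

    ```python
    from collections import Counter

    # Hypothetical category tallies from the coding step.
    tallies = Counter({"family-oriented": 12, "trustworthy": 7, "corrupt": 3})

    total = sum(tallies.values())
    for category, count in tallies.most_common():
        print(f"{category}: {count} units ({count / total:.0%})")
    ```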

    This is how you can conduct a successful content analysis. As we have seen, it has both advantages and disadvantages, and it can help provide cultural and historical insights over time.

    Wrapping it up!

    Though it is time-consuming, content analysis helps researchers examine a particular text or set of documents to find specific patterns. It can be applied in virtually any field rather than being tied to one, which makes it flexible.

    It is one of the most effective methods for establishing the connection between causes, such as program content, and their effects. To make your surveys result-driven, it is essential to systematically establish a relationship between your survey findings and program output, and content analysis helps you do so in an adept fashion.

    Many organizations use content analysis to evaluate and improve their programming. So, what are your thoughts on using content analysis in your business model?

    Share your views on how content analysis can boost the performance of your marketing or advertising campaigns in the comments below.

