Academic Vocabulary Instruction: Does Word Generation Really Teach You Two Years’ Worth of Words in 22 Weeks?

Study reviewed: Snow, C. E., Lawrence, J. F., & White, C. (2009). Generating knowledge of academic language among urban middle school students. Journal of Research on Educational Effectiveness, 2, 325–344.

One of the hot topics of the past decade or so in language education research has been the teaching of “academic language” and “academic vocabulary.”

I have no doubt that there is a thing called “academic language” (Jim Cummins told us this back in 1979), that it’s important for kids to have in order to be successful in school, and that it is rather complex in its many manifestations. In fact, Nagy and Townsend (2012) have convinced me that it is so complex, it’s almost certainly one of those aspects of our language competence that is mostly acquired, not learned (Krashen, 1982).

I – like you, I’m guessing – did not take an “academic language” course in high school. That’s because everything most of us know about reading academic texts came from reading academic texts. Academic language is acquired like all other aspects of language: through listening to or reading comprehensible input that contains that particular aspect (more on how to do this in a later post).

Yet perhaps because researchers need to justify their existence, most took their new-found knowledge of the characteristics of academic language and decided that it must be taught (in-services and Uncle Sam’s grant money soon followed, and I mean lots and lots and lots and lots and lots of grant money).

Researchers think their efforts have really been paying off. I think the opposite. I think that the evidence on the effectiveness of academic vocabulary instruction is weak, and that it is another potentially time-consuming distraction that teachers and administrators should be very wary of, at least in the forms currently on offer.

Snow, Lawrence, & White (2009)

An early study often cited as proof that academic language can be taught was done by Catherine Snow of Harvard University and her colleagues (Snow, Lawrence & White, 2009) (paywall, but open access here). Their study examined the effects of a program called Word Generation, which is supposed to teach students up to 120 “academic words” that are useful in helping them understand “the language of school.”

The Word Generation program includes a series of related, 15-minute vocabulary lessons conducted in different subject matter classes each day (e.g. Monday in social studies class, Tuesday in math class, etc.). The instruction also includes short readings and discussions. The intervention is meant to last 24 weeks, for a total of 120 words taught in 30 hours.
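
If you want to check that arithmetic, here is the program’s “dose” as a quick sketch in Python (the five-lessons-a-week schedule is taken from the program description above):

```python
# Word Generation "dose": one 15-minute lesson per school day,
# five days a week, for 24 weeks, covering 120 words.
minutes_per_lesson = 15
lessons_per_week = 5
weeks = 24
words_taught = 120

total_hours = minutes_per_lesson * lessons_per_week * weeks / 60
print(f"Total instruction: {total_hours:.0f} hours")   # 30 hours
print(f"Words per week: {words_taught / weeks:.0f}")   # 5
```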

Snow et al.’s experiment was conducted in Boston Public Schools, with 697 students in the treatment group and 319 in the comparison classes (all grades 6 – 8). The researchers thought the program was a great success. They stated that:

[S]tudents in the Word Generation program learned approximately the number of words that differentiated eighth from sixth graders on the pretest – in other words, participation in 20 to 22 weeks of the curriculum was equivalent to 2 years of incidental learning (p. 334, emphasis added).

This dramatic claim was repeated by Nagy and Townsend (2012) a few years later, and appears on the website of the group that sponsors the program, something called the Strategic Education Research Partnership.

The data from the study tell a different story, however.

Snow et al. gave all of the students a multiple-choice vocabulary test to measure how many words they learned. It consisted of 40 items drawn from the 120 academic words that were taught in the program. The treatment students were tested in October before the program began and then again in late May after it ended. But because life happens, the control group didn’t take the pretest until January, three months after the treatment group.

I’ve summarized the pretest and post-test scores by grade level in Table 1. I’ve adjusted the control group pretest scores to reflect the “missing” three months due to the late administration of the test. Snow et al. did this in a more roundabout way in another part of their paper, where they calculated the “per month” growth rates on the vocabulary assessment by school rather than grade (Table 4, p. 336). For Table 1, I assumed, as they did, that growth in vocabulary knowledge was linear from October to May (in Lawrence, Capotosto, Branum-Martin, White, & Snow (2012), the researchers make a similar assumption with their data).
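
For readers who want the mechanics, here is a minimal sketch of that adjustment. The January and May scores below are hypothetical, made up purely for illustration; only the method (assuming linear growth, as Snow et al. did) is the point:

```python
# Estimate what a control-group pretest score would have been in October,
# given that the test was actually administered three months late (January).
# Scores here are hypothetical; the real data are in Table 1.
jan_pretest = 19.0            # hypothetical January control-group score
may_posttest = 21.4           # hypothetical May control-group score

jan_to_may_months = 4         # January -> May
late_months = 3               # October -> January

monthly_growth = (may_posttest - jan_pretest) / jan_to_may_months
adjusted_oct_pretest = jan_pretest - late_months * monthly_growth
print(f"Estimated October pretest: {adjusted_oct_pretest:.1f}")   # 17.2
```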

Table 1: Snow et al. (2009) Word Generation Vocabulary Test Results

[table "22" not found /]

Data from Table 3, p. 337, of Snow et al. (2009). Maximum score = 40.

There are two things to notice about Table 1. First, the kids already knew half of the words they were to be taught. This is not an unusual finding in vocabulary instruction studies, and is in fact an inevitable problem when you try to teach any sort of discrete “skill” (spelling, grammar, etc.).

Second, there is a noticeable “summer slump” or decline each year – about half the gains made during the school year are “lost” over the summertime (for example, compare the post-test scores of grade 6 with the pretest scores of grade 7). This is again not unusual, but a well-documented pattern for students from low-income families. It is almost certainly a result of the relative lack of reading done by these students compared to kids from middle- and upper-class families.

Okay, back to our “two years’ gain in 22 weeks”: Snow and colleagues looked at the difference between the pretest scores of incoming sixth graders and those of incoming eighth graders on their vocabulary test and saw that it was 3.6 points (20.6 for the eighth graders minus 17.0 for the sixth graders). This, they reasoned, was the “incidental” gain in academic vocabulary you might typically expect a student to make in two years’ time, without any special instruction.

Since the average student in their experimental group gained 4.4 points, and 4.4 > 3.6, the kids in the Word Generation program gained two years’ worth of vocabulary in only 20-22 weeks. QED.

But wait: now look at the gains made by the control group during that same period of time: 3.6 points. The average difference between incoming sixth graders and eighth graders in the control group is 3.2 points. By Snow and colleagues’ logic, then, I can now declare that the average student in the control group also gained more than 2 years’ worth of academic vocabulary in just 20 to 22 weeks! Everyone wins a prize in the Boston Public Schools, it seems (Garrison Keillor, call your office).
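
The whole reductio fits in a few lines of Python, using only the figures quoted above:

```python
# Snow et al.'s yardstick: a gain counts as "two years' worth" if it
# exceeds the pretest gap between incoming sixth and eighth graders.
treatment_gain = 4.4                   # points, Word Generation group
control_gain = 3.6                     # points, comparison group

two_year_gap_treatment = 20.6 - 17.0   # 3.6 points
two_year_gap_control = 3.2             # points, reported for the controls

print(treatment_gain > two_year_gap_treatment)  # True: "two years' growth"
print(control_gain > two_year_gap_control)      # also True -- with no instruction
```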

Let’s try this again.

The difference between the gains made by the treatment and control groups was 4.4 – 3.6 = 0.8 points on the vocabulary test. Let’s assume, as Snow et al. did, that 3.6 points represents the typical growth on their test from sixth to eighth grade. Since most of this 3.6 point gain will take place during the school year (no gains are likely in the summer, as we’ve seen), we’ll divide 3.6 by 18 school months to get about 0.2 points gained per month.

The advantage of the Word Generation group over the control group was thus 0.8/0.2, or four months, not two years. Measured against the 18 school months their own yardstick implies, Snow et al.’s estimate was off by a factor of roughly five.
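
The same calculation in code, under the same linear-growth assumption:

```python
# Convert the 0.8-point advantage into months of typical growth.
two_year_growth = 3.6              # points, sixth-to-eighth grade pretest gap
school_months = 18                 # 9 school months per year, times 2

points_per_month = two_year_growth / school_months    # 0.2 points/month

advantage_points = 4.4 - 3.6                          # 0.8 points
advantage_months = advantage_points / points_per_month
print(f"Advantage: {advantage_months:.1f} months")    # 4.0 months
```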

“Okay,” you might be thinking, “four months is four months, and we should be happy with that, no?” But what does 0.8 points amount to in practice? Since the vocabulary test was a 40-word sample of the 120 words taught, we multiply this 0.8-point advantage by three (120/40) to get a final difference of 2.4 words. The treatment group, after having undergone 30 hours of instruction on 120 words, knew about two and a half words more than the control group, which had no instruction on the words at all. (That’s 0.08 words an hour, in case you’re counting, which nobody seems to be.)

The researchers emphasized that the kids actually picked up about 13 words in total (4.4 points times three = 13.2), which, while true, does not take into account the estimated 10.8 words they would have learned just sitting in the control classes. In any case, even 13 words in 30 hours seems a fairly small return on investment (less than half a word per hour).
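
Here are the points-to-words conversion and the return-on-investment figures in one place:

```python
# The 40-item test sampled the 120 taught words, so scale point
# differences by 120/40 = 3 to estimate words known.
scale = 120 / 40
hours = 30                     # total hours of instruction

net_words = 0.8 * scale        # 2.4 words: treatment advantage over control
gross_words = 4.4 * scale      # 13.2 words: total treatment-group gain
incidental_words = 3.6 * scale # 10.8 words: control-group gain, no instruction

print(f"Net:   {net_words / hours:.2f} words per hour")    # 0.08
print(f"Gross: {gross_words / hours:.2f} words per hour")  # 0.44
```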

Snow and company might respond that the kids got other things not directly measured in the study, such as better class discussions and exposure to more “academic discourse.” Perhaps. But 30 hours of reading can also deliver lots of benefits, including better spelling, writing, reading comprehension, and knowledge of the world (Krashen, 2004).

There was another “positive” finding in the study: students in the treatment group did better on the Massachusetts Comprehensive Assessment System (MCAS), the Bay State’s year-end reading test. How much better? I’d describe it as “barely.”

According to Snow et al.’s Table 7 (p. 338), the post-test by treatment interaction improved the amount of variance explained (R²) in their statistical model from .629 to .631. In other words, being in the Word Generation group explained an additional 0.2% of the variance in reading comprehension scores.
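
The effect size is easy to verify from their reported R² values:

```python
# R-squared values from Snow et al.'s Table 7 (p. 338).
r2_base = 0.629        # model without the treatment interaction
r2_treatment = 0.631   # model with the treatment interaction

delta_r2 = r2_treatment - r2_base
print(f"Variance explained by Word Generation: {delta_r2:.1%}")   # 0.2%
```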

Please clap.
