The New York Times education reporter Dana Goldstein on the rather silly notion that most kids are not “proficient” in writing:
Three-quarters of both 12th and 8th graders lack proficiency in writing, according to the most recent National Assessment of Educational Progress. And 40 percent of those who took the ACT writing exam in the high school class of 2016 lacked the reading and writing skills necessary to successfully complete a college-level English composition class, according to the company’s data.
We’ve heard all this before, of course. During the 1990s, we were told we had a “reading crisis,” that most kids were in danger of never learning how to read, or at least of never reading very well. This view was based in part on the results of the National Assessment of Educational Progress (NAEP), the “Nation’s Report Card” issued by the U.S. Department of Education.
So it’s time to re-up our discussion of the absurd NAEP numbers:
During the early years of the NAEP tests, the Department released only the raw scores for each age level on its 0 to 500 scale, with no designations of which score was thought to constitute “basic knowledge” or “proficiency.” The designers of the NAEP test later decided that simply reporting the raw scores was no longer adequate for judging the progress of United States schools. The Department decided it would determine how well students were reading by establishing the minimum score constituting “below basic,” “basic,” “proficient,” and “advanced” reading. The “basic” level for fourth-grade reading, for example, was fixed at a score of 208. In 1994, 40% of United States children scored below the “basic” cutoff of 208.
The problem with this approach lies in “objectively” determining where these cutoff points should be. Glass (1978), after reviewing the various methods proposed for creating “minimal” criterion scores of performance, concluded that all such efforts are necessarily arbitrary. Of course, such arbitrary cutoff points already exist in education and many other fields, but at least they are recognized as arbitrary and not given the status of absolute or objective levels of competence. In 1991, the General Accounting Office (GAO) examined how the NAEP defined its proficiency levels and found the methods to be questionable (Chelimsky, 1993).
NAEP scores on reading assessments, as I pointed out in a letter to the Wall Street Journal not long ago (see the letter pinned to the top of my Twitter feed), have never been higher for most students. Writing proficiency is strongly linked to reading proficiency. If kids can’t write nowadays, they probably never could, which means either (a) our economy runs just fine even though most people are not “proficient” at writing, or (b) the NAEP criterion levels are nonsense.
No one who is familiar with how the NAEP sausage is made takes these levels seriously. Neither should you, and neither should The New York Times.