Four years ago, when Congress enacted NCLB, it showed uncommon wisdom in requiring all states to participate in the National Assessment of Educational Progress (NAEP). The reasoning: the national test would serve as a benchmark for the states' own reports of progress, and the resulting sunshine would help buffer state leaders from the inevitable political pressure to lower their own standards. Gadfly noted in October, following the release of the 2005 NAEP results in reading and math, that there was good reason to believe state exams showing huge achievement gains were in fact suspect. (See here and here.) The New York Times also took note, publishing a trenchant piece that illustrated the cavernous gap between states' definitions of "proficiency" and NAEP's. In Tennessee, for example, 87 percent of the state's 8th-grade students received "proficient" scores on the state exam, but just 21 percent of eighth graders who took the NAEP math exam reached the same level. That trend is also reflected, sadly, in the just-released Trial Urban District Assessment, a special project of NAEP (see here and here). Just as with the national scores released last month, urban district scores are basically flat. For example, the percentage of fourth-grade New York City students at the proficient level jumped seven points from 2003 to 2005 on the state assessment, but didn't budge on the NAEP. The value of a single standard and a single assessment instrument is clear, and we have Congress to thank for showing us the way.

"Students Ace State Tests, but Earn D's From U.S.," by Sam Dillon, New York Times, November 28, 2005