Measuring Up: What Education Testing Really Tells Us

Daniel Koretz
Harvard University Press

Measuring Up is an excellent primer on the basics of academic testing. In a pleasant, jargon-light manner, author and Harvard professor Daniel Koretz provides a brief history of American education testing along with real-world examples illustrating its strengths and weaknesses. He also provides lay-friendly definitions of buzzwords like criterion-referenced, measurement error, and validity.

Koretz doesn't pretend to be a fan of the educational testing brought on by the federal No Child Left Behind Act and works to discount much of the law's assessment-based accountability provisions. But most of this book is an acknowledgement that education testing isn't a black-or-white issue, and Koretz does a good job of telling both sides of the story. A few points are worth lifting for Gadfly readers, given the current testing debate in Ohio.

- On standardized testing, he cautions that the goals of education are diverse and that only some of these goals are amenable to standardized testing. But he equally acknowledges that standardized tests "avoid irrelevant factors that might distort comparisons between individuals" and are useful in shedding light on achievement gaps.

- On achievement as the sole measure of academic success, Koretz promotes the use of a growth or progress measure in addition to an achievement measure. But the two must go hand in hand. Schools showing large gains but failing to meet the proficiency bar shouldn't be excused for lousy test scores, just as schools posting outstanding test scores shouldn't be excused from helping kids make adequate progress year to year.

- For all the impersonality of standardized tests, their scoring (often done off-site by a computer or an impartial, trained rater) is rarely questioned. Portfolio assessments are a different story. Koretz cites the teacher-scoring of writing portfolios in Kentucky in the 1990s. The scores were important enough that teachers had an incentive to score leniently, and so, "when samples of portfolios were rescored by other raters in a state audit of scoring, it was discovered that the scores assigned by many classroom teachers to their own students were substantially too high."

Emmy L. Partin
Emmy L. Partin is Director of Ohio Policy & Research at the Thomas B. Fordham Institute