The Widget Effect
July 28, 2009
The New Teacher Project, with authors Daniel Weisberg, Susan Sexton, Jennifer Mulhern, and David Keeling
This revelatory study, with as much detail, rigor, and thoroughness as one could want, proves what we've long suspected: the formal process of teacher evaluation as it exists today is soft. Evaluations have turned teachers into "widgets" because they treat them all the same. Three Ohio school districts (Akron, Cincinnati, and Toledo) are among the study's 12 districts in four states. The data, which come from new surveys, compilations of teacher evaluation records, and 130 interviews with district leaders, reveal a system of perfunctory and meaningless back-patting. Even different evaluation methods do not make results more meaningful. Toledo uses a binary rating system (satisfactory/unsatisfactory) and gave only three teachers an unsatisfactory rating over five years. In Akron, which uses a five-point rating scale, teachers identified 5 percent of their colleagues as "poor performers," yet not one teacher actually received an unsatisfactory rating in an evaluation. Not only does the system fail to identify poor teachers; it also leaves no room to recognize truly exceptional ones. Cincinnati did the best job of identifying "distinguished" teachers, giving only 54 percent of teachers its highest rating. The authors briefly describe some of the legal and organizational hurdles that block useful evaluation and suggest that the process be reformed to treat evaluation more like a routine check-up and less like, say, a one-time polio test that everyone passes and nobody really worries about.
Read the study here.