For advocates of value-added measures of teacher effectiveness, high school presents a conundrum: testing is less frequent and often covers non-contiguous subjects, making it difficult to compare results against prior years. Using data from ACT's QualityCore program, Dan Goldhaber and his colleagues compared two value-added models: the traditional model, which compares pre- and post-tests, and a more complex "cross-subject student fixed-effects approach," which tracks individual students' outcomes across multiple subjects. The benefit of the more complex model is that it sidesteps both the problem of non-contiguous classes and the need for pretests in subjects students may not have encountered before. The fixed-effects approach estimated smaller teacher effects (meaning that teacher quality mattered less to the final score than other variables), though it is impossible to say which model is "better," since there is no objective measure of teacher quality against which to benchmark. More importantly, the two models diverged substantially: nearly 10 percent of the teachers whom the fixed-effects model placed in the top quintile, the top echelon of educators, landed in the bottom quintile under the traditional model. In the world of high-stakes testing, this is concerning (especially if you're a teacher!) and reminds us that, while value-added measurements have superior predictive power relative to other methods of estimating teacher effectiveness, they are imperfect, and the tradeoffs ought to be made clearer in the popular debate.
SOURCE: Dan D. Goldhaber, Pete Goldschmidt, and Fannie Tseng, "Teacher Value-Added at the High-School Level: Different Models, Different Answers?" Educational Evaluation and Policy Analysis 35, no. 2 (June 2013): 220–36.
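To make the quintile-disagreement finding concrete, here is a minimal sketch of how such a comparison can be computed. The data are entirely synthetic (two noisy stand-ins for the traditional and fixed-effects estimates of a hypothetical teacher effect), so the disagreement rate it prints is illustrative only and is not the paper's actual result.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500  # hypothetical number of teachers

# Synthetic illustration: two noisy estimates of the same underlying
# teacher effect, standing in for the traditional pre/post model and
# the cross-subject student fixed-effects model. The 0.6 shrinkage on
# the second estimate mimics the smaller effects that model reported.
true_effect = rng.normal(size=n)
traditional = true_effect + rng.normal(scale=0.8, size=n)
fixed_effects = 0.6 * true_effect + rng.normal(scale=0.8, size=n)

def quintile(x):
    # Rank-based quintile assignment: 0 = bottom, 4 = top.
    return np.searchsorted(np.quantile(x, [0.2, 0.4, 0.6, 0.8]), x)

q_trad = quintile(traditional)
q_fe = quintile(fixed_effects)

# Of the teachers the fixed-effects model puts in the top quintile,
# what share does the traditional model put in the bottom quintile?
top_fe = q_fe == 4
share_flipped = np.mean(q_trad[top_fe] == 0)
print(f"Top-quintile (fixed effects) teachers landing in the bottom "
      f"quintile under the traditional model: {share_flipped:.1%}")
```

The point of the exercise is that even two reasonable estimators of the same quantity, each with ordinary sampling noise, can place a nontrivial share of teachers in opposite quintiles, which is exactly the tradeoff the paper highlights for high-stakes use.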