Use value-added data to measure teacher effectiveness, but humanely

Nothing matters more to student learning than teacher quality. Not class size, not poverty, not family background, not even overall school quality. This was the key takeaway from a highly controversial Los Angeles Times analysis of teacher value-added scores for students in the Los Angeles Unified School District (LAUSD). The significance of this finding can’t be overstated. Many people still believe either that “these kids can’t learn” or that “school can only do so much with kids like this until society fixes their families and communities.”

But the political firestorm around how these findings were reported by the Times may well result in their being discredited or simply ignored. The Times asked Richard Buddin, a senior economist and education researcher at RAND, to analyze seven years of reading and math scores to calculate the performance of teachers who had taught grades three through five. To illustrate its point about the importance of teacher quality, the paper used Buddin’s analysis to publish – by name – “effective” teachers as well as “poor performers.”

Unfortunately, the manner in which the Times published the data was unfair to individual teachers. This outing of teachers based exclusively on value-added data triggered a furious reaction from the Los Angeles teachers union. In a press release, the United Teachers of Los Angeles (UTLA) sought to discredit the value of standardized testing, value-added analysis, and even the primacy of teachers in children’s learning. The union has also threatened to boycott the newspaper.

Despite the bungled manner in which the data were published, the Times’ analysis contained important findings that should not be discredited or overlooked in the midst of the furor surrounding them, including:

  • Highly effective teachers routinely propel students from below grade level to advanced in a single year. At year’s end, there is a substantial gap between students whose teachers ranked in the top ten percent in effectiveness and those whose teachers ranked in the bottom ten percent: the fortunate students scored 17 percentile points higher in English and 25 points higher in math.
  • Some students landed in the classrooms of the poorest-performing instructors year after year.
  • Contrary to popular belief, the best teachers were not concentrated in schools in the most affluent neighborhoods, nor were the weakest instructors bunched in poor areas.
  • Although many parents fixate on picking the right school for their child, it matters far more which teacher the child gets. Teachers had three times as much influence on students’ academic development as the school they attended.
  • Many of the factors commonly assumed to be important to teachers’ effectiveness were not. Although teachers are paid more for experience, education, and training, none of this had much bearing on whether they improved their students’ performance.

Taken seriously, these findings should encourage Ohio to think carefully about how it can use its wealth of value-added data to help determine teacher quality. The state has a relatively sophisticated system of value-added analysis in reading and math in grades 4-8, and has accumulated multiple years of data.

Currently these value-added data are used to help rate school quality, but not as an indicator of teacher effectiveness. Meanwhile, the well-respected Battelle for Kids has been doing excellent work with school districts, helping school officials and teachers use value-added data as a diagnostic tool for improving instruction.

The federal Race to the Top (RttT) competition has encouraged states to use value-added data as a measure of teacher effectiveness. In its winning RttT application, for example, Tennessee committed itself to having at least half of teacher evaluations based on student achievement measures, including value-added growth. Ohio has yet to make a similar commitment, but it should. There are ways of using this data as a component of teacher evaluations that are both rigorous and fair to teachers.

Moreover, using student performance data to determine teacher effectiveness is not going away any time soon. That the Times was willing to take on this analysis and stand up to the teachers union is profoundly significant, as was the New York Times’ and the New Yorker’s critical coverage of teachers unions (e.g., articles on New York City’s “rubber rooms”) over the last year. Traditionally left-leaning news outlets -- and many leading Democrats -- are taking the stance that not only does teacher quality data matter, but so does the way we use it to recruit, reward, and retain teachers.

The Times’ value-added analysis is the latest piece of evidence of the need for sensible teacher-related reforms. Ohio -- a state already ahead of the game in having the data -- should not wait for a similar outing of its teachers. It should take a proactive rather than reactive approach, developing its own ways to use student growth data as a metric for determining teacher quality.
