A new analysis by Mike Podgursky, Cory Koedel, and colleagues offers a handy tutorial on three major student-growth measures, plus an argument for which one is best.

The first, Student Growth Percentiles (aka the Colorado Growth Model), does not control for student background or differences among schools; instead, it compares a student’s performance on a standardized test to that of all students who received the same score the previous year or who have a similar score history. Some like this model because it doesn’t set lower expectations for disadvantaged students by building in background measures, but it may also penalize disadvantaged schools, since they tend to post lower growth rates.

The second, which the authors call a one-step value-added measure (VAM), controls for student and school characteristics, including prior performance, while simultaneously calculating test-score growth as a school average. This model may come closer to detecting the causal impact of schools and teachers, but it runs the risk of omitting important variables, which could advantage high-SES schools.

The third and final model is a two-step VAM, designed to compare schools and teachers that serve similar students: it first adjusts test-score data for various student and school characteristics, then calculates growth for each school from the adjusted scores. The analysts conclude that this model makes the most sense because it levels the playing field, so that winners and losers are representative of the system as a whole. What’s more, schools are more apt to improve if they are competing against similar peers, and even when schools are compared to others with similar student bodies, real differences in growth remain between them.
That said, some worry that this model could hide inferior performance at high-poverty schools, so the authors suggest also reporting test-score levels, such as proficiency rates, so that folks can also see differences in absolute achievement across schools. Seems reasonable enough, though stakeholders would need to be educated in how to interpret multiple measures. But one small hiccup: Arne Duncan’s ESEA waiver regulations do not allow states to use the two-step VAM in their accountability systems, so there’s that.
SOURCE: Mark Ehlert, Cory Koedel, Eric Parsons, and Michael Podgursky, “Choosing the Right Growth Measure,” Education Next 14(2), Spring 2014.