Ohio's value-added model of growth is not the only game in town
December 18, 2013
As Ohio transitions to a next-generation accountability system, educators must come to terms with student-growth models. Within the past year, the Buckeye State has introduced three new indicators of school performance that gauge the academic growth of student subgroups. These new indicators stand alongside a school’s growth performance for all of its tested students. Furthermore, the state now requires districts to implement principal and teacher evaluations in which student-growth measures presently count for half of an educator’s rating.
In view of the growing use of student-growth measures in accountability, Ohio’s policymakers and educators should consult A Practitioner’s Guide to Growth Models. The authors of this must-read report provide a clear description of the seven growth models available—from the very rudimentary to the extraordinarily complex—and helpfully contrast the various models from both a statistical and an applied perspective.
Of the seven growth models presented, two are being incorporated into states’ accountability systems: student-growth percentiles, or SGP (used in Colorado and Massachusetts, for example), and value-added, or VAM (used in Ohio and Pennsylvania). The key takeaways from this report are as follows:
VAM asks a different policy question than SGP.
According to the report, SGP does two things well—it describes and predicts student growth. Growth description unpacks “how much growth” a student group (e.g., a classroom or a school) has made, while growth prediction gets at “growth to where” (i.e., to a proficiency standard) for a group of students. VAM, on the other hand, neither describes nor predicts student growth. Rather, VAM attempts to establish a causal inference, something that SGP does not attempt to do. VAM tries to answer the question of “what caused growth” by linking students’ growth to a teacher, principal, or school, depending on the level of analysis. Importantly, however, the authors caution readers about whether VAM can support causal inferences in practice.
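To make the SGP idea concrete, here is a deliberately simplified sketch with made-up scores. Real SGP implementations use quantile regression over a student's full prior-score history; this toy version simply ranks a student's current score among "academic peers" who earned the same prior score. The data and function name are illustrative inventions, not anything from the report.

```python
# Toy SGP: a student's growth percentile is where their current score
# ranks among peers who started from the same prior score.

def growth_percentile(current, peer_current_scores):
    """Percentile rank of `current` among peers' current-year scores."""
    below = sum(1 for s in peer_current_scores if s < current)
    return round(100 * below / len(peer_current_scores))

# Hypothetical current-year scores for students who all scored 600
# on last year's test.
peers = [590, 605, 610, 615, 620, 625, 630, 640, 650, 660]

print(growth_percentile(632, peers))  # prints 70: outgrew 70% of peers
```

A score of 632 lands at the 70th growth percentile here: high growth, even though nothing in the calculation asks *why* the student grew—which is exactly the descriptive (not causal) character of SGP.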
In attempting to establish a causal inference, VAM uses the most sophisticated statistical model available.
VAM employs multiple data points to set a performance expectation for a group of students. The model includes students’ entire test-score history—for all grades and subjects available—and also accounts for the persistent effect of past teachers on growth (something for which SGP does not adjust). The model then examines whether a group of students performs better or worse than expected in order to infer the educator-level or school-level effect on growth. The bottom line: VAM has to be considered the most statistically rigorous model of student growth—far beyond even SGPs, which themselves use complex statistical modeling. However…
The trade-off for VAM’s statistical rigor is the lack of transparency and clarity.
Two quotes from the authors capture the VAM conundrum: “Disadvantages to this statistical approach [i.e., VAM] include a lack of parsimony and clarity in model interpretation” and “the EVAAS model is complex, requires highly specialized and proprietary software, and is difficult to explain without reducing teacher estimates to a simplistic ‘value added’ (causal) inference.” Anyone trying to interpret Ohio’s VAM results, whether at a teacher or school level, can sympathize with these statements.
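The core actual-versus-expected logic behind value-added is simple enough to sketch in a few lines; the opacity comes from the full proprietary model, not the basic idea. The sketch below uses invented scores and a single prior score to set expectations via ordinary least squares, then treats a school's mean residual as its estimated "effect"—a drastic simplification of EVAAS, offered only to show the shape of the reasoning.

```python
# Toy value-added: expected score = regression on prior score;
# school "effect" = mean of (actual - expected) for its students.

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# Hypothetical (prior score, current score, school) triples
data = [(600, 640, "A"), (620, 655, "A"), (580, 600, "B"),
        (640, 660, "B"), (610, 650, "A"), (590, 605, "B")]

a, b = fit_line([d[0] for d in data], [d[1] for d in data])

effects = {}
for prior, current, school in data:
    residual = current - (a + b * prior)      # actual minus expected
    effects.setdefault(school, []).append(residual)

for school, res in sorted(effects.items()):
    print(school, round(sum(res) / len(res), 1))
```

In this made-up data, School A's students beat expectations on average and School B's fall short. Whether such a residual can be read as the school *causing* the difference is precisely the inference the report's authors urge caution about.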
As Ohio increasingly moves toward a growth-oriented approach to accountability, its educators and policymakers will need to take stock of the state’s value-added model. But measuring student growth is hard and complicated stuff—and this report is a valuable starting point. It walks readers through the various models without too much technical jargon. And importantly for those engaged in the design of accountability systems, it contrasts SGPs and VAM, nicely describing the models’ pros and cons. This report deserves to be at the fingertips of Ohio’s educators and policymakers trying to become literate in the models of student growth.