Like the Cleveland Browns on a Sunday afternoon, the Ohio General Assembly is fumbling about with the state’s value-added system. One month ago, I described two bizarre provisions related to value-added (VAM) that the House tucked into the state’s mid-biennium budget bill (House Bill 487). The Senate has since struck one of the House’s bad provisions—and kudos for that—but, regrettably, has blundered on the second one.
To recap briefly, the House proposals would have (1) excluded certain students from schools’ value-added computations and (2) changed the computation of value-added estimates—the state’s measure of a school’s impact on student growth—from a three-year to a one-year calculation.
I argued then that the House’s student-exclusion provision would water down accountability, and that reverting to the one-year estimates would increase the uncertainty around schools’ value-added results.
The Senate has struck the House’s exclusion provision. Good. But it has failed to rectify the matter of the one-versus-three-year computation. In fact, it has made things worse.
Here’s the Senate’s amendment:
In determining the value-added progress dimension score, the department shall use either up to three years of value-added data as available or value-added data from the most recent school year available, whichever results in a higher score for the district or building.
Now, under the Senate proposal, schools would receive a rating based on whichever VAM estimate is higher—either the one-year or the three-year computation. (Naturally, schools that just recently opened would not have three years of data; hence, the “as available” and “up to” clauses.)
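Mechanically, the amendment boils down to taking the better of two letter grades. A minimal sketch of that logic follows; the cut points and function names are hypothetical placeholders for illustration, not the department’s actual methodology or Ohio’s actual A–F cut points:

```python
# Sketch of the Senate's "higher score" rule. The grade cut points below are
# hypothetical placeholders, not Ohio's actual value-added cut points.
GRADE_ORDER = {"A": 4, "B": 3, "C": 2, "D": 1, "F": 0}

def letter_grade(vam_score, cut_points=(-2.0, -1.0, 1.0, 2.0)):
    """Map a value-added index to a letter grade using hypothetical cut points."""
    f_cut, d_cut, b_cut, a_cut = cut_points
    if vam_score >= a_cut:
        return "A"
    if vam_score >= b_cut:
        return "B"
    if vam_score > d_cut:
        return "C"
    if vam_score > f_cut:
        return "D"
    return "F"

def senate_grade(one_year_score, multi_year_score=None):
    """Return whichever grade is higher, per the Senate amendment.

    Schools without three years of data fall back to the one-year
    score alone ("up to three years ... as available").
    """
    grades = [letter_grade(one_year_score)]
    if multi_year_score is not None:
        grades.append(letter_grade(multi_year_score))
    return max(grades, key=GRADE_ORDER.get)
```

Under this rule, a school with a strong one-year score but a mediocre multi-year score keeps the higher grade, and vice versa; the rating can only move up relative to either computation alone.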
Huh? How is this rational accountability? The Senate seems to have fallen into the Oprah zone: “you get an A, you get an A, everybody gets an A!”
I exaggerate, of course. Not everyone would get an A, based on the “higher score” policy. But let’s consider what happens to school ratings under the three scenarios in play—the one-year value-added computation (House), the three-year computation (current policy), and the higher of the two scores (Senate).
Chart 1 compares the letter-grade distribution under the one-year versus three-year (i.e., multi-year) estimates. As you can see, the three-year scores push schools toward the margins (As and Fs) while at the same time diminishing the number of schools in the middle (Cs). This is to be expected, given what we know about the greater imprecision of the one-year value-added estimates. Greater imprecision tends to push schools toward the middle of the distribution, sans clear evidence to suggest they’ve had a significant impact, either positively (A) or negatively (F). In short, when the data are “noisier”—as they are under the one-year estimates—we’re more likely to wind up with more schools in the mushy middle.
Chart 1: Clearer view of value-added impact under multi-year scores: Multi-year scores push schools toward the margins (A or F); One-year scores push schools toward the middle (C)
Source: Ohio Department of Education. For 2012-13, multi-year VAM scores are available publicly; the author thanks the department for making schools’ one-year VAM scores accessible at his request. (One-year VAMs, school by school, are available here.) Notes: The one-year A-F ratings are simulated, based on schools’ one-year VAM scores for 2012-13. (The “cut points” for the ratings are here.) The multi-year A-F ratings for 2012-13 are actual letter grades, based on schools’ VAM scores (up to three years) from SY 2011, 2012, 2013. Chart displays the school-level (district and charter) distribution of letter grades (n = 2,558).
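The statistical intuition here can be illustrated with a toy simulation. Averaging three years of data shrinks the standard error of a school’s estimate, so the value-added index (estimate divided by standard error) sits farther from zero for schools with real effects, and fewer schools land in the indistinguishable middle. All distributions, sample sizes, and thresholds below are made up for illustration; this is not the department’s methodology:

```python
import random
import statistics

random.seed(1)

def middle_share(n_schools=2000, noise_sd=1.0, years=1):
    """Toy model: each school has a true effect drawn from N(0, 1); each
    year's measurement adds N(0, noise_sd) noise. Averaging over more years
    shrinks the standard error by sqrt(years), so the value-added index
    (estimate / SE) moves farther from zero for schools with real effects.
    Returns the share of schools whose index falls in the "no clear
    evidence" middle band (roughly a C).
    """
    middle = 0
    for _ in range(n_schools):
        true_effect = random.gauss(0, 1.0)
        yearly = [true_effect + random.gauss(0, noise_sd) for _ in range(years)]
        estimate = statistics.fmean(yearly)
        se = noise_sd / years ** 0.5
        index = estimate / se
        if -2 <= index <= 2:  # indistinguishable from average impact
            middle += 1
    return middle / n_schools

one_year_share = middle_share(years=1)
three_year_share = middle_share(years=3)
# More years -> smaller standard error -> fewer schools stuck in the middle.
```

In this toy setup, the three-year share of “middle” schools comes out noticeably lower than the one-year share, mirroring the pattern in chart 1.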
Now, let’s look at the Senate’s “higher-score” proposal—the real whopper of them all. Consider chart 2, which also includes the higher of the two value-added scores (the green bar). What you’ll notice is that the number of As would likely increase under the proposal, so that virtually half the schools in the state would receive an A. On the other end of the spectrum, the number of Fs would be cut in half, so that just one in ten schools in the state would receive an F.
Chart 2: Roughly half of schools would get A under “higher score” provision
Are half the schools in Ohio making significant—meaningfully significant—gains for their students? And are just one in ten schools failing to move the achievement needle in a significant way? Let’s get real.
As I’ve maintained, current policy—the three-year computation—is the best course for policymakers. It gives us the clearest look at a school’s impact, both good and bad, on student performance. The finagling of value-added isn’t just an academic exercise, either—it has considerable implications for the state’s automatic charter school closure law, voucher eligibility, academic distress commissions, and a number of other accountability policies. Can Ohio’s policymakers rectify this value-added mess? As with the Browns’ playoff chances, here’s hoping!
Of course, not all schools in the C range are there because of imprecision—some schools may have a precisely estimated impact on learning gains that genuinely falls in the average range.