The House Education Committee tucked two provisions into the Mid-Biennium Review bill that would alter the state’s calculation of student progress. They both relate to the value-added model (VAM), the state’s method for computing a school or district’s impact on student-learning progress over time.
Value added is a statistical model that uses student-level data, collected over time, to isolate a school’s contribution to learning. This is a noble and necessary undertaking, given what research has shown, time and again, about the significant influence of out-of-school factors on students’ educational success (e.g., parents, tutoring, private art and music lessons, faith-based education).
If the objective is to gain a clearer view of the true effectiveness of a school (its educators and their approach to curriculum, behavior, scheduling, and so forth), we want to minimize the influence of out-of-school factors. Greater clarity about school performance matters both for high-wealth schools, which can skate by on the backs of upper-middle-class parents, and for low-wealth schools, which can be handicapped in an accountability system based on raw proficiency measures.
I believe, and yes, to a certain extent on faith, that the state is moving in the right direction with its approach to value added. But in my view, the House is making two missteps in its proposed changes to VAM. The sections below describe each provision and explain why the legislature should remove it as the bill heads to the Senate.
Provision 1: Changes value added from a three-year to one-year calculation
The amendment reads,
The overall score under the value-added progress dimension of a school district or building, for which the department shall use up to three years of value-added data as from the most recent school year available.
The amendment’s language would actually revert the state to its one-year value-added calculation, which it used from 2005–06 through 2011–12. Beginning in 2012–13, however, the state switched to a calculation that uses three years of students’ assessment results to compute a value-added score.
The about-face is concerning, and here’s why. Recall that value-added scores are estimates of a school’s impact on growth, with a degree of uncertainty related to that estimate. A school’s point estimate is the average student gain on math and reading assessments and is reported in Normal Curve Equivalent units (NCEs). Generally speaking, the larger the sample size, the smaller the degree of uncertainty around that estimate. Battelle for Kids writes in its guidebook, “Uncertainty around growth estimates is greater when the sample size is small.”
Thus, a one-year calculation—with a smaller sample size—increases the uncertainty around the estimate of a school’s value-added score—hardly a desirable property. The following charts display how the uncertainty around the VAM estimates changed, depending on whether Ohio used the one-year (Chart 1) versus three-year (Chart 2) calculation. Notice the wider confidence intervals—the blue lines that indicate the range of plausible VAM scores for a particular school—in the one- versus the three-year computations. Because of the wider range of plausible values, there is more uncertainty about the true value-added “effects” of schools.
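The sample-size effect can be sketched with back-of-the-envelope arithmetic. This is not Ohio’s actual (proprietary) model; it simply assumes that the standard error of an average gain shrinks with the square root of the number of student scores, using hypothetical numbers for the spread of gains and the size of a tested cohort:

```python
import math

def ci_width(sd: float, n: int, z: float = 1.96) -> float:
    """Width of a 95% confidence interval around a mean gain,
    assuming the standard error is sd / sqrt(n)."""
    se = sd / math.sqrt(n)
    return 2 * z * se

# Hypothetical school: student gains with a standard deviation of
# 10 NCEs, measured with one year (~90 scores) vs. three (~270).
one_year = ci_width(sd=10, n=90)     # about 4.13 NCEs wide
three_year = ci_width(sd=10, n=270)  # about 2.39 NCEs wide

# Tripling the data shrinks the interval by a factor of sqrt(3).
print(round(one_year, 2), round(three_year, 2))
```

Under these stylized assumptions, moving from one year of data to three narrows the plausible range by roughly 40 percent, which is the pattern visible in the charts below.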
A concrete example might be valuable at this juncture. In 2011–12, KIPP Journey Academy—a high-poverty, high-value-added charter school—received a point estimate of 2.34 NCEs with a range of plausible values of 1.11 to 3.57 (a range of 2.46 NCEs). Remember: this estimate used just one year of VAM data. Then, in 2012–13, KIPP received an estimate of 3.53 NCEs with a plausible range of 2.99 to 4.07 (a range of 1.08 NCEs). By using a three-year average, the model was able to shrink the range of plausible values and to zero in on its estimate of the true impact of the school.
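The shrinkage in KIPP’s interval can be checked with simple arithmetic. Using the formula given in the chart notes (point estimate +/- 1.96 times the standard error), the implied standard error can be backed out of each reported range:

```python
def implied_se(lower: float, upper: float, z: float = 1.96) -> float:
    """Back out the standard error from a reported 95% interval:
    width = 2 * z * se, so se = (upper - lower) / (2 * z)."""
    return (upper - lower) / (2 * z)

# KIPP Journey Academy's reported plausible ranges, in NCEs
se_one_year = implied_se(1.11, 3.57)    # 2011-12, one-year calculation
se_three_year = implied_se(2.99, 4.07)  # 2012-13, three-year calculation

print(round(se_one_year, 2))    # roughly 0.63
print(round(se_three_year, 2))  # roughly 0.28
```

The implied standard error falls by more than half under the three-year calculation, consistent with the narrower 1.08-NCE range.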
However, a one-year estimate could be preferable if a school’s educators in the most recent school year differ substantially from those of a couple of years back. In that case, it might be unfair to hold a new group of educators accountable for the impact of their predecessors, giving them undue credit or undue blame. But unless this turnover occurs frequently in practice, it seems preferable to seek greater precision in schools’ value-added estimates.
Chart 1: Greater uncertainty around VAM estimates under one-year calculation, Ohio schools 2011–12
Chart 2: Less uncertainty around VAM estimates under three-year calculation, Ohio schools 2012–13
Source: Ohio Department of Education. Notes: These charts display the value-added estimates (green dots) and the 95% confidence intervals, the range of plausible values, around each estimate for Ohio schools (point estimate +/- 1.96 times the standard error). The top of each blue line represents the upper bound of the 95% CI and the bottom the lower bound. Point estimates are a school’s average learning gains, in Normal Curve Equivalent units, averaged across tested grades (grades 4–8) in both math and reading. For Chart 2, some schools may have been open for only one or two years and hence lacked the full three years of value-added data.
Provision 2: Excludes some transfer students from value-added calculations
The Mid-Biennium Review bill adds the following amendment to state law:
For calculating the metric prescribed by division (B)(1)(e) of this section [overall value-added for a school or district], the department shall use assessment scores for only those students to whom the district or building has administered the assessments prescribed by section 3301.0710 of the Revised Code for each of the two most recent consecutive school years.
In effect, this provision would exclude a school’s incoming transfer students, whether they transfer from another school or make a structural move (e.g., from elementary to middle school). (Schools are already allowed to exclude, for accountability purposes, the test results of mid-year transfers.) This raises both a mathematical issue, the sample size in VAM calculations, and, more importantly, a philosophical objection.
First, the mathematical: the “two-consecutive-years” clause would certainly decrease the sample sizes for a school’s value-added calculation. As discussed above, smaller sample sizes can increase the uncertainty around a school’s VAM estimate. Under this provision, a grade 6–8 middle school—a widely used grade configuration—would now have just two, instead of three, grade levels of available data. Seventh- and eighth-grade students would be included in the school’s VAM computation, but sixth graders would now be excluded, since they weren’t educated in that building for fifth grade. For grade 7–8 middle schools across Ohio—there were 149 of them in 2012–13—the state would have just one grade level, eighth grade, to calculate the school’s VAM.
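Under the same back-of-the-envelope assumption used above, that interval width scales with one over the square root of the number of student scores, and hypothetically equal grade sizes, dropping grade levels widens a school’s interval by a predictable factor:

```python
import math

def ci_inflation(grades_before: int, grades_after: int) -> float:
    """Relative widening of a confidence interval when grade levels
    (of roughly equal size) are dropped from the calculation:
    width scales with 1 / sqrt(n)."""
    return math.sqrt(grades_before / grades_after)

# Grade 6-8 school: loses sixth grade, 3 tested grades become 2.
print(round(ci_inflation(3, 2), 2))  # interval about 1.22x wider

# Grade 7-8 school: loses seventh grade, only eighth grade remains.
print(round(ci_inflation(2, 1), 2))  # interval about 1.41x wider
```

These are stylized numbers, but they illustrate why schools with fewer tested grades would see noticeably fuzzier value-added estimates under the provision.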
Perhaps more fundamental than the statistical issue is the philosophical one. Some may argue that schools should not be held accountable for “new” transfer students. But this argument falters as a matter of principle: if we actually believe that schools should help all children learn, not just their long-time students, what is the compelling reason for excluding transfer students? A school is responsible, and should be held accountable, for the students who walk through its doors.
* * *
Value added is not a “magic model” for computing a school’s impact on learning (there isn’t one out there), but the state has improved its value-added system in recent years. The House’s amendments should be reconsidered in light of how they would weaken the value-added calculations for schools and districts. Moreover, the second provision simply ignores the basic principle that all kids count.
The value-added model that Ohio and several other states use is considered proprietary by SAS, the company that runs the statistical model.
The bill does not clearly articulate whether the “subgroup” value-added scores would be a one- or three-year calculation. Presently, the plan is to calculate them as three-year averages once enough years of data are available. (The state used a one-year calculation in 2012–13, since it was the first year for subgroup VAMs.)