“In those days…everyone did what was right in his own eyes.” Words penned millennia ago couldn’t be more relevant today. In the education-policy world, I sense that spirit in the growing antagonism toward external forms of accountability for schools’ (and their students’) performance. I get it: accountability regimes, particularly of the state-driven sort, can be perceived as harsh, punishing, and damaging to professionalism, local control, and school specialization. Others see standards and accountability as impinging on parents’ liberty to direct their children’s education.
Yet looming behind this unrest is the specter of mediocrity and a lack of urgency among Ohio’s K–12 schools—an environment ultimately ill-suited for student success. The zeitgeist has worked its way into state law, as policymakers have begun to yield to the cries of those who would prefer to be judged by standards of their own preference or design—or none at all. As evidence, consider the proliferation of alternative accountability (and assessment) systems that are cropping up in state policy. Three examples come to mind.
The Third Grade Reading Guarantee
Ohio’s Third Grade Reading Guarantee, a needed initiative to lift early literacy, has a loophole the size of Texas. Seemingly everyone in the state is aware that third graders are now required to pass the reading portion of the Ohio Achievement Assessment (OAA) or else face grade retention. This is tough stuff on the surface. But wait: in a lesser-known provision, the state also allows schools to administer any one of three alternative reading assessments. If a student who has failed the state exam passes any of these alternatives, she won’t be retained. For example, a student can fail the OAA on multiple occasions but receive a passing score on the Iowa Assessments and move on to fourth grade. (A student has two chances to pass the OAA on the mandated fall and spring administrations. If she fails those, she can attempt a summer OAA administration. If her district allows it, she can also take an alternative exam twice during the school year and once during the summer to demonstrate proficiency.)
Needless to say, early readers should have multiple chances to demonstrate proficiency. Everyone has a bad day. But up to six possible attempts seems beyond lenient. Meanwhile, what is known about the alternative assessments? How were their “promotion scores” determined, and what is the evidence that they are comparable to the state’s definition of proficiency? (The Ohio Department of Education’s website is surprisingly sparse on this matter.) More importantly, does introducing alternative assessments invite schools to disregard the OAAs (and, in future years, Common Core–related exams) and instead seek out whichever alternative assessment is easiest to pass? All told, by allowing alternative assessments and measures, the state has weakened its guarantee of early literacy along a strict definition of proficiency, especially if the alternative assessments yield higher pass rates.
Dropout-recovery and vocational schools
Ohio has separate alternative accountability schemes for its “dropout-recovery” charter schools and career-and-technical planning districts (CTPDs); the 2012–13 school year was the first for both of these new systems. The accountability metrics for these entities differ from those applied to other public high schools and exclude two key measures. First, these schools are not held accountable to the state’s conventional performance-index measure of aggregate school-level achievement on state exams. Second, they will not be rated along the state’s value-added measure for high schools, to be implemented starting in 2015–16. CTPDs are not scheduled to receive a value-added rating at all, while dropout-recovery schools will receive a “progress” rating based on a norm-referenced growth measure, not the value-added computation that applies to all other high schools based on the state’s new end-of-course exams. However, both dropout-recovery schools and CTPDs will receive ratings for high school graduation rates, just as all other high schools do.
State policy should recognize that the desired outcomes for a dropout-recovery school might be somewhat different from those of a general-education high school. The same goes for vocationally focused schools. At the same time, however, by excluding the key measures of school performance (the performance index and value-added growth), the state has muddled the accountability waters. Are we lowering standards for the students who might need high expectations the most? Is the state sending the message that vocationally focused students don’t need to meet academic standards? (I’d argue that they need academic rigor just the same as college-aspiring students.) And why exempt schools that specialize in educating at-risk or vocational students from demonstrating an impact on learning gains, as measured by state assessments? I’m all for specialized schools of all varieties, including dropout-recovery and vocational schools. But specialization shouldn’t mean exemption from conventional accountability.
Innovative STEM schools and districts
This past spring, the legislature established an accountability waiver for innovative STEM schools and districts. The law allows qualifying districts to seek a five-year exemption from state accountability, which may include exemptions from state tests, report cards, and personnel evaluations. In turn, these districts pledge to implement an alternative system of their own design. The state superintendent reviews the waiver requests and can approve up to ten such districts. (As far as I can tell, the law is unclear on whether the superintendent can approve ten districts’ waivers per year or whether just up to ten districts can be approved for a waiver at a given time.)
Whether external accountability hinders risk-taking innovation is a thorny question, I admit. But should state policy allow so-called “innovative” districts to determine their own accountability schemes, separate from (presumably) less innovative ones? I protest. Allowing a slice of districts to create their own accountability, under the auspices of innovation, seems like a step too far. Why do some districts, part of this special Innovation Lab Network, get the chance to create their own accountability systems? (And how do districts become eligible for it in the first place?) By what process and criteria does the state superintendent approve waivers? Will parents and taxpayers know how to interpret results from the districts’ alternative assessments and report cards? Will any of these districts actually give themselves a failing grade (if indeed they deserve one)? Finally, why should anyone assume that innovation can’t prove itself along the state’s own educational standards?
* * *
Nobody welcomes external accountability or the consequences that may follow. Everyone would rather be his own judge, using his own criteria and metrics—and everyone has an excuse for why that would work better. But if we genuinely think the goal of schooling is to give all young people the knowledge, skills, and gumption that they need to confront the travails of life, then the state must hold all of its public schools accountable for student outcomes. That goal is practically universal, regardless of the schools’ zip codes, their “types,” or the race, gender, and innate aptitude of their students.
By cracking open the floodgates of alternative accountability (and the concomitant hodgepodge of assessments and metrics), Ohio policymakers are losing their grip on the purpose of education accountability: to send a coherent message to all schools that there is a consistent standard and that standard is the standard. Yes, schools need autonomy and freedom to operate as they see fit, in order to meet and exceed the standards set forth. Schools also need a dose of leniency from consequences when they face challenging circumstances. But in the end, state leaders must also unswervingly hold schools accountable for results along a consistent set of standards, assessments, and measures.
* Technically, third-grade students don’t have to reach “proficiency” to pass the state exam for retention purposes.