Ohio

We beg for the end of a tired school funding process, extol the benefits of lowering chronic absenteeism, and more.

The recent unveiling of Governor Kasich’s budget plan for the 2018-19 fiscal years has kicked off Ohio’s biennial ritual of debating school funding. Caps and guarantees have long been a central part of that discussion, and it’ll be no different this spring. As I’ve argued before, state leaders should get rid of these pernicious policies.

To allocate state dollars to school districts, Ohio lawmakers have crafted an intricate funding formula designed to drive more aid to districts that need it most (e.g., those with more students to educate, more pupils with special needs, or less capacity to fund education locally). They’ve done a pretty decent job of it, too. Don’t just take our word for it: EdTrust has said Ohio is one of the best in the nation at it. Both caps and guarantees throw a wrench into this system.

Caps limit the increase in a district’s state formula aid from year to year. Conversely, a guarantee ensures that a district won’t receive less funding than it received in a previous year. Caps are generally associated with districts experiencing enrollment growth, while guarantees typically apply to districts with declining enrollment. Changes in district property values and resident incomes can also play a role.

The Kasich Administration has repeatedly urged a phase-out of caps and guarantees, to no avail. Each budget cycle, the legislature inevitably feels the heat from local districts—particularly those facing flat or reduced funding—and walks back the governor’s attempt. Caps are also tough to get rid of: doing so would cost the state millions, and because they affect some (though by no means all) wealthier districts, lifting them could create the appearance that the state is spending more on less needy pupils. The administration, perhaps learning its lesson, structured this year’s budget proposal to retain the caps and guarantees while modifying them in a few ways.

The governor’s budget imposes a gain cap of 5 percent, meaning that a district cannot receive more than a 5 percent boost in state formula aid relative to the prior fiscal year. According to state budget projections, the governor’s proposal would cap 130 districts in FY18 and deny them $466 million in state funding. Compared to the current year’s cap—set at 7.5 percent—the proposal actually places a tighter cap on funding increases, somewhat at odds with the Administration’s desire to phase out caps. That being said, eliminating (or lifting) the cap would increase costs to the state, which itself faces a budget crunch.
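To make the cap’s arithmetic concrete, here is a minimal sketch in Python with hypothetical figures. It assumes the cap simply limits year-over-year growth in formula aid, which is the gist of the policy rather than the statute’s exact mechanics.

```python
def capped_aid(formula_aid, prior_aid, cap_rate=0.05):
    """Limit a district's state formula aid to at most a cap_rate
    increase over its prior-year aid; the excess is withheld."""
    return min(formula_aid, prior_aid * (1 + cap_rate))

# Hypothetical district: the formula calls for $22 million, but it
# received $20 million last year. A 5 percent cap funds $21 million
# and withholds the remaining $1 million.
aid = capped_aid(22_000_000, 20_000_000)
print(f"Funded: ${aid:,.0f}; withheld: ${22_000_000 - aid:,.0f}")
```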

To illustrate the effect of the cap on three selected districts, the table below displays estimated capped amounts under the governor’s proposal. Licking Heights, a district with considerable increases in enrollment, loses $9.2 million in state aid that it would otherwise receive under the FY18 formula.

Source: Ohio Office of Budget and Management
Note: The average change in enrollment across the 130 districts on the cap was +0.4%; the median was -0.4%.

The governor’s proposal reduces the amount of guarantee aid for districts with declining enrollment. Generally speaking, if a district covered by the guarantee experienced enrollment declines of more than 5 percent from 2010-11 to 2015-16, it would receive only part of the “full” guarantee amount under the proposal.[1] It’s important to note that such declines are not due to students leaving for charters or inter-district open enrollment, but to factors such as families moving out of the district and/or slowing birth rates.[2] Projections for FY18 indicate that 315 districts would receive guarantee aid at a cost to the state of $181 million.
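Mechanically, guarantee aid fills some or all of the gap between a district’s formula amount and its prior-year funding. Here is a rough sketch of that logic, with the caveat that the 80 percent partial share for high-decline districts is a placeholder of ours, not the figure in the proposal.

```python
def guarantee_aid(formula_aid, prior_aid, enrollment_change, partial_share=0.8):
    """Supplemental aid for a district whose formula amount fell below
    its prior-year funding. Districts whose enrollment declined by more
    than 5 percent (2010-11 to 2015-16) receive only part of the full
    guarantee; partial_share is illustrative, not the statutory figure."""
    shortfall = max(0.0, prior_aid - formula_aid)
    share = partial_share if enrollment_change < -0.05 else 1.0
    return shortfall * share

# A district down 8 percent in enrollment with a $2 million formula
# shortfall would recoup $1.6 million under this illustrative share.
print(f"${guarantee_aid(18_000_000, 20_000_000, -0.08):,.0f}")
```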

For illustration, the table below displays the guarantee calculations for selected districts under the governor’s proposal. East Cleveland, a district with large enrollment declines, would lose $8.5 million under the formula in FY18 relative to the prior year, but is shielded from most of that loss by $6.93 million in guarantee aid.

Source: Ohio Office of Budget and Management
Note: The average change in enrollment across the 315 districts on the guarantee was -6.6%; the median was -5.9%.

If the governor’s plan were to pass today, a whopping 445 out of 610 districts in Ohio would be affected by either the cap or guarantee in FY18—and thus not funded according to the formula.[3] Despite the political pressure they’ll face, the legislature should eliminate these policies. Here’s why:

  1. Both caps and guarantees subvert the state’s own funding formula, which is designed specifically to ensure that more aid goes to districts with the greatest needs. All of us should embrace this type of approach to funding schools rather than one driven by politics—be it partisan, provincial, or special-interest. Ohio has made strides in developing a sound formula, but in the words of state budget director Tim Keen, caps and guarantees continue to “short circuit the formula.” Instead of working around the formula, lawmakers should focus on driving dollars through it—and, if necessary, make refinements to better ensure that all districts receive the proper amount of aid.
  2. Caps deny scores of districts the funding increases they deserve according to the state’s own formula. Some of these capped districts have experienced increases in enrollment, requiring additional resources to make sure they’re meeting the needs of more and more students. Perhaps contrary to perceptions, capped districts are not necessarily wealthy (though some certainly are). Several high-poverty districts would have millions withheld in FY18: Canton takes a $4.2 million hit; Dayton loses $4.0 million; and Columbus—which has seen relatively large enrollment gains—has a staggering $92.6 million withheld. Such districts are educating some of Ohio’s most disadvantaged youngsters, and it’s shameful that the state is denying them dollars they should receive under its own funding formula.
  3. Guarantees squander state taxpayer dollars on pupils who are no longer attending a school district. As discussed above, districts with fewer students are often sheltered from funding reductions through the guarantee; this in turn means that Ohio funds a certain number of “phantom students.”
  4. Guarantees also pay districts based on a previous year’s funding amounts. That’s not how the rest of the world works: budgets are based on current or projected conditions, not past ones. No one thinks a company should get paid based on the customers it had years ago, but that’s essentially what the guarantee does.
  5. The common argument in favor of guarantees—districts need time to adjust—is a red herring. That might be fine if guarantee aid were offered temporarily, like Ohio’s catastrophic cost reimbursement program, a short-term funding pool for schools facing sudden cost increases associated with special needs. Instead, guarantees allow districts in long-term decline to avoid difficult decisions about how they do business. These could include restructuring unwieldy labor agreements, moving to shared services, shifting to a more flexible cost structure, and in some cases reducing staff and facility expenditures when enrollment declines are significant.

Caps and guarantees will be among the most urgent matters before the state legislature this spring. True, these policies won’t get droves of citizens to protest at the Statehouse, and the political pressure to preserve the status quo will be intense. But if we as a state truly want to fund K-12 education based on the children our schools actually educate, Ohio lawmakers should finally drive a stake through the heart of caps and guarantees.


[1] In FY16, districts were guaranteed 100 percent of their FY15 state aid; see the LSC Greenbook (p. 12).

[2] The proposal uses a district’s changes in Total ADM, a measure of public school students residing in the district (but not necessarily attending a district-run school); for the definition, see ODE District Profile Reports.

[3] The state budget office projects 424 districts on either the cap or guarantee in FY19.


On February 2, the Ohio Department of Education (ODE) released the first draft of its state plan to comply with the Every Student Succeeds Act. ESSA, the year-old federal education law, is the successor to No Child Left Behind (NCLB). While many of ESSA’s accountability provisions are similar to those found in NCLB, a new requirement is for states to have an indicator of “school quality or student success” that goes beyond state standardized test scores or graduation rates.

Ohio’s plan proposes two measures that meet this requirement. The first measure, Prepared for Success, is a carryover from the state’s current report card. It uses multiple indicators to determine the number of students ready for college or career at the end of high school, and is exclusively used for districts and high schools. The second measure, on the other hand, will be used by all schools and districts: student engagement as measured by chronic absenteeism.

Although the threshold for being considered chronically absent varies by state, the idea behind the term is the same—chronic absentees are students who miss too much school. In Ohio, these students are known as “habitual truants.” They earn this designation by being absent without “legitimate excuse” for “thirty or more consecutive hours, forty-two or more hours in one school month, or seventy-two or more hours in a school year.” Serving these students well has been a struggle for districts and schools in the Buckeye State for years, so it makes sense that the state would use ESSA as an opportunity to address the problem. Putting chronic absenteeism under the umbrella of student engagement makes sense too: If a student misses too much school, they’re not fully engaged in their education—and probably not learning much either.

But chronic absenteeism is a smart addition to Ohio’s state report card for a number of other reasons as well. First, it’s consistent with and supportive of a policy direction already identified by Ohio leaders. The Buckeye State recently revised its truancy laws in House Bill 410. This legislation updates the state’s definition of truancy and prohibits schools from suspending, expelling, or removing students from school solely on the basis of attendance. Instead, the bill outlines an intervention structure for districts and schools to follow that should “vary based on the needs of each individual student.” Though the alignment is unintentional, this structure fits well with ESSA’s emphasis on locally driven interventions.

Second, Ohio’s new truancy law also revises what schools and districts must report to ODE with regard to chronic absenteeism, based on the law’s new definition and intervention structure. Aligning the new measure with data that the state was already planning to collect is smart and efficient. It’s also a measure that can be easily disaggregated by subgroup, school, and district, making it potentially more useful.

Finally, and most importantly, reducing chronic absenteeism can increase achievement. In elementary school, truancy can contribute to weaker math and reading skills that persist into later grades. In high school, where chronic absenteeism rates are higher, chronically absent students often go on to experience problems with employment, including lower-status occupations, less stable career patterns, higher unemployment rates, and lower earnings. Ohio could raise student achievement by lowering its chronic absenteeism rate, and making absenteeism part of the state’s accountability system signals that districts and schools must start paying more attention to attendance numbers.

ODE has proposed a statewide long-term goal for chronic absenteeism of 5 percent or less. Recognizing that some groups of students were starting from much higher absenteeism rates than others, ODE also assigned goals to each subgroup using the 2015-16 school year as a baseline. Furthermore, it devised a transparent equation that results in “consistent annual increases” in expectations. Here’s a look at the goals for the state as a whole and for various subgroups of students.


In order to meet the student engagement indicator, districts and schools will have to either hit the benchmark of 5 percent or less or meet an improvement standard determined by ODE. (For example, the department lists “reducing the percent of chronically absent students by at least 3 percentage points from one year to the next.”) Accomplishing either goal means a district or school meets the indicator. It’s important to note that this indicator is graded on a “met” or “not met” basis, not with an A-F grade. ODE plans to incorporate this student engagement/chronic absenteeism measure into the Indicators Met portion of the state report card’s Achievement Component. It will be one of many subcomponents within the measure and will likely play a very small role in the overall calculation of school grades.

Nevertheless, this measure could unintentionally create an incentive for districts or schools to expel truant students in order to improve their attendance numbers. Recognizing this, ODE proposes that data on expulsions be used as a “check.” The plan notes: “To ensure that districts do not expel truant students as a way to reduce their chronic absenteeism rate, the calculation will include a review of each school’s or district’s expulsion data. Districts or schools that otherwise would meet the indicator but show a significant increase in their expulsion rate with the discipline reason listed as ‘truancy’ will have their ‘met’ demoted to ‘not met’ for this indicator.”
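Piecing together the plan’s stated rules (the 5 percent benchmark, the department’s example improvement standard, and the expulsion check), the determination might look something like the sketch below. The overall shape of the calculation is our assumption, and since the draft doesn’t define what counts as a “significant” increase in truancy expulsions, that piece is represented as a simple flag.

```python
def engagement_indicator(rate, prior_rate, truancy_expulsion_spike,
                         benchmark=0.05, improvement=0.03):
    """Sketch of the proposed indicator: a school meets it with 5 percent
    or fewer students chronically absent, or by cutting its rate by at
    least 3 percentage points year over year. A significant rise in
    truancy-coded expulsions demotes a 'met' to 'not met'."""
    met = rate <= benchmark or (prior_rate - rate) >= improvement
    if met and truancy_expulsion_spike:
        met = False  # the expulsion check demotes the rating
    return "met" if met else "not met"

# A school at 9 percent absenteeism, down from 13 percent, with no
# spike in truancy-coded expulsions, meets the indicator.
print(engagement_indicator(0.09, 0.13, False))  # met
```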

Overall, ODE’s plan for incorporating chronic absenteeism into the state’s accountability system is both thoughtful and nuanced. The state plan reinforces the newly revised state law while also following ESSA guidelines—and shedding light on a problem that, if solved, could improve student achievement in the Buckeye State.

Ohio just released its draft ESSA plan. While there’s much to applaud, the state’s proposals for improving the most chronically underperforming schools are underwhelming—serving to further remind us that sixteen years after the federal government began pushing states to turn around failing schools, our ideas for doing so are still scattershot.

Compared to past federal requirements for school improvement, ESSA is turnaround-lite—intentionally backing away from the prescriptive turnaround solutions embedded in NCLB and the School Improvement Grant (SIG) program. Schools failing to make Adequate Yearly Progress (AYP) under NCLB faced a series of consequences, including replacement of school staff, new curriculum, decreased authority for school administrators, help from outside turnaround specialists, or restructuring of the school. Restructuring (similar to the more rigorous options that SIG put in place) could mean alternative school governance, reopening the school as a public charter school, replacement of most or all of the school’s staff and leadership, takeover by an outside entity, or state takeover.

In Ohio, hundreds of millions in SIG dollars were spent with little to show for it. Low-performing schools were allowed to choose from a slate of turnaround options in exchange for funds; unsurprisingly, the majority of Ohio schools selected the least disruptive school improvement option—a professional development plan, an extra hour of learning time, and other supports that tinkered at the edges of change.

ESSA doesn’t require even these minimal efforts at turnaround; it merely mandates that states make districts do something, anything to address their worst schools—and step in if they fail to do so.

The nine pages of Ohio’s draft ESSA plan dedicated to describing its plans to improve low-performing schools (identified thusly) are unremarkable. That’s because Ohio’s ESSA draft contains many of the same elements as school improvement plans of the past that didn’t work, often reading like a SIG application: districts will “build capacity of school principals,” provide “targeted professional development,” and “work collaboratively with their community and stakeholders to determine… specific, evidence-based strategies.” It’s not that any of these concepts are bad, just that if chosen and applied at random they most definitely don’t result in systemic, long-term positive change. Under ESSA, low-performing districts and schools are often the arbiters of their own improvement plans. There exists a certain degree of madness in hoping that low-performing schools/districts will wake up one day and figure out how to fix themselves. Rather than push back against this premise, Ohio’s plan mostly appears to focus on simply complying with the (limited) federal requirements.

And there are few high-stakes repercussions for chronic failure, at least at the school level. Schools that languish in priority or focus status (see Table 1 for how these are to be categorized) will be subject to “additional Department [ODE] oversight on federal expenditures,” along with more reviews, more paperwork, and more improvement plans (unless, of course, they are charter schools—then they are likely to close).

Table 1: Quick overview of Priority, Focus, and Watch Status

Ohio should also be cautious before creating a plan that defers too much to the “community” or ignores that school improvement is largely about changing what happens within a school in the way of teaching and learning. The inclusion of mental health services in Ohio’s plan seems wise (it acknowledges the role that trauma and mental health play in truancy and academic performance), and there may indeed be a need for “a more coherent focus on addressing the needs of students, families and communities in struggling schools.” But the plan’s lack of emphasis on changing what actually occurs within the four walls of a school on a given day is disconcerting. In fact, the section describing how Ohio will support low-performing schools in their quest to improve includes more references to community groups and organizations (e.g., “Community groups… want more of a voice in developing those local plans”) than it does to teaching and learning. This seems problematic.

There are some positive aspects of Ohio’s school improvement plans: the creation of an online evidence-based clearinghouse to provide resources to schools and districts as they select improvement plans; the department’s plans to “build its research capacity” and conduct performance monitoring in addition to compliance monitoring; the creation of a peer-to-peer network for districts to engage directly with one another; and incentives for districts to participate in randomized control trials and other research (important for building the evidence base referenced frequently in ESSA).

So what can Ohio do to strengthen its school improvement plans, given that much of their blasé nature stems from an intentionally open-ended federal law? The state could consider two vestiges of the NCLB era that preserve parent agency. Under NCLB, parents whose children attended low-performing schools had more power than under current law. Children in a languishing school were given the option to transfer to a better-performing school within their district and were also eligible for supplemental educational services such as tutoring. Despite historically low uptake rates, these options provided a safety valve for families.

Ohio’s draft plan lists “direct student services” as one possible intervention that might be required in instances where schools fail to make significant progress. The plan outlines expansion of Advanced Placement, “transitional coursework,” and early literacy initiatives among direct services. (ESSA also allows for high-quality academic tutoring—which Ohio should include in its list.) Ohio’s current wording for direct services is too wishy-washy: the state should commit to providing these services as a clear and consistent option for families whose students attend chronically failing schools. Ohio should also consider reinstating the student transfer option, perhaps even looking into incentives for higher-performing districts that take on students from outside their borders.

As Ohio collects public comment over the next month, it should consider strengthening parent choice by guaranteeing the provision of direct services for students in chronically failing schools and by reinstating the “transfer out” option. Improving chronically low-performing schools is a monumentally difficult task requiring immense leadership and innovation. While Ohio districts take a crack at it themselves, the state should at least guarantee stronger options for parents and students who lack the means to exercise choice by moving elsewhere.

Stay tuned for another look at how Ohio can improve its ESSA school accountability plan—specifically by walking back key portions that appear to go beyond what federal law requires.  

Do incentives nudge students to exert more effort in their schoolwork? A recent study by University of Chicago analysts suggests they do, though the structure of the incentive is also important.

The researchers conducted field experiments from 2009 to 2011 in three high-poverty areas, including Chicago Public Schools and two nearby districts, with nearly 6,000 student participants in grades two through ten. Based on improved performance relative to a baseline score on a low-stakes reading or math assessment (not the state exam), various incentives were offered to different groups of pupils, such as a $10 or $20 reward, or a trophy worth about $3 along with public recognition of their accomplishment. The analysts offered no reward to students in a control group. To test whether pupils responded differently to immediate versus delayed incentives, some of the students received their reward right after the test—results were computed on the spot—while others knew the reward would be withheld for one month.

Several interesting findings emerged. First, the larger cash reward ($20) led to positive effects on test performance, while the smaller reward ($10) had no impact. This suggests that, when offering monetary rewards, larger payouts are likely to elicit more student effort. Second, the $3 trophy and public recognition also had a positive impact on achievement, though not as large an effect as the $20 incentive. This finding is important because the trophy is cost-effective and because, in practice, non-cash incentives may be more acceptable in school environments. As the study authors note, “Schools tend to be more comfortable rewarding students with trophies, certificates, and prizes.” Third, incentives that were withheld from students for a month after the test did not improve performance. This suggests that prompt disbursement is an important feature of an effective incentive structure.

It is possible that external incentives could “crowd out” intrinsic motivation—students may be less likely to work hard once an incentive is removed. The authors find no evidence of this when examining treated students’ low-stakes test scores after the incentives ceased. Instead, they conclude that incentives, when structured well, can help motivate students to put in just a little more effort.

Source: Steven D. Levitt, John A. List, Susanne Neckermann, and Sally Sadoff, “The Behavioralist Goes to School: Leveraging Behavioral Economics to Improve Educational Performance,” American Economic Journal: Economic Policy (November 2016).


Citizens Leadership Academy (CLA) is preparing Cleveland middle schoolers for success in high school, college, and life—and not just academically. CLA, whose student population is 79 percent economically disadvantaged and made up almost entirely of students of color, is second among all public schools in the city on student growth. The school’s eighth graders reach and surpass proficiency at a rate more than three times that of their peers across the city. Reading and math proficiency rates at CLA are more than double those of the Cleveland Metropolitan School District.

No matter how you slice the data, CLA is providing its students academic preparation that would likely be unavailable to them if the school—and its broader high-performing charter network (Breakthrough Schools)—did not exist. And yet its academic prowess is just the tip of the iceberg.

The school’s model—as captured in its name, Citizens Leadership Academy—prioritizes and cultivates the broader attributes and mindsets necessary for long-term success. As you’ll read in this profile of one student, Keith Lazare Jr., CLA asks students to consider what it means to be active, engaged citizens and community members. Students are asked not only to grapple with tough math problems or reading passages—strengthening the stick-with-it-ness known in education circles as “grit”—but also to develop a sense of responsibility, ownership, and persistence in all aspects of their character. And this is not done in top-down fashion, either. Instead, CLA’s leaders and staff have created an environment where students advocate for themselves. These are skills that will no doubt serve them well in high school, college, professional and personal relationships, job interviews, board rooms, and beyond. CLA cares as much about empowering students and helping them hone their voices as it does about high test scores. This is a testament to the school’s commitment to its students’ lifelong success as well as its deep understanding of what it takes to lift students in poverty and propel them toward the success they deserve.

We invite you to read Keith’s story and see for yourself how good charters like CLA are good choices for students in Cleveland and across Ohio.


NOTE: The Thomas B. Fordham Institute occasionally publishes guest commentaries on its blogs. The views expressed by guest authors do not necessarily reflect those of Fordham.

In the last Ohio Gadfly, I described the many similarities between Washington State’s lengthy debate about high school graduation requirements during the years that I worked there and the debate underway in Ohio now. 

As has been Washington’s habit, too, on everything from funding to accountability, the Ohio State Board of Education has kicked the issue to a study panel for the time being. At its December 13 meeting, the Board, after first rejecting proposals to delay or reduce the college- and work-ready requirements adopted in 2014, directed the State Superintendent to appoint a work group to “review the graduation requirements and consider alternative approaches.” The work group, with up to twenty-five members broadly representing the education community, is to make a recommendation to Superintendent DeMaria by the Board’s April 2017 meeting.

Following is some immodest advice to the work group from someone who may be new to Ohio but is not new to work groups, task forces, and their brethren.

Set a specific end date for any transition to the new graduation requirements and make equivocating on it hard. As the nonprofit League of Education Voters put it when Washington was addressing the same task, “A transition period is understandable. A transition period with no end date or specific plan does not serve our students’ best interests, nor does it display any urgency for closing our state’s growing achievement and opportunity gaps.” Adopting minimum cumulative scores on the end-of-course (EOC) tests and effective dates in rule makes it more difficult to back away from the requirements when they draw near, because a rule once done would have to be undone. 

Don’t attempt to make your recommendation through a consensus process. The work group is made up of an unwieldy number of members (currently twenty-three) representing interests across the education spectrum, with little time before it’s asked to report in April. The wide representation ensures the Board will have the diverse input it wants and needs. But it also means that the people around the table are not likely to agree on a direction for moving forward. An insistence on consensus is more likely to lead to paralysis than to an actionable recommendation. While seeking as much common ground as possible, the work group should be prepared to vote at its last meeting and to submit a minority report to the Board if needed. Otherwise nothing is accomplished but delaying a decision that is better made sooner rather than later.

Don’t presume that the “right” answer can be found through data. The work group’s recommendation should be informed by data, but not driven by it. The first consideration in any decision to revise the present graduation requirements should be what best prepares Ohio students to be successful in life beyond high school and what it takes to get them there, not whether graduation rate projections are politically acceptable. I’ve staffed and followed more than one study committee that operated on the assumption that the answer would become clear if there were just enough data. It won’t.

Consider the inclusion of an evaluation component. Any new graduation requirements are intended to alter behavior, or they would not be worth the effort. The fact is we cannot know with any precision how districts, schools, and students will respond to higher standards for achievement on assessments or how they will use the options made available to obtain a diploma. The work group might consider recommending an independent evaluation once there’s been enough experience with the graduation requirements, like the one California formerly conducted on its high school exit examination. Such an evaluation might examine, for example, impacts on four-year and five-year graduation rates by student subgroup; the strategies used by districts to improve performance on the EOCs; the use of alternatives such as industry certifications and career readiness assessments, and the post-secondary outcomes of those alternatives; and the effectiveness of local and state interventions.

Finally, keep front and center the central question of what a high school diploma is and what it should represent. The Legislature made clear in passing HB 487 in 2014 that Ohio could no longer persist in granting high school diplomas that provide no assurance to students, parents, colleges, and employers that a student receiving one is ready to move on to post-secondary education or a career. A diploma can no longer be a rite of passage, as it was when most of us were in high school, awarded when we’d passed enough courses to “walk” on graduation day. It should be a demonstration of accountability for both the student and the school that the student has attained the skills and learning needed to take the next step on whatever path they choose. A work group recommendation that does not meet that mark—and not just for some generation ahead but for students in the system now—will not meet the expectations of the Board, the legislature, or the public.

Jack Archer served as director of basic education oversight at the Washington State Board of Education until his retirement in 2016.  He lives in Fairview Park.


We take a look at a great charter school in Columbus, ask a serious question about school closures, and more.

Parents make choices about their child’s schooling based on a variety of factors: location, safety, convenience, academics, extracurriculars, support services, and more. Many families choose their school by moving to the neighborhood of their preference, thus exercising “choice” when making homeownership decisions. It’s important to recognize that not all families have the same luxury. In fact, many don’t. For the most part, parents living in poverty can’t just up and move themselves to a neighborhood with higher-performing, better-programmed, safer schools. Yet their children deserve high-quality educational opportunities, too, in schools that work for them based on their unique learning styles, interests, and needs.

If we believe that parents of all income levels and backgrounds deserve the same choices we exercise for ourselves and our own children, then Ohio’s high-performing charter schools deserve our unwavering support. The 21,000+ events held across the nation last week for National School Choice Week demonstrate both the pressing need for quality school options and the support they enjoy. Columbus Collegiate Academy (Dana Avenue campus), one of the city’s highest-performing middle schools, helps its eighth graders achieve math and science proficiency at a rate that’s more than double what the district achieves. Meanwhile, its eighth-grade reading proficiency rate is thirty-seven points higher than the district’s. It achieves this despite nearly all of its students being economically disadvantaged.

Impressive data only tell part of the story. Hearing directly from parents and students sheds much needed light on how life-changing a good charter school option can be. Consider the story of just one student, Farah, who in the video below shares how she felt unsupported and was “made fun of” at a past school. (Given that one in five students reports being bullied, we know Farah is not alone.) Switching to Columbus Collegiate Academy gave Farah a sense of safety—arguably a prerequisite for learning given that we know bullying increases a child’s risk of having poor sleep and facing anxiety, depression, and other hardships.


But Farah, like many students from similar backgrounds, was far behind academically. She admits that she was “an all Fs kid.” Columbus Collegiate’s culture of high expectations, hard work, and relentless commitment to the idea that all children can and will learn has helped Farah find a sense of self-efficacy and visualize a path to success.

Take a look at Farah’s inspiring story about the difference a good charter has made in her life.

For more student perspectives, check out Shyanne’s story, as well as profiles on several other students attending high-performing charter schools. 


“Winners never quit and quitters never win.” There's a lot of truth in that cliché, but it doesn't seem to apply to education. When it comes to chronically low-performing schools, in many cases the better—and more courageous—course is to “quit” and close a school that is simply beyond repair.

In recent years, attempts to turn around failing schools have been most closely linked to the Obama Administration’s supercharged School Improvement Grant (SIG) program. Between 2010 and 2015, the federal government spent $7 billion on efforts to turn around low-performing schools. In exchange for these funds, grantee schools pledged to implement prescribed interventions, such as replacing personnel or changing instructional practices.

The returns? Not much—or at least nothing clear—according to a massive study by Mathematica and the American Institutes for Research (AIR). The study examined schools in the 2010 SIG cohort and tracked pupil outcomes through three years of implementation. Using data from twenty-two states, the analysis found that SIG had no significant impact on students’ state math or reading test scores. Nor did it find any evidence that SIG increased pupils’ likelihood of high school graduation or college enrollment. Further, the analysts didn’t even uncover an effect of SIG on the practices of participating schools.

Even with large sums of new money tied to well-meaning mandates, SIG turnaround schools could not appreciably move the achievement needle.

Now, what to do? One option is to keep trying to fix bad schools. We’ve been down that road with precious little to show for the time, effort, and dollars spent.

Another approach is to close low-performing schools, while also launching excellent new schools to replace them. Andy Smarick, arguably the staunchest proponent of this view, writes:

We’ve tried to fix these deeply troubled schools for eons to no avail. … The wiser course of action is to make persistently underperforming institutions go away and then start new institutions in their place.

His position has been criticized as a “crusade” against SIG, and perhaps it is. Interestingly, though, in SIG’s own program design, one possible “intervention” was to close the school and have students enroll in another one. Predictably, almost no SIG schools took up the offer to voluntarily go out of business. But what if SIG had somehow induced the closure of more low-performing schools, and instead diverted billions of dollars to new school formation? Would kids’ outcomes have been any different?

We certainly don’t know the answer to this counterfactual. But a growing body of research (unrelated to SIG) suggests that students might have benefitted had their low-performing schools closed. In The 74, Matt Barnum highlights recent research from New Orleans that finds students made academic gains when low-performing schools closed. This mirrors Fordham’s own research, conducted by Deven Carlson and Stéphane Lavertu, which found displaced students from Ohio’s urban areas made significant gains on state exams, post-closure. Another study, this one from New York City, revealed that closing low-performing high schools increased the likelihood of students graduating from high school. Though closures may be politically difficult, studies now indicate that students benefit when a low-performing school closes and they relocate to a better one.

The Obama Administration’s wager on intervention mandates and school turnarounds yielded little payoff for the many thousands of children attending low-performing SIG schools. That demands some different thinking about how to lift outcomes in America’s neediest communities. Of course, it would be naïve to think that we can simply close our way to success. But done judiciously, shutting the lowest-performing schools while focusing resources on promising startups might be our surest bet.

This piece originally appeared on the Real Clear Education blog.


One of the hallmarks of school accountability is the identification of and intervention in persistently low-performing schools. Under No Child Left Behind (NCLB), schools were required to make adequate yearly progress (AYP); if they fell short, they were subject to a set of escalating consequences. Much of the backlash against NCLB resulted from these consequences being imposed from afar with little flexibility. So when Congress geared up for reauthorization, it wasn’t surprising that the new law, the Every Student Succeeds Act (ESSA), shifted the responsibility for identification and intervention to the states.

Last week, the Ohio Department of Education (ODE) released an overview of its proposed ESSA state plan. This isn’t the entire plan—the full draft will be released for public comment in early February. In future posts, we’ll do some deep dive analyses of the key areas and potential impacts of the full draft. But in the meantime, there’s plenty in the overview to explore—including how the Buckeye State plans to identify its lowest-performing schools.

ESSA requires states to identify at least two categories of schools: comprehensive support schools (which include the lowest-performing schools in the state) and targeted support schools (which include schools struggling with certain subgroups). Although ESSA requires only two categories, ODE’s plan proposes to carry over the three categories it currently uses that were a part of its federal waiver under NCLB/ESEA: priority schools, focus schools, and watch schools. Identification of these schools will begin at the end of the 2017-18 school year and, per ESSA requirements, the list of identified schools will be updated every three years. Let’s take a closer look at Ohio’s proposal for each of these categories.

Priority Schools

Priority is the name ODE plans to give to schools that fall under ESSA’s category of comprehensive support. There are three ways to fall onto this list:

  • Schools that receive an overall report card grade of F. Although Ohio hasn’t assigned summative ratings to schools in recent years, state law (though federal law no longer does) requires overall grades starting in 2018. ESSA requires that schools identified in this category include the lowest-performing 5 percent of Title I schools. Ohio’s plan notes that if fewer than 5 percent of schools receive an F, the next lowest-performing schools as determined by the overall report card grade will be added to meet the ESSA requirement.
  • Schools with a four-year graduation rate of less than 67 percent. It is an ESSA requirement that Ohio label such schools as comprehensive support schools (or in the case of Ohio, priority schools).
  • Schools with one or more subgroups performing at a level similar to the lowest 5 percent of schools.[1] According to ESSA, these types of schools start out under the targeted support label. If, however, a school fails to meet state-determined exit criteria within a certain number of years, it must transition into the priority category. Ohio already disaggregates some of its report card results based on certain subgroups (e.g., English language learners or race/ethnicity), but ESSA ups the ante by adding homeless students, foster care students, and children of active duty military personnel to the list of required subgroups for accountability. ODE’s plan also proposes to adjust its N-size for subgroups from 30 students to 15.

In order to move off the priority schools list, schools must accomplish all of the following (see the sketch after this list):

  1. Based on the overall report card grade, achieve school performance higher than the lowest 5 percent of schools for two consecutive years
  2. Earn a four-year graduation rate of more than 67 percent for two consecutive school years (if applicable)
  3. Have no student subgroups performing at a level similar to the lowest 5 percent of schools
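Since all three conditions must hold at once, the exit test amounts to a simple conjunction. Here is a minimal sketch with hypothetical inputs; the “if applicable” carve-out covers schools without a graduation rate.

```python
def exits_priority_status(above_bottom_5pct_two_years, grad_rates, low_subgroups):
    """Sketch of the proposed exit test: a school must clear all three
    criteria at once. grad_rates holds the last two years' four-year
    graduation rates, or None for schools without one (e.g., a K-8)."""
    grad_ok = all(r > 0.67 for r in grad_rates) if grad_rates else True
    return above_bottom_5pct_two_years and grad_ok and not low_subgroups

# A K-8 school above the bottom 5 percent for two consecutive years,
# with no subgroup performing like the lowest 5 percent, exits.
print(exits_priority_status(True, None, False))  # True
```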

Focus Schools

Focus is the name ODE proposes to give schools that fall under ESSA’s category of targeted support. There are three types of schools that will be labeled as focus schools:

  • Schools that earn a grade of D or F for the Gap Closing report card component for two consecutive years
  • Schools that have one or more student subgroups that fail to meet specific locally determined improvement goals for three consecutive years
  • Schools that do not meet multiple student subgroup performance benchmarks

In order to move off of the focus list, schools must earn an overall report card grade of C or better, earn a C or better on the Gap Closing component, and meet subgroup performance goals outlined by the state.

Watch Schools

Ohio’s additional category, watch schools, consists of schools that “struggle to meet the needs of one or more student subgroups.” ODE’s overview includes little detail about these schools, but the forthcoming plan is sure to offer more.

Although school identification is typically associated with low-performing schools, it’s worth noting that Ohio already identifies high performers. The state plans to continue these efforts under ESSA—even though it’s not required—in order to “honor and celebrate school districts that grow and achieve.” These recognition categories include schools that accomplish sustained achievement and substantial progress while serving a significant number of economically disadvantaged students and schools that exceed expectations in student growth for the year.

ODE’s plan for identifying low-performing schools shouldn’t be big news—other than changing the names of the categories and opting to have three categories instead of two, Ohio follows ESSA’s identification provisions pretty closely. The real drama is going to come with the news of which schools have been identified and how districts will select and implement improvement plans that actually work.


[1] This is based on individual subgroup performance.


On the college football field, Ohio and Michigan are bitter rivals. But in the charter school world they share something in common: Both states’ charter sectors have been saddled with the unflattering label of the “wild west.” Recently, this characterization—generally meant to describe a state without proper accountability policies—has been used in critiques of Michigan native and charter supporter Betsy DeVos, President-elect Trump’s appointee for secretary of education.

What’s clear is that this label and accompanying narrative are hard to shed, even though both states have significantly strengthened their charter laws. On these Gadfly pages, Daniel Quisenberry has described how Michigan is improving its charter sector. In a Fordham report released today, we show how Ohio’s era of stagecoaches and saloons is starting to give way to a more modernized charter sector.

In On the Right Track, we examine the early implementation of recently enacted charter reforms in our home state of Ohio. Bottom line: The Buckeye State’s reforms are being implemented with rigor and fidelity, bringing promising changes to one of the nation’s oldest, largest, and most notorious charter sectors.

In autumn 2015, Ohio legislators passed, and Governor John Kasich signed, a landmark bipartisan charter reform bill (House Bill 2). This legislation sought to strengthen accountability and transparency, align incentives to ensure quality schools, and rid the sector of conflicts of interest and loopholes that had threatened public trust. House Bill 2 was legislation that we at Fordham strongly supported and were pleased to see enacted into state law.

Among its myriad provisions, the legislation:

  • Ratchets up state oversight of Ohio’s numerous charter authorizers (more than sixty as of last year). Among the key accountability tools is Ohio’s sharpened authorizer evaluation system, which now includes revocation of authorizing rights for a Poor rating.
  • Eliminates “authorizer hopping.” While Ohio’s plethora of authorizer options allowed schools to find one that fits their needs, it also allowed low-performing schools to escape accountability by switching authorizers. Ohio’s charter reforms now prohibit this, with few exceptions.
  • Empowers charter governing boards to exercise independent control over their schools—and puts safeguards in place to reduce the likelihood they are being controlled by a management company.

But as studies and vast amounts of experience have taught us, whether these legislative reforms bear fruit or wither on the vine hinges largely on implementation. Now that a year has passed since Governor Kasich signed the legislation, we thought it was time to take a first close look. How are these reforms being implemented—with vigor and care, or with neglect? Are there any early indications that the reforms are improving sector performance? Alternatively, are any unintended consequences becoming clear?

To analyze these questions, we looked at several key data points, including trends in Ohio’s charter school closures and startups. We also reviewed each House Bill 2 provision, searching for evidence of implementation or enforcement by state authorities. Three key findings emerge:

  • Ohio’s charter sector is becoming more quality focused. In 2016, twenty-one charters closed across the state, among the highest numbers of school closings on record in Ohio. The schools had received low ratings on state report cards, suggesting that Ohio’s tougher accountability policies are—as they should—decreasing the likelihood that underperforming schools will just go on forever. Additionally, a very small number of new charter schools opened in fall 2015 and 2016—just eight new startups in both years—the lowest numbers of new school openings in Ohio’s charter history. This indicates that authorizers are vetting new schools more diligently as the pressure rises to open schools that promise quality. However, this also raises the troubling possibility that reforms are impeding charter growth, perhaps even deterring potentially excellent schools from entering the sector.
  • Ohio’s rigorous authorizer evaluation system has teeth. In October 2016, the Ohio Department of Education released its first round of high-stakes authorizer ratings under a revamped evaluation system. (Initial evaluation legislation passed in 2012, but that iteration was never thoroughly implemented.) Twenty-one of sixty-five total authorizers received an overall rating of Poor—the lowest possible—while another thirty-nine were rated Ineffective, the second-lowest rating. Authorizers rated Poor had their authorizing rights revoked, pending appeal, while Ineffective authorizers are now subject to a state-overseen quality improvement plan and are prohibited from opening new schools. Poor-rated authorizers are responsible for only a small portion of the sector—just 8 percent of Buckeye charter schools—while Ineffective entities authorize the majority of charters (62 percent).
  • State authorities are implementing forty-nine out of fifty of the House Bill 2 provisions in a verifiable way. Many of the legislative provisions require state agencies—e.g., the Ohio Department of Education or State Auditor—to enforce or verify adherence to the new charter law. To their credit, these executive agencies are taking their responsibilities seriously and carrying out the new charter law.

The hard work of implementation is, of course, far from done in Ohio. Policy makers still need to make some important adjustments to the authorizer evaluation system, and they must find a way to balance the tighter accountability environment with the need to grow new schools that give families and students the quality options they deserve. Ohio’s charter sector, for instance, would greatly benefit from more generous startup investment dollars—not to mention more equitable operational and facilities funding—to help quality schools replicate or launch promising startups from scratch. Lastly, empirical research will be needed to determine whether Ohio’s sector performance improves post-reform, compared to prior studies that uncovered disappointing results.

In the end, we offer some good news: The implementation of major charter reform in Ohio is off to a strong start. Yes, we know that bad reputations are hard to shake. But before making broad generalizations, come and take a closer look at the changes—for the better—happening right here in America’s heartland.


The American Federation for Children (AFC) recently released its third annual poll on school choice. The national poll surveyed just over 1,000 likely November 2018 voters by phone in early January.

To determine general support and opposition, AFC posed the following question: “Generally speaking, would you say you favor or oppose the concept of school choice? School choice gives parents the right to use the tax dollars associated with their child’s education to send their child to the public or private school which better serves their needs.” By and large, the findings indicate broad support for school choice—68 percent of those surveyed support it, compared to 28 percent who oppose it. These numbers are similar to AFC’s results from previous years: 69 and 70 percent of likely voters expressed support for school choice in 2015 and 2016, respectively.

In addition to overall percentages, AFC broke out the survey numbers by specific demographic groups. Seventy-five percent of Latinos and 72 percent of African Americans support school choice compared to 65 percent of Whites. In terms of political affiliation, 84 percent of Republicans support school choice (up slightly from 80 percent in 2016), compared to 55 percent of Democrats (down from 65 percent in 2016); 67 percent of Independents voiced support for choice. Of the four generations surveyed, Millennials had the highest level of support for choice with 75 percent. 

The AFC survey also finds that seven types of choice gain majority support: special needs scholarships (83 percent), public charter schools (74 percent), scholarship tax credit programs (73 percent), education savings accounts (69 percent), virtual learning (59 percent), opportunity scholarships (58 percent), and school vouchers (51 percent).

Pollsters also questioned respondents on their support for “two potential school choice proposals that may be introduced in Congress.” Seventy-two percent of likely voters expressed support for a federal scholarship tax credit, and 51 percent supported Trump’s proposal of a $20 billion school choice program.

It’s worth noting that the way pollsters ask questions matters. While the report mentions the wording of a few questions, the majority are not provided. Nevertheless, other national polls find similar levels of support for school choice.

SOURCE: “2017 National School Choice Poll,” American Federation for Children (January 2017).


Ohio charter schools have long reported struggling to secure school facilities. A soon-to-be-released report, “An Analysis of the Charter School Facility Landscape in Ohio,” surveys school principals to provide the most detailed look to date at Ohio charter school facilities. It comes from the Ohio Alliance for Public Charter Schools, the National Charter School Resource Center, the Charter School Facilities Initiative (managed by the Colorado League of Charter Schools), and the National Alliance for Public Charter Schools. The survey, which includes data from 81 percent of Ohio’s brick-and-mortar charter schools, examines multiple aspects of charter facilities, including the size, uses, and per-student cost of each.

Please join Fordham and the Callender Group to hear the report’s authors share the data and to hear Ohio charter schools and school networks discuss what the report means on the ground.

Thursday, February 2, 2017
8:30 - 10:00 am

Chase Tower - Sixth floor conference room B
100 East Broad Street
Columbus, OH 43215

PRESENTERS:
Kevin Hesla, National Alliance for Public Charter Schools and report co-author
Jessica M. Johnson, Esq., Colorado League of Charter Schools and report co-author

PANELISTS:
Tiffany Adamski, Regional Director Midwest at iLEAD
Andrew Boy, Founder and Chief Executive Officer, United Schools Network
Lyman Millard, consultant at Breakthrough Schools

MODERATOR:
Mark Real

Doors open at 8:00 am and a light breakfast will be served.

SPACE IS LIMITED, SO REGISTER TODAY


We look at Ohio’s Quality Counts ranking, district-charter collaboration options, and more.

Education Week just issued its twenty-first “Quality Counts” report card for states. Ohio’s grades are so-so—and nearly identical to last year’s. Yet with a “C” overall and a rank of twenty-second nationally, the Buckeye State’s standing relative to other states has fallen dramatically since 2010, when it stood proud at number five.

Ohio’s slide in EdWeek’s Quality Counts rankings has become easy fodder for those wishing to criticize the state’s education policies. Those on the receiving end of blame for Ohio’s fall have included Governor Kasich (and the lawmakers who upended former Governor Strickland’s “evidence-based” school funding system), Ohio’s charter schools (never mind that nothing whatsoever in the EdWeek scorecards takes them into consideration!), and even President Obama (specifically for his 2009 Race to the Top program). I’ve lost track of the number of times I’ve heard or read that Ohio’s plummeting ranking is incontrovertible evidence of things gone awry.

An almost-twenty-slot drop in the rankings sounds terrible, but my guess is that many people who lament it don’t know what the ratings comprise or that EdWeek’s indicators have changed over time. Let’s take a look at the overall rankings, and then take a deeper dive into changes to EdWeek’s report card to help explain Ohio’s decline.

Table 1 shows Ohio’s performance on the Quality Counts assessment from 2010 to 2017. Ohio’s national ranking and points earned have slipped since 2010 (used here as a starting point as it represents the high-water mark for the state on this particular report card), even while its grade continues to hover in the B-minus to C range.

Table 1: Ohio scores on Education Week’s Quality Counts report card

The 2017 report card is based on three sub-categories.

  • Chance for Success—a measure that includes educational inputs and outputs across the life span, such as family income, parent educational levels, preschool and kindergarten enrollment, fourth- and eighth-grade NAEP scores, and adult educational attainment.
  • K-12 Achievement—a measure of student performance on the National Assessment of Educational Progress (NAEP), graduation rates, and the percentage of students scoring 3 or above on AP exams, as well as gaps in NAEP proficiency between poor and non-poor students.
  • School Finance—a measure that includes state funding systems’ reliance on local property wealth as well as other measures of equity, per pupil expenditures, and share of taxable resources spent on K-12 education.

Graph 1: Ohio’s Quality Counts sub-scores, 2010-2017 (three categories)

As Graph 1 depicts, Ohio’s sub-scores are largely flat. Chance for Success dropped by one point over the last eight years; K-12 Achievement fell by less than two points; and School Finance fell by three and a half points. So why has Ohio’s overall rank suffered so greatly?

There are at least two reasons behind Ohio’s fall from fifth that have nothing to do with its actual performance. The first is the changing composition of the report cards; the second has to do with national context.

Prior to 2015, Education Week graded states in six categories instead of three. Two of these—“Standards, Assessments, and Accountability” and “Transitions and Alignment”—were among Ohio’s top-rated categories, with Standards the only category in which Ohio ever received an A. Table 2 below makes clear that the shift to just three categories in 2015 essentially knocked out Ohio’s highest-rated components, causing an overall decline in its score. Indeed, the largest single-year drop occurred between 2014 and 2015, when Quality Counts was downsized.

Table 2: Ohio’s sub-scores on Education Week’s Quality Counts report card (0-100)

Graph 2 includes all of Ohio’s sub-scores over time, including the three that were removed from the report card (shown in red and orange below).

Graph 2: Ohio’s Quality Counts sub-scores, 2010-2017 (six categories)

The change in EdWeek’s report card at least partially explains Ohio’s decline in score as well as rank over the decade. Still, readers might wonder why Ohio dropped so suddenly from twelfth to twenty-sixth in 2014—a year before Standards, Assessments, and Accountability (and the other metrics) were removed. That brings us to the second reason for Ohio’s change in relative rankings, which likely had less to do with what Ohio did or didn’t do and more to do with what happened in other states. The Buckeye State was an early adopter of the Common Core state standards and aligned assessments, voting to adopt them in 2010. And as a Race to the Top winner the same year, it was ahead of the pack in adopting new statewide teacher evaluation systems (a component of the old “Teaching Profession” category). It is unsurprising that Ohio, as an early adopter of reform, earned high marks on these measures.

Ohio’s fourteen-slot fall between 2013 and 2014 very likely resulted from rapid changes in other states, which earned them points Ohio already had under its belt: other states adopted and implemented the Common Core, tougher assessments, and teacher evaluation systems, all of which boosted their scores. Ohio’s actual score dropped by only a point the year its ranking plummeted. Conversely, its score fell three times as much the following year (2015) while its ranking rose by eight slots. The lesson here is that relative rankings are just that—relative.
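
To make the relativity point concrete, here is a minimal sketch (in Python, with made-up scores rather than EdWeek’s actual data) of how a state’s rank can tumble even when its own score barely moves, simply because other states’ scores rise:

```python
# A minimal sketch of how relative rankings behave, using hypothetical
# scores (not EdWeek's actual data). "Ohio" slips one point between
# years, but its rank falls three slots because the others improve.

def rank_of(state, scores):
    """Return the 1-based rank of `state` when scores are sorted high to low."""
    ordered = sorted(scores, key=scores.get, reverse=True)
    return ordered.index(state) + 1

year1 = {"Ohio": 80.0, "State A": 78.0, "State B": 77.0, "State C": 75.0}
year2 = {"Ohio": 79.0, "State A": 82.0, "State B": 81.0, "State C": 80.5}

print(rank_of("Ohio", year1))  # 1
print(rank_of("Ohio", year2))  # 4 -- a one-point dip in score, a three-slot fall in rank
```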

Table 3: Ohio’s dramatic ranking shifts from 2013 to 2015

Resorting to broad generalizations about Ohio education based solely on the national Quality Counts report card is misinformed at best and intellectually dishonest at worst. Even so, many of the Quality Counts metrics are valuable and can help inform policy priorities. Ohio’s poor showing on NAEP achievement gaps by income, preschool and kindergarten enrollment, funding equity, and adult educational attainment should be especially concerning to us. Ohio leaders wanting the state to compete as a thriving place of opportunity for families should keep close tabs on these metrics, always pushing for ways to move the needle. What they shouldn’t do is pay attention to hyperbolic claims about the demise of our state—at least, not based on this particular report card.


Peter Cunningham recently called district-charter collaboration the “great unfilled promise” of school choice. He explains the possibilities by pointing to a host of cities that are already benefiting from collaboration: In New York City, districts and charters are partnering to improve parent engagement. In Rhode Island, charters are sharing with district schools their wealth of knowledge on how to personalize learning effectively. Boston has district, charter, and Catholic schools working together on issues like transportation and professional development and has successfully lowered costs for each sector. The SKY Partnership in Houston is expanding choice and opportunities for students. The common enrollment system in New Orleans has solved a few long-standing problems for parents (like issues with transparency), and partnerships in Denver have set the stage for even more innovation. Though the type and extent of collaboration differs in each of these places, the bottom line is the same: Kids benefit.

Here in the Buckeye State, there are thousands of kids in need of those benefits. Our most recent analysis of state report card data shows that within Ohio’s large urban districts (commonly known as “the Big Eight”), proficiency rates were far below the state average in fourth- and eighth-grade math and ELA. Each of these districts has a significant population of disadvantaged students similar to those that the district-charter partnerships in other cities are serving well. It stands to reason that district-charter collaborative models would greatly improve both the opportunities for and the outcomes of Ohio students.

But things in the Buckeye State are a bit more complicated. For starters, Ohio charters don’t have access to the same resources that charters in other states do, and that’s led to some serious squabbling about who’s stealing from whom. A 2016 survey of Ohio charter principals found that the biggest barriers to growth included lack of funding, trouble securing facilities, and a lack of district cooperation on issues like transportation and student records. Districts, meanwhile, claim that they’re the ones who are being shortchanged. Working to reform funding and access to facilities could go a long way toward making the sectors more amenable to collaboration.

But even with meaningful reform, the tensions between districts and charters in Ohio could remain. After years of the two sectors being pitted against one another, trust is in short supply. The truth is that there are few tangible incentives for charters and districts to stop finger-pointing and start working together. But if the goal of both sectors is to do what’s best and right for kids, then collaboration can’t remain a pipe dream. When districts and charters continue to operate in silos, kids pay the price.

Fortunately, Cleveland’s existing district-charter partnership is a positive sign of what could be in the Buckeye State. Plans were initiated back in 2012 with the Cleveland Plan, and by 2014, the Cleveland Metropolitan School District (CMSD) began to engage with local charters in earnest. The city became a "Gates Compact City" in 2014—charters and the district signed a pledge to “improve collaboration and work toward shared goals” and were provided with a planning grant to support their work. Progress has continued since, and the Cleveland Education Compact now boasts subcommittees in which district and charter school leaders work jointly on issues like professional development, special education, policy/advocacy/funding, and enrollment and record sharing. No partnership is perfect, and CMSD and its charter partners will certainly have their ups and downs as they determine how best to collaborate moving forward. But the willingness from both groups to work together is worth acknowledging, and their collaborative model is worth considering in other Ohio cities.

Many folks in Ohio didn’t think charter reform could ever become a reality, but in 2015, the Ohio General Assembly passed House Bill 2, a comprehensive reform bill with provisions designed to incentivize higher-performing authorizers, charter school boards, and management companies. Despite the short time frame, we’ve already started to notice the charter sector changing for the better. With these comprehensive reforms under our belt and $71 million in federal Charter Schools Program (CSP) funds waiting in the wings, 2017 could be a good year for Ohio charter schools. A good year would certainly be welcome. But a great year would be even better—and a great year becomes far more possible if Ohio can solve the problem of limited collaboration between districts and charter schools.


NOTE: The Thomas B. Fordham Institute occasionally publishes guest commentaries on its blogs. The views expressed by guest authors do not necessarily reflect those of Fordham.

Last fall I retired to Northeast Ohio, where my wife and I have family, from Washington state, where I’d been staff to the State Board of Education and the state legislature. Perusing the Plain Dealer one morning, I felt I might as well have been back in Olympia.

The story described new state high school graduation requirements, linked to higher standards of college and career readiness set by Ohio’s State Board of Education, and the fierce backlash that ensued from superintendents and others. The State Department of Education calculated that nearly 30 percent of high school juniors were likely to fall short of graduating next year if the new requirements were applied to them. Superintendents organized a protest rally—dubbed by one State Board member a “march for mediocrity”—on the statehouse steps. In light of the concerns voiced, the Board created a task force to recommend whether the requirements should be changed or phased in in some manner.

That the present controversy resonates with my experience in Washington is not such a surprise. I could have moved to Ohio from many other states grappling with the same question: Can you have higher standards for granting a high school diploma geared to measures of career and college readiness without adversely affecting graduation rates, at least for a while? How do we best resolve the inherent tension between the twin goals of higher standards and higher graduation rates?

While the parallels between Washington and Ohio are striking, there are significant differences, too. I relate the Washington story not for its intrinsic interest, but in the hope that there may be things to be learned from the Evergreen State experience.  

While Ohio requires students to earn a cumulative passing score on a series of end-of-course tests to graduate, Washington has set passing scores on statewide assessments, developed by the Smarter Balanced Assessment Consortium (SBAC) and linked to the Common Core State Standards, as the bar for earning a state-approved diploma. Legislation enacted in 2013 required the State Board to establish the scores students must reach to obtain a state diploma. “The scores established by the state board of education for the purpose of earning a certificate of academic achievement and graduation from high school may be different from the scores used for the purpose of determining a student’s career and college readiness,” the act said.

In rule, the Board adopted aspirational language declaring that, “The state’s graduation requirements should ultimately be aligned to the performance levels associated with career and college readiness.” But there is no law requiring that the two be so aligned, and pressure from “the field” will continue to be strong that they not be aligned lest graduation rates slip. 

In a November 2014 presentation on alternative assessments, a consultant to the Board posed the big, unpleasant but unavoidable question, “How can we increase the rigor of a high school diploma and the number of students graduating at the same time?” It is the same question confronting the Ohio State Board now. 

In January 2015, the Washington Board formally stated an intent to adopt an “equal impact” approach to setting the minimum, or “cut,” scores for graduation. Under this approach, the cut scores for the Smarter Balanced assessments would result in the same projected percentage of students earning a diploma as would have been the case had the state still been using the old, superseded state assessments aligned with earlier, less rigorous state standards.

In August, the Board adopted scores implementing that policy. Superintendents supported the action as fair to students while their districts continued to transition to the new standards and assessments. Business-backed education reformers criticized it as a retreat from standards. “The Washington State Board of Education today fell short of setting a clear path for our state toward all students graduating high school prepared for their next step in life,” said the Seattle-based League of Education Voters.

A former school board member and community college instructor asked, “Isn’t it time that we became serious about education? Setting cut scores based on how many this will allow to pass just further depreciates a high school diploma. We know by community college remediation rates that the current high school diploma is not acceptable. Why continue this myth?”

The Board stated that the initial score-setting begins the process of moving toward more rigorous college- and career-ready standards, and “was not done to compromise or confuse our goal.” Its good intent is not to be doubted. To reiterate, however, there is no law requiring that scores established for graduation move toward and ultimately equal those for career and college readiness, and none is on the horizon. I will watch from my new home to see whether the Board’s intent is realized in a political environment not friendly to it.

As I keep an eye on events in Washington, having been so involved for so long, I’ll also be following the deliberations of the task force created by the Ohio State Board of Education with great interest.

I find much to credit in Ohio’s graduation requirements. Having students achieve a minimum cumulative number of points on state end-of-course tests, rather than meet a standard on each consortium-designed test, and offering alternative pathways to graduation, including an industry credential paired with an established workforce-readiness assessment, are features worthy of examination, and perhaps emulation, by other states.

Change is hard, we all know. It is prudent for the Board to step back and, with the help of the task force, consider what may be the best way to implement the new requirements while doing the least harm. At the same time, the Board should resist the inevitable pressure to back away from higher standards. It is critical that board members not lose their focus on what will best prepare students—meaning this generation of students and not some future one—for success in life after high school in a very different and more challenging world from when my generation was in school. The stakes are very high.

Jack Archer served as director of basic education oversight at the Washington State Board of Education until his retirement in 2016.  He lives in Fairview Park.


Much prior research indicates that youngsters from single-parent families face a greater risk of poor schooling outcomes compared to their peers from two-parent households. A recent study from the Institute for Family Studies at the University of Virginia adds to this evidence using data from Ohio.

Authors Nicholas Zill and W. Bradford Wilcox examine parent survey data from the National Survey of Children’s Health. This dataset contains information on 1,340 Ohio youngsters—a small but representative sample. The outcomes Zill and Wilcox examine are threefold: 1) whether the parent had been contacted at least once by their child’s school for behavioral or academic problems; 2) whether the child has had to repeat a grade; and 3) a parent’s perception of their child’s engagement in schoolwork.

The upshot: Buckeye children from married, two-parent households fare better on schooling outcomes, even after controlling for race/ethnicity, parental education, and income. Compared to youngsters from non-intact families, children with married parents were about half as likely to have been contacted by their school or to have repeated a grade. They were also more likely to be engaged in their schoolwork, though that result was not statistically significant.

An estimated 895,000 children in Ohio live in a single-parent household, according to the Annie E. Casey Foundation. Each of them may feel the same love and affection as their peers from married families, but the stark reality, as indicated by study after study, is that on average, they face disadvantages manifested in lower schooling outcomes. The challenge for schools is to help all of their students—including ones from single-parent families—to beat the odds.

Source: Nicholas Zill and W. Bradford Wilcox, Strong Families, Successful Students (Institute for Family Studies).


More than sixty years after Brown v. Board, traditional district schools are more often than not still havens of homogeneity. Static land use guidelines, assignment zones, feeder patterns, and transportation monopolies reinforce boundaries that functionally segregate schools and give rise to the adage that ZIP code means destiny for K-12 students. Asserting that student diversity is an object of increasing parental demand, at least among a certain subset of parents of school-age kids, the National Charter School Resource Center has issued a toolkit for charter school leaders looking to leverage their schools’ unique attributes and flexibilities to build diverse student communities not found in nearby district schools. The report cites a number of studies showing academic benefits of desegregated schools, especially for low-income and minority students. It is unlikely that the mere existence of documentable diversity is at the root of those benefits. More likely, it is a complicated alchemy of choice, quality, culture, and expectations that drives any observable academic boosts. Garden-variety school quality is a strong selling point for any type of school, but this toolkit sets aside that discussion to focus on deliberately building a multi-cultural student body for its own sake. Bear that in mind as we go forward.

Building diversity is not easy, even in a flexibly run and technically borderless charter school. The toolkit provides “context about research and the legal and regulatory guidance” in four main areas that must be addressed: defining, measuring, and sharing school diversity goals; planning school features to attract diverse families; designing recruitment and enrollment processes; and creating and maintaining a supportive school culture.

Goal-setting and recruitment are thorny from the start, the report warns, as using racial or cultural characteristics even to set an enrollment target is riddled with concerns around quotas and discrimination. In states where charter location is less regulated, the calculus may be how to attract families of color to a school in a predominantly white neighborhood. In states like Ohio, where charter location is limited to low-performing school districts, the calculus may be reversed. Either way, the toolkit provides valuable guidance for negotiating these potential pitfalls. Also addressed, though by no means solved, are the severe constraints charters in many states face in terms of facilities and transportation. The mechanics and legalities of diversification can easily overwhelm the best of plans before the desired diverse student body even gets in the door.

My colleagues here at Fordham have written extensively about the challenges of teaching kids who are at vastly different levels of achievement, which is more likely in a diverse school. The new toolkit has a section on school staffing, training, and professional development, but the resources highlighted there are more about cultural awareness and discipline practices than about actually teaching the students being recruited. I highly recommend Mike Petrilli’s 2012 book The Diverse Schools Dilemma for more on the latter.

The tips and guidance in this toolkit are helpful and may give charter school leaders insight into areas where well-intentioned plans to build a diverse student body could unexpectedly founder. But with no discussion of academic quality as a key means of attracting students, we are left with a roadmap to somewhere we might want to go, but only after we’ve visited the bigger and better-known stops along the way.

SOURCE: “Intentionally Diverse Charter Schools: A Toolkit for Charter School Leaders,” National Charter School Resource Center (January 2017)


NOTE: On December 13, 2016, the State Board of Education of Ohio debated whether to change graduation requirements for the Class of 2018 and beyond. Below are the written remarks of John Morris, given before the board.

Members of the Board,

Thank you for giving me a moment to offer testimony on behalf of the construction industry. Members of the industry sent me here to thank you for setting a new, higher bar with the class of 2018 graduation requirements. We are excited that this board has supported maintaining high standards for graduating and earning a diploma in the State of Ohio. Members of the construction industry were very pleased when the phase-out of the Ohio Graduation Test was announced in favor of multiple end-of-course exams and the opportunity for an industry credential to help a student graduate. We expect this new system to be an improvement over the current one, which graduates many without the skills to succeed in college and continuously FAILS to introduce others to the hundreds of thousands of pathways to employment via industry credentials.

For many decades, industries such as construction and manufacturing enjoyed a steady stream of individuals coming directly from "vocational" schools with earned industry credentials and experience. The construction industry regularly took graduates and gave them advanced placement into level three of apprenticeship, only two years from full journeyman status. That luxury and system of education is now all but dead and gone. The increased emphasis on a pre-college curriculum for all has severely diminished the ability of high school students to learn a skilled trade while in school, and the elimination of middle school "shop" classes has reduced the number of students who learn at a young age of their God-given talents in the trades. You see, when I was in sixth grade, I learned in shop class that I was not going to be a carpenter when I struggled to build the birdhouse. I knew I would not be a welder or plumber when I could not properly seal two metal objects together, but I was one who could wire a lamp and do the math needed to calculate wattage. When I struggled in high school, I chose to become an electrician. I owned my own company by the age of 28. I paid my way through college debt-free thanks to a trade. I now hold multiple master's degrees and have taught economics at the University of Cincinnati. An industry credential and a skilled trade opened the door to opportunity for me, and they can for many others.

The current graduation requirements that offer alternative pathways through industry credentials are perfectly designed to fix a broken system that mistakenly tells every child who gets a diploma that they are college-ready. We all know this is not the case; yet we've heard that many are already talking about making the tests easier and/or reducing the number of points required to earn a diploma, rather than pressing schools to work with industries like construction to steer students toward choices that don't involve certain college failure and the accumulation of unforgivable debt. We applaud this board and the Ohio Department of Education for creating a system that includes pathways to graduation via industry credentials, and we sincerely hope that you hold onto the idea of higher standards for graduation. We all know that not everyone should go to college, and industry credentials offer an alternative pathway to success in life—one without failure and debt. The only way school districts will pursue these pathways is to maintain the system as it is. Do not make their jobs easy. Keep the standards as they are written and make the superintendents do what they should be doing anyway—working to find a path to success for all students, not just those who are on a college path. Construction professionals stand ready to help school districts build pathways to credentials in our industry and are pleased to offer Ohio students a debt-free pass to a lifetime of earnings through a skilled trade.

Mr. Morris is president of the Ohio Valley Construction Education Foundation, based in Springboro, Ohio.


We ring out the old Ohio education news from 2016 and ring in the new

At the end of November, we asked you—our loyal Ohio Gadfly readers—to tell us what you thought were the top education stories for 2016. The choices were numerous and we appreciate all of the responses. In the spirit of “ringing out the old,” we give you the Top 5:

  1. House Bill 2 (HB 2): It is difficult to overstate the importance of this wide-ranging reform of Ohio’s charter school policies, which went into effect in February of this year. Almost immediately, we observed “HB 2 effects” rippling throughout the sector, particularly in terms of sponsor decision-making around school closures. Additionally, “sponsor hopping” (in which schools seek out the sponsor of least resistance when anticipating a contract non-renewal) disappeared virtually overnight. The new, rigorous sponsor evaluations strengthened by HB 2 were completed in October (more on these below). Befitting the top placement for this story in 2016, there is much more to say. Stay tuned to the Ohio Education Gadfly for our detailed analysis of the early implementation of HB 2, expected in the New Year.
  2. ECOT vs. ODE: Ohio’s largest online charter school was embroiled in a lawsuit with the Ohio Department of Education for much of 2016. In simplest terms, the dispute centers on the manner in which e-schools track student attendance and on scaled-up state requirements to document not just student “learning opportunities” but actual participation in class, as measured by hours logged in. The department’s attendance audit of ECOT found that just 6,312 pupils could be documented as full-time students, even though the school received funding for over 15,000. As a result, ECOT may have to return $60 million in state funding (a rough sketch of that arithmetic appears after this list). In response, the school has fought the audit findings in court and in the court of public opinion. Expect to hear more about this issue in 2017.
  3. School report cards: Ohio released not one but two school report cards in 2016. Due to delays in receiving the 2014-15 PARCC scores, the state released that year’s data in late February, roughly six months in arrears. In September, Ohio published school report cards again—this time on its usual schedule—for the 2015-16 school year. The upshot from both years’ data: Ohio’s more challenging standards meant that fewer students were deemed “proficient” on state exams. For some, this bitter pill has not been easy to digest, but it has been a necessary shift. For far too long, Ohio created the false impression that the overwhelming majority of students were proficient on state tests—“doing just fine”—even as one in three college freshmen fell into remedial education. A more honest appraisal of student achievement is now coming into view, and we offer our kudos to policymakers for holding the line on higher proficiency standards.
  4. Charter sponsor evaluations: HB 2 puts sponsors (a.k.a. authorizers) front and center in Ohio’s charter reform efforts. As many know, sponsors are the entities tasked with overseeing the financial, academic, and operational performance of charter schools and holding them accountable when necessary. Sponsors underwent a new and rigorous evaluation, the results of which came out in October. Five sponsors were rated Effective; thirty-nine were rated Ineffective; and twenty-one were rated Poor, the very bottom category. No sponsors received the state’s top rating, Exemplary. Sponsors rated Poor are no longer allowed to sponsor schools, while Ineffective sponsors face consequences as well, albeit less severe ones. The sponsor evaluation system is not perfect and needs some tweaks—among them, making compliance less burdensome and ensuring that student growth comprises a larger portion of the academic grade. But given Ohio’s historic problems with loose vetting of schools, sponsor hopping, and an overall poor track record, the new evaluations—and the sanctions tied to ratings—are a necessary step on Ohio’s road to charter improvement.
  5. High school graduation debate: Ohio is phasing in tougher graduation standards in the form of more challenging end-of-course exams, starting with the Class of 2018. (Goodbye, OGT!) An analysis by the Ohio Department of Education indicates that as many as one third of current high school juniors are not on track to meet those requirements and may not receive a diploma at the end of their senior year. This realization triggered an outcry among school officials clamoring for a lowering of the new requirements. It fell to the State Board of Education in its final meeting of the year to listen to testimony, discuss, and decide whether any changes would be made. In the end, the board decided to authorize a workgroup to study the issue and report back in four months’ time. Speaking of testimony, we have an idea that could ease some of the tensions: Check it out here.
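
As promised in story 2, here is a back-of-the-envelope sketch of the ECOT clawback arithmetic. The per-pupil figure below is implied by the reported numbers, not quoted from the audit, so treat it as an illustration only:

```python
# Back-of-the-envelope sketch of the ECOT clawback arithmetic.
# The per-pupil amount is inferred from the reported figures,
# not quoted from the audit itself.

funded_students = 15000      # reported as "over 15,000" funded students
documented_students = 6312   # full-time students the attendance audit could verify
reported_clawback = 60_000_000

unverified = funded_students - documented_students   # 8,688 students
implied_per_pupil = reported_clawback / unverified   # roughly $6,900 per pupil

print(f"Unverified enrollment: {unverified:,}")
print(f"Implied state funding per pupil: ${implied_per_pupil:,.0f}")
```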

If there is a theme to this year’s top education stories, it has to be increased expectations and accountability among all actors in Ohio’s K-12 education sector. We believe this is good news for Buckeye students and parents. State policymakers must stay the course and keep student success—genuine success that signals true college or career readiness—as the goal, even though holding firm will inevitably cause short-term angst. If high expectations remain, 2016 could well be the year that future analysts remember as the start of Ohio’s education renaissance.


In late 2016, we at the Ohio Gadfly asked for your predictions on the most important education issues of 2017. Here were your prognostications, along with—as you might expect from us at the Gadfly—commentary on how we hope these debates will unfold in the year to come.

Number 5: School accountability

It’s no surprise to see school accountability on our readers’ list of big issues for 2017. In the coming year, Ohio will submit a revised plan for accountability under the new Every Student Succeeds Act. Fortunately, the law doesn’t require the Buckeye State to undertake a major overhaul of its accountability policies. Ohio can and should stay the course on its key policies (with minor adjustments; see number 2 below). For instance, policymakers should maintain the use of a performance index and student growth (value-added) measures; they should also preserve a transparent A-F grading system. As Ohio’s ESSA plan is reviewed and debated, policymakers must ensure that accountability policies uphold high expectations for all pupils and offer clear information on school quality.

Number 4: E-schools

With over 30,000 Ohio pupils attending virtual charter schools, the Buckeye State has one of the largest e-school sectors in the nation. Online learning can be a valuable alternative for many students: Some may need the flexibility and safety that e-schools afford, while others might find their own home to be a more productive learning environment than a crowded or disrupted classroom. Nevertheless, full-time e-schooling appears to be falling short of its potential. Rigorous research has found that the average student loses academic ground when he enrolls in a virtual school, regardless of the academic level at which he started. Meanwhile, Ohio’s largest e-school has been embroiled in a lawsuit with the state over how it verifies student attendance. So what’s next for e-schools? It’s hard to say for certain, but even their harshest critics are likely to admit that online learning is here to stay. This means that policymakers need to focus on creating the conditions that can help all of Ohio’s online learners thrive. (See here and here for some policy ideas.) Moving forward, it will be important to heed the advice of Michael Horn, who writes in Forbes: “Harnessing their [e-schools’] benefits while reigning in their downsides is critical.”

Number 3: The new federal administration

On January 20th, Donald Trump will be inaugurated as the forty-fifth president of the United States. As has been widely reported in the media, Trump has nominated Betsy DeVos to be U.S. Secretary of Education. What is clear is that DeVos is a staunch supporter of school choice, including vouchers and charter schools, and that Trump voiced his own support for choice on the campaign trail. But how their leadership will affect Ohio’s education policies is much less clear. The Buckeye State is already home to several robust choice options (including both charters and vouchers) and recently received a large federal grant to support the replication of high-quality charters. It also remains far from clear how a federal private-school choice program would work, or whether it would even be desirable. As analyst Joy Pullman of the conservative web magazine The Federalist recently told Time, “If DeVos and Trump love school choice, and the children it benefits, they will keep the federal government far, far away from them.” For those interested in what the new administration might do in the area of choice, tune in to Fordham’s January 18th event, “A New Federal Push on Private School Choice? Three Options to Consider.”

Number 2: Every Student Succeeds Act (ESSA)

As noted above, Ohio has a strong school accountability framework, and the bulk of it will likely remain in place under ESSA. Still, policymakers should use the ESSA state plan to make an important adjustment: They need to place more weight on student growth (value-added), a more poverty-neutral measure, in the overall school-grading formula. (Overall A-F grades will come online in 2017-18, though they are already used for sponsor evaluations.) Under the current framework, the state will label almost all high-poverty schools as failures—Ds and Fs—due to the overemphasis on measures correlated with demographics, such as proficiency. Regrettably, this type of system fails to distinguish high-performing, high-poverty schools from true failures. This is unhelpful—even misleading—to local leaders and families who rely on report card grades to make decisions. It’s also unfair to the terrific educators at Ohio’s finest high-poverty schools who, in all likelihood, will find themselves stuck with low ratings. A few other states, such as Arkansas, Colorado, Idaho, and Oregon, have implemented summative ratings that place at least half the weight on student growth measures. Ohio should follow their lead, and the ESSA plan would be the perfect opportunity to recalibrate report cards and ensure a proper weighting on growth.
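
To see why the weighting matters, consider a minimal sketch with hypothetical component scores on a 0-100 scale. Neither the weights nor the scores below are Ohio’s actual report card formula; they simply show how a high-growth, high-poverty school’s overall grade swings with the weight placed on growth:

```python
# A minimal sketch of why the weight on student growth matters.
# The weights and scores are hypothetical, not Ohio's actual formula.

def overall(proficiency, growth, growth_weight):
    """Weighted combination of a proficiency score and a growth score (0-100)."""
    return growth_weight * growth + (1 - growth_weight) * proficiency

# A high-poverty school with low proficiency but strong growth:
proficiency, growth = 35, 85

print(overall(proficiency, growth, growth_weight=0.2))  # 45.0 -- reads as a failing school
print(overall(proficiency, growth, growth_weight=0.5))  # 60.0 -- a very different signal
```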

Number 1: Charters and choice

According to Ohio Gadfly readers, charters and choice will be the hottest topic in 2017. As these matters take center stage—if they aren’t already there—it will be essential to focus on quality and results. If public charter and private schools of choice are to flourish over the long run, they need to offer students and families a higher-quality experience than the conventional options. And to win the confidence and trust of policymakers, taxpayers, and the general public alike, they must consistently demonstrate positive pupil outcomes. Some Buckeye schools of choice have proven themselves, but others are mediocre or worse. As Ohio makes its long overdue turn to quality, policymakers will need to uphold accountability, including the House Bill 2 charter reforms, while also supporting the growth and replication of high-performing schools. If they can achieve these goals, we should see the promise of choice begin to be fulfilled in 2017.

* * *

Like our erudite readers, we anticipate that these five issues will dominate Ohio’s policy discussion in 2017. But if last year taught us anything, it’s to prepare for the unexpected! We look forward to keeping you posted on these issues and more in the New Year.


NOTE: The State Board of Education of Ohio is today debating whether to change graduation requirements for the Class of 2018 and beyond. Below are the written remarks that Chad Aldis gave before the board today.

Thank you, President Gunlock and state board members, for allowing me to offer public comment today.

My name is Chad Aldis. I am the vice president for Ohio policy and advocacy at the Thomas B. Fordham Institute, an education-oriented nonprofit focused on research, analysis, and policy advocacy with offices in Columbus, Dayton, and Washington, D.C.

High school diplomas are supposed to signal whether a young person possesses a certain set of knowledge and skills. To its credit, Ohio is phasing in new graduation standards that will do just that by better matching the expectations of post-secondary institutions, employers, and our armed forces. The new standards ask our young people to demonstrate readiness by passing end-of-course exams (EOCs), achieving a remediation-free ACT or SAT score, or earning an industry credential.

After years of low graduation standards, Ohio’s new requirements are a major step in the right direction. We need to set expectations high for the young men and women who will become our business and civic leaders; scientists and engineers; teachers and law enforcement officers; and leaders in many other professions.

I’m not here to say that the decision on graduation standards is an easy one. No one should withhold a diploma from a deserving student. But at the same time, the state shouldn’t award meaningless credentials either. As you debate these issues, I would like to offer two suggestions.

First, I urge the board to exercise patience and not make a rash decision to adjust graduation standards. Over the past few years, we as a state have done much work to signal to families, taxpayers, and students that Ohio is raising expectations for all students. Backtracking prematurely on graduation standards would send the exact opposite message.

Let’s also keep in mind that many students are stepping up to meet these higher expectations. According to last month’s presentation by the Ohio Department of Education, 65 percent of the class of 2018 is on track to meet the EOC requirements. This doesn’t even include students who will graduate via the career and technical pathway or who will earn the necessary EOC points after retakes. At the very least, we should see what the data look like after this school year before deciding whether changes are absolutely needed.

A bigger question is whether lowering the number of EOC points necessary to graduate is the right way to address the problem. While it’s one of the few public policy solutions available to you as the State Board, it’s a rather blunt instrument that could reduce the incentive to get every student ready for success after high school. Moreover, once lowered, the bar will be incredibly difficult to raise again.

Over the longer term, it might make more sense for state leaders to work towards a multi-tiered approach to awarding diplomas. This would help to ensure that all hard-working students receive the credential they need to take their next step in life, while also creating an incentive structure that encourages our young men and women to aim for higher goals. A tiered approach would build on the honors diploma that Ohio already has in place, and could work like this.  

At a base level, Ohio could create a standard-issue diploma—offered by either the state or local school districts—signifying that pupils have met their coursework requirements but demonstrated only basic-level skills on exams. One step up could be a diploma indicating college and career readiness. To earn this credential, students would need to meet rigorous benchmarks on end-of-course exams (probably 18 to 21 points) or college admissions exams, or complete a demanding industry certification. This diploma would line up closely with Ohio’s new graduation requirements—perhaps even be a little more challenging. At a third level, the state could award a prestigious diploma geared to the expectations of our most selective colleges and universities. To enhance their value and offer incentives to students, the state could even tie merit scholarships to the top two diplomas. Through continued transparency, school districts would retain an incentive to get as many students as possible to the college-and-career-ready and honors diplomas.

A tiered approach to awarding diplomas recognizes that different students leave high school with a different set of knowledge and skills. But one might ask whether it would lower expectations for certain pupils. In my view, it would not: Every student in Ohio would have the opportunity—and hopefully the incentive—to aim for a diploma that is aligned with her long-term goals. On the other hand, lowering the number of points required on EOC exams would lower expectations for all Ohio students.

For some young people, attaining the standard Ohio diploma will be something they and their families celebrate and cheer. It will also open doors to employment and post-secondary education. For others, meeting the demands of the prestigious third-tier diploma will be a source of tremendous pride and will put them on the pathway to leadership in our highest-need career fields.

Ohio is taking an important step forward in raising its graduation standards. I encourage the board to exercise patience and not rush into a decision. At the same time, let’s explore a different approach to awarding diplomas—one that can incentivize students from across the entire academic spectrum to reach their full potential.  

Thank you for the opportunity to offer public comment.


Back in 2012, the National Association of Charter School Authorizers (NACSA) began evaluating and ranking state charter laws based on eight policies it considers “cornerstones of charter school excellence.” These policies—quite reasonable in our view—are based on the principles of access, autonomy, and accountability and require each state to:

  1. Have at least two high-quality authorizers, one of which is an alternative to the local district
  2. Endorse national professional authorizer standards
  3. Evaluate authorizers regularly or as needed
  4. Sanction authorizers that do not meet professional standards or that oversee persistently failing schools
  5. Require authorizers to report annually and publicly on the academic performance of each school they oversee
  6. Require authorizers to maintain charter contracts with academic performance expectations, and encourage high-performers to replicate
  7. Maintain strong renewal standards that permit authorizers to hold schools accountable for failing to meet academic expectations
  8. Close charter schools that perform below an established threshold of academic performance

In its latest annual state policy analysis, NACSA notes a few key changes from last year. Michigan, for instance, gets props for establishing default closure for schools that perform beneath a minimum threshold. Missouri smoothed the way for high-performing charters to replicate and now mandates annual charter school performance reports. Kudos also go out to Washington for the advocacy campaign that restored the state’s charter law, and to New York’s courts for upholding authorizers’ right to implement “a strong standard of renewal.”

As for rankings, the results are largely similar to the previous year’s report. The forty-four states with charter laws were evaluated based on the eight cornerstone policies, and could earn up to thirty-three points on NACSA’s rubric. The top three states—Indiana, Nevada, and Washington—tie for first place with perfect scores; Ohio and Alabama follow with thirty-two and thirty-one points, respectively.

The report notes that while some states, like Nevada, have risen in the rankings after adopting NACSA’s recommended policies in response to concerns over sector quality, other states, like Alabama, have newer charter laws and thus don’t yet have evidence of implementation outcomes. And it’s fair to point out that some states with great charter school outcomes—like New York—get only middling scores from NACSA—proof that not all that matters can be easily measured.

The lowest-ranking states—Oregon, Wyoming, Alaska, Maryland, Virginia, and Kansas, in that order—earn between five and zero points, with the last-place Sunflower State the only one to receive a glaring goose egg.

The most informative part of the report, by far, is the set of individual state profiles and their detailed breakdown of each state’s status. These profiles don’t just show points earned, overall score, and ranking; they also include a description of the state’s charter landscape, a comparison between last year’s and this year’s scores, and specific recommendations to improve the quality of charter school oversight in that particular state.

Overall, NACSA’s latest report is a useful analysis that both identifies problems and offers solutions. Despite their focus on improving charter law, however, the authors are careful to admit that good policy is only part of the equation—committed advocates and rigorous implementation are also vital.

SOURCE: “On the Road to Great Charter Schools: State Policy Analysis 2016,” National Association of Charter School Authorizers (December 2016).


The Data Quality Campaign (DQC) recently released its analysis of state report cards—the annual summations of student and school achievement data that states are required to make available to the public under the Every Student Succeeds Act and, previously, No Child Left Behind—to determine whether they’re easy for parents and other community members to find and understand. The authors examined report cards in all fifty states and D.C. for content, presentation, accessibility, and clarity, using more than sixty data points in all.

Unsurprisingly, they found that most were too difficult to find and interpret. Nineteen states maintain labyrinthine department of education websites that require three or more clicks to arrive at the report card after a simple Google search. Once found, report cards often feature confusing displays, organization, and jargon that make the information difficult to interpret. For example, across the fifty-one jurisdictions, the authors found more than five different terms referring to students from low-income families.

Over a dozen were also out of date. Only four state report cards contained all of the student performance data first required fifteen years ago under No Child Left Behind, and ten states’ latest assessment scores were from the 2012–13 or 2013–14 school year.

More specifically, twenty-three state report cards failed to include school quality measures other than test scores, such as attendance or graduation rates. Thirty-eight omitted student growth data, either because they don’t track them or don’t report them. And not a single one provided readers with school funding data.

On the bright side, some report cards, such as Ohio's or Washington, D.C.'s, were deemed to be of high quality and ought to serve as examples for other states. They provide valuable information with simple layouts and minimal text. And others, including Minnesota’s and Wisconsin’s, even contain interactive pages that allow users to compare data points and graphs.

Overall, however, DQC asserts that the information provided by most state report cards is insufficient and suffers from a lack of transparency. It encourages states to seize the opportunity provided by the Every Student Succeeds Act to design more accessible and useful annual report cards that give community members the information they need to make important decisions and improvements. As we at Fordham have argued, easily interpretable data are crucial for those working to enact reform, ensure accountability, and provide all students with the education they need to succeed.

SOURCE: “Show Me the Data: State Report Cards Must Answer Questions and Inform Action,” The Data Quality Campaign (December 2016).


Following the lead of our D.C. colleagues, we totted up the most-read articles posted on Ohio Gadfly Daily in 2016.

The Top Five editorial posts are a microcosm of the issues we address regularly in an effort to advance educational excellence in a very real way here in the Buckeye State:

1. House Bill 420: Opting out of accountability by Jamie Davies O’Leary (published January 25)

At the height of the pushback against Common Core-aligned testing in Ohio, HB 420 was born. It would have allowed schools and districts to exempt from certain accountability measures those students whose parents opted them out of taking standardized tests. We cautioned against the inadvertent deterrent effect on testing participation and the erosion of the state’s accountability system.

2. How will ESSA change Ohio’s school report cards? by Jessica Poiner (published June 13)

Ohio’s accountability and report card system was reasonably robust before the advent of the Every Student Succeeds Act (ESSA), but as we discussed in detail back in June, the myriad new reporting requirements would engender a number of changes for the Buckeye State to be in compliance with ESSA. Our point-by-point analysis is a must-read for anyone who wants to know what the state’s report card system will look like in the ESSA era.

3. ‘Elected’ school boards and the dangerous illusion of democracy by Aaron Churchill (published March 3)

Spurred by the suggestion that elected school boards are de facto “better” than appointed charter school boards, we dug into the data around that little slice of democracy known as the school board race and were not impressed.

4. The problem with graduation rate statistics by Aaron Churchill (published June 13)

The assertion that one school (or school type) should be judged more harshly because of its low graduation rate is missing the point, we argued in this piece. In fact, the way graduation rates are calculated in Ohio can mask important details about student mobility and its effects on both the “sending” and “receiving” schools’ graduation rates.

5. Where are Ohio’s teachers when we need them? by Jamie Davies O’Leary and Elaine Laux (published August 8)

Data released mid-year from the Civil Rights Data Collection (CRDC) indicated a huge problem with teacher absenteeism across the country. Ohio was no exception, and this dig into the data from our largest cities pointed out some troublesome truths.

* * *

We hope you will visit Ohio Gadfly Daily throughout 2017 as these issues and more take center stage again. 


We look at the pros and cons of Ohio's charter operator report cards, try to ease the growing pains of College Credit Plus, review a holiday grab-bag of education policy news stories, and more

Most Ohio Gadfly readers know that we typically offer in-depth commentary one topic at a time. This tendency assumes (pardon the holiday metaphor) that one huge present is preferred—like the Lexus tied up in a bow. We recognize that other folks might prefer a bundle of gifts. So, for those yearning for a little more diversity in their inbox, this one is for you. (No white elephants, we promise.)

A win on ESSA accountability

In late November, the U.S. Department of Education released its revised and final regulations on school accountability under the federal Every Student Succeeds Act (ESSA). In a victory for high achievers, the feds made it crystal clear that states are permitted to use a performance index—as Ohio has long done—as an indicator of student achievement. Regrettably (see here and here for why), the previous draft regulations would have likely forbidden performance indices and forced states to use proficiency rates instead. Now it’s full steam ahead on the performance index as Ohio drafts its ESSA state plan.

Information in the palm of your hand

Kudos to state leaders who are making Ohio’s report card data useful and accessible to policy wonks and the general public alike. A recent Data Quality Campaign (DQC) publication spotlights Ohio’s school report cards as exemplars for providing “data that is valuable to my community” and displaying clear information. The report also notes that Ohio’s database houses the large majority of the data elements DQC deems important for public review (fifteen of twenty-three). Meanwhile, the Ohio Department of Education last week launched a smartphone app with which users can receive updates and check out school report cards anywhere, anytime. Is your local school board member—or maybe your real estate agent—waxing poetic on how lovely the schools are? Now you can get the lowdown on the data and see for yourself. As President Reagan once said, “trust but verify.”

No thanks on the similar students measure

In the recent charter reform legislation, state lawmakers ordered the Ohio Department of Education to “conduct a study to evaluate the validity and usefulness of using the ‘similar students measure.’” In a report issued in late November, the Department concluded after said study that the measure was “neither valid nor useful” for use in Ohio’s accountability system. The measure, pushed by a charter advocacy group and ECOT, adjusts a school’s achievement rate depending on its demographics (for more, see here). One of the central problems, however, is that the measure would set lower proficiency expectations for disadvantaged children. As Chris Woolard, the Department’s accountability chief, told the Columbus Dispatch: “Our system right now has high expectations for all students. This [measure] violates that basic principle that we want all students to be able to succeed.”

Auditor Yost on inter-district open enrollment

Ohio Auditor of State Dave Yost recently released a report on the fiscal impact of inter-district open enrollment. The main takeaway: Districts should weigh the costs and benefits of accepting additional pupils via open enrollment. Under state law, districts are not obligated to accept open enrollees—though state funding follows students, offering districts a financial incentive to do so (no local dollars transfer, however). According to his cost-benefit calculations for four Northeast Ohio districts, one posted a net loss of $1,282 per incoming open enrollee, while another gained a whopping $4,563 per open enrollee. The fiscal impact, as the Auditor explains, depends in large part on capacity. When a district has “empty seats,” the cost of educating an open enrollee is minimal—teachers wouldn’t need to be hired, for example—but the district would gain the funding tied to the student. The reverse might be true when a district is at or near capacity: Marginal costs could exceed the benefit. The Auditor understands the finances of open enrollment, but this analyst at least wonders whether economic concerns could be used as an excuse by public schools not to accept all comers. (“Sorry, kid, we just don’t have the capacity.”) This raises a couple of questions: a) just how many districts in Ohio are at full capacity, including suburban ones that prohibit open enrollment altogether; and b) for districts without excess capacity—but facing increasing demand—should the state support expansions so they are not turning away students?
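
The Auditor’s cost-benefit logic reduces to a simple calculation: the state aid that follows the student minus the marginal cost of educating him or her. Here is a minimal sketch; the dollar figures are hypothetical placeholders (chosen only to echo the magnitudes above), not inputs from the report:

```python
# A minimal sketch of the open enrollment cost-benefit logic.
# All dollar figures are hypothetical, not taken from the Auditor's report.

def net_impact_per_enrollee(state_funding, marginal_cost):
    """Net fiscal impact of accepting one open enrollee:
    the funding that follows the student minus the cost of educating him or her."""
    return state_funding - marginal_cost

state_funding = 6000  # hypothetical per-pupil state aid that follows the student

# District with empty seats: little added cost (no new teachers to hire)
print(net_impact_per_enrollee(state_funding, marginal_cost=1400))  # 4600 (a gain)

# District at or near capacity: the extra student forces real new spending
print(net_impact_per_enrollee(state_funding, marginal_cost=7300))  # -1300 (a loss)
```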

AP scores of 2 = proficient?

In an amendment to Senate Bill 3, a deregulation bill that passed last week, state legislators added language that would deem an Advanced Placement (AP) score of 2 equivalent to proficiency on certain state end-of-course exams (EOCs). Ohio high schoolers may substitute AP test results for EOCs in the following content areas: US History, US Government, and science (substitutions are not allowed in math or English). This raises an eyebrow, because an AP score of 2 is typically considered mediocre—the second lowest on AP’s 1-5 scoring scale. It’s a score that colleges and universities won’t accept for course credit—a minimum 3 or 4 is required. In addition, an AP score of 3 is needed for schools to earn credit on Ohio’s Prepared for Success report card component. It may be true that an AP score of 2 is technically a closer equivalent to EOC proficiency than a 3 (AP tests are likely more difficult), but it does seem peculiar to call an unsatisfactory AP score “proficient.” Did the student demonstrate proficiency in the AP course? According to the test results, it’s not clear she did. Or maybe this predicament calls into question the notion that different standardized tests are so easily substitutable. 

We hope you enjoyed this package of ed news gifts. Stay tuned in the New Year as we continue to track these stories and much more!


One of the big Ohio education stories of 2016 was the growing popularity of College Credit Plus (CCP), a program that provides students three ways to earn college credit from public or participating private colleges: by taking a course on a university campus; at the student’s high school where it’s taught by a credentialed teacher; or online. Many students and families have found that the program saves them time and money and provides valuable experience. For families with gifted or advanced students, it is a chance for acceleration even as early as seventh grade; for students in high-poverty rural and urban areas, it may be the only way to take high-level courses in basic subjects, let alone electives.

Before registering, students in grades 7-12 must be admitted to the college based on their readiness in each subject they plan to take a class in—a decision made by each higher education institution and determined by GPA, end-of-course (EOC) exam scores, and other available data. Once admitted, students can register for any course the school offers, except for those that are considered remedial or religious. (The latter restriction is presumably intended to keep church and state separate while a child is enrolled in a public school.)

Most of the media coverage of the growth of College Credit Plus has focused on its cost, but in October, the state released an overview of preliminary information gathered during the first year (and part of the second year) of the program. Here are a few of the most interesting data points:  

  • During the 2015-16 school year, over 52,000 students took classes from 23 community colleges, 13 universities, and 35 private institutions of higher education in Ohio.
  • Participation varies by student race, with African-American and Hispanic pupils underrepresented when compared with their share of the grade 7-12 population.
  • Participation also varies by income level, though the data aren’t clear enough to draw conclusions (the economic status of 45 percent of CCP students is listed as “unknown”).
  • Unsurprisingly, most students took courses in the five main core content areas: English (24 percent), social sciences (18 percent), math (13 percent), science (13 percent), and arts and humanities (11 percent).
  • Just over 90 percent of courses taken by CCP students resulted in credits earned; 3 percent resulted in a failing grade, 2 percent in a withdrawal, and 4 percent in no grade reported.
  • The overwhelming majority of CCP courses were taken on high school campuses and most utilized a high school teacher. Student GPAs did not vary significantly based on location.

The preliminary data suggest a few areas that need attention as Ohio works to ensure that CCP is functioning as intended. 

Pay close attention to passage rates

Eyebrows should rise at the finding that over 90 percent of courses taken by CCP students resulted in credits earned. Other data points—such as state test scores and ACT scores—show a troubling lack of proficiency that one might expect would translate into a smaller percentage of students earning credit. Similarly, average scores on Advanced Placement (AP) exams indicate that far fewer than 90 percent of AP courses taken result in college credit earned, or even in what the College Board terms a “qualifying score” (3 and up). Why, then, are CCP’s passing percentages so high?

One reason may be that CCP’s eligibility requirements permit only college-ready students to enroll. Because enrollment is restricted to students who have demonstrated readiness—usually through widely accepted measures like the ACT and Compass—the requirement may be acting as an effective gatekeeper. But that gate could also be suspect (i.e., the bar may be so low as to be meaningless), considering how many of the participating post-secondary institutions are “open enrollment” campuses. It’s also possible that CCP courses—most likely those taught on a high school campus by a secondary instructor—just aren’t rigorous enough. Passing these courses is determined by the teacher, not by an external review such as Advanced Placement uses, and we live in an era of grade inflation.

How to ensure that courses taught on high school campuses are rigorous?

The majority of CCP courses (nearly 61 percent) were taught on a high school campus by a secondary instructor—an educator who is already teaching at the high school but has earned additional credentials. Although the state data showed that student GPAs varied only slightly based on class location, that’s no reliable gauge of course rigor. The state’s report notes that “monitoring quality and participation when the course is taken on the high school campus” is an “item to discuss.” Policymakers should talk to representatives from K-12 and higher education for ideas on how to maintain rigor, including how best to train secondary teachers to teach post-secondary classes and to evaluate student work by post-secondary standards rather than K-12 criteria.

Maintain entrance requirements for students

Although some folks have bemoaned the challenges that students face in qualifying for CCP, the college-readiness restriction is critical for two reasons. First, it ensures that only students who are academically prepared for the rigors of college are able to participate, a requirement that, if forcefully and dutifully applied, should prevent students from the double-whammy of a failing grade on both their high school and college transcripts. Second, students who are ineligible for CCP one year can still become eligible the following year if they are able to demonstrate that they have achieved college readiness; this could provide students with more motivation to work hard to reach the bar. A college freshman who isn’t college-ready, on the other hand, has no options except expensive, non-credit-bearing remedial courses. Still, we must keep in mind the softness of a “college readiness” criterion when determined and applied by an open-access college.

Keep CCP and co-requisite remediation separate

A bill now before the General Assembly (House Bill 474) would create a “CCP Co-requisite Remediation Pilot Program.” It targets high school seniors in need of remediation in math and English, allowing them to “simultaneously enroll in a remedial course and an introductory college course in the same subject area, or enroll in an introductory college course that incorporates remedial curriculum.” Whatever the merits (and weaknesses) of co-requisite remediation, it’s illogical to push college-level remediation into high school via a program that is, by law, intended only for college-ready students. Remediation on college campuses occurs because students didn’t learn what they needed to learn in high school. Why, then, would the state allow students who are still in high school—and thus still have a chance to prepare for college prior to enrollment—to take college-level work for which they are unprepared? Why not encourage high schools to do a better job preparing their students for college, rather than shoving those same students further along the path? The risks for students, whose grades in CCP courses appear on both their high school and college transcripts, are just too high.

Take a deep dive into the underrepresentation of minority students in CCP

Participation gaps aren’t only a CCP problem; AP courses face a similar issue. Although a ton of analysis has been done on AP participation gaps, it’s more difficult to diagnose the cause of CCP’s gaps (mostly affecting African-American students) based solely on the information released by the state. Analysts should gather more information and investigate what could be causing the discrepancy. We can assume—based on state test scores—that too few minority students are prepared to qualify as college ready while still in high school. But there could be additional factors at play. Is more and better outreach needed in particular schools? Are some schools subtly discouraging participation, or less likely to facilitate the high-school-located classes intended to minimize transportation challenges? Answers to these questions could begin to narrow participation gaps.

***

CCP is new and we should expect glitches and growing pains. There are no easy solutions to all the problems that it faces, but the initial uptake by students suggests that College Credit Plus is worth continued attention and improvement.   


Ohio’s charter school reform discussions have mostly focused on sponsors—the entities responsible for providing charter school oversight. Overlooked are the important changes in Ohio’s charter reform law (House Bill 2) around operators. Operators (aka management companies) are often the entities responsible for running the day-to-day functions of charter schools; some of the responsibilities they oversee include selecting curriculum, hiring and firing school leaders and teachers, managing facilities, providing special education services, and more. (To get a sense of the extent of operator responsibilities, read through one of their contracts.)

Extra sunshine on operators has been especially needed in a climate like Ohio’s, where operators historically have wielded significant political influence and power not only with elected officials but even over governing boards. For instance, one utterly backwards provision pre-HB 2 allowed operators to essentially fire a charter’s governing board (with sponsor approval) instead of the other way around—what NACSA President Greg Richmond referred to as the “most breathtaking abuse in the nation” in charter school policy.  

HB 2 installed much-needed changes on this front, barring the most egregious abuses of power and greatly increasing operator transparency. The legislation required that contracts between charter boards and operators be posted on the Ohio Department of Education (ODE) website; that operators collecting more than 20 percent of a school’s funding provide a detailed statement of expenditures; that ODE post a simple directory of operators so the public could know which operators were affiliated with which charter schools—information surprisingly difficult to come by outside of inside charter circles; and that ODE publish an annual academic performance report for operators. These new provisions were at once somewhat obvious, yet revolutionary. Such is the Ohio charter story. 

The new performance reports are out, and that’s a great step forward for Ohio, where public information on operators has historically been lacking. But the reports are disappointing in their lack of depth and breadth. The image below shows one report in its entirety; each of the forty-nine rated operators received a similar half-page report delineating academic performance, attendance, student demographics, and staffing data.

Here are a few observations about the reports and where they could be improved.

  • Operators are not matched with their affiliated schools. This information is available by viewing a separate spreadsheet on ODE’s website, but each management company’s schools should be listed within the report card itself to provide context. Readers should not have to search through multiple spreadsheets and documents to piece this information together.  
  • There are no data on individual schools. Along with the charter schools run by each operator, the performance report should provide key report card ratings for each school. What good is a report card that lists a score for “Center for School Improvement, LLC,” an operator with no known website, without knowing which schools it oversees or how they each perform in key areas like performance index and growth? District report cards contain links to their schools’ ratings; so should operator reports.
  • Academic ratings don’t effectively differentiate quality because almost every operator received a low rating. Nine operators received a “0” academic rating; seventeen received a “1”; and five received a “2.” It appears that the scores (1-5, correlating with an A-F scale) were calculated in the same manner as academic ratings for sponsors. (The report does not include a methodology for calculating the operators’ academic rating.) If so, student growth counted for just 20 percent of the overall score. That’s a problem, because the other indicators composing the score are highly correlated with students’ socioeconomic backgrounds: overall low ratings among charter operators are primarily a function of the fact that they serve so many at-risk students. The same would be true for traditional urban districts were the state to rate them in the same manner. The system fails to meaningfully distinguish between some of Ohio’s best operators—networks like United Schools Network that take poor students who are behind grade level and move them to performing above the state average—and some of its lackluster ones. That needs fixing; the sketch following this list shows how heavily the growth weight matters.
  • There is no distinction between for profit and non-profit charter management companies. Charter opponents tend to speak about the charter sector in broad brush strokes. They often generalize about the “privatized” or “corporate-run” charter industry while failing to acknowledge that there are a fair number of schools in Ohio that contract with non-profit management organizations. Many choice critics seem to genuinely misunderstand the distinction or be unaware of which entities are which. A designation of non-profit versus for-profit status on each operator’s report cards could help improve public understanding and either prove or disprove people’s preconceived notions.
  • The reports focus heavily on inputs, offering a plethora of data on teachers and staff while providing hardly anything about actual performance. Readers can see the number of music teachers staffing an operator’s schools but have no idea which schools those are or how they perform. Student enrollment numbers aren’t even provided—they should be.
  • It isn’t clear how operator is defined. Fifty-three operators are listed in ODE’s operator database elsewhere, but only forty-nine received a report card. Why? The recent competitive facilities grant award for top-performing charter schools listed some eligible operators (defined earlier this year by ODE and the Ohio Facilities Construction Commission as such), yet not all of those operators received report cards. It’s unclear how the state is defining what constitutes an “operator” or why this definition would differ from the facilities grant eligibility list or from its own master spreadsheet.
  • Expenditures per pupil are listed but lack context. The figures range from $1,899 to $10,880 across operators, with no accompanying information on overall revenues and expenditures and no school-by-school breakdown. To be fair, HB 2 required any management company earning more than a 20 percent fee from a school’s annual gross revenues to provide a more detailed financial accounting, which includes information on salaries, wages, benefits, utilities, buildings, equipment, and more. But to the best of my knowledge, this information isn’t yet available publicly—at least not in a way that is easy to find and navigate. That should change.
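
To see why the growth weight matters so much, consider a minimal sketch of a weighted composite rating. The component scores and the alternative 50 percent weight are hypothetical—this is an illustration, not ODE’s actual methodology:

    def composite_rating(achievement, growth, growth_weight):
        # Weighted blend of an achievement score and a growth score, both 0-100.
        return growth_weight * growth + (1 - growth_weight) * achievement

    # A high-growth operator serving mostly at-risk students: strong growth,
    # weak raw achievement. Its composite rises sharply when growth counts for half.
    print(composite_rating(achievement=30, growth=90, growth_weight=0.2))  # 42.0
    print(composite_rating(achievement=30, growth=90, growth_weight=0.5))  # 60.0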

Ohio evaluates sponsors in significant part based on their schools’ performance, and these evaluations include detailed information about schools’ academic results as well as compliance with various rules and laws. For operators, however—entities that are actually running schools day to day, and in some instances collecting more than 90 percent of schools’ public funds—there is very little information.

HB 2’s operator transparency provisions are necessary to provide valuable information to governing boards, sponsors, parents, taxpayers, and the public more broadly. Taken together, the newly available information on operators is a step forward for Ohio’s charter sector, and ODE deserves credit for creating the first operator performance reports and doing so on time. However, there is still much room for improvement. In the interest of transparency, Ohio should move toward a much more robust and detailed 2.0 version.


One in seven Ohio adults ages 18-24 lacks a high school diploma and faces bleak prospects of prospering in our economy. According to the U.S. Census Bureau, dropouts earn $10,000 less each year than the average high school graduate, are almost twice as likely to be unemployed, and typically earn an average annual income of $20,241—just above the poverty line for a family of three in Ohio. Dropouts also drag down the Ohio economy; over the course of their lives, they consume an estimated $292,000 in public aid beyond what they pay in taxes.

To mitigate the number and cost of dropouts, Ohio has permitted the creation of ninety-four dropout prevention and recovery schools. Collectively, these schools enrolled sixteen thousand students in the 2015-16 year. They serve at-risk and re-enrolling students—pupils who previously dropped out but are now re-entering the education system—with the aim of graduating students who might otherwise slip through the cracks.

To hold these schools accountable for successfully educating at-risk students, Ohio has created an alternative report card. It assigns an overall rating of “Exceeds,” “Meets,” or “Does Not Meet” standards based on the school’s state assessment passage rate, graduation rate, year-to-year progress, and achievement gaps between student groups. Prior to 2012-13, dropout-recovery schools were rated on the same report-card indicators as all public schools.

Whether this new alternative accountability framework appropriately captures the success of these schools is up for debate. This past summer, a committee of legislators and civic leaders debated the definition of quality, heard from community members and school leaders, and reviewed the components of the current report card. The committee failed to recommend any changes (it had to meet a legislative deadline of August 1), though a new committee convened in November to continue this important work. (Disclosure: Fordham’s Chad Aldis has been named to this newly reconstituted committee.)  

Should the committee choose to maintain the state’s recently created alternative report card, some adjustments are needed to ensure that high-performing dropout-recovery schools are distinguished from schools that continually fail to improve the learning of at-risk students.

Attention should be paid to one component of the accountability rating in particular—the progress measure, which, generally speaking, gauges whether dropout-recovery students are making at least one year of academic growth. A very large majority of dropout-recovery schools appears to be falling short of growth expectations. In 2015-16, just seven schools exceeded the progress standards, eighteen met them, and an overwhelming majority—sixty-nine—failed to meet the state’s standard for academic progress. In the previous year (2014-15), only one school exceeded standards, thirty-three met them, and fifty-nine fell short. Given these results, the committee should review the measure’s methodology and confirm that the norm-referenced group used to calculate student growth on NWEA’s Measures of Academic Progress (MAP) test is appropriate for dropout-recovery students. Ohio law requires dropout-recovery schools to use a norm-referenced exam, not state exams, to gauge student growth over time. Ensuring that we accurately and fairly capture student progress, especially for pupils who may be years behind, should be a high priority.
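
To illustrate what such a norm-referenced growth check involves, here is a miniature sketch. The norm values below are invented for illustration; NWEA publishes the actual growth norms that real calculations would use:

    # Hypothetical fall-to-spring growth norms (RIT points) by grade.
    HYPOTHETICAL_NORM_GROWTH = {6: 6.5, 7: 5.5, 8: 4.8}

    def met_growth_target(grade, fall_score, spring_score):
        # Did the student gain at least the normative expectation for the grade?
        return (spring_score - fall_score) >= HYPOTHETICAL_NORM_GROWTH[grade]

    print(met_growth_target(8, fall_score=215, spring_score=221))  # True (gained 6)
    print(met_growth_target(8, fall_score=215, spring_score=218))  # False (gained 3)

Whether that normative expectation is fair to students who arrive years behind is exactly the question the committee should probe.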

Additionally, the way Ohio evaluates graduation rates should ensure that schools are not punished for taking in students who “drop in” years after their expected four-year graduation window has passed. (Four-year graduation rates, along with extended rates—up to eight years—are included in the alternative accountability system.)

Many also take issue with dropout-recovery schools being measured against the adjusted cohort graduation rate, since these schools bear the consequences of a student’s previous school passing him or her from one grade to the next without adequate academic progress. A student’s transcript may report that she is in the eighth grade when she has really only mastered reading and math at the sixth-grade level. Yet dropout-recovery schools are held accountable for graduating that student in four years. Testimony from school leaders during the summer’s study committee emphasized the time crunch schools feel as soon as students who are academically far behind step through their doors. School leaders should not face perverse incentives to reject students or rush them through the curricula because they are on the hook to meet four- or five-year graduation rates.

Finally, the performance standards should be adjusted to reflect high yet attainable expectations. Currently, for dropout-recovery schools to earn a “Meets” graduation-rate rating, they must graduate just 8 percent of all eligible students in four years. This standard is far lower than that of traditional schools, which must graduate 84 percent of seniors to earn a C and 93 percent to earn an A on the four-year graduation indicator. The dropout-recovery standard may have been set so low to account for the challenges these schools face in meeting typical adjusted-cohort timelines. Moving forward, the committee should evaluate the performance standards alongside their respective measures to ensure that both are aligned and appropriately rigorous. Should the committee decide to phase in higher standards, it should also consider what supports—like re-engagement programs—could help schools meet the new targets.

As the new dropout-recovery committee works into 2017 to define what quality means for these schools, it should assess Ohio’s current system and explore ways to better distinguish high-performing from low-performing dropout-recovery schools. The quality of education for at-risk students—and by extension, Ohio’s long-term economic health—is at stake.


From the latest issue of the journal Economics of Education Review comes a fascinating paper in which author Metin Akyol creates mathematical models that simulate the effects of private school vouchers on the overall education system. It is not a study of an actual voucher program, but instead a thought experiment meant to test whether both universal and targeted voucher programs can increase the efficiency of the education system as a whole. As strange as this may seem to lay readers, there is in fact a long history of such econometric analyses—and their findings are often worthy of consideration.

Akyol’s complex model can’t be fully explained in this short review, but some features are worth noting. It incorporates the findings of empirical voucher studies to increase its reliability. It simplifies the real world in an effort to find the signal in the noise. Every household therefore has only one child, and the hypothetical school district has neither magnet schools nor charters. And one of its defining assumptions is that more efficient public school spending is an effective proxy for increased educational quality. In other words, it presumes that the money saved by greater efficiency can be reinvested in ways that improve outcomes.

Regardless of how one feels about all this, the model ends up producing outcomes that are very similar to empirical findings regarding actual programs. In one important example, the positive effects on voucher-eligible students who do not opt to leave their district school (found empirically by David Figlio in Ohio) are predicted in Akyol’s targeted-voucher model. Public schools in the model are observed to “up their academic game” to retain students when voucher competition is introduced. Additionally, the model predicts that students lowest on the income scale will be least likely to use vouchers, even in a model where vouchers are universally available and not means-tested. This stands to reason, considering that real-world vouchers often fall short of full private school tuition. (To some extent, it was also borne out in Figlio’s research.) 
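
To see the intuition behind that last prediction, consider a toy simulation—ours, not Akyol’s actual model—in which each family will pay an assumed fixed share of its income toward tuition and uses the voucher only if that share covers the gap between tuition and the voucher:

    import random

    random.seed(1)

    TUITION, VOUCHER = 9000, 5000   # illustrative dollar amounts
    BUDGET_SHARE = 0.08             # assumed share of income a family will pay

    incomes = sorted(random.lognormvariate(10.8, 0.6) for _ in range(10000))

    def uses_voucher(income):
        # A family opts in only if its affordable contribution covers the tuition gap.
        return BUDGET_SHARE * income >= TUITION - VOUCHER

    for q in range(5):              # take-up by income quintile, poorest first
        group = incomes[q * 2000:(q + 1) * 2000]
        rate = sum(uses_voucher(y) for y in group) / len(group)
        print(f"quintile {q + 1}: {rate:.0%}")

Take-up rises steadily with income because the fixed tuition gap is a hurdle the lowest-income families cannot clear—the same mechanism at work in the model’s prediction.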

Also interesting is the difference in effects between a universal voucher program and a targeted one. Akyol ran models that replicate the prime goal of vouchers—making private schools affordable for more children—in two ways: by manipulating voucher availability and by simply changing the family income distribution. The results were not the same. The universal voucher model led to an observable decline in “peer group quality” for those students at the lowest end of the income spectrum who did not take the vouchers. This decline in quality was also present, to a lesser extent, in the model where vouchers were targeted at low-income students. But it was absent in a model that gave high-ability students lower voucher amounts than their lower-ability peers. This appears to be the theoretical sweet spot that results in the most favorable overall outcome: more students were able to access private schools, and public schools felt the competition keenly enough to improve their academics for the students who remained.

None of this means we must redesign real-world voucher programs based on any one of the mathematical models presented in this paper. But to the extent that modeling can predict real-world outcomes, choice advocates and policymakers ought to consider the results, which can elucidate the potential benefits and challenges of particular voucher designs. If the incoming Trump/DeVos education department is going to prioritize vouchers as a means for improving education, Akyol’s mathematical models have at the very least led him to offer some sage advice: “…the outcomes of a voucher program hinge on its design.”

SOURCE: Metin Akyol, “Do educational vouchers reduce inequality and inefficiency in education?,” Economics of Education Review (December 2016).


We offer solutions to Ohio’s high school diploma dilemma and its teacher evaluation enigma, and show how KIPP Columbus provides a pathway to success for its students.

KIPP Columbus achieves extraordinary outcomes for its students, predominantly students in poverty and students of color—a fact worth celebrating by itself. In 2015-16, KIPP Columbus was in the top 5 percent of all schools (district and charter) in Ohio’s Urban Eight cities on student growth and among the very best (top 3 percent) in Columbus. But it’s not just KIPP’s academic data that are impressive. KIPP Columbus, led by Hannah Powell and a visionary board, has a rare knack for forging powerful partnerships at every turn—ones that strengthen KIPP students, their families, and the entire community near its campus. This year, KIPP launched an early learning center in partnership with the YMCA of Central Ohio to serve infants, toddlers, and preschool-aged youngsters. In a neighborhood lacking high-quality childcare and early learning opportunities, it’s an investment not just in KIPP students but in the community at large. KIPP Columbus also partners with the Boys and Girls Club of Columbus, Battelle Memorial Institute, and other community organizations.

This profile is about KIPP graduate Steve Antwi-Boasiako, an immigrant and first-generation college student now attending Vanderbilt University, whose entire family has been uplifted by the school. His story illustrates the depth of KIPP’s commitment to students’ long-term success. The “KIPP Through College” program tracks data on what colleges turn out to be a good fit for students (many of whom are first-generation college-goers), assists families with applications and financing information, and even partners with 83 colleges and universities nationally so that cohorts of KIPP students improve their odds of successful completion. In a world where just nine percent of low-income students attain a four-year college degree, KIPP students’ attainment rate (44 percent) is truly remarkable, and even ten percentage points higher than that of the population at large (34 percent).

Steve was an inspiration to younger students while attending KIPP Columbus; he now attends Vanderbilt University

We urge you to read Steve’s story. In it, KIPP Columbus’s mission to “prove what’s possible” truly comes to life. Indeed, his remarkable trajectory reminds us that when we set expectations high, students will not only meet them but often exceed them beyond our wildest hopes.


The Every Student Succeeds Act (ESSA) has put the future of teacher evaluations firmly in the hands of states. Ohio is now in full control of deciding how to develop and best implement its nascent system.

It should come as no surprise to folks in the Buckeye State that the Ohio Teacher Evaluation System (OTES) has significant room for improvement. Since its inception in 2009, approximately 90 percent of Ohio teachers have been rated in the top two categories and labeled “skilled” or “accomplished.” Unfortunately, there isn’t significant evidence that the system has impacted the quality of Ohio’s teacher workforce, perhaps because there is no statewide law that permits administrators to dismiss teachers based solely on evaluation ratings. Meanwhile, OTES also doesn’t appear to be delivering on the promise to aid teachers in improving their practice.

A quick glance at the ODE-provided template for the professional growth plan, which is used by all teachers except those who are rated ineffective or have below-average student growth, offers a clue as to why practice may not be improving. It is a one-page, fill-in-the-blank sheet. The performance evaluation rubric by which teachers’ observation ratings are determined doesn’t clearly differentiate between performance levels, offer examples of what each level looks like in practice, or outline possible sources of evidence for each indicator. In fact, in terms of providing teachers with actionable feedback, Ohio’s rubric looks downright insufficient compared to other frameworks like Charlotte Danielson’s Framework for Teaching.

Another problem with OTES is the way it incorporates student learning. Ohio law requires that all teacher evaluations include a student growth component, which consists of test results. For teachers with a valid grade- and subject-specific assessment, that means value-added measures. Unfortunately, only 20 percent of Ohio teachers are able to be measured using the state assessment either in whole or in part.[1] Another 14 percent receive growth scores from a separately administered vendor assessment that increases the testing burden placed upon students and schools. The student growth component for the remaining 66 percent is based on locally developed measures that tend to be both ineffective and unfair: shared attribution, which evaluates teachers based on test scores from subjects they don’t teach, and Student Learning Objectives (SLOs), which are extremely difficult to implement consistently and rigorously and often fail to effectively differentiate teacher performance. In short, the state hasn’t quite figured out how to fairly evaluate all teachers using student achievement data.

A meaningful overhaul of Ohio’s system should aim to solve four significant problems. First, it should address the current framework’s failure to fairly evaluate all teachers. Second, it should do a far better job of differentiating teacher performance. Third, it should provide actionable feedback to all teachers. And finally, it must positively impact the overall quality of the workforce. Crafting a system that does all this is easier said than done. Fortunately, there’s evidence that focusing solely on a rigorous classroom observation cycle, rather than student growth measures, could be the solution.   

In a recent piece for The 74, Matt Barnum examined research on teacher evaluation work in Chicago, including an analysis of a pilot system that focused solely on classroom observations and the system’s impact on the labor market. The analysis found that the first year of the pilot resulted in an 80 percent increase in the exit rate of the lowest-performing teachers; the teachers who replaced exiting educators proved to be higher performing than those who exited. Overall, the findings suggest that evaluation systems based solely on rigorous observations of teacher practice can impact the quality of the workforce. This type of system would also remedy the biggest problem with Ohio’s evaluation structure, which is that current student growth measures unfairly evaluate teachers in many subjects and grade levels.

The second and third problems with the current system—effectively differentiating teachers and offering better feedback—are also addressed by zeroing in on an improved observation cycle. In general, observations provide more detailed information about the complex job of teaching than a list of raw scores ever could. More information means more opportunities to pinpoint variances in performance, but only if the system uses a high-quality rubric and takes advantage of multiple perspectives by including outside observers and peer observers. Improving observer training and ensuring a mix of announced and unannounced observations is also important.

When it comes to offering better feedback, it’s widely acknowledged that teachers find evaluations most helpful when they're given actionable feedback on their practice. This type of feedback only comes from observation of practice. Plenty of other sectors understand this. Professional football teams prepare for their next opponent by studying game film. Players study their future opponents, but they also study their own performance from the previous game—the choices they made, what they could have done better, and what they need to continue doing. Teacher evaluations should offer the same opportunities. Teacher coaching, teacher collaboration (which research says can lead to student achievement gains), and peer reviews—all of which have been found to improve teacher practice—are only effective if they include rigorous observation of practice.

It’s true that assessment results are a form of feedback. But as a former teacher, I can attest that studying test results (value-added or otherwise) was hardly the most effective way for me to improve my pedagogy. I needed to know why my students did or didn’t do well, and that answer couldn’t be found on a data spreadsheet no matter how hard I looked. A far better use of my time—and the fastest way to make me a better teacher—would have been actionable feedback from observation of my practice, which, after all, is what most impacted my students’ test scores in the first place.

In summary, research shows that evaluation systems based solely on rigorous observations of teacher practice can impact the quality of the teacher workforce. Research also shows that improving teacher practice can be done through observations conducted by well-trained observers using high-quality frameworks and rubrics. Taken together, it seems that one way to improve Ohio’s teacher evaluation structure is to pilot a system that focuses solely on rigorous classroom observations. Stay tuned for a detailed explanation of what such a system could look like.


[1] The 20 percent comprises teachers whose scores are based entirely on value-added measures (6 percent) and teachers whose scores are partially based on value-added measures (14 percent).


As a form of credentialing, high school diplomas are supposed to signal whether a young person possesses a certain set of knowledge and skills. When meaningful, the diploma mutually benefits individuals who have obtained one—it helps them stand out from the crowd—and colleges or employers that must select from a pool of many candidates.

In recent years, however, Ohio’s high school diploma has been diluted to the point where its value has been rightly questioned. One of the central problems has been the state’s embarrassingly easy exit exams, the Ohio Graduation Tests (OGT). To rectify this situation, Ohio is phasing in new high school graduation requirements starting with the class of 2018. Under these new requirements, students must pass a series of seven end-of-course assessments in order to graduate high school, or meet alternative requirements such as attaining a remediation-free ACT score or earning an industry credential.

The end-of-course exams have proven tougher for students to pass than the OGT, leading to concerns that too many young people will soon be stranded without a diploma. One local superintendent called the situation an “apocalypse,” predicting that more than 30 percent of high school students in his district would fall short of the new standards. He wasn’t alone, as an estimated 200 superintendents and school board members recently voiced their concerns at a Statehouse rally. An analysis by the Ohio Department of Education suggests that statewide, almost one in three pupils from the class of 2018 aren’t on a sure track towards a diploma.

This has put the state in a bind. On the one hand, in an era of heightened standards, no one wants to backtrack and hand out meaningless credentials. On the other hand, policymakers are right to worry about leaving thousands of pupils without a diploma. In today’s economy, such students are likely to struggle to find employment and are unable to join the military.

Can Ohio move forward on high standards—even re-inflating the value of the diploma—without leaving young people behind? Yes, but it will take some rethinking about how the state awards its high school credentials.

The most reasonable alternative, also suggested by several prominent education analysts (including our own Checker Finn), is for Ohio to pursue a multi-tiered approach to awarding diplomas. This would help Ohio maintain high achievement standards in the face of pressure to lower them while also building an incentive structure that could push students to achieve at higher levels. Ohio already has an honors diploma for students who go above and beyond in their coursework—a good start. Yet the honors diploma neither relies on state assessment results nor is widely recognized as a measure of the accomplishments of Ohio’s highest achievers.

Here’s how a beefed-up, tiered system of awarding diplomas could work. At the base level, Ohio could create a standard-issue diploma signifying that pupils have persevered through thirteen years of school—a certificate of completion, more or less. These students would have met their core coursework requirements yet fallen short of the stringent benchmarks of Ohio’s end-of-course exams. Frankly, this is roughly where Ohio has been for the past decade or so with diplomas tied to the OGT (and perhaps also with its predecessor, the Ninth Grade Proficiency Test). One step up would be a college- and career-ready diploma indicating that students have demonstrated readiness, either by meeting rigorous academic targets on state exams or by completing a demanding industry certification. This lines up more closely with Ohio’s new graduation requirements. Finally, the state could award a third diploma—a certificate of exceptional accomplishment—for the most academically able students. Its benchmarks could be geared to the expectations of the state’s most selective colleges and universities, and it would be cause for celebration for students, parents, schools, and communities. The state could also tie this diploma to a merit-based college scholarship program.

A tiered approach would have several benefits. First, the state could maintain high expectations for all graduates, simply awarding different diplomas depending on whether pupils fell short of, reached, or considerably exceeded the end-of-course exam standards. Second, by issuing at least a basic-level diploma, the state could avoid the repercussions of potentially denying one in three students a diploma. Third, by awarding a diploma with distinction, the state may incentivize some pupils to accumulate more “human capital.” For instance, consider a junior who has already secured a college- and career-ready diploma. She may not feel motivated in her senior year—a case of “senioritis.” But with an honors diploma in play, and clear benefits for earning one, there is more reason to work hard. Fourth, the state could allow pupils meeting the criteria for the college- and career-ready diploma as juniors (or earlier) to graduate, and then offer them the funds saved by forgoing their senior year to defray the cost of college. Fifth, a tiered diploma would help young people as they enter the workforce. For college students, it could help in the competition for internships; likewise, a more meaningful diploma should aid those seeking full-time employment directly after high school.

Ohio’s old and outgoing high school diplomas didn’t signal much of anything to anyone. The requirements were ridiculously easy, practically everyone got a diploma, and the credential’s value suffered accordingly. Now the state is ratcheting up its graduation requirements: As State Superintendent Paolo DeMaria told the Columbus Dispatch, “This is all about giving greater meaning to a high school diploma.” State leaders should not back down on rigorous graduation standards simply to accommodate more diplomas. But neither should they be hard-headed. The best path forward for Ohio is to remake the diploma and reject the notion that there is only one way to earn it.


Italy has an achievement gap—one that may sound familiar to Americans. PISA scores show a marked gap between Italian students and those of other OECD countries in both math and reading. Digging into the data, Italian education officials found their own in-country gap: Students in the wealthier north perform far better than students in the poorer south. As a result of all of this, starting in 2010, schools in Southern Italy were offered an opportunity to participate in an extended learning time program known as The Quality and Merit Project (abbreviated PQM in Italian). A new study published in the journal Economics of Education Review looks at PQM’s math and reading intervention, which consisted of additional teaching time after school in four of the poorest—and lowest-performing—regions in the country.

A couple of things to note: The PQM intervention was focused not on improving PISA scores, but on improving scores on the typical tests taken by students in lower secondary school (equivalent to grades six to eight in the U.S.). The report does not enumerate which tests these students typically take, when, or how many, and the researchers are not attempting to connect the intervention to PISA scores. Neither should readers. The poor performance of Italian students on PISA simply shone a light on poor performance elsewhere and, perhaps more importantly, unlocked the funding (from the European Union’s Regional Development Fund) that paid teachers to implement an intervention aimed at closing the detected gap.[1] Participation in PQM was voluntary on the part of schools. That allowed researchers to match participating schools with similar schools that didn’t participate. Using results from the typical tests taken by lower secondary students, the analysts compared changes in test scores before and after the intervention.
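
In miniature, that comparison is a difference-in-differences on matched schools, as in the sketch below (the scores are invented purely for illustration):

    # Invented mean math scores, for illustration only.
    pqm_pre, pqm_post = 48.0, 53.5       # participating schools
    match_pre, match_post = 49.0, 51.0   # matched non-participants

    # Difference-in-differences: participants' change minus the matches' change.
    effect = (pqm_post - pqm_pre) - (match_post - match_pre)
    print(f"Estimated PQM effect: {effect:+.1f} points")  # +3.5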

The report had two key findings. First, PQM had a positive effect on average test scores in math but no impact on reading scores. Second, the impact differed depending on pre-intervention achievement: students in the lowest-achieving schools—those in the bottom third—made significant gains in math due to the program. For students attending schools in the top two-thirds of achievement, the impact of the after-school program was null in both math and language. According to this evaluation, then, the program worked only in a narrow sense—in math alone, and only for the lowest-achieving students.

Researchers conclude that additional instruction time as an intervention in reading is not particularly helpful to students in grades six through eight. “This result is consistent with other studies in the literature showing that it is much harder to intervene on reading and comprehension skills,” they write, “rather than on skills involving practice, like maths, because a large part of literacy work takes place through general vocabulary training in the home environment.” In other words, improving the “skill” of reading is much more than a matter of spending more time on it once fluent decoding has been learned. (We would add that it also relates to content knowledge—something that certainly can and should be taught in school.) This research indicates, however, that quantitative reasoning and mathematical knowledge—built through repetition and “skill building”—respond positively to more time on task, especially for low-achieving students.

We need to be careful about the conclusions we draw, given the numerous caveats and unknowns here (not to mention the differing culture and language), but a detailed look at the benefit of additional time on task is no bad thing. A longer school day is often seen as a cure-all for students with poor test scores and is sometimes the raison d'être of certain school types. Perhaps a more targeted use of additional seat time is the proper approach.

SOURCE: Erich Battistin and Elena Claudia Meroni, “Should we increase instruction time in low achieving schools? Evidence from Southern Italy,” Economics of Education Review (December 2016).


[1] As befits this particular journal, the economics of the PQM intervention is addressed in the report. There is not enough space to summarize it here, but it could be instructive to American policymakers about ways to use data to improve outcomes.


Many prior studies have found that low-income students have less qualified teachers, based on measures such as years of teaching experience, licensure test scores, certification status, and educational attainment. But those studies say very little about how these differences relate to the achievement gap, nor do they examine how much differences in access to effective teachers might affect student performance.

Yet a new Mathematica study is full of surprises. It examines low-income students’ access to effective teachers in grades four through eight over five years (2008–09 to 2012–13). “Low income” is defined as being eligible for free and reduced-price lunch (FRPL), and “high income” includes everyone else (so not much nuance there). The sample includes twenty-six geographically diverse, large school districts across the country, with a median enrollment of 70,000. The analysts measure the effectiveness of each teacher in these districts using a value-added model.
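
For readers unfamiliar with value-added models, the sketch below shows the core idea on simulated data: predict each student’s score from prior achievement, then average each teacher’s residuals. Everything here—the simulated data and the simple one-variable regression—is our illustration, not Mathematica’s actual specification, which adds demographic and classroom controls:

    import numpy as np

    rng = np.random.default_rng(0)

    # Simulated data: 3,000 students, 100 teachers with latent "true" effects.
    n = 3000
    teacher = rng.integers(0, 100, n)
    true_effect = rng.normal(0, 3, 100)
    prior = rng.normal(50, 10, n)
    score = 0.8 * prior + true_effect[teacher] + rng.normal(0, 5, n)

    # Value-added: regress score on prior achievement, then average each
    # teacher's residuals.
    slope, intercept = np.polyfit(prior, score, 1)
    residual = score - (slope * prior + intercept)
    value_added = np.array([residual[teacher == t].mean() for t in range(100)])

    print(np.corrcoef(true_effect, value_added)[0, 1])  # near 1: effects recovered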

There are five key findings.

First, contrary to conventional wisdom, teachers of low-income students are nearly as effective as teachers of high-income students on average (a difference of one percentile point). Specifically, the average teacher of a low-income student is just below the fiftieth percentile, while the average teacher of a high-income student is at the fifty-first percentile.

Second, high- and low-income kids have similar chances of being taught by the most and least effective teachers. For example, 10 percent of both high- and low-income kids are taught by one of the top 10 percent of teachers in a district.

Third, teachers hired into high-poverty schools are as effective as those hired into low-poverty schools. New hires are less effective than average teachers, and high-poverty schools hire more of them, but neither fact makes much difference: the gaps are small, and new hires improve quickly—on average, becoming as effective as the average teacher after one year.

Fourth, and not surprisingly, teachers who transfer to schools higher in poverty than the ones they left are, on average, less effective than the average teacher. Yet those differences don’t affect equity much, because just under 4 percent of all teachers transfer to schools in a higher or lower poverty category (a little more than 4 percent move between schools with similar poverty rates).

Fifth and finally, teacher attrition doesn’t much affect access to effective teachers, because leavers are equally effective across high- and low-poverty schools. Only in a small subset of districts (three of twenty-six) did the analysts find inequity in access to effective teachers—and only in math. In those three districts, providing high- and low-income kids with equally effective teachers from fourth through eighth grade would shrink the achievement gap by at least a tenth of a standard deviation, equivalent to four percentile points over the five-year period.
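
That standard-deviation-to-percentile conversion is easy to verify with the normal distribution:

    from math import erf, sqrt

    def normal_cdf(x):
        # Standard normal cumulative distribution function.
        return 0.5 * (1 + erf(x / sqrt(2)))

    # A student at the 50th percentile who gains 0.1 standard deviations
    # lands at roughly the 54th percentile—hence "four percentile points."
    print(f"{100 * normal_cdf(0.1):.1f}")  # 54.0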

With all that said, the study’s sample is not nationally representative, even though it is geographically diverse, mostly comprises large, lower-performing districts, and mirrors the types of achievement gaps we see nationally, including on NAEP. The findings therefore may not hold in, say, small districts or rural areas.

Furthermore, it’s possible that the poorest children in the country (say, those at the tenth percentile of the income distribution) are in fact getting less-effective teachers than the richest kids (those at the ninetieth percentile, for example). But this study couldn’t examine that question because it relied on a binary definition of socio-economic status (i.e., whether a student was eligible for FRPL or not)—and again, findings are not nationally representative.

Still, analysts conclude with a simple summary: The achievement gap arises from factors other than students’ access to effective teachers.

Given that this bottom line finding is the result of an expensive study commissioned by a federal agency and conducted by a well-regarded research shop, it represents a big debunking of conventional wisdom.

SOURCE: Eric Isenberg et al., “Do Low-Income Students Have Equal Access to Effective Teachers? Evidence from 26 Districts,” Institute of Education Sciences, U.S. Department of Education (October 2016).


As another year ends, we want you to tell us what you think were the most important Ohio education stories in 2016 and what you predict will be the top story next year.

This is the easiest task you’ll be asked to do today. It’s just two questions and should take only a minute to complete. You can preview the questions below. When you’re ready to take the survey, click here or on the image below.

Just as in the voting booth, whatever you submit will be confidential. Of course, if you’d like to write and tell us why you voted as you did, we may even feature your piece on our blog.

Thanks for your participation.


We look at new federal teacher prep regulations, the state of surveillance in schools, and how virtual schools are addressed in the new model charter law from NAPCS.

Ohio’s charter school movement has faced a number of challenges over the past decade. A myriad of school closings and allegations of financial misconduct contributed to it being dubbed the Wild, Wild West of charter schools. Making matters worse, a comprehensive analysis in 2014 by Stanford University’s Center for Research on Education Outcomes (CREDO) found that, on average, Ohio charter students lost fourteen days of learning in reading and forty-three days of learning in math over the course of the school year compared to similar students in traditional public schools. To its credit, the Ohio General Assembly recognized these problems and in October 2015 passed House Bill 2 (HB 2)—a comprehensive reform of the Buckeye State’s charter school laws.

While HB 2 has only been in effect since February, there are already signs that the movement is changing for the better in response to the new law. Unfortunately, despite great strides forward, there is one group of charter schools in Ohio that’s still causing serious heartburn for charter school proponents and critics alike: full-time virtual charter schools. Attendance issues, a nasty court battle, the possibility that the state’s largest e-school (ECOT—The Electronic Classroom of Tomorrow) could have to repay $60 million in state funding, and poor academic performance have led to a growing push to improve e-schools.

The problem in Ohio is clear, but it isn’t limited to Ohio, especially with regard to low academic achievement. A seminal national study by CREDO, released in October 2015, found that students in online charter schools across the nation struggled mightily, losing on average 72 days of learning per year in reading and a jaw-dropping 180 days per year in math.

As we all know, identifying problems is easy. The difficulty is in finding solutions. Fortunately, the recently released model charter school law from the National Alliance for Public Charter Schools (National Alliance) offers a half dozen policy ideas intended to address the growing issues posed by online charter schools. These new model provisions include language addressing authorizing structure, enrollment criteria, enrollment levels, accountability for performance, funding level based upon costs, and performance-based funding. The National Alliance acknowledges that each of these potential solutions won’t universally apply given the unique context of each state’s laws, but it’s worth looking at how four of the model law provisions might impact Ohio.

Performance-based funding

The National Alliance, in one of its most controversial recommendations, suggests that states fund full-time virtual schools via a performance-based funding system. This idea is simple and intuitive on its face, and it confronts head on the student achievement challenges that online charter schools pose to policymakers. However, widespread low achievement in the movement means that implementing performance-based funding would have an enormous impact, making it both technically and politically complicated to implement. The topic has been broached in Ohio, as State Auditor Dave Yost recently called on the General Assembly to examine “learning-based funding,” which would pay e-schools for successfully delivering—not just offering—education. Despite its complex nature, states considering this type of policy don’t have to start from scratch and should investigate similar models being pursued in a handful of states.  
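
To make the concept concrete, here is one hypothetical way a “learning-based” formula could work—paying per verified course completion rather than per enrollee. The rate and course counts below are entirely invented; Ohio has adopted no such formula:

    def learning_based_payment(per_course_rate, courses_completed):
        # Pay for courses a student verifiably completes, not for enrollment alone.
        return per_course_rate * courses_completed

    # A student who completes all five courses generates the full amount;
    # a student who completes three generates proportionally less.
    print(learning_based_payment(1200, 5))  # 6000
    print(learning_based_payment(1200, 3))  # 3600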

Accountability for performance  

The model law suggests that charter contracts for online schools include additional measures in areas where full-time virtual schools have typically struggled, such as student attendance and truancy. Tracking attendance in a virtual setting is difficult, but states have an obligation to online charter schools and their students to set clear guidelines. Fortunately, thanks to the aforementioned HB 2, Ohio law already sets a fairly clear expectation: “Each internet- or computer-based community school shall keep an accurate record of each individual student’s participation in learning opportunities each day.” While this is a great start, the law could be improved by clarifying how full-time virtual schools will be held accountable for student attendance and participation—and how to account for learning that happens when a student isn’t “logged in” to his or her computer.

Enrollment levels

The National Alliance also recommends that states require authorizers to set maximum enrollment levels each year for full-time virtual schools, and that those levels increase based on performance rather than time. Ohio has enrollment restrictions in place, but the limit is based upon year-over-year growth and isn’t affected by performance. Furthermore, because the sector was already large when Ohio enacted its growth limits—15 percent for schools with more than 3,000 students and 25 percent for schools with fewer than 3,000 students—they haven’t had much of an impact. States considering this part of the model law would be wise to weigh the size of their existing movements and to ensure that “academic success” includes both proficiency and student growth. In the long term, managing enrollment growth could help ensure that the most successful online schools serve the most students. It could also prevent an individual online charter school from becoming “too big to fail” (i.e., closing the school would be too disruptive to students) or too politically powerful to hold accountable for academic performance.
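
Expressed as arithmetic, the growth limits described above look like this—a sketch based on the article’s description of current law, not on statutory text:

    def max_next_year_enrollment(current_enrollment):
        # 15 percent growth limit above 3,000 students; 25 percent below.
        cap = 0.15 if current_enrollment > 3000 else 0.25
        return int(current_enrollment * (1 + cap))

    print(max_next_year_enrollment(2000))   # 2500
    print(max_next_year_enrollment(15000))  # 17250

Note that neither cap depends on how well the school performs—exactly the gap the model law’s performance-based approach would close.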

Enrollment criteria

Charter schools, including online schools, are public schools and must enroll all interested students. This has always been a core principle, but the new model law acknowledges that this idea may need to be reexamined in the context of full-time virtual schools. Because achievement among online charter students is so strikingly low, it’s becoming increasingly clear that students without strong learning supports and/or the proper preparation struggle mightily in an online environment. A recent study from the Thomas B. Fordham Institute shows that Ohio e-school students are lower-achieving, more likely to have repeated a grade, and more likely to be low-income than other students. In other words, e-school students are those most desperately in need of a quality education. Unfortunately, the same study shows they’re not getting it: Across all grades and subjects, e-school students perform worse in math and reading than otherwise-similar students who attend brick-and-mortar district schools. There’s a significant moral quandary here: If full-time virtual schools consistently fail to serve a certain subset of students—a subset that’s most in need of a quality education—then at what point do they forfeit their right to educate these students?

There are two potential solutions here. The first is to transition virtual schools out from under the charter umbrella and establish them as their own type of public school. This would allow them to establish enrollment criteria, much like magnet schools operated by many school districts. This change would allow online charter schools to serve the students who would most benefit from their model without causing potentially irreparable academic harm to enrolled students who aren’t a good fit. In addition, by allowing virtual schools to determine whom they can best serve, it would be easier and fairer to hold them accountable for student achievement under a state accountability system.

The second option is to continue to require virtual schools to serve everyone but to build some flexibility into the law. For example, recent changes in HB 2 explicitly allow Ohio’s full-time virtual charter schools to require an orientation course for new students. Helping parents and students understand from the beginning the expectations and responsibilities inherent in online education is critical. Another policy option would be to require full-time virtual charter school leaders and teachers to engage with students and parents when students fall behind or struggle to meet attendance requirements. If counseling and conferences fail to address the issues, schools could even be required to help a student find a more traditional public school, whether charter or district.

The National Alliance deserves praise for developing policy options that could address the appallingly low performance of many full-time virtual charter school students. There are too many students exercising this important educational option to simply turn a blind eye to its still-developing structure. As should be clear from examining how some of the model law’s recommendations would apply in Ohio, this isn’t going to be easy. Policies will—and should—vary considerably from state to state. Overall, the model law provides a great starting point for states when deciding how to help their online charter schools better serve students, and it couldn’t have come at a better time.

Editor’s note: This article was originally published on the National Alliance for Public Charter Schools’ Charter Blog

 
 

Back in 2011, the Obama administration released its plan for improving teacher education. It included a proposal to revise Title II regulations under the Higher Education Act to focus on outcomes-based measures for teacher preparation programs rather than simply reporting on program inputs. It wasn’t a smooth process. Serious pushback and a stalemate on a federal “rulemaking” panel followed. Draft regulations were finally released in 2014, but were immediately met with criticism. Many advocates wondered if the regulations would ever be finalized.

On October 12, the wondering ceased—the U.S. Department of Education at last released its final teacher preparation regulations. While the final rules number hundreds of pages, the provisions garnering the most attention are those outlining what states must annually report for all teacher preparation programs—including traditional, alternative-route, and distance programs. Indicators are limited to novice teachers[1] and include placement and retention rates of graduates during the first three years of their teaching careers, survey feedback on effectiveness from both graduates and employers, and student learning outcomes. These indicators (and others) must be included on mandatory institutional and state teacher preparation program report cards that are intended to differentiate between effective, at-risk, and low-performing programs.

The public nature of the report cards ensures a built-in form of accountability. States are required to provide assistance to any program that’s labeled low-performing. Programs that fail to earn an effective rating for two of the previous three years will be denied eligibility for federal TEACH grants, a move that could incentivize aspiring teachers to steer clear of certain programs.

What do these new federal regulations mean for the Buckeye State? Let’s take a closer look.

The Ohio Department of Higher Education already puts out yearly performance reports that publicize data on Ohio’s traditional teacher preparation programs. Many of the regulations’ requirements, like survey results and student learning outcomes, are included in these reports, so the Buckeye State already has a foundation to work from. But right now, Ohio releases its performance reports for the sake of transparency. Institutions aren’t differentiated into performance levels, and there are no consequences for programs that have worrisome data. In order to comply with the federal regulations, Ohio is going to have to start differentiating between programs—and providing assistance to those that struggle. 

Helpfully, the differentiation into three performance levels occurs at the program level, not at the institutional level. This matters because the institutional label is an umbrella that covers several programs, and programs don’t always perform equally well. For example, in NCTQ’s 2014 Teacher Prep Review, the University of Akron’s (UA) undergraduate program for secondary education earned a national ranking of 57. But UA’s graduate program for secondary education earned a very different result—a national ranking of 259. Using NCTQ’s review as a proxy for the upcoming rankings reveals that grouping all the programs at a specific institution into one institutional rating could hide very different levels of program performance.

Meanwhile, the regulations’ student learning outcomes indicator presents an interesting challenge. This indicator requires states to report annually on student learning outcomes determined in one of three ways: student growth (based on test scores), teacher evaluation results, or “another state-determined measure that is relevant to students’ outcomes, including academic performance.”

Requiring teacher preparation programs to be evaluated based on student learning won’t be easy for Ohio (or many other states). If Ohio opts to go with student growth based on test scores, it’s likely this will mean relying on teachers’ value-added measures. If this is indeed the case, the familiar debate over VAM is sure to surface, as is the fact that only 34 percent of Ohio teachers actually have value-added data available[2]. Even if Ohio’s use of value-added is widely accepted, methodological problems also exist. For instance, the federal regulations’ program size threshold is 25 teachers, and smaller preparation programs in Ohio aren’t going to hit the mark each year. This means that while bigger programs are going to be held accountable for student learning outcomes during graduates’ first three years of teaching, smaller programs aren’t going to be held to the same standard. There’s also the not-so-small problem that value-added data are most precise when they take into account multiple years of data—and novice teachers simply won’t have multiple years of data available.

Using overall teacher evaluation results isn’t a much better alternative. The Ohio Teacher Evaluation System (OTES) needs some serious work—particularly in the realm of student growth measures, which could imprecisely evaluate teachers in many subjects and grade levels due to the use of shared attribution and Student Learning Objectives (SLOs). The third route—using “another state-determined measure”—is also challenging. If there were a clear, fair, and effective way to measure student learning without focusing on test scores and teacher evaluations, Ohio would already be using it. Unfortunately, no one has come up with anything yet. The arrival of new federal regulations isn’t likely to inspire a sudden wave of quality ideas.

In short, none of the three options provided for measuring student learning outcomes is a good fit.  Worse yet, Ohio is facing a ticking clock. According to the USDOE’s timeline, states have the 2016-17 school year (which is already half over) to analyze options and develop a reporting system. States are permitted to use the 2017-18 school year to pilot their chosen system, but systems must be fully implemented by 2018-19. Whatever the Buckeye State plans to do in order to comply with the regulations, it’s going to have to make up its mind fast.      

While the regulations’ call for institutional and state report cards is a step in the right direction in terms of transparency and accountability, implementation is going to be messy and perhaps impossible. There are no clear answers for how to effectively evaluate programs based on student learning outcomes. Furthermore, the federally imposed regulations seem to clash with the flexibility that the ESSA era was supposed to bring to the states.[3] Unless Congress takes on reauthorization of the Higher Education Act, it looks like states are going to have to make do with flexibility under one federal education act and tight regulations (and the resulting implementation mess) under another.


[1] A novice teacher is defined as “a teacher of record in the first three years of teaching who teaches elementary or secondary public school students, which may include, at a state’s discretion, preschool students.”

[2] The 34 percent comprises teachers whose scores are based entirely on value-added measures (6 percent), teachers whose scores are based partially on value-added measures (14 percent), and teachers whose scores can be calculated using a vendor assessment (14 percent).

[3] It’s worth noting that the provisions related to student learning outcomes did undergo some serious revisions from their original state in order to build in some flexibility. The final regulations indicate that the Department backed off on requiring states to label programs effective only “if the program had ‘satisfactory or higher’ student learning outcomes.” States are also permitted to determine the weighting of each indicator, which includes determining how much the student learning outcomes measure will impact the overall rating.

 
 

To ensure that pupils aren’t stuck in chronically low-performing schools, policymakers are increasingly turning to strategies such as permanent closure or charter-school takeovers. But do these strategies benefit students? A couple of recent studies, including our own from Ohio and one from New York City, have found that closing troubled schools improves outcomes. Meanwhile, just one study from Tennessee has examined charter takeovers, and its results were mostly inconclusive.

A new study from Louisiana adds to this research, examining whether closures and charter takeovers improve student outcomes. The analysis uses student-level data and statistical methods to examine the impact of such interventions on students’ state test scores, graduation rates, and matriculation to college. The study focuses on New Orleans and Baton Rouge, with the interventions occurring between 2008 and 2014. During this period, fourteen schools were closed and seventeen were taken over by charter management organizations. Most of these schools—twenty-six of the thirty-one—were located in New Orleans. The five Baton Rouge schools were all high schools.

The study finds that students tend to earn higher test scores after their schools are closed or taken over. In New Orleans, the impact of the interventions on state math and reading scores was positive and statistically significant. New Orleans high-schoolers also experienced an uptick in on-time graduation rates as a result of the interventions, though the Baton Rouge analysis reveals a negative impact on graduation (more on that below). No significant effects were found on college-going rates in either city. With respect to intervention type, the analysis uncovers little difference: Both closure and charter takeover improved pupil achievement. Likewise, the effects on graduation rates were similar—overall neutral when both cities’ results are taken together.

More importantly, the research indicates that these intense interventions benefit students most when they result in attendance in a markedly better school. Post-intervention, New Orleans students attended much higher-performing schools, as measured by value added, while in Baton Rouge, students landed in lower quality schools, perhaps explaining the lower graduation rates. Furthermore, the analysis suggests that the positive effects are more pronounced when schools are phased out over time—that is, the closure or takeover is announced and no new students are allowed to enroll—thus minimizing the costs of disruption. These results largely track what we found in Ohio, where students made greater gains on state tests when they transferred to a higher-performing school post-closure.

Though such interventions are rarely popular with the general public, hard evidence continues to accumulate that, given quality alternatives, students benefit when policymakers close or strongly intervene in dysfunctional schools.

SOURCE: Whitney Bross, Douglas N. Harris, and Lihan Liu, The Effects of Performance-Based School Closure and Charter Takeover on Student Performance, Education Research Alliance for New Orleans (October 2016). 

 
 

“If schools continue to embrace the potential benefits that accompany surveillance technology,” assert the authors of a new report issued by the National Association of State Boards of Education (NASBE), “state policymakers must be prepared to confront, and potentially regulate, the privacy consequences of that surveillance.” And thus they define the fulcrum on which this seesaw of a report rests.

Authors J. William Tucker and Amelia Vance do not exaggerate the breadth of education technology that can be used for “surveillance,” either by design or incidentally, citing numerous examples that range from the commonplace to ideas that Big Brother would love. We are all familiar with cameras monitoring public areas in school buildings, but as police use of body cameras increases, school resource officers will likely be equipped with them as well. The authors note that a district in Iowa even issued body cameras to school administrators. (Our own Mike Petrilli wondered a few years ago about putting cameras in every classroom.)

Cameras have been commonplace inside and outside of school buses for years, but now student swipe cards and GPS bus tracking mean that comings and goings can be pinpointed with increasing accuracy. Web content filters are commonplace in school libraries, but the proliferation of one-to-one devices has led to monitoring applications for use both in the classroom and in students’ homes. Even a student who provides his or her own laptop can be fully monitored when using school Wi-Fi networks. Social media monitoring of students is an imprecise science, but the authors report it is becoming more sophisticated and more widespread in order to identify cyberbullying incidents or to predict planned violent acts on school grounds. And, verging on science fiction, they note the increasing use of thumbprint scanners, iris readers, and other biometric data-gathering apparatus.

The authors are thorough in listing the intended benefits of all of these surveillance efforts—student safety, anti-bullying, food-service auditing, transportation efficiency, etc. Those benefits likely made the adopted surveillance an easy sell in schools that have gone this route. But on the other side of the fulcrum are two equally large areas of concern: privacy and equity. These issues are addressed by the report on a higher, more policy-oriented level. Privacy concerns are addressed in terms of which data are, by default, kept by schools (all of it) and for what length of time (indefinitely). The authors assert that without explicit record keeping policies (or unless the storage space runs out), there is neither will nor incentive to do anything but save the data. Additionally, there are unanswered questions, such as what constitutes a student’s “educational record” and by whom that data may be accessed. For example, details of disciplinary actions may be educational records, but what about the surveillance video that led to that disciplinary action? Equity concerns are addressed in terms of varying and unequal degrees of surveillance (high school kids who can afford cars are not monitored on the way home at all, for example) as well as inequitable “targeting” of surveillance techniques on certain students before anything actionable has occurred.

As a result of this rather wide gulf between facts and policy, even NASBE’s good and thorough list of suggestions for how state boards might balance student safety, privacy, and equity seems more like a skateboarder’s effort to catch up with a speeding train. Those recommendations are: 1) keeping surveillance to a bare minimum, including discontinuing existing efforts once they are no longer needed, 2) using surveillance only in proportion to the perceived problem, 3) keeping all surveillance methods as transparent as possible to students, parents, and the public, 4) keeping discussion of surveillance use and possible discontinuation thereof open to the public, 5) empowering students and parents to use surveillance data in their own defense when disputes arise between students or between students and staff, 6) addressing broader inequities in schools so that there is less precedent for families to believe that surveillance is being used inequitably, and 7) training for state and local boards, administrators, teachers, and staff on all aspects of surveillance methods, data use, public records laws, etc.

Balancing students’ safety and their privacy is a difficult and sensitive job, and the recommendations enumerated here are good ones. But how many state board members have the bandwidth to address surveillance issues at that level of granularity? How many local board members (perhaps a more logical place for these decisions to be made)? And what happens when board member seats turn over? Legislative means of addressing these concerns are not even touched upon in this report.

In the end, it seems that the juggernaut of technology has spawned an unprecedented level of student surveillance, and diffuse, widespread fear for student safety—whether legitimate or not—serves only to “feed the beast.” As well-intentioned as this report and its recommendations are, even the most casual observer of today’s schools can’t help but conclude that the seesaw is definitely tipped toward more and more varied surveillance that is unlikely to be checked at the state policy level.

SOURCE: J. William Tucker and Amelia Vance, “School Surveillance: The Consequences for Equity and Privacy,” National Association of State Boards of Education (October 2016).

 
 

Hopes are high for a new kind of school in Indianapolis. Purdue Polytechnic High School will open in the 2017-18 school year, admitting its first class of 150 ninth graders on the near Eastside. It is a STEM-focused charter school authorized by Purdue University that will utilize a project-based multidisciplinary curriculum intended to give graduates “deep knowledge, applied skills, and experiences in the workplace.”

The location of the school in the Englewood neighborhood is a deliberate step for Purdue, which is aiming to develop a direct feeder for low-income students and students of color into Purdue Polytechnic Institute in West Lafayette. To that end, the high school will teach to mastery—each student moving on to the next level in a subject once they have demonstrated mastery at the current level. If that requires remediation of work, so be it. The school model is designed to keep students engaged, challenge them to reach their maximum potential, and meet high expectations. More importantly, a high school diploma will be “considered a milestone rather than an end goal,” according to the school’s website. College is the expected next step for all Purdue Polytechnic High School graduates. In fact, the high school’s curriculum is modeled on that of Purdue Polytechnic Institute in order to make the transition between the two seamless—minus 65 miles or so.

Shatoya Jordan and Scott Bess have been chosen to lead the new school as principal and head of school, respectively. Both were recently named to the latest class of Innovation School Fellows by The Mind Trust.

Applications for the first class opened last week, and expectations are high that this innovative school model will open new doors for students in need of high-quality options. Other states, including Ohio, should take note. This partnership could pay big dividends for Purdue, the community, and most importantly, the many low-income students who will have a new opportunity to advance. Hats off to Purdue for supporting this effort.

 
 
 
 

It’s October, and that means election season. One important decision facing many Buckeye voters is whether to approve their school districts’ tax requests. These referenda represent a unique intersection between direct democracy and public finance; unlike most tax policies, which are set by legislatures, voters have the opportunity to decide, in large part, their own property-tax rates. In Ohio, districts must seek voter approval for property taxes above 10 mills (equivalent to 1 percent) on the taxable value of their property.

Some citizens will enter the voting booth well-informed about these tax issues, but for others, the question printed on the ballot might be all they know. Voters have busy lives and they may not always carefully follow their district’s finances and tax issues. This means that the ballot itself ought to clearly and fairly present the proposition to voters. State law prescribes certain standard ballot language, but districts have some discretion in how the proposition is written. County boards of elections and the Secretary of State approve the final language. How does the actual language read? Is it impartial? Can it be easily understood?

Let’s take a look at a few high-profile ballot issues facing voters in November. First, here is the tax-issue posed to Cincinnati voters:

Shall a levy be imposed by the Cincinnati City School District, County of Hamilton, Ohio, for the purpose of PROVIDING FOR THE EMERGENCY REQUIREMENTS OF THE SCHOOL DISTRICT in the sum of $48,000,000 and a levy of taxes to be made outside of the ten-mill limitation estimated by the county auditor to average seven and ninety-three hundredths (7.93) mills for each one dollar of valuation, which amounts to seventy-nine and three-tenths cents ($0.793) for each one hundred dollars of valuation, for five (5) years, commencing in 2016, first due in calendar year 2017?

As with all property-tax issues, one of the most complicated terms is “mill”—the unit in which the levy is expressed, equal to one-thousandth of a dollar. None of us, however, goes to the supermarket and buys 100 mills worth of groceries; and in the realm of taxes, we’re more accustomed to seeing them expressed as percentages—a 6 percent sales tax, for instance. Because millage rates are so rarely used in everyday life, a voter may find it hard to discern the size of the request. Is 7.93 mills a huge tax hike, or relatively affordable? Unless a voter has done her homework, she probably wouldn’t know. But voters shouldn’t be expected to be tax experts or to follow the news closely to understand the impact on their personal finances. Simpler, less technical language would help the average voter better understand the question. Perhaps the tax could also be stated as a percentage or in more realistic dollar terms—for instance, the proposed 7.93-mill levy would increase taxes by roughly $793 per year on a property with a taxable value of $100,000.
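To make the mill arithmetic concrete, here’s a minimal sketch; the conversion follows directly from the definition above, and the dollar figures are illustrative:

```python
# A mill is one-thousandth of a dollar: a 1-mill levy costs a property owner
# $1 per year for every $1,000 of taxable value.

def annual_tax(mills: float, taxable_value: float) -> float:
    """Annual tax, in dollars, for a levy of `mills` on `taxable_value`."""
    return taxable_value * mills / 1000

# Cincinnati's estimated 7.93-mill levy on $100,000 of taxable value:
print(annual_tax(7.93, 100_000))  # 793.0 -- about $793 per year

# Stated as a percentage, 7.93 mills is a 0.793% rate on taxable value.
print(7.93 / 10)  # 0.793
```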

Also noticeable in this tax request is the “emergency” language—it is hard to miss when printed in capital letters. While the district is not in fiscal emergency, it is seeking an emergency levy nevertheless. The state permits this type of levy when districts are projecting a financial deficit in future years. But the prominent ballot language could impact the electoral outcome, especially if marginal or undecided voters tip the scales. Perhaps the district is indeed in financial straits, but shouldn’t that case be made independently of the ballot itself? Opponents might argue that the district could address the deficit in other ways, such as by renegotiating unaffordable teacher union contracts. Referenda should be presented as neutrally as possible,[1] because we know from surveys that the wording of questions can alter the results. Though allowed, the use of the word “emergency,” which carries a powerful connotation, is likely to influence voters.[2]

Now let’s turn to the 274-word question facing Columbus voters.

Shall the Columbus City School District be authorized to do the following: 1. Issue bonds for the purpose of improving the safety and security of existing buildings including needed repairs and/or replacement of roofing, plumbing, fire alarms, electrical systems, HVAC, and lighting; equipping classrooms with upgraded technology; acquiring school buses and other vehicles; and other improvements in the principal amount of $125,000,000, to be repaid annually over a maximum period of 30 years, and levy a property tax outside the ten-mill limitation, estimated by the county auditor to average over the bond repayment period 0.84 mill for each one dollar of tax valuation, which amounts to $0.084 for each one hundred dollars of tax valuation, to pay the annual debt charges on the bonds, and to pay debt charges on any notes issued in anticipation of those bonds? 2. Levy an additional property tax to provide funds for the acquisition, construction, enlargement, renovation, and financing of permanent improvements to implement ongoing maintenance, repair and replacement at a rate not exceeding 0.5 mill for each one dollar of tax valuation, which amounts to $0.05 for each one hundred dollars of tax valuation, for a continuing period of time? 3. Levy an additional property tax to pay current operating expenses (including expanding Pre-Kindergarten education; improving the social, emotional, and physical safety of students; expanding career exploration opportunities; reducing class sizes; providing increased support to students with exceptional needs; and enhancing reading and mathematics instruction) at a rate not exceeding 5.58 mills for each one dollar of tax valuation, which amounts to $0.558 for each one hundred dollars of tax valuation, for a continuing period of time?

I won’t repeat the point about millage, but let me make three additional observations. First and most obviously, this is a complicated request: The district is seeking approval for a tax package that includes not only debt financing but also funding for capital improvements and day-to-day operations. This puts a daunting burden on voters who must either gather the requisite information beforehand, or spend serious time in the booth reading and understanding it.

Second, consider how different Columbus’s tax request is compared to Cincinnati’s. Columbus is seeking a fixed-rate levy at a maximum of 0.5 mills for permanent improvements and 5.58 mills for operations. In contrast, Cincinnati is seeking a fixed-sum levy generating $48 million per year, where the tax rate could vary (note the “estimated” rate). Also, there is no set time at which Columbus’s tax would expire, while Cincinnati’s would sunset after five years. This illustrates how varied Ohio’s property-tax types are, adding more complexity to what voters must know in order to make an informed decision.

Third, note how the 5.58-mill request lists several specific purposes of the levy, such as expanded pre-K, reduced class sizes, and other initiatives. Other district tax requests don’t include such specific lists and could be thought of as more neutral. For instance, Cleveland’s levy request simply states that it would be used for “current expenses for the school district and partnering community schools.” Similarly, Hilliard’s levy request says its purpose is for “current operating expenses.” That’s it. Nothing more with respect to the levy’s purpose. Does enumerating a handful of likable programs improve the chances of passage? It’s hard to know, of course, but such lists do seem to frame the tax in a more favorable light.

One could argue that voters are responsible for being educated before they enter the booth, and the question itself doesn’t matter. To be fair, local media usually cover school tax issues—albeit much less than top-of-the-ticket races—and I suspect a fair number of voters come modestly well-informed. But we also know that some voters might not be quite as well attuned. That means the ballot words matter and, if the examples of Cincinnati and Columbus are any indication, the language for property tax referenda could be made more understandable and fair. Accomplishing this will probably require revisions in state tax law and/or changes in how county boards oversee districts’ ballot language.

To be clear, I’m not taking a position on either of these tax issues. The benefits of each tax could very well outweigh the costs, or vice-versa. Nor am I suggesting that direct democracy is an inappropriate way of setting tax policy. Other taxing arrangements, of course, have their own set of challenges. My point is that so long as voters are tasked with setting property tax rates, the referenda should be presented as clear, simple, and unbiased propositions. As economist John Cochrane has argued, one imperative of modern governing is to “bring a reasonable simplicity to our public life.” Reasonable simplicity in tax referenda language seems to be warranted.


[1] In the case of the “Brexit” vote, the neutrality of the referendum language came into question and the government was forced to revise it. In Indiana, school tax referenda language has been disapproved by the state on the grounds that it might bias the vote. See here, here, and here for examples of disapproved ballot language.

[2] A look at a couple other emergency levy requests also reveals prominent typeface, so this is not unique to Cincinnati’s emergency request. See here for Parma and here for East Knox.

 

 
 

The Ohio Department of Education (ODE) recently released the results of its revised sponsor evaluation, including new ratings for all of the state’s charter-school sponsors. Called “authorizers” in most other states, sponsors are the entities responsible for monitoring and oversight of charter schools. Under the current rating system, sponsors are evaluated in three areas—compliance, quality practice, and school academic outcomes—and receive overall ratings of “Exemplary,” “Effective,” “Ineffective,” or “Poor.” Of the sixty-five Buckeye State sponsors evaluated, five were rated “Effective,” thirty-nine “Ineffective,” and twenty-one “Poor.” Incentives are built into the system for sponsors rated “Effective” or “Exemplary” (for instance, only having to be evaluated on the quality practice component every three years); however, sponsors rated “Ineffective” are prohibited from sponsoring new schools, and sponsors rated “Poor” have their sponsorship revoked.

Number of charter schools by sponsor rating

Evaluating sponsors is a key step in the direction of accountability and quality control, especially in Ohio, where the charter sector has been beset with performance challenges. Indeed, the point of implementing the evaluation was twofold. First, the existence of the evaluation system and its rating rubric is meant to prod sponsors to focus on the academic outcomes of the charter schools in their portfolios. Second, the evaluations are designed to help sponsors improve their own work, which would result in stronger oversight (without micromanagement) of schools and an improved charter sector. Results-driven accountability is important, as is continually improving one’s practice.

What happens next is also important. ODE has time to improve its sponsor evaluation system before the next cycle, and it should take that opportunity seriously. Strengthening both the framework and the process will improve the evaluation. Let us offer a few ideas. 

First, the academic component should be revised to more accurately capture whether schools are making a difference for their students. Largely as a function of current state policy, Ohio charters are mostly located in economically challenged communities. As we’ve long known and are reminded of each year when state report cards on schools and districts are released, academic outcomes correlate closely with demographics. So we need to look at the gains students are (or aren’t) making in these schools, as well as at their present achievement. In communities where children are well below grade level, the extent and velocity of growth matter enormously. Make no mistake: proficiency is also important. But schools whose pupils consistently make well over a year of achievement growth within a single school year are doing what they’re supposed to: helping kids catch up and preparing them for the future.

It’s critical that achievement and growth both be given their due when evaluating Ohio schools—and the entities that sponsor them. Fortunately, Ohio will soon unveil a modified school-accountability plan under the federal Every Student Succeeds Act (ESSA): This would be a perfect opportunity to rebalance school report cards in a way that places appropriate weight—for all public schools and sponsors—on student growth over time.

Because dropout recovery charters are graded on a different scale from other kinds of charters, their sponsors may get artificially high ratings on the academic portion of the sponsor evaluation. That needs fine-tuning too.

The compliance component of the sponsor evaluation system also needs attention.  The current version looks at compliance with “all laws and rules,” which is a list of 319 laws and rules applicable to Ohio’s charter schools, many of which don’t apply to individual sponsors. (For example, many sponsors have no e-schools in their portfolios and therefore the laws and rules that apply to such schools aren’t really pertinent to them.) Yet all Ohio sponsors were forced to gather/draft more than a hundred documents and memos—many of them duplicative—for each of their schools over a 30-day period. A better way to do this would be to figure out what applies and what matters most, then examine compliance against those provisions. For example, current item 209 (“The School displays a US flag, not less than five feet in length, when school is in session”) is not as important as whether the school has a safety plan (i.e., how to deal with armed intruders). ODE should focus on compliance with the most critical regulations on a regular basis while spot-checking or periodically checking compliance with the more picayune regulations. Another option would be to review a sample of the required documents each year, much as an auditor randomly reviews transactions. The current compliance regimen is hugely burdensome with, in many cases, very little payoff.

The sponsor evaluation is critically important, and reflects continued progress in Ohio’s efforts to improve charter school outcomes. But it’s also important to get it right if it’s indeed going to improve sponsor practice and in turn the charter sector. In its current form, it measures how well a sponsor responded to rubric questions and whether there were enough staff on hand to upload documents. It needs to quickly move to 2.0 if it seeks to be a credible and effective instrument long-term. 

 
 

Our goal with this post is to convince you that continuing to use status measures like proficiency rates to grade schools is misleading and irresponsible—so much so that the results from growth measures ought to count much more—three, five, maybe even nine times more—than proficiency when determining school performance under the Every Student Succeeds Act (ESSA). We draw upon our experience with our home state of Ohio and its current accountability system, which generates separate school grades for proficiency and for growth.

We argue three points:

  1. In an era of high standards and tough tests, proficiency rates are correlated with student demographics and prior achievement. If schools are judged predominantly on these rates, almost every high-poverty school will be labeled a failure. That is not only inaccurate and unfair, but it will also demoralize educators and/or hurt the credibility of school accountability systems. In turn, states will be pressured to lower their proficiency standards.
  2. Growth measures—like “value added” or “student growth percentiles”—are a much fairer way to evaluate schools, since they can control for prior achievement and can ascertain progress over the course of the school year. They can also differentiate between high-poverty schools where kids are making steady progress and those where they are not.
  3. Contrary to conventional wisdom, growth models don’t let too many poor-performing schools “off the hook.” Failure rates for high-poverty schools are still high when judged by “value added” or “student growth percentiles”—they just aren’t as ridiculously high as with proficiency rates.

Finally, we tackle a fourth point, addressing the most compelling argument against growth measures:

  1. That schools can score well on growth measures even if their low-income students and/or students of color don’t close gaps in achievement and college-and-career readiness.

(And these arguments are on top of one of the best reasons to support growth models: Because they encourage schools to pay attention to all students, including their high achievers.)

Point #1: Proficiency rates are poor measures of school quality.

States should use proficiency rates cautiously because of their correlation with student demographics and prior achievement—factors that are outside of schools’ control. Let’s illustrate what this looks like in the Buckeye State. One of Ohio’s primary school-quality indicators is its performance index (PI)—essentially, a weighted proficiency measure that awards more credit when students achieve at higher levels. Decades of research have shown the existence of a link between student proficiency and student demographics, and that unfortunate relationship persists today. Chart 1 displays the correlation between PI scores and a school’s proportion of economically disadvantaged (ED) pupils. Schools with more ED students tend to post lower PI scores—and vice-versa.

Chart 1: Relationship between performance index scores and percent economically disadvantaged, Ohio schools, 2015–16

Data source: Ohio Department of Education. Notes: Each point represents a school’s performance index score and its percentage of economically disadvantaged students. The red line displays the linear relationship between the variables. Several high-poverty districts in Ohio participate in the Community Eligibility Provision program; in turn, all of their students are reported as economically disadvantaged. As a result, some less impoverished schools (in high-poverty districts) are reported as enrolling all ED students, explaining some of the high PI scores in the top right portion of the chart.
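To see how a weighted index of this sort behaves, consider a minimal sketch. The achievement levels and point weights below are illustrative stand-ins, not Ohio’s official PI values:

```python
# Illustrative weights: higher achievement levels earn more credit.
# These are NOT Ohio's official performance-index weights.
ILLUSTRATIVE_WEIGHTS = {
    "advanced": 1.2,
    "proficient": 1.0,
    "basic": 0.6,
    "limited": 0.3,
}

def performance_index(percent_by_level: dict) -> float:
    """Weighted sum: percent of students at each level times its weight."""
    return sum(ILLUSTRATIVE_WEIGHTS[level] * pct
               for level, pct in percent_by_level.items())

# A high-poverty school where most students start below grade level posts a
# low index even if those students are making rapid year-over-year gains.
print(performance_index({"advanced": 10, "proficient": 30,
                         "basic": 40, "limited": 20}))  # 72.0
```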

Given this strong correlation, it’s not surprising that almost all high-poverty urban schools in Ohio get failing grades on the performance index. In 2015–16, a staggering 93 percent of public schools in Ohio’s eight major cities received a D or F on this measure, including several well-regarded schools (more on those below). Adding to their misery, urban schools received even worse ratings on a couple of Ohio’s other proficiency-based measures, such as its “indicators met” and “annual measurable objectives” components. Parents and students should absolutely know whether they are proficient in key subjects—and on track for future success. But that’s a different question from whether their schools should be judged by this standard.

Point #2: Growth measures are truer indicators of school quality.

Because they account for prior achievement, ratings based on student growth are largely independent of demographics. This helps us make better distinctions in the performance of high-poverty schools. Like several other states, Ohio uses a value-added measure developed by the analytics firm SAS. (Other states utilize a similar type of measure called “student growth percentiles.”) When we look at the value-added ratings from Ohio’s urban schools, we see differentiation in performance. Chart 2 below shows a fairer balance across the A-F categories on this measure: 22 percent received an A or B rating; 15 percent received C’s; and 63 percent were assigned a D or F rating.*

Chart 2: Rating distribution of Ohio’s urban schools, performance index versus “value added,” 2015–16

*Due to transitions in state tests, Ohio rated schools on just one year of value-added results in 2014–15 and 2015–16, leading to some swings in ratings. In previous years the state used a multi-year average, which helps to improve the stability of these ratings; it will do so again starting in 2016–17.
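For readers curious what “controlling for prior achievement” means in practice, here is a deliberately stripped-down sketch of a covariate-adjustment growth model. It is emphatically not the SAS methodology Ohio actually uses, which is far more sophisticated, but it captures the core idea: predict each student’s score from prior scores, then credit the school with its students’ average surprise:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scale scores for 500 students: prior year and current year.
prior = rng.normal(700, 30, size=500)
current = 0.9 * prior + rng.normal(75, 15, size=500)

# Expected current score as a linear function of prior score.
slope, intercept = np.polyfit(prior, current, deg=1)
residuals = current - (slope * prior + intercept)

# A school's "growth" estimate is its students' mean residual: positive means
# they beat expectations given where they started, regardless of demographics.
one_school = residuals[:60]  # suppose these 60 students attend one school
print(round(one_school.mean(), 2))
```

Because the comparison is to students with similar starting points, a high-poverty school can earn a strong growth rating even while its raw proficiency rate remains low.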

We suppose one could argue that the performance-index distribution more accurately depicts what is going on in Ohio’s urban schools: Nearly every school, whether district or charter, is failing. Yet we know from experience that this simply isn’t true. Yes, terrible schools exist, but there are also terrific ones whose efforts are best reflected in student growth. In fact, we proudly serve as the charter authorizer for KIPP Columbus and Columbus Collegiate Academy-Main. Both schools have earned an impressive three straight years of value-added ratings of “A,” indicating sustained excellence that is making a big impact in their students’ lives. Yet both of these high-poverty charter schools were assigned Ds on the performance index for 2015–16. That is to say, their students are making impressive gains—catching up, even—but they are not yet at “grade level” in terms of meeting academic standards. If we as an authorizer relied solely or primarily on PI ratings, these great schools might be shut—wrongly.

Point #3: Growth measures don’t let too many bad schools “off the hook.”

One worry about a growth-centered approach is that it might award honors grades to mediocre or dismal schools. But how often does this occur in the real world? As chart 2 indicates, 63 percent of urban public schools in Ohio received Ds or Fs on the state’s value-added measure last year. In the two previous years, 46 and 39 percent of urban schools were rated D or F. To be sure, fewer high-poverty schools will flunk under value-added than under a proficiency measure. But a well-designed growth-centered system will identify a considerable number of chronically underperforming schools, as indeed it should.

Point #4: It’s true that schools can score well on growth measures even if their low-income students and/or students of color don’t close gaps in achievement and college-and-career readiness. But let’s not shoot the messenger.

Probably the strongest argument against using growth models as the centerpiece of accountability systems is that they don’t expect “enough” growth, especially for poor kids and kids of color. The Education Trust, for example, is urging states to use caution in choosing “comparative” growth models, including growth percentiles and value-added measures, because they don’t tell us whether students are making enough progress to hit the college-ready target by the end of high school, or whether low-performing subgroups are making fast enough gains to close achievement gaps. And that much is true. But let’s keep this in mind: Closing the achievement gap, or readying disadvantaged students for college, is not a one-year “fix.” It takes steady progress—and gains accumulated over time—for lower-achieving students to draw even with their peers. An analysis of Colorado’s highest-performing schools, for example, found that the trajectory of learning gains for the lowest-performing students simply wasn’t fast enough to reach the high standard of college readiness. An article by Harvard’s Tom Kane reports that the wildly successful Boston charter schools cut the black-white achievement gap by roughly one-fifth each year in reading and one-third in math. So even in the most extraordinary academic environments, disadvantaged students may need many years to draw even with their peers (and perhaps longer to meet a high college-ready bar). That is sobering indeed.
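A bit of back-of-the-envelope math shows why even gains that large imply a long runway. Assuming, as one plausible reading of those figures, that each year’s cut applies to the remaining gap:

```python
import math

def years_to_shrink(annual_cut: float, remaining_share: float) -> float:
    """Years until the gap falls to `remaining_share` of its original size,
    if each year removes `annual_cut` of whatever gap remains."""
    return math.log(remaining_share) / math.log(1 - annual_cut)

# How long until only 10 percent of the original gap remains?
print(round(years_to_shrink(1/5, 0.10), 1))  # ~10.3 years at the reading rate
print(round(years_to_shrink(1/3, 0.10), 1))  # ~5.7 years at the math rate
```

Even in those extraordinary schools, in other words, closing most of the gap is the work of the better part of a decade, not a single school year.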

We should certainly encourage innovation in growth modeling—and state accountability—that can generate more transparent results on “how much” growth is happening in a school and whether such growth is “enough.” But the first step is accepting that student growth is the right yardstick, not status measures. And the second step is to be realistic about how much growth on an annual basis is humanly possible, even in the very best schools.

***

Using proficiency rates to rate high-poverty schools is unfair to those schools and has real-world consequences. Not only does this policy give the false impression that practically all high-poverty schools are ineffective, but it also demeans educators in high-needs schools who are working hard to advance student learning. Plus, it actually weakens the accountability spotlight on the truly bad high-poverty schools, since they cannot be distinguished from the strong ones. Moreover, it can lead to unintended consequences such as shutting schools that are actually benefitting students (as measured by growth), discouraging new-school startups in needy communities (if social entrepreneurs believe that “failure” is inevitable), or thwarting the replication of high-performing urban schools. Lastly, assigning universally low ratings to virtually all high-poverty schools could breed resentment and pushback, pressuring policy makers to water down proficiency standards or to ease up on accountability as a whole.

Growth measures won’t magically ensure that all students reach college and career readiness by the end of high school, or close our yawning achievement gaps. But they do offer a clearer picture of which schools are making a difference in their students’ academic lives, allowing policy makers and families to better distinguish the school lemons from peaches. If this information is put to use, students should have more opportunities to reach their lofty goals. Measures of school quality should be challenging, yes, but also fair and credible. Growth percentiles and value-added measures meet those standards. Proficiency rates simply do not. And states should keep that in mind when deciding how much weight to give to these various indicators when determining school grades.

 
 

The central problem with making growth the polestar of accountability systems, as Mike and Aaron urge, is that the approach is only convincing if one is rating schools from the perspective of a charter authorizer or local superintendent who wants to know whether a given school is boosting the achievement of its pupils, worsening it, or holding it in some kind of steady state. To parents choosing among schools, to families deciding where to live, to taxpayers attempting to gauge the ROI on schools they’re supporting, and to policy makers concerned with big-picture questions such as how their education system is doing when compared with those in another city, state, or country, that information is only marginally helpful—and potentially quite misleading.

Worse still, it’s potentially very misleading to the kids who attend a given school and to their parents, as it can immerse them in a Lake Wobegon of complacency and false reality.

It’s certainly true, as Mike and Aaron say, that achievement tends to correlate with family wealth and with prior academic achievement. It’s therefore also true that judging a school’s effectiveness entirely on the basis of its students’ achievement as measured on test scores is unfair because, yes, a given school full of poor kids might be moving them ahead more than another school with higher scores and a population of rich kids is moving its students. Indeed, the latter might be adding little or no value. (Recall the old jest about Harvard: Its curriculum is fine and its faculty is strong, but what really explains its reputation is its admissions office.)

It’s further true that to judge a school simply on the basis of how many of its pupils clear a fixed “proficiency” bar, or because its “performance index” (in Ohio terms) gets above a certain level, not only fails to signal whether that school is adding value to its students but also neglects whatever is or isn’t being learned by (or taught to) the high achievers who had already cleared that bar when they arrived in school.

Yes, yes and yes. We can travel this far down the path with Mike and Aaron. But no farther.

Try this thought experiment. You’re evaluating swim coaches. One of them starts with kids most of whom already know how to swim and, after a few lessons, they’re all making it to the end of the pool. The other coach starts with aquatic newbies and, after a few lessons, some are getting across but most are foundering mid-pool and a few have drowned. Which is the better coach? What grade would you give the second one?

Now try this one. You’re evaluating two business schools. One enrolls upper middle class students who emerge—with or without having learned much—and join successful firms or start successful new enterprises of their own. The other enrolls disadvantaged students, works very hard to educate them, but after graduating most of them fail to get decent jobs and many of their start-up ventures end in bankruptcy. Which is the better business school? What grade would you give the second one?

The point, obviously, is that a school’s (or teacher’s or coach’s) results matter in the real world, more even than the gains its students made while enrolled there. A swim coach whose pupils drown is not a good coach. A business school whose graduates can’t get good jobs or start successful enterprises is not a business school that deserves much praise. Nor, if you were selecting a swim coach or business school for yourself or your loved one, would you—should you—opt for one whose former charges can’t make it in the real world.

Public education exists in the real world, too, and EdTrust is right that we ought not to signal satisfaction with schools whose graduates aren’t ready to succeed in what follows when those schools have done what they can.

Mike and Aaron are trying so hard to find a way to heap praise on schools that “add value” to their pupils that they risk leaving the real world in which those pupils will one day attempt to survive, even to thrive.

Sure, schools whose students show “growth” while enrolled there deserve one kind of praise—and schools that cannot demonstrate growth don’t deserve that kind of praise. But we mustn’t signal to students, parents, educators, taxpayers or policymakers that we are in any way content with schools that show growth if their students aren’t also ready for what follows.

Yes, school ratings should incorporate both proficiency and growth, but should they, as Mike and Aaron urge, give far heavier weight to growth? A better course for states is to defy the federal Education Department’s push for a single rating for schools and give every school at least two grades, one for proficiency and one for growth. The former should, in fact, incorporate both proficiency and advanced achievement, and the latter should take pains to calculate growth by all students, not just those “growing toward proficiency.” Neither is a simple calculation—growth being far trickier—but better to have both than to amalgamate them in a single, less revealing grade or adjective. Don’t you know quite a bit more about a school when you learn that it deserves an A for proficiency and a C for growth—or vice versa—than when you simply learn that it got a B? On reflection, how impressed are you by a high school—especially a high school—that looks good on growth metrics but leaves its graduates (and, worse, its dropouts) ill-prepared for what comes next? (Mike and Aaron agree with us that giving a school two—or more—grades is more revealing than a single consolidated rating.)

We will not here get into the many technical problems with measures of achievement growth—they can be significant—and we surely don’t suggest that school ratings and evaluations should be based entirely on test scores, no matter how those are sliced and diced. People need to know tons of other things about schools before legitimately judging or comparing them. Our immediate point is simply that Mike and Aaron are half-right. It’s the half that would let kids drown in Lake Wobegon that we protest.

 
 

This report from A+ Colorado examines Denver’s ProComp (Professional Compensation System for Teachers), a system forged collaboratively by the district and the teachers union in 2005 that put Denver in the vanguard of reforming teacher pay scales. The analysis is timely for Denver Public Schools and the Denver Classroom Teachers Association, which are back at the negotiating table (the current agreement expires in December 2017).

The A+ report outlines the urgency of getting ProComp’s next iteration right. Denver loses about half of newly hired teachers within the first three years—a turnover rate that is costly not only for the district, which must recruit, hire, and train new teachers, but for the students who are taught by inexperienced educators (research shows that effectiveness increases greatly in the first five years). Denver Public Schools faces another challenge as well: the city’s cost of living has increased sharply. The report notes that more than half of all renters face “serious cost burdens,” meaning they spend more than 30 percent of income on housing. The situation is worse for homeowners or would-be homeowners. Thus, ProComp is a critical part of “making DPS an attractive place to teach.”

ProComp was revolutionary at its outset. Funded in part through an annual $25 million property tax increase (the cost for the entire system is a sizeable $330 million for 4,300 teachers), it aimed to reward teachers working in hard-to-staff positions and schools, as well as those demonstrating instructional effectiveness, measured in part by student test scores. The average teacher salary change in a given year looks markedly different under ProComp than in traditional pay systems. Last year, teachers received an average $1,444 cost of living increase, $1,253 increase in base pay, and $4,914 bonus through one-time incentives. Yet A+ finds that the system still “strongly tracks with experience” and that “teacher pay only looks modestly different than it would under a more traditional salary schedule.” That’s because ProComp maintains traditional “steps” for salary based on teachers’ years of experience and credentials. Increases to base pay are determined by negotiated cost of living increases, as well as meeting ProComp objectives. One-time bonuses are available for serving in hard-to-serve schools, boosting student test scores, or working in a high-performing or high-growth school. When surveyed, Denver’s teachers perceived ProComp as repackaging the same salary into “salary plus bonuses” in exchange for extra work.

Despite the intentions and theory of change behind ProComp—to incentivize and reward teachers and ultimately drive student achievement—A+ finds that studies to date have shown mixed results. While the Center for Education Data and Research found small positive effects on student achievement when comparing the pre- and post-ProComp eras, that study couldn’t prove causality. A+ concludes that it’s “hard to prove any measurable student achievement gains attributable to ProComp.” Another study, from Harvard University, found that teachers whose students attained the highest and the lowest levels of math growth earned about the same.

Even the $25 million pot of money—just 8 percent of the district’s total spending on teacher pay—isn’t targeted to reward individual teachers for effectiveness. In 2015–16, 27 percent of these one-time dollars were allocated to market incentives. Ten percent went to teachers who gained additional education, while 52 percent were tied to student outcomes—but mostly at the building level. The authors further find that the system is difficult for teachers to understand—a “hodgepodge of incentives” in desperate need of being streamlined and better aligned to solving district challenges.

Toward that end, A+ makes good recommendations for improving Denver’s system: 1) “Front load” the salary schedule dramatically, awarding 10 percent increases in the first five years (with 1 percent increases thereafter, up to year fifteen); 2) Streamline salary increases and prioritize expertise, specifically by offering two lanes based on education level instead of seven and by allowing subject-matter experts to earn more; 3) Increase pay for teachers teaching in, and returning to, the highest-need schools; 4) Allow base pay increases, rather than stipends, for taking on leadership roles, thereby better aligning pay with one’s career ladder; and 5) Reward high performance among individual teachers, whether through more bonuses or through additional promotional opportunities such as leadership roles and advancement on the salary ladder. (A rough sketch of what front loading does to a salary trajectory appears below.)
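To make the arithmetic of recommendation 1 concrete, here is a minimal sketch of how a front-loaded schedule diverges from a uniform one. The sketch assumes the 10 percent and 1 percent increases are applied annually; the $45,000 starting salary and the 4 percent “traditional” step are hypothetical figures of ours, not numbers from the A+ report.

    # Hypothetical illustration of "front loading" a salary schedule.
    # The $45,000 starting salary and the 4 percent traditional step are
    # assumptions; the 10%/1% pattern follows the A+ recommendation.

    def trajectory(start, raises):
        """Return year-by-year salaries given a starting salary and raise rates."""
        salaries = [start]
        for r in raises:
            salaries.append(round(salaries[-1] * (1 + r)))
        return salaries

    front_loaded = trajectory(45_000, [0.10] * 5 + [0.01] * 10)  # years 0 through 15
    traditional = trajectory(45_000, [0.04] * 15)                # uniform annual steps

    for year in (3, 5, 10, 15):
        print(f"Year {year}: front-loaded ${front_loaded[year]:,} "
              f"vs. traditional ${traditional[year]:,}")

Under these assumptions, the front-loaded teacher earns roughly $72,000 by year five versus about $55,000 on the uniform schedule, which is exactly the early-career boost the report argues Denver needs to stem turnover.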

Perhaps the most valuable contribution this report makes is a powerful reminder that ProComp (and any teacher pay system, for that matter) should be aligned with district goals. If Denver wants to mitigate teacher turnover, its pay scale must do more to incentivize teachers to stay at earlier points in their careers. The brief is also pertinent nationally. As the breakdown of Cleveland’s promising teacher pay system reminds us, the challenge lies not only in crafting innovative pay systems but in sustaining them over the long haul. In that respect, there’s a lot to learn from Denver’s eleven-year-old program.

SOURCE: A+ Colorado, “A Fair Share: A New Proposal for Teacher Pay in Denver” (September 2016).

 
 

On October 12, in the ornate Rotunda and Atrium of the Ohio Statehouse, surrounded by family and many of the state’s top education leaders, some of Ohio’s highest-performing beginning teachers were honored for demonstrating superior practice. We at Educopia, Ohio’s partner in administering the Resident Educator Summative Assessment (RESA), feel truly privileged to have hosted the event, which recognized Ohio educators who earned the top 100 overall scores on RESA in each of the past three years. More than 120 of the state’s highest-scoring teachers attended, joined by their spouses, children, and parents in celebration of the honor. State Superintendent Paolo DeMaria, Representative Andrew Brenner (chair of the House Education Committee), and other state policymakers attended as well. Seeing the teachers beam with pride in front of their families and hearing their sincere gratitude for being recognized for their professional excellence was by far the most moving experience of my career in education policy.

For background, RESA is required for all third-year educators seeking a permanent teaching license in Ohio. It consists of four performance tasks that teachers complete by submitting videos, lesson plans, and student assignments from their actual teaching. The assessment was custom-developed for Ohio with the assistance of national experts Charlotte Danielson and Mari Pearlman to accurately mirror Ohio’s Teaching Standards. Ohio educators, who complete extensive training and earn certification by passing a rigorous examination, score the RESA submissions. The teachers honored at the event were among a very select group: over 15,900 educators have taken RESA since its first year in 2013-2014.

The Ohio Resident Educator program gives new teachers the chance to develop their competencies with the support of a mentor. According to Connie Ball, a program coordinator at Worthington Schools, “The Ohio Resident Educator program provides strong support for beginning teachers allowing them the grace of time to grow in the profession and continue to learn through the guidance of a strong mentorship program and a network of their peers. The program encourages teachers to ask, ‘how can I be a better educator tomorrow than I was today?’ and our teachers are certainly meeting that challenge.”

Through RESA, the state determines whether candidates have the knowledge and skills to lead a classroom anywhere in Ohio. This process allows local leadership to focus on what they're best situated to do: work with teachers to help them address areas for improvement. It's a bit like an AP course, in which the exam sets a consistent bar that all students must clear to earn credit and the AP teacher's job is to help them over it. In Ohio, local leaders and mentors are there to help teachers develop the skills assessed on RESA so they can pass and earn their professional license.

RESA is an objective measure of important teaching practices, such as lesson planning, differentiation of instruction, use of assessment, and the ability to engage students intellectually so they understand concepts deeply. It also measures a teacher's ability to reflect and identify ways to improve her own practice, which is absolutely essential in a profession that requires an ongoing commitment to continual improvement.

Demonstrating the skills that RESA measures is a lot of work, as any teacher will tell you. Just as teachers and schools must commit to ongoing improvement, Educopia, the state’s testing vendor, is gathering feedback and working with the Ohio Department of Education to streamline the assessment and alleviate the burden on teachers. Still, the RESA “tasks” are not busywork; they capture essential skills required of any effective teacher.

On questionnaires distributed at the end of the event, teachers provided suggestions on how to improve RESA and wrote about what they gained from the RESA process. Among their comments:

  • Madison Welker, an 8th grade teacher, commented, “[T]he idea of reflection aided me to further my impact through instruction.”
  • Allison Meyer, a Kindergarten teacher, wrote, “Reflecting upon my teaching practices in a purposeful manner was incredibly beneficial, as it forced me to stop amongst the hectic day-in and day-out and evaluate my own teaching practices.”
  • Jessica Russell, a Pre-K teacher, also commented on the reflection element of RESA: “RESA has helped make lesson reflection second nature! As soon as I finish teaching a lesson, I am already thinking about how I can improve it for next time. It has helped me become my best!”

Pre-K teacher Jessica Russell with State Superintendent of Public Instruction Paolo DeMaria
All photos used in this piece are by kind permission of Educopia/Matt Verber

This was the first year that Educopia hosted such an event to honor outstanding RESA candidates, and it is just the first step in our efforts to recognize high-performing educators in Ohio. We encourage these teachers to continue their professional growth and to consider future roles as teacher leaders, so that they can share what they clearly do so well. Although the event on October 12 honored a select group of teachers who earned the top 100 scores on RESA, we hope districts across Ohio recognize all of their teachers who are successful on the assessment, which is truly an accomplishment that deserves celebration.

Matt Verber is the Executive Director of Policy & Advocacy of Educopia.

 
 
 
 

We take a deep dive into Ohio’s most recent school report cards, look at a first step in addressing chronic absenteeism, and more.

Management expert Peter Drucker once defined leadership as “lifting a person's vision to higher sights.” Ohio has set its policy sights on loftier goals for all K-12 students in the form of more demanding expectations for what they should know and be able to do by the end of each grade en route to college and career readiness. That’s the plan, anyway.

These higher academic standards include the Common Core in math and English language arts, along with new standards for science and social studies. (Together, these are known as Ohio’s New Learning Standards.) To align with these more rigorous expectations, the state has implemented new assessments designed to gauge whether students are meeting the academic milestones important to success after high school. In 2014-15, Ohio replaced its old state exams with the PARCC assessments, and in 2015-16, the state transitioned to exams developed jointly by the American Institutes for Research (AIR) and the Ohio Department of Education.

As the state marches toward higher standards and—one hopes—stronger pupil achievement and school performance, Ohioans are also seeing changes in the way the state reports student achievement and rates its approximately 600 districts and 3,500 public schools. Consider these developments:

As the standards grow more rigorous, pupil proficiency rates have declined. As recently as 2013-14, Ohio would regularly deem more than 80 percent of its students “proficient” in core subjects. But these statistics vastly overstated the number of pupils who were mastering math and English content and skills. For instance, the National Assessment of Educational Progress—the “nation’s report card”—indicates that just two in five Ohio students meet its stringent standards for proficiency. According to ACT, barely one in three Buckeye pupils reaches all of its college-ready benchmarks. The Ohio Department of Higher Education’s most recent statistics find that 32 percent of college-going freshmen require remediation in either math or English. But with the implementation of higher standards and new exams, the state now reports more honest proficiency statistics: in 2015-16, roughly 55 to 65 percent of students statewide met Ohio’s proficient standard, depending on the grade and subject. Although these rates still overstate the fraction of students meeting a college- and career-ready standard, parents and taxpayers are gaining a truer picture of how many young people meet a high achievement bar.

Higher achievement standards have also meant lower school ratings, particularly on the state’s performance index. This key report card component measures overall student achievement within a school; it is closely related to proficiency rates (and, for better and worse, closely correlated with socio-economics). While lower performance index scores affect schools throughout Ohio, they create special challenges when examining the results of high-poverty urban schools. Under softer standards, a fair number of urban schools maintained a C or higher rating on this measure, but now almost all of them receive a D or F performance index rating. In 2015-16, a lamentable 94 percent of urban schools were assigned one of those low grades. (High-poverty schools also receive near-universal Ds and Fs on a couple of other proficiency-based measures.) Because performance index ratings yield so little differentiation, policymakers, analysts, and the media need to use extra care lest they label virtually every urban school a poor performer. Student achievement is indeed low in high-poverty communities, and we all want to see stronger outcomes for disadvantaged children. But by concentrating on proficiency-based measures, we risk calling some schools failures when they are actually helping their students make up academic ground.

That’s where Ohio’s “value added” rating kicks in. This measure utilizes student-level data and statistical methods to capture the growth that students make (or don’t make) regardless of where they begin on the achievement spectrum. Because value added methods focus on pupil growth instead of point-in-time snapshots of proficiency, they can break the link between demographics and schools’ outcomes as measured strictly by achievement. On value added, urban schools can and do perform as well (or as poorly) as their counterparts from posh suburbs. In the present report, we show that 22 percent of Big Eight public schools earned an A or B on the state’s value added measure in 2015-16. Given the criticism of Buckeye charter schools, it is even more notable that a greater proportion of urban charters earned A or B value added ratings than did their Big Eight[1] district counterparts (29 percent versus 19 percent). Although the evidence is based on just one year of results, one hopes that these results mark the onset of an era of higher charter performance after the major reforms enacted in 2015.
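For readers curious about the mechanics, here is a toy illustration of why a school's growth and proficiency ratings can diverge. It is emphatically not Ohio's actual value added model, which uses far more sophisticated statistical methods; the scores and the proficiency cut below are invented for the example.

    # A toy gain-score illustration; NOT Ohio's actual value-added model.
    # Scores are invented scale scores; 700 is an assumed proficiency cut.

    PROFICIENT = 700

    # (last year's score, this year's score) for students at two schools
    school_a = [(620, 668), (640, 690), (655, 702), (600, 652)]  # low-scoring, fast-growing
    school_b = [(735, 738), (760, 758), (720, 725), (745, 747)]  # high-scoring, flat

    def proficiency_rate(scores):
        """Share of students whose current-year score meets the cut."""
        return sum(post >= PROFICIENT for _, post in scores) / len(scores)

    def average_growth(scores):
        """Mean year-over-year point gain across students."""
        return sum(post - pre for pre, post in scores) / len(scores)

    for name, s in [("School A", school_a), ("School B", school_b)]:
        print(f"{name}: {proficiency_rate(s):.0%} proficient, "
              f"{average_growth(s):+.1f} points average growth")

In this toy example, School A looks dismal on proficiency (25 percent) yet posts large gains, while School B pairs a 100 percent proficiency rate with near-zero growth. That distinction is precisely what a growth measure is designed to surface.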

While value added scores haven’t noticeably plummeted or inflated with the rising standards, we should point out some important developments in the measure itself. First, during Ohio’s testing transitions, the state has reported value added results based on one-year calculations rather than multi-year averages, as was done prior to 2014-15. Probably as a result, some schools’ ratings have swung significantly; for example, Dayton Public Schools received an F on value added in 2014-15 but an A in 2015-16. One year of value added results can’t perfectly capture school performance—we need to take into account a longer track record on this report card measure.

Second, Ohio’s value added system now includes high schools. Previous value added ratings were based solely on tests from grades four through eight (third-grade assessments form the baseline). With the phase-out of the Ohio Graduation Tests (OGT) and the transition to high school end-of-course exams (EOCs), Ohio has been able to extend value added to high schools. (The OGTs were not aligned to grade-level standards, which precluded growth calculations; EOCs are aligned to the state’s new learning standards.) Starting in 2015-16, the state assigns value added ratings at the high school level (though it reported high school results the year prior). In the absence of value added, analysts were limited to proficiency or graduation rates, measures that can disadvantage high-poverty high schools. With the addition of value added, we gain a richer view of high school performance.

Shifting to higher learning standards, transitioning to new tests, and evolving to more comprehensive school report cards have led to some frustration. To a certain degree, the feedback is understandable—it has been a challenging start to the long journey toward academic excellence. In the days ahead, Ohioans should absolutely continue to work together to make sure state standards and accountability policies are as rigorous, coherent, and fair as possible. At the same time, the state should ensure continuity in key policy areas so that we can gauge our progress moving forward.

At the end of the day, we should keep the big picture in mind: High standards, properly implemented, help form the foundation for greater student achievement. Several Ohio school leaders appear ready and willing to tackle these challenges. After the report card release, David Taylor, a leader at Dayton Early College Academy, told the Dayton Daily News, “We hope that people have the patience to understand that the goal posts moved…We’re asking a lot more of our kids and their families. That will require patience and a plan.” On the pages of the same newspaper, Scott Inskeep, superintendent of Kettering City Schools, said, “The AIR assessments were tough…We have to get tough, ourselves, and teach to the depth that is needed to assure student success on these tests.” Ohio has charted a more rugged course for its students and schools. If state and local leaders can maintain this course—setting sights on excellence—we should begin to see more young people fully prepared to face the challenges of tomorrow.

Download the full report here.


[1] The Big Eight cities are Akron, Canton, Cincinnati, Cleveland, Columbus, Dayton, Toledo, and Youngstown.

 

 
 

According to the most recent Civil Rights Data Collection (CRDC) compiled by the U.S. Department of Education,[1] an alarming 6.5 million American students—more than 13 percent nationwide—were chronically absent (defined as missing 15 or more days of school) during the 2013-14 school year. More than half of these students were enrolled in elementary school, where truancy can contribute to weaker math and reading skills that persist into later grades. Chronic absenteeism rates are higher in high school: Nearly 20 percent of U.S. high school students are chronically absent, and these teenagers often experience future problems with employment, including lower-status occupations, less stable career patterns, higher unemployment rates, and lower earnings.

The data get even more disconcerting when they’re disaggregated by location. The CRDC shows that nearly 500 school districts reported that 30 percent or more of their students missed at least three weeks of school during the 2013-14 school year. The fact that certain districts struggle more with chronic absenteeism than others caught the attention of Attendance Works (AW), an organization that aims to improve school attendance policies. To create a more in-depth picture of the problem, Attendance Works combined the CRDC data with statistics from the Census Bureau and the National Center for Education Statistics and released a report with a stunning key finding: Half of the nation’s chronically absent students are concentrated in just 4 percent of districts.[2]

These 654 districts are located in 47 states and Washington, D.C., and include cities, suburbs, towns, and rural areas. AW pays particular attention to two groupings within the 4 percent. The first is a group of large, mostly suburban districts with large numbers of chronically absent students—districts like Fairfax County, Virginia (where 12 percent of more than 180,000 students are chronically absent), and Montgomery County, Maryland (where 16 percent of more than 150,000 students are chronically absent)—that are known for academic achievement but also for their growing low-income populations. The second grouping is composed of “urban school districts with large populations of minority students living in poverty.” AW notes that half the urban districts with high numbers of chronically absent students are highly segregated by race and income: “At least 79 percent of the students in these districts are minority, and at least 28 percent of the children between ages 5 and 17 live in poverty.”

So how did Ohio fare in the AW report? During the 2013-14 school year, the Buckeye State enrolled nearly 1.8 million students, of whom 265,086 (15 percent) were chronically absent—right around the national average. To illustrate their findings, Attendance Works developed interactive maps. Here’s a look at which Ohio districts hold a spot on one of the maps, in order from the highest percentage of chronically absent students to the lowest:

It’s no surprise to see Cleveland with the highest percentage of chronically absent students: CEO Eric Gordon told the Plain Dealer in 2015 that over the previous three years, the district had averaged 57 percent of kids missing ten days or more in a year.

Attendance Works offers a list of six steps that states and districts can take to turn the data into an effective action plan. Each step comes with a variety of additional recommendations, such as adopting a multi-tiered system of support that addresses common attendance barriers and includes interventions like home visits and personalized outreach, developing tailored action plans, and mentoring.

The good news for Ohio is that many of these recommendations are already being considered. House Bill 410, introduced back in December 2015, is a commonsense bill that aims to tackle both the punitive roots of student truancy in the Buckeye State and the lack of clear, consistent data about it. (See here for an in-depth overview of the bill.) Unfortunately, the bill has yet to make it out of the Senate.

Recently, some Ohio education groups voiced concerns that many schools lack the required personnel and finances to properly support the absence intervention teams outlined in the bill. (These teams are responsible for developing an intervention plan tailored specifically to the student, with the aim of getting her back to—and keeping her in—school. Teams must include a district administrator, a teacher, and the student’s parent or guardian, and are required to meet certain deadlines.) They also questioned the “extensive reporting” the bill calls for. Given the heavy load of responsibilities that teachers and administrators already have, it would be wise for legislators to seek feedback about how to make absence intervention teams more workable without losing sight of their intended purpose. The same is true for reporting requirements, which could be streamlined but not erased completely.

There’s a growing sense that when lawmakers return to Columbus this fall, they will fine-tune and then pass House Bill 410. That’s a good thing. Improving the data systems and intervention protocols for chronic absenteeism is low-hanging fruit, and the General Assembly should do its part to ensure that the fruit is harvested and that solid policies around student attendance are in place.


[1] The U.S. Department of Education’s Office for Civil Rights, which conducts the CRDC, notes that its data may differ from those of other published reports due to “certain data decisions.” Find out more here.

[2] Like the USDOE, Attendance Works notes that some of its data are incomplete because of data corrections and submission errors. The authors of the study do not believe these issues change the overall patterns they reported.

 
 

NOTE: All photos used in this piece were graciously provided by the Cleveland Transformation Alliance. The photo at the top of this page features HBCU Preparatory School student Meiyah Hill and school principal Tim Roberts.

Standardized test scores are the most common measure of academic success in our nation’s K-12 schools. While they are an important indicator, most observers would agree that tests don’t tell the whole story about what’s happening in our public schools.

Given the recent changes to Ohio’s assessments and standards and their impact on test scores statewide, the need to tell a deeper story about public education has become even more evident.

We know that Cleveland’s Plan for Transforming Schools is enabling both district and charter schools to create new learning environments that lay a foundation for sustainable academic improvement. Progress is slow and not always visible from the outside, but it’s happening.

That’s why the Cleveland Transformation Alliance recently partnered with Civic Commons ideastream to share powerful stories about education in Measuring Success Behind the Numbers. The conversation included three storytellers:

  • Student Meiyah Hill talked about how HBCU Preparatory School, a charter middle school in Cleveland, made her feel part of the school family and challenged her so she was ready to get into one of the Cleveland Metropolitan School District’s highest-performing high schools, the School of Architecture and Design at John Hay;
  • Parent Larry Bailey told the story of how he went from being a drop-the-kids-at-the-door dad to leading his school’s parent organization;
  • Principal Lee Buddy, Jr., now in his second year at a district school, spoke of his vision and work to expand partnerships and opportunities for his students.


Cleveland parent Larry Bailey

After the audience heard these stories, we sat down with three educators whose job it is to make sure that thousands of Meiyah Hills can experience the transformative power of education, that many more Larry Baileys get pulled into their children’s education, and that hundreds of leaders like Lee Buddy are empowered to make a difference at the school level.


Lee Buddy, Jr., principal of Wade Park School, with one of his students

Connecting the individual stories to the bigger picture of transformation in Cleveland were Diana Ehlert, a CMSD central office administrator; JaNice Marshall, who leads efforts at Cuyahoga Community College to engage K-12 students and parents in preparation for college and career; and Mary Ann Vogel, chief educator at the Intergenerational Schools, part of Cleveland’s successful Breakthrough charter school network. The panel’s dialogue focused on school culture, the importance of the broader community in transformation efforts, and how success is measured on an ongoing basis. We wrapped up with a lively Q&A session.


Student Meiyah Hill (center), with her family

As the Transformation Alliance found in its second annual report on the implementation and impact of the Cleveland Plan, released in September 2016, progress in Cleveland is too slow, but we expect that changes happening now will lead to clearer academic gains in the future. The stories and dialogue shared in Measuring Success Behind the Numbers provided more evidence of the transformation that’s taking shape in our city.

Piet van Lier is Executive Director of the Cleveland Transformation Alliance.

The Cleveland Transformation Alliance is a public-private partnership created to serve as a voice for accountability and advocacy. The Alliance has four work roles: assess all district and charter schools in Cleveland; communicate with families about school quality and options; ensure fidelity to the Cleveland Plan; and monitor charter school quality and growth.

*****

On October 27, 2016, the Alliance will host a dialogue focused on how deeper partnerships are driving educational innovation in Cleveland. See www.innoeducate.eventbrite.com for more information.

 
 

This report from the Council for a Strong America provides an alarming snapshot of how ill-prepared many of the nation’s young adults are to be productive members of society.

The Council is an 8,500-member coalition of law enforcement leaders, retired admirals and generals, business executives, pastors, coaches, and athletes. Its inaugural “Citizen-Readiness Index” gives more than three-quarters of states a C or below, thanks to staggering numbers of young people who are 1) unprepared for the workforce, 2) involved in crime, or 3) unqualified for the military.

Ohio received an overall C grade, earning some of the top marks for workforce and crime indicators. More specifically, 12 percent of Ohio’s young people ages 16–24 were reported to be unprepared for the workforce, a relatively low percentage nationally that earned Ohio a B. Ohio also earned a B on crime, with eight arrests per one hundred people (among those ages 17–24)—one of the lowest numbers nationwide. On military readiness, however, Ohio earned a D. A whopping 72 percent of youth ages 17–24 were ineligible for military service. Eligibility to enter the military depends on a range of factors, including physical fitness and attainment of a high school diploma.

Nationwide, almost a third of our young people (31 percent) are disqualified from serving in the military due to obesity alone. Factoring in drug abuse, crime (more than 25 percent of young adults have an arrest record), and “educational shortcomings” raises that number to 70 percent. (Unfortunately, the military readiness numbers aren’t broken out at the state level. We don’t, for example, know what percentage of Ohio youth are disqualified due to obesity versus other factors.)

These data are shocking and should remind everyone of what’s at stake. Given the proven and widely known negative correlation between educational attainment and crime, drug use, unemployment, and other adverse life outcomes, it is all the more imperative that K–12 schools do a better job preparing young people not just for college but for life as upstanding, productive citizens.

Unfortunately, the report doesn’t address K–12 public school quality, nor does it provide many concrete steps for state or local leaders—where education policy is truly set—to address the citizen-readiness crisis. Instead, it offers a set of recommendations geared specifically at Congress and the next president to address the problem.

Part 1, “strong families,” calls for Congress to reauthorize the Maternal, Infant and Early Childhood Home Visiting (MIECHV) program. Outlining research on the relationship between childhood trauma (affecting nearly a quarter of all children) and crime and drug use, the report makes a case—albeit a loose, indirect one—for reauthorizing the program, which serves 150,000 at-risk parents and kids.

Part 2, “quality early education,” dives into research on the long-term gains offered by high-quality preschool, but it misses the boat in its broad recommendation to reauthorize Head Start and expand the Preschool Development Grant Program. While making a strong moral case for investing in children, the Council overlooks research indicating that academic gains from preschool often wear off and that many current early education programs are woefully insufficient. (It does, however, acknowledge the uneven quality of Head Start.) Further, because it ignores questions about the quality of K–12 public schools, there’s no guarantee that suggested improvements to early learning will be sustained over time and ultimately reap the intended benefits (higher education attainment, lower crime, increased readiness for military, etc.).

Part 3, “healthier schools,” is perhaps the most relevant section, given the coalition behind this report (including military generals, coaches, and athletes), and the one that might be most practically addressed at the state level. Even if the other recommendations were implemented fully and school quality improved dramatically, obesity would still disqualify a significant number of people from the military. Sixty percent of young adults are obese or overweight (according to standards set forth by the American Medical Association). These numbers are worrisome not only in light of military ineligibility but also in terms of lifelong health consequences. The report recommends that Congress and the president “defend science-based nutrition standards” like those embedded in the Healthy, Hunger-Free Kids Act of 2010. Specifically, it calls on lawmakers to support the Child Nutrition Reauthorization introduced last year by the Senate Agriculture Committee. And it implores states to place a greater priority on physical education programs, which have waned in recent years. According to the report, the percentage of schools requiring students to take physical education has declined significantly in the last fifteen years, as has the amount of time spent on recess.

Despite not devoting energy or ink to discussing the academic quality of K–12 schools, the Citizen-Readiness Index does an excellent job of outlining how direly ill-prepared too many young people are for jobs, college, or the military. The scope and commitment of the bipartisan coalition behind this report are impressive, even though its recommendations take an equally broad, everything-and-the-kitchen-sink approach. As Ohio develops its state accountability plan for the Every Student Succeeds Act (ESSA), it might be worth including “citizen readiness” as a high school indicator.

SOURCE: Council for a Strong America, “2016 Citizen-Readiness Index” (September 2016).

 
 

A large body of research has shown that quality teaching is necessary for students’ achievement and positive labor market outcomes. Rigorous evaluations have been hailed as a way to improve the teacher workforce by recognizing and rewarding excellence, providing detailed and ongoing feedback to improve practice, and identifying low performers who should be let go. While plenty of time has been devoted to how best to provide teachers with feedback, less has been spent examining how evaluation systems contribute to the removal of underperforming teachers and the resulting changes in the teacher workforce.

This study examines the Excellence in Teaching Project (EITP), a teacher evaluation system piloted in Chicago Public Schools (CPS) in 2008. The program focused solely on classroom observations and used Charlotte Danielson’s Framework for Teaching (FFT) as the basis for evaluation (unlike many current systems, which rely on multiple measures, including student test scores). Roughly nine percent of all CPS elementary teachers participated in the first year of the pilot, which was considered a “low-stakes intervention” because scores on the FFT rubric were not officially included in teachers’ summative evaluation ratings.

Prior to the use of the FFT, teachers in Chicago were evaluated against a rudimentary checklist of classroom practices. This overly generous model led to nearly all CPS teachers (approximately 93 percent) receiving one of the top two ratings in a four-tiered rating system. EITP, on the other hand, used the FFT’s detailed, research-based components and required teachers to be evaluated multiple times a year. Principals were trained extensively in how to use the framework effectively and were required to confer with teachers before and after observations. Because the FFT provided teachers and principals with far more detailed information about instructional performance than the previous system, the framework produced more variation in teacher ratings.

The pilot started with forty-four randomly selected elementary schools in 2008-09; the following year, forty-nine more schools were added. CPS worked with the University of Chicago Consortium on School Research to craft an experimental design for implementation, and the University of Chicago randomly assigned schools to the first and second cohorts. Treatment and control schools were statistically indistinguishable with respect to prior test scores (reading and math) and student composition.

Although the experimental design was maintained for only one year, researchers were able to determine how the pilot affected teacher turnover. While there was no average effect on teacher exits, the researchers did find that teachers with low prior evaluation ratings were more likely to leave the district because of the evaluation pilot. In fact, by the end of the first year of implementation, 23.4 percent of low-rated teachers in schools using the EITP pilot had left the district, compared to 13 percent of low-rated teachers in control schools.[1] Non-tenured teachers were also “significantly more likely” to leave. Overall, the first year of the pilot saw an 80 percent increase in the exit rate of the lowest-performing teachers (23.4 percent is roughly 1.8 times 13 percent) and a 46 percent increase in the turnover of non-tenured teachers.[2] The loss of teachers who were both low-performing and non-tenured suggests that “contract protections enjoyed by tenured teachers provided meaningful job security for those who were low-performing,” as there was no difference in the exit rate of low-rated tenured teachers. Also worth noting is that the teachers who remained in EITP schools were higher-performing than those who exited, as were the teachers who replaced exiting educators.

These findings suggest two important conclusions. First, teacher evaluation reforms like the EITP pilot can indeed improve the quality of the teacher workforce by inducing the exit of low performers. In turn, by replacing low-performing teachers with higher-performing ones, achievement should in theory rise (though the researchers did not specifically test this hypothesis). Second, given that low-rated non-tenured teachers were significantly more likely to leave than low-rated tenured teachers, the researchers surmise that “tenure reform may be necessary to induce low-performing tenured teachers to leave the profession.”

SOURCE: Lauren Sartain and Matthew P. Steinberg, “Teachers' Labor Market Responses to Performance Evaluation Reform: Experimental Evidence from Chicago Public Schools,” The Journal of Human Resources (August 2016).


[1] The researchers note that although “the leave rate of low-rated treatment school teachers is imprecisely estimated because very few teachers received low ratings, it is remarkably stable and large in magnitude.”

[2] In CPS, teachers who are in their first, second, and third year of teaching are non-tenured.

 
 

“No one is born fully-formed: it is through self-experience in the world that we become what we are.” - Paulo Freire

As a child, I always had a sense of myself—a way of understanding who I was (and am) in very concrete and tangible terms. When I was a young girl, others would often comment that I appeared very grounded and steady. At the time, I didn't quite know what they meant because I was usually in my own internal world and not really aware of how others viewed me. But I do remember as a child feeling connected to my familial roots and having a deep perception of and sensitivity to my physical, mental, and spiritual existence. That is what knowledge of self meant to me. And that knowledge—expanded in a decidedly global way—would eventually become my foundation for navigating the world as a gifted child and young woman.

Reflecting on my childhood and upbringing, I can see clearly that my parents already had their own plans to make sure I received an extraordinary education at school and at home. They were committed to having me educated in the public schools, but they certainly did not intend to leave the trajectory of my education and fate of my future to others. They were active in shaping how my schools and teachers would interact with me, starting with advocating for me to be placed in the gifted program—called the Mentally Gifted (MG) program in our school district.

Everything we did in MG seemed to dovetail seamlessly with my parents' vision and efforts to educate me. I frequently took weekend trips with my mom or dad to art exhibitions or the local horticultural center. I helped my grandmother in her community garden and with her homemade soap-making business. I attended graduate classes individually with both of my parents. I took piano lessons with my uncle, who was a trained music teacher, and drove with family friends to New York, where I saw ballets (including the famed Cuban ballet) and Broadway plays and sampled different cuisines such as Japanese tempura and Indian daal. I was beginning to see myself as truly of and in the world at large. In Beloved, Toni Morrison wrote of the character Baby Suggs:

And no matter, for the sadness was at her center, the desolated center where the self that was no self made its home…fact was she knew more about them than she knew about herself, having never had the map to discover what she was like.

This passage has resonated with me for years. It is as if my parents once said to each other, “No. Our Nicole is going to know herself. She has to have a map to discover what she is like.”

Young, Gifted, and Black – Developing Race Consciousness

Self-identifying as a smart and gifted Black girl was beginning to be etched into my psyche and internalized as part of my core identity. That was a good thing—because very soon I would face what seems to be a rite of passage for smart Black kids: bullying and teasing for being different or “acting white.” I did not get teased as badly as a few other students, but I was targeted enough to know I didn’t want or enjoy this kind of attention. Taking public transportation to and from school exposed me to some of the ills from which my parents were trying to shield me. Sexualized catcalls from grown men who viewed young girls as fair game for their lustful desires, and incessant teasing from other neighborhood children who didn’t know what to make of a shy girl walking alone from the bus stop carrying a large backpack and violin case, were part of my indoctrination into the harsh realities of an urban America shaped by persistent structural racism and sexism.

Part of my buffer was a firm sense of self and race consciousness—an understanding of myself as a young Black girl from a legacy of rich history and beauty and knowledge of the social realities of racial inequality. Race consciousness helped to inoculate me from others’ overt bigotry and internalized self-hatred. My ideas about the world were bigger and more complex. Sneers from racists and taunts from peers were not erased, but those incidents took a backseat to my own evolving map of myself.

Going Global

In the same way that Venus and Serena Williams's parents devised and carried out a plan to raise tennis stars, my parents and family were very strategic about raising a gifted child who would be engaged and immersed in the world at large. Instead of weekly sports lessons, my parents exposed me to constant critical analysis of current events, community activism, global citizenship, and multicultural appreciation.

My family always talked to and about me through an international lens—as a citizen of the world. I remember my dad telling me about his travels throughout Africa and South America as part of his political organizing work. I remember my mom recounting stories of traveling to the World Youth Festival in Germany while she was seven months pregnant with me. When I was still a small girl, my grandmother had long-term visitors from Zimbabwe and London (a “Black Brit” as our guest called herself) staying at her house, exposing me to diversity in a way that no books or school curricula could come close to doing.

The height of this emphasis on internationalism came when my dad announced that he and my mom had signed me up for an international children’s camp in Russia. “Russia! Why?” I thought. I was afraid when they told me I was going to take a plane from New York to Moscow with a group of children who also had activist parents. I worried that the plane would be hijacked or, worse, that my friends would think I was a weirdo for having parents who would send me to such a far-off place, one they had either never heard of or knew only as the heart of “evil communism.” But off I went. I stayed for five weeks and traveled throughout the country, interacting with children from literally all over the world.

As scary as the trip was initially, it ended up being one of the most formative and influential experiences of my young life. I learned in firsthand detail about apartheid in South Africa and the civil war in Nicaragua. I encountered the vast diversity of African peoples and cultures when I met kids from Ethiopia, Nigeria, Algeria, and Guinea. I was also confronted with the nasty underbelly of racial stereotyping by a couple of fellow American campgoers. Through that unfortunate incident, I learned a valuable lesson in how to exude confidence and navigate others’ ignorance and arrogance. I returned home right before I started high school. I transitioned easily into honors and eventually AP classes at my prestigious all-girls high school. My understated confidence and self-awareness also helped me flourish socially and emotionally (as much as could be expected for a teen girl!).

I knew early on that I would eventually pursue and attain a Ph.D. in psychology. I also had an unconscious understanding that international travel and living would play an important role in my career development. To that end, I traveled to Cuba while I was in graduate school and gained invaluable insights into race, racism, and anti-racism; I was part of the first group of students in my clinical psychology Ph.D. program that participated in an international internship in Grenada, West Indies; and I even carved out a little time during graduate school to travel to Guinea, West Africa, and Japan to study and perform with different African dance groups.

Then, to no one’s surprise but many people’s confusion, I applied for a prestigious fellowship to conduct my dissertation research in Ethiopia. While living in Ethiopia, I traveled all over the country and, by myself, to neighboring Sudan and nearby Egypt. Everyone at home thought I was nuts and worried that I was going alone to chaotic and possibly terrorist states. I laughed at the idea, thinking how they were missing out on the elaborate weddings and tea parties I was enjoying in North Africa. Later, as a professional psychologist, I became involved in clinical and advocacy work with marginalized and underserved urban populations and with refugees and asylum-seekers from different countries. I was creating a bridge to my earlier educational and social roots. I ended up doing work in Peru, Liberia, Italy, Haiti, and Senegal and presenting at international conferences in Asia, South America, and Europe. Eventually, I took the plunge and went to work abroad full-time as a psychologist—first in Bahrain and then in Botswana.

Unwittingly, I had been storing up accounts of my international adventures over the years, and recently I had them published in what I call the ultimate travel guide: Global Insights - The Zen of Travel and BEING in the World. In it, I explore the personal-development value of travel and give tips—for parents and students, young and old—on how to maximize the travel experience.

My advice to parents: Travel can be one of the best educational enrichment opportunities for your gifted child because it crystallizes so much classroom and life experience. The good news is that, despite financial barriers, it can still be accessible to families from all socioeconomic backgrounds. While my parents were certainly educationally advantaged and able to get me involved in a variety of cultural and travel activities, they were by no means wealthy. Funding is available for travel, especially for high-achieving children. Parents can search private and government sources to help support study abroad and cultural immersion trips so that a lack of money does not have to translate into missed opportunities. Help your gifted child see the world, and your child will better understand his or her place in it!

Nicole M. Monteiro, Ph.D. is a clinical psychologist and Assistant Professor of Psychology in the Department of Professional Psychology at Chestnut Hill College. Find out more about her work at www.nicolemonteirophd.com and her book at www.zenwanderlust.com.

 
 
 
 

We look at the breakdown of Cleveland’s merit pay plan, examine school closures, and celebrate the lowering of Ohio’s college remediation rate.

We know that teacher quality is the most important in-school factor affecting student performance—and that the variation in teacher quality can be enormous, even within the same school. We also know that most teachers are paid according to step-and-lane salary schedules that exclusively reward years on the job and degrees earned. These systems pay no attention to instructional effectiveness, attendance, leadership and collaboration within one’s school, or any other attributes relevant to being a good worker.

When I entered the classroom at age twenty-two, I looked at my contract and realized I wouldn’t reach my desired salary until I was in my mid-to-late forties. I would reach that level regardless of whether I took one or fifteen sick days every year; whether I put in the bare minimum or a herculean effort (as many educators in fact do); or whether I clocked out at 3:01 or stayed with my students to offer extra help. No matter the outcomes my kids achieved, my salary would steadily tick upward based only on time accrued. Predictable, yes. But given the urgent task at hand—to keep excellent educators at the instructional helm, address the challenges of burnout and attrition, and professionalize teaching—it’s woefully insufficient.

That’s why the breakdown of the Cleveland Metropolitan School District’s (CMSD) innovative teacher pay system is so disappointing. Developed in partnership with the Cleveland Teachers Union as part of a comprehensive package of reforms to improve the city’s schools, CMSD’s new teacher pay system was codified in 2012 by HB 525. The law earned rare bipartisan support and teacher buy-in and was the first of its kind in Ohio to base annual salary increases on factors beyond years of experience and degrees—which should matter to some extent, just not singularly.

The multifaceted system went beyond the typical forms of “merit pay” largely disliked by teachers. The law required that all of the following be considered: the level of license a teacher holds; whether the teacher is highly qualified; ratings received on performance evaluations; and any “specialized training and experience in the assigned position.” Further, it allowed (but did not require) the district to compensate teachers for additional factors: working in a high-needs school (those receiving Title I funds); working in a challenged school (those in school improvement status); or teaching in a grade level or subject with a staff shortage, in a hard-to-staff school, or in a school with an extended day or year—all of which are worthy of reward.

The system informed by the law and agreed upon in the 2013 teachers contract retained a fifteen-step pay system but allowed teachers’ placement within that system to be determined by how many “achievement credits” they earned, rather than by years of service and degrees. (Teachers were to earn credits through strong evaluation ratings as well as in the ways described above.) Depending on their credit totals, newer teachers could be placed further along in the new step system than in the previous model, while more experienced teachers wouldn’t automatically move to a higher rung (though no existing teacher would see her pay cut as a result of the new plan, and all teachers received a one-time $1,500 bonus during the transition to the new pay scale).

But Cleveland’s promising compensation strategy has fallen apart just three years in, illustrating how an innovative plan can die in the hands of bureaucrats and interest groups. There has been mounting frustration that teacher raises were tied too heavily to annual evaluations rather than to the combination of factors allowed (but not required) by the original law. Despite “hours of meetings over the last four years,” the district and teachers union couldn’t come to basic agreements about how to define or reward performance. And even though 65 percent of teachers earned significant salary increases during the plan’s first two years, the union complained that some stipends were one-time allowances rather than permanent salary bumps. Meanwhile, the district never granted extra compensation for hard-to-fill jobs (saying there were none), nor would it pay extra for teachers working in corrective-action schools. Last month, Cleveland teachers moved to strike, forcing all parties back to the negotiating table to reach a deal before the start of the school year.

That deal erases nearly all of the reforms enacted three years ago. It still grants teacher raises according to annual ratings but flattens those raises out and ensures that nearly everyone except the gravely incompetent earns them. Any teacher receiving one of the top three ratings—accomplished, skilled, or developing—will get the same raise. Much like the traditional step-and-lane structure, the deal treats nearly all teachers in equal fashion. The key difference is that ineffective teachers (just one percent of CMSD’s teaching force in 2014–15) will have their pay frozen, and teachers earning the top ratings will get a one-time $4,000 bonus. This will benefit CMSD’s best teachers and is perhaps the only detail deserving of praise.

Cleveland’s capitulation is discouraging, especially given the plan’s potential and the manner in which it fell apart. As the fact-finding report depressingly noted, “Neither time nor resources have been expended to build out the system. As a consequence, the District lost the opportunity to lead the country with respect to innovation where compensation systems are concerned.”

Cleveland’s is a cautionary tale about the importance of what happens in the weeds after a law passes. One conclusion to draw is that the details of CMSD’s teacher pay plan should have been better prescribed in law, leaving no room for gridlock or for either party to shirk its responsibilities. The episode might also point to the obvious fact that it’s very difficult to achieve change with so many parties at the table—and that’s why policymakers feel they have to resort to “top-down” policy changes like the Youngstown Plan. I’d venture to guess that most policymakers and leaders would like to achieve local buy-in and cooperation—at the very least, few would eschew it on principle. Ohio lawmakers left some details open for CMSD and the union to sort out for themselves, respecting local autonomy and not wanting to over-prescribe policy details. Yet this is precisely where the plan unraveled.

Perhaps the most daunting takeaway is that sustainable change is often resisted, stalled, or derailed by cognitive inertia—a psychology term describing what happens when long-held beliefs endure even in the face of counterevidence. When it comes to teacher pay, there seem to be deeply ingrained beliefs that the best way to pay teachers is the old, industrial-era way we’ve always done it (despite evidence to the contrary). Excellent teachers stand to benefit the most from differentiated pay systems. Developing and effective teachers do, too—by receiving meaningful professional development and seeing improvements over time. Only the least effective educators stand to lose anything. Yet teachers aren’t coming out in droves to demand better pay systems that develop and reward them—not in Cleveland, and not in most of Ohio.

Bipartisan backing and early teacher buy-in in Cleveland clearly were not enough to prevent the model’s breakdown and the district’s default to a system that largely treats all teachers equally. It may be tempting to conclude from the collapse of CMSD’s promising teacher-pay plan that the law should have been more specific, or that top-down reforms may work better to overcome local gridlock. An equally plausible observation—and one that education reformers may do well to consider—is that until prevailing opinion changes within the profession itself, improvements to teacher pay models will be difficult to sustain. Even heavily prescribed plans can be reversed later. Meanwhile, teachers themselves should consider how moving away from a factory model of compensation to one differentiated for performance and skills is one key step toward reaching a long-held goal: professionalizing teaching. 

 
 

Politicians are wise to pay attention to public opinion data, but they are also responsible for crafting sound policies based on research and evidence. So what are they supposed to do when these two goods conflict?  

Anya Kamenetz at NPR was the first to highlight the contradiction between newly released poll results from PDK International and a variety of research on school closures (“Americans Oppose School Closures, But Research Suggests They're Not A Bad Idea”). The PDK survey revealed that 84 percent of Americans believe that failing schools should be kept open and improved rather than closed. Sixty-two percent said that if a failing public school is kept open, the best approach to improvement is to replace its faculty and administration rather than increase spending on the same team. In other words, the majority of Americans are firmly committed to their community schools—just not to the people working in them.

These findings shouldn’t come as a huge surprise (as my colleague Robert Pondiscio pointed out here). No one wants to see a school closed, no matter how persistently underperforming. For many communities, schools offer not just an education, but a place to gather that’s akin to communal houses from the past. Enrichment and after-school programs—which are profoundly important for low-income youth—often benefit from the use of school buildings. Buildings can also house wrap-around services like health centers, adult education centers, or day care centers.

In addition to their community-wide implications, school closures have also been called “psychologically damaging” for students. A 2009 University of Chicago report examining closure effects on displaced students in Chicago Public Schools (CPS) found that “the largest negative impact of school closings on students’ reading and math achievement occurred in the year before the schools were closed,” leading researchers to believe that closure announcements caused “significant angst” for students, parents, and teachers that may have affected student learning.

The report also indicates that one year after students left their closed schools, their reading and math achievement was not significantly different, on average, from what researchers “would have expected had their schools not been closed.” This is possibly because most displaced students re-enrolled in academically weak schools—which, though disappointing, isn’t a huge surprise either. Even those who see value in school closures will point out that if there aren’t enough high-quality seats elsewhere for displaced students, the action simply reshuffles students from one bad school to another.

So if closing schools is bad for communities and students and the American public hates it, then why is it happening? As Kamenetz points out in her NPR piece, research shows that closing schools isn’t always bad—and therein lies the contradiction. The same University of Chicago study that points to “significant angst” and flat achievement also acknowledges that “displaced students who enrolled in new schools with higher average achievement had larger gains in both reading and math than students who enrolled in receiving schools with lower average achievement.” Translation? Displaced kids who end up in better schools do better. Fordham’s 2015 study on school closures and student achievement found similar results: Three years after closure, students who attended a higher-quality school made greater progress than those who didn’t. In a recent study of closures in New York City, researchers found that “post-closure students generally enrolled in higher-performing high schools than they would have otherwise attended” and that “closures produced positive and statistically significant impacts on several key outcomes for displaced students.”      

School turnarounds, on the other hand, have almost always been found to disappoint.

In The Turnaround Fallacy, Andy Smarick offers three compelling arguments for giving up on “fixing” failing schools. First, data shows that very few past turnaround efforts have succeeded. (California is a prime example: After three years of interventions in the lowest-performing 20 percent of schools, only one of 394 middle and high schools managed to reach the mark of “exemplary progress.” Elementary schools fared better—11 percent met the goal—but the results were still disheartening.) Second, there isn’t any clear evidence for how to make turnaround efforts more successful in the future. Even the Institute of Education Sciences seems unable to find turnaround strategies that are backed by strong evidence. And finally, although the long list of turnaround failures in education makes it reasonable for advocates to look outside the sector for successful models to import, there aren’t many. A review of the “two most common approaches to organizational reform in the private sector” found that both approaches “failed to generate the desired results two-thirds of the time or more.”

Let’s review. Many American schools are consistently failing to properly educate students. The public doesn’t like the idea of closing these schools, but many research studies indicate that students who re-enroll in higher-performing schools perform better than they would have if they’d stayed in their previous schools. What the American public wants is to improve failing schools instead of closing them. Unfortunately, research shows that school turnarounds haven’t worked in the past—and no one has any idea how to make them work in the future. Overall, the phrase “damned if you do, damned if you don’t” seems particularly apropos.

So what’s a policy maker to do when schools are failing to properly educate students? Expanding the number of high-quality seats is a good place to start. We might not have a clue about how to turn around bad schools, but we do know of school models that work, especially in the charter sector. Policy makers who want to give kids immediate access to a great education should invest in expanding and replicating schools and networks that are already doing an excellent job.

Boosting the supply of excellent schools will lead to two important changes. First, many families and children will get immediate relief—and life-changing opportunities in new, better schools. And second, over time, failing schools will see their enrollment plummet, creating a fiscally unsustainable situation. At that point, officials can shut them down—not because they are failing academically, but because they are failing financially. And to my knowledge, no public opinion poll has shown Americans averse to closing half-empty, exorbitantly expensive schools. At least not yet.

 
 

College may not be for all, but it is the chosen path of nearly fifty thousand Ohio high school grads. Unfortunately, almost one-third of Ohio’s college-goers are unprepared for the academic rigor of post-secondary coursework. To better ensure that all incoming students are equipped with the knowledge and skills needed to succeed in university courses, all Ohio public colleges and universities require their least prepared students to enroll in remedial, non-credit-bearing classes (primarily in math and English).

Remediation burdens both college students and taxpayers, who pay twice: first to fund the K–12 system, then again through the state’s higher education system for coursework that should have been completed before college (and for which students earn no credit). These costs underscore how important it is for every student to arrive on campus prepared.

Perhaps the bigger problem with remedial education is that it doesn’t work very well. In Ohio, just 51 percent of freshmen requiring remediation at a flagship university—and 38 percent of those in remedial classes at a non-flagship school—go on to complete entry-level college courses within two academic years. It’s even worse at community colleges: Just 22 percent of students go on to take a college course that is not remedial.  

While far too many college-bound students in Ohio aren’t ready for college upon matriculating, the Buckeye State has made some progress in recent years. Back in 2012, 40 percent of entering college students required remedial coursework, prompting concerns of a remediation crisis. But more recent data show Ohio’s remediation rate decreasing to 37 percent in 2013 and then to 32 percent for the high school graduating class of 2014. According to the Ohio Department of Higher Education’s most recent report, more students required math remediation (28 percent) than English (13 percent), and 10 percent of first-time students enrolled in both remedial math and English courses (see the quick arithmetic check below Table 1 for how these subject rates square with the overall figure).

Table 1. Remediation by subject area

Source: Ohio Department of Higher Education, “2015 Ohio Remediation Report”
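
Since a student can require remediation in one subject or in both, the math and English rates can’t simply be added together. Here’s the quick inclusion-exclusion check promised above, using the reported figures (the one-point gap from the overall 32 percent rate presumably reflects rounding in the published numbers):

\[
P(\text{math or English}) = P(\text{math}) + P(\text{English}) - P(\text{both}) = 28\% + 13\% - 10\% = 31\%
\]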

In the absence of rigorous research, we can only speculate about what’s behind this drop in remediation rates. One possible explanation is that fewer students who need remedial education are going straight to college. If this were true, we might expect to see college-going rates declining commensurately with the decrease in remediation rates. But college-going rates, while falling between 2009 and 2013, jumped by 5.6 percent from 2013 to 2014. Though we can’t rule it out entirely, this suggests that college-going trends are probably not a leading explanation for the recent fall in remediation.

Another possibility is that the population of students going to college in 2014 was actually better prepared than in previous years. Thirty-two percent of first-time college students in 2014 required remediation upon entry, compared to 41 percent of first-time students in 2009. Between 2009 and 2014, Ohio implemented higher K–12 educational standards; it is possible that we’re starting to see the fruit of those efforts. (In 2012, Ohio began implementing the Common Core academic standards in math and English language arts, along with new learning standards in science and social studies.) At the very least, it doesn’t appear that rising academic standards are having an adverse impact on college readiness. Despite all the travails, the new learning standards might be giving Ohio’s young people a modest boost when it comes to readiness. Not bad!

Or maybe the credit goes to the implementation of Ohio’s “remediation-free” standards in 2013. These standards (for public colleges and universities) detail the competencies and ACT/SAT scores a student must achieve in order to enroll in credit-bearing courses; students can now predict from their ACT subject scores whether they’ll be able to enroll directly in credit-bearing coursework. Many states and colleges opt to enroll all students in credit-bearing coursework with increased support instead of offering remedial courses, but Ohio’s standards do not address how remedial students must be served or whether their remedial status bars them from earning credit even with such support. These statewide standards are also being used to hold high schools accountable for college preparedness: Remediation-free status is now incorporated into the Prepared for Success measure on the state’s school report cards. Maybe this policy is working as intended—encouraging students to improve their reading and math skills before they reach campus.

Further, it is worth considering whether Ohio’s declining remediation rate is being driven by the incentives its colleges and universities face. Public funding for higher education in Ohio is not linked to the remediation rate, but 50 percent of funding for two-year and four-year institutions is determined by the percentage of degree completions (the graduation rate), which also heavily influences college rankings. To boost graduation rates and rankings, many universities may seek to admit fewer students who fall below the remediation-free threshold. Still, while such selectivity could lower reported remediation rates, it would not change the underlying number of students needing remediation as determined by their ACT scores.

Ohio’s declining need for remedial education is good news, though there’s still a ways to go before all students matriculating to college are truly ready for it. It’s not entirely clear what is driving this trend—whether it’s enrollment patterns, policy implementation, a bit of both, or other explanations that we didn’t consider. Certainly more research and analysis on this topic is needed to determine causation. In the meantime, we’ll need to monitor how the remediation trend unfolds in the years to come. The falling remediation rates at least indicate that the state is moving in the right direction. If Ohio can stay the course and maintain high academic standards and a focus on college preparedness, the gap between college aspirations and college readiness will hopefully close even further. 

 
 

Ohio’s report card release showed a slight narrowing of the “honesty gap”—the difference between the state’s own proficiency rates and proficiency rates as defined by the National Assessment of Educational Progress (NAEP). The NAEP proficiency standard has long been considered stringent—and one that can be tied to college and career readiness. When states report inflated proficiency rates relative to NAEP, they may label their students “proficient,” but they overstate to the public the number of students who are meeting high academic standards.

The chart below displays Ohio’s three-year trend in proficiency on fourth and eighth grade math and reading exams, compared to the fraction of Buckeye students who met proficiency on the latest round of NAEP. The red arrows show the disparity between NAEP proficiency and the 2015-16 state proficiency rates.

Chart 1: Ohio’s proficiency rates 2013-14 to 2015-16 versus Ohio’s 2015 NAEP proficiency

As you can see, Ohio narrowed its honesty gap by lifting its proficiency standard significantly in 2014-15, when it replaced the Ohio Achievement Assessments with PARCC. (The higher PARCC standards meant lower proficiency rates.) Although Ohio did not continue with the PARCC assessments, the chart above indicates that the state continued to raise its proficiency benchmarks on its new reading exams (developed by AIR and ODE). Math proficiency, however, remained virtually unchanged in these grades from 2014-15 to 2015-16.

Despite the frustration that some schools are expressing, Ohio policy makers should be commended for continuing to raise standards in 2015-16. Parents and citizens are now getting a much clearer picture of where students stand relative to rigorous academic goals. 

 
 

Twenty-five years into the American charter school movement, there remains little research on the impact of charter authorizers. Yet these entities are responsible for key decisions in the lives of charter schools, including whether they can open and when they must close.

A new policy brief from Tulane University’s Education Research Alliance seeks to shed some light on authorizer impact in post-Katrina New Orleans. Specifically: Does the process by which applications are reviewed help produce effective charter schools? And once schools have been initially authorized, does that process also shed light on which types of charter schools get renewed?

It merits repeating that the authorizing environment in New Orleans was unlike anywhere else in the country: Louisiana had given control of almost all New Orleans public schools to the Board of Elementary and Secondary Education (BESE) and the Recovery School District (RSD). Independent review of charter applications was mandated in state law, and tons of organizations applied to open new charters.

To facilitate the application process, BESE hired the National Association of Charter School Authorizers (NACSA). NACSA reviewed and rated applications, and in most cases BESE followed those recommendations. As the authors point out, NACSA is the largest evaluator of charter applications in the country and the extent of its work in New Orleans provides some insights regarding the potential impact of authorizer decisions.

First, NACSA examined much more than the charter application alone, drawing on information gleaned via interviews and site visits. The authors found that the only factor that predicted both charter approval and renewal was a school’s rating from NACSA. Interestingly, they also found a number of application factors that had no effect on approval or renewal, including: the number of board members with backgrounds in education; whether partners (vendors providing services such as curricular materials, tutoring, college advising, social services, etc.) were for-profit or non-profit; whether a national charter management organization (CMO) was involved; whether a principal had been identified at the time of the application; and the amount of instructional time and professional development proposed.

Second, these application factors generally do not appear to be linked to future school performance either, with one exception: Applicants with non-profit partners showed lower state performance scores, lower overall enrollment, and lower enrollment growth than those without such partners.

In terms of charter renewal, School Performance Scores (the SPS includes indicators of assessment results, readiness, graduation, diploma strength, and progress) and value-added (growth) measures appear to be strong predictors of renewal, in addition to the initial NACSA rating. And while charter schools with higher enrollment growth were more likely to be renewed, enrollment levels themselves were not a factor in renewal decisions.

The takeaway for authorizers is that past performance is the best predictor of future success, and that the answers to some of the questions typically included in applications (e.g., about board members, partners, and school leaders) really aren’t predictive of anything. Looking at a paper application simply isn’t enough; authorizers must also gather qualitative evidence, such as interviewing school leaders and references and making detailed site visits.

The study acknowledges a few of its limitations: lack of a clear scientific basis for determining which application factors to measure; the ability to only observe the future performance of schools with the strongest applications (the worst applications didn’t make the cut); and, importantly, the fact that many authorizers (nationally, not just in Louisiana) simply have not had enough applications and renewals to make a comprehensive study possible.

But fear not: Those of us at Fordham have our own study in the works to look at this question in four states. Stay tuned!

SOURCE: Whitney Bross and Douglas N. Harris, "The Ultimate Choice: How Charter Authorizers Approve and Renew Schools in Post-Katrina New Orleans," Education Research Alliance (September 2016).

 
 

The annual release of state report card data in Ohio evokes a flurry of reactions, and this year is no different. The third set of tests in three years, new components added to the report cards, and a precipitous decline in proficiency rates are just some of the topics making headlines. News, analysis, and opinion on the health of our schools and districts—along with criticism of the measurement tools—come from all corners of the state.

Fordham Ohio is your one-stop shop to stay on top of the coverage:

  • Our Ohio Gadfly Daily blog has already featured our own quick look at the proficiency rates reported in Ohio’s schools as compared to the National Assessment of Educational Progress (NAEP). More targeted analysis will come in the days ahead. You can check out the Ohio Gadfly Daily here.
  • Our official Twitter feed (@OhioGadfly) and the Twitter feed of our Ohio Research Director Aaron Churchill (@a_churchill22) have featured graphs and interesting snapshots of the statewide data with more to come.
  • Gadfly Bites, our thrice-weekly compilation of statewide education news clips and editorials, has already featured coverage of state report cards from the Columbus Dispatch, the Dayton Daily News, and the Cleveland Plain Dealer. You can have Ohio education news sent directly to your inbox by subscribing to Gadfly Bites.

And most importantly, there’s Fordham’s own annual analysis of Ohio report card data, an in-depth look at schools, districts, and charter schools in the state’s Big 8 urban areas. You can see previous years’ reports here and here; look for this year’s edition in the coming days.

 
 
 
 
