Ohio

We look at new federal teacher prep regulations, the state of surveillance in schools, and how virtual schools are addressed in the new model charter law from NAPCS

Ohio’s charter school movement has faced a number of challenges over the past decade. A myriad of school closings and allegations of financial misconduct contributed to it being dubbed the Wild, Wild West of charter schools. Making matters worse, a comprehensive analysis in 2014 by Stanford University’s Center for Research on Education Outcomes (CREDO) found that, on average, Ohio charter students lost fourteen days of learning in reading and forty-three days of learning in math over the course of the school year compared to similar students in traditional public schools. To its credit, the Ohio General Assembly recognized these problems and in October 2015 passed House Bill 2 (HB 2)—a comprehensive reform of the Buckeye State’s charter school laws.

While HB 2 has only been in effect since February, there are already signs that the movement is changing for the better in response to the new law. Unfortunately, despite great strides forward, there is one group of charter schools in Ohio that’s still causing serious heartburn for charter school proponents and critics alike: full-time virtual charter schools. Attendance issues, a nasty court battle, the possibility that the state’s largest e-school (ECOT—The Electronic Classroom of Tomorrow) could have to repay $60 million in state funding, and poor academic performance have led to a growing push to improve e-schools.

The problem in Ohio is clear, but it isn’t limited to Ohio, especially with regard to low academic achievement. A seminal national study by CREDO released in October 2015 found that students in online charter schools across the nation struggled mightily, losing on average 72 days of learning per year in reading and a jaw-dropping 180 days per year in math.

As we all know, identifying problems is easy. The difficulty is in finding solutions. Fortunately, the recently released model charter school law from the National Alliance for Public Charter Schools (National Alliance) offers a half dozen policy ideas intended to address the growing issues posed by online charter schools. These new model provisions include language addressing authorizing structure, enrollment criteria, enrollment levels, accountability for performance, funding level based upon costs, and performance-based funding. The National Alliance acknowledges that not every one of these potential solutions will apply universally, given the unique context of each state’s laws, but it’s worth looking at how four of the model law provisions might impact Ohio.

Performance-based funding

The National Alliance, in one of its most controversial recommendations, suggests that states fund full-time virtual schools via a performance-based funding system. This idea is simple and intuitive on its face, and it confronts head on the student achievement challenges that online charter schools pose to policymakers. However, widespread low achievement in the movement means that performance-based funding would have an enormous impact, making it both technically and politically complicated to implement. The topic has been broached in Ohio, as State Auditor Dave Yost recently called on the General Assembly to examine “learning-based funding,” which would pay e-schools for successfully delivering—not just offering—education. Despite its complex nature, states considering this type of policy don’t have to start from scratch and should investigate similar models being pursued in a handful of states.

Accountability for performance  

The model law suggests that charter contracts for online schools include additional measures in a variety of areas where full-time virtual schools have typically struggled, such as student attendance and truancy. Determining how to track attendance in a virtual school setting is difficult, but states have an obligation to online charter schools and their students to set clear guidelines for attendance. Fortunately, Ohio law already has a pretty clear expectation thanks to the aforementioned HB 2: “Each internet- or computer-based community school shall keep an accurate record of each individual student’s participation in learning opportunities each day.” While this is a great start, the law could be improved by clarifying how full-time virtual schools will be held accountable for student attendance and participation, and how to account for learning that happens when a student isn’t “logged in” to his or her computer.

Enrollment levels

The National Alliance also recommends that states require authorizers to set maximum enrollment levels each year for full-time virtual schools, and that those levels increase based on performance rather than time. Ohio has enrollment restrictions in place, but the limit is based upon year-over-year growth and isn’t impacted by performance. Furthermore, because the size of the movement was already large when the enrollment growth limits—15 percent for schools with more than 3,000 students and 25 percent for schools with fewer than 3,000 students—were enacted in Ohio, there really hasn’t been much of an impact. States considering adopting this model law provision would be wise to consider the size of their existing movements and ensure that academic success includes both proficiency and student growth. In the long term, managing enrollment growth could help ensure that the most successful online schools are able to serve the most students. It could also prevent an individual online charter school from becoming “too big to fail” (i.e., closing the school would be too disruptive to students) or too politically powerful to be held properly accountable for academic performance.
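To make the contrast concrete, here is a minimal sketch (in Python) of how a time-based cap like Ohio’s differs from a performance-based cap of the kind the model law envisions. The function names, the 10 percent growth step, and the pass/fail trigger are illustrative assumptions, not provisions of Ohio law or of the model law.

```python
def ohio_style_cap(current_enrollment):
    """Year-over-year cap as described above: 15% growth for e-schools with
    more than 3,000 students, 25% for smaller ones. Performance plays no role."""
    rate = 0.15 if current_enrollment > 3000 else 0.25
    return int(current_enrollment * (1 + rate))

def performance_based_cap(current_enrollment, met_academic_targets):
    """Hypothetical model-law-style cap: enrollment may grow only when the
    school hits agreed-upon targets; the 10% step is an illustrative assumption."""
    return int(current_enrollment * 1.10) if met_academic_targets else current_enrollment

# A 9,000-student e-school could add 1,350 seats under the time-based cap
# regardless of results, but none under the performance-based cap if it
# misses its academic targets.
print(ohio_style_cap(9000))                 # 10350
print(performance_based_cap(9000, False))   # 9000
```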

Enrollment criteria

Charter schools, including online schools, are public schools and must enroll all interested students. This has always been a core principle, but the new model law acknowledges that this idea may need to be reexamined in the context of full-time virtual schools. Because of the incredibly low achievement of online charter school students, it’s becoming increasingly clear that students without strong learning supports and/or the proper preparation are struggling mightily in an online environment. A recent study from the Thomas B. Fordham Institute shows that Ohio e-school students are lower-achieving, more likely to have repeated a grade, and more likely to be low-income than other students. In other words, e-school students are those who are most desperately in need of a quality education. Unfortunately, the same study shows they’re not getting it: Across all grades and subjects, e-school students have lower performance in math and reading than otherwise-similar students who attend brick-and-mortar district schools. There’s a pretty significant moral quandary here: If full-time virtual schools consistently fail to serve a certain subset of students—a subset that’s most in need of a quality education—then at what point do they forfeit their right to educate these students?

There are two potential solutions here. The first is to transition virtual schools out from under the charter umbrella and establish them as their own type of public school. This would allow them to establish enrollment criteria, much like magnet schools operated by many school districts. This change would allow online charter schools to serve the students who would most benefit from their model without causing potentially irreparable academic harm to enrolled students who aren’t a good fit. In addition, by allowing virtual schools to determine whom they can best serve, it would be easier and fairer to hold them accountable for student achievement under a state accountability system.

The second option is to continue to require virtual schools to serve everyone but build some flexibility into the law. For example, recent changes in HB 2 explicitly allow Ohio full-time virtual charter schools to require an orientation course for new students. Allowing parents and students to better understand from the beginning the expectations and responsibilities inherent in online education is critical. Another policy option would be to require full-time virtual charter school leaders and teachers to engage with students and parents when students fall behind or struggle to meet attendance requirements. If counseling and conferences fail to address the issues, schools could even be required to assist the student in finding a more traditional public charter or district school.

The National Alliance deserves praise for developing policy options that could address the appallingly low performance of many full-time virtual charter school students. There are too many students exercising this important educational option to simply turn a blind eye to its still-developing structure. As should be clear from examining how some of the model law’s recommendations would apply in Ohio, this isn’t going to be easy. Policies will—and should—vary considerably from state to state. Overall, the model law provides a great starting point for states when deciding how to help their online charter schools better serve students, and it couldn’t have come at a better time.

Editor’s note: This article was originally published on the National Alliance for Public Charter Schools’ Charter Blog.

 
 

Back in 2011, the Obama administration released its plan for improving teacher education. It included a proposal to revise Title II regulations under the Higher Education Act to focus on outcomes-based measures for teacher preparation programs rather than simply reporting on program inputs. It wasn’t a smooth process. Serious pushback and a stalemate on a federal “rulemaking” panel followed. Draft regulations were finally released in 2014, but were immediately met with criticism. Many advocates wondered if the regulations would ever be finalized.

On October 12, the wondering ceased—the U.S. Department of Education at last released its final teacher preparation regulations. While the final rules number hundreds of pages, the provisions garnering the most attention are those outlining what states must annually report for all teacher preparation programs—including traditional, alternative-route, and distance programs. Indicators are limited to novice teachers[1] and include reporting placement and retention rates of graduates during the first three years of their teaching careers, feedback via surveys on effectiveness from both graduates and employers, and student learning outcomes. These indicators (and others) must be included on mandatory institutional and state teacher preparation program report cards that are intended to differentiate between effective, at-risk, and low-performing programs.

The public nature of the report cards ensures a built-in form of accountability. States are required to provide assistance to any program that’s labeled low-performing. Programs that fail to earn an effective rating for two of the previous three years will be denied eligibility for federal TEACH grants, a move that could incentivize aspiring teachers to steer clear of certain programs.

What do these new federal regulations mean for the Buckeye State? Let’s take a closer look.

The Ohio Department of Higher Education already puts out yearly performance reports that publicize data on Ohio’s traditional teacher preparation programs. Many of the regulations’ requirements, like survey results and student learning outcomes, are included in these reports, so the Buckeye State already has a foundation to work from. But right now, Ohio releases its performance reports for the sake of transparency. Institutions aren’t differentiated into performance levels, and there are no consequences for programs that have worrisome data. In order to comply with the federal regulations, Ohio is going to have to start differentiating between programs—and providing assistance to those that struggle. 

Helpfully, the differentiation into three performance levels occurs at the program level, not at the institutional level. This matters because the institutional label is an umbrella that covers several programs, and programs don’t always perform equally well. For example, in NCTQ’s 2014 Teacher Prep Review, the University of Akron’s (UA) undergraduate program for secondary education earned a national ranking of 57. But UA’s graduate program for secondary education earned a very different grade—a national ranking of 259. Using NCTQ’s review as a proxy for the upcoming rankings reveals that grouping all the programs at a specific institution into one institutional rating could hide very different levels of program performance.

Meanwhile, the regulations’ student learning outcomes indicator presents an interesting challenge. This indicator requires states to report annually on student learning outcomes determined in one of three ways: student growth (based on test scores), teacher evaluation results, or “another state-determined measure that is relevant to students’ outcomes, including academic performance.”

Requiring teacher preparation programs to be evaluated based on student learning won’t be easy for Ohio (or many other states). If Ohio opts to go with student growth based on test scores, it’s likely this will mean relying on teachers’ value-added measures. If this is indeed the case, the familiar debate over VAM is sure to surface, as is the fact that only 34 percent of Ohio teachers actually have value-added data available[2]. Even if Ohio’s use of value-added is widely accepted, methodological problems also exist. For instance, the federal regulations’ program size threshold is 25 teachers, and smaller preparation programs in Ohio aren’t going to hit the mark each year. This means that while bigger programs are going to be held accountable for student learning outcomes during graduates’ first three years of teaching, smaller programs aren’t going to be held to the same standard. There’s also the not-so-small problem that value-added estimates are most precise when they are based on multiple years of data—and novice teachers simply won’t have multiple years of data available.

Using overall teacher evaluation results isn’t a much better alternative. The Ohio Teacher Evaluation System (OTES) needs some serious work—particularly in the realm of student growth measures, which could imprecisely evaluate teachers in many subjects and grade levels due to the use of shared attribution and Student Learning Objectives (SLOs). The third route—using “another state-determined measure”—is also challenging. If there were a clear, fair, and effective way to measure student learning without focusing on test scores and teacher evaluations, Ohio would already be using it. Unfortunately, no one has been able to come up with anything yet. The arrival of new federal regulations isn’t likely to inspire a sudden wave of quality ideas.

In short, none of the three options provided for measuring student learning outcomes is a good fit.  Worse yet, Ohio is facing a ticking clock. According to the USDOE’s timeline, states have the 2016-17 school year (which is already half over) to analyze options and develop a reporting system. States are permitted to use the 2017-18 school year to pilot their chosen system, but systems must be fully implemented by 2018-19. Whatever the Buckeye State plans to do in order to comply with the regulations, it’s going to have to make up its mind fast.      

While the regulations’ call for institutional and state report cards is a step in the right direction in terms of transparency and accountability, implementation is going to be messy and perhaps impossible. There are no clear answers for how to effectively evaluate programs based on student learning outcomes. Furthermore, the federally imposed regulations seem to clash with the flexibility that the ESSA era was supposed to bring to the states.[3] Unless Congress takes on reauthorization of the Higher Education Act, it looks like states are going to have to make do with flexibility under one federal education act and tight regulations (and the resulting implementation mess) under another.


[1] A novice teacher is defined as “a teacher of record in the first three years of teaching who teaches elementary or secondary public school students, which may include, at a state’s discretion, preschool students.”

[2] The 34 percent comprises teachers whose evaluation scores are based entirely on value-added measures (6 percent), teachers whose scores are partially based on value-added measures (14 percent), and teachers whose scores can be calculated using a vendor assessment (14 percent).

[3] It’s worth noting that the provisions related to student learning outcomes did undergo some serious revisions from their original state in order to build in some flexibility. The final regulations indicate that the Department backed off on requiring states to label programs effective only “if the program had ‘satisfactory or higher’ student learning outcomes.” States are also permitted to determine the weighting of each indicator, which includes determining how much the student learning outcomes measure will impact the overall rating.

 
 

To ensure that pupils aren’t stuck in chronically low-performing schools, policymakers are increasingly turning to strategies such as permanent closure or charter-school takeovers. But do these strategies benefit students? A couple of recent studies, including our own from Ohio and one from New York City, have found that closing troubled schools improves outcomes. Meanwhile, just one study from Tennessee has examined charter takeovers, and its results were mostly inconclusive.

A new study from Louisiana adds to this research, examining whether closures and charter takeovers improve student outcomes. The analysis uses student-level data and statistical methods to examine the impact of such interventions on students’ state test scores, graduation rates, and matriculation to college. The study focuses on New Orleans and Baton Rouge, with the interventions occurring between 2008 and 2014. During this period, fourteen schools were closed and seventeen were taken over by charter management organizations. Most of these schools—twenty-six of the thirty-one—were located in New Orleans. The five Baton Rouge schools were all high schools.

The study finds that students tend to earn higher test scores after their schools are closed or taken over. In New Orleans, the impact of the interventions was positive and statistically significant on state math and reading scores. New Orleans high-schoolers also experienced an uptick in on-time graduation rates as a result of the interventions, though the Baton Rouge analysis reveals a negative impact on graduation (more on that below). No significant effects were found on college-going rates in either city. With respect to intervention type, the analysis uncovers little difference. Both closure and charter takeover improved pupil achievement. Likewise, the effects on graduation rates were similar—overall neutral when both cities’ results are taken together.

More importantly, the research indicates that these intense interventions benefit students most when they result in attendance at a markedly better school. Post-intervention, New Orleans students attended much higher-performing schools, as measured by value added, while in Baton Rouge, students landed in lower-quality schools, perhaps explaining the lower graduation rates. Furthermore, the analysis suggests that the positive effects are more pronounced when schools are phased out over time—that is, the closure or takeover is announced and no new students are allowed to enroll—thus minimizing the costs of disruption. These results largely track what we found in Ohio, where students made greater gains on state tests when they transferred to a higher-performing school post-closure.

Though such interventions are not well liked by the general public, the hard evidence continues to accumulate that, given quality alternatives, students benefit when policymakers close or strongly intervene in dysfunctional schools.

SOURCE: Whitney Bross, Douglas N. Harris, and Lihan Liu, The Effects of Performance-Based School Closure and Charter Takeover on Student Performance, Education Research Alliance for New Orleans (October 2016). 

 
 

“If schools continue to embrace the potential benefits that accompany surveillance technology,” assert the authors of a new report issued by the National Association of State Boards of Education (NASBE), “state policymakers must be prepared to confront, and potentially regulate, the privacy consequences of that surveillance.” And thus they define the fulcrum on which this seesaw of a report rests.

Authors J. William Tucker and Amelia Vance do not exaggerate the breadth of education technology that can be used for “surveillance,” either by design or incidentally, citing numerous examples that range from the commonplace to ideas that Big Brother would love. We are all familiar with cameras monitoring public areas in school buildings, but as police use of body cameras increases, school resource officers will likely be equipped with them as well. The authors note that a district in Iowa even issued body cameras to school administrators. (Our own Mike Petrilli wondered a few years ago about putting cameras in every classroom.)

Cameras have been commonplace inside and outside of school buses for years, but now student swipe cards and GPS bus tracking mean that comings and goings can be pinpointed with increasing accuracy. Web content filters are commonplace in school libraries, but the proliferation of one-to-one devices has led to monitoring applications for use both in the classroom and in students’ homes. Even a student who provides his or her own laptop can be fully monitored when using school Wi-Fi networks. Social media monitoring of students is an imprecise science, but the authors report it is becoming more sophisticated and more widespread in order to identify cyberbullying incidents or to predict planned violent acts on school grounds. Verging on science fiction, they also note the increasing use of thumbprint scanners, iris readers, and other biometric data-gathering apparatus.

The authors are thorough in listing the intended benefits of all of these surveillance efforts—student safety, anti-bullying, food-service auditing, transportation efficiency, etc. Those benefits likely made the adopted surveillance an easy sell in schools that have gone this route. But on the other side of the fulcrum are two equally large areas of concern: privacy and equity. These issues are addressed by the report on a higher, more policy-oriented level. Privacy concerns are addressed in terms of which data are, by default, kept by schools (all of it) and for what length of time (indefinitely). The authors assert that without explicit record keeping policies (or unless the storage space runs out), there is neither will nor incentive to do anything but save the data. Additionally, there are unanswered questions, such as what constitutes a student’s “educational record” and by whom that data may be accessed. For example, details of disciplinary actions may be educational records, but what about the surveillance video that led to that disciplinary action? Equity concerns are addressed in terms of varying and unequal degrees of surveillance (high school kids who can afford cars are not monitored on the way home at all, for example) as well as inequitable “targeting” of surveillance techniques on certain students before anything actionable has occurred.

As a result of this rather wide gulf between facts and policy, even NASBE’s good and thorough list of suggestions for state boards, which aims to balance student safety, privacy, and equity concerns, seems more like a skateboarder’s effort to catch up with a speeding train. Those recommendations are: 1) keeping surveillance to a bare minimum, including discontinuing existing efforts once they are no longer needed; 2) using surveillance only in proportion to the perceived problem; 3) keeping all surveillance methods as transparent as possible to students, parents, and the public; 4) keeping discussion of surveillance use, and possible discontinuation thereof, open to the public; 5) empowering students and parents to use surveillance data in their own defense when disputes arise between students or between students and staff; 6) improving broader inequities in schools so that there is less precedent for families to believe that surveillance is being used inequitably; and 7) training for state and local boards, administrators, teachers, and staff on all aspects of surveillance methods, data use, public records laws, and the like.

Balancing students’ safety and their privacy is a difficult and sensitive job, and the recommendations enumerated here are good ones. But how many state board members have the bandwidth to address surveillance issues at that level of granularity? How many local board members (perhaps a more logical place for these decisions to be made)? And what happens when board member seats turn over? Legislative means of addressing these concerns are not even touched upon in this report.

In the end, it seems that the juggernaut of technology has spawned an unprecedented level of student surveillance, and diffuse, widespread fear for student safety—whether legitimate or not—serves only to “feed the beast.” As well-intentioned as this report and its recommendations are, even the most casual observer of today’s schools can’t help but conclude that the seesaw is definitely tipped toward more and more varied surveillance that is unlikely to be checked at the state policy level.

SOURCE: J. William Tucker and Amelia Vance, “School Surveillance: The Consequences for Equity and Privacy,” National Association of State Boards of Education (October 2016).

 
 

Hopes are high for a new kind of school in Indianapolis. Purdue Polytechnic High School will open in the 2017-18 school year, admitting its first class of 150 ninth graders on the near Eastside. It is a STEM-focused charter school authorized by Purdue University that will utilize a project-based multidisciplinary curriculum intended to give graduates “deep knowledge, applied skills, and experiences in the workplace.”

The location of the school in the Englewood neighborhood is a deliberate step for Purdue, which is aiming to develop a direct feeder for low-income students and students of color into Purdue Polytechnic Institute in West Lafayette. To that end, the high school will teach to mastery—each student moving on to the next level in a subject once they have demonstrated mastery at the current level. If that requires remediation of work, so be it. The school model is designed to keep students engaged, challenge them to reach their maximum potential, and meet high expectations. More importantly, a high school diploma will be “considered a milestone rather than an end goal,” according to the school’s website. College is the expected next step for all Purdue Polytechnic High School graduates. In fact, the high school’s curriculum is modeled on that of Purdue Polytechnic Institute in order to make the transition between the two seamless—minus 65 miles or so.

Shatoya Jordan and Scott Bess have been chosen to lead the new school as principal and head of school, respectively. Both were recently named to the latest class of Innovation School Fellows by The Mind Trust.

Applications for the first class opened last week and hopes are high that this innovative school model will open new doors for students in need of high quality options. Other states, including Ohio, should take note. This partnership could pay big dividends for Purdue, the community, and most importantly, the many low-income students who will have a new opportunity to advance. Hats off to Purdue for supporting this effort.

 
 
 
 

It’s October, and that means election season. One important decision facing many Buckeye voters is whether to approve their school districts’ tax requests. These referenda represent a unique intersection between direct democracy and public finance; unlike most tax policies, which are set by legislatures, these rates are decided, in large part, by the voters themselves. In Ohio, districts must seek voter approval for property taxes above 10 mills (equivalent to 1 percent) on the taxable value of their property.

Some citizens will enter the voting booth well-informed about these tax issues, but for others, the question printed on the ballot might be all they know. Voters have busy lives and they may not always carefully follow their district’s finances and tax issues. This means that the ballot itself ought to clearly and fairly present the proposition to voters. State law prescribes certain standard ballot language, but districts have some discretion in how the proposition is written. County boards of elections and the Secretary of State approve the final language. How does the actual language read? Is it impartial? Can it be easily understood?

Let’s take a look at a few high-profile ballot issues facing voters in November. First, here is the tax issue posed to Cincinnati voters:

Shall a levy be imposed by the Cincinnati City School District, County of Hamilton, Ohio, for the purpose of PROVIDING FOR THE EMERGENCY REQUIREMENTS OF THE SCHOOL DISTRICT in the sum of $48,000,000 and a levy of taxes to be made outside of the ten-mill limitation estimated by the county auditor to average seven and ninety-three hundredths (7.93) mills for each one dollar of valuation, which amounts to seventy-nine and three-tenths cents ($0.793) for each one hundred dollars of valuation, for five (5) years, commencing in 2016, first due in calendar year 2017?

As with all property-tax issues, one of the most complicated terms is “mill”—the unit in which the levy is expressed, equal to one-thousandth of a dollar, or $1 of tax per $1,000 of taxable value. None of us, however, go to the supermarket and buy 100 mills’ worth of groceries; and in the realm of taxes, we’re more accustomed to seeing them expressed as percentages—a 6 percent sales tax, for instance. Because millage rates are so rarely used in everyday life, a voter may find it hard to discern the size of the request. Is 7.93 mills a huge tax hike, or relatively affordable? Unless a voter has done her homework, she probably wouldn’t know. But voters shouldn’t be expected to be tax experts or follow the news to understand the impact on their personal finances. Simpler, less technical language would help the average voter better understand the question. Perhaps the tax could also be stated as a percentage or in concrete dollar terms—for instance, the proposed 7.93-mill levy works out to roughly $793 per year on a property with a taxable value of $100,000.
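For readers unaccustomed to millage, the underlying arithmetic is simple: one mill is one dollar of tax for every thousand dollars of taxable value. Here is a minimal sketch of that conversion; it applies the rate directly to taxable value and ignores Ohio’s assessment ratios, rollbacks, and credits.

```python
def annual_tax(mills, taxable_value):
    """One mill = $1 of tax per $1,000 of taxable value (a rate of mills/1,000).
    This sketch applies the rate directly to taxable value and ignores Ohio's
    assessment ratios, rollbacks, and homestead credits."""
    return taxable_value * mills / 1000

# Cincinnati's proposed 7.93-mill emergency levy, on a home with a taxable
# value of $100,000:
print(annual_tax(7.93, 100_000))   # 793.0 dollars per year
print(7.93 / 10)                   # 0.793 -- the same rate expressed as a percent
```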

Also noticeable in this tax request is the “emergency” language—it is hard to miss when printed in capital letters. While the district is not in fiscal emergency, it is seeking an emergency levy nevertheless. The state permits this type of levy when districts are projecting a financial deficit in future years. But the prominent ballot language could impact the electoral outcome, especially if marginal or undecided voters tip the scales. Perhaps the district is indeed in financial straits, but shouldn’t that case be made independent of the ballot itself? Opponents might argue that the district could address the deficit in other ways, such as by renegotiating unaffordable teacher union contracts. Referenda should be presented as neutrally as possible,[1] because we know from surveys that the wording of questions can alter the results. Though allowed, the use of the word “emergency,” which comes with a powerful connotation, is likely to influence voters.[2]

Now let’s turn to the 274-word question facing Columbus voters.

Shall the Columbus City School District be authorized to do the following: 1. Issue bonds for the purpose of improving the safety and security of existing buildings including needed repairs and/or replacement of roofing, plumbing, fire alarms, electrical systems, HVAC, and lighting; equipping classrooms with upgraded technology; acquiring school buses and other vehicles; and other improvements in the principal amount of $125,000,000, to be repaid annually over a maximum period of 30 years, and levy a property tax outside the ten-mill limitation, estimated by the county auditor to average over the bond repayment period 0.84 mill for each one dollar of tax valuation, which amounts to $0.084 for each one hundred dollars of tax valuation, to pay the annual debt charges on the bonds, and to pay debt charges on any notes issued in anticipation of those bonds? 2. Levy an additional property tax to provide funds for the acquisition, construction, enlargement, renovation, and financing of permanent improvements to implement ongoing maintenance, repair and replacement at a rate not exceeding 0.5 mill for each one dollar of tax valuation, which amounts to $0.05 for each one hundred dollars of tax valuation, for a continuing period of time? 3. Levy an additional property tax to pay current operating expenses (including expanding Pre-Kindergarten education; improving the social, emotional, and physical safety of students; expanding career exploration opportunities; reducing class sizes; providing increased support to students with exceptional needs; and enhancing reading and mathematics instruction) at a rate not exceeding 5.58 mills for each one dollar of tax valuation, which amounts to $0.558 for each one hundred dollars of tax valuation, for a continuing period of time?

I won’t repeat the point about millage, but let me make three additional observations. First and most obviously, this is a complicated request: The district is seeking approval for a tax package that includes not only debt financing but also funding for capital improvements and day-to-day operations. This puts a daunting burden on voters who must either gather the requisite information beforehand, or spend serious time in the booth reading and understanding it.

Second, consider how different Columbus’s tax request is compared to Cincinnati’s. Columbus is seeking a fixed-rate levy at a maximum of 0.5 mills for permanent improvements and 5.58 mills for operations. In contrast, Cincinnati is seeking a fixed-sum levy generating $48 million per year, where the tax rate could vary (note the “estimated” rate). Also, there is no set time at which Columbus’s taxes would expire, while Cincinnati’s would sunset after five years. This illustrates how varied Ohio’s different property-tax types are, adding more complexity to what voters must know in order to make an informed decision.

Third, note how the 5.58-mill request lists several specific purposes of the levy, such as expanded pre-K, reduced class sizes, and other initiatives. Other district tax requests don’t include such specific lists and could be thought of as more neutral. For instance, Cleveland’s levy request simply states that it would be used for “current expenses for the school district and partnering community schools.” Similarly, Hilliard’s levy request says its purpose is for “current operating expenses.” That’s it. Nothing more with respect to the levy’s purpose. Does enumerating a handful of likable programs improve the chances of passage? It’s hard to know, of course, but such lists do seem to frame the tax in a more favorable light.

One could argue that voters are responsible for educating themselves before they enter the booth, and that the question itself doesn’t matter. To be fair, local media usually cover school tax issues—albeit much less than top-of-the-ticket races—and I suspect a fair number of voters come in modestly well informed. But we also know that some voters might not be quite as well attuned. That means the ballot’s words matter and, if the examples of Cincinnati and Columbus are any indication, the language for property tax referenda could be made more understandable and fair. Accomplishing this will probably require revisions in state tax law and/or changes in how county boards oversee districts’ ballot language.

To be clear, I’m not taking a position on either of these tax issues. The benefits of each tax could very well outweigh the costs, or vice-versa. Nor am I suggesting that direct democracy is an inappropriate way of setting tax policy. Other taxing arrangements, of course, have their own set of challenges. My point is that so long as voters are tasked with setting property tax rates, the referenda should be presented as clear, simple, and unbiased propositions. As economist John Cochrane has argued, one imperative of modern governing is to “bring a reasonable simplicity to our public life.” Reasonable simplicity in tax referenda language seems to be warranted.


[1] In the case of the “Brexit” vote, the neutrality of the referendum language came into question and the government was forced to revise it. In Indiana, school tax referenda language has been disapproved by the state on the grounds that it might bias the vote. See here, here, and here for examples of disapproved ballot language.

[2] A look at a couple other emergency levy requests also reveals prominent typeface, so this is not unique to Cincinnati’s emergency request. See here for Parma and here for East Knox.

 

 
 

The Ohio Department of Education (ODE) recently released the results of its revised sponsor evaluation, including new ratings for all of the state’s charter-school sponsors. Called “authorizers” in most other states, sponsors are the entities responsible for monitoring and oversight of charter schools. Under the current rating system, sponsors are evaluated in three areas—compliance, quality practice, and school academic outcomes—and receive overall ratings of “Exemplary,” “Effective,” “Ineffective,” or “Poor.” Of the sixty-five Buckeye State sponsors evaluated, five were rated “Effective,” thirty-nine “Ineffective,” and twenty-one “Poor.” Incentives are built into the system for sponsors rated “Effective” or “Exemplary” (for instance, only having to be evaluated on the quality practice component every three years); however, sponsors rated “Ineffective” are prohibited from sponsoring new schools, and sponsors rated “Poor” have their sponsorship revoked.

Number of charter schools by sponsor rating

Evaluating sponsors is a key step in the direction of accountability and quality control, especially in Ohio, where the charter sector has been beset with performance challenges. Indeed, the point of implementing the evaluation was twofold. First, the existence of the evaluation system and its rubric for ratings is meant to prod sponsors to focus on the academic outcomes of the charter schools in their portfolios. Second, it’s designed to help sponsors improve their own work, which would result in stronger oversight (without micromanagement) of schools and an improved charter sector. Results-driven accountability is important, as is continually improving one’s practice.

What happens next is also important. ODE has time to improve its sponsor evaluation system before the next cycle, and it should take that opportunity seriously. Strengthening both the framework and the process will improve the evaluation. Let us offer a few ideas. 

First, the academic component should be revised to more accurately capture whether schools are making a difference for their students. Largely as a function of current state policy, Ohio charters are mostly located in economically challenged communities. As we’ve long known and are reminded of each year when state report cards on schools and districts are released, academic outcomes correlate closely with demographics. So we need to look at the gains that students are (or aren’t) making in these schools, as well as their present achievement. In communities where children are well below grade level, the extent and velocity of growth matter enormously. Make no mistake: proficiency is also important. But schools whose pupils consistently make well over a year of achievement growth within a single school year are doing what they’re supposed to: helping kids catch up and preparing them for the future.

It’s critical that achievement and growth both be given their due when evaluating Ohio schools—and the entities that sponsor them. Fortunately, Ohio will soon unveil a modified school-accountability plan under the federal Every Student Succeeds Act (ESSA): This would be a perfect opportunity to rebalance school report cards in a way that places appropriate weight—for all public schools and sponsors—on student growth over time.

Because dropout recovery charters are graded on a different scale from other kinds of charters, their sponsors may get artificially high ratings on the academic portion of the sponsor evaluation. That needs fine-tuning too.

The compliance component of the sponsor evaluation system also needs attention. The current version looks at compliance with “all laws and rules,” a list of 319 laws and rules applicable to Ohio’s charter schools, many of which don’t apply to individual sponsors. (For example, many sponsors have no e-schools in their portfolios, and therefore the laws and rules that apply to such schools aren’t really pertinent to them.) Yet all Ohio sponsors were forced to gather or draft more than a hundred documents and memos—many of them duplicative—for each of their schools over a 30-day period. A better way to do this would be to figure out what applies and what matters most, then examine compliance against those provisions. For example, current item 209 (“The School displays a US flag, not less than five feet in length, when school is in session”) is not as important as whether the school has a safety plan (e.g., how to deal with armed intruders). ODE should focus on compliance with the most critical regulations on a regular basis while spot-checking compliance with the more picayune regulations periodically. Another option would be to review a sample of the required documents each year, much as an auditor randomly reviews transactions. The current compliance regimen is hugely burdensome with, in many cases, very little payoff.
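As a rough illustration of what an auditor-style sampling approach could look like, here is a minimal sketch. The critical/picayune split, the item names, and the sample size are our own assumptions, not ODE’s categories.

```python
import random

# Hypothetical split of the 319 laws and rules into a short "critical" list
# checked every year and a long "picayune" list that is only spot-checked.
# The item names and sample size are illustrative, not ODE's actual categories.
critical_items = ["safety plan on file", "background checks completed",
                  "enrollment records accurate"]
picayune_items = [f"rule {n}" for n in range(1, 301)]  # stand-ins for the rest

def annual_compliance_review(sample_size=20, seed=None):
    """Review every critical item plus a random sample of the minor ones,
    much as an auditor samples transactions rather than inspecting them all."""
    rng = random.Random(seed)
    return critical_items + rng.sample(picayune_items, sample_size)

print(len(annual_compliance_review()))   # 23 items reviewed instead of 319
```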

The sponsor evaluation is critically important, and reflects continued progress in Ohio’s efforts to improve charter school outcomes. But it’s also important to get it right if it’s indeed going to improve sponsor practice and, in turn, the charter sector. In its current form, it measures how well a sponsor responded to rubric questions and whether there were enough staff on hand to upload documents. It needs to move quickly to version 2.0 if it’s to be a credible and effective instrument over the long term.

 
 

Our goal with this post is to convince you that continuing to use status measures like proficiency rates to grade schools is misleading and irresponsible—so much so that the results from growth measures ought to count much more—three, five, maybe even nine times more—than proficiency when determining school performance under the Every Student Succeeds Act (ESSA). We draw upon our experience in our home state of Ohio, whose current accountability system generates separate school grades for proficiency and for growth.

We argue three points:

  1. In an era of high standards and tough tests, proficiency rates are correlated with student demographics and prior achievement. If schools are judged predominantly on these rates, almost every high-poverty school will be labeled a failure. That is not only inaccurate and unfair, but it will also demoralize educators and/or hurt the credibility of school accountability systems. In turn, states will be pressured to lower their proficiency standards.
  2. Growth measures—like “value added” or “student growth percentiles”—are a much fairer way to evaluate schools, since they can control for prior achievement and can ascertain progress over the course of the school year. They can also differentiate between high-poverty schools where kids are making steady progress and those where they are not.
  3. Contrary to conventional wisdom, growth models don’t let too many poor-performing schools “off the hook.” Failure rates for high-poverty schools are still high when judged by “value added” or “student growth percentiles”—they just aren’t as ridiculously high as with proficiency rates.

Finally, we tackle a fourth point, addressing the most compelling argument against growth measures:

  1. That schools can score well on growth measures even if their low-income students and/or students of color don’t close gaps in achievement and college-and-career readiness.

(And these arguments are on top of one of the best reasons to support growth models: Because they encourage schools to pay attention to all students, including their high achievers.)

Point #1: Proficiency rates are poor measures of school quality.

States should use proficiency rates cautiously because of their correlation with student demographics and prior achievement—factors that are outside of schools’ control. Let’s illustrate what this looks like in the Buckeye State. One of Ohio’s primary school-quality indicators is its performance index (PI)—essentially, a weighted proficiency measure that awards more credit when students achieve at higher levels. Decades of research have shown the existence of a link between student proficiency and student demographics, and that unfortunate relationship persists today. Chart 1 displays the correlation between PI scores and a school’s proportion of economically disadvantaged (ED) pupils. Schools with more ED students tend to post lower PI scores—and vice-versa.

Chart 1: Relationship between performance index scores and percent economically disadvantaged, Ohio schools, 2015–16

Data source: Ohio Department of Education. Notes: Each point represents a school’s performance index score and its percentage of economically disadvantaged students. The red line displays the linear relationship between the variables. Several high-poverty districts in Ohio participate in the Community Eligibility Provision program; in turn, all of their students are reported as economically disadvantaged. As a result, some less impoverished schools (in high-poverty districts) are reported as enrolling all ED students, explaining some of the high PI scores in the top right portion of the chart.

Given this strong correlation, it’s not surprising that almost all high-poverty urban schools in Ohio get failing grades on the performance index. In 2015–16, a staggering 93 percent of public schools in Ohio’s eight major cities received a D or F on this measure, including several well-regarded schools (more on those below). Adding to their misery, urban schools received even worse ratings on a couple of Ohio’s other proficiency-based measures, such as its “indicators met” and “annual measurable objectives” components. Parents and students should absolutely know whether they are proficient in key subjects—and on track for future success. But that’s a different question from whether their schools should be judged by this standard.
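To make the idea of a weighted proficiency measure concrete, here is a minimal sketch of how an index of this kind can be computed. The achievement levels and weights below are illustrative assumptions, not Ohio’s official values.

```python
# Illustrative weights: higher achievement levels earn more credit. These
# specific values are assumptions for the sketch, not ODE's official weights.
WEIGHTS = {"advanced": 1.2, "accelerated": 1.1, "proficient": 1.0,
           "basic": 0.6, "limited": 0.3, "untested": 0.0}

def performance_index(counts):
    """Weighted proficiency measure: the percentage of students at each
    achievement level times that level's weight, summed across levels."""
    total = sum(counts.values())
    return sum(100 * n / total * WEIGHTS[level] for level, n in counts.items())

# A school where most students score "basic" or "limited" posts a low index
# even if many of those students grew substantially during the year.
print(round(performance_index({"advanced": 5, "accelerated": 10, "proficient": 25,
                               "basic": 35, "limited": 25, "untested": 0}), 1))   # 70.5
```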

Point #2: Growth measures are truer indicators of school quality.

Because they account for prior achievement, ratings based on student growth are largely independent of demographics. This helps us make better distinctions in the performance of high-poverty schools. Like several other states, Ohio uses a value-added measure developed by the analytics firm SAS. (Other states utilize a similar type of measure called “student growth percentiles.”) When we look at the value-added ratings from Ohio’s urban schools, we see differentiation in performance. Chart 2 below shows a fairer balance across the A-F categories on this measure: 22 percent received an A or B rating; 15 percent received C’s; and 63 percent were assigned a D or F rating.*

Chart 2: Rating distribution of Ohio’s urban schools, performance index versus “value added,” 2015–16

*Due to transitions in state tests, Ohio rated schools on just one year of value-added results in 2014–15 and 2015–16, leading to some swings in ratings. In previous years, and starting again in 2016–17, the state will use a multi-year average, which helps to improve the stability of these ratings.
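Ohio’s actual value-added measure (SAS’s EVAAS model) is proprietary and far more sophisticated, but the core idea of a comparative growth measure can be sketched simply: predict each student’s current score from prior achievement, then ask whether a school’s students, on average, beat or fall short of their predictions. A toy illustration with made-up scores:

```python
import numpy as np

def school_growth_effects(prior, current, school_ids):
    """Toy 'comparative' growth measure: regress current scores on prior scores,
    then average each school's residuals. Positive means a school's students
    gained more than predicted. This illustrates the concept only; it is not
    the EVAAS methodology Ohio actually uses."""
    prior = np.asarray(prior, dtype=float)
    current = np.asarray(current, dtype=float)
    X = np.column_stack([np.ones_like(prior), prior])    # intercept + prior score
    beta, *_ = np.linalg.lstsq(X, current, rcond=None)   # fit the prediction line
    residuals = current - X @ beta                       # actual minus predicted
    ids = np.asarray(school_ids)
    return {s: round(residuals[ids == s].mean(), 1) for s in sorted(set(school_ids))}

# Two hypothetical high-poverty schools with similar (low) prior scores:
# School A's students beat their predicted scores; School B's fall short.
prior   = [605, 610, 615, 600, 605, 612, 608, 603]
current = [640, 648, 650, 638, 610, 615, 612, 605]
schools = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(school_growth_effects(prior, current, schools))
# School A's mean residual is positive; School B's is negative.
```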

We suppose one could argue that the performance-index distribution more accurately depicts what is going on in Ohio’s urban schools: Nearly every school, whether district or charter, is failing. Yet we know from experience that this simply isn’t true. Yes, terrible schools exist, but there are also terrific ones whose efforts are best reflected in student growth. In fact, we proudly serve as the charter authorizer for KIPP Columbus and Columbus Collegiate Academy-Main. Both schools have earned an impressive three straight years of value-added ratings of “A,” indicating sustained excellence that is making a big impact in their students’ lives. Yet both of these high-poverty charter schools were assigned Ds on the performance index for 2015–16. That is to say, their students are making impressive gains—catching up, even—but not yet at “grade level” in terms of meeting academic standards. If we as an authorizer relied solely or primarily on PI ratings, these great schools might be shut—wrongly.

Point #3: Growth measures don’t let too many bad schools “off the hook.”

One worry about a growth-centered approach is that it might award honors grades to mediocre or dismal schools. But how often does this occur in the real world? As chart 2 indicates, 63 percent of urban public schools in Ohio received Ds or Fs on the state’s value-added measure last year. In the two previous years, 46 and 39 percent of urban schools were rated D or F. To be sure, fewer high-poverty schools will flunk under value-added than under a proficiency measure. But a well-designed growth-centered system will identify a considerable number of chronically underperforming schools, as indeed it should.

Point #4: It’s true that schools can score well on growth measures even if their low-income students and/or students of color don’t close gaps in achievement and college-and-career readiness. But let’s not shoot the messenger.

Probably the strongest argument against using growth models as the centerpiece of accountability systems is that they don’t expect “enough” growth, especially for poor kids and kids of color. The Education Trust, for example, is urging states to use caution in choosing “comparative” growth models, including growth percentiles and value-added measures, because they don’t tell us whether students are making enough progress to hit the college-ready target by the end of high school, or whether low-performing subgroups are making fast enough gains to close achievement gaps. And that much is true. But let’s keep this in mind: Closing the achievement gap, or readying disadvantaged students for college, is not a one-year “fix.” It takes steady progress—and gains accumulated over time—for lower-achieving students to draw even with their peers. An analysis of Colorado’s highest-performing schools, for example, found that the trajectory of learning gains for the lowest-performing students simply wasn’t fast enough to reach the high standard of college readiness. An article by Harvard’s Tom Kane reports that the wildly successful Boston charter schools cut the black-white achievement gap by roughly one-fifth each year in reading and one-third in math. So even in the most extraordinary academic environments, disadvantaged students may need many years to draw even with their peers (and perhaps longer to meet a high college-ready bar). That is sobering indeed.

We should certainly encourage innovation in growth modeling—and state accountability—that can generate more transparent results on “how much” growth is happening in a school and whether such growth is “enough.” But the first step is accepting that student growth is the right yardstick, not status measures. And the second step is to be realistic about how much growth on an annual basis is humanly possible, even in the very best schools.

***

Using proficiency rates to rate high-poverty schools is unfair to those schools and has real-world consequences. Not only does this policy give the false impression that practically all high-poverty schools are ineffective, but it also demeans educators in high-needs schools who are working hard to advance student learning. Plus, it actually weakens the accountability spotlight on the truly bad high-poverty schools, since they cannot be distinguished from the strong ones. Moreover, it can lead to unintended consequences such as shutting schools that are actually benefiting students (as measured by growth), discouraging new-school startups in needy communities (if social entrepreneurs believe that “failure” is inevitable), or thwarting the replication of high-performing urban schools. Lastly, assigning universally low ratings to virtually all high-poverty schools could breed resentment and pushback, pressuring policy makers to water down proficiency standards or to ease up on accountability as a whole.

Growth measures won’t magically ensure that all students reach college and career readiness by the end of high school, or close our yawning achievement gaps. But they do offer a clearer picture of which schools are making a difference in their students’ academic lives, allowing policy makers and families to better distinguish the lemons from the peaches among schools. If this information is put to use, students should have more opportunities to reach their lofty goals. Measures of school quality should be challenging, yes, but also fair and credible. Growth percentiles and value-added measures meet those standards. Proficiency rates simply do not. And states should keep that in mind when deciding how much weight to give to these various indicators when determining school grades.

 
 

The central problem with making growth the polestar of accountability systems, as Mike and Aaron urge, is that this approach is only convincing if one is rating schools from the perspective of a charter authorizer or local superintendent who wants to know whether a given school is boosting the achievement of its pupils, worsening it, or holding it in some kind of steady state. To parents choosing among schools, to families deciding where to live, to taxpayers attempting to gauge the ROI on schools they’re supporting, and to policy makers concerned with big-picture questions such as how their education system is doing when compared with those in another city, state, or country, that information is only marginally helpful—and potentially quite misleading.

Worse still, it’s potentially very misleading to the kids who attend a given school and to their parents, as it can immerse them in a Lake Wobegon of complacency and false reality.

It’s certainly true, as Mike and Aaron say, that achievement tends to correlate with family wealth and with prior academic achievement. It’s therefore also true that judging a school’s effectiveness entirely on the basis of its students’ achievement as measured on test scores is unfair because, yes, a given school full of poor kids might be moving them ahead more than another school (with higher scores) and a population of rich kids. Indeed, the latter might be adding little or no value. (Recall the old jest about Harvard: Its curriculum is fine and its faculty is strong but what really explains its reputation is its admissions office.)

It’s further true that to judge a school simply on the basis of how many of its pupils clear a fixed “proficiency” bar, or because its “performance index” (in Ohio terms) gets above a certain level, not only fails to signal whether that school is adding value to its students but also neglects whatever is or isn’t being learned by (or taught to) the high achievers who had already cleared that bar when they arrived in school.

Yes, yes and yes. We can travel this far down the path with Mike and Aaron. But no farther.

Try this thought experiment. You’re evaluating swim coaches. One of them starts with kids most of whom already know how to swim and, after a few lessons, they’re all making it to the end of the pool. The other coach starts with aquatic newbies and, after a few lessons, some are getting across but most are foundering mid-pool and a few have drowned. Which is the better coach? What grade would you give the second one?

Now try this one. You’re evaluating two business schools. One enrolls upper middle class students who emerge—with or without having learned much—and join successful firms or start successful new enterprises of their own. The other enrolls disadvantaged students, works very hard to educate them, but after graduating most of them fail to get decent jobs and many of their start-up ventures end in bankruptcy. Which is the better business school? What grade would you give the second one?

The point, obviously, is that a school’s (or teacher’s or coach’s) results matter in the real world, more even than the gains its students made while enrolled there. A swim coach whose pupils drown is not a good coach. A business school whose graduates can’t get good jobs or start successful enterprises is not a business school that deserves much praise. Nor, if you were selecting a swim coach or business school for yourself or your loved one, would you—should you—opt for one whose former charges can’t make it in the real world.

Public education exists in the real world, too, and EdTrust is right that we ought not to signal satisfaction with schools whose graduates aren’t ready to succeed in what follows, even when those schools have done what they can.

Mike and Aaron are trying so hard to find a way to heap praise on schools that “add value” to their pupils that they risk leaving the real world in which those pupils will one day attempt to survive, even to thrive.

Sure, schools whose students show “growth” while enrolled there deserve one kind of praise—and schools that cannot demonstrate growth don’t deserve that kind of praise. But we mustn’t signal to students, parents, educators, taxpayers, or policymakers that we are in any way content with schools that show growth if their students aren’t also ready for what follows.

Yes, school ratings should incorporate both proficiency and growth, but should they, as Mike and Aaron urge, give far heavier weight to growth? A better course for states is to defy the federal Education Department’s push for a single rating for schools and give every school at least two grades, one for proficiency and one for growth. The former should, in fact, incorporate both proficiency and advanced achievement, and the latter should take pains to calculate growth by all students, not just those “growing toward proficiency.” Neither is a simple calculation—growth being far trickier—but better to have both than to amalgamate them into a single, less revealing grade or adjective. Don’t you know quite a bit more about a school when you learn that it deserves an A for proficiency and a C for growth—or vice versa—than when you simply learn that it got a B? On reflection, how impressed are you by a high school—especially a high school—that looks good on growth metrics but leaves its graduates (and, worse, its dropouts) ill-prepared for what comes next? (Mike and Aaron agree with us that giving a school two—or more—grades is more revealing than a single consolidated rating.)

We will not here get into the many technical problems with measures of achievement growth—they can be significant—and we surely don’t suggest that school ratings and evaluations should be based entirely on test scores, no matter how those are sliced and diced. People need to know tons of other things about schools before legitimately judging or comparing them. Our immediate point is simply that Mike and Aaron are half-right. It’s the half that would let kids drown in Lake Wobegon that we protest.

 
 

This report from A+ Colorado examines Denver’s ProComp (Professional Compensation System for Teachers), a system forged collaboratively between the district and teachers union in 2005 that was on the vanguard of reforming teacher pay scales. The analysis is timely for Denver Public Schools and the Denver Classroom Teachers Association, who are back at the negotiating table (the current agreement expires in December 2017).

The A+ report outlines the urgency of getting ProComp’s next iteration right. Denver loses about half of newly hired teachers within the first three years—a turnover rate that is costly not only for the district, which must recruit, hire, and train new teachers, but for the students who are taught by inexperienced educators (research shows that effectiveness increases greatly in the first five years). Denver Public Schools faces another challenge as well: the city’s cost of living has increased sharply. The report notes that more than half of all renters face “serious cost burdens,” meaning they spend more than 30 percent of their income on housing. The situation is worse for homeowners or would-be homeowners. Thus, ProComp is a critical part of “making DPS an attractive place to teach.”

ProComp was revolutionary at its outset. Funded in part through an annual $25 million property tax increase (the cost for the entire system is a sizeable $330 million for 4,300 teachers), it aimed to reward teachers working in hard-to-staff positions and schools, as well as those demonstrating instructional effectiveness, measured in part by student test scores. The average teacher salary change in a given year looks markedly different under ProComp than in traditional pay systems. Last year, teachers received an average $1,444 cost-of-living increase, a $1,253 increase in base pay, and a $4,914 bonus through one-time incentives. Yet A+ finds that the system still “strongly tracks with experience” and that “teacher pay only looks modestly different than it would under a more traditional salary schedule.” That’s because ProComp maintains traditional “steps” for salary based on teachers’ years of experience and credentials. Increases to base pay are determined by negotiated cost-of-living increases, as well as by meeting ProComp objectives. One-time bonuses are available for serving in hard-to-serve schools, boosting student test scores, or working in a high-performing or high-growth school. Denver’s teachers, when surveyed, perceived ProComp as repackaging the same salary as “salary plus bonuses” in exchange for extra work.
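Some rough, back-of-the-envelope arithmetic (our own, using only the figures above and assuming the $330 million figure approximates the district’s total spending on teacher pay) helps put those dollars in perspective. The $25 million incentive pot spread across roughly 4,300 teachers works out to about $5,800 per teacher per year, and $25 million is only about 8 percent of $330 million ($25 million ÷ $330 million ≈ 7.6 percent). That is part of why, as the report notes below, the incentive dollars nudge rather than transform the overall pay structure.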

A+ finds that, despite the intentions and theory of change behind ProComp (to incentivize and reward teachers and ultimately drive student achievement), studies to date have shown mixed results. While the Center for Education Data and Research found small positive effects on student achievement pre- and post-ProComp, that study couldn’t prove causality. A+ concludes that it’s “hard to prove any measurable student achievement gains attributable to ProComp.” Another study, from Harvard University, found that teachers whose students attained the highest and lowest levels of math growth earned about the same.

Even the $25 million pot of money—just 8 percent of the district’s total spending on teacher pay—isn’t targeted to reward individual teachers for effectiveness. In 2015–16, 27 percent of these one-time dollars were allocated for market incentives. Ten percent went to teachers who gained additional education, while 52 percent were aligned to student outcomes—but mostly at the building level. The authors further find that the system is difficult for teachers to understand—a “hodgepodge of incentives” in desperate need of being streamlined and better aligned to solving district challenges. 

Toward that end, A+ makes good recommendations for improving Denver’s system: 1) “Front load” the salary schedule dramatically, awarding 10 percent increases in the first five years (with 1 percent increases thereafter, up to year fifteen); 2) Streamline salary increases and prioritize expertise, specifically by offering two lanes based on education level instead of seven and by allowing subject-matter experts to earn more; 3) Increase pay for teachers teaching in, and returning to, the highest-need schools; 4) Allow base pay increases, rather than stipends, for taking on leadership roles, thereby better aligning pay with one’s career ladder; and 5) Reward high performance among individual teachers, either through more bonuses or through additional promotional opportunities, such as leadership roles and advancement on the salary ladder.

Perhaps the most valuable contribution this report makes is a powerful reminder that ProComp (and any teacher pay system, for that matter) should be aligned with district goals. If Denver wants to mitigate teacher turnover, its pay scale must do more to incentivize teachers to stay at earlier points in their careers. The brief is also pertinent nationally. As the breakdown of Cleveland’s promising teacher pay system reminds us, the challenge lies not only in crafting innovative pay systems but in sustaining them over the long haul. In that respect, there’s a lot to learn from Denver’s eleven-year-old program.

SOURCE: A+ Colorado, “A Fair Share: A New Proposal for Teacher Pay in Denver” (September 2016).

 
 

On October 12, in the ornate Rotunda and Atrium of the Ohio Statehouse, surrounded by family and many of the state’s top education leaders, some of Ohio’s highest-performing beginning teachers were honored for demonstrating superior practice. We at Educopia, Ohio’s partner in administering the Resident Educator Summative Assessment (RESA), feel truly privileged to have hosted the event, which recognized Ohio educators who earned the top 100 overall scores on RESA in each of the past three years. More than 120 of the state’s highest-scoring teachers attended, joined by their spouses, children, and parents in celebration of the honor. State Superintendent Paolo DeMaria, Representative Andrew Brenner, chair of the House Education Committee, and other state policymakers attended the event. Seeing the teachers beam with pride in front of their families and hearing their sincere gratitude for being recognized for their professional excellence was by far the most moving experience of my career in education policy.

For background, RESA is required for all third-year educators seeking a permanent teaching license in Ohio. It consists of four performance tasks that teachers complete by submitting videos, lesson plans, and student assignments from their actual teaching. The assessment was custom-developed for Ohio with the assistance of national experts Charlotte Danielson and Mari Pearlman to accurately mirror Ohio’s Teaching Standards. Ohio educators, who complete extensive training and earn certification by passing a rigorous examination, score the RESA submissions. The teachers honored at the event were among a very select group: over 15,900 educators have taken RESA since its first year in 2013-2014.

The Ohio Resident Educator program gives new teachers the chance to develop their competencies with the support of a mentor. According to Connie Ball, a program coordinator at Worthington Schools, “The Ohio Resident Educator program provides strong support for beginning teachers allowing them the grace of time to grow in the profession and continue to learn through the guidance of a strong mentorship program and a network of their peers. The program encourages teachers to ask, ‘how can I be a better educator tomorrow than I was today?’ and our teachers are certainly meeting that challenge.”

Through RESA, the state then determines whether candidates have the knowledge and skills to lead a classroom anywhere in the state. This process allows local leadership to focus on what they're best situated to do, which is to work with teachers to help them address areas for improvement. It's a bit like the AP test, in which the test is a consistent bar that all students must pass to get credit, and an AP teacher’s job is to help the students get over it. In Ohio, local leaders and mentors are there to help teachers develop the skills assessed on RESA so they can pass and earn their professional license.

RESA is an objective measure of important teaching practices, such as lesson planning, differentiation of instruction, use of assessment, and the ability to engage students intellectually so they understand concepts deeply. It also measures a teacher's ability to reflect and identify ways to improve her own practice, which is absolutely essential in a profession that requires an ongoing commitment to continual improvement.

Demonstrating the skills that RESA measures is a lot of work, as any teacher will tell you. Just as teachers and schools must commit to ongoing improvement, Educopia, the state’s testing vendor, is gathering feedback and working with the Ohio Department of Education to streamline the assessment to alleviate teacher burden. Still, the RESA “tasks” are not busywork; they capture essential skills required of any effective teacher.

On questionnaires distributed at the end of the event, teachers provided suggestions on how to improve RESA and wrote about what they gained from the RESA process. Among their comments:

  • Madison Welker, an 8th grade teacher, commented, “[T]he idea of reflection aided me to further my impact through instruction.”
  • Allison Meyer, a Kindergarten teacher, wrote, “Reflecting upon my teaching practices in a purposeful manner was incredibly beneficial, as it forced me to stop amongst the hectic day-in and day-out and evaluate my own teaching practices.”
  • Jessica Russell, a Pre-K teacher, also commented on the reflection element of RESA, “RESA has helped make lesson reflection second nature! As soon as I finish teaching a lesson I am already thinking about how I can improve it for next time. It has helped me become my best!”

Pre-K teacher Jessica Russell with State Superintendent of Public Instruction Paolo DeMaria
All photos used in this piece are by kind permission of Educopia/Matt Verber

This was the first year that Educopia hosted such an event to honor outstanding RESA candidates, and it is just the first step in our efforts to recognize high-performing educators in Ohio. We encourage these teachers to continue their professional growth and to consider future roles as teacher leaders, so that they can share what they clearly do so well. Although the event on October 12th honored a select group of teachers who scored in the top 100 on RESA, we hope districts across Ohio recognize all their teachers who are successful on the assessment, which is truly an accomplishment that deserves celebration.

Matt Verber is the Executive Director of Policy & Advocacy at Educopia.

 
 
 
 

We take a deep dive into Ohio’s most recent school report cards, look at a first step in addressing chronic absenteeism, and more

Management expert Peter Drucker once defined leadership as “lifting a person's vision to higher sights.” Ohio has set its policy sights on loftier goals for all K-12 students in the form of more demanding expectations for what they should know and be able to do by the end of each grade en route to college and career readiness. That’s the plan, anyway.

These higher academic standards include the Common Core in math and English language arts along with new standards for science and social studies. (Together, these are known as Ohio’s New Learning Standards.) Aligning with these more rigorous expectations, the state has implemented new assessments designed to gauge whether students are meeting the academic milestones important to success after high school. In 2014-15, Ohio replaced its old state exams with the PARCC assessments and in 2015-16, the state transitioned to exams developed jointly by the American Institutes for Research (AIR) and the Ohio Department of Education.

As the state marches toward higher standards and—one hopes—stronger pupil achievement and school performance, Ohioans are also seeing changes in the way the state reports student achievement and rates its approximately 600 districts and 3,500 public schools. Consider these developments:

As the standards grow more rigorous, pupil proficiency rates have declined. As recently as 2013-14, Ohio would regularly deem more than 80 percent of its students to be “proficient” in core subjects. But these statistics vastly overstated the number of pupils who were mastering math and English content and skills. For instance, the National Assessment of Educational Progress—the “nation’s report card”—indicates that just two in five Ohio students meet its stringent standards for proficiency. According to ACT, barely one in three Buckeye pupils reaches all of its college-ready benchmarks. The Ohio Department of Higher Education’s most recent statistics find that 32 percent of college-going freshmen require remediation in either math or English. But with the implementation of higher standards and new exams, the state now reports more honest proficiency statistics: in 2015-16, roughly 55 to 65 percent of students statewide met Ohio’s proficient standard, depending on the grade and subject. Although these rates still overstate the fraction of students meeting a college and career ready standard, parents and taxpayers are gaining a truer picture of how many young people meet a high achievement bar.

Higher achievement standards have also meant lower school ratings, particularly on the state’s performance index. This key report card component is a measure of overall student achievement within a school and one that is closely related to proficiency rates (and, for better and worse, closely correlated with socio-economics). While lower performance index scores affect schools throughout Ohio, they create special challenges when examining the results of high-poverty urban schools. Under softer standards, a fair number of urban schools maintained a C or higher rating on this measure, but now almost all of them receive a D or F performance index rating. In 2015-16, a lamentable 94 percent of urban schools were assigned one of those low grades. (High-poverty schools also receive near-universal Ds and Fs on a couple of other proficiency-based measures.) Because performance index ratings yield so little differentiation, policy makers, analysts, and the media need to use extra care lest they label virtually every urban school a poor performer. Student achievement is indeed low in high-poverty communities, and we all want to see stronger outcomes for disadvantaged children. But by concentrating on proficiency-based measures, we risk calling some schools failures when they are actually helping their students make up academic ground.

That’s where Ohio’s “value added” rating kicks in. This measure utilizes student-level data and statistical methods to capture the growth that students make (or don’t make) regardless of where they begin on the achievement spectrum. Because value added methods focus on pupil growth instead of point-in-time snapshots of proficiency, they can break the link between demographics and schools’ outcomes as measured strictly by achievement. On value added, urban schools can and do perform as well (or as poorly) as their counterparts from posh suburbs. In the present report, we show that 22 percent of Big Eight public schools earned an A or B on the state’s value added measure in 2015-16. Given the criticism of Buckeye charter schools, it is even more notable that a greater proportion of urban charters earned A or B value added ratings than did their Big Eight[1] district counterparts (29 to 19 percent). Although the evidence is based on just one year of results, one hopes that these results represent the onset of an era of higher charter performance after major reforms were enacted in 2015.
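For readers curious about the mechanics, here is a deliberately simplified, hypothetical sketch (in Python, with made-up data and made-up function names of our own) of the core idea behind growth and value-added measures: predict each student’s current score from a prior score, then credit a school with the average amount by which its students beat, or fall short of, that prediction. Ohio’s actual value-added model rests on far more sophisticated, multi-year statistical methods; this sketch is only meant to show why a low-proficiency school can still post strong growth.

from statistics import mean

def fit_line(prior, current):
    """Ordinary least squares fit of current ~ prior; returns (slope, intercept)."""
    x_bar, y_bar = mean(prior), mean(current)
    slope = sum((x - x_bar) * (y - y_bar) for x, y in zip(prior, current)) / \
            sum((x - x_bar) ** 2 for x in prior)
    return slope, y_bar - slope * x_bar

def school_growth(students, slope, intercept):
    """Average residual (actual minus predicted current score) for one school."""
    return mean(current - (slope * prior + intercept) for prior, current in students)

# Hypothetical statewide data: (prior-year score, current-year score) pairs.
statewide = [(600, 610), (640, 655), (700, 705), (720, 735),
             (680, 690), (620, 640), (660, 665), (710, 720)]
slope, intercept = fit_line([p for p, _ in statewide], [c for _, c in statewide])

# Two hypothetical schools: School A starts low but beats its predictions;
# School B starts high but only roughly matches its predictions.
school_a = [(600, 630), (620, 650), (640, 660)]
school_b = [(700, 705), (710, 715), (720, 730)]

print(f"School A average growth: {school_growth(school_a, slope, intercept):+.1f} points")
print(f"School B average growth: {school_growth(school_b, slope, intercept):+.1f} points")

In this toy example, School A’s students start well below School B’s yet beat their predicted scores by roughly fourteen points on average, while School B’s students, despite higher absolute scores, land slightly below prediction. That, in miniature, is the distinction between proficiency and growth.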

While value added scores haven’t noticeably plummeted or inflated with the rising standards, we should point out some important developments in the measure itself. First, during Ohio’s testing transitions, the state has reported value added results based on one-year calculations rather than multi-year averages, as was done prior to 2014-15. Probably as a result, some schools’ ratings have swung significantly; for example, Dayton Public Schools received an F on value added in 2014-15 but an A in 2015-16. One year of value added results can’t perfectly capture school performance—we need to take into account a longer track record on this report card measure.

Second, Ohio’s value added system now includes high schools. Previous value added ratings were based solely on tests from grades four through eight (third grade assessments form the baseline). With the phase out of the Ohio Graduation Tests (OGT) and the transition to high school end-of-course exams, Ohio has been able to expand value added to high schools. (The OGTs were not aligned to grade-level standards, prohibiting growth calculations; EOCs are aligned to the state’s new learning standards.) Starting in 2015-16, the state assigns value added ratings at the high school level (though it reported high school results in the year prior). In the absence of value added, analysts were limited to proficiency or graduation rates that can disadvantage high-poverty high schools. With the addition of value added, we gain a richer view of high school performance.

Shifting to higher learning standards, transitioning to new tests, and evolving to more comprehensive school report cards have led to some frustration. To a certain degree, the frustration is understandable—it has been a challenging start to the long journey toward academic excellence. In the days ahead, Ohioans should absolutely continue to work together to make sure state standards and accountability policies are as rigorous, coherent, and fair as possible. At the same time, the state should ensure continuity in key policy areas so that we can gauge our progress moving forward.

At the end of the day, we should keep the big picture in mind: High standards, properly implemented, help form the foundation for greater student achievement. Several Ohio school leaders appear ready and willing to tackle these challenges. After the report card release, David Taylor, a leader at Dayton Early College Academy, told the Dayton Daily News, “We hope that people have the patience to understand that the goal posts moved…We’re asking a lot more of our kids and their families. That will require patience and a plan.” On the pages of the same newspaper, Scott Inskeep, superintendent of Kettering City Schools, said, “The AIR assessments were tough…We have to get tough, ourselves, and teach to the depth that is needed to assure student success on these tests.” Ohio has charted a more rugged course for its students and schools. If state and local leaders can maintain this course—setting sights on excellence—we should begin to see more young people fully prepared to face the challenges of tomorrow.

Download the full report here.


[1] The Big Eight cities are Akron, Canton, Cincinnati, Cleveland, Columbus, Dayton, Toledo, and Youngstown.

 

 
 

According to the most recent Civil Rights Data Collection (CRDC) compiled by the U.S. Department of Education,[1] an alarming 6.5 million American students, more than 13 percent nationwide, were chronically absent—defined as missing 15 or more days of school—during the 2013-14 school year. Of these students, more than half were enrolled in elementary school, where truancy can contribute to weaker math and reading skills that persist into later grades. Chronic absenteeism rates are higher in high school: Nearly 20 percent of U.S. high school students are chronically absent, and these teenagers often experience future problems with employment, including lower-status occupations, less stable career patterns, higher unemployment rates, and low earnings.

The data get even more disconcerting when they’re disaggregated by location. The CRDC explains that nearly 500 school districts reported that 30 percent or more of their students missed at least three weeks of school during the 2013-14 school year. The idea that certain districts struggle more with chronic absenteeism than others caught the attention of Attendance Works (AW), an organization that aims to improve school attendance policies. To create a more in-depth picture of the problem, Attendance Works combined the CRDC data with statistics from the Census Bureau and the National Center for Education Statistics and released a report with a stunning key finding: Half of the nation’s chronically absent students are concentrated in just 4 percent of districts.[2]

These 654 districts are located throughout 47 states and Washington, D.C., and include cities, suburbs, towns, and rural areas. AW pays particular attention to two groupings within the 4 percent. The first is a group of large, mostly suburban districts with large numbers of chronically absent students, such as Fairfax County, Virginia (where 12 percent of more than 180,000 students are chronically absent) and Montgomery County, Maryland (where 16 percent of more than 150,000 students are chronically absent), which are known for academic achievement but also for their growing low-income populations. The second grouping is composed of “urban school districts with large populations of minority students living in poverty.” AW notes that half the urban districts with high numbers of chronically absent students are highly segregated by race and income: “At least 79 percent of the students in these districts are minority, and at least 28 percent of the children between ages 5 and 17 live in poverty.”

So how did Ohio fare in the AW report? During the 2013-14 school year, the Buckeye State had nearly 1.8 million students, and 265,086 of them (15 percent) were chronically absent—right around the national average. To illustrate its findings, Attendance Works developed interactive maps. Here’s a look at which Ohio districts hold a spot on one of the maps, in order from the highest percentage of chronically absent students to the lowest.

It’s no surprise to see Cleveland with the highest percentage of chronically absent students: CEO Eric Gordon told the Plain Dealer in 2015 that over the previous three years, the district had averaged 57 percent of kids missing ten days or more in a year.

Attendance Works offers a list of six steps for states and districts to take in order to use the data to create an effective action plan. Each step has a variety of additional recommendations, such as adopting a multi-tiered system of support that addresses common attendance barriers and includes interventions like home visits and personalized outreach, developing tailored action plans, and mentoring.

The good news for Ohio is that many of these recommendations are already being considered. House Bill 410, introduced back in December 2015, is a common-sense bill that aims to tackle both the punitive approach to student truancy in the Buckeye State and the lack of clear, consistent data on the problem. (See here for an in-depth overview of the bill.) Unfortunately, the bill has yet to make it out of the Senate.

Recently, some Ohio education groups voiced concerns that many schools lack the required personnel and finances to properly support the absence intervention teams outlined in the bill. (These teams are responsible for developing an intervention plan tailored specifically to the student, with the aim of getting her back to—and keeping her in—school. Teams must include a district administrator, a teacher, and the student’s parent or guardian, and are required to meet certain deadlines.) They also questioned the “extensive reporting” the bill calls for. Given the heavy load of responsibilities that teachers and administrators already have, it would be wise for legislators to seek feedback about how to make absence intervention teams more workable without losing sight of their intended purpose. The same is true for reporting requirements, which could be streamlined but not erased completely.

There’s a growing sense that when lawmakers return to Columbus this fall, they will fine-tune and then pass House Bill 410. That’s a good thing. Improving the data systems and intervention protocols for chronic absenteeism is low-hanging fruit, and the General Assembly should do its part to ensure that fruit is harvested and solid policies around student attendance are in place.


[1] The U.S. Department of Education’s Office for Civil Rights, which conducts the CRDC, notes that its data may differ from those of other published reports due to “certain data decisions.” Find out more here.

[2] Like the USDOE, Attendance Works notes that some of their data are incomplete because of data corrections and submission errors. The authors of the study do not believe these issues change the overall patterns they reported.

 
 

NOTE: All photos used in this piece were graciously provided by the Cleveland Transformation Alliance. The photo at the top of this page features HBCU Preparatory School student Meiyah Hill and school principal Tim Roberts.

Standardized test scores are the most common measure of academic success in our nation’s K-12 schools. While they are an important indicator, most observers would agree that tests don’t tell the whole story about what’s happening in our public schools.

Given the recent changes to Ohio’s assessments and standards and their impact on test scores statewide, the need to tell a deeper story about public education has become even more evident.

We know that Cleveland’s Plan for Transforming Schools is enabling both district and charter schools to create new learning environments that are laying a foundation for sustainable academic improvement. Progress is slow and not always visible from the outside, but it’s happening.

That’s why the Cleveland Transformation Alliance recently partnered with Civic Commons ideastream to share powerful stories about education in Measuring Success Behind the Numbers. The conversation included three storytellers:

  • Student Meiyah Hill talked about how HBCU Preparatory School, a charter middle school in Cleveland, made her feel part of the school family and challenged her so she was ready to get into one of the Cleveland Metropolitan School District’s highest-performing high schools, the School of Architecture and Design at John Hay;
  • Parent Larry Bailey told the story of how he went from being a drop-the-kids-at-the-door dad to leading his school’s parent organization;
  • Principal Lee Buddy, Jr., now in his second year at a district school, spoke of his vision and work to expand partnerships and opportunities for his students.


Cleveland parent Larry Bailey

After the audience heard these stories, we sat down with three educators whose job it is to make sure that thousands of Meiyah Hills can experience the transformative power of education, that many more Larry Baileys get pulled into their children’s education, and that hundreds of leaders like Lee Buddy are empowered to make a difference at the school level.


Lee Buddy, Jr., principal of Wade Park School, with one of his students

Connecting the individual stories to the bigger picture of transformation in Cleveland were Diana Ehlert, CMSD central office administrator; JaNice Marshall, who leads efforts at Cuyahoga Community College to engage K-12 students and parents in preparation for college and career; and Mary Ann Vogel, chief educator at the Intergenerational Schools, part of Cleveland’s successful Breakthrough charter school network. The panel’s dialogue focused on school culture, the importance of the broader community in transformation efforts, and how success is measured on an ongoing basis. We wrapped up with a lively Q&A session.


Student Meiyah Hill (center), with her family

As the Transformation Alliance found in its second annual report on the implementation and impact of the Cleveland Plan, released in September 2016, progress in Cleveland is too slow, but we expect that changes happening now will lead to clearer academic gains in the future. The stories and dialogue shared in Measuring Success Behind the Numbers provided more evidence of the transformation that’s taking shape in our city.

Piet van Lier is Executive Director of the Cleveland Transformation Alliance.

The Cleveland Transformation Alliance is a public-private partnership created to serve as a voice for accountability and advocacy. The Alliance has four work roles: assess all district and charter schools in Cleveland; communicate with families about school quality and options; ensure fidelity to the Cleveland Plan; monitor charter school quality and growth.

*****

On October 27, 2016, the Alliance will host a dialogue focused on how deeper partnerships are driving educational innovation in Cleveland. See www.innoeducate.eventbrite.com for more information.

 
 

This report from the Council for a Strong America provides an alarming snapshot of how ill-prepared many of the nation’s young adults are to be productive members of society.

The Council is an 8,500-member coalition of law enforcement leaders, retired admirals and generals, business executives, pastors, and coaches and athletes. Its inaugural “Citizen-Readiness Index” gives more than three-quarters of states a C or below, due to staggering numbers of young people who are 1) unprepared for the workforce, 2) involved in crime, or 3) unqualified for the military.

Ohio received an overall C grade, earning some of the top marks for workforce and crime indicators. More specifically, 12 percent of Ohio’s young people ages 16–24 were reported to be unprepared for the workforce, a relatively low percentage nationally that earned Ohio a B. Ohio also earned a B on crime, with eight arrests per one hundred people (among those ages 17–24)—one of the lowest numbers nationwide. On military readiness, however, Ohio earned a D. A whopping 72 percent of youth ages 17–24 were ineligible for military service. Eligibility to enter the military depends on a range of factors, including physical fitness and attainment of a high school diploma.

Nationwide, almost a third of our young people (31 percent) are disqualified from serving in the military due to obesity alone. Factoring in drug abuse, crime (more than 25 percent of young adults have an arrest record), and “educational shortcomings” raises that number to 70 percent. (Unfortunately, the military readiness numbers aren’t broken out at the state level. We don’t, for example, know what percentage of Ohio youth are disqualified due to obesity versus other factors.)

These data are shocking and should remind everyone of the stakes at hand. Given the proven and widely known negative correlation between educational attainment and crime, drug use, unemployment, and other negative life events, it is all the more imperative that K–12 schools do a better job preparing young people not just for college but for life as upstanding, productive citizens.

Unfortunately, the report doesn’t address K–12 public school quality, nor does it provide many concrete steps for state or local leaders—where education policy is truly set—to address the citizen-readiness crisis. Instead, it offers a set of recommendations geared specifically at Congress and the next president to address the problem.

Part 1, “strong families,” calls for Congress to reauthorize the Maternal, Infant and Early Childhood Home Visiting (MIECHV) program. Outlining research on the relationship between childhood trauma (affecting nearly a quarter of all children) and crime and drug use, the report makes a case—albeit a loose, indirect one—for reauthorizing the program, which serves 150,000 at-risk parents and kids.

Part 2, “quality early education,” dives into research on the long-term gains offered by high-quality preschool, but it misses the boat in its broad recommendation to reauthorize Head Start and expand the Preschool Development Grant Program. While making a strong moral case for investing in children, the Council overlooks research indicating that academic gains from preschool often wear off and that many current early education programs are woefully insufficient. (It does, however, acknowledge the uneven quality of Head Start.) Further, because it ignores questions about the quality of K–12 public schools, there’s no guarantee that suggested improvements to early learning will be sustained over time and ultimately reap the intended benefits (higher education attainment, lower crime, increased readiness for military, etc.).

Part 3, “healthier schools,” is perhaps the most relevant section, given the coalition behind this report (including military generals, coaches, and athletes), and the one that might be most practically addressed at the state level. Even if the other recommendations were implemented fully and school quality were improved dramatically, obesity would still disqualify a significant number of people from the military. Sixty percent of young adults are obese or overweight (according to standards set forth by the American Medical Association). These numbers are worrisome not only in light of military ineligibility but in terms of lifelong health consequences. The report recommends that Congress and the president “defend science-based nutrition standards” like those embedded in the Healthy, Hunger-Free Kids Act of 2010. Specifically, it calls on lawmakers to support the Child Nutrition Reauthorization introduced last year by the Senate Agriculture Committee. And it implores states to place a greater priority on physical education programs, which have waned in recent years. According to the report, the percentage of schools requiring students to take physical education has declined significantly in the last fifteen years, as has the amount of time spent on recess.

Despite not devoting energy or ink to the academic quality of K–12 schools, the Citizen-Readiness Index does an excellent job of outlining the dire ill-preparedness of too many young people for jobs, college, or the military. The scope and commitment of the bipartisan coalition behind this report are impressive, even though its recommendations take an equally broad, everything-and-the-kitchen-sink approach. As Ohio develops its state accountability plan for the Every Student Succeeds Act (ESSA), it might be worth including “citizen readiness” as a high school indicator.

SOURCE: Council for a Strong America, “2016 Citizen-Readiness Index” (September 2016).

 
 

A large body of research has shown that quality teaching is necessary for students’ achievement and positive labor market outcomes. Rigorous evaluations have been hailed as a way to improve the teacher workforce by recognizing and rewarding excellence, providing detailed and ongoing feedback to improve practice, and identifying low performers who should be let go. While plenty of time has been devoted to how best to provide teachers with feedback, less time has been spent examining how evaluation systems contribute to the removal of underperforming teachers and the resulting changes in the teacher workforce.

This study examines the Excellence in Teaching Project (EITP), a teacher evaluation system piloted in Chicago Public Schools (CPS) in 2008. The program focused solely on classroom observations and used Charlotte Danielson’s Framework for Teaching (FFT) as the basis for evaluation (unlike many current systems, which rely on multiple measures, including student test scores). Roughly nine percent of all CPS elementary teachers participated in the first year of the pilot, which was considered a “low-stakes intervention” because scores on the FFT rubric were not officially included in teachers’ summative evaluation ratings.

Prior to the use of the FFT, teachers in Chicago were evaluated against a rudimentary checklist of classroom practices. This overly generous model led to nearly all CPS teachers (approximately 93 percent) receiving one of the top two ratings in a four-tiered rating system. EITP, on the other hand, utilized the detailed, research-based components of the FFT and required teachers to be evaluated multiple times a year. Principals were trained extensively on how to use the framework effectively and were required to hold conferences with teachers before and after observations. Because the FFT provided teachers and principals with far more detailed information about instructional performance than the previous system, the framework produced more variation in teacher ratings.

The pilot started with forty-four randomly selected elementary schools in 2008-09; the following year, forty-nine schools were added. CPS worked with the University of Chicago Consortium on School Research to craft an experimental design for implementation, and the University of Chicago randomized schools to take part in the first and second cohorts. Treatment and control schools were statistically indistinguishable with regard to prior test scores (reading and math) and student composition.

Although the experimental design was maintained for only one year, researchers were able to determine how the pilot affected teacher turnover. While there was no average effect on teacher exits, the researchers did find that teachers who had low prior evaluation ratings were more likely to leave the district due to the evaluation pilot. In fact, by the end of the first year of implementation, 23.4 percent of low-rated teachers in schools using the EITP pilot left the district, compared to 13 percent of low-rated teachers in control schools.[1] Non-tenured teachers were also “significantly more likely” to leave. Overall, the first year of the pilot saw an 80 percent increase in the exit rate of the lowest-performing teachers and a 46 percent increase in the turnover of non-tenured teachers.[2] The loss of teachers who were both low-performing and non-tenured suggests that “contract protections enjoyed by tenured teachers provided meaningful job security for those who were low-performing,” as there was no difference in the exit of low-rated tenured teachers. Also worth noting is that teachers who remained in EITP schools were higher-performing than those who exited, as were the teachers who replaced exiting educators.
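A quick bit of arithmetic, our own and assuming the 80 percent figure reflects the simple ratio of the two raw exit rates rather than a regression-adjusted estimate, shows where that number likely comes from: 23.4 percent is 10.4 percentage points above 13 percent, and 10.4 ÷ 13 ≈ 0.8. In other words, low-rated teachers in pilot schools left at roughly 1.8 times the rate of their low-rated peers in control schools.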

These findings suggest two important conclusions. First, teacher evaluation reforms like the EITP pilot can indeed improve the quality of the teacher workforce by inducing the exit of low performers. In turn, by replacing low-performing teachers with higher-performing ones, achievement should in theory rise (though the researchers did not specifically test this hypothesis). Second, given that low-rated non-tenured teachers were significantly more likely to leave than low-rated tenured teachers, the researchers surmise that “tenure reform may be necessary to induce low-performing tenured teachers to leave the profession.”

SOURCE: Lauren Sartain and Matthew P. Steinberg, “Teachers' Labor Market Responses to Performance Evaluation Reform: Experimental Evidence from Chicago Public Schools,” The Journal of Human Resources (August 2016).


[1] The researchers note that although “the leave rate of low-rated treatment school teachers is imprecisely estimated because very few teachers received low ratings, it is remarkably stable and large in magnitude.”

[2] In CPS, teachers who are in their first, second, and third year of teaching are non-tenured.

 
 

“No one is born fully-formed: it is through self-experience in the world that we become what we are.” - Paulo Freire

As a child, I always had a sense of myself—a way of understanding who I was/am, in a very concrete and tangible way. When I was a young girl others would often comment that I appeared very grounded and steady. At the time I didn't quite know what they meant because I was usually in my own internal world and not really aware of how others viewed me. But I do remember as a child feeling connected to my familial roots and having a deep perception of and sensitivity to my physical, mental, and spiritual existence. That is what knowledge of self meant to me. And that knowledge—expanded in a decidedly global way—would eventually become my foundation for navigating the world as a gifted child and young woman.

Reflecting on my childhood and upbringing, I can see clearly that my parents already had their own plans to make sure I received an extraordinary education at school and at home. They were committed to having me educated in the public schools, but they certainly did not intend to leave the trajectory of my education and fate of my future to others. They were active in shaping how my schools and teachers would interact with me, starting with advocating for me to be placed in the gifted program—called the Mentally Gifted (MG) program in our school district.

Everything we did in MG seemed to dovetail seamlessly with my parents' vision and efforts to educate me. I frequently took weekend trips with my mom or dad to art exhibitions or the local horticultural center. I helped my grandmother in her community garden and with her homemade soap-making business. I attended graduate classes individually with both of my parents. I took piano lessons with my uncle, who was a trained music teacher, and drove with family friends to New York, where I saw ballets (including the famed Cuban ballet) and Broadway plays and sampled different cuisines such as Japanese tempura and Indian daal. I was beginning to see myself as truly of and in the world at large. In Beloved, Toni Morrison wrote of the character Baby Suggs:

And no matter, for the sadness was at her center, the desolated center where the self that was no self made its home…fact was she knew more about them than she knew about herself, having never had the map to discover what she was like.

This passage has resonated with me for years. It is as if my parents once said to each other, “No. Our Nicole is going to know herself. She has to have a map to discover what she is like.”

Young, Gifted, and Black – Developing Race Consciousness

Self-identifying as a smart and gifted Black girl was beginning to be etched into my psyche and internalized as part of my core identity. That was a good thing—because very soon I would face what seems to be a rite of passage for smart Black kids—bullying and teasing for being different or "acting white.” I did not get teased as badly as a few other students, but I was targeted enough to know I didn’t want or enjoy this kind of attention. Taking public transportation to and from school exposed me to some of the ills from which my parents were trying to shield me. Sexualized cat calls from grown men who viewed young girls as fair game for their lustful desires and incessant teasing from other neighborhood children who didn’t know what to make of a shy girl walking alone from the bus stop carrying a large backpack and violin case were part of my indoctrination into the harsh realities of an urban America shaped by persistent structural racism and sexism.

Part of my buffer was a firm sense of self and race consciousness—an understanding of myself as a young Black girl from a legacy of rich history and beauty and knowledge of the social realities of racial inequality. Race consciousness helped to inoculate me from others’ overt bigotry and internalized self-hatred. My ideas about the world were bigger and more complex. Sneers from racists and taunts from peers were not erased, but those incidents took a backseat to my own evolving map of myself.

Going Global

In the same way that Venus and Serena Williams's parents devised and carried out a plan to raise tennis stars, my parents and family were very strategic about raising a gifted child who would be engaged and immersed in the world at large. Instead of weekly sports lessons, my parents exposed me to constant critical analysis of current events, community activism, global citizenship, and multicultural appreciation.

My family always talked to and about me through an international lens—as a citizen of the world. I remember my dad telling me about his travels throughout Africa and South America as part of his political organizing work. I remember my mom recounting stories of traveling to the World Youth Festival in Germany while she was seven months pregnant with me. When I was still a small girl, my grandmother had long-term visitors from Zimbabwe and London (a “Black Brit” as our guest called herself) staying at her house, exposing me to diversity in a way that no books or school curricula could come close to doing.

The height of this emphasis on internationalism came when my dad announced that he and my mom had signed me up for an international children’s camp in Russia. “Russia! Why?” I thought. I was afraid when they told me I was going to take a plane from New York to Moscow with a group of children who also had activist parents. I worried that the plane would be hijacked or, worse, that my friends would think I was a weirdo for having parents who would send me to such a far-off place, one they had either never heard of or had heard of only as the heart of “evil communism.” But off I went. I stayed for five weeks and traveled throughout the country, interacting with children from literally all over the world.

As scary as the trip was initially, it ended up being one of the most formative and influential experiences of my young life. I learned in firsthand detail about apartheid in South Africa and the civil war in Nicaragua. I encountered the vast diversity of African peoples and cultures when I met kids from Ethiopia, Nigeria, Algeria, and Guinea. I was also confronted with the nasty underbelly of racial stereotyping by a couple of fellow American campgoers. Through that unfortunate incident I learned a valuable lesson in how to exude confidence and navigate others’ ignorance and arrogance. I returned home right before I started high school. I transitioned easily into honors and eventually AP classes at my prestigious all-girls high school. My understated confidence and self-awareness also helped me to flourish socially and emotionally (as much as could be expected for a teen girl!).

I knew early on that I would eventually pursue and attain a Ph.D. in psychology. I also had an unconscious understanding that international travel and living would play an important role in my career development. To that end, I traveled to Cuba while I was in graduate school and gained invaluable insights into race, racism, and anti-racism; I was part of the first group of students in my clinical psychology Ph.D. program that participated in an international internship in Grenada, West Indies; and I even carved out a little time during graduate school to travel to Guinea, West Africa, and Japan to study and perform with different African dance groups.

Then, to no one’s surprise, but many people’s confusion, I applied for a prestigious fellowship to conduct my dissertation research in Ethiopia. While living in Ethiopia, I traveled all over the country and to neighboring Sudan and nearby Egypt by myself. Everyone at home thought I was nuts and worried that I was going alone to chaotic and possibly terrorist states. I laughed at the idea, thinking how they were missing out on the elaborate weddings and tea parties I was enjoying in North Africa. Later, as a professional psychologist, I became involved in clinical and advocacy work with marginalized and underserved urban populations and refugees and asylum-seekers from different countries. I was creating a bridge to my earlier educational and social roots. I ended up doing work in Peru, Liberia, Italy, Haiti, and Senegal and presenting at international conferences in Asia, South America, and Europe. Eventually, I took the plunge to work abroad full-time as a psychologist—first in Bahrain and then in Botswana.

Unwittingly, I had been storing accounts of my international adventures over the years and recently had them published in what I call the ultimate travel guide: Global Insights - The Zen of Travel and BEING in the World. In it, I explore the personal-development value of travel and give tips—for parents, students, young, and old—on how to maximize their travel experience.

My advice to parents: Travel can be one of the best educational enrichment opportunities for your gifted child because it crystalizes so much classroom and life experience. The good news is that, despite financial barriers, it can still be accessible to families from all socioeconomic backgrounds. While my parents were certainly educationally advantaged and able to get me involved in a variety of cultural and travel activities, they were by no means wealthy. Funding is available for travel, especially for high-achieving children. Parents can search private and government sources to help support study abroad and cultural immersion trips so that lack of money does not have to translate into missed opportunities. Help your gifted child see the world, and your child will better understand her or his place in it!

Nicole M. Monteiro, Ph.D. is a clinical psychologist and Assistant Professor of Psychology in the Department of Professional Psychology at Chestnut Hill College. Find out more about her work at www.nicolemonteirophd.com and her book at www.zenwanderlust.com.

 
 
 
 

We look at the breakdown of Cleveland’s merit pay plan, examine school closures, and celebrate the lowering of Ohio’s college remediation rate.

We know that teacher quality is the most important in-school factor impacting student performance—and that the variation in teacher quality can be enormous, even within the same school. We also know that most teachers are paid according to step-and-lane salary schedules that exclusively reward years on the job and degrees earned. These systems pay no attention to instructional effectiveness, attendance, leadership and collaboration within one’s school, or any other attributes relevant to being a good worker.

When I entered the classroom at age twenty-two, I looked at my contract and realized I wouldn’t reach my desired salary until I was in my mid-to-late forties. I would reach that level regardless of whether I took one or fifteen sick days every year; whether I put in the bare minimum or a herculean effort (as many educators in fact do); or whether I clocked out at 3:01 or stayed with my students to offer extra help. No matter the outcomes my kids achieved, my salary would steadily tick upward based only on time accrued. Predictable, yes. But given the urgent task at hand—to keep excellent educators at the instructional helm, address the challenges of burnout and attrition, and professionalize teaching—it’s woefully insufficient.

That’s why the breakdown of the Cleveland Metropolitan School District’s (CMSD) innovative teacher pay system is so disappointing. Developed in partnership with the Cleveland Teachers Union as part of a comprehensive package of reforms to improve the city’s schools, CMSD’s new teacher pay system was codified in 2012 by HB 525. The law earned rare bipartisan support and teacher buy-in and was the first of its kind in Ohio to base annual salary increases on factors beyond years of experience and degrees—which should matter to some extent, just not singularly.

The multifaceted system went beyond the typical forms of “merit pay” largely disliked by teachers. The law required that all of the following be considered: the level of license a teacher holds; whether the teacher is highly qualified; ratings received on performance evaluations; and any “specialized training and experience in the assigned position.” Further, it allowed (but did not require) the district to compensate teachers for additional factors: working in a high-needs school (those receiving Title I funds); working in a challenged school (those in school improvement status); or teaching in a grade level or subject with a staff shortage, a hard-to-staff school, or a school with an extended day or year—all of which are worthy of reward.

The system informed by the law and agreed upon in the 2013 teachers contract retained a fifteen-step pay system, but allowed for teachers’ placements within that system to be determined by how many “achievement credits” they earned, rather than by years of service and degrees. (Teachers were to earn credits through strong evaluation ratings as well as the ways described above.) Depending on their credit total, newer teachers could be placed further along in the new step system than in the previous model, while more experienced teachers wouldn’t automatically go to a higher rung (though no existing teacher would see her pay cut as a result of the new plan and all teachers received a one-time $1,500 bonus during the transition to the new pay scale).

But Cleveland’s promising compensation strategy has fallen apart just three years in, illustrating how a promising plan can die in the hands of bureaucrats and interest groups. There’s been mounting frustration that teacher raises were tied too heavily to annual evaluations rather than a combination of factors as allowed (but not required) by the original law. Despite “hours of meetings over the last four years,” the district and teachers union couldn’t come to basic agreements about how to define or reward performance. And even though 65 percent of teachers earned significant salary increases during the plan’s first two years, the union complained that some stipends were one-time allowances rather than permanent salary bumps. Meanwhile, the district never granted extra compensation for hard-to-fill jobs (saying there were none), nor would it pay extra for teachers working in corrective action schools. Last month, Cleveland teachers moved to strike, forcing all parties back to the negotiating table to reach a deal before the start of the school year.

That deal erases nearly all of the reforms enacted three years ago. It still grants teacher raises according to annual ratings but flattens those raises out and ensures that nearly everyone except the gravely incompetent earns them. Any teacher receiving the top three ratings—accomplished, skilled, or developing—will get the same raise. Much like the traditional step-and-lane structure, it treats nearly all teachers in equal fashion. The key difference is that ineffective teachers (just one percent of CMSD’s teaching force in 2014–15) will have their pay frozen, and teachers earning the top ratings will get a one-time $4,000 bonus. This will benefit CMSD’s best teachers and is perhaps the only detail deserving of praise.

Cleveland’s capitulation is discouraging, especially given the plan’s potential and the manner in which it fell apart. As the fact-finding report depressingly noted, “Neither time nor resources have been expended to build out the system. As a consequence, the District lost the opportunity to lead the country with respect to innovation where compensation systems are concerned.”

Cleveland’s is a cautionary tale about the importance of what happens in the weeds after a law passes. One conclusion to draw is that the details related to CMSD’s teacher pay plan should have been better prescribed in law, leaving no room for gridlock or for either party to shirk its responsibilities. It might also point to the obvious fact that it’s very difficult to achieve change with so many parties at the table—and that’s why policymakers feel they have to resort to “top-down” policy changes like the Youngstown Plan. I’d venture to guess that most policymakers and leaders would like to achieve local buy-in and cooperation—at the very least, few would eschew it on principle. Ohio lawmakers left some details open for CMSD and the union to sort out for themselves—respecting local autonomy and not wanting to over-prescribe policy details. Yet this is where the plan dissipated.

Perhaps the most daunting takeaway is that sustainable change is often resisted, stalled, or derailed due to cognitive inertia—a psychology term that describes what happens when long-held beliefs endure even in the face of counterevidence. When it comes to teacher pay, there seem to be deeply ingrained beliefs that the best way to pay teachers is the old, industrial-era manner in which we’ve always done it (despite evidence to the contrary). Excellent teachers stand to benefit the most from differentiated pay systems. Developing and effective teachers do, too—by receiving meaningful professional development and seeing improvements over time. Only the least effective educators stand to lose anything. Yet teachers aren’t coming out in droves to demand better pay systems that develop and reward them—not in Cleveland and not in most of Ohio.

Bipartisan backing and early teacher buy-in in Cleveland clearly were not enough to prevent the model’s breakdown and the district’s default to a system that largely treats all teachers equally. It may be tempting to conclude from the collapse of CMSD’s promising teacher-pay plan that the law should have been more specific, or that top-down reforms may work better to overcome local gridlock. An equally plausible observation—and one that education reformers may do well to consider—is that until prevailing opinion changes within the profession itself, improvements to teacher pay models will be difficult to sustain. Even heavily prescribed plans can be reversed later. Meanwhile, teachers themselves should consider how moving away from a factory model of compensation to one differentiated for performance and skills is one key step toward reaching a long-held goal: professionalizing teaching. 

 
 

Politicians are wise to pay attention to public opinion data, but they are also responsible for crafting sound policies based on research and evidence. So what are they supposed to do when these two goods conflict?  

Anya Kamenetz at NPR was the first to highlight the contradiction between newly released poll results from PDK International and a variety of research related to school closures (“Americans Oppose School Closures, But Research Suggests They're Not A Bad Idea”). The PDK survey revealed that 84 percent of Americans believe that failing schools should be kept open and improved rather than closed. Sixty-two percent said that if a failing public school is kept open, the best approach to improvement is to replace its faculty and administration instead of increasing spending on the same team. In other words, the majority of Americans are firmly committed to their community schools—just not the people working in them.

These findings shouldn’t come as a huge surprise (as my colleague Robert Pondiscio pointed out here). No one wants to see a school closed, no matter how persistently underperforming. For many communities, schools offer not just an education, but a place to gather that’s akin to communal houses from the past. Enrichment and after-school programs—which are profoundly important for low-income youth—often benefit from the use of school buildings. Buildings can also house wrap-around services like health centers, adult education centers, or day care centers.

In addition to their community-wide implications, school closures have also been called “psychologically damaging” for students. A 2009 University of Chicago report examining closure effects on displaced students in Chicago Public Schools (CPS) found that “the largest negative impact of school closings on students’ reading and math achievement occurred in the year before the schools were closed,” leading researchers to believe that closure announcements caused “significant angst” for students, parents, and teachers that may have affected student learning.

The report also indicates that one year after students left their closed schools, their reading and math achievement was not significantly different on average from what researchers “would have expected had their schools not been closed.” This is possibly due to the fact that most displaced students re-enrolled in academically weak schools—which, though disappointing, isn’t a huge surprise either. Even those who see value in school closures will point out that if there aren’t enough high-quality seats elsewhere for displaced students, the action simply reshuffles students from one bad school to another.

So if closing schools is bad for communities and students and the American public hates it, then why is it happening? As Kamenetz points out in her NPR piece, research shows that closing schools isn’t always bad—and therein lies the contradiction. The same University of Chicago study that points to “significant angst” and flat achievement also acknowledges that “displaced students who enrolled in new schools with higher average achievement had larger gains in both reading and math than students who enrolled in receiving schools with lower average achievement.” Translation? Displaced kids who end up in better schools do better. Fordham’s 2015 study on school closures and student achievement found similar results: Three years after closure, students who attended a higher-quality school made greater progress than those who didn’t. In a recent study of closures in New York City, researchers found that “post-closure students generally enrolled in higher-performing high schools than they would have otherwise attended” and that “closures produced positive and statistically significant impacts on several key outcomes for displaced students.”      

School turnarounds, on the other hand, have almost always been found to disappoint

In The Turnaround Fallacy, Andy Smarick offers three compelling arguments for giving up on “fixing” failing schools. First, data shows that very few past turnaround efforts have succeeded. (California is a prime example: After three years of interventions in the lowest-performing 20 percent of schools, only one of 394 middle and high schools managed to reach the mark of “exemplary progress.” Elementary schools fared better—11 percent met the goal—but the results were still disheartening.) Second, there isn’t any clear evidence for how to make turnaround efforts more successful in the future. Even the Institute of Education Sciences seems unable to find turnaround strategies that are backed by strong evidence. And finally, although the long list of turnaround failures in education makes it reasonable for advocates to look outside the sector for successful models to import, there aren’t many. A review of the “two most common approaches to organizational reform in the private sector” found that both approaches “failed to generate the desired results two-thirds of the time or more.”

Let’s review. Many American schools are consistently failing to properly educate students. The public doesn’t like the idea of closing these schools, but many research studies indicate that students who re-enroll in higher-performing schools perform better than they would have if they’d stayed in their previous schools. What the American public wants is to improve failing schools instead of closing them. Unfortunately, research shows that school turnarounds haven’t worked in the past—and no one has any idea how to make them work in the future. Overall, the phrase “damned if you do, damned if you don’t” seems particularly apropos.

So what’s a policy maker to do when schools are failing to properly educate students? Expanding the number of high-quality seats is a good place to start. We might not have a clue about how to turn around bad schools, but we do know of school models that work, especially in the charter sector. Policy makers who want to give kids immediate access to a great education should invest in expanding and replicating schools and networks that are already doing an excellent job.

Boosting the supply of excellent schools will lead to two important changes. First, many families and children will get immediate relief—and life-changing opportunities in new, better schools. And second, over time, failing schools will see their enrollment plummet, creating a fiscally unsustainable situation. At that point, officials can shut them down—not because they are failing academically, but because they are failing financially. And to my knowledge, no public opinion poll has shown Americans averse to closing half-empty, exorbitantly expensive schools. At least not yet.

 
 

College may not be for all, but it is the chosen path of nearly fifty thousand Ohio high school grads. Unfortunately, almost one-third of Ohio’s college-goers are unprepared for the academic rigor of post-secondary coursework. To better ensure that all incoming students are equipped with the knowledge and skills needed to succeed in university courses, all Ohio public colleges and universities require their least prepared students to enroll in remedial, non-credit-bearing classes (primarily in math and English).

Remediation is a burden on college students and on taxpayers, who pay twice. First they shell out to the K–12 system. Then they pay additional taxes toward the state’s higher education system, this time for the cost of coursework that should have been completed prior to entering college (and for which students earn no college credit). The costs of remediation further emphasize the importance of every student arriving on campus prepared.

Perhaps the bigger problem with remedial education is that it doesn’t work very well. In Ohio, just 51 percent of freshmen requiring remediation at a flagship university—and 38 percent of those in remedial classes at a non-flagship school—go on to complete entry-level college courses within two academic years. It’s even worse at community colleges: Just 22 percent of students go on to take a college course that is not remedial.  

While far too many college-bound students in Ohio aren’t ready for college upon matriculating, the Buckeye State has made some progress in recent years. Back in 2012, 40 percent of entering college students required remedial coursework, raising concerns about a college remediation crisis in Ohio. But the most recent data show that Ohio’s remediation rate decreased to 37 percent in 2013 and then to 32 percent for the high school graduating class of 2014. According to the Ohio Department of Higher Education’s most recent report, more students required math remediation (28 percent) than English (13 percent), and 10 percent of first-time students enrolled in both remedial math and English courses.

Table 1. Remediation by subject area

Source: Ohio Department of Higher Education, “2015 Ohio Remediation Report”

In the absence of rigorous research, we can only speculate about what’s behind this drop in remediation rates. One possible explanation is that fewer students who need remedial education are going straight to college. If this were true, we might expect to see college-going rates declining commensurately with the decrease in remediation rates. But college-going rates, while falling between 2009 and 2013, jumped by 5.6 percent from 2013 to 2014. Though we can’t rule it out entirely, this suggests that college-going trends are probably not a leading explanation for the recent fall in remediation.

Another possibility is that the population of students going to college in 2014 was actually better prepared than in previous years. Thirty-two percent of first-time college students in 2014 required remediation upon entry, compared to 41 percent of first-time students in 2009. Between 2009 and 2014, Ohio implemented higher K–12 educational standards; it is possible that we’re starting to see the fruit of those efforts. (In 2012, Ohio began implementing the Common Core academic standards in math and English language arts, along with new learning standards in science and social studies.) At the very least, it doesn’t appear that rising academic standards are having an adverse impact on college readiness. Despite all the travails, the new learning standards might be giving Ohio’s young people a modest boost when it comes to readiness. Not bad!

Or maybe the credit goes to the implementation of Ohio’s “remediation-free” standards in 2013. Ohio’s standards (for public colleges and universities) detail the competencies and ACT/SAT scores each student must achieve in order to enroll in credit-bearing courses. Now students can predict from their ACT subject scores whether they’ll be able to directly enroll in credit-bearing courses. Many states and colleges opt to enroll all students in credit-bearing coursework with increased support instead of offering remedial courses. But Ohio’s standards fail to address how remedial students must be served and whether their remedial status bars them from acquiring credit even with increased support. However, these statewide standards are also being used to hold high schools accountable for college-preparedness; remediation-free status is now also incorporated in the Prepared for Success measure on the state’s school report cards. Maybe this policy is working as intended—encouraging students to improve their reading and math skills before they reach campus.

Further, it is worth considering whether Ohio’s remediation rate decline is being driven by the incentives its colleges and universities face. Public funding for higher education in Ohio is not linked to the remediation rate, but 50 percent of funding for two-year and four-year institutions is determined by the percentage of degree completions (the graduation rate), which also heavily impacts college rankings. To increase graduation rates and rankings, many universities may seek to decrease the number of students they accept who fall below the remediation-free threshold. Still, this preference does not change the number of students in need of remediation as determined by their ACT score.

Ohio’s declining need for remedial education is good news, though there’s still a ways to go before all students matriculating to college are truly ready for it. It’s not entirely clear what is driving this trend—whether it’s enrollment patterns, policy implementation, a bit of both, or other explanations that we didn’t consider. Certainly more research and analysis on this topic is needed to determine causation. In the meantime, we’ll need to monitor how the remediation trend unfolds in the years to come. The falling remediation rates at least indicate that the state is moving in the right direction. If Ohio can stay the course and maintain high academic standards and a focus on college preparedness, the gap between college aspirations and college readiness will hopefully close even further. 

 
 

Ohio’s report card release showed a slight narrowing of the “honesty gap”—the difference between the state’s own proficiency rate and proficiency rates as defined by the National Assessment of Educational Progress (NAEP). The NAEP proficiency standard has long been considered stringent—and one that can be tied to college and career readiness. When states report proficiency rates that are inflated relative to NAEP, they may label their students “proficient,” but they overstate to the public the number of students who are actually meeting high academic standards.

The chart below displays Ohio’s three-year trend in proficiency on fourth and eighth grade math and reading exams, compared to the fraction of Buckeye students who met proficiency on the latest round of NAEP. The red arrows show the disparity between NAEP proficiency and the 2015-16 state proficiency rates.

Chart 1: Ohio’s proficiency rates 2013-14 to 2015-16 versus Ohio’s 2015 NAEP proficiency

As you can see, Ohio narrowed its honesty gap by lifting its proficiency standard significantly in 2014-15, when it replaced the Ohio Achievement Assessments with PARCC. (The higher PARCC standards meant lower proficiency rates.) Although Ohio did not continue with the PARCC assessments, the chart above indicates that the state continued to raise its proficiency benchmarks on its new reading exams (developed by AIR and ODE). Math proficiency, however, remained virtually unchanged in these grades from 2014-15 to 2015-16.

Despite the frustration that some schools are expressing, Ohio policy makers should be commended for continuing to raise standards in 2015-16. Parents and citizens are now getting a much clearer picture of where students stand relative to rigorous academic goals. 

 
 

Twenty-five years into the American charter school movement, there remains little research on the impact of charter authorizers, yet these entities are responsible for key decisions in the lives of charter schools, including whether they can open and when they must close.

A new policy brief from Tulane University’s Education Research Alliance seeks to shed some light on authorizer impact in post-Katrina New Orleans. Specifically, does the process by which applications are reviewed help to produce effective charter schools? And after those schools have been initially authorized, does that process also shed light on which types of charter schools get renewed?

It merits repeating that the authorizing environment in New Orleans was unlike anywhere else in the country: Louisiana had given control of almost all New Orleans public schools to the Board of Elementary and Secondary Education (BESE) and the Recovery School District (RSD). Independent review of charter applications was mandated in state law, and many organizations applied to open new charters.

To facilitate the application process, BESE hired the National Association of Charter School Authorizers (NACSA). NACSA reviewed and rated applications, and in most cases BESE followed those recommendations. As the authors point out, NACSA is the largest evaluator of charter applications in the country and the extent of its work in New Orleans provides some insights regarding the potential impact of authorizer decisions.

First, NACSA examined much more than the charter application alone, including information gleaned via interviews and site visits. The authors found that the only factor that predicted both charter approval and renewal was a school’s rating from NACSA. Interestingly, the authors also found a number of application factors that had no effect on application approval or renewal, including the number of board members with backgrounds in education; whether partners (vendors providing services such as curricular materials, tutoring, college advising, social services, etc.) were for-profit or non-profit; whether a national charter management organization (CMO) was involved; whether a principal had been identified at the time of the application; and the amount of instructional time and professional development proposed.

Second, there does not appear to be a link between these application factors and future school performance. However, it did appear that applicants with non-profit partners showed lower state performance scores, lower overall enrollment, and lower enrollment growth than those without such partners.

In terms of charter renewal, it appears that School Performance Scores (the SPS includes indicators of assessment, readiness, graduation, diploma strength and progress) and value added (growth) are strong predictors of charter renewal (in addition to the initial NACSA rating). And while charter schools with higher enrollment growth were more likely to be renewed, enrollment levels themselves were not a factor in renewal decisions.

The takeaway for authorizers is that past performance is the best predictor of future success, and that the answers to some of the questions we typically include in applications (e.g., about board members, partners, school leader) really aren’t predictive of anything. Looking at a paper application simply isn’t enough. Authorizers must also examine qualitative data (such as interviewing school leaders and references, and making detailed site visits).

The study acknowledges a few of its limitations: lack of a clear scientific basis for determining which application factors to measure; the ability to only observe the future performance of schools with the strongest applications (the worst applications didn’t make the cut); and, importantly, the fact that many authorizers (nationally, not just in Louisiana) simply have not had enough applications and renewals to make a comprehensive study possible.

But fear not: Those of us at Fordham have our own study in the works to look at this question in four states. Stay tuned!

SOURCE: Whitney Bross and Douglas N. Harris, "The Ultimate Choice: How Charter Authorizers Approve and Renew Schools in Post-Katrina New Orleans," Education Research Alliance (September 2016).

 
 

The annual release of state report card data in Ohio evokes a flurry of reactions, and this year is no different. The third set of tests in three years, new components added to the report cards, and a precipitous decline in proficiency rates are just some of the topics making headlines. News, analysis, and opinion on the health of our schools and districts – along with criticism of the measurement tools – come from all corners of the state.

Fordham Ohio is your one-stop shop to stay on top of the coverage:

  • Our Ohio Gadfly Daily blog has already featured our own quick look at the proficiency rates reported in Ohio’s schools as compared to the National Assessment of Educational Progress (NAEP). More targeted analysis will come in the days ahead. You can check out the Ohio Gadfly Daily here.
  • Our official Twitter feed (@OhioGadfly) and the Twitter feed of our Ohio Research Director Aaron Churchill (@a_churchill22) have featured graphs and interesting snapshots of the statewide data with more to come.
  • Gadfly Bites, our thrice-weekly compilation of statewide education news clips and editorials, has already featured coverage of state report cards from the Columbus Dispatch, the Dayton Daily News, and the Cleveland Plain Dealer. You can have Ohio education news sent directly to your inbox by subscribing to Gadfly Bites.

And most importantly: Fordham’s own annual analysis of Ohio report card data. We look in-depth at schools, districts, and charter schools in the state’s Big 8 urban areas. You can see previous years’ reports here and here. Look for it in the coming days.

 
 
 
 

We look at the benefits of Advanced Placement courses, measurement problems in teacher compensation research, Trump’s visit to a Cleveland charter school, and more

As students and teachers settle back into school routines, thousands of high schoolers are getting their first taste of classes that are supposed to prepare them for college. Some of them are sitting in Advanced Placement courses, while others have enrolled in district-designed advanced courses. In general, most people seem to take it for granted that high school courses that are labeled “advanced” are an effective preparation tool for college. A new analysis out of Brookings calls the conventional wisdom into question.

At issue is whether high school courses impact college performance at all. The Brookings authors point to a 2009 review of college preparation from the Institute of Education Sciences (IES) that found “low evidence” that academic preparation for college actually improved college classroom outcomes. Despite myriad college preparation methods reviewed, none of them—including advanced coursework like AP classes—was strongly predictive of college readiness.

The Brookings authors did some further analysis of their own on the impacts of high school course-taking. After examining a nationally representative database of U.S. students and controlling for academic, demographic, and individual-level variables, they found that, on average, advanced high school courses do little to prepare students to succeed in college courses.[1] Brookings also seems to have busted the myth that college students perform better in subjects that they first studied in high school. For example, students who took a year of high school economics earned a final grade in their college economics class that was only .03 points higher than students who had never taken the subject before—a “trivially small” difference that was true even between students who took the exact same college course.

These findings fly in the face of the common belief that taking certain high school classes is a vital part of college preparation. Brookings offers a few possible explanations, including that students are actually learning the “wrong” things in high school. Despite taking advanced courses, many students may not “sufficiently focus on the critical thinking commonly needed in college.” There is also the unfortunate possibility that students simply forget what they’ve learned, regardless of what kind of class they learned it in. Brookings’s analysis notes that the “very slight advantage from prior coursework” they detected could be explained by “the little information that is retained from high school.” The authors further note that, despite Common Core’s focus on nonfiction and argumentative writing, “at least some top universities doubt whether high schools have developed the capacity to train students for college-level writing.” As a result, some have refused to exempt students from entry-level writing requirements, even if they earned top scores on the AP English exam.

As for solutions, the Brookings authors recommend giving schools “more freedom to experiment with innovative and experimental courses that may be more useful to students in the long term.” This could mean career and technical education (which Ohio already does really well), but it could also include College Credit Plus (CCP). CCP is a relatively new program that offers students the chance to earn high school and college credit at the same time. All Ohio students are eligible to participate (provided that they have reached a college readiness benchmark), and students who choose to earn credits through a public college aren’t charged for tuition, textbooks, or fees.

Technically, CCP isn’t all that innovative—it’s merely a new and improved version of the dual enrollment of yesteryear. But given the Brookings findings, it’s worth questioning why we should push students to spend time (and money for test fees) to take an “advanced” preparation course that doesn’t add much value rather than an actual college course. The same is true for students taking AP courses for college credit: If the end goal is to earn college credit, why not skip the middleman—and the possibility that certain colleges won’t give credit for AP scores—and just take a real college class through CCP?

As it turns out, there are a few nuances to CCP that should force Ohio families to carefully weigh their options. First, CCP offers students three ways to earn college credit: by taking a course on a college campus, through a college course delivered at a student’s high school by a credentialed teacher, or via an online course. Unfortunately, the state currently has no oversight of the rigor of any of these three course pathways. It would be reasonable to assume that a class on a college campus is rigorous, but there are many who disagree. Until the state has some cold hard facts about student achievement and outcomes under CCP, it’s difficult to say that the program is a more rigorous option than typical college-prep courses.

Second, any college course that a student takes under CCP serves double transcript duty: An “A” on a student’s college transcript reflects an “A” on her high school transcript, and a failing grade in the college course will also result in a failing grade on the high school transcript. Advanced high school courses, on the other hand, impact only high school transcripts. For example, a student who doesn’t earn a high enough score on an AP test to obtain college credit (or earns a poor overall grade) doesn’t prematurely damage her college record; she just has to enroll in the class once on campus to receive credit. This is why the college readiness benchmark that’s currently mandatory for CCP participation is so crucial—it prevents academically unprepared students from damaging their college transcripts before they’re officially college students.

The bottom line is that the many college-prep pathways available to high schoolers—including College Credit Plus, Advanced Placement, career tech, and International Baccalaureate—offer different things for different students. While it’s important to carefully consider analyses that question the rigor and effectiveness of each program, it’s equally important to maintain several unique options for Buckeye students and families. Selecting which preparation pathway to follow should be like selecting a school—students and families should be empowered to choose the best fit.

NOTE: This piece has been revised from an earlier version—published under a different title—to better reflect the findings of the Brookings analysis.


[1] The authors note one exception: Students who take calculus “mildly” benefit from high school courses in the subject. This is most likely because calculus “is based on cumulative learning to a greater extent than other subjects.”

 

 
 

How does teaching stack up to other occupations in terms of compensation? A recent analysis from the Economic Policy Institute (EPI), an organization with union ties, has gained attention for its findings on the growing teacher “wage gap.” Using data from the Bureau of Labor Statistics Current Population Survey (BLS-CPS), the EPI analysts report a 17 percent disparity between teachers’ weekly wages and those of other college-educated workers. When they add generous benefits on top—including health care and pensions—that gap shrinks to 11 percent. These differences in wages and total compensation, the authors find, are much wider than what teachers faced in the mid-1990s. Based on their analysis, they suggest raising teacher wages and benefits across the board.

Do the EPI authors get it right? There are a few problems with their analysis: They chose a questionable comparison group by looking at other college-educated workers, and they don’t account for summers off. (Also see economist Michael Podgursky’s Flypaper article, which argues that BLS benefits data undervalue teacher pensions, leading EPI to overstate the gap in total compensation.)

Let’s start with the problem of EPI’s comparison group—workers holding a college degree. By using this group as a benchmark, the analysts don’t factor in the different labor market values of the skills and abilities associated with education majors versus other university degrees. As Andrew Biggs points out, “all bachelor’s degrees aren’t the same.” For instance, though they hold equivalent degrees, an engineer may possess skills that the broader labor market values differently than those obtained by most university-educated teachers. Unfortunately, this important nuance isn’t captured in the EPI analysis. Without attempts to control for the various skill levels among college-educated workers, wage comparisons become highly tenuous.

Turning now to the question of summer vacation: Comparisons of relative wages should account for the shorter work year and, consequently, the fewer number of annual work hours for teachers. The EPI analysts, however, evade this issue by using weekly wages. They write, “Attempts to compare the hourly pay of teachers and other professionals have resulted in considerable controversy by setting off an unproductive debate about the number of hours teachers work at home versus other professionals.” As a result, the authors “elect to use weekly wages to avoid measurement issues regarding differences in annual weeks worked (teachers’ traditional ‘summers off’) and the number of hours worked per week.”

Should they have avoided this issue and assumed that teachers work similar hours as their non-teaching counterparts?

Probably not. In an article recently published in Education Finance and Policy, Kristine West takes a closer look at teacher wages by relying on self-reported estimates of hours worked, including those worked over the summer. She uses the same BLS survey data on wages but supplements her analysis with data from the American Time Use Survey. This survey is administered to a subset of the BLS-CPS respondents, asking them to report how they’ve spent their time over the past twenty-four hours. Such “time diaries,” West maintains, “provide a clearer picture of hours of work for teachers and non-teachers than either administrative data or recall data from surveys.”

Averaged over the calendar year, West finds that teachers work roughly five fewer hours per week than non-teachers (34.5 hours versus 39.8). When basing pay comparisons on hours worked, she finds that elementary and middle school teachers are actually paid slightly higher wages relative to similar workers—evidence that contradicts EPI’s assessment. (That being said, her analysis reveals that high school teachers face a 7–14 percent wage gap.) West’s study does not include benefits—it looked at wages only—so it probably underestimates the relative advantage in total compensation that elementary and middle school teachers enjoy. Of course, teachers aren’t necessarily to blame for working fewer hours; summer vacation is a longstanding tradition, though one that is changing in some places. But West’s study does indicate that ignoring summer in pay comparisons, as EPI did, is a questionable and perhaps political choice.
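
To see concretely why the treatment of hours matters, here is a minimal back-of-the-envelope sketch in Python. The salary figures are invented for illustration (they are not EPI's or West's numbers); only the average weekly hours (34.5 for teachers, 39.8 for non-teachers) come from West's estimates cited above.

```python
# Illustrative only: hypothetical salaries; weekly hours taken from West's
# calendar-year averages. Not a reproduction of either study's analysis.

def hourly_pay(annual_salary, avg_hours_per_week):
    """Pay per hour, using hours averaged over the full calendar year."""
    return annual_salary / (avg_hours_per_week * 52)

# Hypothetical teacher: lower annual salary, fewer hours averaged over the year.
teacher = hourly_pay(annual_salary=58_000, avg_hours_per_week=34.5)

# Hypothetical college-educated comparison worker.
other = hourly_pay(annual_salary=70_000, avg_hours_per_week=39.8)

salary_gap = 1 - 58_000 / 70_000   # ~17% gap when hours are ignored
hourly_gap = 1 - teacher / other   # smaller gap per hour actually worked

print(f"Annual-salary gap: {salary_gap:.1%}")   # 17.1%
print(f"Hourly-pay gap:    {hourly_gap:.1%}")   # ~4.4%
```

With these made-up salaries, a 17 percent annual gap shrinks to roughly 4 percent once pay is expressed per hour worked, which is the basic intuition behind West's adjustment.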

EPI’s greatest shortcoming isn’t faulty measurement, but rather its policy conclusion that raising compensation for each and every teacher would greatly improve American education.

Merely boosting teacher salaries on the backs of taxpayers is unlikely to advance student outcomes. The reason is that today’s teacher compensation structures don’t link pay to increased achievement. In virtually every U.S. school district, a single-salary schedule based on degrees earned and seniority dictates wages. But research has consistently failed to uncover a relationship between degrees and teachers’ contribution to learning. Meanwhile, the link between experience and achievement largely dissipates after a few years in the classroom. Michael Podgursky explains the problem in the Handbook of the Economics of Education: “The single salary schedule suppresses differences between more effective and less effective teachers.” Given this compensation policy, there is no reason to believe that blanket pay raises will do much good for American students.

To be certain, top-performing teachers deserve greater pay, as do educators in high-need subject areas or those working in tougher conditions. Meanwhile, the compensation of poorly performing teachers is incongruent with their meager contributions to student learning. Instead of inefficient salary schedules, school leaders should have the flexibility to allocate pay strategically—just as managers do in many other parts of our economy. So yes, teacher compensation policies need a fundamental overhaul in the U.S. But not in the way that EPI and the teachers’ unions would have you believe.

 
 

There are emerging signs, as I’ve written, that Ohio’s charter law overhaul (HB 2) is working. Significant numbers of poorly performing schools were closed last year, and Ohio’s charter school opening rate has slowed to an unprecedented crawl—both of which serve as evidence that the reforms are influencing sponsor behavior. This tightening of the sector on both ends, while painful for advocates, is absolutely necessary to improve quality overall and tame Ohio charters’ undeniably poor reputation.

It may seem odd that some Ohio charter school advocates are touting the sector’s contraction or this year’s stunted growth (an all-time low of eight new schools). It’s a form of cognitive dissonance shared by those of us who ardently support a family’s right to choose a school but are tired of watching the sector strain under the weight of its own terrible reputation and inflict collateral damage on those high-performing, achievement-gap-closing charter schools that first drew us to the cause.

Cognitive dissonance occurs when reality doesn’t sync up with theory, and when evidence points to something not working as well as the lofty idea of it. For instance, Ohio is a far different place than Massachusetts, where the current cap on charter schools holds hostage children of color who sit at the bottom of waitlists to attend some of the Bay State’s very highest performing schools. Students attending Massachusetts’ charters are unequivocally better off than their district peers. The same cannot be said of their counterparts in Ohio.[1] While there is evidence that Cleveland charters do well and that charters do particularly well with African American students (no small feat—and a statistic that I wish would impress civil rights groups and progressives), the sector has long struggled with quality.

This temporary stalling-out period is a painful reboot for the sake of the 120,000 students currently attending Ohio charters, as well as for future generations of young people who desperately need innovative, yet-to-be-developed options. Therein lies my biggest fear. 

There are hundreds, thousands, maybe tens of thousands of Ohio students for whom regular educational programs just aren’t working. The majority of these kids are enrolled in schools that, for one reason or another, are simply a bad fit. Some are hidden on the margins: students who are homeless, adjudicated, in foster care, bullied, or struggling with addiction.

I met a woman who had a powerful vision for struggling teens in our area. A social worker by trade, she envisioned a charter school for students trying to get sober. I loved the idea, but I also had to wonder what authorizer would willingly take on the unpredictable test scores of substance-dependent students in an era of heightened accountability, when schools’ academic performance counts for one-third of an authorizer’s rating. (That rating, in turn, directly impacts the authorizer’s ability to open future schools). I also wondered how such a school could ever secure start-up funds like federal CSP dollars, which are increasingly reserved for models with proven track records. To be clear: We should absolutely be funneling money toward replicating what works. But we also need to create space for educational entrepreneurs with innovative ideas (and, of course, solid business and academic plans to back them up).

As my colleagues Checker Finn and Brandon Wright have rightly pointed out, for all its successes the current state of chartering is still too narrow. Many states, like Ohio, limit start-up charters to only the most academically (and, by default, economically) challenged areas. It may be true that the “sector’s most significant accomplishment has been extricating disadvantaged children from bleak prospects in dire inner-city schools”; but chartering is too seldom being used to provide options to other kinds of students, in spite of its potential to do just that. Finn and Wright suggest more specialized schools, such as “those that focus on STEM, career and technical education, high-ability learners, special education, socioeconomic integration,” and more. Their vision for broader, bigger, bolder chartering—also echoed in a recent Mind Trust report—is on-target. But the reality on the ground in Ohio is one where geographic caps on charter schools, limited start-up funds, and a risk-averse environment make it unlikely for such a vision to unfold in the foreseeable future.  

Ohio’s mishaps have made it doubly hard for the sector to tolerate risk at the moment—and impossible for schools that want to operate outside of Ohio’s urban neighborhoods. Oh, we could try to lift the geographic cap in statute, which limits charters to a handful of “challenged districts.” We could push for better facilities and start-up funding to attract new models to our state, or grow them organically. We could talk about what it would take to create charter schools for imprisoned teenagers, for gifted kids in suburban communities, and everything in between. But the sins of our collective past keep making those conversations nearly impossible. It’s a heavy political lift to remove caps while we are still mired in debates about whether online schools need to “offer” (versus actually deliver) learning opportunities to students; while we delay the authorizer ratings around which our accountability system revolves almost exclusively; and while we refuse to hold ourselves to higher standards than the public districts from which charter families have departed.

This is truly depressing. The children and adolescents for whom Ohio schools aren’t working could be well served by alternative charter models of all stripes—the likes of which we’ve never seen here before. But there is hope. In order to pave the way for that to occur, Ohio’s sector must go through a serious reboot—and that work has just begun. It can’t be done alone, however; it is the duty of today’s charter advocates to repair the sector, earn the trust of lawmakers and the public, and hold our schools to a higher standard. If we can do this, the pendulum should swing back toward a reasonable middle ground where there’s a greater tolerance for risk taking and entrepreneurship—for opening the field to new innovative models that reach a broader set of Ohio students. Too many kids in Ohio are still in need of an extraordinary, custom-tailored education. It’s time that Ohio’s charter advocates accept the challenge of much higher expectations and get that pendulum moving. 


[1] It’s worth noting that a recent study of Ohio e-schools found that when online charters were removed, Ohio’s brick-and-mortar charter schools did slightly outperform traditional district schools.

 
 

Last week, several of my Fordham colleagues published a fantastic fifty-state review of accountability systems and how they impact high achievers. Lamentably, they found that most states do almost nothing to hold schools accountable for the progress of their most able pupils. There are several reasons for this neglect, as the report’s foreword discusses; but with states now revamping their school report cards under the new federal education law, they have a great chance to bolster accountability for their high-achieving students.

How did Ohio fare? We’re pleased to report that the Buckeye State is a national leader in accounting for the outcomes of high-achieving students. As the Fordham study points out, Ohio accomplishes this in three important ways. First, to rate schools, the state relies heavily on the performance index. This measure gives schools additional credit when students reach advanced levels on state exams, encouraging them to teach to all learners and not just those on the cusp of proficiency. Second, Ohio utilizes a robust value-added measure that expects schools to contribute to all students’ academic growth, including high achievers (and regardless of whether they come from low- or higher-income backgrounds). Third, state report cards include results for gifted students—a feature that only four other states have worked into their accountability systems. In sum, Ohio earns three out of three stars on accountability for high flyers, a feat matched by only two other states, Arkansas and Oregon. (They both earned three out of four stars, having been eligible for four stars because they award “summative” school ratings, a policy that Ohio does not currently feature.)
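
As a rough illustration of how a performance index rewards schools for moving students beyond bare proficiency, here is a minimal sketch in Python. The weights and student counts are hypothetical, loosely patterned on the idea behind Ohio's measure rather than its official values.

```python
# Illustrative performance-index calculation. The weights below are NOT
# Ohio's official values; they simply show how advanced scores earn extra credit.

ILLUSTRATIVE_WEIGHTS = {
    "limited": 0.3,
    "basic": 0.6,
    "proficient": 1.0,
    "accelerated": 1.1,
    "advanced": 1.2,
}

def performance_index(counts):
    """Average weighted credit per tested student, scaled so proficient = 100."""
    total_students = sum(counts.values())
    credit = sum(ILLUSTRATIVE_WEIGHTS[level] * n for level, n in counts.items())
    return 100 * credit / total_students

# Two hypothetical schools with the same proficiency rate (70 percent) but
# different shares of advanced scores earn different index values.
school_a = {"limited": 10, "basic": 20, "proficient": 60, "accelerated": 5, "advanced": 5}
school_b = {"limited": 10, "basic": 20, "proficient": 30, "accelerated": 15, "advanced": 25}

print(performance_index(school_a))  # 86.5, lower index
print(performance_index(school_b))  # 91.5, higher index despite identical "percent proficient"
```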

Three cheers to Ohio policy makers for shining much-needed light on the outcomes of its top students. As the state transitions to post-NCLB accountability, it should stay the course on the policies discussed above. Naturally, we shouldn’t rest on our laurels; Ohio can and should do more. Here are a few starter ideas.

  1. We need more research. What types of gifted programs are most effective? What are the post-secondary outcomes of our most able kids—are they earning college degrees and are some attending prestigious universities, as one might hope? Having firmer answers to questions such as these would reveal strengths and weaknesses, ultimately helping identify ways to improve our policies and practices.
  2. State policy should empower the families of high-achieving students, especially those who are dissatisfied with their current schooling arrangements. Among other things, this could include support for schools with a special focus on high-achieving or gifted students (akin to the “exam schools” that Checker Finn and Jessica Hockett have written about). Per House Bill 64, Ohio is currently studying whether to start sixteen regional, gifted-focused charter schools. Let’s hope these schools become a reality.
  3. Lastly, though not necessarily a policy initiative per se, we can always do more to celebrate the success of students who are going above and beyond. This could include cheering on Ohio’s Mathletes, geography and spelling bee competitors, mock trial teams, and top musicians. We regularly lavish praise on our best athletes; why not extend that treatment to our most studious and motivated pupils?

Ohio is a national leader in incorporating high achievers into school report cards. We should take pride in that because accountability for results is a key way to drive improvement. As they should, state policy makers will continue to debate policy around gifted and talented education—the gifted operating standards, for example, have become a hot topic in the past year. While the details of these standards are important to iron out, students and their outcomes should stay at the center. When it comes to lifting up high achievers, Ohio has made a great start. Now it’s time to push the envelope even further. 

 
 

Although recent analyses show that the child poverty rate isn't as high as many people believe, the fact remains that millions of American students attend under-resourced schools. For many of these children, well-resourced schools are geographically close but practically out of reach; high home prices and the scarcity of open enrollment policies make it all but impossible for low-income families to cross district borders for a better education.

Some research shows that low-income children benefit from attending school with better-off peers. Middle- and upper-income children may also benefit from an economically diverse setting. In short, income integration is a win-win for everyone involved. So why do the vast majority of school districts in the United States remain segregated by income? The answer isn’t much of a mystery: Schools are mainly funded by locally raised property taxes, which functionally “give wealthier communities permission to keep their resources away from the neediest schools.”

In order to examine just how isolating school district borders can be for low-income students, a relatively new nonprofit called EdBuild recently examined 33,500 school district borders as of 2014 and identified the difference in childhood poverty rates between the districts on either side of each boundary line. (For poverty rates, the report uses the Census Bureau’s Small Area Income and Poverty Estimates, also known as SAIPE.) While a typical school district border has a student poverty rate difference of seven percentage points, EdBuild identified fifty borders where the difference ranged from thirty-four to forty-two percentage points. These fifty borders fall in just fourteen states, and Ohio claims nine spots in the top fifty—more than any other state.

The top four most segregated borders are between: 1) Detroit and Grosse Pointe, Michigan; 2) Birmingham and Vestavia Hills, Alabama; 3) Birmingham and Mountain Brook, Alabama; and 4) Clairton and West Jefferson Hills, Pennsylvania. Clocking in at number five is Fordham’s hometown of Dayton, which has a 40.7 percentage-point difference in student poverty rate from neighboring Beavercreek. Dayton shows up again in the seventh spot with a 40.3 percentage-point difference from neighboring Oakwood. An interesting caveat to Dayton’s story is that Ohio actually has an inter-district open enrollment program. State law permits kids in Dayton—many of whom are enrolled in schools that aren’t just under-resourced but also academically failing—to enroll in one of the surrounding suburban school districts, but only if the receiving district chooses to allow it. Unfortunately, though not surprisingly, both Beavercreek and Oakwood have declined to accept open enrollment students.

The overall picture presented by the fifty most disparate district borders is bleak. For instance, the poorer of these districts have an average poverty rate of 46 percent compared to their wealthier neighbors’ average of just 9 percent. The average home in the affluent districts is worth approximately $131,000 more than the average home in the neighboring high-poverty district; as a result, wealthy districts are able to generate more local funds via property taxes—about $4,500 more per student. This disparity exists despite the fact that several high-poverty districts tax themselves at a higher rate than their affluent neighbors.
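
To make those funding mechanics concrete, here is a toy calculation in Python. Every number is invented, not drawn from EdBuild's report; the point is simply that a district with lower property values can tax itself at a higher rate and still raise far less per pupil than its wealthier neighbor.

```python
# Toy illustration of local property-tax funding; all numbers are invented.

def local_revenue_per_pupil(avg_home_value, homes, tax_rate, pupils):
    """Local property-tax revenue divided by enrollment."""
    return (avg_home_value * homes * tax_rate) / pupils

# Hypothetical high-poverty district: lower home values, higher tax rate.
poor = local_revenue_per_pupil(avg_home_value=90_000, homes=20_000,
                               tax_rate=0.015, pupils=6_000)

# Hypothetical affluent neighbor: higher home values, lower tax rate.
rich = local_revenue_per_pupil(avg_home_value=225_000, homes=20_000,
                               tax_rate=0.012, pupils=6_000)

print(f"High-poverty district: ${poor:,.0f} per pupil")   # $4,500
print(f"Affluent neighbor:     ${rich:,.0f} per pupil")   # $9,000
print(f"Gap:                   ${rich - poor:,.0f} per pupil")
```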

EdBuild’s report doesn’t offer any specific policy recommendations. However, in a related piece in the Atlantic, the organization’s CEO, Rebecca Sibilia, calls for decreasing the importance of district boundaries by “creating a larger tax pool that can fairly resource schools.” These ideas are undoubtedly horrifying to defenders of the public district monopoly and champions of so-called local control, but EdBuild’s report already offers its response to such opposition: “School district boundaries have become the new status quo for separate but unequal. It’s time to rethink the system.”

SOURCE: “Fault Lines: America’s Most Segregating School District Borders,” EdBuild (August 2016).

 
 

The Every Student Succeeds Act requires states to use “another indicator of student success or school quality,” in addition to test scores and graduation rates, when determining school grades. This is in line with the commonsensical notion that achievement in reading, writing, and math, while an important measure, surely doesn’t encapsulate the whole of what we want schools to accomplish for our young people. Reformers and traditional education groups alike have enthusiastically sought to encourage schools to focus more on “non-cognitive” attributes like grit or perseverance, or social and emotional learning, or long-term outcomes like college completion.

We at Fordham wondered whether charter schools might have something to teach the states about finding well-rounded indicators of school quality. After all, when charter schools first entered the scene in the pre-No Child Left Behind era, the notion was that their “charters” would identify student outcomes to be achieved that would match the mission and character of each individual school. Test scores might play a role, but they surely wouldn’t be the only measure.

As the head of Fordham’s authorizing shop in Dayton, I set out to determine which indicators the best charter school authorizers in the nation were using—measures that transcended test scores. Surely, I reasoned, a quarter-century of chartering must have turned up promising approaches.

Well, there’s good news and bad news.

The good news is that it’s common for authorizers to use parent or student satisfaction survey data as one of many pieces of information in school accountability plans. (In fact, we include family survey results in the accountability plans with our schools as well.) Some authorizers also look at student retention from year to year as a proxy for family satisfaction.

The bad news is that I couldn’t find a single authorizer using measures of non-cognitive skills or social and emotional learning for its entire portfolio of schools. And that should be instructive.

The reason is that authorizers use accountability plans to make high-stakes decisions—such as school corrective action, non-renewal, revocation, and closure—that directly impact the hundreds or thousands of families whose children are enrolled in their charter schools. Consequently, it is imperative that those decisions be defensible and grounded in the most objective outcomes possible. And to date, measures of grit et al. aren’t ready for prime time. There’s simply not enough evidence that they are valid and reliable.

I did find a few authorizers that allow schools to develop school-specific accountability criteria. One authorizer’s schools developed metrics regarding student connectivity, character self-assessments, and the degree to which students made positive contributions to their schools and communities. Student surveys are administered to gather this data.

Another authorizer has some of its schools develop program-specific indicators related to environmental education. These include awareness, knowledge, attitudes, skills, and actions. Tools used to evaluate these indicators vary by school and may include student written work, hands-on experiences with natural systems and processes, completion of student questionnaires, and scores achieved during a Socratic seminar.

At present, non-cognitive measures are certainly helpful and informative—a piece of the overall picture. However, they simply are not far enough along to be major factors in accountability decisions. Perhaps ESSA will help drive efforts to more fully develop these types of indicators. For now, though, we should acknowledge that there’s a reason we use test scores and graduation rates as the primary measures of school quality: They are the best we’ve got. 

 
 

GOP presidential candidate Donald Trump recently visited Cleveland Arts and Social Sciences Academy, a charter school educating predominantly minority and low-income children. I write not to comment on Mr. Trump’s candidacy, his thoughts on education policy, or even Ohio’s charter schools. Rather, this is my takeaway from the whole brouhaha—and be forewarned, it’s a wonky one: Ohio needs to return to a multi-year value-added measure.

Here’s why. Charter critics, media, and even a respected education reform group were quick to label Cleveland Arts and Social Sciences Academy a “failure.” They relied on the school’s 2014–15 school report cards, which indeed showed low A–F grades. One glaring rating was the school’s F on Ohio’s value-added measure—not good at face value, because the measure is generally uncorrelated with student demographics and is therefore a metric that high-poverty schools can and do succeed on. (Value added gauges growth over time, regardless of students’ prior achievement.)

Keep in mind, however, that Ohio is presently basing value-added ratings on one year of data—and those ratings can swing quite dramatically from year to year. Consider, for example, that Toledo Public Schools received an A rating on value added in 2013–14. But in 2014–15, the district received an F. Did the district’s performance suddenly collapse? Probably not. The same phenomenon has happened to other schools and districts in Ohio; in fact, we once called this the “yo-yo effect.”

This means that when it comes to value-added ratings, we need to take stock of multiple years of data. One year doesn’t tell the entire story. When we consider the longer track record of Cleveland Arts and Social Sciences Academy in the table below, we actually see that it has performed pretty well on the state’s value-added measure, save for 2014–15 and 2010–11.

Notes: Prior to 2012–13, Ohio rated schools on a three-tier system for value added: Above, Met, or Below. Starting in 2012–13, the state moved to an A–F rating system. In 2013–14, the state rated schools on value added based on a three-year rolling average (if the school had three years of data). With the transition in state exams, it discontinued the multi-year approach and rated schools on one year of data in 2014–15. Ohio will again rate schools on one year in 2015–16 due to another transition in testing.

As I’ve argued before, because value-added scores—and their ratings—tend to fluctuate from year to year, we need to smooth out the volatility to gain a clearer understanding of school performance. It’s similar to how economists make “seasonal adjustments” to monthly employment data to account for, e.g., spikes in retail jobs around the holidays. The state should do the same with value added so as not to mislead the public into believing that a school is a massive failure (or resounding success) based on just one year of data.
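To see how little arithmetic the fix requires, consider this minimal sketch. The yearly figures are hypothetical value-added estimates invented for illustration, not any school’s actual data; the point is simply that a multi-year average dampens the effect of a single bad (or good) year.

```python
# A minimal sketch of the smoothing argued for above: rate schools on a
# multi-year rolling average of value-added results rather than on any
# single year. The estimates below are hypothetical, not real data.
yearly_estimates = {"2012-13": 1.8, "2013-14": 2.4, "2014-15": -2.9}

rolling_average = sum(yearly_estimates.values()) / len(yearly_estimates)
print(round(rolling_average, 2))  # 0.43 -- far less alarming than the
                                  # most recent year viewed in isolation
```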

At the end of the day, Cleveland Arts and Social Sciences Academy may indeed have experienced a “bad year” in 2014–15. To be sure, that rating should not be excused, and there is some reason for concern. But the preponderance of value-added data indicate that the school has been modestly successful over the longer haul, and certainly not an organizational failure in my view. So let’s call this a lesson learned: When it comes to value-added ratings, base them on multiple years of data.

 
 
 
 

We look at Ohio’s subpar family score reports, another charter school success story, the Columbus mayor’s school choice support, and more

Ohio has developed one of the nation’s best school report cards, packed with data and clear A–F ratings for schools and districts. In this light, the reports that parents receive on their own children’s state exam performance are doubly disappointing. Simply put, the current form of these reports is mediocre. They represent a missed opportunity to clearly convey where children stand academically, how well (or not) they are progressing in school, and how bright (or not) their future education prospects are.

Ohio can and should do a better job communicating with families.

The image below displays a snippet from a sample state test score report for 2015–16. The student’s name (Jane) and high school math score (706) are fictitious. The entire document is available at this link both for grades 3–8 and high school.

These score reports have a couple of helpful features that provide context and comparison, such as giving families the ability to relate their children’s scores to various averages. In this example, Jane’s math score lags behind these averages, which might raise flags for her parents. Additionally, the report breaks down the math test results by subtopics (e.g., ratios and proportions, modeling and reasoning) and even offers ideas, if only cursory ones, on how families can help their children improve in each area.

But the score reports also leave a murky picture of achievement. The headline on the front of the report, shown in the figure above, reads (emphasis added): “Jane’s score is 706. She has performed at the proficient level and meets standards for Mathematics.” That sounds great, but careful readers will observe the following note in the report’s glossary: “The accelerated level of performance suggests that a student is on track for college and career readiness.” So which is it? Is Jane doing fine by state standards, or is she off track academically because she hasn’t made it to “accelerated”? Policy wonks in Columbus will know that the discrepancy between the fine print and the headline is explained by Ohio’s failure to match its proficiency standards with college-and-career-ready benchmarks. But unless they’ve combed through the details, parents are likely to suppose—wrongly—that proficiency signifies being on track for college or the workforce after high school.

Instead of sending mixed signals, state policy makers should tell families straight-up whether their children are on track for college and career. This would be more in line with Ohio’s own commitment to ensure that students are well prepared for post-secondary education or the workforce. Readiness is at the heart of the state board of education’s vision statement, and it was even deemed a “social and moral obligation” by state policy makers when they wrote to federal officials last year (page 25).

State legislators should do their part by aligning proficiency with college-and-career-ready (CCR) benchmarks. The Ohio Department of Education (ODE) and the state board of education can pitch in by telling parents on their children’s score reports that proficiency does not match the readiness standard: They should do this in bold print and most certainly not bury the message in a glossary.

In addition to making this utterly fundamental change, Ohio policy makers should consider two other steps that would greatly improve communication with families.

  • Report students’ statewide percentile ranks. As mentioned above, families can gauge where their children’s scores stand relative to their peers by comparing them to various averages at the state, district, and school levels. This is a useful starting point, but the state could give parents much clearer information by reporting percentile ranks. In the example above, we know that Jane is below the statewide average—but just how far below? Is she at the forty-fifth or the twenty-fifth percentile? We don’t know, because the report doesn’t say. Reporting ranks would also allow parents of both high- and low-achieving students to annually track whether their children are holding steady, gaining ground, or falling behind their classmates. Nothing in the current report accomplishes that. Percentile ranks shouldn’t be a mystery: The ACT and SAT report them to test takers. Why not Ohio?
     
  • Use predictive analytics to give families a better idea of the college trajectory of their children.   With the help of analytic tools that companies like SAS have already developed, Ohio could let parents see which kinds of colleges their middle school children are likely to be admitted to four or five years into the future. The projections could be displayed on these score reports, as they would be based on state exam results. Much care, of course, would be needed in communicating them (they shouldn’t be presented as deterministic, and actionable steps should be offered to help change these trajectories). If done well and provided early in kids’ lives, this type of information could empower families to make sure their students are on track for success after high school.

Some will say that such candid information, projections, and advice aren’t the state’s role. After all, parents already see report cards from their schools four times each year. In many cases, these reports offer timely and relevant information, especially in the absence of a state exam. But in a time of grade inflation, an A or B on the report card may not always mean what it should, especially if we believe that schools sometimes provide less-than-frank diagnoses of children’s academic performance. Report cards also help little when placing a child’s achievement into broader context. State exam results provide an essential extra: an independent checkup on achievement, akin to what an external audit supplies the shareholders of a company. Families deserve the truth about those results, both in relation to post-secondary readiness and to their children’s peers.

Ohio policy makers have made a good start by disseminating clear information about school performance. Now they need to be just as clear and informative about the performance of individual children.

 
 
Columbus Collegiate Academy (CCA) epitomizes the relentlessness and vision necessary to close achievement gaps in urban education. Started in the basement of a church with 57 students in 2008, CCA evolved into one of the city’s top-performing middle schools. It earned national awards for the gains achieved by students who are overwhelmingly disadvantaged, and grew into a network of schools serving 600 students.

I visited CCA in its original location in 2009. Despite its unassuming surroundings, I knew right away this school was different. It was the type of place that inspires you the moment you step through the door. Its hallways echoed with the sound of students engaged in learning. College banners and motivational posters reminded students—and visitors—of why they were there. Teachers buzzed with energy, motivated by a combination of urgency and optimism—all students can and will learn.

Its founder and visionary leader, Andrew Boy, spoke deliberately and matter-of-factly about the success CCA would help each student achieve. He was aware of and sensitive to the challenges facing his students—hunger, trauma, housing instability, and the myriad complications of poverty. But these obstacles would not become excuses upon which to hang blanket statements about children. Boy knew that for the most at-risk students, low expectations victimize them even further—and they deserve better.
 
Columbus Collegiate Academy – West, a replica of the original CCA, opened in 2012 in Franklinton, one of the city’s poorest neighborhoods. The school’s relentless focus on academics and its high expectations, both academic and behavioral, are exemplified in Jahnea’s story. An eighth-grader, she describes her plans for high school as well as college and beyond—a vision for her own life made possible in no small part by the expectations CCA leaders and teachers have for her and their willingness to do whatever it takes to help her get there. We hope her story reminds you that this is what’s possible when we invest in and empower high-quality charter schools.
 
Read the full profile here.
 
 

August 16 marked the first day of school for the thousands of children who attend the Dayton Public Schools (DPS). They returned to a district with a new superintendent, but many old problems. Regrettably, Dayton is at the end of a five-year strategic plan that barely moved the needle on the city’s dismal track record for student achievement. In 2014–15, DPS was the lowest-performing of Ohio’s 610 public school districts. That distinction should make Dayton’s citizens cringe.

Superintendent Rhonda Corr—who knows Cleveland well but is new to the Gem City—was given only a one-year contract by the board of education. That’s not enough time to accomplish much beyond figuring out what needs fixing. She’ll need to determine why so few of Dayton’s young people are learning enough to put themselves on track for success in later life.

She may find something nobody has ever spotted before, but previous diagnoses of Dayton’s education woes have uncovered plenty of problems. Some of them are outside the school system’s immediate control, such as the tragic challenge of multi-generational poverty. Others, though, are endemic to the district itself, including a stubborn bureaucracy, eleven different bargaining units, high rates of truancy, and huge numbers of suspensions in the seventh and eighth grades.

Dayton has undertaken numerous efforts to turn the situation around, including the aforementioned strategic plan, DPS’s Contract with the Community, a Theory of Action with all the right buzzwords, Neighborhood Schools, a robust list of community partners, and the mayor’s City of Learners initiative, to name a few. The Council for the Great City Schools has conducted “peer reviews” of DPS at least twice, in 2002 and in 2008. In 2013, the district bravely took a close look at its teacher policies with the help of the National Council on Teacher Quality. The resulting report, Teacher Quality Roadmap: Improving Teacher Policies and Practices in the Dayton Public Schools, contained over twenty findings that paved the way for some overdue changes in school staffing. These included greater principal autonomy, revised procedures for reductions in force, and the establishment of new committees to work on professional development, tenure, and compensation. (Reading up on this history should be at the top of Superintendent Corr’s to-do list!)

None of these reports and plans have been enough to reboot DPS, which is now in line for a state takeover in 2018 unless student achievement improves dramatically. (See here for a comprehensive report from the Ohio Department of Education outlining the district’s challenges.) So Superintendent Corr plainly has her work cut out for her. While it may seem overwhelming, here are some suggestions she might consider: 

  1. Request a performance audit from Auditor of State Dave Yost. These audits—available to all Ohio districts—identify areas of cost savings; Dayton hasn’t had one since 1998. DPS should put the money saved into rebooting its lowest-performing schools.
  2. Provide adequate training, management, and support for leaders. This critical piece of infrastructure was identified as the number-one challenge regarding leadership, governance, and communication (here at page 21). The district needs to clarify roles and functions, provide training, and balance workloads. If it can’t keep good leaders and develop sustainable succession plans, we can’t possibly expect anything to improve.
  3. Staff high-needs schools for success and pilot at least two turnarounds. This recommendation is borrowed in part from Teacher Quality Roadmap. DPS has talented school leaders and teachers. To accomplish it, the district must (a) assemble teams to turn around the two lowest-performing DPS schools; (b) identify school leaders and teachers with several years of success in their respective roles; (c) give the principal freedom to lead the process so that the teams are cohesive, committed, and mission-aligned; (d) study turnaround strategies to identify which one is right for the building and its students and families; (e) pay the team more because they’re taking on more; (f) build in systems for growth into leadership roles, succession, and substantive professional development; and (g), if successful, replicate!
  4. Closely review curricula and implementation, as well as current testing practices. The DPS website touches on many topics but, amazingly, doesn’t address what’s being used in the classrooms or whether it is being implemented effectively. There’s also lots of information about the state’s new academic standards (a.k.a. Common Core), but putting them into practice has proven challenging.
  5. Actively monitor family engagement in each building and use it as a measure of school health. The National Parent Teacher Association has found—of course—that students do better when their families are engaged. Engagement means more than open houses, a holiday performance, and a few conferences here and there. Rather, it has to include the proactive engagement of parents, guardians, relatives, caregivers, and students on the individual level of home visits and phone calls. Developing relationships is essential.

DPS already has a lofty and appropriate mission statement: “Equip our students to achieve success in a global society by implementing an effective and rigorous curriculum with fidelity.” The new superintendent’s job—and all people of good will must wish her success and assist her in every way possible—is to begin to make that a reality for Dayton’s children.

 
 

Ohio leaders have started an important conversation about education policy under the Every Student Succeeds Act. One of the central issues is what accountability will look like—including how to hold schools accountable for the outcomes of student subgroups (e.g., pupils who are low-income or African American). Ohio’s accountability system is largely praiseworthy, but policy makers should address one glaring weakness: subgroup accountability policies.

The state currently implements subgroup accountability via the gap-closing measure, also known as “annual measurable objectives.” Briefly speaking, the measure consists of two steps: First, it evaluates a school’s subgroup proficiency rate against a statewide proficiency goal; second, if a subgroup misses the goal, schools may receive credit if that subgroup shows year-to-year improvement in proficiency.

This approach to accountability is deeply flawed. The reasons boil down to three major problems, some of which I’ve discussed before. First, using pure proficiency rate is a poor accountability policy when better measures of achievement—such as Ohio’s performance index—are available. (See Morgan Polikoff’s and Mike Petrilli’s recent letters to the Department of Education for more on this.) Second, year-to-year changes in proficiency could be conflated with changes in student composition. For example, we might notice a jump in subgroup proficiency. But is this an indication of gap closing? Not necessarily: It might be explained by a change in the subgroup’s student composition.

Third, and perhaps most importantly, while reducing the achievement gap remains an important goal, policy makers should not explicitly pit one group of students against another in accountability systems. Unfortunately, this is what the gap-closing component does: It compels schools to disproportionately focus on certain subgroups at the expense of others.

So let’s scrap the gap-closing measure and start over. But how should Ohio proceed?[1]

In my view, state policy makers should create a new report card component dedicated to subgroup performance. It would rely on disaggregated performance index scores (a status measure) and disaggregated value-added scores (a growth or longitudinal measure). Ohio already breaks down value-added scores by three subgroups and would just need to extend those efforts to additional subgroups. The state would need to introduce a subgroup performance index, although that calculation is relatively simple and straightforward. The component could look something like the following (more subgroups could be added, such as gifted students or homeless students, and weights could be altered):[2]

Table 1: Hypothetical subgroup performance report card component

Subgroup | PI Grade | VA Grade | Points Earned for Subgroup*
Race/Ethnicity: Asian | C | D | 1.5
Race/Ethnicity: Black | B | C | 2.5
Race/Ethnicity: Hispanic | A | D | 2.5
Race/Ethnicity: Multiracial | D | D | 1.0
Race/Ethnicity: White | B | A | 3.5
Students with Disabilities | A | C | 3.0
Limited English Proficiency | D | D | 1.0
Economically Disadvantaged | C | A | 3.0
Composite Subgroup Performance | C | 2.25

* Points are assigned as follows: A = 4; B = 3; C = 2; D = 1; F = 0. PI and VA grades are weighted equally, as are the various subgroups. The average composite point total is rounded at the half-point interval when converting to a letter grade (e.g., 2.25 rounds to 2.00 = C).
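For readers who want to see the arithmetic, here is a minimal sketch of how such a component could be tallied. The grade-to-point mapping, equal weights, and values come from the table and note above; the conversion back to a letter grade (rounding to the nearest whole point, so 2.25 becomes 2, a C) is simply one reading of the half-point rounding rule.

```python
# Minimal sketch of the hypothetical subgroup component described above.
# Letter grades map to points (A=4 ... F=0); PI and VA are weighted
# equally within each subgroup; subgroups are weighted equally in the
# composite. The letter-grade conversion rounds to the nearest whole
# point, one reading of the note's "half-point interval" rule.

GRADE_POINTS = {"A": 4, "B": 3, "C": 2, "D": 1, "F": 0}
POINTS_TO_GRADE = {points: grade for grade, points in GRADE_POINTS.items()}

# (PI grade, VA grade) for each reported subgroup -- values from Table 1
subgroups = {
    "Race/Ethnicity: Asian": ("C", "D"),
    "Race/Ethnicity: Black": ("B", "C"),
    "Race/Ethnicity: Hispanic": ("A", "D"),
    "Race/Ethnicity: Multiracial": ("D", "D"),
    "Race/Ethnicity: White": ("B", "A"),
    "Students with Disabilities": ("A", "C"),
    "Limited English Proficiency": ("D", "D"),
    "Economically Disadvantaged": ("C", "A"),
}

def subgroup_points(pi: str, va: str) -> float:
    """Equal-weighted average of the PI and VA letter grades, in points."""
    return (GRADE_POINTS[pi] + GRADE_POINTS[va]) / 2

points = {name: subgroup_points(pi, va) for name, (pi, va) in subgroups.items()}
composite_points = sum(points.values()) / len(points)       # 2.25
composite_grade = POINTS_TO_GRADE[round(composite_points)]  # "C"

print(composite_points, composite_grade)
```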

A component such as this should ensure a more technically sound and transparent way of holding schools accountable for subgroup outcomes. The approach would assign responsibility for the achievement and growth of both typically higher- and lower-performing subgroups. It would also send the right message. Here in Ohio, our approach to ratings is streamlined (for the most part, just two key measures) and fair (balancing growth and achievement). Our accountability system needs to work with schools to make certain that all students, no matter their background or starting point, can grow academically and reach their potential.


[1] Under ESSA, Ohio will need to implement some type of subgroup accountability measure to identify schools with low-performing subgroups. It may not have to be a standalone report card component or an A–F graded measure as displayed above.

[2] If Ohio goes this route, the state would probably need to disaggregate whichever subgroups are graded on the performance index to also be graded on value added. In other words, Ohio likely could not disaggregate its PI scores for English language learners without disaggregating their VA scores. 

 

 
 

The surprising best seller Hillbilly Elegy: A Memoir of a Family and Culture in Crisis has become something of a cause célèbre on the grounds that it explains the appeal of Donald Trump to the white underclass (from which author J.D. Vance emerged). Writing in the American Conservative, Rod Dreher aptly notes that the book "does for poor white people what Ta-Nehisi Coates's book did for poor black people: give them voice and presence in the public square."

The book should also be required reading among those of us in education policy. It reminds us of the roles that institutions play (and fail to play) in the lives of our young people, and further suggests that education reform cannot be an exclusively race-based movement if its goal is to arrest generational poverty. Poverty is a "family tradition" among Vance's people, white Americans of Scots-Irish descent who were once "day laborers in the Southern slave economy, sharecroppers after that, coal miners after that, and machinists and millworkers during more recent times."

Vance emerges as something of an emissary to elite America from Fishtown, the fictional composite of lower-class white America that Charles Murray described in his 2012 book Coming Apart. This growing segment of American society is marked not just by economic poverty, but also by social and cultural poverty: the decay of bedrock institutions like marriage and organized religion, as well as the erosion of cohesive social standards like the two-parent family. Still, the more apt comparison might be to Random Family, Adrian Nicole LeBlanc's 2003 book about two young women caught up in a suffocating web of destructive relationships, teen pregnancy, drugs, crime, and general dysfunction in the South Bronx.

If the connective tissue between the urban poor and downwardly mobile working-class whites is lost on pundits and policy makers, the same isn’t true of Vance, who describes being deeply struck by William Julius Wilson's book The Truly Disadvantaged. "I wanted to write him a letter and tell him that he had described my home perfectly," Vance writes. "That it had resonated so personally is odd, however, because he wasn't writing about the hillbilly transplants from Appalachia—he was writing about black people in the inner cities." Ditto Charles Murray's Losing Ground, "another book about black folks that could have been written about hillbillies—which addressed the way our government encouraged social decay through the welfare state," he notes.

Watching an episode of The West Wing on television, Vance is struck that "in an entire discussion about why poor kids struggled in school, the emphasis rested entirely on public institutions. As a teacher at my old high school told me recently, 'They want us to be shepherds to these kids. But no one wants to talk about the fact that so many of them are raised by wolves.'" The characterization is unkind, but Vance is unsparing in his analysis of the people he loves and the culture they have created. It can include "an almost religious faith" in hard work and the American dream; yet he describes his town as one "where 30 percent of the young men work less than twenty hours a week, and not a single person [is] aware of his own laziness."

Vance comes from "a world of truly irrational behavior." His family, friends, and neighbors spend their way into poverty. "And when the dust clears—when bankruptcy hits or a family member bails us out of our stupidity—there's nothing left over. Nothing for the kids' college tuition, no investment to grow our wealth, no rainy-day fund if someone loses her job. We know we shouldn't spend like this. Sometimes we beat ourselves up over it, but we do it anyway," he writes. Domestic life is a chaotic mess of failed relationships, drug abuse, and self-sabotage. "We don't study as children, and we don't make our kids study when we're parents," Vance acknowledges. "Our kids perform poorly in school. We might get angry with them, but we never give them the tools—like peace and quiet at home—to succeed." It is only when Vance enjoys a few years of relative stability—living full-time with his "Mamaw" (grandmother), herself a tough, foul-mouthed, and violent character—that he is able to begin to turn his life around.

One must be richly skilled in cherry picking, or else deeply in denial, to see clear public policy solutions to the problems illumined in Hillbilly Elegy. While Vance may see personal behavior rather than policy as exerting a greater influence on life outcomes, public institutions—the Marine Corps and Ohio State University most particularly—played a prominent role in arresting his otherwise inevitable march down the road to nowhere. If Vance's hillbillies' lives are chaotic, their politics are incoherent. "Mamaw's sentiments occupied wildly different parts of the political spectrum," Vance writes, ranging from radical conservative to European-style social democrat depending on her mood or the moment. "Because of this, I initially assumed that Mamaw was an unreformed simpleton and that as soon as she opened her mouth, I might as well close my ears.” Eventually, he perceives wisdom in his grandmother's contradictions: "I began to see the world as Mamaw did. I was scared, confused, angry, and heartbroken. I'd blame large businesses for closing up shop and moving overseas, and then I'd wonder if I might have done the same thing. I'd curse our government for not helping enough, and then I'd wonder if, in its attempts to help, it actually made the problem worse."

If there is any theme that has emerged from the fractious state of our political and civic lives in 2016, it is not how divided we are, but rather how deeply and stubbornly obtuse we are about one another's lives. There is a tendency among reformers to sentimentalize the lives of the poor, or to infuse poverty with a note of tragic heroism. Vance seems aware of this himself, noting in his preface that his object is not to argue that working-class whites "deserve more sympathy than other folks" but that he hopes readers "will be able to take from it an appreciation of how class and family affect the poor without filtering their views through a racial prism."

My first attempt to read LeBlanc’s Random Family failed. The despair it conveyed was bottomless, and it took over a year before I was able to return to it. A similar grimness at times weighs down Hillbilly Elegy. It is only the foreknowledge of how Vance's story ends, with a slot at Yale Law School and a job at a Silicon Valley investment firm, that allowed me to keep turning the pages. But none of this makes his story less essential. I used to assign Random Family to graduate students who were first-year Teach For America corps members; I still view it as required reading for anyone teaching low-income, inner-city children. For education reformers, I would now bookend that recommendation with Vance’s memoir. Both books force us to confront simpleminded views of the ills we seek to address, and to be humble about over-optimistic schemes to set things right. For education reformers, I do not recommend reading Hillbilly Elegy. I recommend studying it.

Editor’s note: This post was originally published in a slightly different form in U.S. News & World Report.

 
 

Columbus Mayor Andrew Ginther is passionately outspoken about Columbus City Schools. He is an alumnus of the district, and his first experience as an elected official came as a member of its board of education. He has regularly praised Columbus City Schools and publicly bemoaned those who have spoken negatively about them. "I was tired of listening to people talk poorly about Columbus schools," Ginther said in a 2011 interview with ThisWeek Community News, explaining why he initially ran for school board. "As a matter of fact, I had a great experience in Columbus City Schools."

So strong is his belief in the district that Ginther is a major proponent of the levy this November that would authorize an 18 percent tax increase on residents to provide an influx of cash to Columbus City Schools.

However, when facing the decision of where to send his own daughter for kindergarten, Ginther chose a different path than the one he acclaims for the rest of the city's children. It is Ginther’s long-term support of Columbus City Schools that made last week’s announcement both surprising and noteworthy. The family’s assigned district school is a shining star that has been ranked as one of the best public elementary schools in the state; it’s a feeder, in fact, for the very high school from which the mayor himself graduated. Yet instead of “going public,” Ginther has decided to pay $20,175 a year for his daughter to attend an elite suburban private school.

Let me be clear: I support Mayor Ginther’s personal decision on how to best educate his child. As he explained in a statement to Columbus Monthly, “Every family must make decisions on what is best for their children to help them learn and grow.” Others can debate the optics of the decision in regard to the district’s levy request, but this is one of the core principles of the school choice movement: the ability of parents to send their children to the school that will serve them best.

His predecessor, Mayor Michael Coleman, established the Office of Education and worked with the Columbus City Schools to offer district-run charter alternatives within the public school system. A member of Columbus City Council at the time, Ginther broadly backed then-Mayor Coleman’s efforts to improve education in the city. Since becoming mayor himself, however, Ginther has been curiously silent on school choice and district alternatives—yet he is now electing to utilize just such an alternative for his own family. If Ginther recognizes the inherent value of school choice by sending his daughter to a prestigious private institution, the least he can do is fight for other families to have options too.

Moving forward, I hope that Mayor Ginther will use his platform to be a strong advocate for school choice so that all parents in the city of Columbus are able to enjoy the same freedom for their children that he has exercised. Anything less than this would be complete hypocrisy.

 
 
 
 

We look at teacher absenteeism data, Ohio’s readiness for an alternative testing regimen, and more

Chronic absenteeism among students elicits serious concern for good reason. When pupils miss many days of school, they risk falling behind. This further puts them at risk of dropping out or being sucked into the criminal justice system through truancy proceedings, which is punitive for both students and their parents. (A bill proposed earlier this year would decriminalize truancy; Ohio lawmakers should revisit it soon.)

If attendance is so critical for students, isn’t it even more critical for teachers—especially since they are the most important in-school factor impacting student success? Yet data from the latest Civil Rights Data Collection (CRDC), a federal survey of all public schools in the country, demonstrates that teacher absenteeism is a pressing problem nationally and in Ohio.

We learn from the CRDC report (from the 2013–14 school year) that 28 percent of Ohio public school teachers (in traditional public and charter schools) were absent for ten or more days for sick or personal leave. This compares to 27 percent of teachers nationally. CRDC does not count paid professional development, field trips, or other off-campus activities with students, nor does this estimate include paid holidays or paid vacation time.

Is teacher absenteeism concentrated in particular districts and schools, or is it generally widespread? In how many of Ohio’s 3,600 schools is it a serious problem? The table presents a brief analysis of statewide findings.


 

*These numbers are cumulative: the forty-seven schools where 90 percent or more of teachers missed ten or more days are also included in the previous columns.

The findings show that teacher absenteeism is widespread across the state and an urgent problem in at least several hundred schools. Yes, we should be cautious about drawing too many conclusions; these data come from a single year, and there could be reasonable explanations for variance (especially among small schools). For example, a school may have had lots of teachers on maternity leave, or experienced an outbreak of flu.

Still, it’s apparent that many districts face challenges with teacher attendance, some far worse than others. Graph 1 (below) provides data for Ohio’s “Big 8” urban districts to illustrate the range and severity of the problem, especially in Cleveland. We should be particularly concerned about teacher attendance in districts like these, which serve large percentages of low-income students and students of color, youngsters for whom quality learning time is particularly critical.

Graph 1: Percentage of teachers missing ten or more days, Ohio “Big 8” urban districts (2013–14)

At least four of Ohio’s Big 8 urban districts—Canton, Cleveland, Columbus, and Toledo—had serious problems with teacher attendance during the school year studied. Cleveland’s astounding—appalling—numbers are backed by a 2014 report by the National Council on Teacher Quality (NCTQ) that ranked Cleveland as second worst nationally in terms of chronic teacher absenteeism, defined as teachers missing eighteen or more days. (The NCTQ report relied on an older version of the CRDC data.) In that report, more than a quarter of Cleveland teachers were chronically absent. However, the relatively low numbers in Akron, Dayton, and Youngstown suggest that sky-high absentee rates don’t have to be the norm in urban schools. In fact, the NCTQ report found no relationship between teacher absenteeism and school poverty rates.

Graph 2 depicts the percentage of teachers missing ten or more days in Ohio’s ten largest districts outside the Big 8. These districts vary in the number of disadvantaged students they serve and seem to align with NCTQ’s previous finding that more-privileged districts sometimes also struggle on the teacher attendance front. Districts marked with asterisks were ranked among the thirty wealthiest in the state. As you can see, even Olentangy Local Schools, a suburban district outside of Columbus in which just 7 percent of students are poor, had almost a third of its teachers miss ten days or more. A quick glance at several other posh districts around the state reveals a similar pattern: Bexley—31 percent; Upper Arlington—30 percent; New Albany—24 percent; Beachwood—43 percent; Oakwood—26 percent; Chagrin Falls—18 percent. This puts them in worse standing on teacher absenteeism than about half of Ohio’s most challenged urban districts. Discounting Cleveland, which clearly skews the “Big 8” urban average, Ohio’s urban districts (Graph 1) actually had a smaller share of teachers missing ten or more days of school (33 percent) than these ten largest districts (36 percent) (Graph 2).[1]

Graph 2: Percentage of teachers missing ten or more days, Ohio’s largest (non-Big 8) districts (2013–14)

The costs of teacher absenteeism are immense, totaling more than $4 billion annually for the country as a whole, according to a 2012 report from the Center for American Progress. In Cleveland, the minimum cost of substitute teachers alone is anywhere from $3 to $6 million annually—and likely much higher.[2] Taken to scale, the 29,500 Ohio teachers—28 percent of the state’s total teaching workforce—who missed ten or more days in 2013–14 created monumental costs for districts and taxpayers.[3] Beyond the cost of hiring substitutes, teacher absenteeism takes a toll on school culture. It communicates to staff and students alike that attendance isn’t a priority—and thus may inspire others to take more days and make absenteeism and truancy the norm. When substitutes aren’t available, it creates enormous burdens on fellow teachers who have to teach additional students, or miss planning or break periods to cover for their colleagues. And it disrupts student learning. When any teacher—let alone the majority in a school building—misses 5 or 10 percent of the school year, you can bet that students are missing out instructionally and that the scope and sequence of key subjects is heavily disrupted.

One likely reason that schools have so many absent teachers on a given day is that the leave provisions set forth in collective bargaining agreements tend to be generous. Cleveland’s collective bargaining agreement, for example, allots eighteen paid sick days and three paid personal days out of 185 instructional days (a whopping 11 percent of the school year). Compare this to the average sick time for private sector employees (ten days and possibly a handful of personal days) or for state employees (ten sick days and four personal days), both of whom work on average eighty days more per year than teachers. In addition, most contracts allow extensive sick leave accumulation from year to year; consider Dayton Public Schools’ recent $149,000 sick leave payout to its outgoing superintendent as just one example.

It’s also noteworthy that the teacher absenteeism problem seems to be most acute in traditional public schools bound by collective bargaining agreements. Among the 330+ charter schools in the study—almost none of which has a labor contract—not a single one had very high rates of teacher absenteeism (defined as 75 percent or more missing ten days). Fewer than 2 percent saw half their teachers miss ten days, even though many charter schools have longer work days and school years and their teachers have the same working conditions that make sickness likely, such as germ exposure and sheer exhaustion (in fact, charter educators may have tougher working conditions because their schools can only be located in Ohio’s academically and economically challenged communities). The charter comparison data suggest that the culture in unionized traditional public schools has made high rates of absenteeism the norm in many schools. 

If we are serious about ensuring that more students receive solid instruction and stay on pace, curbing teacher absenteeism must be a priority. Teacher unions responsible for negotiating such generous leave policies should consider the adverse impact such policies have on students. Districts should work hard to improve teacher attendance. And teachers themselves should recognize how much is at stake when they take significant leave beyond the generous sick, personal, and paid professional development time allotted to them. When it comes to raising student achievement—especially for the students for whom so much hangs in the balance—there can be no sacred cows.

Update, August 9, 2016:
To access the CRDC Ohio data for your own school or district, go here. Data are reported for schools but can easily be calculated to find district averages. Column YA ("SCH_FTETEACH_TOT") is the total number of FTE teachers. Column YG ("SCH_FTETEACH_ABSENT") is the number of FTE teachers who were absent more than 10 school days during the school year. Column YH is an added column that calculates the percentage of each school's total FTE teachers missing 10 or more school days.
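For anyone who wants to replicate the figures above, here is a minimal sketch of the calculation, assuming the CRDC school-level file has been downloaded and saved locally as a CSV. The file name and the district-name column used for the roll-up are assumptions on my part; the two teacher columns are the ones named in the update.

```python
# Minimal sketch: share of each school's FTE teachers absent more than
# ten school days, plus a district roll-up weighted by FTE teachers.
import pandas as pd

crdc = pd.read_csv("crdc_ohio_2013_14.csv")  # hypothetical local file name

# Columns named in the update above
crdc["PCT_TEACH_ABSENT"] = (
    crdc["SCH_FTETEACH_ABSENT"] / crdc["SCH_FTETEACH_TOT"] * 100
)

# District averages, weighted by each school's FTE teacher count
# ("LEA_NAME" is assumed to be the district-name column in the file)
district = crdc.groupby("LEA_NAME")[["SCH_FTETEACH_ABSENT", "SCH_FTETEACH_TOT"]].sum()
district["PCT_TEACH_ABSENT"] = (
    district["SCH_FTETEACH_ABSENT"] / district["SCH_FTETEACH_TOT"] * 100
)

print(district["PCT_TEACH_ABSENT"].sort_values(ascending=False).head(10))
```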

[1] The average for the Big 8 urban districts, including Cleveland, is 45 percent.

[2] In the 2013–14 CRDC survey, 2,288 Cleveland teachers missed ten or more school days. If we err on the conservative side and assume all of those teachers missed only ten days (even though we know that’s not the case given the high chronic absentee rate reported in the NCTQ report), at a daily substitute rate of $129 to $144 (not including long-term sub rates, which are higher) the district would have paid $2.95 million to $3.3 million for substitute teachers alone. Assuming each of those teachers was absent an average of fourteen days, the costs would be more in the ballpark of $4.1 to $4.6 million. If those teachers averaged eighteen days, the cost would be $5.3 to $5.9 million. Of course this also leaves out any additional overhead costs and the cost of longer-term absences beyond the eighteen days.

[3] If 29,500 teachers each took ten days (the minimum), that’s 295,000 substitute days across the year. If substitutes were found for only 250,000 of those days, at a cost of $120 per day (far lower than Cleveland’s average), that totals $30 million annually. And the actual cost is probably much higher.

 
 

Many education stakeholders see the Every Student Succeeds Act (ESSA) as an opportunity to fix the most problematic provisions in NCLB. For many critics, the biggest bogeyman was too much standardized testing and its associated accountability measures. While ESSA maintains the annual testing requirements, it also offers new flexibilities. Among these is the opportunity to apply for the Innovative Assessment Pilot (IAP).

IAP is a provision that permits states to pilot an innovative assessment system in place of a statewide achievement test. “Innovative” is an umbrella term that covers a plethora of different testing options, including (but not limited to) competency-based, instructionally embedded, and performance-based assessments. Regardless of the assessment type chosen by a state, it must result in an annual, summative score for a student. Authority to participate in the pilot—known as “demonstration authority”—will be granted through an application process run by the secretary of education. No more than seven states will be allowed to participate in the pilot for a period of up to five years, with the option to apply for an additional two-year extension.[1]

Folks who are worried that states might use the pilot to weaken state accountability systems will be happy to learn that ESSA establishes guardrails that make that unlikely. As part of the application process, states must demonstrate how they will “validly and reliably aggregate data from the innovative assessment system for purposes of accountability,” specifically the new law’s statewide accountability requirements. The results must also be valid, reliable, and comparable “as compared to the results for such students on the state assessment.”[2] Does that mean that students are double testing—taking both the statewide assessment and the innovative assessment—during the pilot? Yes and no. According to a recent blog post in Education Week, the department’s proposed regulations give states four options for comparing their pilot assessment to their previous statewide test:

  1. States could give the state test once per grade span (but not every grade) in which students take an innovative test (like New Hampshire).
  2. States could give both the state test and the innovative test in certain grades, but they aren’t required to give both tests to every student—they could administer the state test to a representative group.
  3. States could utilize similar questions or items on both the state test and the innovative test.
  4. States could create their own equally rigorous comparability measure.

States can opt to initially run the pilot in a subset of districts rather than statewide (proposed regulations also permit states to focus on a certain grade or a certain subject). However, the innovative assessment system must be scaled statewide by the end of the pilot period, and states must prove throughout the course of the pilot that there is a “high-quality” transition plan for statewide implementation in place.

The requirements for statewide scalability and inclusion in the statewide accountability system might be two serious deterrents for states that were only halfheartedly considering an application. The fact that there may not be any federal funding to help states implement the pilot is another drawback. And for those brave remaining states that are still interested, the extensive application process could change their thinking. The application’s basic requirements include descriptions of the innovative system a state plans to use, experience the state has with all components of the system, and the planned timeline. Sounds easy enough, right? But check out a few additional items that states must demonstrate in their applications:[3]

  • The system must generate results that are valid, reliable, and comparable for all students and subgroups.
  • The system must be developed in collaboration with teachers, school leaders, local districts, parents, civil rights organizations, and stakeholders that represent the interests of students with disabilities, English language learners, and other vulnerable students.
  • The system must annually assess the same percentage of students and subgroups enrolled in schools under IAP that would be assessed under other state testing requirements.
  • States must describe how they will support teachers in developing and scoring pilot assessments.
  • States must describe how they will solicit regular feedback from teachers, school leaders, and parents and how it will respond by making needed changes.

So what about Ohio? Should the Buckeye State roll up its sleeves and dive into the IAP application? Some education advocacy groups have said yes, and with good reason. Ohio is already part of the Innovation Lab Network, which aims to implement student-centered learning approaches. Ohio law already permits the state superintendent to grant waivers to schools interested in piloting an alternative assessment system. The state also boasts a competency-based education pilot and the Ohio Performance Assessment Pilot Project. In short, Ohio seems like the perfect state to take on IAP.

But a solid foundation doesn’t always indicate that it’s time to build a house—and Ohio’s work with innovative assessments doesn’t mean that the state should jump at participating in IAP. Ohio schools are still reeling from administering three separate statewide assessments in as many years. Safe harbor has been in effect since 2014–15, which was the same year that our current standards were first implemented in full. Our current accountability system has reported on, but never actually held schools accountable for, those state standards and their aligned assessments. And speaking of state standards, the Ohio Department of Education (ODE) is presently revising them. According to ODE, schools will be transitioning to revised math and ELA standards during the 2017–18 school year—the same year that IAP could be starting. Add to that the myriad other changes that are quickly approaching with ESSA, and the Buckeye State looks to have plenty on its plate in the coming years even without the huge undertaking of IAP.

To be clear, I’m not saying that Ohio can’t successfully pilot an innovative assessment system. I’m saying that maybe we shouldn’t—at least not yet. The IAP provisions state that three years into the program, the Institute of Education Sciences (IES) must publish a report that examines whether the innovative assessment systems have been successful. The findings from this report will be used to establish a peer review process that will extend the pilot to additional states. Ohio could be one of the first states to apply for the second round of IAP demonstration authority. In the meantime, ODE could focus on getting the rest of ESSA implementation right.

Passing on the first round of the pilot doesn’t mean that Ohio has to abandon its work with innovative assessments either. While gathering stakeholder input for ESSA implementation, the department could gauge interest and ideas for an innovative assessment without the pressure of a looming application deadline and the potential of increased federal oversight in state assessment policy. The results from Ohio’s competency-based education pilot and the Ohio Performance Assessment Pilot Project could be gathered and examined, and Ohio could watch and learn from PACE in New Hampshire and the first round of IAP states.

In short, this may be a good time to observe rather than act. Supporters of innovative assessments often value them because they’re different from standardized tests. But opting for something different also means losing out on the features that made standardized tests so popular in the first place—they’re cheap, reliable, easy to administer, and they make comparing results simple. If we’re serious about making innovative assessments work for Buckeye students, we have to pick the right time to invest in them—not just jump at the first chance we get.


[1] States are permitted to apply as part of a consortium, but a consortium can’t exceed four states, and the cap on the total number of states participating in the pilot remains the same. The secretary of education determines the official start of the pilot, though 2017–18 is the earliest date.

[2] Although states are permitted to use the results from their innovative assessment as part of their accountability system, they are not required to do so—at least not initially. The USDOE’s proposed regulations regarding IAP make clear that the purpose of the program is to eventually use the innovative assessment system to “meet the academic assessment and statewide accountability system requirements” of Title I.

[3] This list does not include all of the IAP application’s requirements. 

 
 

Competency-based education has attracted attention as a “disruptive innovation” that could remake American schools. Under this model, students move through the curriculum at their own speed by demonstrating competency as determined by the instructor or other assessment tools. At a high-school level, competency can replace the traditional “seat time” method of bestowing credit, a policy that New Hampshire has adopted. Competency-based education allows students of any age to accelerate their learning progress once they master a topic, while enabling them to slow down in an area where they need more work.

Funding based on competency is widely seen as a way to finance e-schools or online course access programs. It could also be used to fund schools that focus on dropout recovery—whether online or brick-and-mortar. Competency-based funding would offer schools (or course providers) incentives to focus on student learning. It would also fit well with the flexible nature of online or dropout recovery programs. But if Ohio policy makers consider the competency-based funding route, what are the design alternatives? A good starting point is to examine the models other states have already piloted. This article offers an overview of what four states have done—Utah, New Hampshire, Florida, and Minnesota—in the hopes of stirring thought about the pathways Ohio might take.

Nota bene: In some cases, the state’s funding model is probably more accurately described as “completion-based”—funding premised on course completion, not necessarily a demonstration of competency (which, as we’ll see in the case of New Hampshire, can be split into competencies within a course). Together, competency- and completion-based funding are sometimes called “performance-based” funding. For an excellent review of these distinctions and other policy considerations, see iNACOL’s Performance-Based Funding & Online Learning report.

Utah

The interesting features of Utah’s approach are its differentiation of course-level funding based on the subject matter, and a payment schedule based on three milestones during a student’s course enrollment. Applying only to online courses for students in grades 9–12, Utah allocates funds based on the number and types of courses that students choose. The amount of funding tied to each course depends on its “cost category.” These range from up to $200 per half-credit for health, fitness, and computer literacy to $350 per half-credit for courses such as English language arts, math, and science. (These amounts may be adjusted annually.) Utah disburses funds in the following manner: Confirmation of enrollment accounts for 25 percent of the course allocation; continuation or “active participation” as determined by the provider (either the student’s school district or charter school) releases another 25 percent; and 50 percent of the funding is awarded upon course completion, which hinges on students’ passage of the course as determined by their instructor.
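To make the milestone schedule concrete, here is a minimal sketch of how a single half-credit enrollment would release funds under the percentages and dollar caps described above. The course categories in the snippet are illustrative, not Utah’s full list.

```python
# Minimal sketch of Utah's milestone schedule: 25 percent of a course's
# allocation at enrollment confirmation, 25 percent at confirmed active
# participation, and 50 percent at completion.
MILESTONE_SHARES = {"enrollment": 0.25, "participation": 0.25, "completion": 0.50}

COURSE_ALLOCATIONS = {  # maximum dollars per half-credit course
    "English language arts": 350,
    "math": 350,
    "science": 350,
    "health/fitness/computer literacy": 200,
}

def funds_released(course: str, milestones_reached: list[str]) -> float:
    """Dollars released so far for one half-credit enrollment."""
    allocation = COURSE_ALLOCATIONS[course]
    return sum(allocation * MILESTONE_SHARES[m] for m in milestones_reached)

# A student who enrolls in a math course and participates actively but
# never completes it releases $87.50 + $87.50 = $175 of the $350 cap.
print(funds_released("math", ["enrollment", "participation"]))  # 175.0
```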

New Hampshire

New Hampshire focuses on competency in its funding of online education. Funding is determined by the number of competencies, or discrete topics, that a course’s students master, as verified by an online instructor. Each one-credit high school course students complete through the state’s online school, known as the Virtual Learning Academy Charter School (VLACS), includes eight competencies that students must master before they pass the entire course. This benchmarking of progress allows for partial funding at the course level. If students master all the competencies in a course, VLACS receives the full appropriation. If not, VLACS receives partial payment based on the fraction of competencies the student achieved. Each half-credit, semester-long course was worth a maximum of $454 in 2014–15.

To illustrate this approach, the table below displays a hypothetical example. George reaches the end of the year meeting only 25 percent competency in the course, so VLACS receives just $113.50. Sam, however, met all the required competencies, releasing the entire per-course allotment.

Table 1. Example of VLACS’s funding formula for a half-credit (or semester-length) course
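To make the pro-rating concrete, here is a minimal sketch of the payment rule described above. The $454 figure is the 2014–15 half-credit maximum cited in the text; treating a half-credit course as having four required competencies is my own inference from the eight-competencies-per-credit structure.

```python
# Minimal sketch of the pro-rated VLACS payment: the school receives the
# share of a course's maximum allotment equal to the share of required
# competencies the student has mastered.
MAX_PER_HALF_CREDIT = 454.00  # 2014-15 maximum cited above

def vlacs_payment(competencies_mastered: int, competencies_required: int) -> float:
    """Dollars released for one half-credit course enrollment."""
    return round(MAX_PER_HALF_CREDIT * competencies_mastered / competencies_required, 2)

# George masters a quarter of the required competencies; Sam masters all.
print(vlacs_payment(1, 4))  # 113.5
print(vlacs_payment(4, 4))  # 454.0
```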

Florida

Unlike Utah and New Hampshire, which allocate partial payments, the Florida Virtual School (FLVS) only receives funds upon course completion, as determined by the teacher and passage of a statewide end-of-course assessment, if applicable. Course funding for FLVS is determined in the following way: The state’s weighted per-pupil amount is divided by six—the number of credits equal to a full load. If only one credit is completed online, the virtual school receives a one-sixth share of the per-pupil allocation (contingent upon course completion). FLVS submits five enrollment estimates throughout the year and receives twice-monthly payments based on these estimates, assuming full course completion. The final enrollment calculation occurs at year’s end, and adjustments are determined based on confirmation of course completion.
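A minimal sketch of the completion-contingent share described above follows; the weighted per-pupil amount in the example is purely illustrative, not Florida’s actual figure.

```python
# Minimal sketch of the FLVS rule: a completed credit earns one-sixth of
# the state's weighted per-pupil amount; an uncompleted course earns nothing.
CREDITS_PER_FULL_LOAD = 6

def flvs_funding(weighted_per_pupil: float, credits_completed: int) -> float:
    """Funding generated by one student's completed online credits."""
    return weighted_per_pupil / CREDITS_PER_FULL_LOAD * credits_completed

print(round(flvs_funding(7000.00, 1), 2))  # one completed credit -> 1166.67
```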

Minnesota

Minnesota’s online-learning program also provides funding based on course completion. But unlike Utah, Florida, and New Hampshire—which created single, statewide online education providers—Minnesota certifies multiple, independent entities to serve either as full-time online schools or as providers of supplemental courses (to students primarily enrolled in a traditional district or charter school). Currently, thirty-two online schools have been approved by the state. Each of them sets its own definitions for course completion, and the Minnesota Department of Education is responsible for verifying the completion of courses. Minnesota’s use of many providers is more akin to Ohio’s online learning environment, with its many e-schools, and could serve as a model, especially if Buckeye policy makers consider making individual online courses available to Ohio students.

***

Competency-based funding holds much potential, but it represents a very different financing arrangement vis-à-vis traditional funding models typically predicated on headcounts. Sensibly, a few states have piloted this approach in online-learning environments. The design and implementation details are, as always, critical, and the nascent efforts of four other states offer models that Ohio policy makers could look toward. In so doing, they should weigh several questions: At what dollar amount should each course be funded, and should they be funded differently by content area? Should the state provide partial payments throughout the duration of the course, a lump-sum payment after a student completes the course, or “business as usual” payments that assume full completion and then require repayment if a student fails to complete the course? How does a state verify that a student has achieved “competency” or successfully completed a course—by instructor verification, a state-approved end-of-course exam, or some combination of both? (Caveat emptor: Pure instructor verification appears to create a conflict of interest. If teachers know that their school’s funding hinges on the grades they award, they might inappropriately pass undeserving pupils.) These—and many more—are questions that policy makers ought to wrestle with. Fortunately, Ohio lawmakers can look to a few other states that are blazing new trails in competency-based education and funding models that align to it.

 
 

A major development of recent years has been the explosive growth of online learning in K–12 education. Sometimes it takes the form of “blended learning,” with students receiving a mix of online and face-to-face instruction. Students may also learn via web-based resources like the Khan Academy, or by enrolling in distance-learning “independent study” courses. In addition, an increasing number of pupils are taking the plunge into fully online schools: In 2015, an estimated 275,000 students enrolled in full-time virtual charter schools across twenty-five states.

The Internet has obviously opened a new frontier of instructional possibilities. Much less certain is whether such opportunities are actually improving achievement, especially for the types of students who enroll in virtual schools. In Enrollment and Achievement in Ohio's Virtual Charter Schools, we at Fordham examined this issue using data from our home state of Ohio, where online charter schools (“e-schools”) are a rapidly growing segment of K–12 education. Today they enroll more than thirty-five thousand students, one of the country’s largest populations of full-time online students. Ohio e-school enrollment has grown 60 percent over the last four years, a rate of growth greater than that of any other type of public school. But ever since they launched, e-schools have received negative press for their poor academic performance, high attrition rates, and questionable capacity to educate the types of students who choose them. It’s clearly a sector that needs attention.

Our study focuses on the demographics, course-taking patterns, and academic results of pupils attending Ohio’s e-schools. It was authored by Dr. June Ahn, an associate professor at New York University’s Steinhardt School of Culture, Education, and Human Development. He’s an expert in how technology can enhance the way education is delivered and the way students learn.

Using student-level data from 2009–10 through 2012–13, Dr. Ahn reports that e-schools serve a unique population. Compared to students in brick-and-mortar district schools, e-school students are initially lower-achieving (and more likely to have repeated the prior grade), more likely to participate in the federal free and reduced-price lunch program, and less likely to participate in gifted education. (Brick-and-mortar charters attract even lower-performing students.)

The analysis also finds that, controlling for demographics and prior achievement, e-school students perform worse than students who attend brick-and-mortar district schools. Put another way, on average, Ohio’s e-school students start the school year academically behind and lose even more ground (relative to their peers) during the year. That finding corroborates the disappointing results from Stanford University’s Center for Research on Education Outcomes (CREDO) 2015 analysis of virtual charter schools nationwide, which used a slightly different analytical approach.

Importantly, this study considers e-school students separately from those in other charters. It finds that brick-and-mortar charter students in grades 4–8 outperform their peers in district schools in both reading and math. In high school, brick-and-mortar charter students perform better in science, no better or worse in math, and slightly worse in reading and writing compared to students in district schools. This confirms what some Ohioans have long suspected: E-schools weigh down the overall impact of the Buckeye State’s charter sector. Separate out the e-school results and Ohio's brick-and-mortar charters look a lot better than when the entire sector is treated as a whole.

The consistent, negative findings for e-school students are troubling, to say the least. One obvious remedy is to pull the plug—literally and figuratively—but we think that would be a mistake. Surely it’s possible, especially as technology and online pedagogy improve, to create virtual schools that serve students well. The challenge now is to boost outcomes for online learners, not to eliminate the online option. We therefore offer three recommendations for policy makers and advocates in states that, like Ohio, are wrestling to turn the rapid development of online schools into a net plus for their pupils.

First, policy makers should adopt performance-based funding for e-schools. When students complete courses successfully and demonstrate that they have mastered the expected competencies, e-schools would get paid. This creates incentives for e-schools to focus on what matters most—academic progress—while tempering their appetite for enrollment growth and the dollars tied to it. It would also encourage them to recruit students likely to succeed in an online environment—a form of “cream-skimming” that is not only defensible but, in this case, preferable. At the very least, proficiency-based funding is one way for e-schools to demonstrate that they are successfully delivering the promised instruction to students. That should be appealing to them given the difficulty in defining, tracking, and reporting “attendance” and “class time” at an online school.

Second, policy makers should seek ways to improve the fit between students and e-schools. Based on the demographics we report, it seems that students selecting Ohio’s e-schools may be those least likely to succeed in a school format that requires independent learning, self-motivation, and self-regulation. Lawmakers could explore rules that exempt e-schools from policies requiring all charters, virtual ones included, to accept every student who applies and instead allow e-schools to operate more like magnet schools with admissions procedures and priorities. E-schools would be able to admit students best situated to take advantage of the unique elements of virtual schooling: flexible hours and pacing, a safe and familiar location for learning, a chance for individuals with social or behavioral problems to focus on academics, greater engagement from students who are able to choose electives based on their own interests, and the chance to develop high-level virtual communication skills. E-schools should also consider targeting certain students through advertising and outreach, especially if they can’t be selective. At the very least, states with fully online schools should adopt a policy like the one in Ohio, which requires such schools to offer an orientation course—the perfect occasion to set high expectations for students as they enter and let them know what would help them thrive in an online learning environment (e.g., a quiet place to study, a dedicated amount of time to devote to academics).

Third, policy makers should support online course choice (also called “course access”), so that students interested in web-based learning can avail themselves of online options without enrolling full-time. Ohio currently confronts students with a daunting decision: either transfer to a full-time e-school or stay in their traditional school and potentially be denied the chance to take tuition-free, credit-bearing virtual courses aligned to state standards. Instead of forcing an all-or-nothing choice, policy makers should ensure that a menu of course options is available to students, including courses delivered online. To safeguard quality and public dollars, policy makers should also create oversight to vet online options (and veto shoddy or questionable ones). Financing arrangements may need to change, too, perhaps in ways that more directly link funding to actual course providers. If it were done right, however, course choice would not only open more possibilities for students, but also ratchet up the competition that online schools face—and perhaps compel them to improve the quality of their own services.

Innovation is usually an iterative process. Many of us remember the earliest personal computers—splendid products for playing Oregon Trail, but now artifacts of the past. Fortunately, innovators and engineers kept pushing the envelope for faster, nimbler, smarter devices. Today, we are blessed as consumers with easy-to-use laptops, tablets, and more. But proximity to technology, no matter how advanced, isn’t enough. E-schools and their kin should help build an understanding of how best to use online curricula and non-traditional learning environments, especially for underserved learners. From this evidence base, providers should then be held to high standards of practice. Though the age of online learning has dawned, there is much room for improvement in online schooling—and nowhere more than in Ohio. For advocates of online learning and educational choice, the work has just begun.

 
 

The National Council on Teacher Quality (NCTQ) recently reviewed one hundred of the nation’s pre-K teacher prep programs, attempting to answer whether pre-K teacher candidates are being taught what they need to know to be effective in their future jobs.[1] The answer is, largely, no. This should be sobering news, especially for folks here in Ohio, as many head to the November ballot hell-bent on expanding pre-K.

The bottom line, reported NCTQ, is that most of the programs reviewed spend far too much of their limited time on how to teach older children rather than on the specific training needed to teach three- and four-year-olds. Some specific findings are neatly summarized in the following slides from NCTQ:

NCTQ recommends, among other things, that states narrow their licensure to certify educators for no more than pre-K through third grade (as Ohio already does), rather than treating pre-K as a part of an overall elementary credential; that they encourage teacher prep programs to offer either more-specialized degrees or early childhood education as an add-on endorsement; that they ensure that prep programs require courses in critical areas such as emergent literacy, early childhood development, or early math and science; and that they encourage future pre-K educators to do their student teaching with rock star teachers.

All good suggestions, but all heavy lifts, especially for a state like Ohio, whose teacher prep programs aren’t so hot anyway.

Perhaps Ohioans keen to expand pre-K should first make sure they’ve got the makings of a competent workforce to do the job right. If we’re going to put a lot of ed-reform eggs into the pre-K basket, that basket better be strong and well-stocked.

SOURCE: Hannah Putman, Amber Moore, Kate Walsh, “Some Assembly Required: Piecing Together the Preparation Preschool Teachers Need,” National Council on Teacher Quality (June 2016).


[1] Researchers examined one hundred programs in twenty-nine states that certify pre-K teachers, most of them offering bachelor’s and master’s degrees. They reviewed course requirements and descriptions, course syllabi, student teaching observation and evaluation forms, and other course materials required for degree completion.

 

 
 
You're invited to join in the conversation and contribute to Ohio’s Every Student Succeeds Act (ESSA) plan.
 
 
Engage in a regional meeting to share your thoughts and perspective on the Every Student Succeeds Act (ESSA) and Ohio’s developing state plan. This meeting is an exciting opportunity to gather valuable input from a variety of perspectives—local educators, funders, parents, students, and community members. The meeting will include an introduction from State Superintendent Paolo DeMaria, a brief overview of ESSA, and group discussions around specific provisions and options.
 
ESSA, which passed Congress with bipartisan support and was signed into law by President Obama on Dec. 10, 2015, replaced the No Child Left Behind Act. It has shifted broad authority from the federal government to state and local agencies, providing them with greater flexibility and decision-making power. Ohio’s state plan, which is required by ESSA, will be submitted to the federal government in 2017 and will address topics such as standards, assessments, accountability and assistance for struggling schools.
 
This regional conversation is one of a series of conversations Philanthropy Ohio and its members, in partnership with the Ohio Department of Education, are convening across the state.
 
Join us for this important conversation, a tremendous opportunity for us to collectively discuss ESSA and how it will impact our students, educators and families in Ohio.
 
 
Registration
Register at http://columbusessa.eventbrite.com and share this invitation with colleagues and friends.
 
Cost
Philanthropy Ohio Members: Free
Eligible Non-members: Free
 
Registration Deadline
August 24
 
Questions?
Please email Adrienne Wells ([email protected]) or call 614.914.2249
 
The Thomas B. Fordham Institute is proud to co-host this ESSA stakeholder meeting for the central Ohio region.  Your input will be vital as the state redefines accountability in the post-No Child Left Behind era.
 
 
 
 
 

We call for more evaluation of Ohio education reform efforts, weigh the pros and cons of changes to the state's testing design, and more

We at Fordham recently released an evaluation of Ohio’s largest voucher initiative—the EdChoice Scholarship. The study provides a much deeper understanding of the program and, in our view, should prompt discussion about ways to improve policy and practice. But this evaluation also means that EdChoice is an outlier among the Buckeye State’s slew of education reforms: Unlike the others, it has faced research scrutiny. That should change, and below I offer a few ideas about how education leaders can better support high-quality evaluations of education reforms.

In recent years, Ohio has implemented policies that include the Third Grade Reading Guarantee, rigorous teacher evaluations, the Cleveland Plan, the Straight A Fund, New Learning Standards, and interventions in low-performing schools. Districts and schools are pursuing reform, too, whether by changing textbooks, adopting blended learning, or implementing professional development. Millions of dollars have been poured into these initiatives, which aim to boost student outcomes.

But very little is known about how these initiatives are impacting student learning. To my knowledge, the only major state-level reforms that have undergone a rigorous evaluation in Ohio are charter schools, STEM schools, and the EdChoice and Cleveland voucher programs. To be sure, researchers elsewhere have studied policies akin to those adopted in Ohio (e.g., evaluations of retention in Florida). Such studies can be very useful guides, and they might even inspire Buckeye leaders to find out what all the fuss is about. At the same time, it is critical to gather evidence on what works in our own state. Local context and conditions matter.

The research void means that we have no real understanding about whether Ohio’s education reforms are lifting student outcomes. Is the Third Grade Reading Guarantee improving early literacy? We don’t know. Have changes to teacher evaluation increased achievement? There isn’t much evidence on that either, at least not in the Buckeye State. The startling lack of information is not a problem unique to Ohio, but it does put us in a tough situation. We have practically no way of gauging whether course corrections are needed (if the results are null or negative) or if a program should be abandoned when consistently adverse impacts are uncovered. Neither do we know which approaches should be replicated or expanded based on positive findings.

Evaluation is no easy task, and there may be legitimate reasons why researchers haven’t turned a spotlight on Ohio’s reforms. Some are very new, and the time might not be ripe for a study. Moreover, there may not be a straightforward way to analyze a particular program’s impact. Only in rare cases can researchers conduct an experimental study that yields causal estimates. These include programs with admissions lotteries (due to oversubscription), as well as cases in which schools implement an experimental program by design. Even then, however, there are limitations. When such studies aren’t feasible, competent researchers can utilize rigorous quasi-experimental methods; yet given the data or policy design, isolating the impact of a specific program can be challenging. And further barriers may exist in the simple lack of funding or political will.

Policy makers can help to overcome some of these barriers by creating an environment that is more favorable to research and evaluation. Here are three thoughts on how to do this:

  1. Create small-scale pilots that provide sound evidence and quick feedback. Harvard professor Tom Kane suggests that there is an “urgent need for short-cycle clinical trials in education.” I agree. In Ohio, perhaps it could look something like this: On the Third Grade Reading Guarantee, the state could incentivize a group of districts to randomly assign their “off-track” students to different reading intervention programs. A researcher could then investigate the outcomes a year later, helping us learn something about which program holds the most promise. (It would be good to know the costs of each intervention as well.) In the case of ESSA interventions for Ohio’s lowest-performing schools, the state could encourage districts to randomly assign certain strategies to specific schools and then examine the results. Granted, these ideas would need some fleshing out. But the point is that designing policies with research pilots in mind would sharpen our understanding of promising practices. (A bare-bones sketch of the random-assignment step appears after this list.)
     
  2. Make collecting high-quality data a top priority. To its great credit, Ohio has developed one of the most advanced education information systems in the nation. For example, the state is among just a few that gather information on pupils in gifted programs. But the state and schools can do more, particularly around the reporting of course-level data that can support larger-scale research on curriculum and instruction. For instance, we’ve noticed some apparent gaps in the way AP course taking is documented. Another area in which Ohio can blaze new paths is the accurate identification of economically disadvantaged students. In addition, as Matt Chingos of the Urban Institute recently described, researchers can no longer rely on free and reduced-price lunch (FRPL) status as a proxy for poverty. An urgent priority for the state—it may require cross-agency cooperation—is to create a better way of indicating pupil disadvantage. A reliable marker of socioeconomic status is also critical for policy, as ESSA requires disaggregated test results.
     
  3. Include evaluation as a standard part of policy design on the front end. When designing a policy reform—whether at a state or local level—one question that should be asked is, “What is the plan for evaluating whether it’s working?” This might require the early engagement of researchers and the setting aside of funds. At the federal level, most programs come with such allocations; Ohio could do that for its big state-level reforms while also encouraging schools to set aside resources for local “R&D.” If evaluation becomes part of policy design on the front end, the benefits are two-fold. First, education leaders should get more timely results than if research were an afterthought, carried out much later in policy implementation (if at all). Second, turning evaluation into a standard practice could mitigate its political risk. Naturally, it is dicey to voluntarily order an evaluation, both for a given policy’s champions and its detractors. Advocates won’t want to see negative results, and no critic wants to see positive ones. But a transparent climate around research should lessen the risks of disseminating results.
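As promised in point 1, here is a bare-bones sketch of what random assignment in such a pilot might look like. The program names and student IDs are invented for illustration; the only point is that a simple, auditable shuffle is enough to create comparable intervention groups whose outcomes can be compared a year later.

    # Sketch only: randomly assign "off-track" readers to intervention arms.
    import random

    INTERVENTIONS = ["Program A", "Program B", "Program C"]  # hypothetical programs

    def assign_students(student_ids, seed=2016):
        """Shuffle the roster, then deal students to arms round-robin so group sizes stay balanced."""
        rng = random.Random(seed)   # a fixed seed makes the assignment reproducible and auditable
        ids = list(student_ids)
        rng.shuffle(ids)
        return {sid: INTERVENTIONS[i % len(INTERVENTIONS)] for i, sid in enumerate(ids)}

    print(assign_students(["S001", "S002", "S003", "S004", "S005", "S006"]))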

Everyone can agree that Ohio needs and deserves a world-class school system that improves achievement for all students. The purpose of education reform is to get us closer to that goal. But the research from Ohio is maddeningly sparse on which changes are working for Buckeye schools and students. Moving forward, authorities at the state and local level must ensure that rigorous evaluation becomes the rock on which reform stands.

 
 

The new education law of the land—the Every Student Succeeds Act (ESSA)—has been the talk of the town since President Obama signed it into law in December 2015. Under the new law, testing doesn’t initially seem that different from the No Child Left Behind (NCLB) days: ESSA retains the requirement that states administer annual assessments in grades 3–8 and once in high school; requires that test results remain a prominent part of new state accountability plans; and continues to expect states to identify and intervene in struggling schools based upon assessment results. But a closer look reveals that ESSA provides a few key flexibilities to states and districts—and opens the door for some pretty significant choices. Let’s take a look at the biggest choices that Ohio will have to make and the benefits and drawbacks of each option. 

Test design

There are two key decisions for states in terms of test design. The first is related to high school testing. ESSA permits districts to use “a locally selected assessment in lieu of the state-designed academic assessment” as long as it’s a “nationally recognized high school academic assessment.” In other words, Ohio districts could forego a statewide high school test by administering a nationally recognized test (like the ACT or the SAT) instead. There are two ways to make this happen: The Ohio Department of Education (ODE) can make such assessments available for districts to choose, or districts can submit an assessment to ODE. In both cases, ODE must approve the test to ensure that it (a) is aligned to state standards, (b) provides data that is both comparable to the statewide assessment and valid and reliable for all subgroups, and (c) provides differentiation between schools’ performance as required by the state accountability plan. 

There are pros and cons for districts that are interested in administering a nationally recognized test[1] rather than Ohio’s statewide assessment:

Pros

  • They are “nationally recognized tests” for a reason—districts can be sure that they are rigorous, widely accepted at colleges, and allow for easy performance comparisons.
  • Using a college admissions test like the SAT or ACT for both college entry and statewide accountability limits the number of tests students have to take.
  • Using a college entry exam as a high school test would set the proficiency bar at college readiness and lessen the honesty gap—depending on the score ODE chooses to equate with proficiency.
  • Using a college admissions test could increase the value of testing for students (and parents); many colleges and universities offer scholarships based on scores, so students may take these tests more seriously than they do current statewide assessments.
  • Ohio will soon pick up the tab for all juniors to take the ACT or SAT. This means that opting to use one of these tests won’t create extra costs for districts.

Cons

  • Nationally recognized tests are developed by national organizations, so districts that want an “Ohio-designed” assessment would be out of luck. This was a complaint that cropped up often during the battle over the PARCC assessment.
  • Ohio is currently working to revise its standards; if the revisions are extensive and move away from the Common Core, there may be no nationally recognized tests that are aligned to Ohio’s standards.
  • College admissions tests are designed to measure college readiness, not high school learning. Arguably high school learning and college preparation should be the same, but not everyone agrees—and merely saying something “should be the same” doesn’t mean it actually is.
  • The ELA portion of nationally recognized tests may adequately cover high school ELA classes, but the math, science, and civics portions of national tests often cover several subjects (like biology and chemistry) in one testing section. As a result, the test may not be an accurate measurement of learning for a specific subject the way an end-of-course exam would be.
  • Calculating value-added at the high school level for state report cards could become very complicated, and even impossible, based on the grade in which students take the assessment.

The second decision for states regarding test design is whether to utilize performance assessments. When measuring higher-order thinking skills, ESSA permits states to “partially” deliver assessments “in the form of portfolios, projects, or extended performance tasks.” These kinds of assessments have long been championed by teachers and have already been piloted in several Ohio districts. But should the state consider using them for all districts? Here’s a list of the pros and cons:

Pros

  • Using performance assessments would answer critics who claim that standardized testing is a “one-size-fits-all” measurement by evaluating a wider range of skills and content.
  • Performance assessments are considered more nuanced than traditional standardized tests, and they may offer a better picture of student learning (though more research would be helpful).
  • Ohio already has a good foundation in place for implementing a performance assessment, and there are models—like PACE in New Hampshire—that provide excellent examples.
  • Districts interested in competency education would benefit from assessments that effectively measure growth and mastery.

Cons

  • Important questions remain about who will design the assessments and how ODE can ensure rigor. If assessments are developed at the district level, that’s an extra burden on teachers.
  • Performance assessments require significantly more time to develop, administer, and grade. Considering that some parents and teachers have recently been frustrated by how long it takes to receive test scores, it doesn’t seem likely that additional grading time will be welcomed. In addition, Ohio’s most recent budget bill requires state test results to be returned earlier than in past years, making the implementation of performance assessments even more difficult.
  • A lack of standardization in grading can lead to serious questions about validity and reliability—and the comparability that a fair state accountability system depends on.

 

Test administration

ESSA allows states to choose whether to administer a single, summative assessment or “multiple statewide interim assessments” during the year that “result in a single, summative score.” However, Ohio’s most recent budget bill—which was crafted in response to the widespread backlash against PARCC—mandates that state assessments can only be administered “once each year, not over multiple testing windows, and in the second half of the school year.” In other words, current Ohio law prevents ESSA’s interim assessment option from being used for state accountability purposes.

Of course, laws can always be changed, and revising Ohio law to allow multiple windows for interim assessments would come with both benefits and drawbacks. A single, summative assessment like what’s currently in use saves time (in both test administration and grading) and minimizes costs. But choosing to maintain a single, end-of-year assessment also ignores the fact that many districts already utilize interim assessments to measure progress leading up to the statewide test. If most districts spend the money and time administering interim assessments anyway, it could be helpful to provide interims that are actual state tests in order to ensure alignment and valid, reliable data. On the other hand, districts that don’t use interim assessments won’t like being forced to do so—and districts that design their own interims may not like using state-designed ones.

For teachers who complain that state tests aren’t useful for day-to-day instruction, the interim assessment structure offers a solution. It also offers an intriguing new way to measure student progress—different from Ohio’s current value-added system—in which scores from the beginning of the year could be compared to scores at the end of the year. Given the very recent change in Ohio law prohibiting multiple testing windows, this ESSA-allowed flexibility will only come to pass if teachers and school districts make clear to lawmakers that it would make their lives easier.

Testing time

ESSA permits states to “set a target limit on the aggregate amount of time devoted” to tests based on the percentage of annual instructional hours they use. This isn’t a new idea: A 2015 report from former State Superintendent Richard Ross found that Ohio students spend, on average, almost twenty hours taking standardized tests during the school year (just 2 percent of the school year). Prior to the report, lawmakers had proposed legislation that would limit testing time, and there was much discussion—but no action—after the report’s publication. ESSA’s time limit provision could reignite Ohio’s debate, but the pros and cons remain the same: While a target limit could prevent over-testing, it would also likely end up as either an invasive limitation on district autonomy (many standardized tests are local in nature or tied to Ohio’s teacher evaluation system) or a compliance burden. Ironically, a statewide limitation on local testing time would be an unfortunate result of a law intended to champion local decision making based on the needs of students.

***

Unsurprisingly, the Buckeye State has a lot to consider when making these decisions. Determining what’s best for kids is obviously paramount, and a key element in the decision-making process should be the voices of teachers and parents. The success or failure of each of these decisions depends on implementation, and feedback from teachers and school leaders will offer a glimpse at how implementation will progress. Fortunately, ODE has already taken a proactive approach to ESSA decision making. Anyone interested in education in Ohio would do well to take advantage of these opportunities. The more feedback the department receives, the better.




[1] This comparison is based only on ACT/SAT tests, since it’s widely believed that both will qualify under the “nationally recognized” label. Education First also lists PARCC and Smarter Balanced as options, but Ohio has already exited the PARCC consortium, and the likelihood of the state opting for Smarter Balanced is low. There are other possibilities, but they are not explored here.

 

 
 

This blog was originally posted on Education Next on July 24, 2016.

The Thomas B. Fordham Institute recently released a study, authored by noted researcher Dr. David Figlio of Northwestern University, on the academic impact of Ohio’s flagship school choice program. The report is noteworthy for its principal findings: not only is the sky not falling for affected public schools, but the EdChoice program has had a positive impact on the academic performance of public schools whose students are eligible for a scholarship. Surprisingly, the study also found that the students who used scholarships to attend private schools—at least those the report was able to study (more on that later)—did not perform as well as their public school peers on the state test.

Matt Barnum of The 74 wrote an article that details some of the possible explanations for the latter finding. Based on my own experience in Ohio, I can attest that many nonpublic schools do not align their curriculum to the state test, nor do they focus much on these measures, and that is likely an important factor. However, it is important to note what the study could not address. As Dr. Figlio made clear in both his report and a presentation to The City Club of Cleveland, the study had significant limitations.

Ohio’s EdChoice program differs from most other school choice programs in a significant manner: student eligibility is determined solely by the performance of their assigned public schools. This has implications for how to study the program. The program creates real choice opportunities for students assigned to these schools (primarily the lowest-performing 10 percent in the state), removes an economic incentive for middle-class families to flee to the suburbs for better schools and larger properties (an important consideration for anyone knowledgeable about inner-ring suburbs and urban areas in Ohio), and creates market pressures for public schools to improve their performance, which the study confirms to be the case. This last argument is a cornerstone for many “free-market” school choice advocates and scholars, and the study’s most robust findings appear to confirm it.

The more surprising finding to both advocates and opponents was that using a scholarship to attend a private school appears to have led students to fare worse academically than had they remained in their public school. While there are many possible explanations for this, it is not worth spending time making excuses for why some students in private schools aren’t doing as well academically as some of their public school peers. Parents do make choices for their children based on many different factors. It may be that a child is not thriving at a particular school or is having social problems with a particular group of children; parents may disagree with teachers and/or the curriculum being taught; or they may desire a more faith-based approach to learning. Every child is unique and has their own needs. But certainly the expectation needs to be that, among other benefits, choice should lead to higher levels of academic achievement.

The challenge facing researchers and policymakers is determining how these students would have performed had they stayed in their assigned public schools. As noted before, EdChoice eligibility is based on the performance of their assigned public schools. The most apt comparison of academic performance would seem to be between scholarship students and their peers who are enrolled at their assigned school. We have some data on this, and as noted in an article published by the Thomas B. Fordham Institute in 2014, they indicate that scholarship students do outperform their public-school peers in many districts, in some cases by huge margins.

Consider Columbus. In 2013–14, voucher students in grades three to eight outperformed their district peers in math and reading at every grade level on Ohio Achievement Assessments (OAAs) and Ohio Graduation Tests (OGTs) (see table below). For instance, on the OGTs, 96 percent of voucher students were proficient in reading compared to 72 percent of public school students. On the math portion of the OGT, 85 percent of voucher students were proficient compared to 50 percent of their public school peers. Of third graders using a scholarship to attend a private school, 96 percent were proficient in reading compared to 55 percent of students in their assigned public schools. Similarly impressive scores, with some exceptions at certain grade levels, were posted in Cincinnati, Cleveland, Dayton, Toledo, and other districts.

So why didn’t this study show the same pattern? As Dr. Figlio notes, you can’t simply compare these two groups because there must be a reason why some students chose to leave their assigned schools and others did not. In fact, his study shows that, among eligible students, those who used scholarships to move to a private school were higher-performing and less likely to be from low-income families than those who did not. Accordingly, the best available comparison group in this context consists of students attending schools where they had no access to choice because those schools were not designated among the lowest-performing 10 percent in the state. Figlio’s study therefore compares students who left the highest-ranked schools that are eligible for EdChoice for private schools against observably similar students who remained in the lowest-ranked schools that are not. As a result, it tells us little about how well students using a scholarship (who would have otherwise attended the very lowest-performing public schools) are doing—a fact Dr. Figlio acknowledges. In addition, the set of schools from which the comparison group of students is drawn may also be problematic. While these schools appear to be similar on the surface, there could nonetheless be differences between a school that has never been designated as EdChoice and a school that has been consistently so designated. Finally, the study covered only the earlier years of this program. Further study may uncover more positive results—even using this methodology.

The bottom line is that we should be careful in interpreting these findings. Most importantly, the study was unable to examine the achievement of students assigned to the lowest-performing public schools. What we do know is that EdChoice has improved public schools, that parents like the choices they are provided, and that data do seem to indicate greater achievement by many students on scholarships. Despite all the creative headlines that this study has generated, a deeper dive into the report, coupled with intimate knowledge of the program and Ohio, gives us reason to believe that the program is beneficial. There are profound fiscal and equity arguments for school choice, as Dr. John C. White, Louisiana’s state superintendent, eloquently writes, and these programs take time to develop. By encouraging more private schools to participate, ensuring that parents have access to information on how their children are performing, and broadening the number of students eligible, Ohio can make this vital choice program an even greater benefit to the state and its citizens.

Rabbi Frank is the Ohio director of Agudath Israel of America.

 
 

This report from Civic Enterprises and Hart Research Associates provides a trove of data on students experiencing homelessness—a dramatically underreported and underserved demographic—and makes policy recommendations (some more actionable than others) to help states, schools, and communities better serve students facing this disruptive life event. 

To glean the information, researchers conducted surveys of homeless youth and homeless liaisons (school staff funded by the federal McKinney-Vento Homeless Assistance Act who have the most in-depth knowledge regarding students facing homelessness), as well as telephone focus groups and in-depth interviews with homeless youth around the country. The findings are sobering.

  • In 2013–14, 1.3 million students experienced homelessness—a 100 percent increase from 2006–07. The figure is still likely understated given the stigma associated with self-reporting and the highly fluid nature of homelessness. Under the McKinney-Vento Homeless Assistance Act, homelessness includes not just living “on the streets” but also residing with other families, living out of a motel or shelter, and facing imminent loss of housing (eviction) without resources to obtain other permanent housing. Almost seven in ten formerly homeless youth reported feeling uncomfortable talking with school staff about their housing situation. Homeless students often don’t describe themselves as such and are therefore deprived of the resources available to them.
  • Unsurprisingly, homelessness takes a serious toll on students’ educational experience. Seventy percent of youth surveyed said that it was hard to do well in school while homeless; 60 percent said that it was hard to even stay enrolled in school. Vast majorities reported homelessness affecting their mental, emotional, and physical health—realities that further hinder the schooling experience.
  • McKinney-Vento liaisons report insufficient training and awareness, as well as a lack of resources dedicated to the problem. One-third of liaisons reported that they were the only people in their districts trained to identify and intervene with homeless youth. Just 44 percent said that other staff were knowledgeable about the signs of homelessness and aware of the problem more broadly. And while rates of student homelessness have increased, supports have not kept pace. Seventy-eight percent of liaisons surveyed said that funding was a core challenge to providing students with better services; 57 percent said that time and staff resources were a serious obstacle.
  • Homeless students face serious logistical and legal barriers related to changing schools (which half reported having to do), such as fulfilling proof of residency requirements, obtaining records, staying up-to-date on credits, or even having a parent/guardian available to sign school forms.

Fortunately, there are policy developments that further shine the spotlight on students experiencing homelessness and equip schools to better address it. The recently reauthorized Every Student Succeeds Act (ESSA) treats homeless students as a subgroup and requires states, districts, and schools to disaggregate their achievement and graduation rate data beginning in the 2016–17 school year. ESSA also increased funding for the McKinney-Vento Education for Homeless Children and Youth program and attempted to address immediate logistical barriers facing homeless students—for instance, by mandating that homeless students be enrolled in school immediately even when they are unable to produce enrollment records. The report urges states to fully implement ESSA’s provisions related to homeless students and offers other concrete recommendations for schools (improving identification systems and training all school staff, not just homeless liaisons) as well as communities (launching public awareness campaigns and collecting better data). In Ohio, almost eighteen thousand students were reported as homeless for the 2014–15 school year. Policy makers would be wise to review this report’s findings and recommendations and consider how to implement and maximize ESSA’s provisions so that our most vulnerable students don’t fall through the cracks.

Source: Erin S. Ingram, John M. Bridgeland, Bruce Reed, and Matthew Atwell, “Hidden in Plain Sight: Homeless Students in America’s Public Schools,” Civic Enterprises and Hart Research Associates (June 2016).

 
 

In 2000, North Carolina’s university system (UNC) announced that it would increase from three to four the minimum number of high school math courses students must complete in order to be considered for admission. The intent was to increase the likelihood that applicants be truly college-ready, thereby increasing the likelihood of degree completion. Researchers from CALDER/AIR recently looked at the UNC data and connected it to K–12 student information to gain an interesting insight into how post-secondary efforts to raise the bar affect student course-taking behavior in high school.

The study posed three questions: Did the tougher college admission requirement increase the number of math courses taken by high school students (North Carolina’s high school graduation requirements remained at three math courses, despite UNC’s higher bar for admissions)?[1] Did it alter enrollment patterns at UNC schools? And did the hoped-for increase in college readiness and completion result?

Overall, high school students did take more math courses after the UNC policy change. As researchers expected, the biggest increases were at the middle- and lower-achievement deciles—high-achievers were already taking more than three courses—but the increases were not uniform across districts. This led researchers to look deeper into math sequences in specific districts across the state (urban, suburban, and rural) both before and after the new policy was announced. They found that some districts made no changes to existing sequences and that a number of them made it difficult to complete four courses by either being too lax (integrated math pathways rather than delineated Algebra/Geometry/etc.) or too stringent (strict prerequisites). Researchers posit that larger and better-resourced districts were able to make needed changes in their course sequences more readily than their counterparts (more and specialized teaching staff, textbooks, technology, etc.), but they do not discount the possibility that some districts and schools were simply unwilling to make the changes. They offer no answers as to why this might be, although researchers speculate that districts with low numbers of college-bound students might not have wanted to expend the resources for meager benefits. Either way, after the two-year policy rollout, any student in a district unable or unwilling to make the needed changes was effectively locked out of UNC—a disheartening thought, even for those who believe that college is not necessary for all kids. It’s also an ill omen for other bar-raising efforts that will come down the pike in the future.

Researchers did detect an increase in college enrollment, but one that fell within rather than beyond the “usual” achievement deciles. In other words, the new policy did not open the doors of college to more lower-performing students by building up skills in K–12, but it did seem to have the intended effect of giving incoming students more and better math training than in previous years. Ultimately, only minor increases in college completion could be associated with the new math requirement (keep in mind that we’re talking only about North Carolinian K–12 students going on to UNC campuses), but the students who did complete college likely reaped its benefits and took on less student debt along the way. Additionally, there seemed to be an unanticipated bump in the number of STEM-related majors among those graduates.

In the end, the UNC policy change seems to have done little to advance the laudable goal of increasing college completion. But the CALDER researchers have done an excellent job going the extra mile to show what effects post-secondary changes can have at the high school level. To wit: a continuing disconnect between high school graduation and college readiness.

SOURCE: Charles Clotfelter, Steven Hemelt, and Helen Ladd, “Raising the Bar for College Admission: North Carolina’s Increase in Minimum Math Course Requirements,” CALDER/AIR Working Paper (July 2016).


[1] A two-year delay in consequences (refused admission for those who hadn’t completed four math courses) for the new policy at UNC created excellent conditions for the researchers to determine whether observed changes in high school math sequences and student course-taking patterns were likely related to the UNC policy change.

 
 

In a previous blog post, we urged Ohio’s newly formed Dropout Prevention and Recovery Study Committee to carefully review the state’s alternative accountability system for dropout-recovery charter schools. Specifically, we examined the progress measure used to gauge student growth—noting some apparent irregularities—but didn’t cover in detail the three other components of the dropout-recovery school report cards: graduation rates, gap closing, and assessment passage rates. Let’s tackle them now.

Each of these components is rated on a three-level scale: Exceeds Standards, Meets Standards, and Does Not Meet Standards. This rating system differs greatly from the A–F grades issued by Ohio to conventional public schools; the performance standards (or cut points) used to determine their ratings are also different. One critical question that the committee should consider is whether the standards for these second-chance schools are set at reasonable and rigorous levels.

Graduation Rates

Dropout-recovery schools primarily educate students who aren’t on track to graduate high school in four years (some students may have already passed this graduation deadline). These schools are still held responsible for graduating students on time. Ohio, however, recognizes that dropout-recovery schools educate students who need extra time to graduate by assigning ratings for extended (six-, seven-, and eight-year) graduation rates in addition to the four- and five-year rates reported for all public schools. The standards for dropout-recovery schools’ graduation rates are also significantly relaxed when compared to traditional public schools. Even with these adjustments, it is unlikely that dropout-recovery schools’ four- and five-year rates are reflective of their true performance (a problem, as we discuss here, for any high school enrolling low-achieving adolescents).

Below, Table 1 displays the four-year graduation rate targets for dropout-recovery schools, along with the five- to eight-year graduation rate targets. Also shown are the standards for traditional public schools (graded on an A–F scale). As you can see, the standards for dropout-recovery schools are considerably lower. For example, a dropout-recovery school could graduate less than 50 percent of a cohort of students and still earn the top rating (Exceeds Standards); meanwhile, traditional schools are required to graduate virtually all of their students (at least 89 percent) to earn an A or B rating. Note also that the five- to eight-year graduation rate targets are higher than the four-year targets, as schools are held to slightly higher standards in light of the additional time students have to earn their diplomas.

Table 1. Graduation rate performance standards

But are the adjusted graduation rate standards too low for dropout-recovery schools? To determine this, a good first step might be to compare these standards to the ones set for dropout-recovery schools in other states (e.g., Texas or Arizona). Another way of tackling the question is to examine the current distribution of Ohio’s ratings. If nearly all schools were exceeding standards, then it would be fair to say that they are probably too low. But as Chart 1 indicates, that’s not the case.

Chart 1. Distribution of four-year graduation rate ratings, dropout-recovery schools, 2014–15

As readers can see, there is a fairly even balance across the categories, indicating that the standards could be deemed appropriate for this unique group of schools. That being said, the graduation standards are rather low in absolute terms, and policy makers should consider ratcheting up targets in a reasonable way (perhaps by phasing in incrementally higher standards over multiple years). They could also consider making adjustments to the way graduation rates are calculated.

Assessment Passage Rate

The assessment passage rate measures the percentage of students in twelfth grade, or students within three months of turning twenty-two, who pass the Ohio Graduation Test (OGT) or the End-of-Course exams (beginning with the 2018 graduating class). Ratings are assigned to schools depending on what percentage of their students pass the graduation exams (each subject exam, such as math or social studies, must be passed). A–F report cards for conventional public schools do not use this type of measure, so we don’t display a side-by-side comparison of standards. Dropout-recovery schools with assessment passage rates above 68 percent receive an Exceeds Standards rating; schools whose assessment passage rate is between 32 percent and 68 percent receive Meets Standards; and schools with an assessment passage rate of less than 32 percent receive a Does Not Meet Standards rating.
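For clarity, the cut points just described can be written as a simple lookup. (The treatment of a school sitting exactly on a boundary is our reading of the rule, not an official detail.)

    # Sketch of the assessment passage rate cut points described above.
    def passage_rate_rating(passage_rate_pct):
        if passage_rate_pct > 68:
            return "Exceeds Standards"
        elif passage_rate_pct >= 32:       # "between 32 percent and 68 percent"
            return "Meets Standards"
        else:
            return "Does Not Meet Standards"

    print(passage_rate_rating(75))  # Exceeds Standards
    print(passage_rate_rating(50))  # Meets Standards
    print(passage_rate_rating(20))  # Does Not Meet Standards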

Again, to get a sense of whether the performance standards are set at reasonable levels, we can look at the distribution of school ratings. Chart 2 shows that the ratings are fairly well balanced, with most rated schools falling into the Meets Standards category. This suggests that the standards are set at appropriate levels for the passage rate component. Surprisingly, however, twenty-nine schools were not assigned ratings in this component, though it is unclear why these schools were not rated. The fact that almost one in three dropout-recovery schools did not receive a rating is troubling and worthy of further investigation.

Chart 2. Distribution of assessment passage rate ratings, dropout-recovery schools, 2014–15

Gap Closing (Annual Measurable Objectives)

The gap-closing measure gauges how well a school narrows subgroup achievement gaps. This is measured by the percentage of proficiency targets a school meets for certain student subgroups. The very intricate methodology for calculating the percentage of targets met is available here. As we’ve argued elsewhere, this measure has some imperfections, and the state should reconsider its use—at least in its present form—as an accountability measure applying to any public school.

That being said, so long as it plays a key role in school report cards, we should examine the results. Table 2 displays the performance benchmarks for dropout-recovery schools and traditional public schools. The percentages displayed are equal to the number of gap-closing points a school earns divided by the number of points possible. Just as with graduation rates, dropout-recovery schools are held to considerably lower standards than conventional schools. On the one hand, this is understandable given the academic backgrounds of the students they serve. On the other hand, the standards appear at face value to be entirely too low: A dropout-recovery school could earn just 1 percent of the gap-closing points possible and receive a Meets Standards rating.

Table 2. Gap-closing performance standards

Chart 3 displays the distribution of gap-closing ratings for dropout-recovery schools. Unlike the graduation rate and test passage rate components, the distribution for gap closing appears imbalanced and skewed toward the Does Not Meet category. As low as the gap-closing dropout-recovery standards may be, they still result in failing ratings for a disproportionate number of schools (and a modest number of non-rated schools).

Chart 3: Distribution of gap-closing ratings, dropout-recovery schools, 2014–15

* * *

A fair school accountability system recognizes circumstances that are beyond the control of schools. But it should do so without creating glaring “double standards,” at least when it comes to graduation and student proficiency indicators. Fortunately, this is not as much of a concern when growth is the measuring stick—another reason why getting the progress measure right is absolutely essential. The information above suggests that some important questions still need to be answered around accountability for dropout-recovery schools. Getting standards right for Ohio’s dropout-recovery schools can ensure that these second-chance schools are indeed helping young people advance their education.

 
 
 
 

We get up-to-date on Ohio’s voucher program, Academic Distress Commissions, and charter law reform

Shortly after Ohio lawmakers enacted a new voucher program in 2005, the state budget office wrote in its fiscal analysis, “The Educational Choice Scholarships are not only intended to offer another route for student success, but also to impel the administration and teaching staff of a failing school building to improve upon their students’ academic performance.” As economist Milton Friedman had theorized decades earlier, Ohio legislators believed that increased choice and competition would boost education outcomes across the board. “Competition,” in the words of Stanford’s Caroline Hoxby, “would be the proverbial rising tide that lifts all boats.”

Today, the EdChoice program provides publicly funded vouchers (or “scholarships”) to more than eighteen thousand Buckeye students, youngsters previously assigned to some of the state’s lowest-performing schools, located primarily in low-income urban communities.[1] That much is known. Yet remarkably little else is known about the program. Which children are using EdChoice when given the opportunity? Is the initiative faithfully working as its founders intended? Are participating students blossoming academically in their private schools of choice? Does the increased competition associated with EdChoice lead to improvements in the public schools that these kids left?

The present study utilizes longitudinal student data from 2003–04 to 2012–13 to answer these important questions. Specifically, the analysis utilizes the results from state tests—which all EdChoice students are required to take—to examine the vouchers’ effects on two groups of pupils. First, the study inspects the scores of public school students who were eligible for vouchers—but did not take one—in order to gauge the competitive effects of EdChoice (i.e., its impact on traditional public school students and their schools). Second, it examines the academic impact of EdChoice on those students who actually use the vouchers to attend private schools.

This is the first study of EdChoice that uses individual student-level data, allowing for a rigorous evaluation of the program’s effectiveness. (Earlier analyses by Matthew Carr and Greg Forster used school-level data to explore its competitive impact.) To lead the research, we tapped Dr. David Figlio of Northwestern University, a distinguished economist who has carried out examinations of Florida’s tax credit scholarship program. He has also written extensively on school accountability, teacher quality, and competition. Given his experience, Dr. Figlio is exceptionally qualified to lead a careful, independent evaluation of Ohio’s EdChoice program.

In this report, he sets forth three main findings:

  • While the students who participate in EdChoice—the pupils who actually use a voucher to attend private schools—are primarily low-income and minority children, they are relatively less disadvantaged than other voucher-eligible students. Figlio reports that more than three in four participants are economically disadvantaged, and three in five are black or Hispanic. Viewed in relation to Ohio’s public school population as a whole, students in EdChoice are highly disadvantaged—not surprising, given eligibility rules that require participants to have attended a low-achieving public school. But relative to students who are eligible for vouchers but choose not to use them, the participants in EdChoice are somewhat higher-achieving and somewhat less economically disadvantaged. This finding may be, in part, an artifact of the program’s basic design: It allows private schools to retain control over admissions, and a child must gain admission into a private school before he or she can apply for a voucher. This multi-step process might be more easily navigated by relatively more advantaged families; their children might also be more likely to meet the private schools’ admissions requirements.
     
  • EdChoice improved the achievement of the public school students who were eligible for the voucher but did not use it. When examining the test results of pupils attending public schools just above and below the eligibility threshold, the analysis finds that achievement in math and reading rose modestly as a result of voucher competition. (The analysis leverages the state’s voucher eligibility rules to isolate voucher competition from other potential competitive effects, such as charter schools.) In other words, the voucher program has worked as intended when it comes to competitive effects. Importantly, this finding helps to address the concern that such programs may hurt students who remain in their public schools, either as a result of funds lost by those schools or the exodus of higher-performing peers. Quite the opposite has occurred in the case of EdChoice: Achievement improved when the voucher program was introduced and public schools faced stiffer competition (and the risk of losing their own students).
     
  • The students who use vouchers to attend private schools have fared worse academically compared to their closely matched peers attending public schools. The study finds negative effects that are greater in math than in English language arts. Such impacts also appear to persist over time, suggesting that the results are not driven simply by the setbacks that typically accompany any change of school.

Let us acknowledge that we did not expect—or, frankly, wish—to see these negative effects for voucher participants; but it’s important to report honestly on what the analysis showed and at least speculate on what may be causing these results. One factor might be related to the limits of credible evaluation: while the rigor of the methodology ensured “apples-to-apples” comparisons of student achievement, Dr. Figlio was limited to studying students who attended (or had left) public schools that were just above or below the state’s cutoff for “low-performing.” By definition, this group did not include the very lowest-performing schools in the state. It’s possible that students who used a voucher to leave one of the latter schools might have improved their achievement; we simply cannot know from this study. The negative effects could also be related to different testing environments—higher stakes for public than private schools—or to curricular differences between what is taught in private schools and the content that’s assessed on state tests. Finally, although this analysis does not enable us to identify individual schools as high- or low-performing, it may be the case that some of the private schools accepting EdChoice students are themselves not performing as well as they should.

***

Taken as a whole, the results reported here for Ohio’s EdChoice program—one of the nation’s largest voucher programs—are a mixed bag. The program benefited, albeit modestly, thousands of public-school students; yet among the relatively small number of participants studied here, the results are negative. The study mirrors important trends seen in other voucher research. The modest, positive competitive effect on public school achievement replicates findings from jurisdictions like Florida, Louisiana, and Milwaukee, where voucher competition also appeared to improve public school outcomes. These findings are, of course, encouraging for advocates of competition and choice. Yet this study also extends a recent (and, to us, unwelcome) trend that finds negative effects for voucher participants in large statewide programs. While earlier evaluations of privately and publicly funded scholarship programs—usually administered at the city level—found neutral-to-positive impacts on participants, newer studies of Louisiana’s and Indiana’s statewide programs have uncovered negative results, particularly in math.

There’s been much discussion about what might be behind these participant results. Is too much regulation discouraging high-quality private schools from joining the program? Are state exams failing to capture important private school contributions to student success? Do large, statewide programs lack the tools and resources to ensure quality at scale? Or are private schools simply struggling to raise achievement—especially in math—in relation to their public school counterparts? Some or all of these (or other) factors may be at work, but no one really knows for certain. More research on the effects of statewide voucher programs is obviously warranted.

Even though we don’t have all the answers, we believe that thoughtful policy makers can draw from the extant research as well as on-the-ground experience to give these programs the best chance of succeeding for more students, whether attending public or private schools. The pertinent lessons seem to us applicable both in states considering new private school choice programs and in states (like Ohio) that are seeking to improve an existing program.

First, we need to foster a healthy, competitive environment in K–12 education. A competitive jolt can awaken sleepy, lazy, or slipshod schools to clean up their act and attend more closely to the academic needs of their students. On the policy side, this means that lawmakers should continue to encourage a rich supply of school options, including not just private schools (in their many flavors, including religious and non-sectarian) but also public charter, STEM, and career and technical schools. At the same time, families can do their part by demanding more quality school choices. Competition and choice—two sides of the same coin—can incentivize all schools to work harder at meeting the needs of their pupils.

Second, policy makers should resist calls to pile more input-based regulations upon voucher-accepting private schools. Ohio’s private schools already face heavier regulation than those in many states. For example, they must adhere to state operating standards and hire state-licensed or certified teachers. Most of this was true before EdChoice came along (which makes less likely the “overregulation” explanation for disappointing participant results, at least in Ohio). Policy makers should tread lightly when adding to schools’ regulatory burdens: After all, freedom from regulation is precisely what makes private schools different and—for many—worth attending in the first place.

Third, as this study suggests, private schools likely vary when it comes to quality, and the public needs maximum transparency about this. Accordingly, state leaders should help families better understand the quality of their options by providing easy-to-compare information on the performance of voucher-accepting private schools. While Ohio already reports voucher students’ proficiency rates at the school level (subject to FERPA limitations), we know that those results are likely to be conflated with non-schooling factors like family income. They are also hard to track down. To be fair to private schools that educate disadvantaged voucher pupils, we suggest the adoption of a value-added measure—a school quality indicator that is more poverty-neutral than conventional academic proficiency rates. States (including Ohio) should make sure that these academic outcomes for voucher-accepting private schools are easily accessible to parents, perhaps in a report-card-like format akin to those adopted for public schools. In Ohio, this would not impose any additional testing or regulatory requirements on private schools.

Fourth, policy makers should craft simple, parent-friendly program rules. From the perspective of families, EdChoice is fairly complex, which may have influenced who participates in it. Eligibility hinges on public schools’ annual ratings from the state—which can change from year to year—and the state has no obligation to notify parents of their children’s eligibility. This means that families must bestir themselves to visit the state’s website or seek eligibility information through other channels. To ensure awareness, states should require direct notification of eligibility from the state department of education or a competent nonprofit agency. (This should also happen when eligibility is based on income.) Making matters more complicated, current EdChoice application rules require eligible students first to gain admission to a private school; then the school applies to the state for a voucher. It would be far simpler for parents if they could apply directly to the state for a voucher and then shop for the right private school. This process would not only empower parents but also give policy makers a much clearer picture of the demand for vouchers.

The present report breaks important new ground, but it is by no means the final word on EdChoice. We still have much to learn, including whether vouchers impact non-testing outcomes such as post-secondary success. We also need a deeper understanding about the quality of individual private schools. But the information set forth in the pages that follow is critically important as thoughtful policy makers consider the design and implementation of voucher programs, both in Ohio and across the nation. Programs that aim to better the lives of children must face scrutiny from independent, credible evaluators. Even when its findings are unexpected and painful, rigorous, disinterested evaluation remains the best way to prod improvements and make progress toward the program’s goals. In the case of EdChoice, the program appears to have met one of the two objectives conceived by its founders: Competition has spurred some public school improvement. The challenge ahead is to forge a stronger EdChoice program, one that can lead to widespread academic improvements for children who take their scholarships to the state’s private schools.


[1] In June 2013, Ohio lawmakers created a new voucher program, referred to as the EdChoice Expansion program, for which eligibility is based on family income. This program is starting by phasing in kindergarteners and expanding by one grade level per year. The present research does not cover the income-based EdChoice Expansion. It is limited to the original EdChoice program for which eligibility depends on having attended a low-performing district school.

 

 
 

No Child Left Behind (NCLB) required states to identify and intervene in persistently low-performing schools. Some states opted for more aggressive intervention with the creation of recovery school districts, including the Achievement School District in Tennessee, the Recovery School District in Louisiana, and the Education Achievement Authority in Michigan. Here in the Buckeye State, we don’t have a statewide recovery district—but we do have “academic distress commissions” (ADCs).

ADCs were added to Ohio state law in 2005 as a way for the state to intervene in districts that consistently fail to meet academic standards. Only two districts (Youngstown and Lorain) have ever been placed under ADC control, while a third (Cleveland) avoided the designation because of its implementation of the Cleveland Plan. In the summer of 2015, however, ADCs blasted onto the front pages of Ohio newspapers thanks to House Bill 70. The bill—widely known as the “Youngstown Plan”[1]—sharpened the powers and duties of ADCs in Ohio and was signed into law by Governor Kasich in July. (See here for an overview of the bill’s biggest changes to ADCs.)

In December 2015, the long-awaited reauthorization of NCLB became a reality with the Every Student Succeeds Act (ESSA). The biggest change was the devolution of some authority from the federal government to states—including much greater discretion over how to identify persistently failing schools and what to do about them.

A close read of the new law suggests that it won’t conflict with Ohio’s current ADC structure. ESSA requires the identification of individual schools, while an ADC identifies an entire low-performing district. As long as the Buckeye State follows ESSA’s requirements, it shouldn’t trouble the feds that Ohio also intervenes in struggling districts—particularly since ESSA permits “additional statewide categories of schools” that are identified “at the discretion of the state.”

But as policy makers work to develop Ohio’s state accountability plan under ESSA, it’s worth asking whether the current system of ADCs can provide some helpful lessons and a head start on designing the school-based identification and intervention guidelines required under the new federal law. Here’s a look at a few of the key elements:

School identification

Ohio statute requires that any district receiving an overall grade of F for three consecutive years be placed under the control of an ADC. This is a clear measure that is based on Ohio’s accountability report card data, which the state has been using for some time. ESSA, meanwhile, requires that struggling schools be given a designation of “comprehensive support” (any Title I school that’s in the bottom 5 percent statewide or fails to graduate 67 percent of its students) and “targeted support” (any Title I school with a subgroup that is labeled “consistently underperforming” by the state). While the comprehensive support criteria are clearly defined in federal law, the definition of “consistently underperforming” for targeted support is left to states. Ohio is well positioned to use its robust school report card measures—already utilized in the context of ADCs—to develop that definition. Ohio’s accountability system will have to undergo some relatively minor changes under ESSA, but these changes won’t require the state to stop using report cards and letter grades.

Locally made plans

ADCs are required to get local input in the form of a CEO-convened community stakeholder group,[2] which is tasked with “developing expectations for academic improvement” and “building relationships with organizations in the community that can provide services to students.” Under ESSA, both comprehensive support and targeted support schools will also be subject to improvement plans. What these plans must contain matters, but how they’re developed is just as important—and schools in both categories are required to seek stakeholder input.

To be fair, ADCs and local control haven't exactly gotten along so far. The rapid proposal and passage of House Bill 70 was controversial and drew condemnation from the very stakeholders that ESSA champions—school leaders, teachers, and parents. This should be a warning to Ohio as it devises its ESSA-required intervention strategies: If policy makers want the system to work, they would be wise to earnestly seek local input on how to craft plans for comprehensive support and targeted support schools. 

Creating school choice

Like NCLB before it, ESSA permits districts to allow students who are enrolled in persistently failing schools the option of transferring to another public school served by the district. Ohio’s ADC legislation has similar provisions that the state could consider mirroring for ESSA purposes. First, any student enrolled in an ADC-designated district is eligible to participate in the EdChoice Scholarship Program, Ohio’s largest voucher program. Second, the ADC is responsible for expanding “high-quality school choice options in the district” and can do so by creating a high-quality school accelerator—an organization that is not operated by the district and is responsible for attracting and recruiting high-quality sponsors and schools. The accelerator model could further empower families at comprehensive support and targeted support schools while interventions are underway. 

Exit criteria

ESSA requires a school identified for intervention to meet a set of high expectations in order to “exit” identification and its associated interventions. The law permits states to craft their own exit criteria as long as schools that fail to meet those criteria within a certain number of years are subjected to “more rigorous state-determined action.” USDOE’s proposed regulations contain some additional stipulations, including that schools must meet exit criteria within four years.

Ohio has already developed similar exit criteria for its ADCs. First, a district must earn an overall grade of C on the state report card. Once that benchmark is met, the district begins a transition period. If it maintains an overall grade higher than F for two consecutive years after the first C, it ceases to be under ADC control. If, however, the district receives an F during the transition period, it reverts back to its ADC designation.
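To make these transition rules concrete, here is a minimal sketch in Python of the exit logic described above. The function, its name, and the grade-history format are illustrative assumptions rather than anything spelled out in statute; only the earn-a-C, stay-above-F-for-two-years, and revert-on-F rules come from the paragraph above.

```python
def adc_exit_status(grade_history):
    """Illustrative sketch of the ADC exit rules summarized above.

    `grade_history` is a hypothetical chronological list of a district's
    overall report card grades while under ADC control, e.g.
    ["F", "F", "C", "D", "C"]. Only the rules described in the text are
    encoded: earn an overall C to start the transition period, then stay
    above F for two consecutive years to exit; an F during the transition
    reverts the district to ADC control.
    """
    in_transition = False
    consecutive_above_f = 0

    for grade in grade_history:
        if not in_transition:
            # Benchmark: an overall C (read here as "C or better") starts the transition.
            if grade in ("A", "B", "C"):
                in_transition = True
                consecutive_above_f = 0
        elif grade == "F":
            # An F during the transition period reverts the district to ADC control.
            in_transition = False
            consecutive_above_f = 0
        else:
            consecutive_above_f += 1
            if consecutive_above_f >= 2:
                return "exited ADC control"

    return "in transition period" if in_transition else "under ADC control"


# Example: a C in year three followed by two consecutive years above F.
print(adc_exit_status(["F", "F", "C", "D", "C"]))  # -> exited ADC control
```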

ADC statute also requires rigorous action long before USDOE’s fourth year. Starting in the first year of the existence of an ADC, if a district doesn’t earn an overall C grade, it is subject to an increasingly severe ladder of interventions—which includes reconstituting any school, altering or suspending collective bargaining agreements, and appointing a new board of education. In theory, Ohio could turn some of these criteria and consequences into its exit standards for ESSA-identified schools; however, the backlash against the Youngstown plan could make that move difficult.

***

ESSA’s focus on school identification and an ADC’s attention to district performance could lead some to assume that the two laws aren’t compatible. But they’re far from being mutually exclusive. In fact, the dual focus on schools and districts could work in Ohio’s favor, since ESSA requires states to “provide technical assistance” and support to each district that serves “a significant number” of comprehensive support and targeted support schools. It stands to reason that any district with a significant number of these schools is already or soon to be a candidate for ADC designation, so it makes sense that the state would oversee interventions at the district level while the district oversees interventions at the school level.

Furthermore, Ohio’s experience in developing and working with ADCs gives it a distinct advantage in considering how to design school-level interventions. Ohio’s ADC legislation has plenty of similarities to ESSA, and these similarities should contribute to the state’s development of a strong identification and intervention plan for struggling schools.


[1] Despite being nicknamed the “Youngstown Plan,” HB 70 doesn’t specifically mention Youngstown; on the contrary, it applies statewide and significantly alters the way any ADC—whether already existing or established in the future—is run.

[2] A CEO is appointed by an academic distress commission to lead district improvement efforts. The CEO-convened stakeholder group can include (but is not limited to) educators, civic and business leaders, representatives of higher education institutions, and government service agencies. Additional groups are created for each school and must consist of teachers and parents.

 

 
 

Eighteen months ago, Ohio proved it was finally serious about cleaning up its charter sector, with Governor Kasich and the Ohio General Assembly placing sponsors (a.k.a. authorizers) at the center of a massive charter law overhaul. The effort aimed to hold Ohio’s sixty-plus authorizers more accountable—a strategy based on incentives to spur behavioral change among the gatekeepers of charter school quality. Poorly performing sponsors would be penalized, putting a stop to the fly-by-night, ill-vetted schools that gave a huge black eye to the sector and harmed students. Under House Bill 2, high-performing sponsors would be rewarded, which would encourage authorizing best practices and improve the likelihood of greater quality control during all phases of a charter’s life cycle (start-up, renewal, closure).

While the conceptual framework for these sponsor-centric reforms is now toddler-aged, the actual reforms are still in their infancy. (House Bill 2 went into effect in February of this year, and the earlier enacted but only recently implemented sponsor evaluation is just now getting off the ground.) Even so, just five months in, HB 2 and the comprehensive sponsor evaluation system are having an impact. Eleven schools were not renewed by their sponsors, presumably for poor performance, and twenty more are slated to close. Not only are academically low-performing schools prohibited under law from hopping to a new sponsor,[i] but it also appears that Ohio’s sponsors are behaving more cautiously in general—making it less likely that they will take on even mediocre performers (not just those for whom such prohibitions apply). Even more important, if the data in Graph 1 is any indication, sponsors seem to be applying caution to new school applicants as well—meaning that it’s far less likely that half-baked schools will open in the first place. Given Ohio’s recent track record of mid-year charter closures, this is a major victory, though a dramatically slowed opening rate sustained over time may indicate looming issues for the sector.

For the 2016–17 school year, the Ohio Department of Education lists just ten “potential” new charter schools, an all-time low in the number of charters opened in a given year. At least one is a replication of an existing top-notch school (Breakthrough’s Village Preparatory model). As Graph 1 depicts, this is significantly fewer than in past years.

Graph 1: Charters opened each year, 1999–2016

Source: Data come from the enrollment history listed in the Ohio Department of Education’s annual community schools report  (2015).[ii]

In the days leading up to HB 2’s passage last October, we urged Ohio to expect more from charter school sponsors during the new school application process: “Closing poor performers when they have no legs left to stand on is only a small part of effective oversight. A far better alternative would be to prevent them from opening in the first place….If Ohio charter schools are going to improve, better decision making about new school applicants will be critical.”

Ohio’s charter reforms appear to have influenced sponsor behavior in opening new schools. There are likely other reasons contributing to the overall slowdown—saturation in the academically challenged communities to which start-ups are restricted, for example, and a lack of start-up capital—but new, higher sponsor expectations and a rigorous sponsor evaluation system undoubtedly have played a sizeable role.

Sponsors await the first round of performance ratings this fall, with one-third of their overall scores tied to the performance of the schools in their portfolios. Fordham remains concerned that the academic component leans too heavily on performance indicators strongly correlated to demographics rather than student growth, making it difficult if not impossible for even good sponsors to earn high marks in this category. At minimum, however, the rigor of the evaluation is spurring long-overdue changes to the new school vetting process and making all sponsors think long and hard before handing out charter contracts.

For the moment, slowing new charter growth is a good thing. The state has just started down the path of serious charter school reform, and the sector, which was too lax for too long, is in need of balance. Like any pendulum, however, the sector could swing too far in the other direction—if, for instance, the opening rate continues to stall or stops entirely. If even the state’s best sponsors can’t open new schools, the sponsor accountability framework will need revision so that families and students are not deprived of high-quality choices. Stagnant growth could also occur if Ohio charters continue to be starved of vital start-up funds. It is telling—and worrisome—that just one of Ohio’s proposed new charter schools is a replication of an existing high-quality network. It’s also troublesome that Ohio’s $71 million in funds from the federal Charter School Program—a program responsible for helping the state’s best charter networks start and expand—remains on hold.

Now that the state has accomplished the tough task of revising its charter law, Ohio needs to consider ways to make it easier for the best charters to expand and serve more students. Leaders of Ohio’s best charter schools point to inequitable funding, lack of facilities, and human capital challenges as serious impediments to growth. Lawmakers should explore ways to fast-track the replication of Ohio’s best charters while keeping an eye more broadly on preserving the autonomy of the sector, rewarding innovation and risk-taking, and resisting efforts to over-regulate. If we are serious about lifting outcomes for Ohio’s underserved children and preventing the sector from too closely mimicking traditional public schools, these issues demand our attention. For the time being, it’s worth celebrating early milestones indicating that Ohio is on its way to serious improvement.


[i] Low-performing charter schools—those receiving a D or F grade for performance index and a D or F grade for value-added progress on the most recent report card—cannot change sponsors unless several stipulations apply: the school finds a new sponsor rated effective or better, hasn’t switched sponsors in the past, and gains approval from the Ohio Department of Education.

[ii] The number of opens for 2014 slightly contradicts the number provided in my past article, “Expecting more of our gatekeepers of charter school quality.” This article lists forty-eight opens, per Ohio’s enrollment records. However, those same records list no students for several of the fly-by-night schools that did in fact open (and shut mid-year). The previous article counted those five schools for the purposes of illustrating the especially high numbers of poorly vetted schools opening in 2014.

 

 
 

On June 22, the Dropout Prevention and Recovery Study Committee met for its first of three meetings this summer. The committee is composed of two Ohio lawmakers (Representative Andrew Brenner and Senator Peggy Lehner) and several community leaders. It was created under a provision in House Bill 2 (Ohio’s charter reform bill) and is tasked with defining school quality and examining competency-based funding for dropout-recovery schools by August 1.  

Conducting a rigorous review of the policies governing the state’s ninety-four dropout-recovery charter schools is exactly the right thing to do—not only as a legal requirement, but also because these schools now educate roughly sixteen thousand adolescents. The discussion around academic quality is of particular importance. These schools have proven difficult to judge because of the students they serve: young adults who have dropped out or are at risk of doing so. By definition, these kids have experienced academic failure already. So what is fair to expect of their second-chance schools?

Let’s review the status of state accountability for dropout-recovery schools and take a closer look at the results from the 2014–15 report cards. In 2012–13, Ohio began to provide data on the success of its dropout-recovery schools on an alternative school report card—a rating system that differs from that of traditional public schools. Dropout-recovery schools, for example, do not receive ratings for the conventional Performance Index or Value Added measures; neither are they assigned ratings on an A–F scale like other Ohio public schools. In 2014–15, the state began rating these schools as “Exceeds Standards,” “Meets Standards,” or “Does Not Meet Standards.”

The overall ratings are calculated by following a two-step process outlined in Tables 1 and 2. First, points are assigned to schools based on the four individual report card components: graduation rate (a composite of the four-, five-, six-, seven-, and eight-year adjusted cohort rates); the twelfth-grade assessment passage rates on all of the Ohio Graduation Tests or the end-of-course assessments once they are phased in; gap closing (a.k.a. Annual Measurable Objectives); and student progress. Each school receives a rating of Exceeds, Meets, or Does Not Meet for each component; points are awarded based on that designation.

As you can see from Table 1, the graduation rate and progress components are weighted somewhat more heavily than the other two (30 percent versus 20 percent). Once all component points are added and divided by the total possible points, Table 2 can be referenced to determine which rating a school receives. When it came to overall school ratings for 2014–15, 43 percent of dropout-recovery schools received a Does Not Meet rating, 49 percent received a Meets rating, and 8 percent received an Exceeds rating (a total of ninety-three schools were part of this accountability system).

Table 1. Points assigned to dropout-recovery schools based on component ratings

Table 2. Overall ratings assigned, based on the summation of points earned per category
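To illustrate the two-step arithmetic described above, here is a rough Python sketch. The 30/20 percent component weights come from the text; the points awarded per component rating and the overall cut points appear only in Tables 1 and 2, so the values used below are placeholders chosen purely for illustration.

```python
# Rough sketch of the two-step dropout-recovery rating calculation described above.
# The component weights (30%/20%) come from the text; the points-per-rating values
# and the overall cut points are placeholders, since the actual figures appear only
# in Tables 1 and 2.

COMPONENT_WEIGHTS = {
    "graduation_rate": 0.30,
    "progress": 0.30,
    "assessment_passage": 0.20,
    "gap_closing": 0.20,
}

# Placeholder points per component rating (the real values are in Table 1).
POINTS = {"Exceeds": 2, "Meets": 1, "Does Not Meet": 0}


def overall_rating(component_ratings):
    """`component_ratings` maps each component to its Exceeds/Meets/Does Not Meet rating."""
    # Step 1: award weighted points for each component rating.
    earned = sum(COMPONENT_WEIGHTS[c] * POINTS[r] for c, r in component_ratings.items())
    possible = sum(COMPONENT_WEIGHTS[c] * max(POINTS.values()) for c in component_ratings)

    # Step 2: divide points earned by possible points and look up the overall rating
    # (placeholder cut points standing in for Table 2).
    share = earned / possible
    if share >= 0.75:
        return "Exceeds Standards"
    if share >= 0.40:
        return "Meets Standards"
    return "Does Not Meet Standards"


print(overall_rating({
    "graduation_rate": "Meets",
    "progress": "Does Not Meet",
    "assessment_passage": "Meets",
    "gap_closing": "Meets",
}))  # -> Does Not Meet Standards
```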

The graduation rate, assessment passage, and gap-closing measures generally mirror traditional school report cards (though with some modifications). But one of the more interesting aspects of the dropout-recovery report cards is their progress measure (i.e., student growth over time). To measure student growth, dropout-recovery schools use NWEA’s Measures of Academic Progress (MAP) test rather than calculating gains on state exams. (State law specifies the use of a “nationally norm-referenced” assessment to measure progress in dropout-recovery schools. [1]) To gauge progress, dropout-recovery students must have taken both the fall and spring administrations of the MAP test. The differences in achievement from fall to spring are compared to a norm-referenced group (provided by the vendor) to generate an indicator of a student’s academic progress. Consistent with Ohio’s value-added measure, progress is evaluated based on whether a student maintains his relative position from one test administration to the next. In general, a student scoring at, say, the fiftieth percentile in one testing period would be expected to remain at that percentile in the next one. Achievement above or below expected growth classifies a student as exceeding or failing to meet expectations.[2]
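For concreteness, here is a minimal sketch of the classification step. The -2/+2 cut points come from footnote 2; how the school-level growth index is actually computed from students' fall-to-spring MAP results against NWEA's norm group is handled by ODE and its vendor, and is only caricatured here.

```python
def progress_rating(school_growth_index):
    """Classify a dropout-recovery school's progress rating.

    The -2/+2 cut points come from footnote 2. How the underlying school-level
    growth index is derived from student fall-to-spring MAP results (relative to
    NWEA's norm group) is handled by ODE and its vendor, not reproduced here.
    """
    if school_growth_index < -2:
        return "Does Not Meet"
    if school_growth_index > 2:
        return "Exceeds"
    return "Meets"


def student_met_expectation(fall_percentile, spring_percentile):
    """Simplified per-student check: did the student at least maintain his or her
    relative position (norm-group percentile) from fall to spring? The real measure
    works from NWEA's norming sample rather than a raw percentile comparison."""
    return spring_percentile >= fall_percentile


print(progress_rating(-3.1))            # -> Does Not Meet
print(progress_rating(0.4))             # -> Meets
print(student_met_expectation(50, 47))  # -> False
```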

While the norm-referenced approach makes sense in the dropout-recovery context, the results from the first year of implementation appear oddly distributed to us. Consider Chart 1, which displays the number of dropout-recovery schools and their progress ratings. While fifty-nine schools did not meet the progress standards, just one—one!—school exceeded the growth standards. (Another thirty-three schools met the growth standards.) It should be noted that students failing to take both administrations of the MAP test are not counted in a school’s overall progress measure, so the disappointing results probably cannot be explained by either mobile or chronically absent students who didn’t take one of the MAP exams.

Chart 1: Progress ratings for Ohio’s dropout-recovery charter schools, 2014–15

In contrast to the distribution above, traditional district schools’ value-added results span a wide range of scores and ratings (see here, for instance). However, because kids in dropout-recovery schools have experienced previous academic difficulties, we might expect their progress to lag behind that of their peers. Still, it seems strange to observe virtually no schools exceeding the growth expectations. It is worth investigating whether these results would be observed using a different value-added measure (such as the one used by districts, which compares a student’s performance to his previous test scores instead of a national norm). State policy makers should also confirm that the comparison group used by NWEA is appropriate for Ohio’s dropout-recovery students.

Further, confirming that a large enough sample size was used to generate these results is imperative. But it’s hard to identify the cause of this odd distribution. It could be related to methodology; or perhaps the average student in most dropout-recovery schools really is failing to achieve adequate progress. This question—along with a host of others surrounding dropout-recovery and prevention school attendance, funding, and “success”—should be addressed by the Dropout Prevention and Recovery Study Committee. To be sure, this is just one year of data, and a first using this type of growth measure. But policy makers should definitely look into these issues and keep a close eye on the 2015–16 progress results to see if the pattern recurs.

The dropout-recovery and prevention report cards provide essential accountability for Ohio’s alternative schools and offer much-needed information to parents, educators, and the public. The nature of dropout-recovery programs as second-chance schools raises the question of whether their success and progress should be measured with a different yardstick, but adjustments to the report card’s composition and measures already account for the unique population they serve. Now we must confirm that the measures are useful and accurate as we seek to hold schools accountable for the job they were created to do: educating and graduating students who have fallen behind. Here’s hoping the Dropout Prevention and Recovery Study Committee prioritizes the accurate identification of dropout-recovery schools that are making a positive impact, confirming that the accountability system is working as it should.


[1] Unlike the value-added measure, which uses Ohio’s statewide achievement distribution as a reference group, the dropout-recovery measure is using a national norm-referenced population. For more details, see ODE/SAS, “Value-Added Measures for Dropout Recovery Programs” (May 2015).

[2] A growth score of less than -2 equates to a dropout-recovery report card Does Not Meet rating; a score between -2 and +2 equates to a Meets rating; and a score greater than +2 equals an Exceeds rating. These “cut points” are similar to the ones used for traditional public schools’ value-added measure. They are not exactly the same, however, due to the three-tiered rating system used for dropout-recovery schools.

 

 
 

Like Mom and apple pie, everyone loves and believes in a well-rounded education. Ensuring that every child gets one, however, has proven to be a challenge of Herculean magnitude—despite compelling evidence that it’s precisely what disadvantaged students most desperately need to close persistent achievement gaps and compete academically with their more fortunate peers. Enter the Every Student Succeeds Act. As this report from Scott D. Jones and Emily Workman of the Education Commission of the States (ECS) notes, while concerns about providing children a well-rounded education “have not received the same degree of attention as hot-button issues like equitable funding and accountability indicators, it could be considered a foundational element of the new federal law.”

Foundational, perhaps. But is it enforceable? Education Secretary John King has lately been using the bully pulpit to promote the virtues of a well-rounded education. “States now have the opportunity to broaden their definition of educational excellence, to include providing students strong learning experiences in science, social studies, world languages, and the arts,” King is quoted as saying by the ECS authors. “That’s a huge and welcome change.”

Yes and no. In truth, states have always had the “opportunity” to broaden their definition of educational excellence. The question is why, in the main, they haven’t done so. On the one hand, a case can be made that federal education policy has discouraged states from providing a well-rounded education by tacitly promoting a too-narrow view of reading (I have argued elsewhere that this is precisely what happened under NCLB). Accountability policies that demand fast and measurable gains in reading functionally privilege a skills-and-strategies approach to reading instruction. This discourages the kind of steady investments in knowledge and vocabulary that build mature reading comprehension, which is a slow-growing plant. Merely encouraging a well-rounded education is insufficient. If states don’t use their “opportunity” under ESSA to actively and aggressively incentivize the delivery of a well-rounded education, the phrase will remain a mere platitude.

Jones and Workman offer examples of how ESSA might improve policy and practice. For starters, we have the law’s expanded definition of a well-rounded education, which now includes writing, engineering, music, technology, and career and technical education. There’s Title I, which requires that all districts provide a “well-rounded program of instruction that meets the needs of all students,” and Title II, which allows funds to be used to help teachers “integrate comprehensive literacy instruction into a well-rounded education.” And there are “flexible block grants” through which ESSA “creates some accountability around incentives for providing a well-rounded education…particularly for minority groups, including women, English language learners, students with disabilities, and low-income students.” Because the law does not limit states to specific areas in which to apply for funding, local education agencies “are free to emphasize any of the multiple subjects listed in ESSA, select their own, or integrate across subjects,” the authors note: “The possibilities are endless in how states can utilize this [block grant] program to make meaningful investments in their students.”

“With ESSA, districts are asked to conduct a comprehensive needs assessment to identify the needs of their unique populations and to make investments to address those needs,” the authors note. Let me offer districts a leg up on that needs assessment: Every child needs a well-rounded education. And you don’t start that after children learn how to read. You build readers by providing it from the very first days of school. The next Massachusetts will be the state that best understands this truth, adopts curriculum that delivers it, trains teachers to implement it, and uses its newfound flexibility to ensure that kids benefit from it.

Or we can just keep talking about it.

SOURCE: Emily Workman and Scott D. Jones, “ESSA’s Well-Rounded Education,” Education Commission of the States (June 2016).

 
 

I remember the exact moment I became a charter school supporter. It was 2006, and I was a few days away from completing my first year of teaching in Camden, New Jersey. The mother of one of my students wanted to speak with me after school. I’ll never forget what she asked me: She wanted to know if she should send her daughter to a nearby charter school for first grade or keep her in our district school. Specifically, she asked, “What would you do if you were me—if this were your child?”

If someone had asked me then my opinion on charter schools, or choice generally, I wouldn’t have had one. But I did have a strong opinion about wanting her child (small for her age, with a tough exterior that could be mistaken for anger if you didn’t know her well) to thrive. The charter up the street was the only one I’d ever heard of, even though the city suffered from a desperate shortage of schools where reading and math proficiency scores weren’t in the single digits. I knew a bit about that particular school. It was safe and orderly, placed high expectations on students, offered an extended school day and school year, and provided opportunities that our school didn’t. So I said yes, unequivocally. Standing there with a young mom around my age—a single mom, living in a neighborhood notorious for poverty and crime but unwilling to let that define her daughter’s story—there really wasn’t much to deliberate about.

One of the primary problems we face in education policy making is our inability, or unwillingness, to see through the eyes of moms, dads, and students in search of better options. We’re reluctant to let go of tired talking points and simply ask, “What would I want for my own child?”

Every student deserves a school like the Dayton Early College Academy (DECA), a charter high school helping students defy the odds in one of Ohio’s lowest-performing districts. Three out of four of its students come from economically disadvantaged families, but 100 percent enroll in college. These students include Khadidja, who is attending West Virginia University this fall, and Shyanne, who appears in the video below and elucidates how DECA’s positive culture and high expectations have inspired her pathway to success.

Shyanne’s Story (Dayton Early College Academy) from Good Charters, Good Choices on Vimeo.

Whatever your opinions about school choice or charter schools up to this point, we urge you to watch this video and listen to what Shyanne has to say. Better yet, go visit a high-quality charter school in your community and talk to teachers, parents, and students directly. Good charter schools like DECA are good choices. They play a critical role in getting more low-income students to and through college, challenge the notion that socioeconomics is destiny, and empower parents who want their kids to have the best shot in life. That’s something all of us have in common. 

 
 
 
 

We look at Ohio’s “high-performing teacher” definition, district-sponsored charter schools, and more

Last year’s biennial budget (HB 64) required Ohio to define what it means to be a “consistently high-performing teacher” by July 1, a date that is fast approaching. This particular provision aimed to make life easier for such teachers by excusing them from the requirement to complete additional coursework (and shell out extra money) each time they renew their licenses. It also exempts them from additional professional development requirements prescribed by their districts or schools. Who could oppose freeing some of our best teachers from a couple of burdensome mandates?

More people than you might think, starting with the group tasked with defining “consistently high-performing”: the Ohio Educator Standards Board. The board recently “voted unanimously to oppose the law,” according to Gongwer News—never mind that the law passed last year and contesting it now is futile. Chairwoman Sandra Orth said that defining a high-performing teacher was disrespectful and unproductive. Ohio Federation of Teachers (OFT) President Melissa Cropper also weighed in, calling it “another slap in the face to our profession.” Meanwhile, state board of education member A.J. Wagner characterized this provision as a “law that was made to be broken” and urged fellow members to follow in the footsteps of the standards board and refuse to approve a definition. That’s not an option in the face of a statutory deadline, but it shows how far some are willing to go in order to grandstand.

What aspect of the high-performing teacher definition is so disconcerting, exactly? It doesn’t eliminate professional development, as the provision’s opponents misleadingly suggest; it eliminates specific requirements for Ohio’s top-performing teachers, which is meant to free them up so they can seek out the learning opportunities that they deem most worthwhile. The backlash from teachers’ unions and their allies is both disappointing and predictable. They’ve been outspoken critics for years as Ohio legislators have attempted to alter the teacher evaluation system in an effort to better differentiate teacher performance (even if it hasn’t done so very well). Even though the provision would free certain teachers from onerous requirements, opponents are critical because it would require differentiating performance within the teaching profession. Still, it’s an awfully low-stakes provision to protest (in contrast to, for example, teacher performance designations tied to layoffs). And the backlash doesn’t advance the cause of Ohio’s teachers, especially its top performers, at all.

There’s a divide between those who believe teacher performance can and should be measured and those who argue the opposite—perhaps because they don’t want it to be measured. The best strategy to contest a system that attempts to reward high-performers is to call into question the idea that it’s possible to measure or define high-performers in the first place. The standards board seems to be employing this strategy—Chairwoman Orth said that a high-performing teacher can’t be defined with “any kind of validity or consistency,” as did the OFT in its ponderously titled blog post, “Does a 'consistently high performing teacher' exist?” Cropper said that the OFT will be lobbying for the removal of the definition in law. That’s right: The mere designation of a high-performing teacher—and any incentives or rewards attached to it—is so insidious that the OFT is willing to spend taxpayer resources to lobby against it. All of this serves as further evidence that vested interests feel a knee-jerk need to defend the status quo, even in areas that are relatively low-stakes and where top-performing teachers could benefit.

The recent spat also underscores a rift between those who believe public education needs freedom from regulation (as Fordham articulated a year ago in our Getting out of the Way report) and those who want to see regulation applied uniformly. Teachers’ unions and those in the latter camp champion one-size-fits-all mandates selectively. In one breath, they call for expensive class-size requirements, the application of traditional school mandates to charter schools, and treating teachers equally. In the next, they complain about state and federal accountability mandates that they perceive to threaten local autonomy and exalt the principles of freedom and creativity (look at the Ohio Education Association’s vision for accountability under ESSA, premised on “upholding creativity over standardization”). It is ironic indeed that many of the same people calling for freedom and trust on high-stakes accountability matters actively oppose a low-stakes provision that would allow Teacher-of-the-Year winners to determine what additional training to pursue (instead of the state prescribing it for them).

Despite the standards board’s abdication of responsibility, Ohio statute—and the impending deadline—remains. The Ohio Department of Education went ahead and developed a definition that was approved at the June state board of education meeting.[1] At least one group decided to meet its statutory responsibilities. The latest mini-scuffle about teacher quality in Ohio is a reminder that teachers’ unions sometimes behave in ways that are surprisingly anti-teacher and—in that they selectively bemoan certain one-size-fits-all policies while simultaneously defending ones that treat their members in one-size-fits-all fashion—hypocritical.


[1] Teachers must receive the highest summative rating on the Ohio Teacher Evaluation System for four out of the last five years, and meet an additional leadership criterion for three out of five years. The leadership criteria are: possession of a senior or lead professional education license; holding a locally recognized teacher leadership role; serving in a leadership role for a national or state professional academic education organization; and/or receiving a state or national educational recognition or award.
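For readers who want to see the mechanics, here is a minimal sketch of how that definition could be checked. The data format and rating label are illustrative assumptions (with “Accomplished” standing in for the highest OTES summative rating); the four-of-five and three-of-five thresholds come from the footnote above.

```python
def is_consistently_high_performing(otes_ratings, leadership_years):
    """Sketch of the definition summarized in footnote 1.

    `otes_ratings` is a hypothetical list of the teacher's last five summative
    OTES ratings; `leadership_years` is the number of those years in which the
    teacher met at least one of the listed leadership criteria. "Accomplished"
    stands in here for the highest summative rating.
    """
    top_rating_years = sum(1 for r in otes_ratings[-5:] if r == "Accomplished")
    return top_rating_years >= 4 and leadership_years >= 3


print(is_consistently_high_performing(
    ["Accomplished", "Accomplished", "Skilled", "Accomplished", "Accomplished"],
    leadership_years=3,
))  # -> True
```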

 

 
 

Traditional districts that serve as charter school sponsors are often glossed over in the debate over Ohio’s charter sector. But given their role in two recent reports, it’s an opportune time to take a closer look at their track record.  

First, a Know Your Charter report covered the failings of a number of Buckeye charters receiving federal startup funds (either they closed or never opened). Though the report itself didn’t draw attention to it, we pointed out that school districts sponsored more than 40 percent of these closed schools. Meanwhile, the auditor of state released a review of charter school attendance; among the three schools referred for further action because of extraordinarily low attendance, two had district sponsors (the third was sponsored by an educational service center).

With all of the talk about charters being created to privatize education, it might surprise you to learn that Ohio school districts have long had the authority to sponsor (a.k.a. authorize) charters. In fact, the Buckeye State allows districts to sponsor either conversion or startup charters within certain geographic limitations (e.g., a school must be located within a district’s jurisdiction or in a district nearby).[1] Throughout our eighteen-year charter history, there have been 105 district-sponsored charters—almost one-fifth of Ohio charters ever opened—authorized by sixty-eight districts. Presently, there are forty-two active district sponsors—roughly 7 percent of districts in the state—that together authorize sixty-two schools, the majority of which are dropout recovery schools.

This article takes a closer look at district-sponsored charters along two dimensions: school performance and school closure. Each charter school is linked to the sponsor of record as reported in ODE’s 2014–15 Community Schools Annual Report (Table 2). For closed or suspended schools, the sponsor of record is identified via ODE’s Closed School Directory.[2]

School performance

It’s vitally important to examine the academic quality of the schools in sponsors’ portfolios. Interestingly, CREDO’s 2014 study on Ohio charters contains an analysis that linked a school’s impact to its sponsor. In short, the analysis did not find appreciable differences in charter school impact based on sponsor type (e.g., non-district versus district). But the data for that analysis ends with the 2012–13 year, so it might be useful to start by taking a look at the most recently available report card data. Of course, we acknowledge that the following comparisons are not nearly as rigorous as student-level analyses like CREDO’s.

General education schools (i.e., non-dropout-recovery schools)

Districts rarely sponsor general education charter schools, meaning that only a small number of their schools can be compared to non-district-sponsored ones. Along the value-added measure (i.e., student growth), only seventeen district-sponsored charter schools received a rating in 2013–14 and 2014–15. The majority of these schools were sponsored by either the Cleveland or Reynoldsburg school districts. We elect to use the value-added measure, as opposed to a proficiency-based one, since it is more likely to reflect actual school performance.[3]

With those qualifications in mind, Table 1 displays the distribution of schools’ value-added ratings by sponsor type for 2013–14 and 2014–15. As you can see, district-sponsored schools have a slight edge with respect to the percentage of schools earning A ratings on value-added: Thirty-five percent of their schools received such a rating in both years, versus 25 and 20 percent for non-district-sponsored schools. The strong performance for districts can be partly attributed to Cleveland, which sponsors several of the high-performing Breakthrough charters (evidence that districts can and do sponsor excellent charter schools). The results across the other rating categories are generally inconclusive—either very similar (B and D ratings) or inconsistent across the two years (C and F).

Table 1: Charter school performance on Ohio’s value-added measure by sponsor type, 2013–14 and 2014–15

Dropout-recovery schools

Because districts tend to sponsor dropout-recovery schools, we should take stock of those results as well. In the 2014–15 school year, thirty-five district-sponsored dropout-recovery schools received a progress rating—a measure that is akin to value-added but uses a norm-referenced exam (for more on this measure, see here). As you can see from Table 2, district-sponsored schools somewhat outperform their counterparts: Fifty-one percent received the Meets Standards rating, versus 26 percent of non-district-sponsored schools. However, the fact that only one dropout-recovery school overall received the top rating on this dimension (Exceeds) raises questions about whether any of the dropout-recovery schools—district-sponsored or otherwise—are truly measuring up, or whether there are challenges with the measure that demand closer attention. (The 2014–15 school year was the first year of implementation.) In sum, it is fair to say that the jury is out on the overall quality of district-sponsored dropout-recovery schools, both in absolute terms and in relation to non-district sponsors.

Table 2: Dropout-recovery charter school performance on Ohio’s value-added measure, by sponsor type, 2014–15

School closure

Closure is not the only—or even best—way to define sponsorship quality, but it is a useful data point. Sponsors are responsible for vetting and overseeing schools; as such, good sponsors shouldn’t be linked to an overabundance of closed schools. Of course, chronically low-performing charter schools should close—this is an essential though difficult part of responsible authorizing—so some closures are to be expected. Figure 1 shows that over the life of Ohio’s charter sector, a larger share of district-sponsored schools have closed than non-district-sponsored ones (45 percent versus 32 percent). In other words, almost half of the schools that have been sponsored by districts do not remain in existence today.

Figure 1: Percentage of charters closed: district sponsors versus non-district sponsors, 1998–99 to 2014–15

But perhaps some of these closed schools opened early in the history of Ohio’s charter program and operated for, say, eight or nine years. Another way of slicing the data is to zero in on schools that closed shortly after opening—the infamous “fly-by-night” schools. A school that closes before reaching its fifth anniversary is much more likely to have had flaws that could have been identified during a rigorous application process, and its closure is more likely to indicate an error in the sponsor’s judgment. When using a five-year threshold, district sponsors again appear to lag slightly behind. Figure 2 shows that a greater percentage of district-sponsored schools closed before reaching this mark (30 percent) than those with a non-district sponsor (21 percent). Taken together, Figures 1 and 2 show that district sponsors are not necessarily more successful at authorizing schools that remain open. Many will find this surprising given that school districts have overseen schools for a century.

Figure 2: Percentage of charters closed before reaching five years of operation: district sponsors versus non-district sponsors, 1998–99 to 2014–15

***

A closer look at district sponsorship reveals some of the same warts as those of the charter sector as a whole. The academic performance of district-sponsored schools varies in much the same way as charters sponsored by other entities. Traditional districts, like their non-district counterparts, have sponsored schools that closed shortly after launch. The struggles of district-sponsored charters shouldn’t be ignored. (Conversely, sponsors of successful schools deserve our praise.) Fortunately, Ohio’s recent charter reforms will force low-capacity sponsors—district and non-district alike—out of the authorizing business. Their exit, along with that of others, may not be an altogether bad thing.

For better or worse, school districts have played an important role in the development of Ohio’s charter sector. When talking about charter sponsorship, let’s not let districts fly under the radar.


[1] Generally speaking, a conversion charter school refers to the conversion of an existing district school (fully or partially) into a charter school; a start-up charter is a new school.

[2] Three schools without sponsors of record in both files were identified through OEDS-R. A school can switch from a district to a non-district sponsor or vice-versa. In cases such as these, the following analysis assumes that a school’s last sponsor of record is the one that should be held accountable.

[3] The same reasoning applies in the section on dropout-recovery schools.

 

 
 

One of the most controversial aspects of school accountability is how to identify and improve persistently low-performing schools. Under NCLB, states were required to identify districts and schools that failed to meet the federal standard known as adequate yearly progress. Failure led to a cascading set of consequences that many viewed as inflexible and ineffective.

The passage of a new national education law—the Every Student Succeeds Act (ESSA), signed by President Obama in December—has shifted more of the responsibility for identifying and intervening in persistently low-performing schools to states (though the Department of Education’s regulations attempt to pull some of that responsibility back to Washington—more on that later).

School identification under ESSA is determined by a state’s “system of meaningful differentiation.” This is based on the state’s accountability system, including indicators of student proficiency, student growth, graduation rates, and English language proficiency. The use of these indicators isn’t optional, though the weight of each (and the methodology crafted from them that is then used to identify schools) is left up to states. Using their chosen methodology, states are required to identify a minimum of two statewide categories of schools: comprehensive support and improvement schools and targeted support and intervention schools.[1]

Comprehensive support schools must be identified at least once every three years, beginning in the 2017–18 school year. This category must include the lowest-performing 5 percent of Title I schools (states may identify a higher percentage, though doing so is likely politically infeasible) and all public high schools that fail to graduate 67 percent or more of their students. States are responsible for notifying districts of any schools that are identified as part of the comprehensive category. Once districts have been notified, they must work with local stakeholders to develop and implement a comprehensive support and improvement plan. By law, these plans must be informed by all the indicators in the state’s accountability system, include evidence-based interventions, be based on a school-level needs assessment, and identify resource inequities that must be addressed.[2] The plan must be approved by the school, the district, and the state. Once approved, the state is responsible for monitoring and reviewing implementation of the plan.
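
To make the identification mechanics concrete, here is a minimal sketch of how a state might flag comprehensive support schools. Only the 5 percent floor and the 67 percent graduation threshold come from the law itself; the composite score, field names, and sample data are hypothetical and do not represent any state’s actual methodology.

# A minimal sketch, assuming a simple school-level data format. It flags the
# lowest-performing 5 percent of Title I schools plus high schools graduating
# fewer than 67 percent of students.

def identify_comprehensive(schools):
    """Return the names of schools flagged for comprehensive support."""
    title_i = [s for s in schools if s['title_i']]
    # ESSA floor: at least the lowest-performing 5 percent of Title I schools.
    ranked = sorted(title_i, key=lambda s: s['composite_score'])
    cutoff = max(1, round(0.05 * len(ranked)))
    lowest_five_percent = {s['name'] for s in ranked[:cutoff]}
    # Plus any public high school graduating fewer than 67 percent of students.
    low_grad = {s['name'] for s in schools
                if s['is_high_school'] and s['grad_rate'] < 67}
    return lowest_five_percent | low_grad

sample = [
    {'name': 'A Elementary', 'title_i': True, 'composite_score': 31.0,
     'is_high_school': False, 'grad_rate': None},
    {'name': 'B High', 'title_i': False, 'composite_score': 78.0,
     'is_high_school': True, 'grad_rate': 64.0},
    {'name': 'C Elementary', 'title_i': True, 'composite_score': 88.0,
     'is_high_school': False, 'grad_rate': None},
]
print(identify_comprehensive(sample))  # {'A Elementary', 'B High'} (order may vary)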

Targeted support schools are identified as such when any subgroup[3] of their students is labeled “consistently underperforming” by the state.[4] Similar to comprehensive schools, the state must notify districts of any schools that have been identified as targeted. However, while comprehensive support schools are subject to plans crafted and implemented by the district, targeted support and improvement plans are developed and implemented by the school rather than the district. The parameters for the plan are largely the same: The school must seek stakeholder input, and the plan must be informed by all indicators and include evidence-based interventions. Schools that have a subgroup performing as poorly as the bottom 5 percent of schools in the state must also identify resource inequities. The plan must be approved by the district, and the district is responsible for monitoring implementation. If the plan ends up being unsuccessful after a certain number of years (as determined by the district), “additional action” is authorized by ESSA.

States are tasked with a few additional responsibilities regarding identified schools. First, they must establish statewide exit criteria for schools that are identified as comprehensive and targeted. If these criteria aren’t satisfied within a certain number of years (determined by the state, but not to exceed four years), the school is subject to more rigorous state-determined action and intervention. Second, states must periodically review resource allocation intended for school improvement for districts that serve a significant number of comprehensive and targeted schools. Third, states must provide technical assistance to each district that serves a significant number of schools implementing comprehensive or targeted support plans. ESSA also permits (but does not require) states to “initiate additional improvement” in any district with a significant number of schools that are consistently identified as comprehensive and fail to meet exit criteria, or in any district with a significant number of targeted schools. 

Interestingly, if a targeted school with a subgroup performing as poorly as the bottom 5 percent of schools in the state fails to satisfy exit criteria within the state-determined number of years, the school as a whole must be identified as a comprehensive support and intervention school. The implications of this provision are enormous: A school that the public perceives as high-performing could land in the statewide comprehensive improvement category by failing to significantly improve outcomes for a particular subgroup of students. While NCLB already required states to disaggregate achievement data based on certain subgroups, ESSA added three new subgroups to the mix: homeless students, foster care students, and children of active duty military personnel. Given that a single persistently low-performing subgroup can force an entire school into the comprehensive category, these subgroups—many of which have previously gone uncompiled or flown under the radar—could draw serious attention from districts and schools. Sample sizes will matter immensely, and in some cases they may be the only thing that stands between dozens of schools and their identification as comprehensive.[5]
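
A rough sketch of how a minimum n-size works in practice appears below. The thirty-student minimum echoes the figure in the proposed regulations; everything else (the data format, subgroup names, and scores) is hypothetical.

# A minimal sketch of an n-size filter on subgroup accountability.

MIN_N = 30  # assumed state-chosen minimum subgroup size

def subgroups_evaluated(school):
    """Return only the subgroups large enough to count toward identification."""
    return {group: data for group, data in school['subgroups'].items()
            if data['n_students'] >= MIN_N}

school = {
    'name': 'Example Middle',
    'subgroups': {
        'students_with_disabilities': {'n_students': 42, 'composite_score': 35.0},
        'foster_care': {'n_students': 8, 'composite_score': 22.0},
    },
}

# Only the larger subgroup is evaluated; the eight foster care students fall
# below the threshold and cannot trigger identification on their own.
print(list(subgroups_evaluated(school)))  # ['students_with_disabilities']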

The U.S. Department of Education (USDOE) recently released proposed regulations that would, if enacted, add further requirements and responsibilities. Here are a few of the most significant proposals related to school identification and support:

  • Performance on a “school quality or a student success” indicator (like teacher or student engagement) cannot be used to justify the removal of a school from the comprehensive or targeted categories unless the school or subgroup is also making significant progress on at least one of the academic indicators that ESSA requires to be more heavily weighted—achievement, growth, graduation rate, or English language proficiency.
  • While states can create their own definitions for what constitutes a “consistently underperforming” subgroup, the regulations provide suggested definitions.
  • The identification of consistently underperforming subgroups that can lead to a school’s designation as targeted for support and intervention must begin in 2018–19.
  • Any district that is notified by the state of a comprehensive school identification must notify parents of each enrolled student about the identification, the reason(s) for identification, and an explanation of how they can be involved.
  • The proposed regulations list components that each comprehensive support and improvement plan must have, including a description of how stakeholder input was solicited and taken into account.

Overall, while ESSA may have eliminated some of NCLB’s more controversial school intervention provisions, there are still plenty of mandates that states and districts will need to be aware of in order to comply with the new law. Furthermore, although there is significant potential for innovation, personalized support, and stakeholder input in the more localized intervention process, there is also an incredible amount of risk. Without strict oversight and discipline, some states (and districts and schools) could opt to simply go through the motions. That’s a future we just can’t afford.


[1] States are permitted to create additional statewide categories of schools.

[2] Identifying resource inequities can include a review of district- and school-level budgeting.

[3] ESSA requires the following subgroups to be reported on: race and ethnicity, gender, English language proficiency, migrant status, disability status, low-income status, homeless students, foster care students, and children of active duty military personnel.

[4] The state determines consistently underperforming subgroups based on the indicators used in the state’s accountability system.

[5] ESSA leaves selecting a sample size up to states, but the USDOE’s proposed regulations call for states to “submit a justification” for any size larger than thirty students.

 

 
 

School choice advocates have long agreed on the importance of understanding what parents value when selecting a school for their children. A new study from Mathematica seeks to add to that conversation and generally finds the same results as prior research. What makes this study distinctive, however, is that its analysis is based on parents’ rank-ordered preferences on a centralized school application rather than self-reported surveys.

To analyze preferences, researchers utilized data from Washington, D.C.’s common enrollment system, which includes traditional district schools and nearly all charters. D.C. families that want to send their children to a school other than the one they currently attend (or are zoned to attend) must submit a common application on which they rank up to twelve preferred schools. Students are then matched to available spaces using a random assignment algorithm.
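
For readers curious about what matching ranked applications to limited seats looks like in miniature, here is a simplified sketch. It is not the algorithm D.C.’s lottery actually uses, and the school names, capacities, and rankings are invented.

# A simplified sketch of matching ranked applications to limited seats with a
# random lottery order.
import random

def assign(applications, capacities, seed=0):
    """Place each applicant in the highest-ranked school with an open seat."""
    rng = random.Random(seed)
    seats = dict(capacities)
    order = list(applications)
    rng.shuffle(order)  # random lottery order breaks ties among applicants
    placements = {}
    for applicant in order:
        for school in applications[applicant]:
            if seats.get(school, 0) > 0:
                placements[applicant] = school
                seats[school] -= 1
                break
    return placements

applications = {
    'student_1': ['School A', 'School B'],
    'student_2': ['School A', 'School C'],
    'student_3': ['School B', 'School A'],
}
print(assign(applications, {'School A': 1, 'School B': 1, 'School C': 1}))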

The study tests for five domains of school choice factors: convenience (measured by commute distance from home to school),[1] school demographics (the percentage of students in a school who are the same race or ethnicity as the chooser), academic indicators (including a school’s proficiency rate from the previous year), school neighborhood characteristics (crime rates and measures of residents’ socioeconomic status), and other school offerings (including average class size, uniform policies, and the availability of before- and after-school care). Findings suggest that, of the five factors, convenience, school academic performance, and student body composition are the most predictive of how parents rank school alternatives. (The analysis focuses only on entry grade levels—pre-K and kindergarten for elementary schools, grades five and six for middle school, and grade nine for high school—since these are the most common levels at which families submit applications.)

In terms of subgroup breakdowns, the economic status of choosers impacted preferences for elementary and middle school applicants, but not high school applicants. For example, in elementary school, higher-income applicants preferred schools with high percentages of students of the same race and lower percentages of low-income students; low-income applicants didn’t share the same preferences. In middle school, both low- and higher-income applicants were influenced by school academic performance; however, low-income choosers focused on school proficiency rates (which were observable on the application website) while higher-income choosers were more influenced by accountability ratings (which were not immediately available on the application site). Breakdowns for the three largest race/ethnicity groups (white, Hispanic, and African American) in elementary school showed that while white choosers preferred schools with larger percentages of students from the same racial group, African American choosers “essentially showed indifference for own-group racial composition.” In middle school, however, all but the Hispanic group of applicants had a “pronounced own-group preference and a slight preference for diversity.”   

To round out their analysis, the researchers use their model to predict how parents would rank schools under alternative scenarios. For example, if capacity constraints were eased so that more applicants were able to attend their most preferred schools, enrollment in high-performing schools would increase and segregation by race and income would decrease. Closing the lowest-performing schools would also increase enrollment in high-performing schools and decrease segregation.

Overall, while there are limitations to this particular study and others like it, it’s a valuable analysis of what parents look for in schools—and the importance of expanding their options.

SOURCE: Steven Glazerman and Dallas Dotter, “Market Signals: Evidence on the Determinants and Consequences of School Choice from a Citywide Lottery,” Mathematica Policy Research (June 2016).


[1] To ensure accurate distance measurements between schools and residences, the study restricts the student sample to only lottery applicants with a valid D.C. address (approximately 98 percent of applicants).

 

 
 

On the heels of national research studies that have uncovered troubling findings on the performance of virtual charter schools, a new report provides solid, commonsense policy suggestions aimed at improving online schools and holding them more accountable for results. Three national charter advocacy organizations—the National Alliance for Public Charter Schools (NAPCS), the National Association of Charter School Authorizers (NACSA), and 50CAN—united to produce these joint recommendations.

The paper’s recommendations focus on three key issues: authorizing, student enrollment, and funding. When it comes to authorizers, the authors suggest restricting oversight duties for statewide e-schools to state or regional entities, capping authorizing fees, and creating “virtual-specific goals” to which schools are held accountable. Such goals, which would be part of the authorizer-school contract, could include matters of enrollment, attendance, achievement, truancy, and finances. On enrollment, the authors cite evidence that online education may not be a good fit for every child, leading them to suggest that states study whether to create admissions standards for online schools (in contrast to open enrollment). They also recommend limits to enrollment growth based on performance; a high-performing school would have few, if any, caps on growth, while a low-performer would face strict limits. Finally, the report touches on funding policies, including recommendations to fund online schools based on their costs and to implement performance-based funding, an approach that four states have already piloted for online schools (Florida, Minnesota, New Hampshire, and Utah). Interestingly, the report notes how the design of the performance-based funding model varies from state to state. New Hampshire, for example, takes a mastery-based approach (with the teacher verifying mastery), while Florida requires the passage of an end-of-course exam—as determined by the state—to trigger payment.
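
As an illustration of how different those payment triggers can be, here is a minimal sketch of a per-course, performance-based payment calculation. The per-course amount, record format, and verification rules are hypothetical; they are not Florida’s or New Hampshire’s actual formulas.

# A minimal sketch of a per-course payment trigger for performance-based funding.

PAYMENT_PER_COURSE = 700.00  # assumed per-course payment, not a real figure

def earned_funding(course_records, require_exam=True):
    """Sum payments for courses a student has successfully completed.

    With require_exam=True (a Florida-style rule), payment requires passing an
    end-of-course exam; with False (closer to a mastery-verification model),
    verified completion alone triggers payment.
    """
    total = 0.0
    for record in course_records:
        earned = record['completed'] and (record['passed_exam'] or not require_exam)
        if earned:
            total += PAYMENT_PER_COURSE
    return total

records = [
    {'course': 'Algebra I', 'completed': True, 'passed_exam': True},
    {'course': 'English 9', 'completed': True, 'passed_exam': False},
    {'course': 'Biology', 'completed': False, 'passed_exam': False},
]
print(earned_funding(records, require_exam=True))   # 700.0
print(earned_funding(records, require_exam=False))  # 1400.0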

Perhaps the paper’s most intriguing idea is that states consider decoupling virtual schools from their charter laws. The authors write, “States may need to consider governing full-time virtual schools outside of the state’s charter school law, simply as full-time virtual public schools.” Indeed, laws and regulations crafted with brick-and-mortar charter schools in mind may be poorly suited to the unique environment of online schools. Enrollment and funding policies are just two examples where it would be helpful to have a separate set of rules (e-schools, however, should not be held to different academic standards).

State policymakers, including those in our home state of Ohio (a state with a large e-school sector), should pay close attention to this report. As my colleague Chad Aldis noted, “Virtual schools have become and will remain an important part of our education system.” If indeed policy misalignment is at least partly behind the poor results we’ve observed, employing the recommendations of this report would be a major step forward for online education.

SOURCE: National Alliance for Public Charter Schools (NAPCS), the National Association of Charter School Authorizers (NACSA), and 50CAN, “A Call to Action to Improve the Quality of Full-Time Virtual Charter Public Schools” (June 2016).

 
 

A short article published this week in the Columbus Dispatch makes serious reporting mistakes that leave readers with a distorted view of school finance. According to the article, a Columbus citizen millage panel recently discussed a state policy known as the funding “cap.” Put briefly, this policy limits the year-to-year growth in state revenue for any particular school district. As we’ve stated in the past, funding caps are poor public policy because they deny districts revenue they ought to receive under the state’s own funding formula. State lawmakers should kill the cap; it circumvents the state’s formula, it’s unfair to capped districts, and it ultimately shortchanges kids.

The article would’ve been right to stop there. Yet somehow charter schools got pulled into the discussion, and that is where the coverage went way off track. The Dispatch writes:

But the formula for one class of school [i.e., Columbus district schools] is now capped, while the other [Columbus charters] isn’t.…But today Columbus charters get $142.4 million from the state to teach 18,000 students, while the district is left with $154.4 million to teach the remaining 52,000 kids, many of whom rank among the poorest in the state. 

The Dispatch seems to have forgotten that Ohio’s school districts are financed in fundamentally different ways than charters are. Districts are part of a hybrid funding system—raising revenue from both state and local tax sources—while charters depend on state revenue alone. Any discussion that juxtaposes district and charter funding must keep that basic difference in mind (something that charter critics like to ignore as well). Let’s take a closer look at the problems with these statements and provide a fuller picture.

It is true that Ohio charters are not subject to state funding caps. But they are also not funded according to the district funding formula. To allocate state revenue to Ohio districts, legislators have created a very intricate funding formula, part of which includes adjustments that account for districts’ wealth (their local tax bases). These wealth-related factors, in addition to changes in enrollment, can determine whether a district is capped. A district with a declining tax base would receive more state aid, all else equal, but that increase in aid might be capped.

Charters, however, don’t have the authority to levy taxes; as such, they receive state funding on a per-student basis with no adjustments for local wealth. Inasmuch as caps are unfair to districts, establishing a cap on charters would be even more inappropriate. A cap could directly deny charters funding when they enroll additional students. Moreover, since they don’t have local funding to fall back on, it would put them at an even greater financial disadvantage.
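
To see why the mechanics differ, consider this rough sketch of a gain cap on district formula aid alongside uncapped per-pupil charter aid. The cap rate, per-pupil amount, and dollar figures are invented for illustration and do not reflect Ohio’s actual formula.

# A rough sketch of a gain cap on district formula aid versus uncapped
# per-pupil charter aid.

CAP_RATE = 0.075  # assumed maximum year-over-year growth in state aid

def capped_district_aid(formula_aid, prior_year_aid):
    """State aid a capped district actually receives."""
    return min(formula_aid, prior_year_aid * (1 + CAP_RATE))

def charter_aid(enrollment, per_pupil_amount):
    """Charter state aid: per student, no local-wealth adjustment, no cap."""
    return enrollment * per_pupil_amount

# A district whose formula calls for $55 million, up from $48 million last
# year, is held to 7.5 percent growth and loses the difference.
print(capped_district_aid(55_000_000, 48_000_000))  # 51600000.0
print(charter_aid(1_200, 6_000))                    # 7200000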

Misunderstanding the state funding formula, for districts and charters alike, could be considered excusable. (It’s complicated!) But the second statement, comparing the revenue received by Columbus charters ($142 million) to that received by Columbus City Schools ($154 million), is unpardonable. It misleads readers into believing that charters receive much more revenue per pupil—living high on the hog—than the district ($7,900 versus $2,900 per student when you do the math). But these figures leave out the massive amount of local tax revenue that the district raises and charters do not. Hence, the assertion wildly underreports the amount of public funding received by Columbus City Schools.

Let’s set the record straight: According to the district’s financial statements, Columbus City Schools raised over $370 million in local property tax revenue in the 2015 fiscal year—hundreds of millions of dollars that go to educate district pupils. In fact, according to the state’s Cupp Reports (one of the best sources for financial data), the district received more than $15,000 per student in state and local funding combined in 2014–15. This amount far exceeds what a typical Columbus charter school receives in public aid; it is simply inaccurate to suggest that charters receive more generous taxpayer funding than their nearest district.
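
Here is the back-of-the-envelope math, using only the figures cited above. These two revenue streams alone do not capture every state and local dollar, which is why the sum still falls short of the Cupp Report’s combined total.

# Back-of-the-envelope arithmetic from the figures cited in this article.
charter_state_aid = 142_400_000    # Columbus charters, state aid
charter_students = 18_000
district_state_aid = 154_400_000   # Columbus City Schools, state aid
district_students = 52_000
district_local_levy = 370_000_000  # local property tax revenue, fiscal year 2015

print(round(charter_state_aid / charter_students))    # ~7,911 per pupil
print(round(district_state_aid / district_students))  # ~2,969 per pupil
# Adding just the local levy more than triples the district's per-pupil figure.
print(round((district_state_aid + district_local_levy) / district_students))  # ~10,085 per pupil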

In sum, here are two simple ideas to improve our discourse around school funding. First, let’s acknowledge the fundamental differences in how charters and districts are financed. Second, let’s focus on the overall taxpayer dollars that are being used to support the learning of Ohio’s school children. Getting the fundamentals right is a necessary first step before tackling the more complicated funding issues.

 
 
 
 
