Ohio Policy

We at Fordham recently released an evaluation of Ohio’s largest voucher initiative—the EdChoice Scholarship. The study provides a much deeper understanding of the program and, in our view, should prompt discussion about ways to improve policy and practice. But the evaluation also makes EdChoice an outlier among the Buckeye State’s slew of education reforms: unlike most of the others, it has faced research scrutiny. That should change, and below I offer a few ideas about how education leaders can better support high-quality evaluations of education reforms.

In recent years, Ohio has implemented policies that include the Third Grade Reading Guarantee, rigorous teacher evaluations, the Cleveland Plan, the Straight A Fund, New Learning Standards, and interventions in low-performing schools. Districts and schools are pursuing reform too, whether by changing textbooks, adopting blended learning, or implementing new professional development. Millions of dollars have been poured into these initiatives, which aim to boost student outcomes.

But very little is known about how these initiatives are impacting student learning. To my knowledge, the only major state-level reforms that have undergone a rigorous evaluation in Ohio are charter schools, STEM schools, and the EdChoice and Cleveland voucher programs. To be sure, researchers elsewhere have studied policies akin to those adopted in Ohio (e.g., evaluations of retention in Florida). Such studies can be very useful guides, and they might even inspire Buckeye leaders to find out what all the fuss is about. At the same time, it is critical to gather evidence on what works in our own state. Local context and conditions matter.

The research void means that we have no real understanding of whether Ohio’s education reforms are lifting student outcomes. Is the Third Grade Reading Guarantee improving early literacy? We don’t know. Have changes to teacher evaluation increased achievement? There isn’t much evidence on that either, at least not in the Buckeye State. This startling lack of information is not a problem unique to Ohio, but it does put us in a tough situation. We have practically no way of gauging whether course corrections are needed (if the results are null or negative), whether a program should be abandoned (if consistently adverse impacts are uncovered), or which approaches should be replicated or expanded (if the findings are positive).

Evaluation is no easy task, and there may be legitimate reasons why researchers haven’t turned a spotlight on Ohio’s reforms. Some are very new, and the time might not be ripe for a study. Moreover, there may not be a straightforward way to analyze a particular program’s impact. Only in rare cases can researchers conduct an experimental study that yields causal estimates. These include programs with admissions lotteries (due to oversubscription), as well as cases in which schools implement an experimental program by design. Even then, however, there are limitations. When such studies aren’t feasible, competent researchers can utilize rigorous quasi-experimental methods; yet given the data or policy design, isolating the impact of a specific program can be challenging. And further barriers may exist in the simple lack of funding or political will.

Policy makers can help to overcome some of these barriers by creating an environment that is more favorable to research and evaluation. Here are three thoughts on how to do this:

  1. Create small-scale pilots that provide sound evidence and quick feedback. Harvard professor Tom Kane suggests that there is an “urgent need for short-cycle clinical trials in education.” I agree. In Ohio, perhaps it could look something like this: On the Third Grade Reading Guarantee, the state could incentivize a group of districts to randomly assign their “off-track” students to different reading intervention programs. A researcher could then investigate the outcomes a year later, helping us learn something about which program holds the most promise. (It would be good to know the costs of each intervention as well.) In the case of ESSA interventions for Ohio’s lowest-performing schools, the state could encourage districts to randomly assign certain strategies to specific schools and then examine the results. Granted, these ideas would need some fleshing out. But the point is that designing policies with research pilots in mind would sharpen our understanding of promising practices.
     
  2. Make collecting high-quality data a top priority. To its great credit, Ohio has developed one of the most advanced education information systems in the nation. For example, the state is among just a few that gather information on pupils in gifted programs. But the state and schools can do more, particularly around the reporting of course-level data that can support larger-scale research on curriculum and instruction. For instance, we’ve noticed some apparent gaps in the way AP course-taking is documented. Another area in which Ohio can blaze new paths is the accurate identification of economically disadvantaged students. As Matt Chingos of the Urban Institute recently explained, researchers can no longer rely on free and reduced-price lunch (FRPL) status as a proxy for poverty. An urgent priority for the state—it may require cross-agency cooperation—is to create a better way of indicating pupil disadvantage. A reliable marker of socioeconomic status is also critical for policy, as ESSA requires disaggregated test results.
     
  3. Include evaluation as a standard part of policy design on the front end. When designing a policy reform—whether at a state or local level—one question that should be asked is, “What is the plan for evaluating whether it’s working?” This might require the early engagement of researchers and the setting aside of funds. At the federal level, most programs come with such allocations; Ohio could do that for its big state-level reforms while also encouraging schools to set aside resources for local “R&D.” If evaluation becomes part of policy design on the front end, the benefits are two-fold. First, education leaders should get more timely results than if research were an afterthought, carried out much later in policy implementation (if at all). Second, turning evaluation into a standard practice could mitigate its political risk. Naturally, it is dicey to voluntarily order an evaluation, both for a given policy’s champions and its detractors. Advocates won’t want to see negative results, and no critic wants to see positive ones. But a transparent climate around research should lessen the risks of disseminating results.

Everyone can agree that Ohio needs and deserves a world-class school system that improves achievement for all students. The purpose of education reform is to get us closer to that goal. But the research from Ohio is maddeningly sparse on which changes are working for Buckeye schools and students. Moving forward, authorities at the state and local level must ensure that rigorous evaluation becomes the rock on which reform stands.

The new education law of the land—the Every Student Succeeds Act (ESSA)—has been the talk of the town since President Obama signed it into law in December 2015. Under the new law, testing doesn’t initially seem that different from the No Child Left Behind (NCLB) days: ESSA retains the requirement that states administer annual assessments in grades 3–8 and once in high school; requires that test results remain a prominent part of new state accountability plans; and continues to expect states to identify and intervene in struggling schools based upon assessment results. But a closer look reveals that ESSA provides a few key flexibilities to states and districts—and opens the door for some pretty significant choices. Let’s take a look at the biggest choices that Ohio will have to make and the benefits and drawbacks of each option. 

Test design

There are two key decisions for states in terms of test design. The first is related to high school testing. ESSA permits districts to use “a locally selected assessment in lieu of the state-designed academic assessment” as long as it’s a “nationally recognized high school academic assessment.” In other words, Ohio districts could forego a statewide high school test by administering a nationally recognized test (like the ACT or the SAT) instead. There are two ways to make this happen: The Ohio Department of Education (ODE) can make such assessments available for districts to choose, or districts can submit an assessment to ODE. In both cases, ODE must approve the test to ensure that it (a) is aligned to state standards, (b) provides data that is both comparable to the statewide assessment and valid and reliable for all subgroups, and (c) provides differentiation between schools’ performance as required by the state accountability plan. 

There are pros and cons for districts that are interested in administering a nationally recognized test[1] rather than Ohio’s statewide assessment:

Pros

  • They are “nationally recognized tests” for a reason—districts can be sure that they are rigorous, widely accepted at colleges, and allow for easy performance comparisons.
  • Using a college admissions test like the SAT or ACT for both college entry and statewide accountability limits the number of tests students have to take.
  • Using a college entry exam as a high school test would set the proficiency bar at college readiness and lessen the honesty gap—depending on the score ODE chooses to equate with proficiency.
  • Using a college admissions test could increase the value of testing for students (and parents); many colleges and universities offer scholarships based on scores, so students may take these tests more seriously than they do current statewide assessments.
  • Ohio will soon pick up the tab for all juniors to take the ACT or SAT. This means that opting to use one of these tests won’t create extra costs for districts.

Cons

  • Nationally recognized tests are developed by national organizations, so districts that want an “Ohio-designed” assessment would be out of luck. This was a complaint that cropped up often during the battle over the PARCC assessment.
  • Ohio is currently working to revise its standards; if the revisions are extensive and move away from the Common Core, there may be no nationally recognized tests that are aligned to Ohio’s standards.
  • College admissions tests are designed to measure college readiness, not high school learning. Arguably high school learning and college preparation should be the same, but not everyone agrees—and merely saying something “should be the same” doesn’t mean it actually is.
  • The ELA portion of nationally recognized tests may adequately cover high school ELA classes, but the math, science, and civics portions of national tests often cover several subjects (like biology and chemistry) in one testing section. As a result, the test may not be an accurate measurement of learning for a specific subject the way an end-of-course exam would be.
  • Calculating value-added at the high school level for state report cards could become very complicated, and even impossible, based on the grade in which students take the assessment.

The second decision for states regarding test design is whether to utilize performance assessments. When measuring higher-order thinking skills, ESSA permits states to “partially” deliver assessments “in the form of portfolios, projects, or extended performance tasks.” These kinds of assessments have long been championed by teachers and have already been piloted in several Ohio districts. But should the state consider using them for all districts? Here’s a list of the pros and cons:

Pros

  • Using performance assessments would answer critics who claim that standardized testing is a “one-size-fits-all” measurement by evaluating a wider range of skills and content.
  • Performance assessments are considered more nuanced than traditional standardized tests, and they may offer a better picture of student learning (though more research would be helpful).
  • Ohio already has a good foundation in place for implementing a performance assessment, and there are models—like PACE in New Hampshire—that provide excellent examples.
  • Districts interested in competency education would benefit from assessments that effectively measure growth and mastery.

Cons

  • Important questions remain about who will design the assessments and how ODE can ensure rigor. If assessments are developed at the district level, that’s an extra burden on teachers.
  • Performance assessments require significantly more time to develop, administer, and grade. Given that some parents and teachers have recently been frustrated by how long it takes to receive test scores, adding grading time seems unlikely to go over well. Moreover, Ohio’s most recent budget bill requires state test results to be returned earlier than in past years, making the implementation of performance assessments even more difficult.
  • A lack of standardization in grading can lead to serious questions about validity and reliability—and the comparability that a fair state accountability system depends on.

Test administration

ESSA allows states to choose whether to administer a single, summative assessment or “multiple statewide interim assessments” during the year that “result in a single, summative score.” However, Ohio’s most recent budget bill—which was crafted in response to the widespread backlash against PARCC—mandates that state assessments can only be administered “once each year, not over multiple testing windows, and in the second half of the school year.” In other words, current Ohio law prevents ESSA’s interim assessment option from being used for state accountability purposes.

Of course, laws can always be changed, and revising Ohio law to allow multiple windows for interim assessments would come with both benefits and drawbacks. A single, summative assessment like what’s currently in use saves time (in both test administration and grading) and minimizes costs. But choosing to maintain a single, end-of-year assessment also ignores the fact that many districts already utilize interim assessments to measure progress leading up to the statewide test. If most districts spend the money and time administering interim assessments anyway, it could be helpful to provide interims that are actual state tests in order to ensure alignment and valid, reliable data. On the other hand, districts that don’t use interim assessments won’t like being forced to do so—and districts that design their own interims may not like using state-designed ones.

For teachers who complain that state tests aren’t useful for day-to-day instruction, the interim assessment structure offers a solution. It also offers an intriguing new way to measure student progress—different from Ohio’s current value-added system—in which scores from the beginning of the year could be compared to scores at the end of the year. Given the very recent change in Ohio law prohibiting multiple testing windows, this ESSA-allowed flexibility will only come to pass if teachers and school districts make clear to lawmakers that it would make their lives easier.

Testing time

ESSA permits states to “set a target limit on the aggregate amount of time devoted” to tests based on the percentage of annual instructional hours they use. This isn’t a new idea: A 2015 report from former State Superintendent Richard Ross found that Ohio students spend, on average, almost twenty hours taking standardized tests during the school year (just 2 percent of the school year). Prior to the report, lawmakers had proposed legislation that would limit testing time, and there was much discussion—but no action—after the report’s publication. ESSA’s time limit provision could reignite Ohio’s debate, but the pros and cons remain the same: While a target limit could prevent over-testing, it would also likely end up as either an invasive limitation on district autonomy (many standardized tests are local in nature or tied to Ohio’s teacher evaluation system) or a compliance burden. Ironically, a statewide limitation on local testing time would be an unfortunate result of a law intended to champion local decision making based on the needs of students.
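
To make the arithmetic behind such a target limit concrete, here is a minimal sketch of how a cap expressed as a share of annual instructional hours might be checked. The twenty-hour figure comes from the Ross report cited above; the roughly 1,000-hour school year and the 2 percent cap are assumed, illustrative values rather than limits Ohio has adopted.

```python
# Illustrative sketch only: compares reported testing time against a
# hypothetical cap expressed as a share of annual instructional hours.
annual_instructional_hours = 1_000  # assumed figure, for illustration
testing_hours = 20                  # average reported in the 2015 Ross report
cap_share = 0.02                    # hypothetical 2 percent statewide target

testing_share = testing_hours / annual_instructional_hours
print(f"Testing takes up {testing_share:.1%} of instructional time.")
print("Within the target limit." if testing_share <= cap_share else "Exceeds the target limit.")
```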

***

Unsurprisingly, the Buckeye State has a lot to consider when making these decisions. Determining what’s best for kids is obviously paramount, and a key element in the decision-making process should be the voices of teachers and parents. The success or failure of each of these decisions depends on implementation, and feedback from teachers and school leaders will offer a glimpse at how implementation will progress. Fortunately, ODE has already taken a proactive approach to gathering public input on ESSA decision making. Anyone interested in education in Ohio would do well to take advantage of those opportunities to weigh in. The more feedback the department receives, the better.




[1] This comparison is based only on ACT/SAT tests, since it’s widely believed that both will qualify under the “nationally recognized” label. Education First also lists PARCC and Smarter Balanced as options, but Ohio has already exited the PARCC consortium, and the likelihood of the state opting for Smarter Balanced is low. There are other possibilities, but they are not explored here.

 

This blog was originally posted on Education Next on July 24, 2016.

The Thomas B. Fordham Institute recently released a study of the academic impact of Ohio’s flagship school choice program, authored by noted researcher Dr. David Figlio of Northwestern University. The report is noteworthy for its principal findings: not only is the sky not falling for affected public schools, but the EdChoice program has had a positive impact on the academic performance of public schools whose students are eligible for a scholarship. Surprisingly, the study also found that the scholarship students it was able to examine (more on that later) did not perform as well as their public school peers on the state test.

Matt Barnum of The 74 wrote an article that details some of the possible explanations for the latter finding. Based on my own experience in Ohio, I can attest that many nonpublic schools do not align their curriculum to the state test, nor do they focus much on these measures, and that is likely an important factor. However, it is important to note what the study could not address. As Dr. Figlio made clear in both his report and a presentation to The City Club of Cleveland, the study had significant limitations.

Ohio’s EdChoice program differs from most other school choice programs in a significant way: a student’s eligibility is determined solely by the performance of his or her assigned public school. This has implications for how to study the program. The program creates real choice opportunities for students assigned to these schools (primarily the lowest-performing 10 percent in the state), removes an economic incentive for middle-class families to flee to the suburbs for better schools and larger properties (an important consideration for anyone knowledgeable about inner-ring suburbs and urban areas in Ohio), and creates market pressure for public schools to improve their performance, which the study confirms to be the case. This last argument is a cornerstone for many “free-market” school choice advocates and scholars, and the study’s most robust findings appear to confirm it.

The more surprising finding, to both advocates and opponents, was that using a scholarship to attend a private school appears to have led students to fare worse academically than they would have had they remained in their public schools. While there are many possible explanations for this, it is not worth spending time making excuses for why some students in private schools aren’t doing as well academically as some of their public school peers. Parents make choices for their children based on many different factors. It may be that a child is not thriving at a particular school or is having social problems with a particular group of children; parents may disagree with teachers and/or the curriculum being taught; or they may desire a more faith-based approach to learning. Every child is unique and has their own needs. But certainly the expectation needs to be that, among other benefits, choice should lead to higher levels of academic achievement.

The challenge facing researchers and policymakers is determining how these students would have performed had they stayed in their assigned public schools. As noted above, EdChoice eligibility is based on the performance of students’ assigned public schools. The most apt comparison of academic performance would seem to be between scholarship students and their peers who remained enrolled at their assigned schools. We have some data on this, and as noted in an article published by the Thomas B. Fordham Institute in 2014, they indicate that scholarship students do outperform their public school peers in many districts, in some cases by huge margins. Consider Columbus: in 2013–14, voucher students in grades three to eight outperformed their district peers in math and reading at every grade level on the Ohio Achievement Assessments (OAAs) and Ohio Graduation Tests (OGTs). For instance, on the OGTs, 96 percent of voucher students were proficient in reading compared to 72 percent of public school students. On the math portion of the OGT, 85 percent of voucher students were proficient compared to 50 percent of their public school peers. Of third graders using a scholarship to attend a private school, 96 percent were proficient in reading compared to 55 percent of students in their assigned public schools. Similarly impressive scores, with some exceptions at certain grade levels, were posted in Cincinnati, Cleveland, Dayton, Toledo, and other districts.

So why didn’t this study show the same pattern? As Dr. Figlio notes, you can’t simply compare these two groups, because there must be a reason why some students chose to leave their assigned schools and others did not. In fact, his study shows that, among eligible students, those who used scholarships to move to a private school were higher-performing and less likely to be from low-income families than those who did not. Accordingly, the best available comparison group in this context consists of students attending schools where they had no access to choice because their schools were not designated among the lowest-performing 10 percent in the state. Figlio’s study therefore compares students who used scholarships to leave the highest-ranked schools that are eligible for EdChoice with observably similar students who remained in the lowest-ranked schools that are not. As a result, it tells us little about how well scholarship students who would otherwise have attended the very lowest-performing public schools are doing—a fact Dr. Figlio acknowledges. The set of schools from which the comparison group is drawn may also be problematic: while these schools appear similar on the surface, there could nonetheless be differences between a school that has never been designated as EdChoice-eligible and a school that has been consistently so designated. In addition, the study covered only the earlier years of the program. Further study may uncover more positive results—even using this methodology.

The bottom line is that we should be careful in interpreting these findings. Most importantly, the study was unable to examine the achievement of students assigned to the lowest-performing public schools. What we do know is that EdChoice has improved public schools, that parents like the choices they are provided, and that data do seem to indicate greater achievement by many students on scholarships. Despite all the creative headlines that this study has generated, a deeper dive into the report, coupled with intimate knowledge of the program and Ohio, gives us reason to believe that the program is beneficial. There are profound fiscal and equity arguments for school choice, as Dr. John C. White, Louisiana’s state superintendent, eloquently writes, and these programs take time to develop. By encouraging more private schools to participate, ensuring that parents have access to information on how their children are performing, and broadening the number of students eligible, Ohio can make this vital choice program an even greater benefit to the state and its citizens.

Rabbi Frank is the Ohio director of Agudath Israel of America.

This report from Civic Enterprises and Hart Research Associates provides a trove of data on students experiencing homelessness—a dramatically underreported and underserved demographic—and makes policy recommendations (some more actionable than others) to help states, schools, and communities better serve students facing this disruptive life event. 

To glean the information, researchers conducted surveys of homeless youth and homeless liaisons (school staff funded by the federal McKinney-Vento Homeless Assistance Act who have the most in-depth knowledge regarding students facing homelessness), as well as telephone focus groups and in-depth interviews with homeless youth around the country. The findings are sobering.

  • In 2013–14, 1.3 million students experienced homelessness—a 100 percent increase from 2006–07. The figure is still likely understated given the stigma associated with self-reporting and the highly fluid nature of homelessness. Under the McKinney-Vento Homeless Assistance Act, homelessness includes not just living “on the streets” but also residing with other families, living out of a motel or shelter, and facing imminent loss of housing (eviction) without resources to obtain other permanent housing. Almost seven in ten formerly homeless youth reported feeling uncomfortable talking with school staff about their housing situation. Homeless students often don’t describe themselves as such and are therefore deprived of the resources available to them.
  • Unsurprisingly, homelessness takes a serious toll on students’ educational experience. Seventy percent of youth surveyed said that it was hard to do well in school while homeless; 60 percent said that it was hard to even stay enrolled in school. Vast majorities reported homelessness affecting their mental, emotional, and physical health—realities that further hinder the schooling experience.
  • McKinney-Vento liaisons report insufficient training, limited awareness, and a lack of resources dedicated to the problem. One-third of liaisons reported that they were the only people in their districts trained to identify and intervene with homeless youth. Just 44 percent said that other staff were knowledgeable of the signs of homelessness and aware of the problem more broadly. And while rates of student homelessness have increased, supports have not kept pace. Seventy-eight percent of liaisons surveyed said that funding was a core challenge to providing students with better services; 57 percent said that time and staff resources were a serious obstacle.
  • Homeless students face serious logistical and legal barriers related to changing schools (which half reported having to do), such as fulfilling proof of residency requirements, obtaining records, staying up-to-date on credits, or even having a parent/guardian available to sign school forms.

Fortunately, there are policy developments that further shine the spotlight on students experiencing homelessness and equip schools to better address it. The recently reauthorized Every Student Succeeds Act (ESSA) treats homeless students as a subgroup and requires states, districts, and schools to disaggregate their achievement and graduation rate data beginning in the 2016–17 school year. ESSA also increased funding for the McKinney-Vento Education for Homeless Children and Youth program and attempted to address immediate logistical barriers facing homeless students—for instance, by mandating that homeless students be enrolled in school immediately even when they are unable to produce enrollment records. The report urges states to fully implement ESSA’s provisions related to homeless students and offers other concrete recommendations for schools (improving identification systems and training all school staff, not just homeless liaisons) as well as communities (launching public awareness campaigns and collecting better data). In Ohio, almost eighteen thousand students were reported as homeless for the 2014–15 school year. Policy makers would be wise to review this report’s findings and recommendations and consider how to implement and maximize ESSA’s provisions so that our most vulnerable students don’t fall through the cracks.

Source: Erin S. Ingram, John M. Bridgeland, Bruce Reed, and Matthew Atwell, “Hidden in Plain Sight: Homeless Students in America’s Public Schools,” Civic Enterprises and Hart Research Associates (June 2016).

In 2000, North Carolina’s university system (UNC) announced that it would increase from three to four the minimum number of high school math courses students must complete in order to be considered for admission. The intent was to increase the likelihood that applicants be truly college-ready, thereby increasing the likelihood of degree completion. Researchers from CALDER/AIR recently looked at the UNC data and connected it to K–12 student information to gain an interesting insight into how post-secondary efforts to raise the bar affect student course-taking behavior in high school.

The study posed three questions: Did the tougher college admission requirement increase the number of math courses taken by high school students (North Carolina’s high school graduation requirements remained at three math courses, despite UNC’s higher bar for admissions)?[1] Did it alter enrollment patterns at UNC schools? And did the hoped-for increase in college readiness and completion result?

Overall, high school students did take more math courses after the UNC policy change. As researchers expected, the biggest increases were in the middle- and lower-achievement deciles—high achievers were already taking more than three courses—but the increases were not uniform across districts. This led researchers to look deeper into math sequences in specific districts across the state (urban, suburban, and rural) both before and after the new policy was announced. They found that some districts made no changes to existing sequences and that a number of them made it difficult to complete four courses by being either too lax (integrated math pathways rather than delineated Algebra/Geometry/etc.) or too stringent (strict prerequisites). Researchers posit that larger and better-resourced districts were able to make needed changes in their course sequences more readily than their counterparts (more and specialized teaching staff, textbooks, technology, etc.), but they do not discount the possibility that some districts and schools were simply unwilling to make the changes. They offer no firm answers as to why this might be, though they speculate that districts with few college-bound students might not have wanted to expend resources for meager benefits. Either way, after the two-year policy rollout, any student in a district unable or unwilling to make the needed changes was effectively locked out of UNC—a disheartening thought, even for those who believe that college is not necessary for all kids. It’s also an ill omen for other bar-raising efforts that may come down the pike.

Researchers did detect an increase in college enrollment, but one that fell within rather than beyond the “usual” achievement deciles. In other words, the new policy did not open the doors of college to more lower-performing students by building up skills in K–12, but it did seem to have the intended effect of giving incoming students more and better math training than in previous years. Ultimately, only minor increases in college completion could be associated with the new math requirement (keep in mind that we’re talking only about North Carolinian K–12 students going on to UNC campuses), but those who did complete likely reaped the benefits of a degree and faced less student debt to go with it. Additionally, there seemed to be an unanticipated bump in the number of STEM-related majors among those graduates.

In the end, the UNC policy change seems to have done little to advance the laudable goal of increasing college completion. But the CALDER researchers have done an excellent job going the extra mile to show what effects post-secondary changes can have at the high school level. To wit: a continuing disconnect between high school graduation and college readiness.

SOURCE: Charles Clotfelter, Steven Hemelt, and Helen Ladd, “Raising the Bar for College Admission: North Carolina’s Increase in Minimum Math Course Requirements,” CALDER/AIR Working Paper (July 2016).


[1] A two-year delay in consequences (refused admission for those who hadn’t completed four math courses) for the new policy at UNC created excellent conditions for the researchers to determine whether observed changes in high school math sequences and student course-taking patterns were likely related to the UNC policy change.

In a previous blog post, we urged Ohio’s newly formed Dropout Prevention and Recovery Study Committee to carefully review the state’s alternative accountability system for dropout-recovery charter schools. Specifically, we examined the progress measure used to gauge student growth—noting some apparent irregularities—but didn’t cover in detail the three other components of the dropout-recovery school report cards: graduation rates, gap closing, and assessment passage rates. Let’s tackle them now.

Each of these components is rated on a three-level scale: Exceeds Standards, Meets Standards, and Does Not Meet Standards. This rating system differs greatly from the A–F grades issued by Ohio to conventional public schools; the performance standards (or cut points) used to determine their ratings are also different. One critical question that the committee should consider is whether the standards for these second-chance schools are set at reasonable and rigorous levels.

Graduation Rates

Dropout-recovery schools primarily educate students who aren’t on track to graduate high school in four years (some students may have already passed this graduation deadline). These schools are still held responsible for graduating students on time. Ohio, however, recognizes that dropout-recovery schools educate students who need extra time to graduate by assigning ratings for extended (six-, seven-, and eight-year) graduation rates in addition to the four- and five-year rates reported for all public schools. The standards for dropout-recovery schools’ graduation rates are also significantly relaxed when compared to traditional public schools. Even with these adjustments, it is unlikely that dropout-recovery schools’ four- and five-year rates are reflective of their true performance (a problem, as we discuss here, for any high school enrolling low-achieving adolescents).

Table 1 below displays the four-year graduation rate targets for dropout-recovery schools, along with the targets for the five- through eight-year rates. Also shown are the standards for traditional public schools (graded on an A–F scale). As you can see, the standards for dropout-recovery schools are considerably lower. For example, a dropout-recovery school could graduate less than 50 percent of a cohort of students and still earn the top rating (Exceeds Standards); meanwhile, traditional schools are required to graduate virtually all of their students (at least 89 percent) to earn an A or B rating. Note also that the five- through eight-year graduation rate targets are higher than the four-year targets, since schools are held to slightly higher standards given the additional time students have to earn their diplomas.

Table 1. Graduation rate performance standards

But are the adjusted graduation rate standards too low for dropout-recovery schools? To determine this, a good first step might be to compare these standards to the ones set for dropout-recovery schools in other states (e.g., Texas or Arizona). Another way of tackling the question is to examine the current distribution of Ohio’s ratings. If nearly all schools were exceeding standards, then it would be fair to say that they are probably too low. But as Chart 1 indicates, that’s not the case.

Chart 1. Distribution of four-year graduation rate ratings, dropout-recovery schools, 2014–15

As readers can see, there is a fairly even balance across the categories, indicating that the standards could be deemed appropriate for this unique group of schools. That being said, the graduation standards are rather low in absolute terms, and policy makers should consider ratcheting up targets in a reasonable way (perhaps by phasing in incrementally higher standards over multiple years). They could also consider making adjustments to the way graduation rates are calculated.

Assessment Passage Rate

The assessment passage rate measures the percentage of students in twelfth grade, or within three months of turning twenty-two, who pass the Ohio Graduation Test (OGT) or the End-of-Course exams (beginning with the 2018 graduating class). Ratings are assigned to schools depending on what percentage of their students pass the graduation exams (each subject exam, such as math or social studies, must be passed). A–F report cards for conventional public schools do not use this type of measure, so we don’t display a side-by-side comparison of standards. Dropout-recovery schools with assessment passage rates above 68 percent receive an Exceeds Standards rating; schools whose assessment passage rate is between 32 percent and 68 percent receive Meets Standards; and schools with an assessment passage rate of less than 32 percent receive a Does Not Meet Standards rating.
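
For readers who want the rule spelled out, here is a minimal sketch of how the cut points described above would translate a school’s passage rate into a rating. How rates falling exactly on the 32 or 68 percent boundaries are handled is an assumption on my part; the state’s official business rules may treat those cases differently.

```python
def passage_rate_rating(passage_rate_pct: float) -> str:
    """Map a dropout-recovery school's assessment passage rate (0-100) to its rating."""
    if passage_rate_pct > 68:
        return "Exceeds Standards"
    elif passage_rate_pct >= 32:  # boundary handling is assumed, not confirmed
        return "Meets Standards"
    else:
        return "Does Not Meet Standards"

# Example: a school where 45 percent of eligible students pass every required
# subject exam would receive a "Meets Standards" rating.
print(passage_rate_rating(45))
```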

Again, to get a sense of whether the performance standards are set at reasonable levels, we can look at the distribution of school ratings. Chart 2 shows that the ratings are fairly well balanced, with most rated schools falling into the Meets Standards category. This suggests that the standards are set at appropriate levels for the passage rate component. Surprisingly, however, twenty-nine schools were not assigned ratings in this component, though it is unclear why these schools were not rated. The fact that almost one in three dropout-recovery schools did not receive a rating is troubling and worthy of further investigation.

Chart 2. Distribution of assessment passage rate ratings, dropout-recovery schools, 2014–15

Gap Closing (Annual Measurable Objectives)

The gap-closing measure gauges how well a school narrows subgroup achievement gaps. This is measured by the percentage of proficiency targets a school meets for certain student subgroups. The very intricate methodology for calculating the percentage of targets met is available here. As we’ve argued elsewhere, this measure has some imperfections, and the state should reconsider its use—at least in its present form—as an accountability measure applying to any public school.

That being said, so long as it plays a key role in school report cards, we should examine the results. Table 2 displays the performance benchmarks for dropout-recovery schools and traditional public schools. The percentages displayed are equal to the number of gap-closing points a school earns divided by the number of points possible. Just as with graduation rates, dropout-recovery schools are held to considerably lower standards than conventional schools. On the one hand, this is understandable given the academic backgrounds of the students they serve. On the other hand, the standards appear at face value to be entirely too low: a dropout-recovery school could earn just 1 percent of the gap-closing points possible and receive a Meets Standards rating.

Table 2. Gap-closing performance standards

Chart 3 displays the distribution of gap-closing ratings for dropout-recovery schools. Unlike the graduation rate and test passage rate components, the distribution for gap closing appears imbalanced and skewed toward the Does Not Meet category. As low as the gap-closing dropout-recovery standards may be, they still result in failing ratings for a disproportionate number of schools (and a modest number of non-rated schools).

Chart 3. Distribution of gap-closing ratings, dropout-recovery schools, 2014–15

* * *

A fair school accountability system recognizes circumstances that are beyond the control of schools. But it should do so without creating glaring “double standards,” at least when it comes to graduation and student proficiency indicators. Fortunately, this is not as much of a concern when growth is the measuring stick—another reason why getting the progress measure right is absolutely essential. The information above suggests that some important questions still need to be answered around accountability for dropout-recovery schools. Getting standards right for Ohio’s dropout-recovery schools can ensure that these second-chance schools are indeed helping young people advance their education.
