Standards, Testing & Accountability

The assessments edition

In this week's podcast, Mike Petrilli and Robert Pondiscio preview Fordham’s long-awaited assessments evaluation, analyze low-income families’ education-related tech purchases, and wave the red flag about TFA’s lurch to the Left. In the Research Minute, David Griffith examines how well the nation’s largest school districts promote parent choice and competition between schools.

Amber's Research Minute

Grover J. "Russ" Whitehurst, "Education Choice and Competition Index 2015," Brookings (February 2016).

Last week, we cautioned that Ohio’s opt-out bill (HB 420) offers a perverse incentive for districts and schools to game the accountability system. The bill has since been amended, but it is no closer to addressing the larger issues Ohio faces as it determines how best to maintain accountability in response to the opt-out movement. 

Current law dings schools and districts when a student skips the exam: that student is assigned a zero in the calculation of the school's overall score (opting out directly affects two of the ten report card measures). The original version of HB 420 removed those penalties entirely. Instead of earning a zero, absent students would simply not count against the school. Recognizing the potential unintended consequences of such a scenario, including schools counseling out low-achieving students and larger numbers of opt-outs overall, the drafters of the substitute bill incorporated two changes.

First, the amended version requires the Ohio Department of Education to assign two separate Performance Index (PI) grades for schools and districts for the 2014–15 school year—one reflecting the scores of all students required to take exams (including those who opt out) and another excluding students who didn’t participate. Second, in...
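To make the arithmetic behind that first change concrete, here is a minimal sketch in Python. It is not Ohio's actual Performance Index formula (which weights several achievement levels); the function names and numbers are invented solely to show how assigning zeroes to non-participants drags an average down, while excluding them leaves it untouched.

```python
# Illustrative only: a simplified stand-in for a performance index, computed as the
# average points per enrolled student. Ohio's real PI formula weights several
# achievement levels; the point here is just how opt-outs change the arithmetic.

def pi_counting_opt_outs(tested_points, opt_outs):
    """Every enrolled student counts; non-participants contribute zero points."""
    return sum(tested_points) / (len(tested_points) + opt_outs)

def pi_excluding_opt_outs(tested_points):
    """Only tested students count; non-participants are invisible."""
    return sum(tested_points) / len(tested_points)

# Hypothetical school: 90 tested students earning 1.0 point each, 10 opt-outs.
tested = [1.0] * 90
print(pi_counting_opt_outs(tested, opt_outs=10))  # 0.9 -> zeroes drag the index down
print(pi_excluding_opt_outs(tested))              # 1.0 -> the school is held harmless
```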

Following in the footsteps of a previous study, CAP researchers have examined the effects of a state's commitment to standards-based reform (as measured by clear standards, tests aligned to those standards, and whether a state sanctions low-performing schools) on low-income students' test scores (reading and math achievement on the NAEP from 2003 to 2013). The results indicate that jurisdictions ranked highest in commitment to standards-based reform (e.g., Massachusetts, Florida, Tennessee, the District of Columbia) show stronger gains on NAEP scores for their low-income students. The relationship also holds at the other end of the ranking: low-income students in Iowa, Kansas, Idaho, Montana, North Dakota, and South Dakota, the states ranked lowest in commitment to standards-based reform, fared worse.

As you can imagine, a lot of caveats go with the measure of commitment to standards-based reform. Checking the box for "implemented high standards" alone raises more questions than it answers. Beyond that, implementation, teaching, and assessment of standards are all difficult, if not impossible, to quantify. The authors acknowledge that some of their evidence is "anecdotal and impressionistic," but that admission applies only to the "commitment to standards" piece. They are four-square behind NAEP scores as a touchstone of academic success or lack...

  • If you ask a thoughtful question, you may be pleased to receive a smart and germane answer. If you post that question in your widely read newspaper column on education, you'll sometimes be greeted with such a torrent of spontaneous engagement that you have to write a second column. That's what happened to the Washington Post's Jay Mathews, who asked his readers in December to email him their impressions of Common Core and its new approach to math: Was it baffling them, or their kids, when they sat down to tackle an assignment together? He revealed some of the responses last week, and the thrust was decidedly in support of the new standards. "My first reaction to a Common Core worksheet was repulsion," one mother wrote of her first grader's homework. "I set that aside and learned how to do what [my son] was doing. And something magical happened: I started doing math better in my head." The testimonials are an illuminating contribution to what has become a sticky subject over the last few months. Common Core advocates would be well advised to let parents know that their kids' wonky-looking problem sets can be conquered after all.
  • Homework
  • ...

The eyes of the nation are fixed on a tournament of champions this week. Snacks have been prepared, eager spectators huddle around their screen of preference, and social media is primed to blow up. Veteran commentators have gathered at the scene to observe and pontificate. For the competitors, the event represents the culmination of months of dedicated effort, and sometimes entire careers; everything they’ve worked for, both at the college and professional level, has led up to this moment. The national scrutiny can be as daunting for grizzled journeymen as it is for fresh-faced greenhorns. You know what I’m talking about:

The Fordham Institute’s ESSA Accountability Design Competition.

Okay, you probably know what I’m talking about. If you inhabit the world of education policy, you took notice of Fordham’s January call for accountability system frameworks that would comply with the newly passed Every Student Succeeds Act—and take advantage of the new authority the law grants to states. With the federal influence on local classrooms scaled back so suddenly, it will be up to education agencies in Wisconsin and Mississippi and Alaska to adopt their own methods of setting the agenda for schools and rating their performance in adhering to it.

The purpose of...

Last May, Achieve released a report showing that most states have created a false impression of student proficiency in math and reading. Known as the "honesty gap" (or, as Fordham has long described it, The Proficiency Illusion), the discrepancy between reported and actual proficiency emerges when state test results are compared with NAEP results.[1] For example, Achieve's May report found that over half of states had discrepancies of more than thirty percentage points relative to NAEP's gold standard. Ohio was one of the worst offenders: Our old state test scores (the OAA and OGT) differed from NAEP by thirty percentage points or more in each of the main test subjects, with a whopping forty-nine-point difference in fourth-grade reading.
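For the arithmetic-minded, the gap itself is just a subtraction: the share of students a state reports as proficient minus the share NAEP reports as proficient. A quick sketch with placeholder rates (invented for illustration, not Achieve's or Ohio's published figures):

```python
# Illustrative only: placeholder proficiency rates, not actual state or NAEP figures.
state_reported = {"grade 4 reading": 75.0, "grade 8 math": 70.0}  # percent proficient on the state test
naep_reported  = {"grade 4 reading": 40.0, "grade 8 math": 42.0}  # percent proficient on NAEP

for subject, state_rate in state_reported.items():
    gap = state_rate - naep_reported[subject]
    print(f"{subject}: honesty gap of {gap:.0f} percentage points")
```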

Less than one year later, new state test scores and biennial NAEP results have created an opportunity to revisit the honesty gap. In its latest report, Achieve finds that the gap has significantly narrowed in nearly half of states. Ohio is one of twenty-six states that have earned the commendation "Significantly Improved" for closing the honesty gap in either fourth-grade reading or eighth-grade math by at least ten percentage points since 2013....

On Tuesday afternoon, we at the Fordham Institute will host a competition to present compelling designs for state accountability systems under the Every Student Succeeds Act. (UPDATE: Event details and video here.) The process has already achieved its objective, with more than two dozen teams submitting proposals that are chock-full of suggestions for states and commonsense recommendations for the U.S. Department of Education. They came from all quarters, including academics (such as Ron Ferguson, Morgan Polikoff, and Sherman Dorn); educators (including the Teach Plus Teaching Policy Fellows); policy wonks from D.C. think tanks (including the Center for American Progress, the American Enterprise Institute, and Bellwether Education Partners); and even a group of Kentucky high school students. Selecting just ten to spotlight in Tuesday's live event was incredibly difficult.

I’ve pulled out some of the best nuggets from across the twenty-six submissions.

Indicators of Academic Achievement

ESSA requires state accountability systems to include an indicator of academic achievement “as measured by proficiency on the annual assessments.” 

Yet not a single one of our proposals suggests using simple proficiency rates as an indicator here. That’s because everyone is aware of NCLB’s unintended consequence: encouraging schools to pay attention only to the “bubble kids” whose performance...
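A rough sketch of that dynamic, using invented scale scores and an arbitrary cut score (none of this comes from an actual proposal): a simple proficiency rate moves only when a student crosses the bar, while an average-score indicator also credits growth far below it.

```python
# Illustrative only: invented scale scores and an arbitrary proficiency cut of 700.
CUT = 700

def proficiency_rate(scores):
    return sum(s >= CUT for s in scores) / len(scores)

def average_score(scores):
    return sum(scores) / len(scores)

before = [550, 640, 695, 710, 780]  # one "bubble kid" sitting at 695
after  = [600, 660, 705, 710, 780]  # big gains at the bottom, bubble kid nudged over the bar

print(proficiency_rate(before), proficiency_rate(after))  # 0.4 -> 0.6: only the bubble kid's nudge registers
print(average_score(before), average_score(after))        # 675 -> 691: credits the low scorers' growth too
```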

Editor's note: For a summary of noteworthy content from contenders' proposals, read "Some great ideas from our ESSA Accountability Design Competition."

Under the newly enacted Every Student Succeeds Act (ESSA), states now face the challenge of creating school accountability systems that can vastly improve upon the model required by No Child Left Behind. To help spur creative thinking about how they might do so, and also to inform the Department of Education as it develops its ESSA regulations, the Fordham Institute is hosting an ESSA Accountability Design Competition. (Details here.)

We were thrilled to receive more than two dozen proposals from policy experts, academics, teachers, and students. On Tuesday, February 2, we’ll see ten of these submissions presented on the Fordham stage. (RSVP here.) Participants will pitch and defend their proposals in front of a live audience and an American Idol-style panel of judges.

So why these ten? I chose candidates that a) were particularly well designed; b) were especially creative; and/or c) raised important issues for the Department to consider in the regulatory process. I also aimed to include a variety of voices, including students and teachers. (Their authors also had to...

Late in 2015, Congress passed a new federal education law—the Every Student Succeeds Act (ESSA)—which replaces the outdated No Child Left Behind Act of 2001 (NCLB). The new legislation turns over considerably greater authority to states, which will now have much more flexibility in the design and implementation of accountability systems. At last, good riddance to NCLB’s alphabet soup of policies like “adequate yearly progress” (AYP) and “highly qualified teachers” (HQT)—and yes, the absurd “100 percent proficient by 2014” mandate. Adios, too, to “waivers” that added new restrictions!

But now the question is whether states can do any better. As Ohio legislators contemplate a redesign of school accountability for the Buckeye State, it would first be useful to review our current system. This can help us better understand which elements should be kept and built upon, modified, or scrapped—and which areas warrant greater attention if policy makers are going to improve schools. Since Ohio has an A–F school rating system, it seems fitting to rate the present system’s various elements on an A–F scale. Some will disagree with my ratings—after all, report cards are something of an art—so send along your thoughts or post a comment.

NB: In this...

Ohio lawmakers recently proposed a bill (HB 420) that would remove students who opt out of standardized tests from the calculation of certain school and district accountability measures. Representative Kristina Roegner (R-Hudson), who introduced the bill, declared that “if [a student is] not going to take the test, in no way should the school be penalized for it.” Students who fail to take state exams (for any reason, not just opting out) count against two of the ten school report card measures: the performance index score and the K–3 literacy measure. Non-participating students receive zeroes, which pull down the overall score on those components.

On first reading, Roegner’s sentiments seem sensible: Why should schools be held responsible for students who decline even to sit for the exams? Is it the job of schools to convince students (or their parents, the more likely objectors) to show up on exam day? While compulsory schooling laws do require students to attend school, there is nothing especially enforceable about exam day in particular. Ohio does not prohibit opting out. Nor does it explicitly allow it, as some states do (e.g., Pennsylvania allows a religious objection to testing; Utah and...
