Standards, Testing & Accountability

In 2007, the Thomas B. Fordham Institute published what was probably the most influential study in our eighteen-year history: The Proficiency Illusion. Using data from state tests and NWEA’s Measures of Academic Progress, our partners at NWEA estimated the “proficiency cut scores” of most of the states in the country. We expected to find a race to the bottom during the No Child Left Behind era; instead we found a walk to the middle. Importantly, though, we also demonstrated the vast discrepancies from state to state—and within states, from subject to subject and even grade to grade—when it came to what counted as “proficient.” Checker and I wrote in the foreword:

What does it mean for standards-based reform in general and NCLB in particular? It means big trouble—and those who care about strengthening U.S. K–12 education should be furious. There’s all this testing—too much, surely—yet the testing enterprise is unbelievably slipshod. It’s not just that results vary, but that they vary almost randomly, erratically, from place to place and grade to grade and year to year in ways that have little or nothing to do with true differences in pupil achievement. America...

Are states dutifully reporting the fraction of students who are on track for college or career? According to a new report from Achieve, a nonprofit organization that assists states in education reform efforts, the answer is no—and Ohio has been one of the worst offenders.

The report documents the discrepancies between proficiency rates on state tests versus the National Assessment of Educational Progress (NAEP). Achieve’s analysis finds that most states “continue to mislead the public on whether students are proficient” by reporting proficiency rates on state assessments that are significantly higher than those on NAEP. This is a serious problem since NAEP—commonly referred to as the Nation’s Report Card—has long been considered the gold standard for measuring student achievement and, more recently, college preparedness. As Fordham’s Mike Petrilli and Chester Finn explained a few weeks ago, there’s now good evidence that NAEP’s proficiency level in reading is particularly predictive of whether students are ready to succeed in college without taking remedial courses.

Ohio’s longstanding definition of proficiency, on the other hand, is predictive of nothing, as far as we can tell. It certainly doesn’t indicate that a student is on track for college. But that’s surely what parents...

With the end of the school year fast approaching and the annual testing window closing, we can make some preliminary judgments about what's signal and what's noise in the debate over parents opting their children out of state assessments. There have been missteps and lessons for those on both sides of the issue. Four stand out to me.

1) Respect parental choice. Education reformers who support testing may not agree with parents' decision to opt out. But it's senseless to argue that parents know best when it comes to choosing their child's school, yet are ill-informed when it comes to opting out. Parental choice is like free speech; the test of your belief is whether you still support it when you dislike how it's used. My fellow U.S. News contributor Rick Hess writes that education reformers have dismissed test refusers as "conspiracy theorists and malcontents." That overstates things, but no matter. Those of us who value testing need to do a better job of explaining to unhappy parents what's in it for them. But we also must respect parental prerogative, whether or not we like where it leads.

2) Don't follow the money. Parents have every...

There’s been a lot of pontificating lately about how to interpret the opt-out movement and the message parents are trying to deliver. The Wall Street Journal’s Jason Riley believes that “soccer moms” are mad at Common Core. Jay Greene, channeled by Riley, blames the diminishment of parental control. Rick Hess fingers the reformers’ social justice agenda, which is at odds with the interests of middle class suburban parents.

These guesses are as good as anyone’s because the truth is: We don’t know. To my knowledge, nobody has surveyed a representative sample of the opt-outers; nobody knows for sure what’s motivating them. So let’s pause for a moment and examine what we do know. In other words, let’s establish the fact base.

  1. A whole lot of parents in New York State opted their kids out of state exams this spring. According to New York State Allies for Public Education, almost 200,000 students did not take the tests; in several districts, the opt-out rate was as high as 70 percent.
  2. A handful of New Jersey districts also saw sky-high opt-out numbers,
  3. ...

With little fanfare, the New York City Department of Education (DOE) last month released a draft of its new “School Quality Snapshot”—Chancellor Carmen Fariña’s bid to evaluate each of Gotham’s more than 1,800 schools based on “multiple measures.” The DOE’s website invites public comment on the new reports until May 8. Here’s mine:

I confess I wasn’t the biggest fan of the single-letter-grade school report cards of the Bloomberg-Klein era. But as a signaling device to schools and teachers about what mattered to the higher-ups at the DOE’s Tweed Courthouse headquarters, they were clear and unambiguous: Raise test scores, but most importantly raise everyone’s test scores. With 85 percent of a school’s grade based on test scores—and 60 percent of the total based on test score growth—the report cards, for good or for ill, left little room for doubt that testing was king. Valorizing growth was an earnest attempt to measure schools’ contributions to student learning, not merely demographics or zip code.

Fariña, whose contempt for data I’ve remarked upon previously, values “trust.” You may worry if your child can’t read or do math. So does Chancellor Fariña, sort of. But she deeply cares if “teachers trust the...

The education components of Governor Kasich’s proposed budget—and the House's subsequent revisions—made a big splash in Ohio's news outlets. Much of the attention has been devoted to the House’s (unwise) moves to eliminate PARCC funding and their rewrite of Kasich’s funding formula changes. Amidst all this noise, however, are a few other education issues in the House’s revisions that have slipped by largely unnoticed. Let’s examine a few.

Nationally normed vs. criterion-referenced tests

As part of its attempt to get rid of PARCC, the House added text dictating that state assessments “shall be nationally normed, standardized assessments.” This is worrisome, as there is a big difference between norm-referenced and criterion-referenced tests.

A norm-referenced test determines scores by comparing a student’s performance to the entire pool of test takers. Each student’s test score is compared to other students in order to determine their percentile ranking in the distribution of test takers. Examples of norm-referenced tests are the Iowa Test of Basic Skills or the Stanford 10 exams. A criterion-referenced test, on the other hand, is scored on an absolute scale. Instead of being compared to other students, students are compared against a standard of achievement (i.e.,...
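The distinction above can be made concrete with a small sketch. This is purely illustrative—the scores, cohort, and cut score below are hypothetical, not drawn from any actual assessment:

```python
# Illustrative contrast between norm-referenced and criterion-referenced
# scoring. All numbers here are made up for demonstration.

def percentile_rank(score, all_scores):
    """Norm-referenced: percent of the test-taking pool scoring below this score."""
    below = sum(1 for s in all_scores if s < score)
    return 100 * below / len(all_scores)

def meets_standard(score, cut_score):
    """Criterion-referenced: pass/fail against a fixed standard of achievement."""
    return score >= cut_score

cohort = [55, 62, 70, 70, 78, 85, 91, 96]  # hypothetical pool of test takers

# Norm-referenced: the same student's rank shifts if the cohort changes.
print(percentile_rank(78, cohort))         # -> 50.0 (half the pool scored lower)

# Criterion-referenced: the result depends only on the fixed cut score.
print(meets_standard(78, cut_score=75))    # -> True, regardless of peers
```

The key point the sketch captures: lowering the cohort's performance inflates every student's percentile rank on a norm-referenced test, while a criterion-referenced result is unaffected by who else took the exam.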

We released a new report today, School Closures and Student Achievement: An Analysis of Ohio’s Urban District and Charter Schools, that could change the way we think about school closure. The study reveals that children displaced by closure make significant academic gains on state math and reading exams after their school closes.

The study examined 198 school closures that occurred between 2006 and 2012 in the Ohio ‘Big Eight’ urban areas (Akron, Canton, Cincinnati, Cleveland, Columbus, Dayton, Toledo, and Youngstown). The research included 120 closed district-run schools and 78 closed charter schools. Taken together, these closures directly affected 22,722 students—disproportionately low-income, low-achieving, and minority students—who were in grades 3-8 at the point of closure.

Three years after closure, the research found that displaced students made the following cumulative gains:

  • Students who had attended a closed district school gained forty-nine additional days of learning in reading and thirty-four additional days in math; and
  • Students who had attended a closed charter school gained forty-six additional days in math.
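For readers unfamiliar with the "days of learning" metric, here is a rough sketch of how such figures are typically derived from test-score effect sizes. The study's actual conversion factors are not reproduced here; the numbers below are assumptions chosen purely for illustration:

```python
# Hypothetical "days of learning" conversion. Both constants are assumptions
# for demonstration only: a 180-day school year, and an effect size of 0.25
# standard deviations treated as one full year of academic growth. The
# report's own methodology may use different factors.

DAYS_PER_YEAR = 180   # assumed instructional days in one school year
SD_PER_YEAR = 0.25    # assumed effect size equivalent to one year of growth

def effect_size_to_days(effect_size_sd):
    """Convert a test-score effect size (in SD units) into days of learning."""
    return effect_size_sd / SD_PER_YEAR * DAYS_PER_YEAR

# Under these assumed factors, an effect of roughly 0.068 SD works out to
# about forty-nine days of learning.
print(round(effect_size_to_days(0.068)))
```

The takeaway is that "days of learning" is a translation of a statistical effect size into more intuitive units, and the translated figure depends entirely on the conversion factors chosen.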

Further, the study reveals that students who attended a higher-quality school after closure made even greater progress. Three years after closure, displaced students who transferred...

A vast amount of contemporary education policy attention and education reform energy has been lavished on the task of defining and gauging “college readiness” and then taking steps to align K–12 outcomes more closely with it. The ultimate goal is for many more young people to complete high school having been properly prepared for “college-level” work.

The entire Common Core edifice—and the assessments, cut scores, and accountability arrangements built atop it—presupposes that “college-ready” has the same definition that it has long enjoyed: students prepared to succeed, upon arrival at the ivied gates, in credit-bearing college courses that they go right into without needing first to subject themselves to “remediation” (now sometimes euphemized as “developmental education”).

But this goes way beyond Common Core. Advanced Placement courses also rest on the understanding that an “introductory college-level course” in a given subject has a certain set meaning and fixed standards. The people at ACT, the College Board, and NAGB have sweated bullets developing metrics that gauge what a twelfth grader must know and be able to do in order to be truly college-ready—again, in the sense of having solid prospects of succeeding in credit-bearing college courses in one subject or another.

Lying beneath...

Everyone knows that impenetrable jargon is to the education community what sputtering indignation is to Twitter: both irritating and contagious. When teachers and administrators hold forth on the importance of psychometrics and normed modality processing, it emboldens the rest of us to test our comfort with stackable credentials and mastery-based learning. And in the midst of this morass of deliberate obscurantism, a term like “career-ready” should seem like a godsend. But as this new brief from ACT, Inc. reminds us, there are important nuances to even the most outwardly simple concepts.

Nearly ten years ago, the organization released Ready for College and Ready for Work: Same or Different?, a similar publication that made the case for equivalently rigorous education for all high school graduates, regardless of whether they matriculate into colleges or head directly for the workplace. As the authors of Unpacking “Career Readiness” note, the earlier brief “described college and career readiness in terms of benchmarks focusing solely on academic assessments and the level of education…required for success in postsecondary education or targeted workforce training.” They concede, though, that subsequent research “has clearly established the value of additional areas of competency that are important for both college...

The testing “opt-out” movement is testing education reform’s humility.

The number of students not participating in state assessments is large and growing. In one New York district, 70 percent of students opted out; in one New Jersey district, it was 40 percent.

Some reporting makes the case that this phenomenon is part of a larger anti-accountability, anti-Common Core story. Some reformers, it seems to me, believe opting out is the result of ignorance or worse.

Participants are routinely cast as uninformed or irrational. Amanda Ripley implied that opting out of testing is like opting out of vaccines and lice checks. New York Board of Regents Chancellor Merryl Tisch argued, “We don’t refuse to go to the doctor for an annual check-up…we should not refuse to take the test.” A column in the Orlando Sentinel argued we’d “lost our minds” and that the “opt-out movement has officially jumped...
