Standards-Based Reforms

Nationally and in Ohio, we press for the full suite of standards-based reforms across the academic curriculum and throughout the K–12 system, including (but not limited to) careful implementation of the Common Core State Standards (CCSS) for English language arts (ELA) and mathematics, as well as rigorous, aligned state assessments and forceful accountability mechanisms at every level.

Resources:

Our many standards-based blog posts are listed below.


A small storm has blown up around the fact that certain math items on the 2015 National Assessment of Educational Progress (NAEP) do not align with what fourth and eighth graders are actually being taught in a few states—mainly places attempting to implement the Common Core State Standards within their schools’ curricula.

NAEP is administered only in grades four, eight, and twelve. So the specific issue is whether the fourth graders who sat for NAEP this spring had a reasonable opportunity to learn the skills, algorithms, and techniques—broadly speaking, “the content”—on that test. If their state standards had moved some portion of what used to be fourth-grade math to the fifth or sixth grade, or replaced it with something else entirely, their state’s NAEP scores would likely be lower.

This kind of misalignment is blamed for some of the math declines that NAEP recently reported. Department officials in Maryland, for example, examined the NAEP math sub-scores and determined that many Maryland fourth graders are no longer being taught some of those things before they take the test.

We are left to wonder: Should NAEP frameworks and assessments be updated to reflect what’s in...

Last week, in the wake of President Obama’s pledge to reduce the amount of time students spend taking tests, my colleagues Robert Pondiscio and Michael Petrilli weighed in with dueling stances on the current state of testing and accountability in America’s schools. Both made valid points, but neither got it exactly right, so let me add a few points to the conversation.

Like Robert, I don’t see how we can improve our schools if we don’t know how they’re doing, which means we need the data we get from standardized tests. But I also believe that—because we’re obligated to intervene when kids aren’t getting the education they deserve—some tests must inevitably be “high-stakes.” The only real alternative to this is an unregulated market, which experience suggests is a bad idea.

Must this logic condemn our children to eternal test-preparation purgatory? I hope not, but I confess to some degree of doubt. The challenge is creating an accountability system that doesn’t inadvertently encourage gaming or bad teaching. Yet some recent policy shifts seem to have moved us further away from that kind of system.

As Mike noted, the problem of over-testing has been exacerbated in recent years by the...

A new study by the NAEP Validity Studies Panel analyzes the alignment of the assessment’s 2015 math items (the actual test questions) for grades four and eight with the Common Core State Standards (CCSS) for math.

To do so, the panel enlisted as reviewers eighteen mathematicians, teachers, math educators, and supervisors who have familiarity with Common Core. This group classified all 150 items in the 2015 NAEP math pool for each grade as either matching a CCSS standard or not.

The reviewers determined that the Common Core and NAEP were reasonably aligned at both grade levels—not surprising, since CCSS writers had the NAEP frameworks at their disposal. Further, NAEP is by design broader than the CCSS and is supposed to maintain a degree of independence relative to the “current fashions in instruction and curriculum.”

Panelists found that 79 percent of NAEP items matched content that appears in the CCSS at or below grade four. The overall alignment of NAEP to CCSS standards at or below grade eight is even closer: 87 percent.

There is, however, variation in matches across content areas. In fourth grade, the least aligned content area was data analysis, statistics,...

OK, everyone, back away from the ledge. With the release of NAEP data this week, the predictable deluge of commentary is well underway—mainly of the gnashing-of-teeth, rending-of-garments variety. NAEP may be the nation’s report card, but it is also the nation’s Rorschach test: interpretation is in the eye of the beholder, and many see darkness and misery. “A Decade of Academic Progress Halts,” says the Los Angeles Times. “Student Scores in Reading and Math Drop,” says U.S. News & World Report.

One frequent criticism of NAEP punditry is “misNAEPery”—the sin of attributing score fluctuations to particular policies, for example. One particularly virulent form of this fallacy—the failure to account for demographic changes in states over time—has become slightly less tenable this week, courtesy of this illuminating analysis by Matthew Chingos of the Urban Institute.

Not every state is the same. States with higher concentrations of black and Hispanic children, low-income families, and English language learners (ELLs) have a harder time rising to the top because they have more students mired at the bottom. But when you adjust for these demographic realities, a different NAEP emerges. There’s Massachusetts, still sitting pretty atop the tables. But Texas and...

Over the weekend, President Barack Obama received high praise from parents and teachers for acknowledging that testing is taking too much time away from teaching, learning, and fostering creativity in schools, and for recommending that standardized tests take up no more than 2 percent of total school instructional time. Frankly, this is arrant nonsense.

From time to time, I’m asked to give a talk about education. If I look at how I spend my time over the course of a year, giving presentations and speeches is a very small part of my job—less than 2 percent. However, if my effectiveness were to be judged on the audience response to the handful of talks I give each year, I’d spend a lot more time writing and practicing speeches. I’d fret endlessly over my PowerPoint slides and leave-behinds. I’d sprinkle in more jokes to be entertaining; I’d probably say whatever I thought would get audiences to like me more, rather than challenging my listeners. I’d definitely spend a lot more on suits and dry cleaning than I do now.

But most critically, I'd spend far less time on all the other things I do—writing, reading,...

Unfortunately, the rumors, predictions, and surmises were correct: Scores on the National Assessment of Educational Progress (NAEP) are mostly down or flat. The worst news came in eighth-grade math, where twenty-two states saw declines. One of the few bright spots is fourth-grade reading, where ten states (as well as Washington, D.C., Boston, Chicago, and Cleveland) posted gains.

Why this happened will be combed over and argued about. So far, it feels like anyone’s guess (more on that below). But there’s no denying that it’s bad news. It had come to seem that NAEP scores would always go up, at least over the long term, just as it had come to seem that murder rates would always go down. Now the real world has intervened to remind us that social progress is not inevitable. Let’s not sugarcoat it: This is deeply disheartening for our country, our K–12 system, and especially our kids.

As our friends in the research community like to remind us, it’s impossible to draw causal connections from changes in NAEP data; doing so is “misNAEPery.” Yet we can’t help but search for explanations. And we can certainly float hypotheses about the trends—educated guesses that can then be tested using...

President Obama and Secretary of Education Arne Duncan deserve credit for acknowledging this weekend that there’s too much testing in our schools today and that “the administration bears some of the responsibility.”

Indeed it does. That’s because its decision to condition ESEA flexibility on state adoption of teacher evaluation systems has not only raised the stakes of reading and math tests (making them less popular and potentially more damaging to the educational enterprise). It’s also led to a proliferation of tests in “non-tested subjects”—everything from P.E. to social studies and beyond—for the sole purpose of collecting data to judge teachers’ effectiveness.

Yet, as Matt Barnum argues persuasively at the Seventy Four, the feds aren’t willing to actually fix this problem:

The new report did not capture a precise measure on what proportion of tests were required by teacher evaluation, but it does point out that many states have put in place new assessments “to satisfy state regulations and laws for teacher and principal evaluation driven by and approved by U.S. Department of Education policies.”

But an initial reading of the department’s guidance suggests it is sticking to these policies: “The Department will work with states...

The French mathematician and philosopher Blaise Pascal famously posited that whether or not we believe in God, it behooves us to behave as if he exists. What have you got to lose? If you’re right, you wind up in heaven and spare yourself eternal punishment in hell. And if not, well, what did it cost you apart from a few earthly pleasures here and there? Pascal’s Wager basically suggests that your upside is infinite, while your downside is relatively small. So do the right thing.

We need a Pascal’s Wager of curriculum. Schools are going to teach something, so it behooves us to ensure that the textbooks, workbooks, and software we put in front of students are coherent and of high quality. As this report from the Center for American Progress shows, crappy curriculum costs every bit as much as the good stuff. The authors found “little relationship” between the cost and quality of instructional products. And switching to a more rigorous math curriculum, for example, can deliver far greater returns on investment than other reforms. “The average cost-effectiveness ratio of switching curriculum was almost forty times that of class-size reduction in a well known randomized experiment,” the report notes.

Every...

In this study, authors Jonathan Smith (of the College Board) and Kevin Stange (University of Michigan) use PSAT scores from 2004 and 2005 and enrollment and completion data from the National Student Clearinghouse to estimate the contribution of “peer effects” to community college outcomes and to the documented gap between the bachelor’s degree completion rates of students who enroll at two-year versus four-year institutions.

Interestingly, they find considerable overlap between average PSAT scores at two- and four-year colleges (though the study doesn’t include older students or those attending for-profit institutions), suggesting that many students choose the former for financial reasons rather than academic ones. This is unfortunate, because they also find that students are thirty percentage points less likely to earn a bachelor’s degree if they enroll at a two-year college—even after their academic abilities and those of their peers are taken into account. This means that our current policy of making two-year colleges cheaper than their four-year counterparts may inadvertently lower some students’ odds of earning a bachelor’s degree.

According to the authors, roughly 40 percent of the degree attainment gap can be explained by average peer quality (which is lower at two-year schools); the rest is attributable to...

Editor’s note: This post originally appeared in a slightly different form on the Seventy Four; that one also lambasted Arkansas for backpedaling on its cut scores. Since then, Arkansas acknowledged that it had erred in how it described the state’s performance levels and clarified that it would use the rigorous standards suggested by PARCC.

Way back in 2007, we at the Thomas B. Fordham Institute published a landmark study with experts from the Northwest Evaluation Association: The Proficiency Illusion. It found that state definitions for reading and math “proficiency” were all over the map—and shockingly subpar almost everywhere. In Wisconsin, for instance, eighth graders could be reading at the fourteenth percentile nationally and still be considered proficient.

This was a big problem—not just the inconsistency, though that surely made it harder to compare schools across state lines. Mostly, we worried about the signals that low proficiency standards sent to parents: the false positives indicating that their kids were on track for success when they actually weren’t. How were parents in Madison or Duluth supposed to know that their “proficient” son was really far below grade level, not to mention way off track for success in...
