Despite the tireless marriage-wrecking efforts of Common Core opponents and their acolytes and funders, few states that initially pledged their troth to these rigorous new standards for English and math are in divorce mode. What’s far more fluid, unpredictable, and—frankly—worrying are the two elements of standards-based reform that make a vastly greater difference in the real world than standards themselves: implementation and assessment.
Don’t get me wrong. Standards are important, because they set forth the desired outcomes of schooling, and it’s obviously better to aim at clear, ambitious, and academically worthy goals than at targets that are vague, banal, easy, or trendy. Standards are also supposed to provide the framework that shapes and organizes the rest of the education enterprise: curricula, teacher preparation, promotion and graduation expectations, testing and accountability, and just about everything else. (Kindergarten standards, for example, should affect what happens in preschool, just as twelfth-grade standards should synch with what gets taught to college freshmen.)
But standards are not self-actualizing. Indeed, they can be purely symbolic, even illusory. Unless thoroughly implemented and properly assessed, they have scant traction in schools, classrooms, and the lives—and futures—of students.
California is the woeful poster child here, as I was reminded the other day (in connection not with the Common Core but with science). For years, it’s had terrific standards in the core subjects, but it’s also had pathetic achievement on external measures such as NAEP. That’s mainly because—in my interpretation, anyway—the Golden State never really put those solid standards into operation in its schools, nor did it make them part of a full-on accountability system for schools, kids, or educators.
Much the same tale can be told of Indiana. Meanwhile, however, Massachusetts and Florida treated their fine standards—in Florida’s case, one might better say “improving” standards—as the girders on which they constructed coherent and comprehensive implementation-assessment-accountability systems. And they have the results to prove it.
For the Common Core standards really to take root and blossom, every state that claims to follow them faces a mammoth implementation challenge. Yes, it will cost some money (though that can be mitigated via astute use of technology), but the harder challenge is the extent to which new standards disrupt long-established practices throughout the system and demand that millions of people do things differently than they’re accustomed to. (At the classroom level, these are called “instructional shifts.”)
As dogged implementation watchers (and Common Core hopefuls), we at Fordham have been impressed by some of what we’ve seen, discouraged by other developments, and impatient with much of the rest. (“When will they get serious?”) As expected, everyone asserts that their curricula, textbooks, and in-service programs are now “aligned” with the Common Core, but who knows whether that’s really true? The country doesn’t yet have mechanisms for determining which are and which aren’t, although evaluation criteria are beginning to emerge, as are whistleblowers. (See, for example, Mark Bauerlein’s pointed critique of some high school curriculum guides in New York City.)
The assessment folks have perhaps the tallest mountains to climb. It’s a truism that what gets tested is what gets taught—but truisms get that label because they’re true. If the assessments that states use in connection with the Common Core don’t match the standards’ ambitious learning expectations, then few young people will end up learning what they will need (in these two subjects) to be truly college and career ready.
Let’s not kid ourselves about how challenging that really is. Keep in mind how few test-takers are currently deemed “college ready” on multiple gauges, from the testimony of professors (and employers) to the annual reports by ACT and other groups. Even more persuasive, check out the thickening body of evidence that NAEP’s “proficient” level—long derided by critics as too tough and typically attained by just one-quarter to one-third of U.S. high school seniors—really does correlate with academic preparedness for college-level work in math and English.
The Common Core–assessment mountains have multiple peaks. Here are the steepest:
Substantive alignment: Does the content of the assessments—the skills and knowledge that they actually probe—faithfully represent what’s in the standards themselves?
Cognitive alignment: The Common Core expects students to think, analyze, and engage in intellectual tasks that have long been AWOL from many U.S. classrooms. These tasks are hard to assess and don’t particularly lend themselves to multiple-choice questions. (Examples: “Analyzing risk in situations such as extreme sports, pandemics, and terrorism.” “Support claim(s) with clear reasons and relevant evidence, using credible sources and demonstrating an understanding of the topic or text.” “Analyze seminal U.S. documents of historical and literary significance (e.g., the Gettysburg Address, King’s ‘Letter from Birmingham Jail’), including how they address related themes and concepts.”) Test items that accurately appraise such learning are complex, time consuming, hard to score, and—therefore—costly. Test makers can’t just recycle their old item banks.
Rigor: It’s not enough to include a topic or task on the test. The evaluation of student responses must also be based on scoring criteria or rubrics that look for what is actually expected and for bona fide proficiency in relation thereto.
Cut scores: How does one judge pupils’ overall performance in relation to “how good is good enough” for, say, promotion from third to fourth grade or high school graduation, bearing in mind the intellectual hurdles of “career and college readiness” and the political hurdle of excessive failure rates, especially during the transition to new standards and higher expectations? (Not to mention the challenge of getting states to agree with each other on those cut scores.)
Adaptability: For assessments to work well for students across the learning spectrum, they must contain scads of items of varying difficulty. But for that to occur, the tests must “adapt” to individual pupils’ approximate levels of performance, not require every kid to answer every question. This generally necessitates computer-based assessments, yet not all schools have the hardware or bandwidth.
Resolving test schizophrenia: Many people look to assessments as “summative” instruments that determine after the fact whether students in a given grade or school have learned what they should. Of far greater value to educators, however, are “formative” assessments that provide feedback on which skills, concepts, and key knowledge bits have or haven’t yet been acquired by which kids, so that something can be done about it while there’s time. Few tests do both things well, but the new assessment systems emerging to accompany the Common Core are tasked with accomplishing precisely this.
Affordability: State testing budgets are lean, probably too lean, and the fastest way to talk officials out of high-quality Common Core assessments is to declare them unaffordable.
Getting all of this right is proving hugely difficult for the twin consortia (PARCC, Smarter Balanced) now developing Common Core assessment systems from scratch—but at least they appear fully committed. More worrying are longtime test vendors, not just commercial firms like Pearson and McGraw-Hill but also veteran nonprofits like ACT, Measured Progress, and the Northwest Evaluation Association. Some seem ready to slap a new cover on their old tests and declare them “aligned” with the Common Core, and some of their salesmen are whispering into the ears of state superintendents, promising assessments that aren’t just aligned but also cheap, speedy, and convenient—even ready next spring. But how can all of that be true? (See above.) How many state officials are test-savvy enough to evaluate such claims? And how many may quietly be heading to a back-door exit from the real challenges of the Common Core, settling for assessments that won’t ever prod the education system itself to raise its sights or teachers to make the demanding pedagogical changes that are needed if their pupils are truly to be prepared for college and career?
In the middle of 2013, we’d be wiser to focus on these challenges than to keep fussing over the standards themselves. And—let’s face it—states that aren’t prepared to scale such mountains might be better off not pretending that they’ve embraced the Common Core.
This article originally appeared in the July 18, 2013 edition of the Education Gadfly Weekly.