Standards-Based Reforms

Nationally and in Ohio, we press for the full suite of standards-based reforms across the academic curriculum and throughout the K–12 system, including (but not limited to) careful implementation of the Common Core State Standards (CCSS) for English language arts (ELA) and mathematics, rigorous and well-aligned state assessments, and forceful accountability mechanisms at every level.

Resources:

Our many standards-based blog posts are listed below.


Following hard on the heels of Fordham’s own report, Evaluating the Content and Quality of Next Generation Assessments, the Center for American Progress looks at the exams offered by the PARCC and Smarter Balanced (SBAC) testing consortia and largely likes what it sees for students with special challenges.

It’s a larger population than many people realize. English language learners (4.4 million) and students with disabilities (6.4 million) together constitute more than 20 percent of American school enrollment. “Given these numbers, it is critical that students with disabilities and English language learners have the same opportunities as their peers to demonstrate their knowledge and skills and receive appropriate supports to meet their needs,” the report notes.

Testing “accommodations” have typically meant extra time, questions read aloud or translated into students’ native languages, and so on. While PARCC and SBAC “improve on previous state tests in terms of quality, rigor, and alignment” (Fordham’s report reached the same overarching conclusion), they also represent a significant advance in “universal design”—a principle that designs for the user with the greatest physical and cognitive need from the outset, making accessibility a “feature,” not a “fix.” Consider the authors’ example of sidewalk “curb cuts.” Designed to make sidewalks wheelchair accessible, they ended up...

Management sage Peter Drucker once said, “If you want something new, you have to stop doing something old.” In recent years, policy makers have turned the page on Ohio’s old, outdated standards and accountability framework. The task now is to replace it with something that, if implemented correctly, will better prepare Buckeye students for the expectations of college and the rigors of a knowledge- and skills-driven workforce.

While the state’s former policies did establish a basic accountability framework aligned to standards, a reset was badly needed. Perhaps the most egregious problem was the manner in which the state publicly reported achievement. State officials routinely claimed that more than 80 percent of Ohio students were academically “proficient,” leaving most parents and taxpayers with a feel-good impression of the public school system.

The inconvenient truth, however, was that hundreds of thousands of pupils were struggling to master rigorous academic content. Alarmingly, the Ohio Board of Regents regularly reports that 30–40 percent of college freshmen need remedial coursework in English or math. Results from the ACT reveal that fewer than half of all graduates meet college-ready benchmarks in all of the assessment’s content areas. Finally, outcomes from the “nation’s report card”—the National Assessment...

Editor’s note: This is the last in a series of blog posts that takes a closer look at the findings and implications of Evaluating the Content and Quality of Next Generation Assessments, Fordham’s new first-of-its-kind report. The prior five posts can be read here, here, here, here, and here.

It’s hard to believe that it’s been twenty-two months (!) since I first talked with folks at Fordham about doing a study of several new “Common Core-aligned” assessments. I believed then, and I still believe now, that this is incredibly important work. State policy makers need good evidence about the content and quality of these new tests, and to date, that evidence has been lacking. While our study is not perfect, it provides the most comprehensive look yet available. It is my fervent hope that policy makers will heed these results. My ideal would be for states simply to adopt multi-state tests that save them effort (and probably money) and promote higher-quality standards implementation. The alternative, which many states have chosen, is to go it alone. Regardless of the approach, states should at least use the results of this study and other recent and forthcoming investigations of test quality...

Editor’s note: This is the fifth in a series of blog posts that takes a closer look at the findings and implications of Evaluating the Content and Quality of Next Generation Assessments, Fordham’s new first-of-its-kind report. The prior four posts can be read here, here, here, and here.

When one of us was enrolled in a teacher education program umpteen years ago, one of the first things we were taught was how to use Bloom’s taxonomy. Originally developed in 1956, it is a well-known framework that delineates six increasingly complex levels of understanding: knowledge, comprehension, application, analysis, synthesis, and evaluation. More recently—and to the consternation of some—Bloom’s taxonomy has been updated. But the idea that suitably addressing various queries and tasks requires more or less brainpower is an enduring truth (well, sort of).

So it is no surprise that educators care about the “depth of knowledge” (DOK) (also called “cognitive demand”) required of students. Commonly defined as the “type of thinking required by students to solve a task,” DOK has become a proxy for rigor even though it concerns content complexity rather than difficulty. A clarifying example: A student may not have seen a...

On the campaign trail, Senator Ted Cruz reliably wins applause with a call to "repeal every word of Common Core." It's a promise he will be hard-pressed to keep should he find himself in the White House next January. Aside from the bizarre impracticality of that comment as phrased (which words shall we repeal first? "Phonics"? "Multiplication"? Or "Gettysburg Address"?), the endlessly debated, frequently pilloried standards are now a deeply entrenched feature of America's K–12 education landscape—love 'em or hate 'em.

Common Core has achieved "phenomenal success in statehouses across the country," notes Education Next. In a study published last month, the periodical found that "thirty-six states strengthened their proficiency standards between 2013 and 2015, while just five states weakened them." That's almost entirely a function of Common Core. 

Education Next began grading individual states’ standards in 1995, comparing the extent to which each state test’s definition of proficiency aligned with that of the gold-standard National Assessment of Educational Progress (often referred to as “the nation’s report card”). That year, six states received an A grade. As recently as four years ago, only Massachusetts earned that distinction. Today, nearly half of all states, including the District of Columbia, have earned A ratings....

Leading up to this year’s report card release, some school districts expressed concern about the negative impact of students opting out of state assessments on their report card grades. In response, lawmakers proposed a well-intentioned but shortsighted bill attempting to mitigate the impact of opt-outs—first by erasing non-test-takers from their schools’ performance grades and then (after being amended) by reporting two separate Performance Index grades. The Ohio Department of Education devised a temporary reporting solution: Performance Index scores would be reported as normal (including the impact of non-test-takers, as per current law), but a “modified achievement measure” would be made available to illustrate how districts would have scored if non-test-takers didn’t count.

A quick look at the data shows that the impact of opt-outs last year (2014–15) was minimal for the vast majority of Ohio school districts. As depicted in Table 1, fifty-two districts (8.5 percent) experienced a letter grade change because of their non-participation rates (shaded in green). This was most likely driven by the opt-out movement. It’s hard to say for sure, though, because Ohio only captures test participation rates and not the reasons for non-participation—which might include excused or unexcused absences, truancy, or opting...

Editor's note: This letter appeared in the 2015 Thomas B. Fordham Institute Annual Report. To learn more, download the report.

Dear Fordham Friends,

Think tanks and advocacy groups engage in many activities whose impact is notoriously difficult to gauge: things like “thought leadership,” “fighting the war of ideas,” and “coalition building.” We can look at—and tabulate—various short-term indicators of success, but more often than not, we’re left hoping that these equate to positive outcomes in the real world. That’s why I’m excited this year to be able to point to two hugely important, concrete legislative accomplishments and declare confidently, “We had something to do with that.”


Namely: Ohio’s House Bill 2, which brought historic reforms to the Buckeye State’s beleaguered charter school system, and the Every Student Succeeds Act, the long-overdue update to No Child Left Behind.

In neither case can we claim anything close to full credit. On the Washington front especially, our contributions came mostly pre-2015, in the form of writing, speaking, and networking about the flaws of NCLB and outlining a smaller, smarter federal role. We were far from alone; figures...

A new Harvard University study examines the link between Common Core implementation efforts and changes in student achievement.

Analysts surveyed randomly selected teachers of grades 4–8 (about 1,600 in Delaware, Maryland, Massachusetts, New Mexico, and Nevada), asking them about the professional development they’ve received, the materials they’ve used, the teaching strategies they’ve employed, and more. They then used those responses to create twelve composite indices of various facets of Common Core implementation (such as “principal is leading CCSS implementation”) and analyzed the link between each index and students’ performance on the Common Core-aligned PARCC and SBAC assessments. In other words, they sought to connect teacher survey responses to their students’ test scores on the 2014–15 PARCC and SBAC assessments, while controlling for students’ baseline scores and characteristics (along with those of their classroom peers) and teachers’ value-added scores in the prior school year.

The bottom line is that this correlational study finds more statistically significant relationships for math than for English. Specifically, three indices were related to student achievement in math: the frequency and specificity of feedback from classroom observations, the number of days of professional development, and the inclusion of student performance on CCSS-aligned assessments in teacher evaluations....
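To make the structure of that analysis concrete, here is a minimal, purely illustrative sketch in Python of how a correlational model of this kind can be specified. The variable names (score_2015, impl_index, student_frpl, teacher_prior_va) are hypothetical stand-ins, the data are simulated, and the sketch omits details of the study’s actual models (such as classroom-peer controls); it is not the Harvard team’s code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1000  # hypothetical student sample

# Simulated data standing in for the kind of student-level file described above.
df = pd.DataFrame({
    "score_2015": rng.normal(0, 1, n),        # 2014-15 PARCC/SBAC score (standardized)
    "score_2014": rng.normal(0, 1, n),        # baseline score (standardized)
    "impl_index": rng.normal(0, 1, n),        # one composite implementation index from the teacher survey
    "student_frpl": rng.integers(0, 2, n),    # a student characteristic (e.g., subsidized-lunch status)
    "teacher_prior_va": rng.normal(0, 1, n),  # teacher's prior-year value-added estimate
})

# OLS regression of current scores on the implementation index, with controls.
# The coefficient on impl_index is the correlational association of interest.
model = smf.ols(
    "score_2015 ~ impl_index + score_2014 + student_frpl + teacher_prior_va",
    data=df,
).fit(cov_type="HC1")  # heteroskedasticity-robust standard errors

print(model.params["impl_index"], model.pvalues["impl_index"])
```

A statistically significant coefficient on impl_index in a model like this would be read as an association, not a causal effect, which is why the study above is described as correlational.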

Editor’s note: This is the fourth in a series of blog posts taking a closer look at the findings and implications of Evaluating the Content and Quality of Next Generation Assessments, Fordham’s new first-of-its-kind report. The first three posts can be read here, here, and here.

It’s historically been one of the most common complaints about state tests: They are of low quality and rely almost entirely on multiple choice items. 

It’s true that item format has sometimes served, like it or not, as a proxy for test quality. Yet there is nothing magical about any particular format if the item itself is poorly designed. Multiple choice items can be entirely appropriate to assess certain constructs and reflect the requisite rigor. Or they can be junk. The same can be said of constructed response items, where students must produce an answer rather than choose it from a list of possibilities. Designed well, constructed response items can suitably evaluate what students know and are able to do. Designed poorly, they are a waste of time.

Many assessment experts will tell you that one of the best ways to assess the skills, knowledge, and competencies that we expect students to demonstrate is through...

Fordham’s latest blockbuster report digs deep into three new, multi-state tests (ACT Aspire, PARCC, and Smarter Balanced) and one best-in-class state assessment, Massachusetts’ state exam (MCAS), to answer policymakers’ most pressing questions about the next-generation tests: Do these tests reflect strong college- and career-ready content? Are they rigorous and of high quality? Broadly, what are their strengths and areas for improvement?

Over the last two years, principal investigators Nancy Doorey and Morgan Polikoff led a team of nearly forty reviewers to find answers to those questions. Here’s a quick sampling of the findings:

  • Overall, PARCC and Smarter Balanced assessments had the strongest matches to college- and career-ready standards, as defined by the Council of Chief State School Officers.
  • ACT Aspire and MCAS both did well regarding the quality of their items and the depth of knowledge they assessed.
  • Still, panelists found that ACT Aspire and MCAS did not adequately assess—or may not assess at all—some of the priority content reflected in the Common Core standards in both ELA/Literacy and mathematics.

As might be expected, the report has garnered national interest. Check out coverage from The 74 Million, U.S. News, and Education Week, just for a start.

Or better...
