Standards-Based Reforms

Nationally and in Ohio, we press for the full suite of standards-based reforms across the academic curriculum and throughout the K–12 system, including (but not limited to) careful implementation of the Common Core State Standards (CCSS) for English language arts (ELA) and mathematics, as well as rigorous, aligned state assessments and forceful accountability mechanisms at every level.

Resources:

Our many standards-based blog posts are listed below.


Editor's note: This letter appeared in the 2015 Thomas B. Fordham Institute Annual Report. To learn more, download the report.

Dear Fordham Friends,

Think tanks and advocacy groups engage in many activities whose impact is notoriously difficult to gauge: things like “thought leadership,” “fighting the war of ideas,” and “coalition building.” We can look at—and tabulate—various short-term indicators of success, but more often than not, we’re left hoping that these equate to positive outcomes in the real world. That’s why I’m excited this year to be able to point to two hugely important, concrete legislative accomplishments and declare confidently, “We had something to do with that.”

Namely: Ohio’s House Bill 2, which brought historic reforms to the Buckeye State’s beleaguered charter school system, and the Every Student Succeeds Act, the long-overdue update to No Child Left Behind.

In neither case can we claim anything close to full credit. On the Washington front especially, our contributions came mostly pre-2015, in the form of writing, speaking, and networking about the flaws of NCLB and outlining a smaller, smarter federal role. We were far from alone; figures...

A new Harvard University study examines the link between Common Core implementation efforts and changes in student achievement.

Analysts surveyed randomly selected teachers of grades 4–8 (about 1,600 in Delaware, Maryland, Massachusetts, New Mexico, and Nevada), asking them a number of questions about the professional development they’ve received, the materials they’ve used, the teaching strategies they’ve employed, and more. They used those responses to create twelve composite indices capturing various facets of Common Core implementation (such as “principal is leading CCSS implementation”), then linked each index to students’ scores on the Common Core-aligned 2014–15 PARCC and SBAC assessments, controlling for students’ baseline scores and characteristics (along with those of their classroom peers) and for teachers’ value-added scores in the prior school year.
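Stripped to its essentials, that design is a set of regressions of student test scores on each implementation index, with controls. A stylized version of the kind of model the description above implies (the notation is our shorthand, not the study’s exact specification) looks like this:

```latex
% Stylized student-level model (our shorthand, not the study's exact specification):
%   Y_{i,2015}        -- student i's 2014-15 PARCC/SBAC score
%   Index_{j(i)}      -- one of the twelve implementation composites for i's teacher j(i)
%   Y_{i,2014}        -- student i's baseline (prior-year) score
%   X_i, \bar{X}_{-i} -- student characteristics and classroom-peer characteristics
%   VA_{j(i)}         -- teacher j(i)'s prior-year value-added score
\[
  Y_{i,2015} = \beta\,\mathrm{Index}_{j(i)} + \gamma\,Y_{i,2014}
             + \delta' X_{i} + \theta' \bar{X}_{-i} + \lambda\,\mathrm{VA}_{j(i)} + \varepsilon_{i}
\]
```

The coefficient of interest is β, estimated separately for each of the twelve indices and for math and ELA.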

The bottom line is that this correlational study finds more statistically significant relationships for math than for English. Specifically, three indices were related to student achievement in math: the frequency and specificity of feedback from classroom observations, the number of days of professional development, and the inclusion of student performance on CCSS-aligned assessments in teacher evaluations....

Editor’s note: This is the fourth in a series of blog posts taking a closer look at the findings and implications of Evaluating the Content and Quality of Next Generation Assessments, Fordham’s new first-of-its-kind report. The first three posts can be read here, here, and here.

It’s historically been one of the most common complaints about state tests: They are of low quality and rely almost entirely on multiple choice items. 

It’s true that item type has sometimes been a proxy, like it or not, for test quality. Yet there is nothing magical about any particular item type if the test item itself is poorly designed. Multiple choice items can be entirely appropriate to assess certain constructs and reflect the requisite rigor. Or they can be junk. The same can be said of constructed response items, where students are required to provide an answer rather than choose it from a list of possibilities. Designed well, constructed response items can suitably evaluate what students know and are able to do. Designed poorly, they are a waste of time.

Many assessment experts will tell you that one of the best ways to assess the skills, knowledge, and competencies that we expect students to demonstrate is through...

Fordham’s latest blockbuster report digs deep into three new, multi-state tests (ACT Aspire, PARCC, and Smarter Balanced) and one best-in-class state assessment, Massachusetts’ state exam (MCAS), to answer policymakers’ most pressing questions about the next-generation tests: Do these tests reflect strong college- and career-ready content? Are they of rigorous quality? Broadly, what are their strengths and areas for improvement?

Over the last two years, principal investigators Nancy Doorey and Morgan Polikoff led a team of nearly forty reviewers to find answers to those questions. Here’s a quick sampling of the findings:

  • Overall, PARCC and Smarter Balanced assessments had the strongest matches to college- and career-ready standards, as defined by the Council of Chief State School Officers.
  • ACT Aspire and MCAS both did well regarding the quality of their items and the depth of knowledge they assessed.
  • Still, panelists found that ACT Aspire and MCAS did not adequately assess—or may not assess at all—some of the priority content reflected in the Common Core standards in both ELA/Literacy and mathematics.

As might be expected, the report has garnered national interest. Check out coverage from The 74 Million, U.S. News, and Education Week, just for a start.

Or better...

Editor’s note: This is the second in a series of blog posts that will take a closer look at the findings and implications of Evaluating the Content and Quality of Next Generation Assessments, Fordham’s new first-of-its-kind report. The first post can be read here.

Few policy issues over the past several years have been as contentious as the rollout of new assessments aligned to the Common Core State Standards (CCSS). What began with more than forty states working together to develop the next generation of assessments has devolved into a political mess. Fewer than thirty states remain in one of the two federally funded consortia (PARCC and Smarter Balanced), and that number continues to dwindle. Nevertheless, millions of children have begun taking new tests—either those developed by the consortia, ACT (Aspire), or state-specific assessments constructed to measure student performance against the CCSS or other college- and career-ready standards.

A key hope for these new tests was that they would overcome the weaknesses of the previous generation of state assessments. Among those weaknesses were poor alignment with the standards they were designed to assess and low overall levels of cognitive demand (i.e., most items required simple recall or...

The Thomas B. Fordham Institute has been evaluating the quality of state academic standards for nearly twenty years. Our very first study, published in the summer of 1997, was an appraisal of state English standards by Sandra Stotsky. Over the last two decades, we’ve regularly reviewed and reported on the quality of state K–12 standards for mathematics, science, U.S. history, world history, English language arts, and geography, as well as the Common Core, International Baccalaureate, Advanced Placement, and other influential standards and frameworks (such as those used by PISA, TIMSS, and NAEP). In fact, evaluating academic standards is probably what we’re best known for.

For most of the last two decades, we’ve also dreamed of evaluating the tests linked to those standards—mindful, of course, that in most places, the tests are the real standards. They’re what schools (and sometimes teachers and students) are held accountable for, and they tend to drive curricula and instruction. (That’s probably the reason why we and other analysts have never been able to demonstrate a close relationship between the quality of standards per se and changes in student achievement.) We wanted to know how well matched the assessments were to the standards, whether they were of high...

Last week, we cautioned that Ohio’s opt-out bill (HB 420) offers a perverse incentive for districts and schools to game the accountability system. The bill has since been amended, but it is no closer to addressing the larger issues Ohio faces as it determines how best to maintain accountability in response to the opt-out movement. 

Current law dings schools and districts when a student skips the exam: the school is assigned a zero for that student when its overall score is calculated (opting out directly impacts two of ten report card measures). The original version of HB 420 removed those penalties entirely. Instead of earning a zero, absent students would simply not count against the school. Realizing the potential unintended consequences under such a scenario, including the possible counseling out of low-achieving students and larger numbers of opt-outs overall, the drafters of the substitute bill incorporated two changes.
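To see the perverse incentive at work, here is a deliberately simplified sketch of the scoring arithmetic. Ohio’s actual Performance Index weights several performance levels; the plain average and the numbers below are only an illustration.

```python
# Simplified illustration only: Ohio's real Performance Index weights several
# performance levels; here we assume a plain average of hypothetical scores.

def performance_index(scores, opt_outs, count_opt_outs_as_zero=True):
    """Average score for a school, with opt-outs either counted as zeros
    (current law) or dropped from the calculation (original HB 420)."""
    if count_opt_outs_as_zero:
        all_scores = scores + [0] * opt_outs   # each opt-out drags the index down
        return sum(all_scores) / len(all_scores)
    return sum(scores) / len(scores)           # opt-outs simply disappear

tested = [80, 90, 70, 60]   # hypothetical scores of students who took the test
skipped = 2                 # hypothetical number of opt-outs

print(performance_index(tested, skipped, True))   # 50.0 -- school bears a penalty
print(performance_index(tested, skipped, False))  # 75.0 -- no penalty for missing students
```

Under the original bill’s rule, a school’s index could only improve when low scorers stayed home, hence the two changes described next.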

First, the amended version requires the Ohio Department of Education to assign two separate Performance Index (PI) grades for schools and districts for the 2014–15 school year—one reflecting the scores of all students required to take exams (including those who opt out) and another excluding students who didn’t participate. Second, in...

Following in the footsteps of a previous study, CAP researchers have examined the effects of a state’s commitment to standards-based reform (as measured by clear standards, tests aligned to those standards, and whether a state sanctions low-performing schools) on low-income students’ test scores (reading and math achievement on the NAEP from 2003 to 2013). The results indicate that jurisdictions ranked highest in commitment to standards-based reform (e.g., Massachusetts, Florida, Tennessee, the District of Columbia) show stronger gains on NAEP scores for their low-income students. The same relationship seems to be present in states ranked lowest in commitment to standards-based reform: low-income students in Iowa, Kansas, Idaho, Montana, North Dakota, and South Dakota did worse.

As you can imagine, a lot of caveats go with the measure of commitment to standards-based reform. Checking the box for “implemented high standards” alone is likely to pose more questions than it answers. Beyond that, implementation, teaching, and assessment of standards are all difficult, if not impossible, to quantify. The authors acknowledge that some of their evidence is “anecdotal and impressionistic,” but they are talking about the “commitment to standards” piece. They are four-square behind NAEP scores as a touchstone of academic success or lack...

The eyes of the nation are fixed on a tournament of champions this week. Snacks have been prepared, eager spectators huddle around their screen of preference, and social media is primed to blow up. Veteran commentators have gathered at the scene to observe and pontificate. For the competitors, the event represents the culmination of months of dedicated effort, and sometimes entire careers; everything they’ve worked for, both at the college and professional level, has led up to this moment. The national scrutiny can be as daunting for grizzled journeymen as it is for fresh-faced greenhorns. You know what I’m talking about:

The Fordham Institute’s ESSA Accountability Design Competition.

Okay, you probably know what I’m talking about. If you inhabit the world of education policy, you took notice of Fordham’s January call for accountability system frameworks that would comply with the newly passed Every Student Succeeds Act—and take advantage of the new authority the law grants to states. With the federal influence on local classrooms scaled back so suddenly, it will be up to education agencies in Wisconsin and Mississippi and Alaska to adopt their own methods of setting the agenda for schools and rating their performance in adhering to it.

The purpose of...

Last May, Achieve released a report showing that most states have created a false impression of student success in math and reading proficiency. Known as the “honesty gap” (or, as Fordham has long described it, The Proficiency Illusion), the discrepancy between reported and actual proficiency is found when state test results are compared with NAEP results.[1] For example, Achieve’s May report found that over half of states had discrepancies of more than thirty percentage points relative to NAEP’s gold standard. Ohio was one of the worst offenders: Our old state test scores (the OAA and OGTs) differed by thirty percentage points or more in each of NAEP’s main test subjects, with a whopping forty-nine-point difference in fourth-grade reading.
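The gap itself is simple arithmetic: the share of students a state reports as proficient on its own test minus the share NAEP reports. A minimal sketch, with hypothetical rates rather than Achieve’s actual figures:

```python
# Hypothetical proficiency rates (percent), not Achieve's actual figures.
state_reported = {"State A": 80, "State B": 55}   # proficient on the state's own test
naep_reported  = {"State A": 31, "State B": 45}   # proficient on NAEP

for state in state_reported:
    gap = state_reported[state] - naep_reported[state]   # the "honesty gap"
    print(f"{state}: honesty gap of {gap} percentage points")
```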

Less than one year later, new state test scores and biennial NAEP results have created an opportunity to revisit the honesty gap. In its latest report, Achieve finds that the gap has significantly narrowed in nearly half of states. Ohio is one of twenty-six states that have earned the commendation “Significantly Improved” for closing the honesty gap in either fourth-grade reading or eighth-grade math by at least ten percentage points since 2013....
