Standards-Based Reforms

Nationally and in Ohio, we press for the full suite of standards-based reforms across the academic curriculum and throughout the K–12 system. That includes (but is not limited to) careful implementation of the Common Core State Standards (CCSS) for English language arts (ELA) and mathematics, as well as rigorous, aligned state assessments and forceful accountability mechanisms at every level.

Resources:

Our many standards-based blog posts are listed below.


Editor’s note: This is the fourth in a series of blog posts taking a closer look at the findings and implications of Evaluating the Content and Quality of Next Generation Assessments, Fordham’s new first-of-its-kind report. The first three posts can be read here, here, and here.

It’s historically been one of the most common complaints about state tests: They are of low quality and rely almost entirely on multiple-choice items.

It’s true that item type has sometimes served, like it or not, as a proxy for test quality. Yet there is nothing magical about item type if the item itself is poorly designed. Multiple-choice items can be entirely appropriate for assessing certain constructs and can reflect the requisite rigor. Or they can be junk. The same can be said of constructed-response items, which require students to produce an answer rather than choose it from a list of possibilities. Designed well, constructed-response items can suitably evaluate what students know and are able to do. Designed poorly, they are a waste of time.

Many assessment experts will tell you that one of the best ways to assess the skills, knowledge, and competencies that we expect students to demonstrate is through...

Fordham’s latest blockbuster report digs deep into three new, multi-state tests (ACT Aspire, PARCC, and Smarter Balanced) and one best-in-class state assessment, Massachusetts’ state exam (MCAS), to answer policymakers’ most pressing questions about the next-generation tests: Do these tests reflect strong college- and career-ready content? Are they of rigorous quality? Broadly, what are their strengths and areas for improvement?

Over the last two years, principal investigators Nancy Doorey and Morgan Polikoff led a team of nearly forty reviewers to find answers to those questions. Here’s a quick sampling of the findings:

  • Overall, PARCC and Smarter Balanced assessments had the strongest matches to college- and career-ready standards, as defined by the Council of Chief State School Officers.
  • ACT Aspire and MCAS both did well regarding the quality of their items and the depth of knowledge they assessed.
  • Still, panelists found that ACT Aspire and MCAS did not adequately assess—or may not assess at all—some of the priority content reflected in the Common Core standards in both ELA/Literacy and mathematics.

As might be expected, the report has garnered national interest. Check out coverage from The 74 Million, U.S. News, and Education Week just for a start.

Or better...

Editor’s note: This is the second in a series of blog posts that will take a closer look at the findings and implications of Evaluating the Content and Quality of Next Generation Assessments, Fordham’s new first-of-its-kind report. The first post can be read here.

Few policy issues over the past several years have been as contentious as the rollout of new assessments aligned to the Common Core State Standards (CCSS). What began with more than forty states working together to develop the next generation of assessments has devolved into a political mess. Fewer than thirty states remain in one of the two federally funded consortia (PARCC and Smarter Balanced), and that number continues to dwindle. Nevertheless, millions of children have begun taking new tests—either those developed by the consortia, those developed by ACT (Aspire), or state-specific assessments constructed to measure student performance against the CCSS or other college- and career-ready standards.

A key hope for these new tests was that they would overcome the weaknesses of the previous generation of state assessments. Among those weaknesses were poor alignment with the standards they were designed to assess and low overall levels of cognitive demand (i.e., most items required simple recall or...

The Thomas B. Fordham Institute has been evaluating the quality of state academic standards for nearly twenty years. Our very first study, published in the summer of 1997, was an appraisal of state English standards by Sandra Stotsky. Over the last two decades, we’ve regularly reviewed and reported on the quality of state K–12 standards for mathematics, science, U.S. history, world history, English language arts, and geography, as well as the Common Core, International Baccalaureate, Advanced Placement, and other influential standards and frameworks (such as those used by PISA, TIMSS, and NAEP). In fact, evaluating academic standards is probably what we’re best known for.

For most of the last two decades, we’ve also dreamed of evaluating the tests linked to those standards—mindful, of course, that in most places, the tests are the real standards. They’re what schools (and sometimes teachers and students) are held accountable for, and they tend to drive curricula and instruction. (That’s probably the reason why we and other analysts have never been able to demonstrate a close relationship between the quality of standards per se and changes in student achievement.) We wanted to know how well matched the assessments were to the standards, whether they were of high...

Last week, we cautioned that Ohio’s opt-out bill (HB 420) offers a perverse incentive for districts and schools to game the accountability system. The bill has since been amended, but it is no closer to addressing the larger issues Ohio faces as it determines how best to maintain accountability in response to the opt-out movement. 

Current law dings schools and districts when a student skips the exam: that student is assigned a zero in the calculation of the school’s overall score (opting out directly affects two of ten report card measures). The original version of HB 420 removed those penalties entirely. Instead of earning a zero, absent students would simply not count against the school. Realizing the potential unintended consequences of such a scenario, including the possible counseling out of low-achieving students and larger numbers of opt-outs overall, the drafters of the substitute bill incorporated two changes.

First, the amended version requires the Ohio Department of Education to assign two separate Performance Index (PI) grades for schools and districts for the 2014–15 school year—one reflecting the scores of all students required to take exams (including those who opt out) and another excluding students who didn’t participate. Second, in...

Following in the footsteps of a previous study, CAP researchers have examined the effects of a state’s commitment to standards-based reform (as measured by clear standards, tests aligned to those standards, and whether a state sanctions low-performing schools) on low-income students’ test scores (reading and math achievement on the NAEP from 2003 to 2013). The results indicate that jurisdictions ranked highest in commitment to standards-based reform (e.g., Massachusetts, Florida, Tennessee, the District of Columbia) show stronger gains on NAEP scores for their low-income students. The same relationship seems to be present in states ranked lowest in commitment to standards-based reform: low-income students in Iowa, Kansas, Idaho, Montana, North Dakota, and South Dakota did worse.

As you can imagine, a lot of caveats go with the measure of commitment to standards-based reform. Checking the box for “implemented high standards” alone is likely to raise more questions than it answers. Beyond that, implementation, teaching, and assessment of standards are all difficult, if not impossible, to quantify. The authors acknowledge that some of their evidence is “anecdotal and impressionistic,” but that caveat applies to the “commitment to standards” piece; they are four-square behind NAEP scores as a touchstone of academic success or lack...

The eyes of the nation are fixed on a tournament of champions this week. Snacks have been prepared, eager spectators huddle around their screen of preference, and social media is primed to blow up. Veteran commentators have gathered at the scene to observe and pontificate. For the competitors, the event represents the culmination of months of dedicated effort, and sometimes entire careers; everything they’ve worked for, both at the college and professional level, has led up to this moment. The national scrutiny can be as daunting for grizzled journeymen as it is for fresh-faced greenhorns. You know what I’m talking about:

The Fordham Institute’s ESSA Accountability Design Competition.

Okay, you probably know what I’m talking about. If you inhabit the world of education policy, you took notice of Fordham’s January call for accountability system frameworks that would comply with the newly passed Every Student Succeeds Act—and take advantage of the new authority the law grants to states. With the federal influence on local classrooms scaled back so suddenly, it will be up to education agencies in Wisconsin and Mississippi and Alaska to adopt their own methods of setting the agenda for schools and rating their performance in adhering to it.

The purpose of...

Last May, Achieve released a report showing that most states have created a false impression of student success in math and reading proficiency. Known as the “honesty gap” (or, as Fordham has long described it, The Proficiency Illusion), the discrepancy between reported and actual proficiency emerges when state test results are compared with NAEP results.[1] For example, Achieve’s May report found that over half of states had discrepancies of more than thirty percentage points with NAEP’s gold standard. Ohio was one of the worst offenders: Our old state test scores (the OAA and OGTs) differed by thirty percentage points or more in each of NAEP’s main test subjects, with a whopping forty-nine-point difference in fourth-grade reading.
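
The gap itself is simple arithmetic: state-reported proficiency minus NAEP proficiency, in percentage points. Here is a minimal sketch in Python, using hypothetical placeholder rates chosen only to reproduce the forty-nine-point Ohio example (not Achieve’s actual published figures):

    # Toy "honesty gap" computation: state-reported proficiency minus
    # NAEP proficiency, in percentage points. The input rates below are
    # hypothetical placeholders, not Achieve's published figures.

    def honesty_gap(state_pct: float, naep_pct: float) -> float:
        """Gap between state-reported and NAEP proficiency rates."""
        return state_pct - naep_pct

    # Placeholder rates chosen to reproduce the forty-nine-point example.
    gap = honesty_gap(state_pct=86.0, naep_pct=37.0)
    print(f"Ohio grade-4 reading honesty gap: {gap:.0f} points")  # 49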

Less than one year later, new state test scores and biennial NAEP results have created an opportunity to revisit the honesty gap. In its latest report, Achieve finds that the gap has significantly narrowed in nearly half of states. Ohio is one of twenty-six states that have earned the commendation “Significantly Improved” for closing the honesty gap in either fourth-grade reading or eighth-grade math by at least ten percentage points since 2013....

Ohio lawmakers recently proposed a bill (HB 420) that would remove students who opt out of standardized tests from the calculation of certain school and district accountability measures. Representative Kristina Roegner (R-Hudson), who introduced the bill, declared that “if [a student is] not going to take the test, in no way should the school be penalized for it.” Students who fail to take state exams (for any reason, not just opting out) count against two of ten school report card measures: the performance index score and the K–3 literacy measure. Non-participating students receive zeroes, which pulls down the overall score on those components.
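
To see how quickly those zeroes drag down an average, here is a minimal sketch in Python. The scores and the flat 0–100 scale are hypothetical simplifications; Ohio’s actual performance index weights results by achievement level, which this toy version ignores:

    # Toy illustration: how counting opt-outs as zeroes affects a
    # performance-index-style average. All numbers are hypothetical,
    # and the flat 0-100 scale simplifies Ohio's level-weighted index.

    scores = [95, 88, 76, 82, None, None]  # None = student didn't test

    # Current law: non-participants count as zeroes in the average.
    with_zeroes = sum(s if s is not None else 0 for s in scores) / len(scores)

    # Original HB 420: non-participants are simply dropped.
    tested = [s for s in scores if s is not None]
    without_zeroes = sum(tested) / len(tested)

    print(f"Counting opt-outs as zeroes: {with_zeroes:.1f}")    # 56.8
    print(f"Excluding opt-outs:          {without_zeroes:.1f}")  # 85.2

In this toy example, two non-participants out of six students cut the average nearly in half, which is why removing the penalty entirely raised the gaming concerns discussed above.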

On first reading, Roegner’s sentiments seem obvious: Why should schools be held responsible for students who decline even to sit for the exams? Is it the job of schools to convince students (or their parents, the more likely objectors) to show up on exam day? While compulsory schooling laws do require students to attend school, there is nothing especially enforceable about exam day in particular. Ohio does not prohibit opting out. Nor does it explicitly allow it, as some states do (e.g., Pennsylvania allows a religious objection to testing; Utah and...

My wife and I both spend time working with our kids on their homework. We have also made a family tradition of “Saturday School,” a routine that my wife and I instituted a couple of years ago because our kids’ school was using a pre-Common Core math curriculum that wasn’t keeping pace with the standards. It has become a weekly exercise for the whole family’s brain. On my personal blog, I’ve shared some of the math problems that I’d written for Saturday School so that other parents could use the problems at home if they wished.

On busy nights, most parents (including me) are hard-pressed to find time to help with daily homework. That’s why my first piece of advice for parents is that they help strengthen their children’s work ethic and accountability by ensuring that homework is completed. My kids have their own dedicated space at home for schoolwork. When they get home from school, the next day’s homework has to be complete and correct before there is any screen time or other activities.

Parents can also help at home with skill building and fluency practice—things like memorizing basic math facts. When it comes to skills, practice is essential....
