Common Core Watch

A new Harvard University study examines the link between Common Core implementation efforts and changes in student achievement.

Analysts surveyed randomly selected teachers of grades 4–8 (about 1,600 in Delaware, Maryland, Massachusetts, New Mexico, and Nevada), asking them a number of questions about professional development they’ve received, materials they’ve used, teaching strategies they’ve employed, and more. Analysts used those responses to create twelve composite indices of various facets of Common Core implementation (such as “principal is leading CCSS implementation”), then analyzed the link between each index and students’ performance on the Common Core-aligned assessments PARCC and SBAC. In other words, they sought to link teacher survey responses to their students’ test scores on the 2014–15 PARCC and SBAC assessments, while also controlling for students’ baseline scores and characteristics (along with those of their classroom peers) and teachers’ value-added scores in the prior school year.

The bottom line is that this correlational study finds more statistically significant relationships for math than for English. Specifically, three indices were related to student achievement in math: the frequency and specificity of feedback from classroom observations, the number of days of professional development, and the inclusion of student performance on CCSS-aligned assessments in teacher evaluations....

Editor’s note: This is the fourth in a series of blog posts taking a closer look at the findings and implications of Evaluating the Content and Quality of Next Generation Assessments, Fordham’s new first-of-its-kind report. The first three posts can be read here, here, and here.

It’s historically been one of the most common complaints about state tests: They are of low quality and rely almost entirely on multiple choice items. 

It’s true that item type has sometimes been a proxy, like it or not, for test quality. Yet there is nothing magical about any item format if the item itself is poorly designed. Multiple choice items can be entirely appropriate to assess certain constructs and reflect the requisite rigor. Or they can be junk. The same can be said of constructed response items, where students are required to provide an answer rather than choose it from a list of possibilities. Designed well, constructed response items can suitably evaluate what students know and are able to do. Designed poorly, they are a waste of time.

Many assessment experts will tell you that one of the best ways to assess the skills, knowledge, and competencies that we expect students to demonstrate is through...

Editor’s note: This is the third in a series of blog posts that will take a closer look at the findings and implications of Evaluating the Content and Quality of Next Generation Assessments, Fordham’s new first-of-its-kind report. The first two posts can be read here and here.

The ELA/literacy panels were led by Charles Perfetti (distinguished professor of psychology and director and senior scientist at the University of Pittsburgh’s Learning Research and Development Center) and Lynne Olmos (a seventh-, eighth-, and ninth-grade teacher from the Mossyrock School District in Washington State). The math panels were led by Roger Howe (professor of mathematics at Yale University) and Melisa Howey (a K–6 math coordinator in East Hartford, Connecticut).

Here’s what they had to say about the study.

***

Which of the findings or takeaways do you think will be most useful to states and policy makers?

CP: The big news is that better assessments for reading and language arts are here, and we can expect further improvements. Important for states is that, whatever they decide about adoption of Common Core State Standards, they will have access to better assessments that will be consistent with their goals of improving reading and language arts education....

Editor’s note: This is the second in a series of blog posts that will take a closer look at the findings and implications of Evaluating the Content and Quality of Next Generation Assessments, Fordham’s new first-of-its-kind report. The first post can be read here.

Few policy issues over the past several years have been as contentious as the rollout of new assessments aligned to the Common Core State Standards (CCSS). What began with more than forty states working together to develop the next generation of assessments has devolved into a political mess. Fewer than thirty states remain in one of the two federally funded consortia (PARCC and Smarter Balanced), and that number continues to dwindle. Nevertheless, millions of children have begun taking new tests—whether those developed by the consortia, ACT’s Aspire, or state-specific assessments constructed to measure student performance against the CCSS or other college- and career-ready standards.

A key hope for these new tests was that they would overcome the weaknesses of the previous generation of state assessments. Among those weaknesses were poor alignment with the standards they were designed to assess and low overall levels of cognitive demand (i.e., most items required simple recall or...

A decade ago, U.S. education policies were a mess. It was the classic problem of good intentions gone awry.

At the core of the good idea was the commonsense insight that if we want better and more equitable results from our education system, we should set clear expectations for student learning, measure whether our kids are meeting those expectations, and hold schools accountable for their outcomes (mainly gauged in terms of academic achievement).

And sure enough, under the No Child Left Behind law, every state in the land mustered academic standards in (at least) reading and math, annual tests in grades 3–8, and some sort of accountability system for their public schools.

Unfortunately, those standards were mostly vague, shoddy, or misguided; the tests were simplistic and their “proficiency” bar set too low. The accountability systems encouraged all manner of dubious practices, such as focusing teacher effort on a small subset of students at risk of failing the exams rather than advancing every child’s learning.

What a difference a decade makes. To be sure, some rooms in the education policy edifice remain in disarray. But thanks to the hard work and political courage of the states, finally abetted by some...

Nancy Doorey

Editor’s note: This is the first in a series of blog posts that will take a closer look at the findings and implications of Evaluating the Content and Quality of Next Generation Assessments, Fordham’s new first-of-its-kind report.

Debates about testing—and state tests in particular—have reached new levels of intensity and rancor. While affecting only a fraction of the U.S. public school population, the opt-out movement reflects a troubling trend for state and district leaders who rely on the tests to monitor their efforts to prepare all students to successfully transition to higher education or the workforce.

The recently adopted Every Student Succeeds Act (ESSA), like its NCLB predecessor, requires annual standardized testing in English language arts/literacy and mathematics in grades three through eight and once in high school. While ESSA contains some new flexibilities and even encourages use of much more than test scores to evaluate performance, states will continue to use—and the public will continue to debate—state tests.

And that’s exactly why Fordham’s new study and the companion study by HumRRO are so important. In a time of opt-out initiatives and heated debate, state decision-makers need to know whether a given test is worth fighting...

The Thomas B. Fordham Institute has been evaluating the quality of state academic standards for nearly twenty years. Our very first study, published in the summer of 1997, was an appraisal of state English standards by Sandra Stotsky. Over the last two decades, we’ve regularly reviewed and reported on the quality of state K–12 standards for mathematics, science, U.S. history, world history, English language arts, and geography, as well as the Common Core, International Baccalaureate, Advanced Placement, and other influential standards and frameworks (such as those used by PISA, TIMSS, and NAEP). In fact, evaluating academic standards is probably what we’re best known for.

For most of the last two decades, we’ve also dreamed of evaluating the tests linked to those standards—mindful, of course, that in most places, the tests are the real standards. They’re what schools (and sometimes teachers and students) are held accountable for, and they tend to drive curricula and instruction. (That’s probably the reason why we and other analysts have never been able to demonstrate a close relationship between the quality of standards per se and changes in student achievement.) We wanted to know how well matched the assessments were to the standards, whether they were of high...

New York State education officials raised a ruckus two weeks ago when they announced that annual statewide reading and math tests, administered in grades 3–8, would no longer be timed. The New York Post quickly blasted the move as “lunacy” in an editorial. “Nowhere in the world do standardized exams come without time limits,” the paper thundered. “Without time limits, they’re a far less accurate measure.” Eva S. Moskowitz, founder of the Success Academy charter schools, had a similar reaction. “I don’t even know how you administer a test like that,” she told the New York Times.

I’ll confess that my initial reaction was not very different. Intuitively, testing conditions would seem to have a direct impact on validity. If you test Usain Bolt and me on our ability to run one hundred meters, I might finish faster if I’m on flat ground and the world record holder is forced to run up a very steep incline. But that doesn’t make me Usain Bolt’s equal. By abolishing time limits, it seemed New York was seeking to game the results, giving every student a “special education accommodation” with extended time for testing. 

But after reading the research and talking to leading psychometricians, I’ve concluded that both...

If you’ve been keeping up with the Common Core scandal pages, you may be wondering who Dianne Barrow is.

Until this month, the answer would have been, “An anonymous functionary scuttling about the publishing behemoth known as Houghton Mifflin Harcourt.” That was before Barrow, who now finds herself a cog without a machine, was featured in an eight-minute video produced by Project Veritas and its merry prankster front man James O’Keefe. In it, she explains how entities like HMH and Pearson view Common Core as a chance to sell second-rate books to schools suddenly required to teach from standards-aligned materials. (She also mouths off about home-schoolers, but that’s basically included as bonus content.) “You don’t think that educational publishing companies are in it for the kids, do you? No, they’re in it for the money,” she says.

Take a coffee break and check out the video. Not because it contains any footage of journalistic merit, or because its makers are especially credible. In fact, the opposite is true. O’Keefe is one of those charming types whose mugshot pops up if you google him, a memento of his arrest and guilty plea following a bungled attempt to break into a U.S. senator’s office and tamper with phones....

My wife and I both spend time working with our kids on their homework. We have also made a family tradition of “Saturday School,” a routine that my wife and I instituted a couple of years ago because our kids’ school was using a pre-Common Core math curriculum that wasn’t keeping pace with the standards. It has become a weekly exercise for the whole family’s brain. On my personal blog, I’ve shared some of the math problems that I’d written for Saturday School so that other parents could use the problems at home if they wished.

On busy nights, most parents (including me) are hard-pressed to find time to help with daily homework. That’s why my first piece of advice for parents is that they help strengthen their children’s work ethic and accountability by ensuring that homework is completed. My kids have their own dedicated space at home for schoolwork. When they get home from school, the next day’s homework has to be complete and correct before there is any screen time or other activities.

Parents can also help at home with skill building and fluency practice—things like memorizing basic math facts. When it comes to skills, practice is essential....
