One reason we wonks love state-level—and now city-level—data from the National Assessment of Educational Progress is that it helps us identify jurisdictions that are making strong progress compared to their peers. We assume those places are leading the pack because something they are doing is working. (Plus, it makes us feel better about our chosen profession. Policy matters!)
But does it? Matthew DiCarlo of the Shanker Institute, among others, has wondered whether the ups and downs of test score trends are necessarily related to what’s happening in the classroom, much less what’s happening in the statehouse or the school board. NAEP results, DiCarlo wrote, “can’t be used to draw even moderately strong inferences about what works and what doesn’t.”
He may be onto something. Take a look at the two graphs below, which show the relationship between changes in eleven cities’ NAEP results and changes in their median incomes:
Cities furthest to the left (San Diego, Chicago, Charlotte, and Cleveland) have seen their median incomes decline most dramatically from 2005 to 2011. Cities toward the right have seen their incomes increase (Boston and Houston) or increase dramatically (Washington, D.C.). These trends could reflect shifts in the population of these cities, with richer or poorer people moving in or out, or simply the impact of the Great Recession on a stable population; we suspect it’s some of both.
Meanwhile, on the y-axis, we show changes in average NAEP scale scores at the fourth-grade level (2005–2011) by city. Eleven cities reported NAEP reading and math scores in both 2005 and 2011. The cities at the top of the vertical axis had higher NAEP gains—Washington, D.C., Boston, and Atlanta in reading; Washington, D.C., in math. At the bottom of the vertical axis, we observe the laggards, cities such as Charlotte, New York City, Houston, and especially Cleveland.
When we plot the points and fit a line through them, we see that cities with higher NAEP gains have generally experienced higher gains in median income. The slopes of these lines (0.15 for reading and 0.14 for math) are modestly significant, indicating that changes in wealth appear to relate to changes in NAEP scores.
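For readers who want to replicate this kind of exercise, the trend lines above are just ordinary-least-squares fits of score changes on income changes. Here is a minimal sketch in Python; the city figures below are hypothetical placeholders, not the actual NAEP or Census numbers behind our graphs.

```python
# A minimal sketch of the scatter-plot-plus-trend-line exercise:
# compute the OLS slope relating changes in median income (x)
# to changes in NAEP scale scores (y).

def ols_slope(xs, ys):
    """Ordinary-least-squares slope of y on x."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov_xy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    return cov_xy / var_x

# Hypothetical (income change in $1,000s, reading-score change) pairs
# for eleven cities -- placeholder values only.
income_changes = [-9, -7, -5, -3, -1, 0, 1, 2, 4, 6, 12]
score_changes = [-4, 1, 3, -2, 0, 1, 2, 4, 3, 2, 8]

slope = ols_slope(income_changes, score_changes)
print(f"slope of score gains on income gains: {slope:.2f}")
```

With only eleven data points, the standard error on a slope like this is large, which is part of why we call the relationship only modestly significant.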
To be sure, these data don’t prove that demography is destiny. Education reform still matters. Atlanta’s and Boston’s strong gains in reading and Chicago’s and San Diego’s strong gains in math put them way above our regression line; Cleveland’s catastrophic performance puts it well below the line in both subjects. And, of course, this is a very small sample size.
But these demographic data surely add important context. San Diego and Austin, for example, post similarly impressive gains in reading—but the former did it while getting shellacked by the recession. And Houston’s and New York City’s performance looks downright disappointing on both subjects, considering these cities’ relative economic health. (Yes, Joel Klein, we’re expecting your call any minute now.)
And then there’s Washington, D.C. Even before Michelle Rhee stormed into town, the District was posting impressive gains on the NAEP. But does the credit go to improvements in the system and its classrooms, or to the city’s gentrification and strong economy?
We understand that even raising the issues of wealth and demographics is going to make us suspect in some reform circles. But this simple (and yes, simplistic) exercise should remind us all that when looking for proof points, context matters.