Education foxes and hens
May 06, 2009
I'm reminded again and again of America's need for independent education-achievement testing-and-audit bureaus to track and report student performance and school achievement, and to sort out the claims and counterclaims about when these indicators have risen and when they haven't--and perhaps also to explain why.
The National Assessment Governing Board and National Center for Education Statistics perform some of this function for the country as a whole, though they don't blow whistles when someone makes dubious claims or suggests impossible causal relations--even (perhaps especially) when that someone is the former Education Secretary who appointed all of NAGB's current members.
Writing in the Washington Post on Monday, Margaret Spellings, relentlessly defending the No Child Left Behind Act whose implementation she long presided over, tried to attribute NAEP gains since 1999 to the impact of NCLB. She neglected to remind readers that NCLB was proposed in January 2001 and signed into law in January 2002, so the first school year it could conceivably have influenced was 2002-03. The most recent long-term NAEP results come from spring 2008, meaning that five years is the longest period for which any student gains could even be associated with NCLB, much less attributed to that statute. (Because this was no random experiment, any gains or declines could equally have been caused by global warming, Taliban infiltration, or whatever.)

Unfortunately, the long-term-trend NAEP wasn't administered in 2003 (or 2002, for that matter), so one faces a challenge in deciding which year to use as a baseline. The assessment was given in 1999, then again in 2004 and 2008. Spellings opted for 1999--because doing so strengthened her claim. Yet of the gains recorded between 1999 and 2008 for 9- and 13-year-olds--and there were modest gains in both math and reading--the lion's share occurred between 1999 and 2004, not between 2004 and 2008. One could even suggest that NCLB slowed the rate of gain. I wouldn't say that. But neither should Spellings suggest that NCLB caused the gains, since most of them were recorded in an interval that largely predates the law's implementation.
Regrettably, not a peep was to be heard from NAGB or NCES about her dubious use of NAEP results.
By contrast, a lively and continuing exchange has unfolded in New York City between Diane Ravitch, functioning as a sort of one-woman truth squad, and Chancellor Joel Klein's throng, over which gains by Gotham's schools and children can legitimately be associated with the changes that Messrs. Klein and Bloomberg (the latter now running for re-election, of course) have wrought in that city's education system. As Ravitch has repeatedly shown (in this New York Times op-ed, for example, and this response to Jennifer Bell-Ellwanger, who works for Klein), Klein's team, much like Spellings, has chosen a self-serving baseline against which to claim credit for achievement gains, even though its reforms hadn't yet kicked in when the greatest gains were recorded (on New York State tests, in this case).
That exchange has indeed been lively, but it lacks an arbiter to resolve it. That's because America has a long and sorry tradition, particularly at the state and local levels, of entrusting testing--and the analysis and reporting of test results and other performance indicators--to the very system whose performance is, in effect, being appraised. That's an inherent conflict of interest, akin to commingling the company treasurer's function with that of the outside auditor.
Advocates must certainly be expected to reach for whichever data they think make the most convincing case for their accomplishments, exertions, and assertions (and, of course, they then suggest causal relationships that no reputable scientist would accept). Critics, similarly, choose the evidence that bolsters their arguments. This will continue. But the advocates usually prevail because they generally look "official" and their critics can be made to look like cranks. The underlying problem, however, is that the advocates, official or not, typically have their own axes to grind, their own records to defend, their own interests to advance.
The Oklahoma legislature tried during its current session to address this problem, but Governor Brad Henry vetoed the bill. Supported by a highly unusual fraternity of business and education groups--even the teacher unions--lawmakers sought to transfer control of testing and academic standards from the state education department to a new, independent, nonpartisan "Education Quality and Accountability Office." The logic was that the Education Department should implement programs and policies but somebody else should measure, report, and judge the outcomes. Makes sense to me. But apparently not to the governor. As a result, Oklahoma's education chickens will continue to be weighed and measured by the foxes.
That's the norm across America, but it's one that needs disrupting.
Am I dreaming? After I wrote something similar on Flypaper, a reader said I might as well hunt for unicorns as for objective, independent audits of education performance. But I'm undeterred. The NAGB/NCES model, even if muted at times when it might do well to make noise, is basically sound. States and districts would be better off with their own versions of this than continuing to expect the carnivores to report an honest count of the poultry.