This prediction will puzzle, upset, and maybe infuriate a great many readers—and, of course, it could turn out to be wrong—but enough clues, tips, tidbits, and intuitions have converged in recent weeks that I feel obligated to make it:
Will PARCC and Smarter Balanced be eclipsed by longer-established, fleeter-footed testing firms like the College Board and ACT?
Image by Benjamin Chun.
I expect that PARCC and Smarter Balanced (the two federally subsidized consortia of states that are developing new assessments meant to be aligned with Common Core standards) will fade away, eclipsed and supplanted by long-established yet fleet-footed testing firms that already possess the infrastructure, relationships, and durability that give them huge advantages in the competition for state and district business.
In particular, I predict (as does Andy Smarick) that the new ACT-Aspire assessment system, which is supposed to be ready for use in 2014 (a full year earlier than either of the consortium products) and which some states are considering as their new assessment vehicle, will be joined by kindred products to be developed and marketed by the College Board. And the two of them will dominate the market for new Common Core assessments.
One straw in the wind: Alabama’s announcement last week that it is forswearing both consortia and will use the ACT assessment system. And, of course, both Kentucky and New York have already concocted and deployed their own versions of Common Core assessments—possibly but not necessarily interim models.
Although the College Board and ACT have traditionally focused on the high-school-to-college transition, both also have experience earlier in the K–12 sequence. ACT Explore is aimed at eighth and ninth graders, ACT Engage goes down to sixth grade, and ACT “WorkKeys” is a significant player in determining career-readiness. The College Board’s PSAT is typically taken in tenth grade. Its “Readiness Pathway” assessment program reaches down to eighth grade, and its “SpringBoard” program to sixth—with “alignment” guides already prepared for Common Core standards in both English language arts and math for grades six through twelve.
So it’s not too big a stretch for either organization to dip deeper into the K–12 curriculum and assessment business, and it’s no stretch at all for their chief test-administration partners—Pearson in the case of ACT, ETS for the College Board. Each has ample experience in devising and administering tests from the early grades onward. (In fact, Pearson already has pre-K assessments.)
At least as importantly, these organizations know how to give tests to millions of people. They have the infrastructure and the test security. They have the systems for scoring and reporting. Perhaps above all, they have the relationships and the trust of thousands of school systems, dozens of states, and millions of parents. Plenty of states already use ACT products as part of their existing assessment systems. And both organizations are long established, well led, deep-pocketed, and pretty sure to be around a decade or two from now.
As yet, the new consortia have none of those things. They’re struggling with organizational structures, governance, post-federal financing, test-development agonies, uncertain costs, conflicting views of “cut scores,” and all manner of other puzzles.
Those would be significant challenges were there no competition, but ACT has made no secret of its intention to seek states’ Common Core assessment contracts—and Alabama may turn out to be the first of many to sign up. The College Board hasn’t (to my knowledge) announced itself yet, but testing insiders know that it’s lately been on a hiring binge—even luring key assessment developers from ACT—that surely points in this direction.
Will the ACT and College Board versions of Common Core assessments be true “next-generation” tests that probe deeper understanding and more sophisticated (“higher-order”) skills in more revealing ways? Will they be “adaptive” (via computer or otherwise) to kids at different levels of achievement or will they, like most of today’s tests (see discussion here at the seventeen-minute point), do a weak job of differentiating performance at the top and at the bottom of their range of difficulty? I do not know. But I do know that all of these accoutrements carry dollar costs that state assessment budgets may not be able to bear—and veteran testing firms are accustomed to cutting their cloth to fit the wearer’s dimensions.
I assume that scores and scales on the new assessments will be comparable across states (as are current ACT and SAT scores), but individual states will likely set their own “cut points” for purposes of grade-to-grade promotion and high school graduation. That’s tricky, however, if you’re serious about bona fide “career and college readiness,” which is a meaningless concept if it differs by state; what’s more, the new standards aren’t really worth the bother unless “proficiency” levels for every grade cumulate to a desired end-point by senior year. (I predict that, as with consortium-developed assessments, the ACT and College Board folks will recommend grade-specific proficiency scores that do cumulate in the intended way, but individual states will decide for themselves what signifies readiness for promotion and graduation.)
If I’m right that ACT and College Board scarf up much state business, there won’t be a lot left for the consortia—and they may founder. That would, of course, represent a considerable waste of federal dollars. On the other hand, it would remove from the Common Core debate (at least until NCLB-reauthorization time, if that day ever comes) the specter of Arne Duncan and Barack Obama clutching those standards to the federal bosom.
Besides, the consortia could remain useful, even if they don’t do assessments themselves. Neither ACT nor the College Board will want to alienate the many state leaders who have been earnestly advancing the consortium work, and these groups could readily convert into advisory and coordinating bodies that help member states implement and make sense out of the results on the new tests—and advise test developers and standard-setters alike on how their products work in the real world.
Time will tell. I might be jumping to a premature prediction—and you may interpret these entrails differently than I do. Letters to the editor are cordially invited.