On growth-to-standard measures


Richard J. Wenning and Damian W. Betebenner

David Griffith recently praised Colorado’s ESSA plan and how it addresses growth. While doing so, he discourages states from including growth-to-standard (criterion-referenced growth) in education accountability systems under ESSA. We agree with his general sentiment that Colorado’s plan is laudable, but we worry that arguments against using growth-to-standard measures to rate schools obscure the important role that these measures should play in communicating with parents, teachers, and the public.

David’s objections specifically concern school rating systems, but accountability systems also evaluate the educational outcomes of states, districts, and—most importantly, in our view—individual students. Some of the data the systems collect are used to rate or grade these various entities, while other information is simply reported to teachers, administrators, parents, and the public at large. Both of these purposes are important.

Our view is that any accountability system committed to standards-based outcomes (e.g., college and career ready by exit) must ensure that the indicators used in that system are consistent with those outcomes. Growth-to-standard is relevant for exactly that reason. It allows us to connect indicators that measure the growth of all students—which are by and large norm-referenced and divorced from any standard—to each student’s readiness for college or career. Checker Finn tells us why this is important in “The Fog of ‘College Readiness’”:

Our K–12 education system has a transparency problem, and our higher-education system is complicit. While some American parents have a decent sense of whether their children are on track for the kinds of colleges they hope to attend, many more have been kept in the dark—or have been sorely misled. Most parents think their children are on track to be prepared for college after their twelfth-grade year, and most students agree.

But the truth is, a shockingly large share of graduating high school seniors are not prepared to go to college—more than half, by some estimates. Given that the vast majority of high-school students plan to eventually pursue some kind of post-secondary degree, this means millions of kids are being set up for failure. The source of this gap between belief and reality is the K–12 education system. Our schools create a fog when it comes to academic preparation for college success.

Those in charge have their reasons, which mostly turn out to safeguard the interests of adults and their institutions, even as they wreak havoc with the next generation. None of this is acknowledged, however, save by a handful of would-be illuminators, for the education system has generally persuaded itself that this fog is better for kids than clarity would be.

So how do we lift the fog?

Lifting the fog requires us to confront both norm-referenced growth (i.e., the relative progress of students) and growth-to-standard (i.e., the progress of students toward an agreed-upon achievement outcome like proficiency). And Colorado’s growth model, which we helped design, does precisely this. Specifically, in addition to a “median growth percentile,” which summarizes student progress for a school compared with the progress that other schools are making, it also reports “adequate growth percentiles,” which can help stakeholders understand how much progress a student needs to achieve proficiency or, for example, a college-ready cut score on the ACT or SAT in a timely fashion.
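To make the two lenses concrete, here is a deliberately simplified Python sketch of the distinction. It is not the actual Student Growth Percentile methodology (which fits quantile regressions over students’ full score histories); the function names, the one-year-gain simplification, and the example numbers are all our own illustrative assumptions.

```python
from bisect import bisect_right

def growth_percentile(student_gain, peer_gains):
    """Norm-referenced lens: the percentile rank of a student's score
    gain among academic peers (students with similar prior scores).
    Says how the student grew relative to peers, not relative to any
    standard."""
    sorted_gains = sorted(peer_gains)
    rank = bisect_right(sorted_gains, student_gain)
    return round(100 * rank / len(sorted_gains))

def adequate_growth_percentile(current_score, proficiency_cut,
                               years_left, peer_gains):
    """Criterion-referenced lens: the peer-growth percentile that, if
    sustained each remaining year, would carry the student to the
    proficiency cut on time. Anchored to a standard, not to peers."""
    needed_per_year = (proficiency_cut - current_score) / years_left
    return growth_percentile(needed_per_year, peer_gains)

# Hypothetical example: a student 30 scale-score points below the
# proficiency cut with 3 years left, among peers whose annual gains
# run from 1 to 10 points.
peer_gains = list(range(1, 11))
gp = growth_percentile(5, peer_gains)               # typical growth
agp = adequate_growth_percentile(300, 330, 3, peer_gains)
```

In this toy example, a student growing at the 50th percentile of peers would still fall short of a target requiring growth at the 100th percentile—precisely the gap between “growing at a healthy pace” and “on track for the standard” that the two measures together reveal.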

To be sure, using criterion-referenced growth measures like “growth to proficiency” for accountability purposes can be tricky because the measures depend directly on each student’s starting point: students starting behind have further to catch up. Using them well requires balancing criterion-referenced targets against norm-referenced results to determine what is ambitious yet reasonable. The data collected by the metrics should be reported in a disaggregated manner to individual parents, students, and teachers, and used by the state, districts, and schools for improvement efforts and to set statewide goals. It’s certainly of value to know that individual students or students in a given school are progressing at above-average rates—that is, growing at a healthy pace, regardless of where they are relative to a given standard—but it’s also critical to know whether such rates lead to desirable achievement outcomes.

The key fact that should not be obscured by “the fog” is that if standards-based outcomes like “career and college ready by exit” are non-negotiable, then prevailing rates of growth (i.e., learning) need to improve dramatically for low-achieving students in states nationwide. Norm-referenced growth indicators fall short in communicating this without an assist from their criterion-referenced counterparts.

Ultimately, students will enter post-secondary education or a career, and they’ll either be ready or they won’t. The fact that students made lots of relative progress in recent years but are still not ready for higher education won’t help them much at college. Thus we believe it is vital to include both of these growth measures—norm- and criterion-referenced—in accountability systems, especially when it comes to what gets reported. How much growth? And is it good enough? Parents want and deserve the answers in a regular, useful, and engaging format.

Richard J. Wenning is Executive Director of the Be Foundation and former Associate Commissioner of the Colorado Department of Education, where he led the design of the state’s accountability system, including the Colorado Growth Model.

Damian W. Betebenner is Senior Associate with the Center for Assessment. He is the analytic architect of the Student Growth Percentile (SGP) methodology developed in collaboration with the Colorado Department of Education as the Colorado Growth Model.

The views expressed herein represent the opinions of the authors and not necessarily the Thomas B. Fordham Institute.