November 02, 2009
This revealing back-and-forth with the United States Department of Education is the third and final installment in our testing-consortia series.
“The Department,” like any hulking, beltway-bound federal agency, can seem like a cold, faceless leviathan—this imposing force, issuing impenetrable regulations from a utilitarian, vaguely Soviet, city block–sized building in the shadow of the Capitol.
But those who interact with it regularly, especially those of us fortunate enough to have worked there, know that it is made up of hundreds and hundreds of very fine people.
During my tenure there, I found both the career staff and the political appointees to be knowledgeable public servants and excellent colleagues. While working for a state department of education, I found the Department’s team to be thoughtful, accessible, and accommodating. And in my loyal-opposition think-tank stints, during which I sometimes find myself poking and prodding the Department, they’ve been patient, respectful, but understandably steely adversaries.
I’m appreciative that they took the time to answer these questions so thoroughly, and I’m flabbergasted that they did so at—in terms of agency timelines—Guinness-Book speed.
The consortia are designing the next generation of assessment systems, which include diagnostic or formative assessments, not just end-of-year summative assessments. Their systems will assess student achievement of standards, student growth, and whether students are on track to being college- and career-ready. These new systems will offer significant improvements directly responsive to the wishes of teachers and other practitioners: they will offer better assessment of critical thinking, through writing and real-world problem solving, and offer more accurate and rapid scoring. The Smarter Balanced consortium’s assessment will also be “computer-adaptive,” meaning that the difficulty of questions will adjust to students’ ability levels as they proceed through the test.
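For readers curious what “computer-adaptive” means in practice, the core idea can be illustrated with a toy sketch. This is not Smarter Balanced’s actual algorithm (operational adaptive tests draw on item response theory and calibrated item pools); the function and difficulty scale below are purely hypothetical, chosen to show the basic feedback loop.

```python
# Toy illustration of computer-adaptive testing: the next item's
# difficulty rises after a correct answer and falls after a miss.
# This is NOT the consortium's actual algorithm; all names and the
# 1-5 difficulty scale here are hypothetical.

def run_adaptive_test(answers, num_levels=5, start_level=3):
    """Walk a student through items, adjusting difficulty per response.

    answers: list of booleans, True if the student answered correctly.
    Returns the sequence of difficulty levels presented (1 = easiest).
    """
    level = start_level
    presented = []
    for correct in answers:
        presented.append(level)
        if correct:
            level = min(num_levels, level + 1)  # harder next item
        else:
            level = max(1, level - 1)           # easier next item
    return presented

# A student who answers two right, one wrong, one right climbs, dips,
# and climbs again from the starting level.
print(run_adaptive_test([True, True, False, True]))  # [3, 4, 5, 4]
```

The payoff of this feedback loop is measurement efficiency: by steering each student toward items near his or her ability level, an adaptive test can pinpoint achievement with fewer questions than a fixed-form test.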
The two consortia are making significant progress developing their assessment systems and are making an effort to be as transparent as possible, going well beyond what is typical in an assessment-development process. They have released a wide variety of information on how they will create the assessments and have invited comment from educators, district practitioners, additional national experts and the public. In addition, both PARCC and Smarter Balanced have released sample items to offer educators and the public an early look and will release additional questions this summer.
When the two consortia roll out their new assessments in the 2014-15 school year, they will be works in progress. We fully expect some schedule adjustments and technical glitches. Assessment 2.0 will need lots of work to get to version 2.1 and 2.2. States and districts will improve implementation as they learn from pilots and field tests. And teachers will play an absolutely critical role in providing the consortia feedback about what works and what doesn’t work.
This new generation of assessments—combined with the adoption of internationally benchmarked, college- and career-ready standards—is an absolute game-changer for American education. PARCC and Smarter Balanced are a tremendously important step toward getting better, more accurate, and more actionable data about what students know and can do. As important as better assessments are, they must work in tandem with high-quality curriculum; meaningful, job-embedded professional development; and all the other pieces that will support educators preparing to teach to these new standards.
As with all grantees, the Department works to ensure that the grants are on track, that funds are spent appropriately, and that we actively support grantee success. See the RTTA Program Review Process for some additional details. In addition, because we recognize the complexity of the consortia’s work, we have held a series of public meetings over the past two years to address particular components of their systems—state and local technology needs, automated scoring of assessments, and how to improve the accessibility of assessments for all students, particularly students with disabilities and English learners. While each consortium has created its own technical advisory group, the Department recently created the RTTA Technical Review to help analyze each consortium’s progress and identify areas where additional attention may be necessary.
The states are the vital decision-makers here. States have demonstrated remarkable leadership, first through developing and adopting new, higher standards, and then through design and development of the next generation of high-quality assessments. But this is hard work. We are asking an enormous amount from principals and teachers over the next several years. We fully expect that there will be states that choose not to stay on board, and in those that do, we must provide teachers and principals with the resources and professional development they need to make the transition. Further, even if a state opts out of a consortium now, it can re-enter at any time in the future.
States must make the right decisions for their students and communities. There’s overwhelming agreement that high standards and well-aligned assessments, emphasizing critical thinking and writing, are vital to serving students well. How states get there is entirely up to them.
It’s worth pointing out that when the states developed the Common Core State Standards, they introduced some important distinctions from current standards and current state tests. For example, the Common Core emphasizes writing in the English language arts standards. Any assessment aligned to the Common Core needs to similarly emphasize writing, a skill children need to be ready for college and the workforce. These and other distinctions mean that assessments that truly measure the Common Core will likely look different from current state tests, a necessary change as we move from fill-in-the-bubble tests toward more engaging assessments that better mirror good instruction in the classroom.
The Department is focused on states developing college- and career-ready standards and aligned high-quality assessments that provide a better, more accurate measure of what students know and can do and whether they graduate high school ready for college or the workforce. We don’t want to see any state go backward. We expect the consortia to develop assessment systems that are markedly better than current assessments, and we expect them to already be considering how to continue innovating and improving those systems. We understand that states may choose a different way of measuring whether their students are ready for college and careers, and we are working with states such as Minnesota, Virginia, and Utah on their approaches. Again, each state needs to make the best decision for itself based on all the relevant facts.
We expect that all states will continue to improve their assessment systems. This currently includes requirements that state tests be aligned to the standards chosen by the state; provide accurate, valid, and reliable data about student knowledge and skills; and measure higher-order thinking skills. In December 2012, the Department paused our peer review of state assessment systems in order to reconsider whether our criteria and process for evaluating assessments are sufficient to measure whether an assessment system is a high-quality measure of college and career readiness. We will be providing additional detail in the coming months about our process and our criteria. Once complete, all assessment systems, including PARCC, Smarter Balanced, and all other state assessment systems, will be required to demonstrate how they meet the requirements for technical quality, alignment, and other assessment best practices. It is vital that students, parents, and educators receive reliable and valid information on student achievement of standards, student growth, and whether students are on track to being college- and career-ready, regardless of the state in which they reside.
Having multiple state assessment systems aligned to common content standards with different cut scores and proficiency standards would make comparison harder (though not impossible), which would be unfortunate. In addition, the public reporting and transparency required under ESEA would continue to be an avenue to identify schools and districts that are doing a good job and identify where states are lagging in what they expect of students. States that have college- and career-ready standards will continue to work with their institutions of higher education to identify what it means to measure college- and career-readiness on state tests. This is important work that PARCC and Smarter Balanced are actively engaged in and something that has been lacking in state assessment systems previously. For states not in either consortium in the future, the connection to higher education will help ensure that states set a rigorous bar for college and career readiness. In addition, the National Assessment of Educational Progress (NAEP) will continue to give the nation a “report card” on how students are doing across states.
In 2010, in direct response to requests from governors and chief state school officers, the Department elected to use a portion of the Race to the Top funds from the American Recovery and Reinvestment Act (ARRA) to support the next generation of assessment because the market was not meeting their needs. Current state tests fell short in several important ways: they often did not measure the full range of what students should know, focusing on easier skills and ignoring hard-to-measure standards, and most states did not include writing in their assessment systems (to name just a few of the issues with the current market of tests).
We have already seen the Race to the Top Assessment program move the field of assessment. Forty-four states and DC, working in two consortia to develop assessments aligned to the Common Core, have pushed the field to react in ways it likely would not have if each state were separately pursuing a new set of assessments. A 2012 study by the RAND Corporation, for example, indicated that most state tests do not assess the “deeper learning skills” involved in cognitively complex tasks. By contrast, an initial study of the consortia by CRESST in 2013 shows promising results for the consortia’s ability to measure students’ progress in “mastering and being able to apply core academic content and cognitive strategies related to complex thinking, communication, and problem solving.”
Yes, the Department is concerned about test security. We don’t think the concerns are any greater with PARCC and Smarter Balanced than with current state tests, though the challenges may change slightly because the tests are primarily computer-based and because a breach in security could have repercussions beyond a single state. The consortia need to establish security controls and procedures to address these issues, and we expect them to do so as they ramp up toward the field test in spring 2014 and the first operational assessment in the 2014-2015 school year.
It’s worth pointing out that in recent months, critics have claimed that high-stakes tests drive teachers and school administrators to cheat. But that argument confuses correlation with causation. And it also ignores history. There is no excuse for school administrators and teachers tampering with student tests to boost test scores. It is morally indefensible—and it is most damaging to the very students who most desperately need the help of their teachers and school leaders.
We reject the idea that the system makes people cheat. Millions of educators administer tests, but very few choose to cheat. In all but a tiny minority of cases, teachers want their children to genuinely learn and grow—not achieve phony gains to make themselves or their schools look good. In places where a district’s culture is rotten, people must speak out. But the vast, vast majority of educators are committed to assessing their students’ progress with complete integrity.