Standards, Testing & Accountability

As a form of credentialing, high school diplomas are supposed to signal whether a young person possesses a certain set of knowledge and skills. When meaningful, the diploma mutually benefits individuals who have obtained one—it helps them stand out from the crowd—and colleges or employers that must select from a pool of many candidates.

In recent years, however, Ohio’s high school diploma has been diluted to the point where its value has been rightly questioned. One of the central problems has been the state’s embarrassingly easy exit exams, the Ohio Graduation Tests (OGT). To rectify this situation, Ohio is phasing in new high school graduation requirements starting with the class of 2018. Under these new requirements, students must pass a series of seven end-of-course assessments in order to graduate high school, or meet alternative requirements such as attaining a remediation-free ACT score or earning an industry credential.

The end-of-course exams have proven tougher for students to pass than the OGT, leading to concerns that too many young people will soon be stranded without a diploma. One local superintendent called the situation an “apocalypse,” predicting that more than 30 percent of high school students in his...

The Batman v Superman edition

On this week’s podcast, Mike Petrilli, Alyssa Schwenk, and David Griffith discuss the titanic tussle between two tendentious tenets of school success measurement occurring among the mighty minds of Fordham and spilling out into the greater world. It’s proficiency vs. student growth. KA-THOOOOM! On the Research Minute, Amber tackles an early grade retention policy in Florida.

Amber's Research Minute

Christina LiCalsi, Umut Özek, and David Figlio, "The Uneven Implementation of Universal School Policies: Maternal Education and Florida's Mandatory Grade Retention Policy," CALDER (September 2016).

David Steiner

NOTE: The publication of a recent Flypaper post arguing that growth measures (like “value added” or “student growth percentiles”) are a fairer way to evaluate schools than are proficiency measures drew quick reaction both inside and outside of Fordham. Here we present a "letter to the editor" in response to the initial blog post, lightly edited.

To the editors:

I find your argument that state accountability systems should increase the weight of growth indicators, as against proficiency indicators, perplexing. Here is a summary of why.

The most basic difficulty with the growth models you recommend is this: they attempt to estimate a school’s average contribution to students’ achievement based on past achievement within a given state and a comparison group in that state. Such a growth measure is norm-referenced rather than criterion-referenced, i.e., relative to other students in other schools rather than to an external standard. Assigning such a heavy weight to relative growth may end up exempting a school from improvement funding and other support even if its students perform far more poorly than students in schools that would be identified for intervention.

To focus on the details: The first problem in your recommendation is its lack...

Piet van Lier

NOTE: All photos used in this piece were graciously provided by the Cleveland Transformation Alliance. The photo at the top of this page features HBCU Preparatory School student Meiyah Hill and school principal Tim Roberts.

Standardized test scores are the most common measure of academic success in our nation’s K-12 schools. While they are an important indicator, most observers would agree that tests don’t tell the whole story about what’s happening in our public schools.

Given the recent changes to Ohio’s assessments and standards and their impact on test scores statewide, the need to tell a deeper story about public education has become even more evident.

We know that Cleveland’s Plan for Transforming Schools is enabling both district and charter schools to create new learning environments that lay a foundation for sustainable academic improvement. Progress is slow and not always visible from the outside, but it’s happening.

That’s why the Cleveland Transformation Alliance recently partnered with Civic Commons ideastream to share powerful stories about education in Measuring Success Behind the Numbers. The conversation included three storytellers:

  • Student Meiyah Hill talked about how HBCU Preparatory School, a charter middle school in Cleveland, made her feel
  • ...

Management expert Peter Drucker once defined leadership as “lifting a person's vision to higher sights.” Ohio has set its policy sights on loftier goals for all K-12 students in the form of more demanding expectations for what they should know and be able to do by the end of each grade en route to college and career readiness. That’s the plan, anyway.

These higher academic standards include the Common Core in math and English language arts along with new standards for science and social studies. (Together, these are known as Ohio’s New Learning Standards.) Aligning with these more rigorous expectations, the state has implemented new assessments designed to gauge whether students are meeting the academic milestones important to success after high school. In 2014-15, Ohio replaced its old state exams with the PARCC assessments, and in 2015-16 the state transitioned to exams developed jointly by the American Institutes for Research (AIR) and the Ohio Department of Education.

As the state marches toward higher standards and—one hopes—stronger pupil achievement and school performance, Ohioans are also seeing changes in the way the state reports student achievement and rates its approximately 600 districts and 3,500 public schools. Consider these developments:

As the standards...

The grade inflation edition

On this week’s podcast, Mike Petrilli, Alyssa Schwenk, and David Griffith discuss whether teachers should be giving As and Bs to students who aren't on track for success. On the Research Minute, Amber Northern examines whether sixth graders fare better when they aren't the youngest students in the school.

Amber's Research Minute

Amy Ellen Schwartz, Leanna Stiefel, and Michah W. Rothbart, "Do Top Dogs Rule in Middle School? Evidence on Bullying, Safety, and Belonging," AERA (September 2016).

The annual release of state report card data in Ohio evokes a flurry of reactions, and this year is no different. The third set of tests in three years, new components added to the report cards, and a precipitous decline in proficiency rates are just some of the topics making headlines. News, analysis, and opinion on the health of our schools and districts – along with criticism of the measurement tools – come from all corners of the state.

Fordham Ohio is your one-stop shop to stay on top of the coverage:

  • Our Ohio Gadfly Daily blog has already featured our own quick look at the proficiency rates reported in Ohio’s schools as compared to the National Assessment of Educational Progress (NAEP). More targeted analysis will come in the days ahead. You can check out the Ohio Gadfly Daily here.
  • Our official Twitter feed (@OhioGadfly) and the Twitter feed of our Ohio Research Director Aaron Churchill (@a_churchill22) have featured graphs and interesting snapshots of the statewide data with more to come.
  • Gadfly Bites, our thrice-weekly compilation of statewide education news clips and editorials, has already featured coverage of state report cards from the Columbus Dispatch,
  • ...

Ohio’s report card release showed a slight narrowing of the “honesty gap”—the difference between the state’s own proficiency rate and proficiency rates as defined by the National Assessment of Educational Progress (NAEP). The NAEP proficiency standard has long been considered stringent—and one that can be tied to college and career readiness. When states report inflated proficiency rates relative to NAEP, they may label their students “proficient,” but they overstate to the public the number of students who are meeting high academic standards.

The chart below displays Ohio’s three-year trend in proficiency on fourth and eighth grade math and reading exams, compared to the fraction of Buckeye students who met proficiency on the latest round of NAEP. The red arrows show the disparity between NAEP proficiency and the 2015-16 state proficiency rates.

Chart 1: Ohio’s proficiency rates 2013-14 to 2015-16 versus Ohio’s 2015 NAEP proficiency

As you can see, Ohio narrowed its honesty gap by lifting its proficiency standard significantly in 2014-15 with the replacement of the Ohio Achievement Assessments and its implementation of PARCC. (The higher PARCC standards meant lower proficiency...

School report cards offer important view of student achievement - critical that schools be given continuity moving forward

The Ohio Department of Education today released school report cards for the 2015-16 school year. After a couple of tumultuous years, today’s traditional fall report card release reflects a return to normalcy. This year also marked the first year of administration for next-generation exams developed jointly by Ohio educators and the American Institutes for Research (AIR).

“This year’s state testing and report card cycle represents a huge improvement from last year,” said Chad L. Aldis, Vice President for Ohio Policy and Advocacy at the Thomas B. Fordham Institute. “Last year’s controversy made it easy to forget the simple yet critical role state assessments and school report cards play. They are, quite simply, necessary, annual checkups to see how well schools are preparing students for college or career.”

“The state tests are designed to measure the extent to which our children are learning so that our students can compete with students around the country and around the globe,” said Andy Boy, Founder and CEO of United Schools Network, a group of high-performing charter schools in Columbus....

A report recently released by the Economic Studies program at the Brookings Institution delves into the complex process behind designing and scoring cognitive assessments. Author Brian Jacob illuminates the difficult choices developers face when creating tests—and how those choices impact test results.

Understanding exam scores should be a simple enough task. A student is given a test, he answers a percentage of questions correctly, and he receives a score based on that percentage. Yet for modern cognitive assessments (think SAT, SBAC, and PARCC), the design and scoring processes are much more complicated.

Instead of simple fractions, these tests use complex statistical models to measure and score student achievement. These models—and other design elements, such as test length—alter the distribution (or spread) of reported scores. When creating a test, then, designers make choices about length and scoring models that shape exam results and, in turn, future education policy.
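The report stays high-level about what those models look like, but the standard approach in modern assessment is item response theory (IRT). As a hypothetical illustration only—the model choice, item parameters, and grid-search estimator below are assumptions for the sketch, not details drawn from the report—here is a minimal two-parameter logistic (2PL) model showing how two students with the same raw score can receive different scaled scores:

```python
import math

def p_correct(theta, a, b):
    """2PL IRT model: probability that a student with ability theta
    answers an item with discrimination a and difficulty b correctly."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def estimate_ability(responses, items):
    """Maximum-likelihood ability estimate via a simple grid search
    over theta in [-4, 4]. responses: list of 0/1 item scores;
    items: list of (discrimination, difficulty) pairs."""
    grid = [g / 100.0 for g in range(-400, 401)]
    def log_lik(theta):
        total = 0.0
        for r, (a, b) in zip(responses, items):
            p = p_correct(theta, a, b)
            total += math.log(p) if r == 1 else math.log(1.0 - p)
        return total
    return max(grid, key=log_lik)

# Hypothetical item bank: (discrimination, difficulty) for three items,
# ranging from easy (b = -1.0) to hard (b = 1.5).
items = [(1.2, -1.0), (1.0, 0.0), (1.5, 1.5)]

# Both students answer 2 of 3 items correctly, but which items they
# answered differ—so their estimated abilities differ too.
easy_two = estimate_ability([1, 1, 0], items)  # missed the hard item
hard_two = estimate_ability([0, 1, 1], items)  # missed the easy item
print(easy_two, hard_two)
```

The point of the sketch is the contrast with simple percent-correct scoring: under a percentage model both students would score identically, while an IRT-style model credits the second student for answering the harder item, which is one way test design decisions reshape the distribution of reported scores.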

Test designers can choose from a variety of statistical models to create a scoring system for a cognitive assessment. Each model distributes test scores in a different way, but the purpose behind each is the same: reduce the margin of error and provide a more accurate representation of...
