Standards, Testing & Accountability

Management expert Peter Drucker once defined leadership as “lifting a person's vision to higher sights.” Ohio has set its policy sights on loftier goals for all K-12 students in the form of more demanding expectations for what they should know and be able to do by the end of each grade en route to college and career readiness. That’s the plan, anyway.

These higher academic standards include the Common Core in math and English language arts, along with new standards for science and social studies. (Together, these are known as Ohio’s New Learning Standards.) To align with these more rigorous expectations, the state has implemented new assessments designed to gauge whether students are meeting the academic milestones important to success after high school. In 2014-15, Ohio replaced its old state exams with the PARCC assessments, and in 2015-16 the state transitioned to exams developed jointly by the American Institutes for Research (AIR) and the Ohio Department of Education.

As the state marches toward higher standards and—one hopes—stronger pupil achievement and school performance, Ohioans are also seeing changes in the way the state reports student achievement and rates its approximately 600 districts and 3,500 public schools. Consider these developments:

As the standards...

The grade inflation edition

On this week’s podcast, Mike Petrilli, Alyssa Schwenk, and David Griffith discuss whether teachers should be giving As and Bs to students who aren't on track for success. During the research minute, Amber Northern examines whether sixth graders fare better when they aren't the youngest students in the school.

Amber's Research Minute

Amy Ellen Schwartz, Leanna Stiefel, and Michah W. Rothbart, "Do Top Dogs Rule in Middle School? Evidence on Bullying, Safety, and Belonging," AERA (September 2016).

The annual release of state report card data in Ohio evokes a flurry of reactions, and this year is no different. The third set of tests in three years, new components added to the report cards, and a precipitous decline in proficiency rates are just some of the topics making headlines. News, analysis, and opinion on the health of our schools and districts – along with criticism of the measurement tools – come from all corners of the state.

Fordham Ohio is your one-stop shop to stay on top of the coverage:

  • Our Ohio Gadfly Daily blog has already featured our own quick look at the proficiency rates reported in Ohio’s schools as compared to the National Assessment of Educational Progress (NAEP). More targeted analysis will come in the days ahead. You can check out the Ohio Gadfly Daily here.
  • Our official Twitter feed (@OhioGadfly) and the Twitter feed of our Ohio Research Director Aaron Churchill (@a_churchill22) have featured graphs and interesting snapshots of the statewide data with more to come.
  • Gadfly Bites, our thrice-weekly compilation of statewide education news clips and editorials, has already featured coverage of state report cards from the Columbus Dispatch,
  • ...

Ohio’s report card release showed a slight narrowing of the “honesty gap”—the difference between the state’s own proficiency rate and the proficiency rate as defined by the National Assessment of Educational Progress (NAEP). The NAEP proficiency standard has long been considered stringent—and one that can be tied to college and career readiness. When states report proficiency rates that are inflated relative to NAEP, they may label their students “proficient,” but they overstate to the public the number of students who are actually meeting high academic standards.
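To make the arithmetic concrete, here is a minimal sketch of the gap calculation. The rates below are invented for illustration, not Ohio’s actual figures:

```python
# Hypothetical numbers only: the "honesty gap" is simply the state's
# reported proficiency rate minus the NAEP proficiency rate for the
# same grade and subject.

def honesty_gap(state_rate, naep_rate):
    """Percentage-point gap between state-reported and NAEP proficiency."""
    return state_rate - naep_rate

# Assumed example: a state reports 80 percent of fourth graders proficient
# in reading while NAEP puts the figure at 38 percent.
print(honesty_gap(80, 38))  # 42-point gap: the state overstates proficiency
```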

The chart below displays Ohio’s three-year trend in proficiency on fourth and eighth grade math and reading exams, compared to the fraction of Buckeye students who met proficiency on the latest round of NAEP. The red arrows show the disparity between NAEP proficiency and the 2015-16 state proficiency rates.

Chart 1: Ohio’s proficiency rates 2013-14 to 2015-16 versus Ohio’s 2015 NAEP proficiency

As you can see, Ohio narrowed its honesty gap by raising its proficiency standard significantly in 2014-15, when it replaced the Ohio Achievement Assessments with PARCC. (The higher PARCC standards meant lower proficiency...

School report cards offer important view of student achievement - critical that schools be given continuity moving forward

The Ohio Department of Education today released school report cards for the 2015-16 school year. After a couple of tumultuous years, today’s traditional fall report card release reflects a return to normalcy. This year also marked the first administration of next-generation exams developed jointly by Ohio educators and the American Institutes for Research (AIR).

“This year’s state testing and report card cycle represents a huge improvement from last year,” said Chad L. Aldis, Vice President for Ohio Policy and Advocacy at the Thomas B. Fordham Institute. “Last year’s controversy made it easy to forget the simple yet critical role state assessments and school report cards play. They are, quite simply, necessary, annual checkups to see how well schools are preparing students for college or career.”

“The state tests are designed to measure the extent to which our children are learning so that our students can compete with students around the country and around the globe,” said Andy Boy, Founder and CEO of United Schools Network, a group of high-performing charter schools in Columbus....

A report recently released by the Economic Studies program at the Brookings Institution delves into the complex process behind designing and scoring cognitive assessments. Author Brian Jacob illuminates the difficult choices developers face when creating tests—and how those choices impact test results.

Understanding exam scores should be a simple enough task. A student is given a test, he answers a percentage of questions correctly, and he receives a score based on that percentage. Yet for modern cognitive assessments (think SAT, SBAC, and PARCC), the design and scoring processes are much more complicated.

Instead of simple fractions, these tests use complex statistical models to measure and score student achievement. These models—and other elements, such as test length—alter the distribution (or spread) of reported test scores. When creating a test, then, designers must make decisions about length and scoring models that shape exam results and, consequently, future education policy.
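To see why model choice matters, consider a minimal sketch contrasting percent-correct scoring with a one-parameter (Rasch) item response model. This is an illustration under assumed difficulty values, not the scoring engine any of these assessments actually uses:

```python
import numpy as np

# Sketch only: classic percent-correct scoring vs. a one-parameter (Rasch)
# IRT model. Operational assessments like PARCC use far more elaborate
# models; the difficulty values here are invented.

def percent_correct(responses):
    """Classic scoring: the share of items answered correctly."""
    return sum(responses) / len(responses)

def rasch_ability(responses, difficulties, iterations=100, lr=0.1):
    """Estimate ability (theta) by gradient ascent on the Rasch
    log-likelihood, where P(correct) = 1 / (1 + exp(-(theta - b)))."""
    responses = np.asarray(responses, dtype=float)
    theta = 0.0
    for _ in range(iterations):
        p = 1.0 / (1.0 + np.exp(-(theta - difficulties)))
        theta += lr * np.sum(responses - p)  # dL/dtheta = sum(r - p)
    return theta

# Two students each answer two of three items correctly, but one faced
# harder items. Percent correct is identical; the model's estimate is not.
easy_items = np.array([-1.0, -0.5, 0.0])  # assumed difficulty parameters
hard_items = np.array([0.5, 1.0, 1.5])

print(percent_correct([1, 1, 0]))            # 0.67 for both students
print(rasch_ability([1, 1, 0], easy_items))  # modest ability estimate
print(rasch_ability([1, 1, 0], hard_items))  # higher ability estimate
```

The point of the sketch is that two students with identical raw scores can receive different reported scores once item difficulty enters the model, which is one way design choices ripple into results.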

Test designers can choose from a variety of statistical models to create a scoring system for a cognitive assessment. Each model distributes test scores in a different way, but the purpose behind each is the same: reduce the margin of error and provide a more accurate representation of...

The Olympic edition

On this week’s podcast, Alyssa Schwenk, Brandon Wright, and David Griffith discuss alternative teacher licensing in Utah and opt-out consequences in Florida. During the research minute, Amber Northern examines the lack of college readiness in Baltimore.

Amber's Research Minute

Rachel E. Durham, "Stocks in the Future: An Examination of Participant Outcomes in 2014-15," Baltimore Education Research Consortium (August 2016).

Ohio leaders have started an important conversation about education policy under the Every Student Succeeds Act. One of the central issues is what accountability will look like—including how to hold schools accountable for the outcomes of student subgroups (e.g., pupils who are low-income or African American). Ohio’s accountability system is largely praiseworthy, but policy makers should address one glaring weakness: subgroup accountability policies.

The state currently implements subgroup accountability via the gap-closing measure, also known as “annual measurable objectives.” In brief, the measure consists of two steps: First, it evaluates a school’s subgroup proficiency rate against a statewide proficiency goal; second, if a subgroup misses the goal, the school may still receive credit if that subgroup shows year-to-year improvement in proficiency.
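As described, the gap-closing logic is a two-step check. Here is a stylized sketch in code; the statewide goal and credit rules below are simplified placeholders, not Ohio’s actual business rules:

```python
# Hypothetical sketch of the two-step gap-closing ("annual measurable
# objectives") logic described above; the 80 percent goal is illustrative,
# not Ohio's actual cut point.

def gap_closing_credit(subgroup_rate, prior_year_rate, statewide_goal=0.80):
    """Step 1: compare the subgroup's proficiency rate to the statewide goal.
    Step 2: if the goal is missed, award credit for year-over-year improvement."""
    if subgroup_rate >= statewide_goal:
        return "full credit: met statewide proficiency goal"
    if subgroup_rate > prior_year_rate:
        return "partial credit: missed goal but improved year-over-year"
    return "no credit"

print(gap_closing_credit(0.85, 0.82))  # met the goal
print(gap_closing_credit(0.55, 0.50))  # improvement credit
print(gap_closing_credit(0.55, 0.60))  # no credit
```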

This approach to accountability is deeply flawed. The reasons boil down to three major problems, some of which I’ve discussed before. First, using pure proficiency rates is poor accountability policy when better measures of achievement—such as Ohio’s performance index—are available. (See Morgan Polikoff’s and Mike Petrilli’s recent letters to the Department of Education for more on this.) Second, year-to-year changes in proficiency can be conflated with changes in student composition. For example, we might notice a jump in subgroup proficiency. But is...

Everyone is entitled to their own opinion, Daniel Patrick Moynihan famously quipped, but they are not entitled to their own facts. This idea animates "The Learning Landscape," a new, accessible, and engaging effort by Bellwether Education Partners to ground contemporary education debates in, well, facts.

A robust document, it’s divided into six “chapters” on student achievement; accountability, standards, and assessment; school finance; teacher effectiveness; charter schools; and philanthropy in K–12 education. Data on these topics can be found elsewhere, of course. Where this report shines is in offering critical context behind current debates, and doing so in an admirably even-handed fashion. For example, the section on charter schools tracks the sector’s growth and student demographics and offers state-by-state data on charter school adoption and market share (among many other topics). But it also takes a clear-eyed look at for-profit operators, the mixed performance of charters, and other thorny issues weighing on charter effectiveness. (Online charters are a hot-button topic that could have used more discussion.) Sidebars on “Why Some Charters Fail” and case studies on issues facing individual cities lend the report heft and authority, along with discussions on authorizing, accountability, and funding. In similar fashion, the chapter on standards and...

In recent years, more and more districts have encouraged students to take Advanced Placement (AP) courses because they’re more challenging and can earn them college credit. And according to the College Board, this encouragement has translated to more course taking: “Over the past decade, the number of students who graduate from high school having taken rigorous AP courses has nearly doubled, and the number of low-income students taking AP has more than quadrupled.”

Enter a new study that examines what role grade-weighting AP courses might have played in this uptick in participation (for example, a district might assign 5.0 grade points for an A in an AP course but only 4.0 for an A in a regular class).
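For illustration, here is a minimal sketch of that kind of weighting scheme. The one-point AP bonus below is an assumption for the example; districts vary in how much extra weight they assign:

```python
# Illustrative only: one common weighting scheme, where an AP course adds
# a one-point bonus on top of the standard 4.0 grade-point scale.
GRADE_POINTS = {"A": 4.0, "B": 3.0, "C": 2.0, "D": 1.0, "F": 0.0}

def weighted_gpa(courses):
    """courses: list of (letter_grade, is_ap) tuples."""
    points = [GRADE_POINTS[g] + (1.0 if is_ap else 0.0) for g, is_ap in courses]
    return sum(points) / len(points)

# An A in AP counts as 5.0; the same A in a regular course counts as 4.0.
print(weighted_gpa([("A", True), ("B", False)]))   # (5.0 + 3.0) / 2 = 4.0
print(weighted_gpa([("A", False), ("B", False)]))  # (4.0 + 3.0) / 2 = 3.5
```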

The authors surveyed over nine hundred traditional public high schools in Texas, asking whether they had weighting systems for AP courses; if so, when those systems began; and what changes had occurred in them since. Twenty-eight schools that had increased their weights made up the “treatment group,” including rural, urban, and suburban schools scattered around the Lone Star State. The control group was drawn from traditional public schools with school-level data available before any weight changes occurred. It was then...
