Last week on the Core Knowledge blog, Robert Pondiscio called for the end
of seven classroom practices that don’t work. Four of the seven practices
dealt with standards- and data-driven instruction—or, really, the
bastardization of standards- and data-driven instruction. The crux of
Pondiscio’s argument is right on the money: Standards-driven instruction is
only as good as the standards and assessments that are used to drive
instruction, and reading standards (and/or assessments) that prioritize empty
reading skills over content are sure to steer our teachers wrong.

Pondiscio’s post distracts from that point by deriding some practices that,
when done well, can powerfully drive student achievement.

Take, for example, data-driven instruction. Pondiscio is right that “using data in
half-baked or simplistic ways” is going to do very little to drive student
learning. But the answer is not to abandon data-driven instruction writ large,
but rather to encourage teachers to use data thoughtfully and purposefully.
There aren’t nearly enough examples (or quality PD purveyors) that demonstrate
how this can be done and done well. We need more.

Pondiscio derides both “dumb test prep” and “reciting lesson aim and standard.”
There is no question that test prep is virtually useless. Indeed, the fact
that test prep is used so widely while reading scores have remained
essentially flat for more than a decade demonstrates just how ineffective it
is. Why it remains the go-to method for preparing students for state tests is
beyond me.

By contrast, the practice of organizing lessons around a clearly defined aim is
critical. And putting that aim in student-friendly language, while not
absolutely necessary, can be useful. Unfortunately, the aim is too often added
as an afterthought, a compliance measure included only because school and
district leaders require it. As a result, there are countless examples of
laughable “aims,” chief among them the one Pondiscio cites in his post. (“Through this lesson I will develop
phonemic awareness and understanding of alphabetic principles.”)

But, as the Cheshire Cat
explained to Alice:
if you don’t know where you’re going, it doesn’t matter much which way you go.
And so it is in teaching: aimless lessons are too often guided by ill-chosen
activities—including the kinds of “overused teaching strategies” that Pondiscio
warns against in his post—exactly because the teacher hasn’t clearly defined
the outcome he or she is driving towards. In fact, perhaps the best way to avoid the
overuse of ineffective teaching strategies is to organize lessons around
clearly defined aims. (And to use formative data to drive short- and long-term planning.)

That said, writing great aims—particularly in reading—is incredibly
difficult. But getting it right is essential.

In the end, Pondiscio is right about one thing: poorly conceived and implemented
standards- and data-driven instruction will do little to drive achievement,
particularly in reading. But the best way to improve instruction—and to
discourage the practices that Pondiscio rightly derides—is not to abandon it
entirely, but rather to improve the foundation upon which that instruction is
built. Specifically: we need to change the way we assess reading and the way we
present data from those assessments to teachers.

Ultimately, the main reason that reading instruction is driven by skills is that
reading assessments are designed to assess mastery of reading skills in
isolation. This can’t be done well and should be abandoned. Instead,
assessments should be organized around genres and mastery of critical,
genre-specific content. And the data from those assessments should be
presented not in terms of whether students have mastered particular skills, but
rather in terms of how their comprehension varies with the genre or content covered.
