Identifying "what works" is still a work in progress


If we are going to take advantage of the End of Education Policy, and usher in a Golden Age of Educational Practice, we need our field to start taking rigorous evidence much more seriously. Getting inside the black box of the classroom is a necessary first step, and launching lots more research initiatives about teaching and learning is the second. But the big payoff will come if we can more accurately and constructively identify “what works” (and when it works, and what it costs)—and get it implemented more widely across the country.

That’s not a particularly revolutionary notion. People have been trying to figure out what works in education for at least fifty years. But we still haven’t come close to cracking this nut, and if we want to make progress, we need to figure it out. Below I offer some of my ideas on how to do so. In future posts I’ll tackle the much tougher question of scaling up evidence-based practices in the real world.

***

There are many debates in education policy that will never be settled by science because they mostly involve values, priorities, and tradeoffs. (Should parents get to choose their children’s school, and if so, should religious schools be in the mix? How much should we spend on education versus other activities that compete for our limited resources? Should our schools focus on preparation for college, career, or citizenship, or on all three equally?) Evidence can inform policy debates, but it is hardly dispositive.

Instructional practices, on the other hand, are different. Or should be. Consider elementary schools, those magical places where we work to turn pre-literate, pre-numerate kindergarteners into avid readers, writers, and problem solvers, ready to tackle the Great American Novel in middle school, capable of writing a clear five-paragraph essay, and possessing a mastery of math facts and an early understanding of algebraic reasoning.

None of this is controversial. Everyone agrees that all students need to have these basic skills, none of which come naturally to the human animal, and all of which must be taught to them in school. But this stuff is complicated. Questions of practice that elementary educators must address include ones such as:

  • How can little children make sense of the code that is the alphabet? How can we help them move smoothly from sounding out words to reading fluently and confidently?
  • How does “reading comprehension” develop? Is it a skill to be learned? Or is it more like a process—driven by how much students know about the world via subjects like history and geography and science?
  • How can children be taught to write effectively? Should we worry about spelling, grammar, and punctuation, or can that come later? How do you teach children to write strong sentences, paragraphs, and essays?
  • What about math? Should we simply teach kids that 9 + 6 = 15—memorize it now!—or is there a phase when it’s helpful to teach them various strategies to figure this out and understand why 9 + 6 = 15? Are there some ways to teach fractions that work better than others?

For all of these specific skills and processes, science can help us understand what’s happening inside kids’ brains when it’s working, when it’s not, and what that implies for specific instructional practices. To be practical, we also need to understand what works in a classroom setting. All of this is surely easier to do one-on-one, between a single teacher and a single student, but we can’t afford to employ 25 million tutors for our 25 million elementary-age students. That brings us into questions like these:

  • Should we place students in small groups with peers at their same level in reading, writing, or math? What if those peers aren’t the same age?
  • Should students practice their reading skills with books that are at their current reading level? Or at their grade level?
  • What’s the role of homework?
  • How can teachers best “manage” their classrooms? Keep an orderly, yet friendly, environment?

The best part about these questions is that their answers are knowable. In an ideal world, it would go something like this:

  • Educators identify key instructional questions for which they would like empirical answers—like those above. (Morgan Polikoff and Carrie Conaway have ideas on how to solicit those questions.)
  • Scientists design studies to test various hypotheses and approaches and answer educators’ queries. Some of these might focus on discrete skills and processes, and others might test out complete programs or curricula.
  • Committees of professional educators, using a rigorous process, sift through the research on a regular basis and develop clear guidelines for practitioners based on the available evidence. The stronger the evidence, the stronger their recommendations. They also identify additional, unanswered questions for additional study.

This is how it works in other fields, most notably medicine. It doesn’t go perfectly. Doctors still debate vociferously about various approaches to treating certain illnesses, and the medical field worries about the long time-lag between the publication of evidence-based practice guidelines and their widespread use. (One study put it at seventeen years on average!) Still, most of the components are in place, and it is one reason why we continue to get better at treating illnesses.

I have a book on my desk from the American Academy of Pediatrics, Pediatric Clinical Practice Guidelines and Policies, 14th Edition. These folks serve the same kids that our teachers and administrators do. And on a wide range of topics, from Attention Deficit Disorder to Diabetes to Sinusitis and beyond, they have a set of clinical practice guidelines that professionals have endorsed and expect to be followed. All based on rigorous studies, and which form the basis for medical education. They don’t expect doctors to figure out treatments for these illnesses on their own. They don’t expect doctors to Google “Sinusitis,” or turn to Pinterest, like so many of our teachers do.

I understand that plenty of people don’t like the medical analogy. Teaching is an art and a science, goes the argument. Fair enough. Cognitive scientist Dan Willingham prefers to point to architecture. There is a lot of art in architecture, a lot of freedom, different styles and approaches and traditions. But there are also a set of engineering principles that architects simply cannot ignore, not if they don’t want their buildings to fall down.

So too in education. There will always be, and should be, a lot of room for creativity and artistry in teaching, and a wide range of approaches, from Montessori to classical models and beyond. But there are also some design principles that cannot be ignored, not if we don’t want our children to fall behind.

The teaching of foundational reading skills is one of those areas. It’s crazy that, twenty years after the National Reading Panel report, we still have teachers who believe that kids learn to read naturally, just as they learn to speak—and that education schools are still teaching that! (Not that it’s easy to get experts to agree on what precisely the research says on the topic.) It’s like architecture professors positing that gravity is just a theory.

Surely there are a handful of other areas where strong research studies can guide instructional practice. So let’s begin there. Why don’t we have a professional organization producing “Clinical Practice Guidelines and Policies” for education, ones that would be embraced by all educators and enforced by the profession?

It wouldn’t need to start from scratch. In recent years, the federal What Works Clearinghouse has published some excellent “practice guides,” which come close to this vision. But they have no buy-in from the profession, nor do they have any teeth. It’s hard to know if anyone is reading them, much less using them. A recent IES “listening tour” does not provide much optimism on that front.

Turning back to elementary schools, there’s an opportunity. An accident of history has bequeathed us no professional association for elementary school teachers. So let’s create such a group, one whose membership draws from elementary school teachers, principals, and instructional coaches. Its board should be composed of accomplished educators, respected scholars, and other practitioners; the Americans involved in ResearchED might be a good place to look for initial leadership.

Then philanthropists could get such a group off and running on developing a set of evidence-based practice guidelines, limited to the few areas with the strongest research. This organization might also partner with companies that administer state licensure tests to revise the assessments to align with their recommendations. Here, too, the idea isn’t to invent something out of whole cloth; Massachusetts’s test for prospective teachers has covered much of this ground, especially around early reading, for years.

***

To be sure, this still doesn’t solve the problem of getting better practices in use in our classrooms. It was more than thirty years ago that the U.S. Department of Education, under the leadership of Bill Bennett and Chester Finn, published the first “What Works” guides. Much of what they identified is still legitimate today, but is also still widely ignored in our schools. Flagging evidence-based practices is clearly just half (or a quarter, or an eighth?) of the battle. We also have to convince the field, especially ed schools, to value evidence over ideology and beliefs, plus we have to battle what educator Peter Greene recently called “the thirteenth clown problem”—the challenge caused by shady vendors hawking various educational products as “evidence based” even when they clearly aren’t.

I’ll offer ideas on how to overcome these significant barriers in future posts. But just as better research on classroom practice is dependent on better data about classroom practice, so is the implementation of evidence-based practices dependent on our ability to identify what is evidence based and what is not. We have bits and pieces of that today. We need the whole enchilada.

Michael J. Petrilli
Michael J. Petrilli is the President of the Thomas B. Fordham Institute.