Efforts to improve teacher observations in New Mexico

As teacher evaluation systems evolve around the nation—decreasing the importance of student growth scores in favor of more reliance on classroom observations—how best to support principals in observing and giving feedback on teacher performance will gain importance. While research may play a part in determining best practices going forward, a recent report from the Institute of Education Sciences is more of a cautionary tale than an exemplar.

The study involved 339 New Mexico principals who were scheduled to observe their teachers for the first of multiple times in the early part of the 2015-2016 school year. According to the state’s evaluation framework, principals are required to score teachers on a 22-item rubric after each observation and to hold a feedback conference within ten days of each observation. This was the first year of full implementation of the state’s new evaluation system, which ultimately assigned ratings to every teacher in the state based on classroom observations, student growth data, surveys, and other factors. This study explored whether providing a detailed checklist to principals could improve the quality of the post-observation conferences.

To carry out the experiment, the researchers randomly assigned half of the principals to a control group, while those in the treatment group were given specific, step-by-step guidance—including a 24-item checklist and a testimonial video in which a principal from another state attested to the checklist’s efficacy. Teachers in the treatment group received the same documents and video as their principals, along with the knowledge that the checklist might be used in their feedback conferences. Adapted from a guide developed by the Carnegie Foundation for the Advancement of Teaching, the checklist aimed to improve the quality and efficacy of feedback conferences, to strengthen teacher professional development, and ultimately to raise student achievement. An important effort, if successful.

Unfortunately, the experimental design ran into problems. While 77 percent of principals in the treatment group reported viewing the checklist, only 58 percent said they actually used it with one or more teachers. Additionally, 29 percent of control group principals also reported that they had seen the checklist, and 10 percent of them reported using it with one or more teachers, despite an admonition to treatment group participants not to share it. The low uptake rate, the lack of consistent use, and the blurring of the line between treatment and control groups limit our ability to draw clear conclusions about the effectiveness of these checklists. (This has been a challenge with other federal studies, too.)

Nevertheless, teachers in the treatment group reported that principals using the checklist were less dominant during the feedback conference. This is to be expected, as the step-by-step checklist directed principals to prompt teachers for a response after each piece of feedback was presented. Principals, however, reported no significant impact of the checklist on any measure of conference quality. Both principals and teachers who used the checklist reported that it was useful, but both also raised concerns that it could lead to formulaic conferences. While teachers who received the checklist were more likely than control group teachers to report following their principals’ professional development recommendations, there was no clear impact on teachers’ subsequent classroom observation rating scores during the school year. The researchers posited that the study’s short timeline meant that PD suggestions could not be acted upon within the year: teachers may use the feedback to improve their practice, but likely not until the summer, with effects potentially visible the following school year. Finally, the feedback conference checklist had no clear impact on student achievement outcomes—as measured by state math and English language arts exams—or on school report card grades released after the close of the school year. Once again, the short timeline of the study design is the likely culprit.

Productive conversations between teachers and principals about instructional effectiveness are certainly to be encouraged and will likely become more important in states whose evaluation frameworks prioritize observation-based evaluation. Can a step-by-step checklist help facilitate those conversations and the improvements they aim to produce? It seems the jury is still out.

SOURCE: Kata Mihaly et al., “Impact of a checklist on principal-teacher feedback conferences following classroom observations,” Institute of Education Sciences, U.S. Department of Education (January 2018).

Jeff Murray
Jeff Murray is the Ohio Operations Manager of the Thomas B. Fordham Institute,