The Fallout from New York’s Teacher Ratings

Controversy erupted in February after the New York City Department of Education published ratings of 18,000 public school teachers. The ratings have been criticized as inaccurate, and even the team that designed the system recommended against making the data public. New York City teachers and administrators have argued that publishing the ratings has damaged morale, and the teachers’ union has fought to minimize the effect of the ratings on personnel decisions. Even so, the ratings are already affecting how teachers are perceived.

The Math Behind the Ratings
The Teacher Data Reports (TDRs) are calculated using a “value-added” formula designed to determine how much a teacher influences their students’ standardized test scores. But mathematically speaking, the ratings have almost no meaning, as indicated by fantastically high margins of error: 53 points in English and 35 points in math. To put that in perspective, a math teacher rated a perfect 100 might actually deserve a 65 (a D on a standard grading scale), whereas an English teacher who scored 47 may actually deserve 100 (or, for that matter, 0). One of the designers of the ranking system told the New York Times that he thought the data could be useful when considered along with other measures of performance, but he also said that he considered releasing the data to the public “at best unwise, at worst absurd.”
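
To make that arithmetic concrete, here is a minimal sketch that simply treats the published margin of error as a plus-or-minus band around a reported score, clamped to the 0–100 range. It is an illustration of the figures cited above, not the Department of Education’s actual value-added model, and the function name is purely hypothetical.

```python
# Illustrative sketch only: treats the published margin of error as a simple
# plus-or-minus band around the reported score, clamped to the 0-100 range.
# The real value-added model is more involved; the numbers below are just the
# figures cited in the article (35 points in math, 53 in English).

def plausible_range(reported_score, margin_of_error):
    """Return the lowest and highest 'true' scores consistent with a report."""
    low = max(0, reported_score - margin_of_error)
    high = min(100, reported_score + margin_of_error)
    return low, high

# A math teacher reported at a perfect 100 could plausibly merit anything from 65 to 100.
print(plausible_range(100, 35))  # (65, 100)

# An English teacher reported at 47 could plausibly merit anything from 0 to 100.
print(plausible_range(47, 53))   # (0, 100)
```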

In addition to the bizarrely high margins of error, Principal Elizabeth Philips of P.S. 321 has reported discovering numerous errors in the ratings for her own school. Among them: one teacher was rated for a year in which she had been on child-care leave, other teachers received no ratings for years in which they had taught, and one teacher was rated on the test scores of another teacher’s students. P.S. 321’s overall rating has also fluctuated wildly from year to year, jumping from the 36th percentile in 2007–08 to the 95th percentile in 2009–10. Philips believes that 59-point leap reflects only a mild improvement: the difference between students earning Bs and As.
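
A toy calculation suggests why such a large percentile swing can correspond to a fairly small change in underlying scores: when most schools’ averages are tightly clustered, even a modest gain leapfrogs a large share of them. The sketch below uses entirely hypothetical numbers, not P.S. 321’s actual scores or the Department of Education’s method.

```python
# Toy illustration (not the Department of Education's model) of how a modest
# gain in average scores can look like an enormous percentile jump when most
# schools are tightly clustered. All numbers here are hypothetical.
import random

random.seed(42)

# 1,000 hypothetical schools whose average test scores cluster around 75.
school_averages = [random.gauss(75, 3) for _ in range(1000)]

def percentile_rank(score, population):
    """Percent of schools in the population scoring below the given average."""
    below = sum(1 for s in population if s < score)
    return 100 * below / len(population)

# A six-point gain in average score, from 74 to 80...
print(round(percentile_rank(74, school_averages)))  # ...moves a school from roughly the 37th percentile
print(round(percentile_rank(80, school_averages)))  # ...to roughly the 95th percentile
```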

Ratings and Morale
Perhaps because most teachers pass through a rigorous certification process before entering a profession that they passionately believe in, many educators see TDRs as demeaning. In an email to the Washington Post, Principal Philips wrote, “The idea of the TDRs being publicly released with names attached is incredibly demoralizing to teachers — and this includes ones who scored above average. Because they understand that some of their well respected colleagues scored low, there is the feeling that this is arbitrary, and that ‘this could be me next year.’ I worry that some of the best teachers, who are the ones who have options for jobs elsewhere, will leave the system.”

In a recent New York Times op-ed piece, Brooklyn teacher William Johnson argued that the best evaluations come directly from students: “Few things are more excruciating for a teacher than leading a class that’s not learning. Good administrators use the evaluation processes to support teachers and help them avoid those painful classroom moments — not to weed out the teachers who don’t produce good test scores or adhere to their pedagogical beliefs.”
