Monday, March 05, 2012

Why it’s no surprise high- and low-rated teachers are all around

One of the most interesting debates about the NY value-added data is what it says about whether poor and minority schools get stuck with a preponderance of the worst teachers.  Some studies show this to be the case (captured on pages 80-85 of my school reform presentation, at: www.arightdenied.org/presentation-slides), but a recent NYT article said the following (as captured by the Gotham Schools article below):

The New York Times' first big story on the Teacher Data Reports released last week contained what sounded like great news: After years of studies suggesting that the strongest teachers were clustered at the most affluent schools, top-rated teachers now seemed as likely to work on the Upper East Side as in the South Bronx.

Teachers with high scores on the city's rating system could be found "in the poorest corners of the Bronx, like Tremont and Soundview, and in middle-class neighborhoods," "in wealthy swaths of Manhattan, but also in immigrant enclaves," and "in similar proportions in successful and struggling schools," the Times reported.

But as the Gotham Schools article points out:

Value-added measurements like the ones used to generate the city's Teacher Data Reports are designed precisely to control for differences in neighborhood, student makeup, and students' past performance.

The adjustments mean that teachers are effectively ranked relative to other teachers of similar students. Teachers who teach similar students, then, are guaranteed to have a full range of scores, from high to low. And, unsurprisingly, teachers in the same school or neighborhood often teach similar students.

"I chuckled when I saw the first [Times story], since the headline pretty much has to be true: Effective and ineffective teachers will be found in all types of schools, given the way these measures are constructed," said Sean Corcoran, a New York University economist who has studied the city's Teacher Data Reports.

The design stems from evaluators' attempts to improve on the way teachers are judged. In the past, assessments of teacher quality tended to look only at students' test scores: A teacher whose students scored higher was deemed stronger. But that design stacked the deck against teachers whose students started the school year with greater needs and lower scores.

The idea behind value-added measurements is that they look instead at how much growth students make in a year. Teachers are rewarded not when their students score highest, but when the students' performance gains exceed the average gains made by similar students.

So while the ratings were explicitly designed to compare teachers who work with similar students, they cannot compare teachers who don't. "This is just a difficult question that we still don't know how to answer — this question of how to compare teachers who are in very different kinds of schools," said Douglas Staiger, a Dartmouth College economist.

He added, "There are a lot of issues that I disagree with critics of value-added. But this is a real issue that it's not clear how best to handle."

So here's an example of the dilemma, for which there is no easy answer (as best I understand it; I don't claim to be an expert): Let's say a poor, minority kid enters 5th grade having made only ½ a grade level of progress in reading each of the past few years.  Adjusting for all of these factors, the value-added model will naturally predict a baseline performance of ½ a grade level of progress in 5th grade.  So how would one evaluate the 5th grade reading teacher if the student makes ¾ of a grade level of progress?  My understanding is that the value-added method would rate that teacher very highly, but I wouldn't, as the student STILL fell even further behind, but just not as much as in previous years.  But I also understand the counter-argument that the teacher propelled a VERY challenging student ahead much further than other teachers in previous years, so shouldn't that be applauded?  Once you understand the adjustments to the value-added method, it's easy to see why the results, showing high- and low-rated teachers spread quite evenly across all schools, are not inconsistent with other studies that show that poor and minority kids get a much higher proportion of the dregs of our national teaching talent pool.
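To make the arithmetic concrete, here is a minimal sketch in Python using the hypothetical numbers from the paragraph above. The one-grade-level benchmark and the simple subtraction are deliberate simplifications for illustration; the city's actual model regresses on many covariates, but the tension is the same.

# Simplified illustration of the value-added dilemma described above.
# Numbers are hypothetical; real models use regression on many covariates.

GRADE_LEVEL_EXPECTATION = 1.0   # a "typical" student gains one grade level per year

def value_added(actual_growth, predicted_growth):
    """Value-added: how much the student's growth exceeded the growth
    predicted for similar students (here, the student's own history)."""
    return actual_growth - predicted_growth

predicted = 0.5   # student has gained ~1/2 grade level per year historically
actual = 0.75     # growth observed in this teacher's class

print(f"Value-added score:  {value_added(actual, predicted):+.2f}")    # +0.25 -> rated highly
print(f"Gap to grade level: {actual - GRADE_LEVEL_EXPECTATION:+.2f}")  # -0.25 -> still falling behind

The two printed lines capture the dilemma: the same ¾ of a grade level of growth is a positive signal relative to the student's predicted trajectory, yet a negative one relative to grade-level expectations.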


One of my friends adds:


Irrespective of what these data do or don't say, I think the way to think about it in general is that a B+ teacher in a wealthy school will get A+ results, but that same teacher in a poor school will get C- results, because it's so much harder.  So we need to stack our poor schools with A++ teachers.  The problem with a percentile-based metric like VAM is that if every teacher in the poorest schools were A++, they'd still be rated all throughout the 1-100th percentile range, and it would look like there were some bad teachers even if there weren't (we're a long way from having that problem, though).
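A rough Python sketch of that percentile problem, with invented numbers (the pool size, the "true quality" range, and the noise level are arbitrary assumptions): even if every teacher in a pool is excellent, ranking them against each other still spreads the measured scores across the full percentile range.

# Hypothetical illustration: percentile ranks within a pool always spread
# from low to high, even if every teacher in the pool is excellent.
import random

random.seed(0)

# Imagine 20 teachers who are all "A++": true effect of +0.9 to +1.0 grade
# levels, measured with a little noise (as any test-based estimate would be).
measured = [random.uniform(0.9, 1.0) + random.gauss(0, 0.05) for _ in range(20)]

def percentile_rank(score, pool):
    """Percentile rank of one teacher relative to the others in the same pool."""
    return 100.0 * sum(other < score for other in pool) / (len(pool) - 1)

ranks = sorted(percentile_rank(s, measured) for s in measured)
print(ranks)  # spans roughly 0 to 100 even though everyone is excellent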

---------------------

Why it's no surprise high- and low-rated teachers are all around

by Philissa Cramer, at 1:15 pm

http://gothamschools.org/2012/02/29/why-its-no-surprise-high-and-low-rated-teachers-are-all-around 
