Analogy for Understanding the Controversy over the Value-Added Data
I just realized the perfect analogy for understanding the controversy over the value-added data that NYC recently released (and which LA released a while back): the SAT. Neither the SAT nor the value-added method is perfect: there are lots of errors around the edges, and one can always find extreme cases that make the whole thing look bogus (the extremely smart, driven, capable kid who would excel at any college but has lousy SAT scores; the genuinely great teacher who somehow gets a very poor value-added ranking). So why do almost all colleges continue to use SAT scores as ONE part of the evaluation process? Simple: because across large numbers of students, the SAT DOES correlate with college success to a modest but statistically significant degree (see http://abcnews.go.com/Technology/WhosCounting/story?id=98373&page=1#.T1AfzPVDeg4, for example).
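To see why a measure that is noisy for any one individual can still be a meaningful signal in the aggregate, here's a minimal simulation sketch (all numbers are made up for illustration; this is not real SAT or value-added data): a predictor that is mostly noise at the individual level can still show a modest but highly significant correlation across thousands of people.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n = 10_000                     # a large number of students
ability = rng.normal(0, 1, n)  # unobserved "true" readiness

# Assume the test score is ability plus a lot of individual-level noise,
# and that college success also depends on ability plus other factors.
test_score = ability + rng.normal(0, 2, n)
college_success = ability + rng.normal(0, 2, n)

r, p = stats.pearsonr(test_score, college_success)
print(f"correlation r = {r:.2f}, p-value = {p:.1e}")
# Typical output: r around 0.2 with a p-value of essentially zero --
# modest but statistically significant, even though plenty of
# individuals (the "extreme cases") deviate wildly from the trend.
```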
I view the value-added methodology similarly. It shouldn't be used by itself (just as no college admits students solely on the basis of SAT scores), but it's an important metric that should be one of multiple measures (and, of course, its flaws should be fixed to the extent possible). So what evidence do I have that it's an important metric rather than a nearly random, and therefore useless and unfair, one? That's easy: the NBER study released in January, which (though this isn't public) I happen to know used NYC data, showed an ENORMOUS positive lifelong impact on students from teachers ranked very highly using the value-added method (and, sadly, the converse was true as well). For my blog posts on this study, see: http://edreform.blogspot.com/2012/01/long-term-impacts-of-teachers-teacher_16.html, http://edreform.blogspot.com/2012/01/value-of-teachers.html, and http://edreform.blogspot.com/2012/01/new-study-gauges-teachers-impact-on.html
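For readers wondering what a value-added calculation actually looks like, here is a deliberately simplified sketch of the core idea (the real NYC model controls for many more factors, and every name and number below is hypothetical): predict each student's end-of-year score from the prior-year score, then credit each teacher with the average residual (actual minus predicted) of his or her students.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

n_teachers, students_per = 50, 25
teacher = np.repeat(np.arange(n_teachers), students_per)
true_effect = rng.normal(0, 0.2, n_teachers)   # hypothetical teacher effects

prior = rng.normal(0, 1, n_teachers * students_per)
post = 0.7 * prior + true_effect[teacher] + rng.normal(0, 0.5, prior.size)

df = pd.DataFrame({"teacher": teacher, "prior": prior, "post": post})

# Fit post ~ prior by least squares, then average each teacher's residuals.
slope, intercept = np.polyfit(df["prior"], df["post"], 1)
df["residual"] = df["post"] - (intercept + slope * df["prior"])
value_added = df.groupby("teacher")["residual"].mean()

print(value_added.sort_values(ascending=False).head())
```

With only 25 students per teacher, any one teacher's estimate is noisy, which is exactly why the rankings produce occasional absurd results; but averaged over many students and years, the estimates track the true effects.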