Value-added modeling

Value-added modeling predicts each student's test score from prior achievement and other background characteristics; the difference between the predicted and actual scores, if any, is assumed to be due to the teacher and the school, rather than to the student's natural ability or socioeconomic circumstances.[1]
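
To make the mechanics concrete, here is a minimal sketch of the idea in Python, using simulated data and hypothetical effect sizes (real value-added models control for many more student and classroom covariates than prior scores alone): predict each student's score from prior achievement, then average the residuals by teacher.

    import numpy as np

    rng = np.random.default_rng(0)

    n_students, n_teachers = 200, 10
    prior = rng.normal(50, 10, n_students)             # prior-year test scores
    teacher = rng.integers(0, n_teachers, n_students)  # teacher assignment
    true_effect = rng.normal(0, 2, n_teachers)         # unobserved teacher effects
    actual = 0.8 * prior + true_effect[teacher] + rng.normal(0, 5, n_students)

    # Step 1: predict each student's score from prior achievement alone.
    slope, intercept = np.polyfit(prior, actual, 1)
    predicted = slope * prior + intercept

    # Step 2: the residual (actual - predicted) is attributed to the teacher;
    # a teacher's value-added estimate is the mean residual of her students.
    residual = actual - predicted
    value_added = np.array([residual[teacher == t].mean()
                            for t in range(n_teachers)])
    print(value_added.round(2))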

Louisiana legislator Frank A. Hoffmann introduced a bill to authorize the use of value-added modeling techniques in the state's public schools as a means to reward strong teachers, to identify successful pedagogical methods, and to provide additional professional development for teachers identified as weaker than others.

A school with high levels of student turnover may have difficulty in collecting sufficient data to apply this model.

As a result, it is difficult to use this model to evaluate first-year teachers, especially in elementary school, since they may have taught only 20 students.[9]
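
To see why small samples matter: the uncertainty in a class-average score shrinks only with the square root of class size. With a hypothetical within-class score standard deviation of 10 points, a class of 20 students gives a standard error of 10/√20 ≈ 2.2 points for the class mean, which can be as large as the teacher effects being estimated.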

The idea of judging the effectiveness of teachers based on the learning gains of students was first introduced[10] into the research literature in 1971 by Eric Hanushek,[11] currently a Senior Fellow at the conservative[12][13][14] Hoover Institution, an American public policy think tank located at Stanford University in California.

First created as a teacher evaluation tool for school programs in Tennessee in the 1990s, the technique came into wider use with the passage of the No Child Left Behind legislation in 2002.[1]

In February 2011, Derek Briggs and Ben Domingue of the National Education Policy Center (NEPC) released a report reanalyzing the same dataset from the L.A. Unified School District, attempting to replicate the results published in the Times. They found serious limitations in the previous research, concluding that the "research on which the Los Angeles Times relied for its August 2010 teacher effectiveness reporting was demonstrably inadequate to support the published rankings."[18]

"[18] The Bill and Melinda Gates Foundation is sponsoring a multi-year study of value-added modeling with their Measures of Effective Teaching program.

A reanalysis of the MET report's results conducted by Jesse Rothstein, an economist and professor at the University of California, Berkeley, disputes some of these interpretations, however.[19]

Rothstein argues that the analyses in the report do not support its conclusions, and that "interpreted correctly... [they] undermine rather than validate value-added-based approaches to teacher evaluation."

While there has been considerable anecdotal discussion about the importance of school leaders, there has been very little systematic research into their impact on student outcomes.

This outcome-based approach to measuring effectiveness of principals is very similar to the value-added modeling that has been applied to the evaluation of teachers.

The EPI report recommends that measures of performance based on standardized test scores be one factor among many considered, so as to "provide a more accurate view of what teachers in fact do in the classroom and how that contributes to student learning."[22]

Edward Haertel, who led the Economic Policy Institute research team, wrote that the methodologies being pushed as part of the Race to the Top program placed "too much emphasis on measures of growth in student achievement that have not yet been adequately studied for the purposes of evaluating teachers and principals" and that the techniques of value-added modeling need to be more thoroughly evaluated and should only be used "in closely studied pilot projects".[1]

Education policy researcher Gerald Bracey further argued that a correlation between teachers and short-term changes in test scores may be irrelevant to the actual quality of teaching.

The American Statistical Association (ASA) cited limitations of the input data, the influence of factors not included in the models, and large standard errors that make year-to-year rankings unstable.
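
The instability the ASA describes can be illustrated with a small simulation (hypothetical parameters, not drawn from any actual evaluation system): when classroom-level sampling noise is comparable in size to the true differences between teachers, the same teachers receive substantially different rankings in consecutive years.

    import numpy as np

    rng = np.random.default_rng(1)
    n_teachers, class_size = 100, 25
    true_effect = rng.normal(0, 1, n_teachers)  # persistent teacher effects

    def yearly_estimate():
        # One year's estimate = true effect + classroom sampling noise
        # (student-level noise, sd 5, averaged over the class).
        noise = rng.normal(0, 5, (n_teachers, class_size)).mean(axis=1)
        return true_effect + noise

    year1, year2 = yearly_estimate(), yearly_estimate()

    # Spearman rank correlation between the two years' teacher rankings;
    # with these parameters it comes out near 0.5, far from stable.
    rank1 = np.argsort(np.argsort(year1))
    rank2 = np.argsort(np.argsort(year2))
    print(f"year-to-year rank correlation: {np.corrcoef(rank1, rank2)[0, 1]:.2f}")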