Most of the stories, like this one from Reuters, focus on the study's finding that teacher performance can indeed be predicted by performance measures. The best evaluations involve a weighted average of student test scores, teacher observations and student evaluations. Any one of these by itself is a much less accurate predictor.
There are nuances to this that can be gleaned from the project's Policy and Practitioner Brief:
- The different measures (student testing, teacher observation and student evaluation) overlap somewhat, but each largely captures a different aspect of a teacher's skill.
- Different weightings are better predictors of different outcomes. Unsurprisingly, placing greater weight on test results is a better predictor of future student test results. However, equal-weighting models, or those that emphasize teacher observations, are more stable from year to year.
- Effective teacher observations are more than a periodic visit from the principal. Evaluations require a consistent framework and procedure. The MET project used the Danielson Framework for Teaching as a rubric. The reliability of teacher observations is greatly improved by having at least two evaluators.
- When done properly, student evaluations are very reliable and an important component of teacher evaluation. As with observations, the key is to ask the right questions. The MET project used the Tripod Student Survey.
- The "value added" theory is supported. When student scores are compared with the previous year's performance (a value-added score), the result is a more consistent predictor of future teacher performance than the most recent year's scores alone.
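The weighted-average and value-added ideas above can be sketched in a few lines of code. The weights, scores and function names below are purely illustrative assumptions for the sake of the example; they are not the MET project's actual models or data:

```python
# Illustrative sketch of a composite teacher-evaluation score.
# All weights and scores here are hypothetical, not from the MET project.

def value_added(current_scores, prior_scores):
    """Mean student gain over the prior year: a simple value-added proxy."""
    gains = [cur - prior for cur, prior in zip(current_scores, prior_scores)]
    return sum(gains) / len(gains)

def composite_score(test_score, observation_score, survey_score,
                    weights=(0.5, 0.25, 0.25)):
    """Weighted average of the three measures, each on a common 0-1 scale."""
    w_test, w_obs, w_survey = weights
    return (w_test * test_score
            + w_obs * observation_score
            + w_survey * survey_score)

# Hypothetical students: this year's scores vs. last year's (0-1 scale)
gain = value_added([0.72, 0.65, 0.80], [0.60, 0.58, 0.74])
print(round(gain, 3))  # → 0.083

# Hypothetical teacher: test 0.62, observation 0.70, student survey 0.75
print(composite_score(0.62, 0.70, 0.75))  # → 0.6725
```

Shifting the weights toward observations (say, `(0.25, 0.5, 0.25)`) models the study's point that different weightings trade predictive accuracy for year-to-year stability.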