Trends, Studies and Research V

V

VALUE-ADDED MODEL

 

Instructional Alignment as a Measure of Teaching Quality

Another study finds a weak link between instructional alignment and teacher evaluation scores, offering little evidence to support the use of value-added models (VAMs) as a measure of teaching quality. Using data from the Gates Foundation MET Project, the researchers find little evidence of a substantial association between instructional alignment and student outcomes as measured by VAMs, and similarly little support for an association between instructional alignment and teacher effectiveness as gauged by the MET composite measure. The authors note that the most plausible explanation for their findings is that “the tests used for calculating VAMs are not particularly able to detect differences in the content or quality of classroom instruction.” The study adds to the uncertainty around VAMs as measures of educator effectiveness by showing that they bear little relationship to instructional alignment, which previous studies have positively associated with student outcomes. It also gives the NEA and its affiliates grounds to push for alternative measures of teacher effectiveness that are less volatile and more useful for professional development and for high-stakes decisions about employment and compensation.

 

Measures of Effective Teaching (MET) Project

The Bill & Melinda Gates Foundation's MET project, one of the largest instructional-observation studies in the country, has found that teacher-effectiveness assessments similar to those used in some district value-added systems do a poor job of capturing the differences that matter between the most- and least-effective educators, and often mischaracterize the "messy middle" that most teachers occupy.

 

Review of Measuring the Impacts of Teachers Study

Measuring the Impacts of Teachers: Parts I and II, a highly influential report on teachers’ impact on student outcomes, suffers from a series of errors in methodology and calculation. According to a review by the National Education Policy Center (NEPC), the report on teacher value-added (VAM) impact ignores information that contradicts its findings, and the report’s own results reveal that calculated teacher value-added is unreliable. The report also relies on an erroneous calculation to support a favorable result and assumes that the miscalculated result holds across students’ lifetimes, “despite the authors’ own research indicating otherwise,” the reviewer notes. “Despite widespread references to this study in policy circles, the shortcomings and shaky extrapolations make this report misleading and unreliable for determining educational policy,” the reviewer concludes.

 

Trouble with Those Value-Added Numbers

Matthew Di Carlo, writing on the Shanker Blog, used newly released New York City value-added data to gauge the validity of a popular argument: that dismissing the bottom 5 to 10 percent of teachers would raise test scores to the levels of the highest-performing nations. Di Carlo tracked the bottom decile of math teachers from 2009 into 2010 and found, first, that roughly half had left the district of their own accord by 2010. For those who stayed, 2010 percentile ranks spread widely across the distribution rather than staying at the bottom. In plain English, the ratings are so volatile from one year to the next that they have little actual or predictive value.
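To illustrate the kind of year-to-year instability Di Carlo describes, the following is a minimal synthetic sketch in Python, not his New York City data: it assumes each teacher has a stable underlying effect that is estimated each year with substantial noise, then checks where 2009's bottom decile lands in the 2010 ranking. The teacher count and noise level are assumptions chosen only for illustration.

    import numpy as np

    # Synthetic illustration (not Di Carlo's NYC data): each teacher has a
    # stable underlying effect, but each year's value-added estimate adds
    # substantial measurement noise (the noise level is an assumption).
    rng = np.random.default_rng(0)
    n_teachers = 1_000
    true_effect = rng.normal(0.0, 1.0, n_teachers)   # stable teacher quality
    noise_sd = 1.5                                   # assumed estimation noise
    score_2009 = true_effect + rng.normal(0.0, noise_sd, n_teachers)
    score_2010 = true_effect + rng.normal(0.0, noise_sd, n_teachers)

    # Percentile rank of each teacher within each year
    rank_2009 = score_2009.argsort().argsort() / n_teachers
    rank_2010 = score_2010.argsort().argsort() / n_teachers

    # Follow 2009's bottom decile into 2010
    bottom_2009 = rank_2009 < 0.10
    still_bottom = (rank_2010[bottom_2009] < 0.10).mean()
    median_rank = np.median(rank_2010[bottom_2009])

    print(f"2009 bottom-decile teachers still in the bottom decile in 2010: {still_bottom:.0%}")
    print(f"Their median 2010 percentile rank: {median_rank:.2f}")

Under these assumed settings, most of the simulated "bottom" teachers do not stay in the bottom decile the following year, which is the pattern Di Carlo observed in the actual data.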

 
