A new analysis from the Manhattan Institute (a known mouthpiece for the Foundation for Excellence in Education and a strong supporter of Jeb Bush’s education policies) suggests Value Added Measurement, or VAM, can be part of a “sensible policy for retaining a teacher,” but it also calls VAM “imperfect,” and the group partially validates VAM criticisms.
|VAM Impacts Evaluations|
|Teachers Question Use of VAM|
|Read Kim's story: A former Teacher of the Year|
Several weeks ago (in early November), FEA President Andy Ford sent a letter to Governor Scott urging him to use his executive authority to suspend the law requiring districts to apply VAM scores to teacher evaluations for the 2011-2012 school year. He also asked the Governor to halt the use of VAM until there is clear, concise and universal research supporting its use in evaluating teacher performance.
|Read President Andy Ford's Letter to Gov. Scott|
|Gov. Scott's Response to FEA|
|FLDOE Commissioner Pam Stewart Response to FEA|
|President Ford's Response to Gov. Scott and FLDOE|
FEA is concerned about how this year's deeply flawed VAM scores will play out in the future.
In three years, this data is supposed to be part of the awarding of merit pay. How will this play out when the DOE cavalierly suggests rating a teacher’s performance as “effective” if “one cannot statistically conclude whether the VAM score is above or below whatever threshold has been established in your system”?
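The DOE's threshold rule amounts to a simple statistical check. A minimal sketch in Python, with hypothetical scores, margins of error, and threshold (real VAM confidence intervals come from far more complex models):

```python
# Sketch of the decision rule quoted above: if a VAM score's margin of
# error straddles the threshold, the estimate is statistically
# inconclusive and the teacher defaults to "effective".
# All numbers below are hypothetical.

def classify(vam_score: float, margin_of_error: float, threshold: float) -> str:
    lower = vam_score - margin_of_error
    upper = vam_score + margin_of_error
    if lower > threshold:
        return "above threshold"       # statistically above the cutoff
    if upper < threshold:
        return "below threshold"       # statistically below the cutoff
    return "effective (inconclusive)"  # interval straddles the cutoff

print(classify(0.5, 0.8, 0.0))  # wide margin: inconclusive, so "effective"
print(classify(0.5, 0.2, 0.0))  # narrow margin: clearly above
```

Note that the same raw score of 0.5 lands in different categories depending only on the margin of error, which is exactly the ambiguity FEA is raising.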
Value Added Measurement, or VAM, is one of the most controversial topics in education; you could call it the current education reform fad. An increasing number of states are looking at its use for teacher evaluations, even though there remains great debate over whether it produces quality data that can meaningfully improve student achievement and teacher quality. There simply isn't enough research to support its use.
Florida tried a value-added approach to merit pay for schools in the 1980s. District statisticians found that their value-added models identified different schools as meritorious depending on which factors they controlled for, and they realized there was no right way to decide what to control for. Still, a great deal of emphasis is being placed on VAM.
The FLDOE was supposed to set score categories this year to help districts with their assessments, but an administrative judge's ruling nullified the complex formula the FLDOE had proposed for calculating the VAM score, so each district had to decide on its own which teachers are highly effective, effective, needs improvement or unsatisfactory. The state turned over preliminary VAM data to the districts last month; district administrators were to review the information, make corrections and return the final data to the FLDOE.
Value-added analysis provides an estimate of how well a teacher is doing at increasing student performance on standardized tests. Or, more accurately, it measures how well a teacher teaches to the test.
Policymakers say it provides an idea of how well a student will do on future tests by comparing past test scores (the baseline) to current scores, measuring student academic growth.
Academic growth = current/recent test performance − baseline (past test performance)
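As a rough sketch, the growth formula above is a per-student subtraction, which could then be averaged over a class. A minimal Python illustration with invented scores (real VAM models layer regression and many statistical controls on top of this basic idea):

```python
# Minimal sketch of "growth = current minus baseline".
# Scores below are hypothetical; real models are far more complex.

def academic_growth(current_score: float, baseline_score: float) -> float:
    """Growth = current test performance minus past (baseline) performance."""
    return current_score - baseline_score

def average_class_growth(scores: list[tuple[float, float]]) -> float:
    """Average growth across a class of (current, baseline) score pairs."""
    return sum(academic_growth(cur, base) for cur, base in scores) / len(scores)

# Hypothetical class of three students: (current score, baseline score)
class_scores = [(78.0, 70.0), (65.0, 68.0), (90.0, 81.0)]
print(average_class_growth(class_scores))  # average of growths 8, -3, 9
```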
Yet, the U.S. Department of Education has estimated that VAM will be wrong 25% of the time.
In a report for the Annenberg Institute for School Reform, researchers point out that "because value-added is statistically estimated, it is subject to uncertainty, or a 'margin of error'." Teacher ratings can vary wildly from year to year based on the student population, their prior test scores, school environment and conditions, and many other factors.
Recently, the Ohio Department of Education began encouraging its schools to dial back the emphasis on value-added scores. Instead, schools are urged to use other ways of incorporating student performance into teacher evaluations. District leaders want less than 50 percent of a teacher's evaluation to be based on a single value-added score; the Ohio Department of Education recommends starting VAM at 10 percent of a teacher's evaluation and working it upward over time. Administrators say it is difficult to fully measure a teacher's impact.
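The weighting idea Ohio suggests can be sketched as a simple weighted average. Here is a hedged Python illustration in which the component names (classroom observation, portfolio) and all scores are hypothetical; only the 10 percent VAM starting weight comes from the recommendation above:

```python
# Hypothetical illustration of starting VAM at a low weight (10%) and
# filling the remainder of the evaluation with other measures.
# Component names and scores are invented for illustration.

def composite_rating(vam: float, observation: float, portfolio: float,
                     vam_weight: float = 0.10) -> float:
    """Weighted average with VAM held at vam_weight; the remainder is
    split evenly between the two other (hypothetical) components."""
    other_weight = (1.0 - vam_weight) / 2
    return vam * vam_weight + (observation + portfolio) * other_weight

# Each component scored on the same 0-100 scale (hypothetical numbers).
print(composite_rating(vam=60.0, observation=85.0, portfolio=90.0))
```

A low VAM weight means a noisy value-added score moves the overall rating only slightly, which is the point of Ohio's go-slow approach.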
Why? Because standardized tests aren't always the most effective assessment of student skill and learning. Using standardized tests as a measurement tool doesn't take into account the learning and growth students have made in areas outside the scope of the test, not only academically but socially and behaviorally as well. Often, teachers and ESPs have a huge impact on students in these areas, and that impact can make a big difference in the classroom.
Value-added could be a promising tool, but it must be further refined and deployed with extreme caution. There's too much missing from the equation: it cannot assess what really inspired a student to achieve, and it cannot assess the lesson plans or learning strategies the teacher employed to make a difference for their students.
There is much that Value Added Measurement simply will not evaluate.
FEA has been a strong supporter of high standards for teaching and learning and is pleased with the reports of thoughtful discussions about instructional practice occurring between teachers and their administrators. But we have continually raised concerns about the inclusion of Value Added Model (VAM) scores as a significant part of every teacher’s evaluation and career prospects.
We all need evaluations, teachers included, because they provide a guide for setting personal achievement goals. But a proper evaluation includes a broad mix of assessment tools and measurements, including but not limited to observations of classroom practice, portfolios of teachers' work, student learning objectives, test scores, homework, and surveys of students. If we are serious about education reform, quality teaching and advancing student achievement, then we need to mix it up and add more resources to the education reform toolkit.