Educator Evaluations Released Statewide

Katya Starostina, Graduate Research Assistant, Urban Initiative

On November 21st, teacher evaluation ratings for the 2012-2013 school year were released for the first time, covering more than 200 school districts in Massachusetts. The release was prompted by a requirement that all Race to the Top districts (recipients of a federal grant) implement the new educator evaluation framework. Previously, schools simply noted whether educators did or did not meet expectations, if educators were rated at all.

Data were reported by school and district for teachers, and only district-wide for administrators; ratings for individual educators were not reported. Superintendents were responsible for evaluation and designated the individual evaluators. Although the evaluation system is aimed at providing useful feedback to struggling educators, removing the worst performers, and raising overall performance, the overwhelming majority of districts across the state rated their educators very highly.

For Fall River and New Bedford, the data showed interesting results. When compared against school performance data recently compiled for UI’s SouthCoast Indicators Project, there appears to be hardly any correlation between how districts rated their educators and how the state evaluated their schools. Two schools illustrate the disconnect: their educator ratings are among the best and close to the state average, yet their state performance is far below average. Durfee High, a Level 3 school, received an overall performance score of 9 out of 100 relative to other schools serving similar grades, and Mary Fonseca Elementary, also a Level 3 school, received a score of 2 out of 100. These schools’ performance is drastically lower than the state average, but their educator ratings are about the same.
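The "hardly any correlation" claim can be checked directly once the two datasets are lined up by school. The sketch below does this in pure Python with a hand-rolled Pearson coefficient; all the numbers are made-up placeholders for illustration, not the actual figures reported by the state or by UI.

```python
# Hypothetical illustration: correlate each school's state performance
# percentile with the share of its educators rated 'proficient' or
# 'exemplary'. All values below are invented placeholders, NOT the
# reported DESE or SouthCoast Indicators figures.
from math import sqrt

performance_pct = [9, 2, 55, 70, 31]           # state percentile (0-100)
high_rating_share = [0.90, 0.94, 0.95, 0.90, 0.91]  # fraction rated proficient+

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson(performance_pct, high_rating_share)
print(f"r = {r:.2f}")  # near zero here: ratings barely track performance
```

With these placeholder values the coefficient comes out close to zero, which is the pattern the comparison above describes: educator ratings cluster near the state average regardless of how the school itself scores.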

The chart below presents the ratings for Fall River educators: blue denotes educators rated in the ‘unsatisfactory’ category, red ‘needs improvement,’ yellow ‘proficient,’ and green ‘exemplary.’ There is great variability in the ratings across schools. These educator evaluations were based primarily on observations and items like student work and lesson plans; in the next few years, student achievement data such as standardized test scores will be incorporated.

[googleapps domain="docs" dir="spreadsheet/pub" query="key=0AqeZCLy9YsKIdEtsWVRkYmIzV2wwaHMyeWxoeFBMVFE&output=html&widget=true" width="600" height="400" /]

For New Bedford, the ratings look drastically different, with far fewer ‘unsatisfactory’ educators, far more ‘exemplary’ educators, and far less variability than in Fall River. Yet it is still interesting to compare the data to school performance evaluations. The two lowest-performing Level 4 schools, New Bedford High and Hayden/McFadden Elementary, with overall performance scores of 3 and 1 on a 100-point scale, received educator ratings comparable to the state average.

[googleapps domain="docs" dir="spreadsheet/pub" query="key=0AqeZCLy9YsKIdHNyY3dFM3djZzcyUHBUWjdlanNPN1E&output=html&widget=true" width="600" height="400" /]

James Vaznis noted in his Boston Globe article that “other education specialists said the decision by state and federal policymakers to make the data public ultimately could undermine efforts to help teachers and administrators grow professionally.” He quoted Kim Marshall, a former Boston school principal who now advises districts on evaluation practices: “The public release will lead to tremendous grade inflation” in the ratings. Marshall added, “What you are taking away from is more authentic coaching of teachers.”

Will releasing the results to the public hold school districts more accountable, or will it undermine the process of improving educator performance? With time we hope to learn whether these speculations hold and whether the public release was a smart move by federal and state policymakers. It will also be interesting to see what impact incorporating student achievement data will have on educator ratings. In interpreting the current results, however, we need to know how objective the evaluation process really is, and whether this rating system is a good tool for comparing schools and districts across the state. With such a difference in educator evaluations between Fall River and New Bedford, can we draw any substantive conclusions?