Is Kentucky's teacher evaluation system a massive waste of time and money?
08/07/2015
If the goal of Kentucky's new statewide teacher evaluation plan is to improve the accuracy of performance feedback to help teachers improve their practice, then initial results suggest the whole enterprise might be an abysmal failure.
Last year Kentucky schools piloted the new program, called PGES (Professional Growth and Effectiveness System), statewide. At yesterday's state Board of Education meeting, Education Commissioner Terry Holliday recommended that the board delay plans to include results from the teacher evaluation system in next year's school accountability scores. The reason: 93.5 percent of teachers were rated "effective" or "accomplished" by their principals last year.
And lest you think superintendents did a better job evaluating building leaders, 89 percent of school principals were also rated "effective" or "accomplished."
It's important to note that PGES uses a four-tier rating system. Evaluators don't just determine whether a teacher is effective or ineffective. A teacher may also be rated "developing," meaning he or she is not ineffective but has key aspects of teaching practice that still need support and growth. "Effective" is self-explanatory, and is the goal of teacher development. "Accomplished" means the teacher exceeds performance goals - he or she is more than effective.
Kentucky is indeed blessed with many effective and accomplished educators, and I deeply respect their efforts and achievements. But to suggest that less than 7 percent of Kentucky teachers are even in need of development - in a state where only about half our students are proficient in reading and math - is absurd. If the goal of PGES is teacher growth (as the name implies), then these results suggest the entire program is a failure, since principals apparently believe that almost none of our teachers have any room to grow.
Wisely, the state Board agreed with Holliday's recommendation and voted unanimously to delay inclusion of the evaluation results in next year's school accountability formula - an idea that was deeply misguided to start with. Richard Innes, education analyst for the Bluegrass Institute, was at yesterday's meeting and observed that it's "inevitable human nature: If a person's own organization is being held accountable for job performance scores, and if the organization's own staff self-awards those scores, then those scores are going to get inflated to the point of meaninglessness." [UPDATE: Disclosure - I serve on the Board of Scholars for the Bluegrass Institute.]
Innes also notes: "I am unaware of any large scale school staff evaluation programs that ever worked well at the state level. So far, it seems teacher evaluations always get inflated even when the stakes are not very high."
He's right, and it's cold comfort that Kentucky's results are actually a wee bit better than those of most other states that have gone down the path of "improved" teacher evaluation (see my previous posts on this topic here and here). In Tennessee and Michigan, as many as 98 percent of teachers were still rated effective after those states revamped teacher evaluation.
This suggests something deeply wrong in the professional culture and personnel practices of schools, something that improved teacher evaluation plans were supposed to fix. The now-famous "Widget Effect" report documented how traditional teacher evaluation systems utterly fail to distinguish between high- and low-performing teachers, provide no meaningful feedback to help teachers improve their practice, and have little influence on teacher professional development.
PGES was Kentucky's effort to remedy that situation. But after millions of dollars spent on training and administrative costs, and untold hours that principals have devoted to implementing its complex components, there's no evidence that principals are changing the way they interact with teachers about instruction and professional improvement.
In upcoming posts I'll try to explore some of the reasons why PGES and other teacher evaluation systems aren't making much difference. But a key takeaway is that these large-scale, state-mandated, top-down mechanisms for improving schools just don't tend to work very well. What really matters is how teachers and administrators use the structures they've got to foster professional dialogue around improvement, at both the classroom and school level - and ultimately, I believe, whether we provide the strongest form of accountability of all by empowering parents to choose the schools that meet their child's individual needs.
Again, I'll write more about this in coming days, but these initial PGES results ought to cause some serious soul-searching among educators and policymakers about how committed we really are to improving student learning.
UPDATE: Here's the PowerPoint Dr. Holliday and KDE's Rhonda Sims used in their presentation to the Board: Download Lessons Learned from PGES
UPDATE: See my follow-up post, "PGES and the Bathwater," plus other previous posts:
It might be beneficial to look at how the student growth goals and the evaluation ratings factor together. Even if principals are rating fairly and having those tough conversations, the growth portion of evaluations can bring a teacher from developing to exemplary in some counties.
Posted by: Hannah | 08/07/2015 at 09:01 PM