PGES and the Bathwater
08/30/2015
After the Kentucky Department of Education announced that 93.5% of the state's teachers were rated as effective or highly effective under the new Professional Growth and Effectiveness System, I suggested that these initial results do not bode well for the future of PGES. In part, I wrote:
If the goal of PGES is teacher growth (as the name implies), then these results suggest the entire program is a failure, since principals apparently believe that almost none of our teachers have any room to grow.
The reaction to my blog post from educators was varied. Some offered a resounding "amen," while others cautioned that PGES has many complexities, that we are still early in implementation, that PGES is still superior to the vague evaluation systems that preceded it, and that we should be careful about throwing the proverbial baby out with the bathwater. I doubt that Education Commissioner Terry Holliday read my post, but writing on his own blog, Holliday took to task those who "will say that Kentucky has wasted five years and significant resources to implement a state evaluation system that has a mismatch between student performance and teacher performance."
To be clear, I do not believe that Kentucky should completely abandon PGES, a well-intended effort to correct a teacher evaluation system that has been gravely deficient in the past. But I do think the results from last year's statewide pilot raise much deeper and more troubling questions than Holliday himself suggests.
For one thing, the results really don't inspire confidence. There are many components to PGES, including observations by an administrator, the teacher's professional growth planning process, the teacher's self-reflections, a student survey, and a student growth component whereby teachers set goals for improving student achievement over the course of the year. These multiple data sources are supposed to be part of the strength of the system.
Some readers suggested that the student growth component is particularly problematic because teachers can set relatively easy goals. In other words, principals may actually have rated teachers more accurately than the data suggest, but inflated student growth scores could have skewed the overall results.
The problem is that when you consider only the Professional Practice Ratings, which do not include student growth, the results are actually worse. The commissioner's report to the Kentucky Board of Education showed that 94.5% of teachers were rated as effective or highly effective on Professional Practice alone. And these were not, as Dr. Holliday's blog post suggested, just tenured teachers in their evaluation year. In an email, KDE staff confirmed to me that these ratings covered all 16,600 teachers evaluated under the system, regardless of tenure status.
Dr. Holliday does raise some important issues that must be addressed as PGES moves forward. He suggests that principals need better training on giving feedback and that district leaders need more direction in how to support principals. He also acknowledges the complaint I hear most often from administrators: something must be done about the enormous amount of time the evaluation components take to complete and about the repeated issues with the software platform where results are entered.
Training is definitely important, and the process must be simplified so that principals can do justice to teacher evaluation and still attend to their many other responsibilities. But there are also elements of teacher evaluation that I'm not sure training or restructuring of the process can adequately address.
As I see it, the key question is this: How do we get principals to deliver more accurate feedback about teaching performance, so that teachers and principals can work together to align professional development and instructional support with teachers' actual growth areas?
And this question raises a second: Why don't principals deliver this accurate feedback already?
I think there are several possible, inter-related answers:
- Many principals simply aren't proficient yet in understanding what good instruction looks like.
- Even if they know what good instruction looks like, many principals lack the communication skills and/or emotional courage to deliver actionable feedback (informally or formally through the evaluation process) to help teachers improve their practice.
- The job of school principal is not structured in a way that encourages principals to be proficient in either of these two areas.
I'll explore these issues in greater depth in an upcoming post, but it should be clear that while they definitely imply training needs (something I am heavily invested in as a professor of education administration), they are not necessarily issues that PGES can fix. In fact, the pilot data suggest that PGES has had no impact on them at all. Whether it does in the long run will likely depend on things other than PGES itself.
So, I agree that the jury is still out as to whether PGES is a waste of time. The effort to improve teacher evaluation is definitely worthwhile. But last year's results should make us seriously consider whether PGES as currently configured is the solution to the problem.