
August 2015

PGES and the Bathwater

After the Kentucky Department of Education recently announced that 93.5% of the state's teachers were rated as effective or highly effective under the new Professional Growth and Effectiveness System, I suggested that these initial results do not bode well for the future of PGES.  In part, I wrote that:

If the goal of PGES is teacher growth (as the name implies), then these results suggest the entire program is a failure, since principals apparently believe that almost none of our teachers have any room to grow.

The reaction to my blog post from educators was varied.  Some offered a resounding "amen," while others cautioned that PGES has many complexities, that we are still early in implementation, that PGES is still superior to the vague evaluation systems that preceded it, and that we should be careful about throwing the proverbial baby out with the bathwater.  I doubt that Education Commissioner Terry Holliday read my post, but writing on his own blog, Holliday took to task those who "will say that Kentucky has wasted five years and significant resources to implement a state evaluation system that has a mismatch between student performance and teacher performance."

To be clear, I do not believe that Kentucky should completely abandon PGES, a well-intended effort to correct a teacher evaluation system that has been gravely deficient in the past.  But I do think the results from last year's statewide pilot raise much deeper and more troubling questions than Holliday himself suggests.

For one thing, the results really don't inspire confidence.  There are many components to PGES, including observations by an administrator, the teacher's professional growth planning process, the teacher's self-reflections, a student survey, and a student growth component whereby teachers set goals for improving student achievement over the course of the year.  These multiple data sources are supposed to be part of the strength of the system.

Some readers suggested that the student growth component is particularly problematic because teachers can set relatively easy goals.  On this view, principals may actually have rated teachers more accurately than the overall numbers suggest, with the student growth component skewing the combined results upward.

The problem is that when you consider only the Professional Practice Ratings, which do not include student growth, the results are actually worse.  The commissioner's report to the Kentucky Board of Education showed that 94.5% of teachers were rated as effective or highly effective on Professional Practice alone.  And these were not, as Dr. Holliday's blog post suggested, just tenured teachers in their evaluation year.  In an email, KDE staff confirmed to me that these ratings of 16,600 teachers represented every teacher evaluated under the system, regardless of tenure status.

Dr. Holliday does raise some important issues that must be addressed going forward with PGES.  He suggests that principals need better training on giving feedback and that district leaders need more direction in how to support principals.  He also acknowledges the complaint I hear most often from administrators: something must be done about the enormous amount of time it takes to complete the evaluation components and the repeated issues with the software platform where results are entered.

Training is definitely important, and the process must be simplified so that principals can do justice to teacher evaluation and still attend to their many other responsibilities.  But there are also elements of teacher evaluation that I'm not sure training or restructuring of the process can adequately address.

As I see it, the key question is this: How do we get principals to deliver more accurate feedback about teaching performance, so that teachers and principals can work together to better align professional development and instructional support with teachers' actual growth areas?

And this question raises a second: Why don't principals deliver this accurate feedback already?

I think there are several possible, interrelated answers:

  1. Many principals simply aren't proficient yet in understanding what good instruction looks like.
  2. Even if they know what good instruction looks like, many principals lack the communication skills and/or emotional courage to deliver actionable feedback (informally or formally through the evaluation process) to help teachers improve their practice.
  3. The job of school principal is not structured in a way that encourages principals to be proficient in either of these two areas.

I'll explore these issues in greater depth in an upcoming post, but it should be clear that while these issues definitely imply training needs (something I am heavily invested in as a professor of education administration), they are not necessarily issues that can be fixed through PGES.  In fact, pilot data suggest that PGES has had no impact on them at all.  Whether it does in the long run will likely depend on things other than PGES itself.

So, I agree that the jury is still out as to whether PGES is a waste of time.  The effort to improve teacher evaluation is definitely worthwhile.  But last year's results should make us seriously consider whether PGES as currently configured is the solution to the problem. 

Previous related posts:


Is Kentucky's teacher evaluation system a massive waste of time and money?

If the goal of Kentucky's new statewide teacher evaluation plan is to improve the accuracy of performance feedback to help teachers improve their practice, then initial results suggest the whole enterprise might be an abysmal failure.

Last year Kentucky schools piloted the new program, called PGES (Professional Growth and Effectiveness System), statewide.  At yesterday's state Board of Education meeting, Education Commissioner Terry Holliday recommended that the board delay plans to include results from the teacher evaluation system in next year's school accountability scores.  The reason is that 93.5 percent of teachers were rated as "effective" or "accomplished" by their principals last year.

And lest you think superintendents did a better job evaluating building leaders, 89% of school principals were also rated as "effective" or "accomplished."

It's important to note that PGES uses a four-tier rating system.  Evaluators don't just determine whether a teacher is effective or ineffective.  Teachers may also be rated as "developing," meaning that they are not ineffective but have key aspects of their teaching practice that still need support and growth.  "Effective" is self-explanatory, and is the goal of teacher development.  "Accomplished" means the teacher exceeds performance goals - he or she is more than effective.

Kentucky is indeed blessed with many effective and accomplished educators, and I deeply respect their efforts and achievements.  But to suggest that less than 7 percent of Kentucky teachers are even in need of development - in a state where only about half our students are proficient in reading and math - is absurd.  If the goal of PGES is teacher growth (as the name implies), then these results suggest the entire program is a failure, since principals apparently believe that almost none of our teachers have any room to grow.

Wisely, the state Board agreed to Holliday's recommendations and unanimously voted to delay inclusion of the evaluation results in next year's school accountability formula, an idea which was deeply misguided to start with.  Richard Innes, education analyst for the Bluegrass Institute, was at yesterday's meeting and observed that it's "inevitable human nature: If a person’s own organization is being held accountable for job performance scores, and if the organization’s own staff self-awards those scores, then those scores are going to get inflated to the point of meaninglessness." [UPDATE: Disclosure - I serve on the Board of Scholars for the Bluegrass Institute].

Innes also notes: "I am unaware of any large scale school staff evaluation programs that ever worked well at the state level. So far, it seems teacher evaluations always get inflated even when the stakes are not very high."

He's right, and it's cold comfort that Kentucky's results are actually a wee bit better than those of most other states that have been down the path of "improved" teacher evaluation (see my previous posts on this topic here and here).  In Tennessee and Michigan, as many as 98% of teachers were still rated as effective after those states revamped teacher evaluation.

This suggests something deeply wrong in the professional culture and personnel practices of schools, something that improved teacher evaluation plans were supposed to fix.  The now-famous "Widget Effect" report documented how traditional teacher evaluation systems utterly fail to distinguish between high- and low-performing teachers, provide no meaningful feedback to help teachers improve their practice, and have little influence on teacher professional development.

PGES was Kentucky's effort to remedy that situation, but after millions of dollars have been spent in training and administrative costs, and principals have devoted untold hours implementing its complex components, there's no evidence that principals are changing the way they interact with teachers about instruction and professional improvement.

In upcoming posts I'll try to explore some of the reasons why PGES and other systems of teacher evaluation aren't making much difference.  But a key takeaway is that these large-scale, state-mandated, top-down mechanisms for improving schools just don't tend to work very well.  What really matters is how teachers and administrators use the structures they've got to foster professional dialogue around improvement, at both the classroom and school level - and, I believe, in the strongest form of accountability: empowering parents to choose the schools that meet their child's individual needs.

Again, I'll write more about this in coming days, but these initial PGES results ought to cause some serious soul-searching among educators and policymakers about how committed we are to really improving student learning.

UPDATE: Here's the PowerPoint Dr. Holliday and KDE's Rhonda Sims used in their presentation to the Board:  Download Lessons Learned from PGES

UPDATE: See my follow-up post, "PGES and the Bathwater," plus other previous posts: