
April 2013

Schools needed to pilot teacher perception survey

If you are an administrator or teacher leader of a Kentucky school, I'd like to ask you to consider volunteering your school to participate in an exciting pilot research study being developed by my WKU colleagues, Dr. Steve Miller, Dr. Kyong Chon, and me.  We are creating a survey designed to measure how deeply and effectively teachers believe their school is implementing the Standards and Indicators for School Improvement (SISI).

The Standards and Indicators are a research-proven framework for measuring a school's improvement efforts.  If we are able to validate our new survey, tentatively called the Standards and Indicators Scholastic Review (SISR), schools will get reliable data on their progress that mirrors what has normally required a five-day audit from an external review team.  The SISR will be superior to other teacher perception instruments like the TELLKentucky or AdvancEd stakeholder surveys because it is built on a strong base of research.

We eventually hope to use the SISR with hundreds of schools across the state, but first we need to pilot the draft instrument with a handful of schools to make sure our design works.  Please consider volunteering your school for the pilot administration of the survey.

A member of our research team will visit pilot schools during a regularly scheduled faculty meeting (or a special meeting, if appropriate) to explain the survey.  Then we will ask teachers to go directly to a school computer lab to take the survey, which will take approximately 45 minutes to complete.  This is obviously a lengthy survey, but each participating school will receive a complete profile of anonymous teacher perception data relative to the nine Standards and Indicators for School Improvement - a potential wealth of valuable information.  Plus, your school will be contributing to ground-breaking research that might eventually help distinguish how teacher perceptions differ between high-performing and low-performing schools, assisting leaders in better predicting future trends and engaging in meaningful long-range improvement planning.

We hope to conduct this pilot survey before the end of the current school year, so time is of the essence.  If you are interested in having your school participate in the SISR pilot, please email me as soon as possible at [email protected] so we can coordinate a time for administration.  Even if you cannot participate in the pilot at this time, let me know if you'd like to be included in the full validation study, which we'll be conducting this fall.

I'll blog regularly about the progress of this study.


Uniting the adults in schools around a common purpose

I found the short video below via the National Center for Montessori in the Public Sector Facebook page.  It features Dr. Katherine Merseth of the Harvard Graduate School of Education discussing school improvement and the necessity of "creating a common culture of coherence."

Among the highlights:

  • No one (including education reformers) seems to agree on the actual purpose of schooling.
  • The bureaucratic structures of schooling appear to be designed more for the benefit of adults than of students.
  • In high-functioning schools (which are, in fact, student centered), every person in the school can answer the question, "What is your purpose?  Why are you here?"
  • Teachers in high-functioning schools act as advocates for children, with whom they have strong personal relationships; know their content deeply so they can answer (or, I would add, help kids answer for themselves) the big "why" questions; know how to personalize learning for individual student needs; and are continuously learning from and with other educators.

Merseth seems to have a fairly conventional perspective on what schooling is all about (no real mention of Montessori, Sudbury, or any of the radically student-centered approaches I've become interested in).  Nevertheless, I think in this three-minute video she gives a terrific summary of some of the key problems vexing schools and the absolute necessity of uniting the adults in a school around a common purpose.  It's a message relevant for leaders of any school.


Still "widgets": Data from other states bodes ill for KY's new teacher eval system

I've previously written with guarded optimism about the massive overhaul taking place in Kentucky right now relative to teacher evaluation.  If all goes according to plan, next year teacher performance in Kentucky will be assessed using a variety of fairly sophisticated measures, including the traditional supervisor observation, but also peer observations, self-reflection, student feedback, and growth in student learning (among others).

The whole thing is a massive undertaking, and school administrators around the state are bracing for a 40-hour training program that must be completed prior to actually carrying out any evaluations.  I really want to believe the effort will be worth it, as data from the now-infamous Widget Effect report, as well as the experience of millions of educators nationwide, verifies that teacher evaluation in the United States is, frankly, a joke.  In the vast majority of schools and districts, the teacher evaluation process does nothing to really distinguish between high and low performers, or to use performance data to guide professional growth and development or leadership decision making.

To be sure, Kentucky's effort to improve teacher evaluation wasn't based entirely on a conversion to clearer thinking about how to improve teaching and learning in our schools.  As an applicant for federal "Race to the Top" funds, Kentucky was mandated to carry out reforms of the teacher evaluation process.  Nevertheless, I have been pleased to see the state taking the whole issue seriously.  My biggest concern has been whether school leaders can be sufficiently trained - and evaluated themselves - to carry out the new system with fidelity.

Now, data from states that have already revamped their teacher evaluation systems suggests that these reforms aren't really working.  The vast majority of teachers are still getting the highest performance marks, in clear contradiction to what administrators, parents, students, and the teachers themselves know to be reality.

According to a recent New York Times story, nearly half of all states have revised their teacher evaluation systems and the results are disheartening.  Examples:

In Florida, 97 percent of teachers were deemed effective or highly effective in the most recent evaluations. In Tennessee, 98 percent of teachers were judged to be “at expectations.”

In Michigan, 98 percent of teachers were rated effective or better.

With all due respect to the thousands of hard-working teachers in these states, it just flies in the face of reason that these astronomical percentages of teachers are satisfactory or better in their performance.  As an Education Week blogger put it, "The 'Widget Effect' endures."

What's most alarming about these figures for Kentucky is that under our new education accountability model, teacher evaluation ratings will eventually be included as a component of every school's overall performance scores.

I recently attended a meeting of school district leaders who were putting hard questions about all of this to staff members of the Kentucky Department of Education.  As one district administrator pointed out, we know how badly inflated teacher evaluations have been in the past.  Now, principals will have more incentive than ever to rate their teachers highly, in spite of the multiple measures now included in our evaluation system.  How can this possibly work?

The KDE representative acknowledged that this was a risk and that the Department would be monitoring the situation to look for signs that schools were inflating their own ratings.  But the representative provided no details on how this will work, and one superintendent voiced concern that schools would be singled out for scrutiny because they might be doing exactly what the Department wants: improving teacher performance.

I suspect how this will unfold in practice is that KDE will start looking for discrepancies between student performance and teacher evaluation.  If students are performing poorly but teachers are being evaluated highly, that should be a red flag.  But no mechanism for identifying such a school currently exists in law or regulation, and this approach has its own inherent flaws.  Many schools are high-performing in some part because of their student demographics, not their stellar teaching.  In fact, such schools can mask mediocre or poor teacher performance and will have even more incentive not to jeopardize their high ranking by pointing out teacher growth needs.

So I am growing increasingly pessimistic about the chances that this reform effort will make any real difference in teacher performance.  We run the risk of wasting massive amounts of time and money.  And as some folks like Mike Schmoker have pointed out, you don't need a complex approach to properly evaluate teacher performance.  You do, however, need effective leadership.

And if we really can't do any better at identifying differences in teacher performance - one of the most fundamental variables in student achievement - then maybe all of these education reforms really are, as Richard Elmore put it, "palliative care for a dying institution."


The great "ability grouping" misnomer

A flurry of headlines in the education media has recently announced the return of "ability grouping."  The news stories cite a recent study by the Brookings Institution's Brown Center on Education Policy that found a major resurgence of "ability grouping" after the practice had fallen out of fashion for many years.

But what the Brown Center study describes is simply good practice and should not be called "ability grouping," a term that does indeed need to remain on the scrap heap of history.

The Education Week story on this topic is a good example.  It defines "ability grouping" as "the practice - primarily in elementary grades - of separating students for instruction within a single class."  Reporting findings from the Brown Center study, the story goes on to describe the increasingly common practice of using assessment data to flexibly sort students for intervention and enrichment.  In the best-case scenarios, these groups are truly flexible: students move in and out of the groups based on their progress toward benchmarks.

I've seen some pretty poor and primitive excuses for flexible grouping, like assigning students to groups based on a single assessment measure, and then leaving them in a group for an entire semester or longer before reassessing their progress.  And in many schools "enrichment" groups don't provide much meaningful enrichment. 

But the effort to do flexible grouping is still an important step toward implementation of a truly "balanced" assessment system.  Ideally, schools should be constantly measuring student progress toward learning targets (using frequent, ungraded formative assessments) and making immediate instructional adjustments based on this progress.  Adjustments could include grouping students based on their progress to provide additional (or differentiated) instruction (or enrichment). 

This practice, however, has nothing to do with a student's "ability," a word which suggests a child's innate capacity to learn.  On any given day, any student could require some intervention or enrichment based on progress toward a particular learning target. 

It is not splitting hairs to make this distinction.  Much of the tracking that took place in past decades had everything to do with educators' perceptions of children's innate capacities to learn.  With relatively little meaningful data to go on (and lots of prejudicial attitudes based on race, poverty, or family education background), teachers assigned students to groups based on "ability," and the vast majority of students never left their track.  "Lower" tracks were distinguished by profoundly lower expectations for what students would ever be able to achieve.  And this probably explains as much about historical achievement gaps as nearly anything.

Tracking practices like this have greatly declined at the secondary level in recent years, and rightly so.  For adults to decide on a child's behalf - when that child is 14 years old or younger - whether he or she is "college material" reeks of paternalism and profound unfairness, and sets up those who might desire more for themselves to be perpetually unprepared for learning at the next level.  Most high schools today have replaced multiple tracks (for students without disabilities) with just two: "honors" and "regular."

But even these distinctions seem problematic to me.  When I ask high school educators the difference between their honors and regular courses, uncomfortable squirming often ensues.  The honors classes move "faster" and "go deeper," I'm usually told, but the content is the same.  I have trouble seeing how the content could be the same if the class is moving "faster."  There's no getting around the fact that our expectations for "regular" classes are lower.  Are none of these students capable - or worthy - of higher expectations?

Lest readers misunderstand, I do believe there are differences in students regarding their "ability."  Like almost all human characteristics, intelligence (in all its forms) falls along a bell-shaped curve for large populations.  And these innate capacities do shape the rate at which students learn and, for a few perhaps, a maximum capacity for achievement.  But when educators use these differences to make decisions that profoundly shape the entire curriculum and learning program for vast numbers of students, we have given "ability" far more prominence than it deserves and institutionalized low expectations and a reluctance to do what truly needs to be done: meaningful individualization and differentiation for all students.

Would we even need "honors" classes if we knew how to really differentiate?  And could the institutional structures of schooling ever allow us to differentiate in this way if we actually knew how?

This is the kind of debate we need to have in education.  Misnomers like "ability grouping" are a major distraction.