Formative Assessment

On Differentiation, Direct Instruction, and More: A Decade Later

Recently I was contacted by a teacher who had come across a now decade-old blog post I wrote about education author Mike Schmoker and his (hostile) take on differentiated instruction. I argued that, as much as I admired Schmoker's work, I thought he was making a bit of a straw-man argument against differentiation. This teacher was curious whether I had any more recent thoughts on the topic. His school had recently been through a long spell of exploring "personalized learning," and I got the impression they weren't entirely satisfied, because now they were studying Schmoker's (still classic) book Focus, which argues against bells and whistles and for a much more standardized (perhaps traditional) approach to instruction. Here's my response:

Great to hear from you and I'm glad folks in the trenches are continuing to wrestle with these important issues.
 
I must admit that I have not followed Schmoker's work in recent years, or Tomlinson's for that matter. My gut instinct is still that their ideas are really in creative tension rather than opposition, but to the extent that they do represent different emphases, my money is still very much with Schmoker.
 
And I'm more confident than I was in 2012 that you can't do it all. Schools must prioritize their "focus" and choose what to emphasize. I'm increasingly convinced that the focus for most schools needs to be on creating a strong, coherent, content-rich curriculum and then ensuring fidelity to that curriculum through administrative oversight and support. Then there must be a relentless focus on effective instruction to deliver that curriculum. Only when those pieces are in place can schools begin to meaningfully work on assessment (which they should). 
 
However, I'm much less confident than I used to be that schools can formatively assess short-term student learning in ways that can validly inform a lot of personalized instructional follow-up. My thinking on this has been strongly informed by the work of England-based educator David Didau and his book What If Everything You Knew About Education Was Wrong? (see my review of his book here). Within that review, also see my references to books by Daisy Christodoulou and E. D. Hirsch, which speak strongly to this question of what our highest education priorities should be.
Bottom line: I think most talk of differentiation (and especially personalized learning) is a distraction for many schools, which have far greater fish to fry in terms of curriculum and instruction. Differentiation has never been practical for most classrooms and may not even be that beneficial. Education, like all human endeavors, involves limited resources of time, talent, and materials. We need to invest in the strategies that have the biggest impact for the vast majority of students. 
 
In most cases, that's likely to involve direct instruction of rich content by content-expert teachers.
 
Then I shared with him a couple of Twitter/X threads I have posted in recent months that even better summarize my current thinking, which I've reproduced below. The first is from August 18:
 
Earlier this week I quoted an article arguing that classrooms should feature more “lecture” and less “facilitation” on the part of teachers. The article (or the quote at least) provoked a big reaction, both positive and negative. It should go without saying that lecture, done poorly, is ineffective, and that more “student-centered” activities can sometimes work quite well for some students. But a general shift in emphasis toward more teacher-led classrooms is in order for two reasons.
 
The first is philosophical: much of the vacuous mess that makes up “contemporary” instructional strategies is the dross of assumptions about learning, the purpose of education, and of human nature left by Dewey & the “Progressives,” assumptions that can and should be challenged. The second is pragmatic: we should give primacy to instructional strategies that work best for most students when deployed by most teachers in most classrooms. That’s going to often be teacher-led learning centered on a rich, rigorous, established curriculum.
 
Of course there is room and need for variety in terms of what this looks like in practice. But we need to throw out many if not most of the assumptions in which most teachers of the last generation have been trained.
 
Dear teacher, you are NOT a “guide on the side.” You better be a content expert ready to impart a comprehensive body of knowledge, skills, and cultural values that is not a personal assemblage of your favorite subjects and ideologies. You are a public servant forming children according to the knowledge and virtues that represent your state and local community’s vision for a life of adult flourishing. That requires you to be firmly in charge of the learning in your classroom. And yes, often it will mean a well-crafted lecture, demonstration, or modeled example is the centerpiece of most lessons. Don’t be shy about that and don’t ever apologize. Be the “sage on the stage.” Your students deserve it.
 
A few days later, I followed up with this thread:
 
More on why we need teachers to intentionally think of themselves as “sage” rather than “guide.” Relevant question: when *should* the teacher be a guide? 
 
There’s definitely a point in the learning journey when the sage becomes a guide. This happens at the highest levels of student learning after the mastery of a large body of knowledge and the practice of skill under the careful tutelage of the master. Examples: when I work w/ a doc student on their dissertation, when a HS composition teacher edits a student thesis, when a teacher steps aside so that well-read students can do Socratic seminar, and when the master electrician watches his apprentice wire a house for real people.
 
The problem is that we’ve been led to believe these are normal, everyday learning experiences that would apply to all students of all developmental levels rather than the culmination of months and years of didactic learning from the direct instruction of an expert. 
 
The ancients understood this when they organized the Trivium - the ascending ladder of grammar, logic, and rhetoric. First comes content knowledge, then understanding and skill for organizing that knowledge, and finally the skill to express it to others, including in novel ways. Contemporary education lost sight of this learning structure and pressures students and teachers to skip directly to application and synthesis without the hard work of mastering the underlying basics, or to jump around willy-nilly as if novice-level students were already masters. Therefore a thoughtful shift toward a more traditional (pre-Progressive) understanding of knowledge, learning, human nature, and the purpose of education itself seems in order.
 
Looking at what I wrote 11 years ago compared with my more recent thoughts, I can see how my own understanding about high-quality instruction has matured while still revolving around a core set of principles, the chief of which is that schools can't do it all, and must prioritize their efforts on tried and true strategies that work for most students. It's a bit discouraging to think of how little progress most schools have made in this regard, but when I also consider the (re)emergence of classical education over this same time period and the recent achievements of many reformers around content knowledge, curriculum improvement, and science-based reading instruction, I'm encouraged for the future. 
 
Somebody email me in another 10 years and let's see where we're at.

What if everything you knew about education was wrong?


Near the beginning of his book, David Didau says he doesn’t actually want to convince you that everything you know about education is wrong (though a fair amount of it actually is), but rather “that you will consider the implications of being wrong and consider what you would do differently if your most cherished beliefs about education turned out not to be true.” He spends a good portion of the book exploring why we have a tendency to stubbornly believe what we do, but then he explodes some of the most commonly-held beliefs in P-12 education.

Didau, who is from England, is a former teacher and a devotee of education psychology. As formative assessment guru Dylan Wiliam notes in his introduction to Didau’s 2015 book, What If Everything You Knew About Education Was Wrong?, the practice of education in recent years seems remarkably disconnected from what research actually reveals about the way we learn. Didau uses that research to challenge a host of ideas about instruction and assessment and offers a robust defense for a rather traditional approach to learning: knowledgeable teachers carefully modeling and guiding students through supported practice, revisiting challenging, domain-specific concepts until students have become experts in their own right.

According to Didau, if there’s one overarching misconception about education, it’s that we can observe short-term, incremental learning in students. “We teach, children learn,” Didau says of our false assumptions. “That’s the input/output myth.” Research on learning reveals that the process is far more complicated, Didau argues. In our obsession with short-term learning gains, we mistake “performance” - the ability to mimic a skill or concept - for real learning. Information can be forced into short-term memory over brief periods of time, but it doesn’t last. Thus the experience of every teacher: today students seem to understand the lesson, but tomorrow they’ll be as clueless as if I had never taught it at all.

Didau shows how forgetting is actually an essential part of learning. Unless a learning experience is imbued with a very high degree of emotional content or connection, new information usually has to be taught - and forgotten - several times before it becomes embedded in long-term memory. Real learning takes place when new knowledge becomes linked to an existing schema - a mental framework for understanding a complex web of information - or when schemas are completely rearranged into new patterns to incorporate added content.

The implications of this understanding of learning are considerable. Didau is critical of many current practices of formative assessment, which seek to determine whether students have attained short-term mastery of a concept. Relying on students correctly answering formative assessment questions or tasks can be misleading, especially if the teacher assumes these correct answers mean she can “move on” and be done with the concept. “If they answer your questions correctly, it means very little,” Didau says. “Who cares what they know at the end of the lesson. Better to assume that they are likely to forget it.”

Didau advocates a practice he calls “interleaving”: intentionally reteaching key concepts at progressively longer intervals after the initial lesson. Not every concept or skill needs to be addressed through interleaving - only those he calls threshold concepts, the key understandings that students often struggle to master and upon which further progress in a subject depends. For example, Didau suggests gravity in physics, evolutionary theory in biology, opportunity cost in economics, and deconstruction in literature as possible threshold concepts for each discipline. Each subject might have several more threshold concepts depending on the grade or developmental level of the students.

That these concepts are difficult to master explains why they are linchpins to deeper learning. The key goal of lesson planning should be to make sure every student is engaged in productive struggle with new material, because that’s where maximum learning takes place. Teachers should be assisting students in moving from a novice to an expert level (as developmentally appropriate) relative to their subjects.

While all of the above may sound like common sense, much of educational practice in recent decades stands in the way of the kind of rich, content-specific, teacher-led instruction Didau is advocating. In fact, Didau positions the teacher as an essential player in the learning process (very much the “sage on the stage” and not merely the “guide on the side”). In contrast to teaching approaches that place a major emphasis on student agency, collaborative learning, and “real-world relevancy” (whatever that means), Didau argues for a traditional model of instruction whereby the teacher, as content-area expert, explains new material, models new skills and the application of knowledge, and carefully directs students through scaffolded levels of practice until independence is achieved.

Didau doesn’t necessarily reject project-based learning, group work, or “21st century skills” as wastes of time, but argues that spending energy on these strategies is far less effective and efficient than teacher-led instruction. Lest the reader think Didau’s methods would lead to rote memorization of facts, he presents a powerful argument that embedding new knowledge in students’ long-term memory is inseparable from teaching them how to think critically and creatively. “Creativity requires form,” he argues, illustrating how masterful artists spend years learning techniques and styles so that they can actually deviate from them.

As an advocate for formative assessment strategies, I found What If Everything You Knew About Education Was Wrong? compelling on multiple levels. Didau devotes several pages of his book to formative assessment expert Dylan Wiliam, who agrees with Didau that there are many ways of oversimplifying and misusing formative assessment, while still making a strong case that, whatever its limitations, teachers and students benefit from having more data about their learning progress rather than less.

Didau does not argue against assessment, but shifts the emphasis in how assessment is used. Specifically, he argues that regular testing of previously-taught material is itself one of the most powerful means of helping students relearn and therefore master new knowledge. So he advocates for less re-teaching and lots more retesting of previously-taught material.

Related to this, I found Didau’s ideas challenging to my keen interest in helping teachers create more personalized learning environments. I’ve recently become concerned about some of the excesses in the personalized learning movement that de-emphasize the important role of knowledge in favor of teaching generic skills (as if those could be separated from domain-specific content), but I still feel that instruction needs to be far more directed to individual students’ readiness levels relative to a clear and rigorous curriculum. If Didau is correct, however, it’s far more difficult to establish a child’s readiness level than I assumed, and students may actually benefit greatly from being regularly reintroduced to content they or their teacher think they have already mastered.

Perhaps there is more room here for whole class instruction than I’ve previously considered, and perhaps the key really is ensuring that no matter how many times a student has encountered a concept, the learning must be deliberately difficult enough to cause the student to struggle, but always with the actual possibility of supported success.

I believe all teachers and school administrators would benefit from reading What If Everything You Knew About Education Was Wrong? and struggling with the questions David Didau raises. Alongside recent works by E. D. Hirsch (reviewed here) and Daisy Christodoulou (reviewed here), Didau’s book makes a strong case for a rigorous and well-planned curriculum and thoughtful teacher-led classrooms.

Usual disclaimer: All views expressed on this website are mine alone and do not reflect the opinions of Western Kentucky University (where I serve as associate professor of educational administration, leadership, and research) or the Kentucky Board of Education (where I serve as a member).

Related posts:


What I See in Low-Performing Schools

What is really the difference between high- and low-performing schools?  This question is critical to the work of parents, educators, and policy makers striving to improve student learning.  Some will answer that there is essentially no difference in the curriculum, teaching, or leadership of these schools.  Rather, they'll argue (often in opposition to some school reform program or policy initiative) that poverty explains the difference.  Kids growing up in poverty face all sorts of learning challenges, so it should be no surprise that schools with high percentages of students from poverty will be low-achieving.

There's no doubt that family income makes a huge difference in student learning outcomes.  High-poverty schools face an enormous uphill struggle.  But in my experience, there are discernible differences in many low-performing schools that have little to do with poverty and could, if addressed boldly and intentionally, make a major impact on student learning.

In recent years I've participated in a number of evaluation visits to low-performing schools.  These reviews are part of many states' efforts to provide feedback and support to their lowest-achieving schools.  During these multi-day visits, teams of practicing and retired educators observe classrooms, interview teachers, students, and parents, and review data and reams of documentation using a carefully-crafted rubric of indicators that are correlated to student achievement outcomes.  The reviews conclude with an extensive report highlighting the school's successful practices and making prioritized recommendations for improvement.

In the last four years I've participated in or led evaluation visits in seven schools, representing five school districts in two states.  These were some of the lowest-performing schools in their respective states.  These schools also served very high percentages of students from low-income families.  What follows is a summary of the patterns I've observed.  This is not a scientific analysis, of course, but is based on my anecdotal experiences in these schools, as well as countless other schools of all performance levels that I visit while providing professional development services or other supports to practicing and aspiring school administrators.

First, I'll say that the vast majority of educators I've encountered in low-performing schools are dedicated and hard working.  Many profess a sense of calling to work with students of poverty.  Union contracts often present obstacles to some school improvement efforts, but many of these are surmountable and most individual teachers seem willing to do whatever it takes to help their students succeed.

But despite these good intentions, I see three major patterns that stand in the way of student learning in chronically low-performing schools.  These patterns are within the control of teachers and especially school leaders, and they greatly account for low student achievement, above and beyond the effects of poverty.

Basic classroom-level instruction is often weak and ineffective.  Classrooms in low-performing schools are often characterized by unengaging, teacher-driven instruction undifferentiated for student readiness levels.  Learning tasks suggest teachers have consistently low expectations for what students can do.  Lesson learning targets, where they exist, do not seem to be organized into a coherent framework of learning progressions that can effectively inform assessment.  Perhaps most importantly, formative classroom assessment data rarely leads to meaningful, descriptive feedback that could help a student improve her performance, and almost never leads to changes in instruction.

Again, it's not that teachers aren't working hard, or that they are refusing to do these things.  I think in most cases the teachers have simply never been properly trained or supported to make these changes in instruction, or held accountable for doing so.  They think their teaching is good because they've never seen the alternative, or don't believe it could work with their students. And of course there are exceptions.  In every low-performing school I've encountered pockets of excellent teaching.  But these teachers are usually working in isolation from their colleagues and with little recognition or support from school leaders.

A related issue is that student behavior in some low-performing schools is a real obstacle to learning.  This is not universally true, however.  I've seen several low-performing schools where students were polite, cooperative, and compliant with teacher directives.  But in some schools there is great inconsistency in teacher expectations for student behavior across classrooms, or in enforcement of these expectations. These schools desperately need to faithfully implement Positive Behavior Interventions and Supports or some other framework to address this challenge.

School improvement efforts lack focus and clarity.  In many low-performing schools, especially those that have had extensive support from state consultants, numerous well-intentioned improvement initiatives are underway.  But schools are sometimes implementing so many different programs they don't have the time to do any of them well, and are often not gathering data that would help them gauge the effectiveness of any initiative.  Above all, very few of these initiatives really foster significant changes in classroom instructional practice, which as I've already noted is a key problem in most of these schools.  These schools need to narrow their efforts to a highly-focused set of improvement strategies that promote meaningful changes in classroom teaching.

Finally, school leaders are not sufficiently engaged in the work of instruction.  While many principals in low-performing schools, like their teachers, work hard and desire to do their jobs well, many are also unaware of the relatively poor quality of classroom instruction and are not focused enough on improving teaching practice.  They do not invest enough time monitoring classroom instruction, providing teacher feedback for improvement, or holding all staff accountable for implementing improvement initiatives and being consistent with student behavior expectations.  And without principals who are sufficiently engaged in instructional improvement, none of the other problems I've seen in low-performing schools can be adequately addressed (see Karin Chenoweth and Christina Theokas' book, Getting It Done, for profiles of principals in high-poverty schools who are, in fact, "getting it done").

Of course, these patterns can sometimes be found in high-achieving schools too.  For many reasons it is simply easier to get good test scores with students from more affluent family backgrounds, and I'm also greatly concerned about schools that appear to be high performing for this reason, but have so much potential to improve teaching and learning (what John Hattie calls "cruising schools").

The bottom line is that, if you work in a low-performing, high-poverty school, the stakes are simply higher to improve student learning.  You have to be better than your colleagues in more affluent schools to make up for the negative effects of poverty.  And those effects are so powerful that high-poverty schools may, even with the most focused and effective teaching and leadership, still lag behind low-poverty schools in terms of test scores.  But there is no question that most low-performing schools can do better, and with skillful, engaged leaders who don't accept poverty as an excuse, real progress can be made.

 


Letter grades: The dinosaur that needs to go extinct

According to the Trimble Banner newspaper, Board members in the Trimble County Schools (just up the Ohio River from Louisville) are debating the role of the "D" letter grade in the district high school's grading scale.  The school dropped the D in 2007 (making an F a grade of 69 percent and below).  Some board members want to bring it back, and want to hear from high school teachers on the issue.  From what I can infer, proponents of the current "no D" policy believe that it ensures high expectations for student performance, and opponents worry that it encourages dropping out.

I'm sure the board members and educators on both sides of this discussion are well-intentioned, hard working, and thoroughly dedicated to the students of Trimble County.  But these kinds of debates represent outdated thinking about student feedback.  Traditional grading systems (and Trimble County's will remain traditional whether they bring back the D or not) actually hamper educators' efforts to help students master the skills and knowledge they need.  In fact, it's time for educators to scrap the whole system of letter grades altogether.  That's what many elementary schools across the country have already done, and it's time for high schools to follow suit.

Why do we give students grades in the first place?  Ideally, we give grades because students and their parents need feedback that will help them improve their learning strategies and become more proficient relative to specific learning targets.  The thing is, to give that kind of feedback, you don't need letter grades.  Letter grade systems tend not to deliver meaningful, actionable feedback to students.  Mostly they just serve as a proxy for a system of rewards and punishments that has little to do with what students have actually learned.

Letter grade systems don't provide much meaningful feedback to students

Letter grade systems don't provide much meaningful feedback to students for several reasons.  First, learning needs to be organized around very clear objectives (learning targets that articulate what students should know or be able to do as a result).  These learning targets need to be clearly understood by students, guide every stage of the learning process, and provide the precise basis upon which students are assessed.  But in many high school classrooms, learning objectives are still not always clear to students (and sometimes teachers), and scores on classroom assessments don't always reveal the specific skills or content knowledge students are still lacking.

Additionally, teachers often factor variables into students' grades that have nothing to do with their actual progress toward the specific learning objectives of the course.  These include completion of homework, timeliness of assignments, and other subjectively-valued student behaviors.  These behaviors aren't necessarily unimportant, but including them in the student's grade distorts the capacity of that letter grade to tell students what they actually know and are able to do.

Furthermore, in many classrooms, once an assignment or unit test is complete, the student has no real opportunity to correct errors and demonstrate further progress toward the learning objectives.  This makes grades more like an autopsy report than a checkup that would let you know how you can improve your health.

And finally, because there are so many of these subjective elements in teachers' grades, what makes for an "A" in one teacher's class may differ wildly from what it means in another's (even when the teachers teach the same course).  For all these reasons, letter grades tend not to give students and their parents the kind of feedback they really need - if what we truly value is whether they learned the knowledge and skills associated with the course.

Letter grade systems are a mechanism for rewards and punishments

But in fact, in many schools (especially high schools) letter grades just serve as a kind of mechanism for rewards and punishments that have little to do with student learning, and are often detrimental to real learning.  Letter grades determine athletic and (sometimes) extra-curricular eligibility, for example, providing a mechanism to reward or punish students with the privilege of participating in school activities based on how well they play the game of "school."

Likewise, letter grades are used to calculate grade-point averages that determine scholarship eligibility, like the Kentucky Educational Excellence Scholarship (KEES), a state program that rewards students with college scholarship money based on their GPA.  KEES is a valuable and well-intentioned program, but its effect is to prop up a grading system that is outmoded and ineffective, and it actually encourages grade inflation, since high school teachers know their students' "KEES money" is on the line every time they assign grades.

And of course letter grades also help determine class rank and figure into the determination of valedictorians.  In general I don't have much positive to say about that.  Creating a sense of competition among students fosters cheating and academic anxiety, and it generally undercuts the greater social virtues of cooperation and a love of learning for its own sake.  But high schools still very much emphasize the competitive effect of the grading system.

There's a better way

It's time for educators to admit all this and seek a better way of delivering student feedback.  Many schools have developed straightforward systems of standards-based assessment and reporting that provide students and parents specific feedback on their progress toward learning targets.  Students have ample opportunities to continue working on learning targets with which they struggle until they demonstrate mastery.  Such systems emphasize learning as an end in itself and give all school stakeholders a clear understanding of what students actually know and are able to do - and a much better system by which teachers and administrators can know the impact of their work (and adjust accordingly).

But what about teaching kids work ethic?  And what about athletic eligibility and KEES money and valedictorians?  How do we address all of those things without letter grades?

Some of these concerns call for straightforward solutions.  If you want to give students feedback about work ethic, timeliness, good behavior, etc., then develop rubrics that would accurately assess these dispositions (good luck with that) and deliver that feedback separately from their actual academic achievement.  Regarding valedictorians - how about we just get rid of such outmoded systems of rewarding competition? Who can honestly say that most high school valedictorians have mastered more learning targets than any other student in their class?  Or have they just played the game of school better? If you insist on rewarding someone for learning more than someone else, then make sure they actually have.

Regarding athletic eligibility, KEES money, etc., I have to be honest.  I don't know what to do about those deeply-rooted (and important) policy-based institutions and practices.  But instead of holding on to an outdated and ineffective system of letter grading because of them, let's apply our collective imagination to finding ways to make athletic eligibility and scholarships conform to effective systems of student learning feedback.

Let's stop wasting time arguing over grading scales, whether our scale should include a "D," and other topics that represent a very 20th century way of thinking about school and focus instead on creating systems of feedback that really value and empower student learning.

For Further Reading:


#AnnualVL2015: Reflections on Day Two

Today was the second and concluding day of the Corwin Press Annual Visible Learning conference featuring the work of John Hattie and associated authors.  You can read my reflections on Day One here.

The day began with a wide-ranging keynote presentation from Douglas Fisher on "Better Learning Through Structured Teaching." Based on his book by the same title, the presentation provided an overview of Fisher's framework for gradual release of responsibility.  Fisher focused primarily on the first phase of the process, focused instruction, in which new information is introduced to students.

Of most interest to me was the great emphasis he placed on clear learning objectives.  Students must know what they are learning in each class and how they'll know if they mastered it.  His comments resonated with the condition I find in most Kentucky schools.  Educators know that we need to post learning targets for each lesson.  But in many classrooms, the effort stops there.  The targets are often not clear or well developed, the teacher rarely references the target in the lesson, and students are often quite oblivious to the real goal of their learning or how it will be assessed.

And without that foundation, none of the other steps in the structured teaching process make much sense.  So I think this points us directly toward a lot of work that remains to be done in area schools around further deconstructing standards, developing new learning targets (sequenced into learning progressions; see James Popham's work on this), and then coaching teachers around the effective use of learning targets in the instructional process.

Later, I attended a great session with Raymond and Julie Smith, authors of Evaluating Instructional Leadership.  If the spirit of Visible Learning is that teachers should know their impact, it's essential that school principals should also know their impact on student learning.  This is no easy task, since research shows us that the leader's influence on student outcomes, while real, is indirect and mediated through his/her interactions with teachers.  The Smiths provided some excellent tools for leaders to identify practices associated with larger school improvement efforts that can become a basis for gathering data and then correlating that data with changes in both teaching practice and student achievement.

After lunch, a whole host of Corwin authors came together for a panel discussion.  They discussed many topics, including how district leaders can encourage Visible Learning, how teachers without administrative support can pursue Visible Learning, etc.  The discussion veered toward policy questions and I heard a lot of negativity from the panel about the over-emphasis on testing and our rather crude accountability structures that make it harder for teachers to focus on important improvements in classroom-level instruction.

From my point of view, Douglas Fisher had the best response when he basically said that testing will simply take care of itself if we attend to the work of Visible Learning without getting distracted by the larger policy issues.  This seems right to me.  As I've written before, we got into this testing regime because of long-standing, unjust achievement gaps.  The public has a right to know how much kids are learning, and while testing may be a limited measure, it's a useful measure when used properly.  And besides that, if you have a strong instructional vision that is based on research and good practice, the tests can be a secondary priority.

One of the most immediately practical sessions of the conference came from a presentation by Jennifer Abrams, author of Having Hard Conversations. For many of the aspiring and practicing administrators I work with, the hardest part of their jobs is dealing with difficult situations that require hard conversations with colleagues, teachers, or other staff.  In general, I don't find a lot of great examples in the field of school leaders who are especially good at this.

My sense is that these hard conversations require a degree of emotional courage that is extremely hard to muster in conflict-ridden situations.  Abrams helped articulate the reasons we hesitate and struggle to generate this emotional courage, but even better she provided several excellent resources for helping discern when to have a hard conversation, how to prepare for it, and even quick scripts to help structure the conversation itself.  These tools will be immediately useful in my administrator preparation classes and for the principals I coach.

But the most interesting portion of the conference for me came in John Hattie's closing keynote at the end of the day, in which he discussed the kinds of fundamental changes that must take place in the structure of schooling itself to support Visible Learning.  Drawing from a recent paper he wrote on "The Politics of Distraction," Hattie outlined five popular "distractions" that get in the way of seriously scaling up student achievement, including various schemes for reducing class size, increasing school autonomy, increasing school funding, revamping teacher training, etc.

Hattie was careful to say these things are NOT unimportant (and indeed, I would argue that some of the "distractions" - like school choice - are goods in and of themselves regardless of student achievement).  Rather, when it comes to student learning outcomes, right now the body of empirical research literature cannot establish any large-scale link between these ideas and achievement.  We can, however, draw empirical links between the instructional and learning strategies that are proven to help students learn.

Hattie advocated for shifting our professional narrative from these distractions to the politics of collaborative expertise.  He insisted that we should ensure - as a minimal expectation - that for every year kids sit in school they should get at least one year's worth of growth compared to where they started.  Researchers and practitioners must work toward agreement on what one year's progress looks like. 

Hattie argued for the development of new assessment and evaluation tools to provide feedback to teachers.  What he's talking about is much more than the complex, if well-intentioned, tools familiar to Kentuckians in the new Teacher Professional Growth and Effectiveness System.  Rather, Hattie is talking about mechanisms for providing teachers rich, real-time achievement data.

Related to this, Hattie argued for developing teachers' expertise in diagnosis, interventions, and evaluations.  Here, I thought of the good work on data teams going on in many of our area schools.  This new version of professional learning communities, coupled with the strategies outlined in Visible Learning, could be a powerful method for accomplishing this goal.

Finally, Hattie argued that the autonomy teachers crave should be linked to this achievement of at least one year's growth for every child, emphasizing that we need to redefine a "good school" as one in which students are progressing - regardless of whether the overall achievement is high or low.  In this way, I think schools that are of greatest concern are those with high-achieving students who are making little progress (what Hattie called "cruising schools").  These schools deserve as much scrutiny, support, and attention as low-achieving schools (especially those that are high progress).

I need more time to process all of this rapid learning, but I'm as excited about Hattie's work and its implications for leadership as anything I've encountered in a while.  More reflections to come.



Reflections on Visible Learning

Yesterday I was pleased to join nearly 300 area teachers and school leaders for a GRREC-sponsored, day-long session with Professor John Hattie and his colleagues from Corwin Press to explore concepts laid out in his book Visible Learning and other publications.  The day was extremely informative and thought provoking, and I wanted to quickly note some of the key points I'm taking away for reflection.

You can follow the Twitter chatter from session participants using the hashtag #GRRECVL.

I was familiar with the essence of Hattie's work, but this was my first time to hear it directly from him.  According to Hattie, visible learning is "when teachers see learning through the eyes of their students and help students become their own teachers."  The strategies that make up visible learning emerge from Hattie's decades-long research analyzing the effect sizes of a wide variety of educational interventions. 

The challenge of educational research, according to Hattie, is that nearly everything "works."  In other words, research shows positive effects from practically every single instructional intervention that is attempted with intentionality.  But just because a research study reveals that a strategy made a statistically significant improvement in student achievement does not mean that all strategies are equally effective.  Thus, we should focus on the "effect size" revealed in all the collected research on a strategy, which is a measure of how powerful the impact of an intervention is on student learning.
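For readers who want a concrete sense of the statistic, here is a minimal sketch of one common effect-size calculation, Cohen's d (the standardized mean difference between a treatment group and a comparison group).  The scores below are purely hypothetical, and Hattie's meta-analyses aggregate thousands of such measures across studies rather than computing a single d like this:

```python
import statistics

def cohens_d(treatment, control):
    """Cohen's d: difference in group means divided by the pooled standard deviation."""
    mean_diff = statistics.mean(treatment) - statistics.mean(control)
    n1, n2 = len(treatment), len(control)
    s1, s2 = statistics.stdev(treatment), statistics.stdev(control)
    # Pool the two sample standard deviations, weighted by degrees of freedom
    pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return mean_diff / pooled_sd

# Hypothetical post-test scores for an intervention group vs. a comparison group
treated = [78, 85, 82, 90, 76, 88]
comparison = [72, 80, 75, 84, 70, 79]
print(round(cohens_d(treated, comparison), 2))  # → 1.2
```

An effect size of 1.0 means the average treated student scored a full standard deviation above the average untreated student, which is why interventions near the top of Hattie's list matter so much more than those near the bottom.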

To cite one of the easiest examples (because it gets so much attention), reducing class size does, in fact, help student learning, but in a list of 150 possible influences on student achievement ranked by effect size, class size comes in at 113.  It's not that class size isn't important; rather, we can get far bigger improvements in student learning by focusing on some of the more powerful interventions at the top of the list.

These include strategies like the effective use of response to intervention (RtI), formative assessment, classroom discussion, and providing students rich, descriptive, actionable feedback on their progress, among others.

I don't want to say too much about the list of strategies, since readers can (and should) explore visible learning for themselves, other than to say it is gratifying to see affirmation for a lot of the work I've personally pursued and promoted with teachers and school leaders over the last 10 years.

Instead, I want to note a couple of ideas that struck me while listening to and reflecting on what Hattie and his colleagues shared.

First, I see Hattie's work as an invitation for teachers and school leaders to engage deeply and thoughtfully with educational research and to reflectively apply it to their own school situations.

If schools simply took the list of interventions, crossed off the bottom 100 strategies that have the lowest effect sizes, and blindly threw themselves into implementing the highest-ranked strategies with little thought for their own context, I believe it would be a gross misuse of this work.  In fact, I think this is the antithesis of what Hattie himself wants, because he has repeatedly lamented the tendency of educational research to de-professionalize the work of teaching.

An example: in his presentation Hattie noted that problem-based learning has a relatively low effect size when all the existing literature on the strategy is considered (.15, ranked 128 out of 150).  But this is explained, in part, when we consider that many teachers attempt to use problem-based learning as a mechanism for introducing new concepts.  Hattie went on to note that when students already have a fairly solid foundation in the content being studied, problem-based learning becomes a much more powerful tool for engaging them in higher-order thinking and reflection.

So how a strategy is used can make all the difference in its impact.  In this way, I would just suggest that if you are interested in pursuing a particular intervention in your school or your classroom, even if it is relatively low on Hattie's list of effect sizes, that's okay.  Just immerse yourself in the existing research literature on the topic (recruit a pinhead academic from the university - like yours truly and his colleagues - if you need help interpreting the studies) and figure out what pitfalls to avoid, when to use the strategy, and what other variables you need to consider in order to make it most effective.

Which leads me to my second, troubling observation: Hattie's own estimation that 95% of existing education research examines surface-level student learning.  In other words, the thousands of research studies that make up the body of literature reviewed in Hattie's work examined student learning experiences that only required surface-level thinking.  Because that's the vast bulk of what is happening in our classrooms most days.

This is a pretty terrible indictment of our work as educators, and I think there is probably great truth in his observation, but beyond that sad judgment, what is the practical significance of that truth?

Is it possible that the visible learning research just points us toward how to become more effective at getting students to learn and regurgitate more surface-level information?  Looking at Hattie's high-impact strategies, it's hard to imagine that student learning wouldn't be at least marginally deeper as a result of engaging in rich classroom discussions, self-reflecting on their learning, receiving descriptive feedback they can use to improve, using meta-cognitive techniques, etc.

But even if visible learning improves student achievement, the question still lingers: does all this simply prop up a learning system that leaves students basically lacking key skills in creativity, critical thinking, and especially independence and self-motivation?

Many readers know I've recently become intensely interested in various models of personalized learning.  Yet, "Student control over learning" was one of the lowest-impact strategies on Hattie's list, with an effect size of .04, which is virtually meaningless.  When I asked Hattie about this, he said that this is because the popular notion that the teacher should become "a guide on the side" is "rubbish."  Kids need teachers to instruct, he said, and provide lots of direction on what to learn and how to learn it.

And yet, I think in that moment Professor Hattie may have forgotten what he said moments before about the vast majority of the research being focused on surface level learning.  (To be fair, if you haven't heard Hattie speak before, he is prone to making somewhat exaggerated generalizations in order to make a larger point. Having the tendency toward hyperbolic rhetoric in my own speaking and writing, I sympathize with Hattie - and with the confusion and frustration this can cause in his readers/listeners).

I haven't yet delved into the studies on giving students control of their own learning that Hattie examined in his book, but I have a strong suspicion that within a traditional school context where students have no control over any significant portion of their learning, conducting limited experiments in giving them little dabs of autonomy probably does yield meager results (and contradicts other - admittedly limited - evidence that students can actually learn a great deal when given far more control over the process).

What we need - and Hattie acknowledged this in response to a question in his opening keynote yesterday - is a lot more research on what works to shape certain "non-academic" student dispositions like self-reliance, self-motivation, creativity, and teamwork around solving complex, real-world problems.  And we need educators who are willing to challenge the conventional ways that we structure schools, experiment with alternatives, and give researchers a place to study what deep learning really looks like, whether directed by teachers or students themselves.

Traditional schooling rests on the question, "What do we want students to learn?"  And then we design schools (and curricula) in response. 

What if instead we began with the question, "What kinds of people do we want students to become?" and then design the curriculum - and the school - around that much larger question?  (Which, I admit, is actually complementary to the first question; but where you begin the thought process makes a huge difference).

If we did, we might find that very different learning strategies work far more effectively than in the current context.

At any rate, John Hattie and his colleagues have given us a feast for thought, and educators would do well to carefully study their work - and the research upon which it rests.  But do so thoughtfully, critically, and with an eye toward how we make school work for kids - emotionally, psychologically, and socially, and not just academically.


Research: On-going learning with opportunities to practice and share makes for meaningful professional development

I'm delighted to report that the work of my friend and colleague, Dr. Tom Stewart of Austin Peay State University, exploring the effects of an on-going formative assessment initiative on teacher learning, has been published in the peer-reviewed international research journal, Qualitative Research in Education.

The article, "Deep Impact: How a Job-Embedded Formative Assessment Professional Development Model Affected Teacher Practice," represented key findings from Tom's dissertation study while a doctoral student in WKU's EdD program.  I was privileged to serve as Tom's methodologist, and the model described in his study reflects a professional development framework he and I have utilized numerous times in our consulting with P-12 schools and districts.

The study describes Tom's efforts as a district administrator to help teachers learn to effectively use formative assessment strategies.  Recognizing the limitations of "one-and-done" professional learning experiences, Tom designed a series of after-school PD sessions (called the Formative Assessment Academy) with a core group of volunteer teachers.  In each session, teachers explored research on formative assessment and learned new strategies.  They also committed to practicing at least one formative assessment strategy between sessions, and brought examples of student work and their reflections, which they shared with other teachers during each workshop.

Interviews with participating teachers, non-participating teachers, and administrators indicated that teacher confidence in the use of formative assessment rapidly increased, even among teachers who did not participate directly in the Academy.  Teachers cited the opportunity to practice strategies repeatedly and share with others as a key component of the initiative's success.

This study suggests a professional development format that parallels the best features of meaningful professional learning communities.  Marzano, Frontier, and Livingston (2011) cite opportunities to practice and discuss expert teaching strategies as a fundamental condition for fostering improvements in teaching skills.  And this study further extends the research literature on the power of formative assessment as a tool for teaching and learning.

School and district leaders should consider teacher learning frameworks like the Formative Assessment Academy for all professional development initiatives.

Read the full text of the article here.


Instructional Sensitivity Conference next month

I've written previously about James Popham's compelling case for why standardized tests are not particularly good measures of teaching effectiveness.  The term now used to define whether a test actually measures the impact of teaching on learning is "instructional sensitivity."  Popham argues that for tests to be instructionally sensitive, they must exhibit the following characteristics:

  • The test must be based on a modest number of important curricular targets.
  • The test must be based on learning targets that are clearly defined.
  • Performance reports generated from the test must yield data showing exactly which learning targets individual students have mastered and which they have not.
  • Each test item must be free of cultural bias.

Based on these criteria, the vast majority of standardized assessments can't be considered "instructionally sensitive."

The answer to this is not to throw out all standardized tests as some people, including the once insightful but increasingly ridiculous Diane Ravitch, suggest.  As I've argued before, taxpayers who shell out millions of dollars each year in support of education deserve some common, standardized measure of school performance.  We need to recognize the limitations of standardized tests as they are now constructed, however, and place a much larger emphasis on the creation of meaningful, instructionally-sensitive school- and classroom-level assessments.  And, we need a thoughtful, engaged, research-based, industry-wide focus on improving the quality of state tests.

In the spirit of all of these goals, the Achievement and Assessment Institute at the University of Kansas is sponsoring an Instructional Sensitivity Conference November 13-15 in Lawrence, Kansas.  Among other top-notch presenters, the conference will feature Jim Popham himself delivering the keynote.  Sessions will focus on debates around the topic of instructional sensitivity, and promising research on advances in testing design and technology that can improve the measurement of student learning.

Unfortunately I cannot attend this event, as I'll just be getting back from the Mid-South Educational Research Association annual conference where I'll be presenting several papers.  However, I encourage readers to attend (click here for registration information), and to follow AAI's work, which has major implications for school improvement and education practice. 

I'd welcome feedback and thoughts from readers who attend, and I'll continue to follow and share AAI's work and subsequent events.


The great "Ability grouping" misnomer

A flurry of headlines in the education media has recently announced the return of "ability grouping."  The news stories cite a recent study by the Brookings Institution's Brown Center on Education Policy that found a major resurgence of "ability grouping" after the practice had fallen out of fashion for many years.

But what the Brown Center study describes is simply good practice and should not be called "ability grouping," a term that does indeed need to remain on the scrap heap of history.

The Education Week story on this topic is a good example.  It defines "ability grouping" as "the practice - primarily in elementary grades - of separating students for instruction within a single class."  Reporting findings from the Brown Center study, the story goes on to describe the increasingly common practice of using assessment data to flexibly sort students for intervention and enrichment.  In the best-case scenarios, these groups are truly flexible: students move in and out of the groups based on their progress toward benchmarks.

I've seen some pretty poor and primitive excuses for flexible grouping, like assigning students to groups based on a single assessment measure, and then leaving them in a group for an entire semester or longer before reassessing their progress.  And in many schools "enrichment" groups don't provide much meaningful enrichment. 

But the effort to do flexible grouping is still an important step toward implementation of a truly "balanced" assessment system.  Ideally, schools should be constantly measuring student progress toward learning targets (using frequent, ungraded formative assessments) and making immediate instructional adjustments based on this progress.  Adjustments could include grouping students based on their progress to provide additional (or differentiated) instruction (or enrichment). 

This practice, however, has nothing to do with a student's "ability," a word which suggests a child's innate capacity to learn.  On any given day, any student could require some intervention or enrichment based on progress toward a particular learning target. 

It is not splitting hairs to make this distinction.  Much of the tracking that took place in past decades had everything to do with educators' perceptions of children's innate capacities to learn.  With relatively little meaningful data to go on (and lots of prejudicial attitudes based on race, poverty, or family education background), teachers assigned students to groups based on "ability" and the vast majority of students never left their track.  "Lower" tracks were distinguished by profoundly lower expectations for what students would ever be able to achieve.  And this probably explains as much about historical achievement gaps as nearly anything.

Tracking practices like this have greatly declined at the secondary level in recent years, and rightly so.  For adults to decide on a child's behalf - when that child is 14 years old or younger - whether he or she is "college material" reeks of paternalism and profound unfairness, and sets up those who might desire more for themselves to be perpetually unprepared for learning at the next level.  Most high schools have replaced multiple tracks (for students without disabilities) with just two: "honors" and "regular."

But even these distinctions seem problematic to me.  When I ask high school educators the difference between their honors and regular courses, uncomfortable squirming often ensues.  The honors classes move "faster" and "go deeper," I'm usually told, but the content is the same.  I have trouble seeing how the content could be the same if the class is moving "faster."  There's no getting around the fact that our expectations for "regular" classes are lower.  Are none of these students capable - or worthy - of higher expectations?

Lest readers misunderstand, I do believe there are differences in students regarding their "ability."  Like almost all human characteristics, intelligence (in all its forms) falls along a bell-shaped curve pattern for large populations.  And these innate capacities do shape the rate at which students learn, and for a few perhaps, a maximum capacity for achievement.  But when educators use these differences to make decisions that profoundly shape the entire curriculum and learning program for vast numbers of students, we have given "ability" far more prominence than it deserves and institutionalized low expectations and a reluctance to do what truly needs to be done: meaningful individualization and differentiation for all students.

Would we even need "honors" classes if we knew how to really differentiate?  And could the institutional structures of schooling ever allow us to differentiate in this way if we actually knew how?

This is the kind of debate we need to have in education.  Misnomers like "ability grouping" are a major distraction.


See my op-ed on Woodford County and our collective testing obsession in today's Herald-Leader

I recently blogged about Woodford County High's embarrassment a few weeks back when school leaders sent a letter home to parents announcing an assembly to discuss the academic performance of African-American students.  A version of that post appears in today's Lexington Herald-Leader, which you can read here.

My key point is that the general public should not assume administrators at Woodford County were up to no good.  In fact, they were probably doing what schools everywhere do these days: trying to close achievement gaps.  But they did it in an awkward and inappropriate way, and the whole thing reveals a collective educational culture that is far more focused on improving test scores than on transforming classroom-level instruction.