Monday, February 29, 2016

What Matters in Schools?

February. That horrid month in the lives of all teachers everywhere. Don't ask me why, but the things that CAN go wrong in schools almost always DO go wrong in the month of February. From spikes in discipline-related issues to leaky gym ceilings ruining new basketball flooring, school stuff goes belly-up in February.

For me this February has been no different than any other dreadful February. Instead of being able to work on "things that matter," I have instead been impelled to address a host of issues at school that have little to no bearing on student performance. Today however, I am grateful for at least two aspects of February. The first is that February is a short month and is almost, mercifully, at an end. The second is that February 2016 brings us an extra day. And since I have vowed to publish a blog post every month this year, I have needed an extra day this month to accomplish my blogging goal. So thanks, February ... but you still suck.

On this auspicious, final day of February, I am turning my back on the dreary month that was and returning to a meditation on "things that matter." In previous writings, I have mentioned the work of John Hattie, an Australian professor of educational research and Director of the Melbourne Education Research Institute. Hattie has spent his entire career looking into "things that matter": scholastic programs designed and implemented to improve academic achievement. In 2009, Hattie published Visible Learning, a synthesis of more than 800 meta-analyses of school programs designed to improve learning and achievement. In essence, what Hattie has done is analyze thousands upon thousands of educational studies in thousands of schools with thousands of teachers working in thousands and thousands of programs. Given the complexity of the research, his initial question is quite simple ... what works? In Visible Learning, Hattie answers that question. And it turns out, surprisingly, that almost everything works; almost all the school programs that Hattie has studied, programs launched to enhance academic achievement, actually work. Given that almost everything works, Hattie then goes on to ask another relatively simple question ... how well? Hattie's book answers this question too. And the answers are fascinating.

By analyzing studies that measure a program's impact in terms of students' grades before and after implementation, Hattie is able to generate a coefficient (an effect size, a number without a corresponding unit of value) for every kind of program launched. The greater the value of the coefficient, the greater the impact that program has on student achievement across the thousands and thousands of studies. You may not understand all of the kinds of programs launched, but take a look (you will need to scroll down a little once the new window opens). What you are seeing is a list of many of the kinds of programs we tend to launch in schools in order to enhance student achievement, ranked in order of "most effective" to "least effective." Cool, eh?
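If you are curious about what a unitless "coefficient" like this actually is, here is a rough sketch in Python. It is purely illustrative: Hattie's numbers are synthesized from hundreds of published meta-analyses, not computed from raw classroom scores like these, but the underlying idea is a standardized "before versus after" difference in grades, scaled by how spread out the grades are.

    import math
    import statistics

    def effect_size(before, after):
        """Standardized mean difference between two sets of grades.

        Illustrative only: Hattie's coefficients come from synthesizing
        hundreds of published meta-analyses, not from raw scores like these.
        """
        n1, n2 = len(before), len(after)
        s1, s2 = statistics.stdev(before), statistics.stdev(after)
        # Pooled standard deviation of the two sets of grades
        pooled_sd = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
        # Unitless, because a grade difference is divided by a grade spread
        return (statistics.mean(after) - statistics.mean(before)) / pooled_sd

    # Hypothetical class grades before and after some program is introduced
    before = [68, 72, 75, 70, 66, 74]
    after = [74, 78, 80, 73, 71, 79]
    print(round(effect_size(before, after), 2))  # prints roughly 1.4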

Given that the average of the program coefficients is equal to 0.4, I want to look towards the top of the list if I am meditating on "things that matter." I want to ignore the things that I used to think mattered like "teacher subject matter knowledge" (coefficient = .09), "extra-curricular programs" (coefficient = .17), "class size" (coefficient = .21), "teaching test taking" (coefficient = .22), "homework" (coefficient = .29), and even "decreasing disruptive behavior" (coefficient = .34). These things work, they kinda matter, but given that they are all programs that lie below the average, they just don't deliver the bang-for-the-buck that other programs can deliver. If I want to ponder the "things that matter," I want the top of Hattie's list, the programmatic high-flyers. The school initiatives that go BOOM.

Let's look at the top five.

  1. Student Self-Reported Grades (coefficient = 1.44)
  2. Piagetian Programs (coefficient = 1.28)
  3. Providing Formative Evaluation (coefficient = 0.90)
  4. Micro-teaching (coefficient = 0.88)
  5. Acceleration (coefficient = 0.88)
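Pulling together the coefficients quoted in this post, here is another tiny (and again purely illustrative) Python sketch that ranks them and flags which ones clear that 0.4 average:

    HINGE = 0.4  # the average coefficient across all the programs Hattie studied

    # Coefficients quoted in this post (from Hattie's Visible Learning)
    programs = {
        "student self-reported grades": 1.44,
        "Piagetian programs": 1.28,
        "providing formative evaluation": 0.90,
        "micro-teaching": 0.88,
        "acceleration": 0.88,
        "decreasing disruptive behavior": 0.34,
        "homework": 0.29,
        "teaching test taking": 0.22,
        "class size": 0.21,
        "extra-curricular programs": 0.17,
        "teacher subject matter knowledge": 0.09,
    }

    # Rank from most to least effective and flag which clear the hinge point
    for name, d in sorted(programs.items(), key=lambda kv: kv[1], reverse=True):
        verdict = "above average" if d > HINGE else "below average"
        print(f"{d:.2f}  {name:35s} {verdict}")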

When a school begins a program centered around student self-reported grades, it means that the school promotes a program where a teacher takes the time to discuss with a student his/her targets/goals for upcoming assessments. After the assessments, that teacher then has a follow-up discussion with that student about his/her actual performance compared to the predicted/desired performance. The teacher and the student then debrief. Simple, huh?

Piagetian programs are those instructional programs that embrace the ideas of developmental psychologist Jean Piaget. Long story short, Piaget categorized four different stages of thinking and ideation based on age. Students of a certain age are capable of certain levels of thought and ideation and are not capable of more advanced levels of thought and ideation. The lesson here is for schools and teachers to understand the thinking of the students they serve and not to impose adult modes of thought and ideation onto younger students. Again, simple.

There are two types of what Hattie calls "evaluation" but what I will call assessment: formative and summative. Formative assessment happens when a student is practicing. Summative assessment happens when a student is producing or performing for real. To use a basketball analogy, formative assessment happens in practice with drills and scrimmages (and coaches yelling), while summative assessment happens in games with scores and statistics (and coaches yelling). In the classroom, formative assessment happens whenever a teacher provides a student with some form of relatively informal feedback (verbal, written, thumbs-up-thumbs-down, exit tickets, etc.), while summative assessment happens with unit tests, semester tests, and end-of-year tests. Hattie's research suggests that schools should adopt and support programs that focus on formative assessment. Simple, but openly defying every dumbass politician who has ever supported any kind of end-of-year testing.

Micro-teaching is the recording of a teacher's lesson by a peer, instructional coach, or mentor. The teacher being recorded then discusses his or her performance with the peer, instructional coach, or mentor. Hattie's research strongly suggests that schools implementing these very "teacherly" programs tend to see big gains in student achievement. Simple, and very cool. Also surprising, at least to me.

The final big-hitter is acceleration, that is, allowing faster and more able students to learn at their own accelerated pace. What Hattie finds is that almost all students benefit from these kinds of programs. We can understand that such a program is probably beneficial to academic high-flyers, right? But how does this kind of program benefit almost all students? Hattie explains that accelerated students don't learn in a vacuum. Rather, they learn in the powerful social setting of the classroom, and these accelerated learners have a knock-on effect on all the other students within that setting. To draw again on a basketball analogy: say one of my players, Ricky, goes to a superstar summer basketball camp for a month. He is being accelerated. He comes back to my practice a month later having played with all those superstars. Some of his new-found skills begin to rub off on my other players during subsequent practice sessions. Ricky becomes a better player, and so too do almost all of my other players. Again, simple.

So much for the top-five programs that matter according to Hattie. So why in the hell are we spending so much time on the programs that don't matter much, to the detriment and exclusion of the programs that do?

I will get to that in my next post.

Thanks for reading. -Kyle
