Of That

Brandt Redd on Education, Technology, Energy, and Trust

02 July 2012

Learning - Everything Works, But How Well?

In a recent Freakonomics post, Roger Pielke Jr. writes about the perils of "False Positive Science." We constantly fight the fallacy of equating correlation with causation, but false positive science involves a more subtle error. In the search for statistically significant results, researchers often try many different analytical alternatives. Their papers rarely list all of the failed models; only the one that achieves statistical significance gets reported. Joseph Simmons and colleagues write, "It is unacceptably easy to publish 'statistically significant' evidence consistent with any hypothesis." And this mistake is more difficult for the reader to detect than the correlation/causation fallacy.
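To see why it's so easy, here's a quick simulation of my own (not from the papers cited). It relies only on the fact that under a true null hypothesis a p-value is uniformly distributed, and it assumes the alternative analyses are independent; real alternatives are usually correlated, which softens but doesn't eliminate the inflation.

```python
import random

# Under a true null effect, a single test is "significant" (p < 0.05)
# only 5% of the time. But a researcher who tries k analytical
# alternatives and reports only the best one faces a false-positive
# rate of 1 - 0.95**k. Simulated here by drawing p-values uniformly.

def false_positive_rate(k, alpha=0.05, trials=100_000):
    hits = 0
    for _ in range(trials):
        # smallest p-value among k independent analyses of null data
        if min(random.random() for _ in range(k)) < alpha:
            hits += 1
    return hits / trials

for k in (1, 5, 10, 20):
    print(f"{k:2d} alternatives -> ~{false_positive_rate(k):.0%} "
          "chance of a 'significant' finding")
```

With twenty alternatives, pure noise yields a "significant" result about 64 percent of the time.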

[Comic omitted. Credit: Randall Munroe - xkcd.com]
When it comes to research into educational achievement, another issue comes into play: since humans are natural learners, just about everything works. In his book Visible Learning, John Hattie gives this issue rigorous treatment. Over 15 years, Hattie and his staff studied more than 800 meta-analyses, representing hundreds of thousands of studies of what affects student learning, and converted the results of every study onto a common effect-size scale.

Roughly speaking, the effect sizes used in Visible Learning express the amount of improvement a student makes in a year as a fraction of one standard deviation on a standardized test. Mapping all effects onto this common scale lets you compare the relative value of different techniques and theories.
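For a concrete (and entirely invented) example, here is the simplest form of that calculation. The meta-analyses Hattie synthesizes differ in exactly how they standardize, so treat this as a sketch of the idea rather than his method:

```python
# Hypothetical summary statistics for one cohort on a standardized test
# (all numbers invented for illustration):
mean_fall = 50.0    # average scale score in the fall
mean_spring = 54.0  # average scale score in the spring
sd = 10.0           # standard deviation of scores on the test

# Effect size = a year's growth expressed in standard-deviation units
effect_size = (mean_spring - mean_fall) / sd
print(effect_size)  # 0.40 -- growth of 0.4 standard deviations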

Among Hattie's observations is the following:
Almost everything works. Ninety percent of all effect sizes in education are positive. Of the ten percent that are negative, about half are "expected" (e.g., effects of disruptive students); thus about 95 percent of all things we do have a positive influence on achievement. When teachers claim that they are having a positive effect on achievement or when a policy improves achievement this is almost a trivial claim: virtually everything works. One only needs a pulse and we can improve achievement. (Hattie, Visible Learning, p. 15)
On Hattie's scale, a child simply living for a year with no schooling achieves an effect size of 0.15. "Maturation alone can account for much of the enhancement of learning." Being present in a classroom with a teacher results in effect sizes between 0.15 and 0.40. So, for an innovation to be interesting, it must result in an effect size substantially higher than 0.40. (Hattie, p. 16).

From the book, here are some selected influences with their rank and effect sizes.

| Rank | Domain | Influence | Effect Size |
|-----:|--------|-----------|------------:|
| 1 | Student | Self-report grades | 1.44 |
| 2 | Student | Piagetian programs | 1.28 |
| 3 | Teaching | Providing formative evaluation | 0.90 |
| 4 | Teacher | Micro teaching | 0.88 |
| 5 | School | Acceleration | 0.88 |
| 6 | School | Classroom behavioral | 0.80 |
| 7 | Teaching | Comprehensive interventions for learning disabled | 0.77 |
| 8 | Teacher | Teacher clarity | 0.75 |
| 9 | Teaching | Reciprocal teaching | 0.74 |
| 10 | Teaching | Feedback | 0.73 |
| 11 | Teacher | Teacher-student relationships | 0.72 |
| 22 | Curricula | Phonics instruction | 0.60 |
| 25 | Teaching | Study skills | 0.59 |
| 29 | Teaching | Mastery learning | 0.58 |
| 31 | Home | Home environment | 0.57 |
| 32 | Home | Socioeconomic status | 0.57 |
| 42 | School | Classroom management | 0.52 |
| 45 | Home | Parental involvement | 0.51 |
| 51 | Student | Motivation | 0.48 |
| 56 | Teacher | Quality of teaching | 0.44 |
| 59 | School | School size | 0.43 |
| 62 | Teaching | Matching style of learning | 0.41 |
| 81 | Student | Drugs (e.g. for ADHD) | 0.33 |
| 91 | School | Desegregation | 0.28 |
| 92 | School | Mainstreaming | 0.28 |
| 100 | Teaching | Individualized instruction | 0.23 |
| 106 | School | Class size | 0.21 |
| 107 | School | Charter schools | 0.20 |
| 129 | Curricula | Whole language | 0.06 |
| 133 | School | Open vs. traditional | 0.01 |
| 134 | School | Summer vacation | -0.09 |
| 135 | Home | Welfare policies | -0.12 |
| 136 | School | Retention | -0.16 |
| 137 | Home | Television | -0.18 |
| 138 | School | Mobility | -0.34 |

There's a ton of stuff to chew on here, far more than I can do justice to in a blog post. Hattie devotes between half a page and five pages to each of the 138 influences, and there is nuance that the numbers don't capture. I'll just make a few observations:
  • The top five influences all involve adapting the experience according to individual student needs.
  • Charter schools, something I favor, have an unimpressive effect size of 0.20. But charters were intended to enable experimentation, so we should expect them to average about the same as conventional public schools but with a much larger standard deviation. Recent studies seem to confirm that expectation, and studies are starting to identify what distinguishes the high-performing charters from other schools.
  • Smaller schools help somewhat while the impact of smaller classes is minimal. That's probably because most small-class initiatives dilute their impact with a consequent reduction in average teacher experience.
  • Feedback loops, among my favorite topics, appear at #10 with an effect size of 0.73.
  • Home environment and socioeconomic status have a huge impact. But other factors are bigger, so it should be possible to overcome the achievement gap in school.
  • Phonics instruction has an effect size of 0.60 while whole language has one tenth that effect. There's much to be said for whole language, and I tend to agree with its constructivist roots, but not at the expense of phonics.
Of course, the observation that nearly everything works doesn't eliminate the other perils of false positive science and the correlation/causation fallacy. All three make it possible to latch on to one's favorite intervention while claiming to be evidence-driven. To defend against this, we must seek 2-5 times improvement in learning performance and replicable results. It also helps to be careful, honest, and humble.

1 comment:

  1. I'm really glad you gave me a summary of this research. It makes me wonder how many things that have been touted as "proven in the classroom" were just marginal improvements. I'm really glad someone is quantifying the effects of different influences.
