Every so often, new educational research appears showing that such-and-such method of instruction produces an improvement in student performance, a decrease in student performance, or no change at all.
The methodology varies, but the experiment is effectively a standard controlled experiment: one group receives one type of instruction (the “new method”) and the other group receives a different type of instruction (typically an existing form of instruction). The question to be answered is often, “Does < new instructional method > help a student learn better / faster / longer?” or something to that effect. The null hypothesis is typically that the new method does not offer any change in performance, with the alternative being that it does. To keep the study within the reach of well-known statistical theory, performance (the response) is measured in some consistent quantitative way across all test groups in the study.
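To make the setup concrete, here is a minimal sketch of the kind of two-group comparison such a study boils down to. The scores, group sizes, and the choice of Welch's t-statistic are my own assumptions for illustration, not details from any particular study.

```python
import math
import random
import statistics

random.seed(0)

# Hypothetical test scores (invented numbers): a control group under
# existing instruction and a treatment group under the "new method".
control = [random.gauss(70, 10) for _ in range(50)]
treatment = [random.gauss(73, 10) for _ in range(50)]

def welch_t(a, b):
    """Welch's t-statistic for two independent samples."""
    var_a, var_b = statistics.variance(a), statistics.variance(b)
    std_err = math.sqrt(var_a / len(a) + var_b / len(b))
    return (statistics.mean(b) - statistics.mean(a)) / std_err

# A large |t| would lead the study to reject the null hypothesis
# that the new method offers no change in performance.
t = welch_t(control, treatment)
print(f"t = {t:.2f}")
```

The conclusion such a test licenses is about the difference in group means, which is exactly the limitation discussed below.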
At the end of the study, after the number-crunching, there is a (statistical) conclusion. The conclusion is one of three possibilities: no effect, a positive effect, or a negative effect. The study is published, books are written, and there is great fanfare.
Now, to be clear, I typically do not have much disagreement with the statistical conclusion. In fact, I find real value in knowing what other methods can work. What concerns me is how we (not the researchers) extrapolate from that conclusion.
For example, suppose there is a study that shows that a point-based reward system does not promote cognitive ability (for tasks that require cognitive effort, i.e., not mechanical, non-thinking tasks). My first question is, “Is this a statistical conclusion? Or did every person who was part of the point-based reward (e.g., money) group show no increase in cognitive ability?” I think it makes a big difference whether the conclusion is a statistical one or the latter case, in which every single person showed no increase (though, technically speaking, that is still a statistical conclusion).
The problem here is that we’re dealing with people, not widgets. If I had a new quality control process that showed that widget defects were reduced (statistically) in comparison to the existing process, then the logical conclusion, ceteris paribus, would be to implement the new process. Widgets are widgets. With people, it’s a different ball game.
The general idea underlying a statistical conclusion is that if we were to obtain a simple random sample of widgets, then the new quality control process would reduce widget defects with a certain level of statistical certainty. The same goes for edu-research. If we were to pick, at random, a number of students and randomly split them into two groups, then we would show with statistical confidence that a point-based reward system does not promote cognitive ability. And this is fine.
My concern is the extrapolation made from here. It is true that a randomly chosen individual would, in expectation, show no increase in cognitive ability if incentivized by points. It is, however, untrue that every person would show no increase in cognitive ability if incentivized by points. And this is the distinction I want to make between studies on people’s ability to learn, do, or be motivated and studies on widgets.
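The people-versus-widgets distinction can be made concrete with a toy simulation. In the hypothetical population below (all numbers invented), half the people gain five points on some cognitive score under a point-based reward and half lose five; a study measuring the average effect would conclude “no effect,” even though not a single individual is actually unaffected.

```python
import random
import statistics

random.seed(42)

# Hypothetical individual responses to a point-based reward: half of
# the population responds positively (+5), half negatively (-5).
def individual_effect():
    return 5.0 if random.random() < 0.5 else -5.0

baseline = [random.gauss(100, 10) for _ in range(1000)]
effects = [individual_effect() for _ in range(1000)]
treated = [b + e for b, e in zip(baseline, effects)]

# Group-level view: the average treatment effect is near zero, so the
# study's statistical conclusion would be "no effect".
avg_effect = statistics.mean(treated) - statistics.mean(baseline)
print(f"average effect: {avg_effect:+.2f}")

# Individual-level view: every single person changed by 5 points.
unaffected = sum(1 for e in effects if e == 0)
print(f"people with zero change: {unaffected}")
```

The simulation is deliberately extreme, but it shows why a conclusion about a randomly chosen individual does not transfer to a conclusion about every individual.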
If we knew nothing about the person to whom we were providing instruction, then, yes, it is likely true that a point-based reward system will have no effect on that person’s cognitive ability. But, as we teach that individual, we begin to learn about who we are instructing. At this point, we are no longer in the controlled world of the study. We are in a world with more information and as such, we have to be able to discern if the person to whom we are providing instruction will respond well to a point-based system or not.
Having worked with hundreds of students, I can say that there are certainly those who thrive in a system of rewards and penalties. I have also seen enough students who do not respond well to the pressure of testing. Instead, those students thrive in a no-pressure environment and learn quite well when the threat of being marked as a failure is taken away from them. I also know students who like to work with tangible things first before going into the abstract. I have had students who need to see things abstractly first before they will dare tinker with specifics. And then there are people like me who want neither theory nor examples nor games nor groups nor projects; we just want a problem before anything else, and we want to fiddle with it on our own before seeing what the formalities are.
So, of course, project-based learning works, gamification works, technology-based / technology-infused instruction works, brain-based learning works, flipping the classroom works, etc. And so does traditional instruction. The caveat to all of these methods is that they don’t work for everyone — and this is one of the reasons that we have so many different ways to teach. All of these methods are tools in the instructor’s toolkit. The way to use these methods is to recognize what will and won’t work for which student and to best adapt a teaching style to reach the student. (And I know that’s a tall order for any teacher given that teachers often have to work with 20+ students simultaneously.)
Traditional instruction is the punching bag, as it is often the method against which other methods are compared. So in that sense, traditional instruction could be considered inferior, but I think we have to remember that, as far as I know, there haven’t been (enough) studies that compare new methods against other new methods. We also have to remember that there may have been plenty of other methods of instruction that went up against the traditional style of instruction but did worse. We don’t hear about those cases because they don’t get published. Thus, we end up with an anti-traditional bias.
For the record, I am anything but traditional, but at the same time, when I teach I do my best to recognize when traditional instruction will promote learning and when I need to go with something “modern”.
Teaching is simultaneously an art and a science; learning is very complex; grades aren’t killing education; tests aren’t the devil; people aren’t widgets. If I had to name the best teaching style, I would say that it’s the one that is most flexible.
What’re your thoughts? I’d like to hear them even if you disagree with every word in this post.