Some Thoughts on Edu-Research

Every so often, and with regularity, new educational research appears showing that such-and-such method of instruction produces an improvement in student performance, a decrease in performance, or no change at all.

The methodology varies, but the experiment is effectively a standard controlled experiment: one group receives one type of instruction (the “new method”) and the other group receives a different type of instruction (typically an existing form). The question to be answered is often, “Does <new instructional method> help a student learn better / faster / longer?” or something to that effect. The null hypothesis is typically that the new method does not offer any change in performance, with the alternative being that the new method does offer change. To keep the study within the reach of well-known statistical theory, performance (the response) is measured in some consistent quantitative way across all test groups in the study.

At the end of the study, and after the number-crunching, there is a (statistical) conclusion. The conclusion can be one of three possibilities: no effect, positive effect, or negative effect. The study is published, books are written, and there is great fanfare.
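
As a rough sketch of what that number-crunching often boils down to (the group sizes, score distributions, and significance level below are my own illustrative assumptions, not taken from any particular study), the analysis frequently reduces to something like a two-sample t-test:

```python
# A minimal sketch of the hypothesis test behind such a study.
# All numbers here (group sizes, means, alpha) are invented for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated post-test scores: control group vs. "new method" group
control = rng.normal(loc=70, scale=10, size=100)
new_method = rng.normal(loc=73, scale=10, size=100)

# H0: the new method offers no change in mean performance
# H1: the new method offers some change (two-sided test)
t_stat, p_value = stats.ttest_ind(new_method, control)

alpha = 0.05
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < alpha:
    print("Reject H0: evidence of a change in mean performance.")
else:
    print("Fail to reject H0: no detectable change in mean performance.")
```

The sign of the test statistic then distinguishes a positive effect from a negative one.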

Now, to be clear, I typically do not have much disagreement with the statistical conclusion. In fact, I find real value in knowing what other methods can work. What concerns me is how we (not the researchers) extrapolate from that conclusion.

For example, suppose there is a study that shows that a point-based reward system does not promote cognitive ability (for tasks that require cognitive effort, i.e., not a mechanical, non-thinking task). My first question is, “Is this a statistical conclusion? Or did every person who was part of the point-based reward (e.g., money) group show no increase in cognitive ability?” I think it makes a big difference whether the conclusion is a statistical one or the latter (though, technically speaking, that is still a statistical conclusion), where every single person showed no increase.

The problem here is that we’re dealing with people, not widgets. If I had a new quality control process that showed that widget defects were reduced (statistically) in comparison to the existing process, then the logical conclusion, ceteris paribus, would be to implement the new process. Widgets are widgets. With people, it’s a different ball game.

The general idea underlying a statistical conclusion is that if we were to obtain a simple random sample of widgets, then the new quality control process would reduce widget defects with a certain level of statistical certainty. The same goes for edu-research. If we were to pick, at random, a number of students and randomly split them into two groups, then we would show with statistical confidence that a point-based reward system does not promote cognitive ability. And this is fine.

My concern is the extrapolation made from here. It is true that a randomly chosen individual would, in expectation, show no increase in cognitive ability if incentivized by points. It is, however, untrue that every person would show no increase in cognitive ability if incentivized by points. And this is the distinction I want to make between studies on people’s ability to learn / do / be motivated, etc. vs. studies on widgets.
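
To make that distinction concrete, here is a small simulation (all numbers invented for illustration): a hypothetical population in which a point-based reward helps some students, hurts others, and leaves the rest untouched. The average effect, which is what a study measures, washes out to roughly zero even though almost no individual has a zero response:

```python
# Illustrative simulation (invented numbers): a zero *average* effect
# can hide real, opposite effects in subgroups of people.
import numpy as np

rng = np.random.default_rng(1)
n = 30_000

# Hypothetical split: a third of students respond positively to point
# rewards, a third respond negatively, and a third not at all.
group = rng.choice(["thrives", "indifferent", "wilts"], size=n)
true_effect = np.where(group == "thrives", +5.0,
              np.where(group == "wilts", -5.0, 0.0))
observed = true_effect + rng.normal(0, 8, size=n)  # effect plus noise

print(f"Population average effect: {observed.mean():+.2f}")  # ~ 0
for g in ["thrives", "indifferent", "wilts"]:
    print(f"  {g:11s}: {observed[group == g].mean():+.2f}")
```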

If we knew nothing about the person to whom we were providing instruction, then, yes, it is likely true that a point-based reward system would have no effect on that person’s cognitive ability. But, as we teach that individual, we begin to learn about who we are instructing. At this point, we are no longer in the controlled world of the study. We are in a world with more information, and as such, we have to be able to discern whether the person to whom we are providing instruction will respond well to a point-based system or not.

Having worked with hundreds of students, I can say that there are certainly those who thrive in a system of rewards and penalties. I have also seen enough students who do not respond well to the pressure of testing. Instead, those students thrive in a no-pressure environment and learn quite well when the threat of being marked as a failure is taken away from them. I also know students who like to work with tangible things first before going into the abstract. I have had students who need to see things abstractly first before they will dare tinker with specifics. And then there are people like me who want neither theory nor examples nor games nor groups nor projects; we just want a problem before anything else, and we want to fiddle with it on our own before seeing what the formalities are.

So, of course, project-based learning works, gamification works, technology-based / technology-infused instruction works, brain-based learning works, flipping the classroom works, etc. And so does traditional instruction. The caveat to all of these methods is that they don’t work for everyone — and this is one of the reasons that we have so many different ways to teach. All of these methods are tools in the instructor’s toolkit. The way to use these methods is to recognize what will and won’t work for which student and to best adapt a teaching style to reach the student. (And I know that’s a tall order for any teacher given that teachers often have to work with 20+ students simultaneously.)

Traditional instruction is the punching bag, as it is often the method against which other methods are compared. So in that sense, traditional instruction could be considered inferior, but I think we have to remember that, as far as I know, there haven’t been (enough) studies that compare new methods against other new methods. We also have to remember that there may have been plenty of other methods of instruction that went up against the traditional style of instruction but did worse. We don’t hear about those cases because they don’t get published. Thus, we end up with an anti-traditional bias.
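
The publication effect described above (sometimes called the file-drawer problem) is easy to simulate. In this sketch the parameters are invented, and I assume, pessimistically, that only studies where the new method significantly beats the traditional one get published:

```python
# Illustrative file-drawer simulation (invented parameters): even when a
# new method is truly no better, publishing only its wins biases the record.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
published_gains = []
for _ in range(1000):                       # 1000 hypothetical studies
    traditional = rng.normal(70, 10, 50)
    new_method = rng.normal(70, 10, 50)     # true effect is zero
    t_stat, p_value = stats.ttest_ind(new_method, traditional)
    if p_value < 0.05 and t_stat > 0:       # only significant "wins" publish
        published_gains.append(new_method.mean() - traditional.mean())

print(f"Published: {len(published_gains)} of 1000 studies")
print(f"Average published gain: {np.mean(published_gains):+.1f} points")
```

Even with a true effect of zero, the published record shows the new method winning by several points every time it appears in print.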

For the record, I am anything but traditional, but at the same time, when I teach I do my best to recognize when traditional instruction will promote learning and when I need to go with something “modern”.

Teaching is simultaneously an art and a science; learning is very complex; grades aren’t killing education; tests aren’t the devil; people aren’t widgets. If I had to name the best teaching style, I would say that it’s the one that is most flexible.

What’re your thoughts? I’d like to hear them even if you disagree with every word in this post.

4 thoughts on “Some Thoughts on Edu-Research”

  1. Pingback: “Research is at the mercy of the subjects.” | Schooled Country Bumpkin

    1. Manan Shah (post author)

      I hear what you’re saying. I think the point I was trying to make was that there is research and then there is application. The research world is (hopefully) a controlled environment. The applied world doesn’t necessarily conform to the research axioms. As such, we have to recognize when penalties work and when they don’t; when project-based learning works and when it doesn’t; when direct instruction works and when it doesn’t; when flipping the classroom works and when it doesn’t; etc.

      When we begin to teach, we begin to recognize who reacts to what. Some people are motivated by rewards in the form of points. Others are motivated by rewards in the form of fame and attention. Others are motivated by the reward of being left alone. Rewards work. It’s a matter of understanding what rewards work for whom.

      What research tends to show is a *statistical* improvement over, say, traditional methods.

      This is why education is tricky. People aren’t dogs, and people aren’t widgets. People can be far more complex. As educators we have to be mindful of those complexities. If we pigeonhole our teaching methods, then it will only be a matter of time until that method becomes the new “traditional” and is supplanted by, say, direct instruction.

  2. Ryan Horne

    Manan, one part that I would like to discuss more is the ways in which these different instruction methods are evaluated. What’s interesting is that many of the “new” and different instruction methods often have their success, or lack thereof, tested in the form of standardized tests. So we are comparing different instruction methods based upon their success rate (which is often based upon a student’s standardized test score after receiving the “new” instruction, in comparison to another student’s standardized test score in a control group). And not only that: we are gathering our research on the “new and improved” instruction methods by using an evaluation method (standardized tests) that is just as old as “traditional teaching” (direct instruction, lecture).

    Perhaps it’s time to rethink our goals and the end product we wish to see as a result of these ‘new’ instruction methods, and then develop or choose a new evaluation tool that can more accurately show whether these ‘new’ instruction methods meet our newly established learning goals. Standardized tests simply cannot show much evidence of learning past the bottom stages of Bloom’s Taxonomy.

    1. Manan Shah (post author)

      Ryan,

      This is a good point. I was alluding to standardized tests when I said “consistent quantitative way”. This is yet another tricky problem to overcome. What is a good way to measure learning? There is the immediate satisfaction (from a data standpoint) gained from being able to measure it with a standardized test. But we know that this isn’t the full story.

      Standardized tests have a very nice appeal in that they are standardized. They take away so many of the confounding variables that can exist in an educational study. When we search for a better evaluation tool, we will have to compare it against the standardized tests that exist! And this is a meta-problem: What evaluation tool is better at being an evaluation tool?

      One of the things that I remain interested in as a consumer of research is answering, “When a student does not learn <insert topic>, what are alternative ways to teach them that topic?” This is a bit different from “What are alternative ways to teach?” For example, see the work in Breaking Away, where the researchers ask, in effect: when students have a difficult time learning multiplication in the standard way (and the standard way is very efficient in terms of computation time), how else can they be taught to multiply? This is different from “How else can the students be taught to multiply in the standard way?”
