Ed. tech. research and the black box of the class(room)

Let me just get this out of the way: there may be some who read this post and say, “Well, Jon, of course that’s what you think about this study. The conclusion doesn’t jibe with your beliefs in and advocacy for educational technology.” Confirmation bias, yada, yada, yada…

Study? Yeah, this one. Featured today in Inside Higher Ed. (And between the time I started this post and the time I finished it, The Washington Post reported on it. That article in mainstream media has been splashed throughout my Facebook feed.) It’s the latest in a line of studies about laptops in higher ed. classrooms. The tl;dr version is that students who were allowed to have laptops open or tablets flat on the table/desk (huh?) did not do as well in the course as students who were “device free.” Predictably, there have been mixed reactions across my social media radar screens. So, I’ll see you some confirmation bias and raise you some selection bias. Here are some early tweets about the study:

[Embedded tweets from Kevin Gannon, Christopher Brooks, Derek Bruff, and EdTech Hulk]

Good stuff, all, to which I’ll add the following…

By my reading of the report, the study is reasonably sound methodologically. That is, it’s a fairly solid implementation of a randomized controlled trial, and the data analysis is pretty sophisticated. They control for some important variables, including teacher effects (sorry, Dr. Brooks, but it looks like your tweet wasn’t quite right). I’d like a larger sample size (control group = 270 students, laptop group = 248, tablets face up on table = 208), but I won’t quibble with that too much; a rough power check (below) suggests those numbers can detect moderate effects, though not small ones.

My larger concern is external validity, specifically what Donald Campbell calls “proximal similarity.” This is what Kevin Gannon tweeted about above. West Point is a VERY unusual institution of higher education. The institutional culture there is like few others. I could be wrong, but my guess is that students there are a bit more focused on “compliance” than students elsewhere. That feels important to me in a study about “focus” and “attention” and “distractions” in a classroom. To use an image from William Trochim’s Research Methods Knowledge Base, the people at West Point are different from most students in higher ed., and the place itself is very different. That pushes this study pretty far along two of the gradients of similarity, rendering it hard to generalize beyond the specific locale of the study.

[Image: Trochim’s proximal similarity model, showing gradients of similarity between the study context and the contexts we’d like to generalize to]
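To put the sample-size point in rough numbers, here’s a back-of-the-envelope power check. This is just a sketch, not anything from the study itself: the group sizes come from the report, while the 80% power and α = 0.05 thresholds are conventional defaults I’m assuming.

```python
# Rough power check: what standardized effect size (Cohen's d) can a
# two-group comparison detect with these group sizes?
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Device-free control (n = 270) vs. laptop condition (n = 248),
# 80% power, alpha = 0.05 -- conventional assumptions, not the study's.
detectable_d = analysis.solve_power(
    effect_size=None,    # solve for the minimum detectable effect
    nobs1=270,           # control group
    ratio=248 / 270,     # laptop group size relative to control
    alpha=0.05,
    power=0.80,
)
print(f"Minimum detectable effect size: d = {detectable_d:.2f}")
# Comes out around d = 0.25: adequate for moderate effects,
# underpowered for small ones.
```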


The other critique I have of this study is the same complaint I have about much of the research on online learning. Many of those studies involve small, simple comparisons of two sections of the same course, one online and one face-to-face. The idea is that “online” is the “treatment” condition, and if we can reasonably randomly assign students to that condition, we can assume pretty good external validity; i.e., the random assignment takes care of other generalizability concerns (except, maybe, proximal similarity; see above). But “online” is not a treatment; it’s not a monolithic thing like a pill or a drug. In most of these studies, we don’t really know what we’re comparing. What happened in the face-to-face class? What happened in the online class? How were the courses designed? Maybe one of the courses was designed or carried out poorly, pedagogically.
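To make the black-box complaint concrete, here’s a toy simulation (every number in it is invented for illustration): if “online” is secretly a mix of well-designed and poorly designed courses, a randomized comparison dutifully estimates the average effect of the bundle while telling us nothing about either kind of course.

```python
# Toy simulation: "online" as a bundle of heterogeneous course designs.
# All effect sizes are invented purely for illustration.
import numpy as np

rng = np.random.default_rng(42)
n = 1000  # students randomly assigned to each condition

# Face-to-face sections: exam scores in standard-deviation units.
f2f = rng.normal(0.0, 1.0, n)

# The "online" condition is secretly a 50/50 mix:
# well-designed courses help (+0.4 SD), poorly designed ones hurt (-0.5 SD).
online = np.concatenate([
    rng.normal(+0.4, 1.0, n // 2),  # well-designed sections
    rng.normal(-0.5, 1.0, n // 2),  # poorly designed sections
])

print(f"Estimated 'online' effect: {online.mean() - f2f.mean():+.2f} SD")
# Prints a small average effect (around -0.05 SD) that hides the fact
# that half the online sections helped and half of them hurt.
```

The randomization is doing its job here; the problem lies entirely in what the “treatment” label papers over.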

I’ve said this before, but if I set out to do a study comparing the health effects of eating out versus eating homemade meals (not an unreasonable inquiry), would I just randomly assign subjects to one of the two conditions and see how it plays out? Wouldn’t we want to know what the meals consist of? Maybe the subjects in the eating-out condition are eating lots of salads from Wendy’s. Maybe the subjects in the homemade-meals condition are making lots of baked goods full of simple carbohydrates.

So, back to the West Point laptop study… can you tell me what happened in those classrooms? If not, how can we really know whether the laptops were a distraction? What if the professors had the students doing lots of group projects that required them to do online research or collaborative work using Google Docs? If the course is designed around lecture and intense discussion, then maybe laptops are a “distraction.” But then the study would only generalize to classes built around lecture and intense discussion. Instead, we get articles like the Washington Post one that lead people to conclude that laptops should be banned from classrooms entirely, irrespective of instructional design.
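The same logic can be sketched directly (again, every effect size is invented): suppose laptops hurt in lecture-heavy courses but help in collaborative ones. A study run only in lecture-style classrooms will find a perfectly real negative effect, and that finding licenses conclusions about lecture-style classrooms and nothing more.

```python
# Toy moderation example: the laptop "effect" depends on course design.
# All numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(7)
n = 500  # students per cell

# Assumed interaction: laptops cost 0.3 SD in lecture courses but
# add 0.2 SD in collaborative, research-driven ones.
EFFECTS = {
    ("lecture", True): -0.3,
    ("lecture", False): 0.0,
    ("collaborative", True): +0.2,
    ("collaborative", False): 0.0,
}

def simulate(design: str, laptops: bool) -> float:
    """Mean exam score (in SD units) for one simulated section."""
    return rng.normal(EFFECTS[(design, laptops)], 1.0, n).mean()

for design in ("lecture", "collaborative"):
    diff = simulate(design, True) - simulate(design, False)
    print(f"{design:>13} courses: laptop effect = {diff:+.2f} SD")
# A study conducted only in lecture-style sections sees only the first
# line -- and that is as far as it generalizes.
```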

Finally, what if I designed a study where students were randomly assigned to two conditions: classrooms with windows and classrooms with no windows. And, what if students in classrooms with windows did significantly worse on the final exams? Would we ban windows from classrooms? I mean, if they’re such a distraction…

4 thoughts on “Ed. tech. research and the black box of the class(room)”

  1. Let’s have a study where internet access is critical to the learning in the classroom, where information and teacher-student or student-to-student interaction occurs explicitly via a computer. How would the students who can’t use computers fare in that study?
