(This article was originally published at Simply Statistics, and syndicated at StatsBlogs.)

My colleague John McGready has just published a study he conducted comparing the outcomes of students in the online and in-class versions of his *Statistical Reasoning in Public Health* class, which he teaches here in the fall. The online and in-class portions are taught concurrently, so it's basically one big class where some people are not in the building. Everything is the same for both groups: quizzes, tests, homework, instructor, lecture notes. From the article:

> The on-campus version employs twice-weekly 90-minute live lectures. Online students view pre-recorded narrated versions of the same materials. Narrated lecture slides are made available to on-campus students.
>
> The on-campus section has 5 weekly office hour sessions. Online students communicate with the course instructor asynchronously via email and a course bulletin board. The instructor communicates with online students in real time via weekly one-hour online sessions. Exams and quizzes are multiple choice. In 2005, on-campus students took timed quizzes and exams on paper in monitored classrooms. Online students took quizzes via a web-based interface with the same time limits. Final exams for the online students were taken on paper with a proctor.

So how did the two groups fare in their final grades? Pretty much the same, although the two groups of students themselves were not the same: online students were 8 years older on average, more likely to have an MD degree, and more likely to be male. Final exam scores differed by -1.2 points (out of 100, with the online group lower), and by -1.5 after adjusting for student characteristics. Neither difference was statistically significant.
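The covariate adjustment described above amounts to fitting a linear regression of exam score on a group indicator plus the student characteristics. Here is a minimal sketch with simulated data; all numbers (sample size, effect sizes, noise) are invented for illustration and are not the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Simulated student characteristics (made up, not the study's data):
online = rng.integers(0, 2, n)                 # 1 = online section
age = 25 + 8 * online + rng.normal(0, 3, n)    # online students older on average
md = rng.binomial(1, 0.1 + 0.2 * online)       # more MDs among online students

# Score depends on the covariates but, in this simulation, not on the section
score = 80 + 0.2 * (age - 25) + 2 * md + rng.normal(0, 3, n)

# Unadjusted comparison: the raw gap in group means (confounded by age and MD)
unadjusted = score[online == 1].mean() - score[online == 0].mean()

# Adjusted comparison: the coefficient on `online` in an OLS fit
X = np.column_stack([np.ones(n), online, age, md])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)
adjusted = beta[1]

print(f"unadjusted: {unadjusted:+.2f}, adjusted: {adjusted:+.2f}")
```

Because the simulated section assignment has no true effect, the adjusted estimate should sit near zero while the unadjusted gap picks up the age and MD differences between groups.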

This was not a controlled trial, so unmeasured confounding is a possible problem given that the two populations appeared fairly different. It would be interesting to think about a study design that allows some measure of control, or at least a better measure of the difference between online and on-campus learning. But the logistics and demographics of the students would seem to make this kind of experiment challenging.

Here’s the best I can think of right now: take a large class (where all students are on-campus) and get a classroom that can fit roughly half the number of students in the class. Then randomize half the students to be in-class and the other half to be online up until the midterm. After the midterm, cross everyone over, so the online group comes into the classroom and the in-class group goes online for the rest of the term through the final. It’s not perfect. One issue is that course material tends to get harder as the term goes on, and it may be that the “easier” material is better learned online and the harder material is better learned on-campus (or vice versa). Any thoughts?
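One appeal of this crossover design is that a "material gets harder" period effect can be separated from the format effect: both sequences experience the same second-half difficulty, so it cancels out of the standard 2x2 crossover estimator. A quick simulation shows the idea; the effect sizes and noise levels below are assumptions chosen for illustration, not estimates from any study.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100                                   # students, randomized 50/50

# Assumed (made-up) effects: each student has a baseline ability,
# the second half of the term is harder, and the online format
# shifts scores by a fixed amount.
ability = rng.normal(75, 8, n)
period_effect = -4.0                      # harder material after the midterm
format_effect = -1.5                      # hypothetical online penalty

online_first = np.zeros(n, dtype=bool)
online_first[rng.permutation(n)[: n // 2]] = True

# Score in each half of the term (midterm period, then final period)
midterm = ability + format_effect * online_first + rng.normal(0, 4, n)
final = (ability + period_effect
         + format_effect * (~online_first) + rng.normal(0, 4, n))

# Classic 2x2 crossover estimate: half the difference between the two
# sequences' within-student (midterm - final) changes. Student ability
# cancels within student; the period effect cancels between sequences.
diff = midterm - final
est = (diff[online_first].mean() - diff[~online_first].mean()) / 2

print(f"estimated format effect: {est:+.2f} (truth: {format_effect})")
```

The estimator recovers the format effect even though the second half is uniformly harder, which is exactly the confounding the post worries about. What it cannot rule out is an interaction, i.e., the possibility that the format effect itself differs between the easier and harder material.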

**Please comment on the article here:** **Simply Statistics**