A small study of electronic device usage during lectures found minimal difference in scores on a follow-up quiz between students who were distracted during the lecture and those who were not.
Results. The sample comprised 26 students. Of these, 17 were distracted in some form (checking email, sending email, checking Facebook, or sending texts). The overall mean score on the test was 9.85 (9.53 for distracted students and 10.44 for non-distracted students). There were no significant differences in test scores between distracted and non-distracted students (p = 0.652). Gender and type of distraction were not significantly associated with test scores (p > 0.05). All students believed that they understood all the important points from the lecture.
Conclusions. Every class member felt that they acquired the important learning points during the lecture. Those who were distracted by electronic devices during the lecture performed similarly to those who were not. However, the results should be interpreted with caution, as this was a small quasi-experimental study, and further research should examine the influence of different types of distraction on different types of learning.
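For anyone curious what a comparison like that amounts to in practice, here is a minimal sketch assuming a plain two-sample t-test on the reported group means. The abstract does not give standard deviations, so the ones below are placeholders and the printed p-value is purely illustrative, not the paper's figure.

    # Two-sample t-test from summary statistics, as one might run for the
    # reported comparison. Means and group sizes come from the abstract;
    # the standard deviations below are invented for illustration only.
    from scipy.stats import ttest_ind_from_stats

    result = ttest_ind_from_stats(
        mean1=9.53, std1=5.0, nobs1=17,   # distracted students (std assumed)
        mean2=10.44, std2=5.0, nobs2=9,   # non-distracted students (std assumed)
        equal_var=True,
    )
    print(result)  # with groups this small and noisy, p is almost bound to be large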
(Score: 3, Informative) by TGV on Wednesday September 17 2014, @05:55AM
Sounds like inexperienced experimenters to me.
First: 26 samples is nothing, and by the looks of it, the variance is pretty high. This study lacks power. And even within the Fisher / Neyman-Pearson framework (p-value testing), you can't assume H0 is true just because you fail to reject it.
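For a rough sense of just how underpowered: here's a quick sketch (my assumptions, not TFA's: a plain two-sample t-test, alpha = 0.05, and the conventional 80% power target) asking what effect size groups of 17 and 9 could even hope to detect.

    # Minimum detectable effect size for the study's group sizes
    # (17 distracted vs 9 non-distracted students).
    # Assumptions: two-sample t-test, alpha = 0.05, 80% power target.
    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()
    detectable_d = analysis.solve_power(
        effect_size=None,      # solve for the minimum detectable Cohen's d
        nobs1=17,              # size of the larger (distracted) group
        ratio=9 / 17,          # so the second group has 9 students
        alpha=0.05,
        power=0.8,
        alternative='two-sided',
    )
    print(f"Minimum detectable Cohen's d at 80% power: {detectable_d:.2f}")

This should come out somewhere above 1, i.e. only a huge difference between the groups would have had a realistic chance of reaching significance.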
Second: generalization. One class?
Third: "The students believed", "Every class member felt"? That's not really objective with respect to the goal, isn't it? If that's what they asked, the conclusion is about students' believes. So then the title of the article should be "Students distracted by electronic devices can't believe they don't perform at the same level as those who are focused on the lecture".
(Score: 2) by frojack on Wednesday September 17 2014, @06:53AM
Look, if the lecture was as content-free as this study, chances are that any random student on campus could score just as well without ever hearing the lecture at all.
(Score: 2) by TGV on Wednesday September 17 2014, @06:55AM
Good point.
(Score: 2) by jimshatt on Wednesday September 17 2014, @10:14AM
(Score: 2) by c0lo on Wednesday September 17 2014, @07:09AM
Sorry, I had to answer an email. So, you were sayin'...?
Oh, don't tell me I can't perform
(Score: 0) by Anonymous Coward on Wednesday September 17 2014, @11:40AM
There are standards for pedagogical studies, and it is possible, though difficult, to do rigorous, high quality pedagogical studies. Not all "[Method X] makes students learn better" studies are crap.
That said, PeerJ requires "All authors on a paper must have a 'paid publishing plan' and there are 3 publishing plans, each conferring different rights. The three plans are: Basic (which allows for publishing once per year, for life); Enhanced (which allows for publishing twice per year, for life); and Investigator (which allows for publishing an unlimited number of articles per year, for life)." They claim to have a peer-review system, but it hasn't really been around long enough to establish a reputation beyond "cheap and author-friendly." It does seem like the editors have substantial leeway to over-ride reviewer criticisms. It seems like the people most happy with the journal are from outside of traditional biomedical sciences, or people publishing outside their usual profession. In the case of TFA, it's a couple of dentists.
Interestingly, the reviews are also open [peerj.com]. The reviewers cite the (fairly obvious) flaws, and one of them recommended rejection (i.e., flawed in design and implementation, and not fixable). The reviewers never saw the revised manuscript, which the editor accepted following a final round of wordsmithing. This is exactly the concern that academics have with new journals, and particularly with "online only" journals in which the authors pay: that the low cost of publication (or the high marginal revenue per manuscript) will result in low standards of evidence and rigor.
(Score: 2) by TGV on Wednesday September 17 2014, @12:10PM
It's interesting that the reviews are open; that at least shows what considerations led to publication. In this case, it seems to have been the editor's interest, i.e., free publicity.
(Score: 2) by opinionated_science on Wednesday September 17 2014, @01:12PM
I agree, this is clearly Bullsh!t.
The average human brain has only so much processing power, and I guarantee that if you are not focusing on the material at hand, you are not doing your best. I found streamed lectures very helpful in this regard, since the pace of the teaching could be adjusted to keep you engaged.
Exams may not be the best educational tool, but they at least produce objective data (we can argue about the syllabus, but we can't argue about whether an exam was taken!).
Perhaps a good unit of measure of exam effectiveness would be "educational impedance". The current is the transfer of understanding, the voltage is the "teaching pressure", and the resistance is a term combining the student's intellect and the teaching methods...
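Spelling the analogy out Ohm's-law style (the symbols are mine, just to make it concrete):

    I_{\text{understanding}} = \frac{V_{\text{teaching}}}{R_{\text{student}} + R_{\text{method}}}

i.e. the rate at which understanding flows is the teaching pressure divided by the combined resistance of the student and the method.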
(Score: 2) by melikamp on Wednesday September 17 2014, @04:16PM
I wouldn't say that a sample size of 26 is "nothing", but it looks like the investigators made some choices that rendered the study effectively meaningless. First, there really are two samples here: one of size 17 and the other of size 9, and 9 is a really small sample size. Second, a quick search of TFA fails to bring up the word "population", and the sampling process is not described, so it looks like it wasn't really a sample at all (samples are taken out of populations they are supposed to represent). In other words, this was properly a survey of the population of size 26, and no conclusion of this study can apply to any other student population. Lastly, what about people being distracted by someone else surfing? A more interesting result could be obtained by splitting the 26 people into two groups of 13 at random, and then giving them the same lecture, with one group being allowed to surf and the other forbidden. It would be a qualitatively different kind of conclusion, but at least it would be meaningful.
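Something like the following, say; the scores are obviously made up, only the random assignment and the comparison matter here.

    # Sketch of the randomised design described above: split 26 students into
    # two groups of 13 at random, lecture them, then compare quiz scores with
    # a permutation test. The scores below are placeholders, not data from TFA.
    import numpy as np

    rng = np.random.default_rng(seed=42)
    students = rng.permutation(26)
    surf_group, focus_group = students[:13], students[13:]   # random assignment

    # Hypothetical quiz scores, one per student, collected after the lecture.
    scores = rng.normal(loc=10, scale=2, size=26)

    observed = scores[surf_group].mean() - scores[focus_group].mean()

    # Permutation test: how often does a random relabelling of the students
    # produce a gap at least as large as the one we observed?
    diffs = []
    for _ in range(10_000):
        shuffled = rng.permutation(26)
        diffs.append(scores[shuffled[:13]].mean() - scores[shuffled[13:]].mean())
    p_value = np.mean(np.abs(diffs) >= abs(observed))
    print(f"observed gap = {observed:.2f}, permutation p = {p_value:.3f}")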
(Score: 2) by TGV on Wednesday September 17 2014, @06:35PM
Actually, if all of the population was there, it is a fact that the distracted students scored worse. It doesn't generalize, though.
Anyway, 26 data points is really nothing in this kind of test. The variance is too high. Suppose you want to distinguish a false coin from a true coin (the false one with probability p for heads, the true one with probability 0.5). With 26 tosses, the central interval that leaves roughly 5% in each tail runs from about 8 to 17 heads. So to have a ~95% chance (a priori) that your false coin throws fewer than 8 (or more than 17) heads in 26 tosses, its p would need to be something like 0.2 (or 0.8). If p is anywhere inside the range 0.2-0.8, you don't have enough samples to be reasonably sure of finding a difference before you even start the experiment.
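Same calculation done numerically, for anyone who wants to poke at it (the cut-offs here are the ~5% tail quantiles of Binom(26, 0.5), so they may sit a head or so off the 8-17 quoted above):

    # The coin example above, redone with scipy. The "looks fair" region is
    # the central part of Binom(26, 0.5) with roughly 5% in each tail.
    from scipy.stats import binom

    n = 26
    lo = binom.ppf(0.05, n, 0.5)   # lower edge of the acceptance region
    hi = binom.ppf(0.95, n, 0.5)   # upper edge
    print(f"acceptance region: {lo:.0f} to {hi:.0f} heads")

    # Probability that a biased coin lands outside that region, i.e. that
    # 26 tosses are enough to notice the bias.
    for p in (0.2, 0.3, 0.4):
        detect = binom.cdf(lo - 1, n, p) + binom.sf(hi, n, p)
        print(f"p = {p}: chance of spotting the bias = {detect:.2f}")

Unless the coin is heavily biased (p well outside roughly 0.2-0.8), 26 tosses give you no reasonable chance of spotting it, which is the point.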
In psycholinguistic experiments of this nature, the number of subjects is usually around 30, and that's not considered high. Each subject usually provides 10 to 20 (or more) samples per condition, ideally across all conditions. In this case, that would mean at least 30x10x2 = 600 samples instead of 26.