
posted by n1 on Wednesday September 17 2014, @04:49AM
from the was-not-paying-attention-to-begin-with dept.

A small study of electronic device usage during lectures found minimal difference in post-lecture quiz scores between students who were distracted during the lecture and those who were not.

Results. The sample comprised 26 students. Of these, 17 were distracted in some form (checking email, sending email, checking Facebook, or sending texts). The overall mean score on the test was 9.85 (9.53 for distracted students and 10.44 for non-distracted students). There was no significant difference in test scores between distracted and non-distracted students (p = 0.652). Gender and type of distraction were not significantly associated with test scores (p > 0.05). All students believed that they understood all the important points from the lecture.
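For readers who want the mechanics behind that p-value, here is a minimal sketch of the kind of comparison reported, assuming Python with SciPy. The score arrays are invented placeholders sized to match the study's groups (17 distracted, 9 non-distracted), not the paper's data, which is not published per-student:

    # Independent-samples t-test on quiz scores for two groups.
    # NOTE: these scores are hypothetical placeholders, not the study's data.
    from scipy import stats

    distracted = [10, 9, 11, 8, 10, 9, 10, 11, 9, 8, 10, 9, 11, 10, 9, 8, 10]  # n = 17
    non_distracted = [11, 10, 12, 9, 11, 10, 11, 10, 10]                        # n = 9

    t_stat, p_value = stats.ttest_ind(distracted, non_distracted)
    print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
    # A large p-value (the paper reports p = 0.652) means the observed
    # difference in group means is consistent with chance variation.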

Conclusions. Every class member felt that they acquired the important learning points during the lecture. Those who were distracted by electronic devices during the lecture performed similarly to those who were not. However, the results should be interpreted with caution, as this was a small quasi-experimental study, and further research should examine the influence of different types of distraction on different types of learning.
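To make the small-sample caveat concrete, here is a quick power calculation, a sketch assuming Python with statsmodels; the "medium" effect size of d = 0.5 is an illustrative assumption, not a figure from the paper:

    # Statistical power of a 17-vs-9 two-sample t-test to detect an
    # assumed medium effect (Cohen's d = 0.5) at alpha = 0.05.
    # The effect size is an illustrative assumption, not from the paper.
    from statsmodels.stats.power import TTestIndPower

    power = TTestIndPower().power(
        effect_size=0.5,  # assumed "medium" effect (Cohen's d)
        nobs1=17,         # distracted group
        ratio=9 / 17,     # yields 9 students in the comparison group
        alpha=0.05,
    )
    print(f"power = {power:.2f}")  # roughly 0.2, far below the usual 0.8 target

In other words, even if distraction had a real medium-sized effect, a study this size would usually fail to detect it, so "no significant difference" here is weak evidence of "no difference."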

 
  • (Score: 0) by Anonymous Coward on Wednesday September 17 2014, @11:40AM (#94484)

    There are standards for pedagogical studies, and it is possible, though difficult, to do rigorous, high quality pedagogical studies. Not all "[Method X] makes students learn better" studies are crap.

    That said, PeerJ requires that "All authors on a paper must have a 'paid publishing plan' and there are 3 publishing plans, each conferring different rights. The three plans are: Basic (which allows for publishing once per year, for life); Enhanced (which allows for publishing twice per year, for life); and Investigator (which allows for publishing an unlimited number of articles per year, for life)." They claim to have a peer-review system, but it hasn't been around long enough to establish a reputation beyond "cheap and author-friendly." It does seem like the editors have substantial leeway to override reviewer criticisms. The people most happy with the journal seem to be from outside the traditional biomedical sciences, or publishing outside their usual profession. In the case of TFA, it's a couple of dentists.

    Interestingly, the reviews are also open [peerj.com]. The reviewers cite the (fairly obvious) flaws, and one of them recommended rejection (i.e., flawed in design and implementation, and not fixable). The reviewers never saw the revised manuscript, which the editor accepted after a final round of wordsmithing. This is exactly the concern academics have with new journals, particularly "online only" journals in which the authors pay: that the low cost of publication (or the high marginal revenue per manuscript) will result in low standards of evidence and rigor.

  • (Score: 2) by TGV (2838) on Wednesday September 17 2014, @12:10PM (#94497)

    It's interesting that the reviews are open; that at least shows what considerations led to publication. In this case, the deciding consideration does indeed seem to be the editor's interest, i.e. free publicity.