
posted by n1 on Wednesday September 17 2014, @04:49AM   Printer-friendly
from the was-not-paying-attention-to-begin-with dept.

A small study of electronic device usage during lectures found minimal difference in scores on a subsequent quiz between students who were distracted during the lecture and those who weren't.

Results. The sample comprised 26 students. Of these, 17 were distracted in some form (either checking email, sending email, checking Facebook, or sending texts). The overall mean score on the test was 9.85 (9.53 for distracted students and 10.44 for non-distracted students). There were no significant differences in test scores between distracted and non-distracted students (p = 0.652). Gender and types of distractions were not significantly associated with test scores (p > 0.05). All students believed that they understood all the important points from the lecture.

Conclusions. Every class member felt that they acquired the important learning points during the lecture. Those who were distracted by electronic devices during the lecture performed similarly to those who were not. However, results should be interpreted with caution as this study was a small quasi-experimental design and further research should examine the influence of different types of distraction on different types of learning.
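As a quick sanity check on the quoted figures, the reported overall mean follows from the two subgroup means; a minimal sketch in Python, using only the numbers given in the summary:

```python
# Figures quoted in the study summary
n_distracted, mean_distracted = 17, 9.53
n_focused, mean_focused = 9, 10.44

# Weighted average of the two subgroup means
overall = (n_distracted * mean_distracted + n_focused * mean_focused) / (
    n_distracted + n_focused
)
print(overall)  # ~9.85, matching the reported overall mean
```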

This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 0) by Anonymous Coward on Wednesday September 17 2014, @05:05AM

    by Anonymous Coward on Wednesday September 17 2014, @05:05AM (#94396)

    p = 0.652? ROFL. Does the author even know what the p value means?

  • (Score: 3, Informative) by TGV on Wednesday September 17 2014, @05:55AM

    by TGV (2838) on Wednesday September 17 2014, @05:55AM (#94406)

    Sounds like inexperienced experimenters to me.

    First: 26 samples is nothing, and by the looks of it, the variance is pretty high. This study lacks power. And even within the Neyman-Pearson framework (significance testing), you can't assume H0 is true just because you can't reject it.

    Second: generalization. One class?

    Third: "The students believed", "Every class member felt"? That's not really objective with respect to the goal, is it? If that's what they asked, the conclusion is about students' beliefs. So then the title of the article should be "Students distracted by electronic devices can't believe they don't perform at the same level as those who are focused on the lecture".
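    The power complaint can be made concrete. A rough back-of-the-envelope sketch (normal approximation; the 17/9 split is from the summary, while the 0.5-SD "medium" effect size is a hypothetical assumption):

```python
from math import sqrt, erf

def norm_cdf(x):
    """Standard normal CDF."""
    return 0.5 * (1 + erf(x / sqrt(2)))

def approx_power(n1, n2, effect_sd, z_crit=1.96):
    """Approximate power of a two-sided two-sample z-test when the
    true mean difference equals `effect_sd` pooled standard deviations."""
    se = sqrt(1 / n1 + 1 / n2)   # SE of the mean difference, in SD units
    z = effect_sd / se
    return (1 - norm_cdf(z_crit - z)) + norm_cdf(-z_crit - z)

# With 17 distracted vs 9 non-distracted students, even a medium (0.5 SD)
# true effect would be detected only about a quarter of the time.
print(approx_power(17, 9, 0.5))   # ~0.23
```

    So a non-significant p-value here says very little: the design was nearly blind to anything but a huge effect.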

    • (Score: 2) by frojack on Wednesday September 17 2014, @06:53AM

      by frojack (1554) on Wednesday September 17 2014, @06:53AM (#94413) Journal

      Look, if the lecture was as content free as this study, chances are that any random student on campus could score as well never hearing the lecture at all.

      --
      No, you are mistaken. I've always had this sig.
      • (Score: 2) by TGV on Wednesday September 17 2014, @06:55AM

        by TGV (2838) on Wednesday September 17 2014, @06:55AM (#94414)

        Good point.

      • (Score: 2) by jimshatt on Wednesday September 17 2014, @10:14AM

        by jimshatt (978) on Wednesday September 17 2014, @10:14AM (#94463) Journal

        Heh, that was the point I was going to make. Distracted from what!?
    • (Score: 2) by c0lo on Wednesday September 17 2014, @07:09AM

      by c0lo (156) Subscriber Badge on Wednesday September 17 2014, @07:09AM (#94422) Journal

      First: 26 samples is nothing...
      ...
      ...
      title of the article should be "Students distracted by electronic devices can't believe they don't perform at the same level as those who are focused on the lecture".

      Sorry, I had to answer to an email. So, you were sayin'...?
      Oh, don't tell me I can't perform

      --
      https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford
    • (Score: 0) by Anonymous Coward on Wednesday September 17 2014, @11:40AM

      by Anonymous Coward on Wednesday September 17 2014, @11:40AM (#94484)

      There are standards for pedagogical studies, and it is possible, though difficult, to do rigorous, high quality pedagogical studies. Not all "[Method X] makes students learn better" studies are crap.

      That said, PeerJ requires "All authors on a paper must have a 'paid publishing plan' and there are 3 publishing plans, each conferring different rights. The three plans are: Basic (which allows for publishing once per year, for life); Enhanced (which allows for publishing twice per year, for life); and Investigator (which allows for publishing an unlimited number of articles per year, for life)." They claim to have a peer-review system, but it hasn't really been around long enough to establish a reputation beyond "cheap and author-friendly." It does seem like the editors have substantial leeway to override reviewer criticisms. It seems like the people most happy with the journal are from outside of traditional biomedical sciences, or people publishing outside their usual profession. In the case of TFA, it's a couple of dentists.

      Interestingly, the reviews are also open [peerj.com]. The reviewers cite the (fairly obvious) flaws, and one of them recommended rejection (ie, flawed in design and implementation, and not fixable). The reviewers never saw the revised manuscript, which the editor accepted following a final round of wordsmithing. This is exactly the concern that academics have with new journals, and particularly with "online only" journals in which the authors pay: that the low cost of publication (or the high marginal revenue per manuscript) will result in low standards of evidence and rigor.

      • (Score: 2) by TGV on Wednesday September 17 2014, @12:10PM

        by TGV (2838) on Wednesday September 17 2014, @12:10PM (#94497)

        It's interesting that the reviews are open; that at least shows what considerations led to publication. In this case, it seems it was indeed in the editor's interest, i.e. free publicity.

    • (Score: 2) by opinionated_science on Wednesday September 17 2014, @01:12PM

      by opinionated_science (4031) on Wednesday September 17 2014, @01:12PM (#94524)

      I agree, this is clearly Bullsh!t.

      The average human brain has only so much processing power and I guarantee if you are not focusing on the material at hand, you are not doing your best. I found streaming very helpful in this regard, since the pace of teaching could keep you engaged.

      Exams may not be the best educational tool, but they are at least objective data (we can argue the syllabus, but we can't argue that an exam was taken!).

      Perhaps a good unit of measure of exam effectiveness would be "educational impedance". The current is the transfer of understanding, the voltage is the "teaching pressure", and the resistance is a term combining the student's intellect and the teaching methods...

    • (Score: 2) by melikamp on Wednesday September 17 2014, @04:16PM

      by melikamp (1886) on Wednesday September 17 2014, @04:16PM (#94601) Journal

      26 samples [sic] is nothing

      I wouldn't say that a sample size of 26 is "nothing", but it looks like the investigators made some choices that rendered the study effectively meaningless. First, there really are 2 samples here: one of size 17, and the other of size 9, and 9 is a really small sample size. Second, a quick search of TFA fails to bring up the word "population", and the sampling process is not described, so it looks like it wasn't really a sample at all (samples are taken out of populations they are supposed to represent). In other words, this was properly a survey of a population of size 26, and no conclusion of this study can apply to any other student population. Lastly, what about people being distracted by someone else surfing? A more interesting result could be obtained by splitting the 26 people into 2 groups of 13 randomly, and then giving them the same lecture, with one group being allowed to surf and the other forbidden. It would be a qualitatively different kind of conclusion, but at least it would be meaningful.
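      The suggested randomized design is easy to set up; a minimal sketch (the student IDs and the seed are placeholders):

```python
import random

students = list(range(26))    # placeholder IDs for the 26 students
rng = random.Random(0)        # fixed seed so the split is reproducible
rng.shuffle(students)

surf_group = students[:13]      # allowed to use devices during the lecture
control_group = students[13:]   # devices forbidden
```

      With random assignment, any systematic score difference between the groups could then be attributed to the device use itself rather than to who chooses to be distracted.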

      • (Score: 2) by TGV on Wednesday September 17 2014, @06:35PM

        by TGV (2838) on Wednesday September 17 2014, @06:35PM (#94650)

        Actually, if all of the population was there, it is a fact that the distracted students scored worse. It doesn't generalize, though.

        Anyway, 26 data points is really nothing in this kind of test. The variance is too high. Suppose you want to distinguish a false coin from a true coin (the false one with probability p for heads, and the true one with probability 0.5). With 26 drawings, the 95% interval (leaving both tails at 5%) is from 8 to 17. So to have a 95% chance (a priori) that your false coin throws less than 8 (or more than 17) heads in 26 samples, it would need to be something like 0.2 (or 0.8). If it's inside the range 0.2-0.8, you don't have enough samples to be relatively sure to find a difference before starting the experiment.
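        The coin numbers are easy to verify exactly with the binomial distribution; a sketch following the same setup (26 flips, rejection region of fewer than 8 or more than 17 heads):

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k + 1))

def p_reject(p, n=26):
    """Chance of fewer than 8 or more than 17 heads in n flips."""
    return binom_cdf(7, n, p) + (1 - binom_cdf(17, n, p))

print(p_reject(0.5))   # ~0.05: a fair coin rarely leaves the 8..17 range
print(p_reject(0.2))   # ~0.87: even a p = 0.2 coin is caught only ~87% of the time
```

        So with 26 flips, only a coin biased roughly beyond 0.2 (or 0.8) gives a decent chance of detection, which is the point: modest effects are invisible at this sample size.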

        In psycholinguistic experiments of this nature, the number of subjects is usually around 30, and that's not considered high. Each subject usually provides 10 to 20 (or more) samples in the same condition, and ideally across all conditions. In this case, that would mean at least 30x10x2 = 600 samples instead of 26.

  • (Score: 2) by BsAtHome on Wednesday September 17 2014, @05:58AM

    by BsAtHome (889) on Wednesday September 17 2014, @05:58AM (#94408)

    The small size of the group is a problem for generalizing the conclusion. Also, possible sources of bias are the subject of the lecture and the person who gives the lecture. The third problem is that the test was performed non-anonymously, and the paper does not state how the students were selected from the group (randomly or otherwise) or who selected them. So, it may be implicitly biased towards a good/bad group of students.

    As a pilot, it may be an interesting try, but this is a very far cry from being able to make any significant conclusion from the data collected or saying something about the actual influence of device use in lectures.

    • (Score: 0) by Anonymous Coward on Wednesday September 17 2014, @06:24AM

      by Anonymous Coward on Wednesday September 17 2014, @06:24AM (#94412)

      The lack of generalizability comes from the sampling bias (one class of similar students), not from the size of the study group: the size is what gives the experiment its low power (meaning it would take a strong effect to reject the null hypothesis). Really, 9 non-distracted people? My survey for my English class had more data than that!

      Regardless, there is a conclusion worth noting here: there isn't an ever-present, very strong effect massively harming test scores from these distractions. Sure, they couldn't show that no effect exists, but they can show that the effect is not always large.

      Oh wait, it was an observational study, not an experiment? Never mind my points then, totally useless. This simply tells you that you shouldn't auto-fail distracted people: because of selection bias, it provides no information for students on whether they should allow themselves to be distracted (that needs random assignment). I was heavily distracted in classes where I knew the material well, and paid careful attention when I struggled, for example: these confounding factors matter!
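      That confounding point can be illustrated with a toy simulation (all numbers here are hypothetical): suppose being distracted truly costs a point, but stronger students are more likely to let themselves be distracted. The observed gap then points the wrong way:

```python
import random

rng = random.Random(1)
TRUE_EFFECT = -1.0   # hypothetical: distraction truly costs one point

distracted_scores, focused_scores = [], []
for _ in range(10_000):
    ability = rng.gauss(10, 2)
    # Selection bias: students who already know the material get distracted
    distracted = ability > 10
    score = ability + (TRUE_EFFECT if distracted else 0.0) + rng.gauss(0, 1)
    (distracted_scores if distracted else focused_scores).append(score)

observed_gap = (sum(distracted_scores) / len(distracted_scores)
                - sum(focused_scores) / len(focused_scores))
print(observed_gap)   # positive, even though the true causal effect is negative
```

      An observational comparison of means recovers the selection effect, not the causal one, which is exactly why random assignment is needed.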

  • (Score: 2) by aristarchus on Wednesday September 17 2014, @07:10AM

    by aristarchus (2645) on Wednesday September 17 2014, @07:10AM (#94424) Journal

    So, not significantly associated with test scores. . . IMBECILE!!! Do you even know what tests are supposed to do in a well-ordered pedagogy? Obviously not, since you took test scores as some kind of indicator of educational success. God save us from these people who fail to be able to teach at the most basic levels, and think that they are somehow competent to judge the teaching success of others! The test is not the point! The test is a tool to direct the students to what they have been missing, on a rather gross level. The fact that the "distracted students" (I prefer the technical term, "a**H**es") do not do terribly worse than the rest says nothing about how much of the content of a course they are understanding, since there is much more to any course than what is covered in an exam. Oh, how I mourn what was education in America! Americans adopted the best of the European systems and got rid of the classism, producing the most promising system of higher education in the world. And now, they think they cannot afford it, because they have lost all sense of value.

    • (Score: 0) by Anonymous Coward on Wednesday September 17 2014, @08:01AM

      by Anonymous Coward on Wednesday September 17 2014, @08:01AM (#94438)

      there is much more to any course than what is covered in an exam.

      And there is a lot more to any subject than what is being said in the classroom. There is no universal guide to optimal learning, it all depends on the person.
      Personally I have always had problems keeping my focus during lectures (with or without electronics around), and even when I can focus, I get very little learning value compared to the time I spend there. So ever since second year at University, I've skipped almost all my classes, and attended only those few with compulsory attendance.
      But this doesn't mean that I'm a slacker. I take some time to check out several textbooks on the topic to find the one that fits my style best (even if it isn't the one followed by the professor), then I sit down and focus and study that textbook cover to cover. And I do all the exercises suggested by the professor on my own, supported by my chosen textbook, googling for relevant lecture notes from other universities and sometimes academic papers, and dialogue with the other students. This way of studying fits me a lot better than attending lectures, and it has given me mostly just A's in all the classes that I've skipped. Now I'm doing a master thesis in theoretical physics, and doing a master's actually feels no different from the way I've been studying for years already, except that I now have weekly meetings with a very skilled professor that can guide me along.
      But I'm not saying that this way of studying suits everybody; I know a lot of other students who say they learn a lot more from lectures than from textbooks. And I know several students who say that they need to doodle or use electronics or read a book or something during a lecture in order to stay focused: this is something that distracts them a little bit, while sitting in a lecture and not doing anything else would distract them completely (sleepiness, wandering thoughts, etc.). I'm just saying that the best way to learn depends on the person, so don't be too quick to judge someone as a slacker or "assh**e" just because they tend to study in a different way from you.

      • (Score: 1) by hendrikboom on Wednesday September 17 2014, @04:31PM

        by hendrikboom (1125) Subscriber Badge on Wednesday September 17 2014, @04:31PM (#94605) Homepage Journal

        I have a close friend who always knits during lectures in order to be able to focus her mind on what is being said. Otherwise her mind wanders to hopelessly distracting topics. Knitting apparently takes just enough of her excess mental capacity that it helps her focus. I suspect it is an anchor that grounds her attention deficit somewhat.

        I notice the same effect when I'm doing nonogram puzzles (entirely visual reasoning) while watching (well, actually listening to) Jeopardy. Potential answers pop into my head, whereas otherwise I keep wondering what the clue is they are trying to answer.

    • (Score: 0) by Anonymous Coward on Wednesday September 17 2014, @11:52AM

      by Anonymous Coward on Wednesday September 17 2014, @11:52AM (#94488)

      ( I prefer the technical term, "a**H**es")

      aDHDes?

      Attention Deficit Hyperactivity Disorder and estimated stupid?

      That's what happens when people have an incomplete education and pathetic parenting (poorly educated parents with low emotional skills, stupid as hell, a bad social environment, and/or unmanaged mental illness (sometimes ADHD too, the parent becoming a "disaster parent")).

      People need to be convinced to abandon social stupidity networks and avoid massive exposure to chatting.

      I'm a 30-year-old with ADHD; my school days were an extreme disaster and I was a no-income geek who never did anything like computer programming (my dream) or advanced system administration (just playing with computers, no plans or goals). I started seeing psychologists and a psychiatrist this year and things are slowly progressing (with very great effort I managed to pass the entry exam for a two-year computer programming course at a public school).

      • (Score: 0) by Anonymous Coward on Wednesday September 17 2014, @12:10PM

        by Anonymous Coward on Wednesday September 17 2014, @12:10PM (#94498)

        ( I prefer the technical term, "a**H**es")

        aDHDes?

        Attention Deficit Hyperactivity Disorder and estimated stupid?

        Probably "AssHoles." Those students who sit in class watching reruns of Firefly or commenting on friends' Facebook walls. Those students whose activity is out of sync with the classroom presentation, distracting not only themselves but students around them and the lecturer.

        • (Score: 0) by Anonymous Coward on Wednesday September 17 2014, @10:57PM

          by Anonymous Coward on Wednesday September 17 2014, @10:57PM (#94711)

          Facebook and poker. That's what goes on in large lectures. I've seen it with my own eyes. It's always either Facebook or poker. I can't begin to imagine how much money has been lost by poker players during lectures.

  • (Score: 2, Insightful) by TheB on Wednesday September 17 2014, @07:28AM

    by TheB (1538) on Wednesday September 17 2014, @07:28AM (#94432)

    Published papers which use statistics should require a statistician to sign off on the study before submission.

    There are too many claims made that have no statistical basis. Anyone who has taken Stats 101 would know this study's claim is BS. The only claim that could be made is "This study cannot conclude that being distracted at some point during a lecture lowers performance." It shows only slightly more statistical competence than Mythbusters.

    Sadly this appears common in academia. I've proofread a few master's theses. 3/4 of them had false statistical claims, and all passed review. One paper claimed a correlation of 95%, which would have required a minimum sample size of 1,000+. 1,500 surveys were sent out, and only ~50 were returned!!! I pointed out the error, but the faculty adviser thought it was fine. I sat in on the review, and the same adviser was the only board member paying attention. After several papers full of fantasy statistics, I complained to a department head about the total disregard for statistics in master's theses, and was told "Statistics only count for statistics majors. We are only trying to teach the process of a formal paper, and at this stage the result of the paper don't really matter."... So when, between a master's degree and a doctorate, do they teach distilling truth instead of polluting knowledge? Hopefully other universities are better.

    • (Score: 1) by Tanuki64 on Wednesday September 17 2014, @09:15AM

      by Tanuki64 (4712) on Wednesday September 17 2014, @09:15AM (#94453)

      1,500 surveys were sent out, only ~50 were returned!!!

      HEY, this is very good. The most common statistic has n=1: 'My uncle/grandfather/father smoked his whole life and died at 95'. ;-)

  • (Score: 2) by arslan on Wednesday September 17 2014, @07:39AM

    by arslan (3462) on Wednesday September 17 2014, @07:39AM (#94434)

    Did they have some scantily clad girl/guy as ... ermm... "control" for the experiment? If not, it's a phail!!

    Also, pics/videos if they did, otherwise it didn't happen...

  • (Score: 2) by GreatAuntAnesthesia on Wednesday September 17 2014, @09:22AM

    by GreatAuntAnesthesia (3275) on Wednesday September 17 2014, @09:22AM (#94457) Journal

    Another way of reading these results is that lectures are a really inefficient way of transferring information, so much so that students can afford to slack off and play with facebook half the time without missing anything important.

  • (Score: 1) by quixote on Wednesday September 17 2014, @10:05PM

    by quixote (4355) on Wednesday September 17 2014, @10:05PM (#94698)

    Yes, there are all the problems with stats everyone else has already pointed out.

    But another huge problem, in terms of generalizing the results to less advanced students, is that third-year med students (dental students, same thing) are very smart and very practiced students. The lecture was on special-needs dentistry. In other words, on something that might add a few factoids to what they already know. If anybody could be texting and still processing this minimal information, it's this group of students.

    Contrast that with a college freshman who isn't sure how long it takes the earth to orbit the sun and imagine him/her in a basic physics class. Can I get anyone to bet that texting while "listening" is going to have no effect on comprehension in that case?