
posted by janrinok on Tuesday April 24 2018, @06:25PM
from the safer-sipping dept.

The Sip Safe wristband lets you dab on a drop of your drink to test if it's been spiked.

You learn the rules early when you go to gigs, festivals and bars: Always keep an eye on your drink. Watch out for strangers. Be careful who you leave your glass with.

But now an Australian invention could change that (and put less onus on young people -- especially women -- to completely change the way they act when they're out).

The Sip Safe is a wristband designed for concerts and festivals that lets you test for drugs in your drink. Dab a drop of your drink onto the two spots on the band, wait two minutes till the liquid dries, and if the spots turn darker blue, that's a sign that your drink could have been spiked.

It's not the first invention designed to make drink safety easy -- we've seen drug-testing drinkware, sensors that look like swizzle sticks and even nail polish that tests for date-rape drugs. 


Original Submission

 
This discussion has been archived. No new comments can be posted.
  • (Score: 2) by JNCF on Tuesday April 24 2018, @10:02PM (3 children)

    by JNCF (4317) on Tuesday April 24 2018, @10:02PM (#671370) Journal

    The study with 3,303 cases is paywalled, but the abstract says

    This research highlights the need for the early collection of forensic samples in cases of alleged sexual assault. Law enforcement agencies and health professionals should establish guidelines and procedures to ensure that appropriate forensic samples (blood and urine) are collected in a timely manner following allegations of possible drug mediated sexual assault.

    I'd like to know more about the methodology, as it sounds like the authors themselves have reservations. How quickly was the urine collected and analyzed? The other statements in the abstract seem to be saying that they can't prove a link, but they don't say they've ruled it out. This might be a case of popsci reporting going further than the researchers themselves. I didn't look for the details of the study with 101 cases; even if it were conducted perfectly, the margin of error would be so high that calling a lack of evidence a statistical anomaly would be stretching the term. Any variable condition is a statistical anomaly if you have a small enough sample size.

  • (Score: 2) by JNCF on Tuesday April 24 2018, @10:02PM

    by JNCF (4317) on Tuesday April 24 2018, @10:02PM (#671371) Journal

    Statistical anomaly, rounding error, you know what I mean.

  • (Score: 0) by Anonymous Coward on Tuesday April 24 2018, @11:50PM (1 child)

    by Anonymous Coward on Tuesday April 24 2018, @11:50PM (#671422)

    I didn't look for the details of the study with 101 cases; even if it were conducted perfectly, the margin of error would be so high that calling a lack of evidence a statistical anomaly would be stretching the term. Any variable condition is a statistical anomaly if you have a small enough sample size.

    This is not the case. Sample size influences the margin of error, but a small sample size does not imply a large error.

    The margin of error in sampling is entirely determined by three factors, all of which are controlled by the researcher: the sample size, the desired confidence level (typically this is chosen to be 19 times out of 20), and the "complexity" of the question being answered by sampling. (This "complexity" has an objective mathematical definition based on shattering sets and fun stuff).

    With a sufficiently simple question, a very small sample size gives low error and/or high confidence. Suppose I have a crate full of papayas and I want to know what proportion of the papayas are tasty. I could select 100 papayas uniformly at random from the crate, eat them all, and conclude that the proportion of tasty papayas in the crate matches the proportion in the sample. This will give a certain margin of error for any desired confidence level, which can be calculated. For example, if I find that every one of the 100 sampled papayas was delicious, it is vanishingly unlikely that the true proportion of bad papayas in the crate is more than 50% (this would represent a large margin of error but a tremendously high confidence level).
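
    To put a number on it, here's a quick Python sketch (using the binomial approximation, i.e. assuming the crate is large enough that sampling without replacement doesn't matter):

    # If the crate were really 50% bad, how often would all 100
    # uniformly sampled papayas turn out tasty?
    p_bad = 0.5
    n = 100
    print((1 - p_bad) ** n)  # ~7.9e-31: vanishingly unlikely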

    In practice, the calculations normally go the other way: one fixes the margin of error and confidence level and then calculates the required sample size.
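
    For the curious, the usual normal-approximation sample-size formula for a proportion is n = z^2 * p * (1 - p) / E^2. A sketch with conventional numbers (illustrative, not from the study):

    import math
    z = 1.96  # z-score for 95% confidence ("19 times out of 20")
    E = 0.05  # desired margin of error: +/- 5 percentage points
    p = 0.5   # worst case, maximizes p * (1 - p)
    print(math.ceil(z**2 * p * (1 - p) / E**2))  # 385 samples needed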

    • (Score: 2) by JNCF on Wednesday April 25 2018, @03:03AM

      by JNCF (4317) on Wednesday April 25 2018, @03:03AM (#671489) Journal

      All your Bayes are belong to us. We were discussing a rounding error, which is < 1%. With a sample size of 101, you have a perfectly reasonable chance of missing a small but > 1% effect. I'm not saying the study is bunk -- I haven't looked into it -- but they'd have to be dealing with ridiculous priors to come up with the sort of numbers we're discussing from a pool of 101 with a high level of certainty.
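
      Concretely (a quick sketch; the 1% rate is illustrative, not a figure from the study):

      # If the true rate were 1%, how often would a study of 101 cases
      # see zero positives? (Binomial, assuming independent cases.)
      true_rate = 0.01
      n = 101
      print((1 - true_rate) ** n)  # ~0.36, i.e. about a third of the time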

      In my anecdotal dealings with academics, sample sizes often seem to be based on perceived funding and availability rather than preconceived notions about probabilities.