
posted by janrinok on Monday June 29 2015, @03:17PM   Printer-friendly
from the I'm-glad-you-asked dept.

I'm a neuroscientist in a doctoral program, but I have a growing interest in deep learning methods (e.g., http://deeplearning.net/ ). As a neuroscientist using MR imaging methods, I often rely on tools to help me classify and define brain structures and functional activations. Some of the most advanced tools for image segmentation are built on magical-sounding techniques such as AdaBoost-ed weak learners, auto-encoders, Support Vector Machines, and the like.

While I do not have the time to become a computer-science expert in artificial intelligence methods, I would like to establish a basic skill level in the application of some of these methods. Soylenters, "Do I need to know the mathematical foundation of these methods intimately to be able to employ them effectively or intelligently?" and "What would be a good way of becoming more familiar with these methods, given my circumstances?"


Original Submission

 
This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 3, Informative) by fritsd on Tuesday June 30 2015, @08:25AM

    by fritsd (4586) on Tuesday June 30 2015, @08:25AM (#203241) Journal

    I've seen people repartition datasets into training/testing, over and over, until the train and test groups gave the desired outcome...

    Then they're doing it wrong :-)

    Cross-validation [wikipedia.org] is a very useful technique for repartitioning your dataset into training and testing sets, but the idea is to obtain a more robust result, i.e. "is your predicted result really as good as you found?" In other words, it's meant to make your result *worse* but a bit more certain. (This is also in the Hastie & Tibshirani book; half of chapter 7 is about "does cross-validation really work, and how should you do it".)
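
    A minimal sketch of plain k-fold cross-validation in Python with scikit-learn, if that helps make it concrete; the SVM and the synthetic data here are just illustrative placeholders, not anything specific to your imaging pipeline:

        # Minimal 5-fold cross-validation sketch (scikit-learn).
        # The classifier and the synthetic dataset are stand-ins only.
        from sklearn.datasets import make_classification
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import SVC

        X, y = make_classification(n_samples=200, n_features=20, random_state=0)
        clf = SVC(kernel="rbf", C=1.0)

        # Each fold trains on 4/5 of the data and scores on the held-out 1/5,
        # so every sample is used for testing exactly once.
        scores = cross_val_score(clf, X, y, cv=5)
        print("per-fold accuracy:", scores)
        print("mean +/- std: %.3f +/- %.3f" % (scores.mean(), scores.std()))

    The mean of the per-fold scores is the robust(er) estimate; the spread across folds is the "a bit more certain" part.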

    I had to LOL at this XKCD cartoon: https://xkcd.com/882/ (significant) [xkcd.com], which shows exactly what you get if you're using it wrong.
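
    And a quick, purely synthetic sketch of the "re-split until it looks good" failure mode: the labels below are random noise, so the true accuracy is 0.5, yet the best-looking split out of 100 re-splits still seems respectable.

        # Sketch: repeatedly re-split the same data and keep the best-looking split.
        # The labels are pure noise, so any score well above 0.5 is cherry-picking.
        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.svm import SVC

        rng = np.random.RandomState(0)
        X = rng.randn(100, 20)
        y = rng.randint(0, 2, size=100)   # labels carry no signal at all

        best = 0.0
        for seed in range(100):
            X_tr, X_te, y_tr, y_te = train_test_split(
                X, y, test_size=0.3, random_state=seed)
            best = max(best, SVC().fit(X_tr, y_tr).score(X_te, y_te))

        print("best accuracy over 100 re-splits:", best)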

    Starting Score:    1  point
    Moderation   +1  
       Informative=1, Total=1
    Extra 'Informative' Modifier   0  
    Karma-Bonus Modifier   +1  

    Total Score:   3  
  • (Score: 0) by Anonymous Coward on Tuesday June 30 2015, @09:37PM

    by Anonymous Coward on Tuesday June 30 2015, @09:37PM (#203517)

    Cross-validation is not (at least on its face) a hard concept. I've used it, and used it correctly, in an imaging methods paper. Obviously, test data and training data need to be separated within each cross-validation experiment.
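
    One way to make that separation concrete (a generic sketch; the scaling and feature-selection steps are illustrative, not the method from any particular paper): wrap every data-dependent step in a scikit-learn Pipeline, so each step is re-fit on the training fold alone and nothing from the held-out fold leaks in.

        # Sketch: keep every data-dependent step inside each cross-validation fold.
        # The Pipeline re-fits scaling, feature selection and the classifier on
        # the training portion of each fold; the test portion is never touched.
        from sklearn.datasets import make_classification
        from sklearn.feature_selection import SelectKBest, f_classif
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        X, y = make_classification(n_samples=200, n_features=50, random_state=1)

        pipe = make_pipeline(
            StandardScaler(),              # fit on the training fold only
            SelectKBest(f_classif, k=10),  # feature selection inside the fold
            SVC(kernel="linear"),
        )

        print(cross_val_score(pipe, X, y, cv=5))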