I'm a neuroscientist in a doctoral program, but I have a growing interest in deep learning methods (e.g., http://deeplearning.net/ ). As a neuroscientist using MR imaging methods, I often rely on tools to help me classify and define brain structures and functional activations. Some of the most advanced tools for image segmentation are built on methods with magical-sounding names like AdaBoost-ed weak learners, auto-encoders, Support Vector Machines, and the like.
While I do not have the time to become a computer-science expert in artificial intelligence methods, I would like to establish a basic skill level in applying some of them. So, Soylenters: "Do I need to know the mathematical foundations of these methods intimately to be able to employ them effectively or intelligently?" and "What would be a good way of becoming more familiar with these methods, given my circumstances?"
(Score: 3, Informative) by fritsd on Tuesday June 30 2015, @08:25AM
Then they're doing it wrong :-)
Cross-validation [wikipedia.org] is a very useful technique for repartitioning your dataset into training and testing sets, but the idea is to obtain a more robust estimate, i.e. to answer "is your predicted result really as good as you found?" In other words, it's meant to make your result look *worse* but be a bit more certain. (This is also in the Hastie & Tibshirani book; half of chapter 7 is about "does cross-validation really work and how should you do it".)
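To make the mechanics concrete, here is a minimal pure-Python sketch of k-fold cross-validation (no ML library assumed; the "model" is just a stand-in that predicts the mean of the training labels, scored by mean absolute error). The point is the data flow: each fold is scored only on data the model never saw during training.

```python
def k_fold_indices(n, k):
    """Split indices 0..n-1 into k contiguous folds of near-equal size."""
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    folds, start = [], 0
    for size in fold_sizes:
        folds.append(list(range(start, start + size)))
        start += size
    return folds

def cross_validate(y, k=5):
    """Return the per-fold held-out error of a trivial 'predict the
    training mean' model."""
    folds = k_fold_indices(len(y), k)
    errors = []
    for test_idx in folds:
        held_out = set(test_idx)
        # Train only on samples NOT in the current test fold.
        train = [y[i] for i in range(len(y)) if i not in held_out]
        train_mean = sum(train) / len(train)
        # Score only on the held-out fold.
        fold_err = sum(abs(y[i] - train_mean) for i in test_idx) / len(test_idx)
        errors.append(fold_err)
    return errors

errors = cross_validate([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], k=3)
print(sum(errors) / len(errors))  # average held-out error across folds
```

In practice you'd shuffle the data before splitting and use a real estimator (e.g. scikit-learn's KFold with your classifier), but the separation of training and test indices inside each fold is the part that people get wrong.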
I had to LOL at this XKCD cartoon: https://xkcd.com/882/ (significant) [xkcd.com], which shows exactly what you get when you do it wrong.
(Score: 0) by Anonymous Coward on Tuesday June 30 2015, @09:37PM
Cross-validation is not (at least on its face) a hard concept. I've used it, and used it correctly, in an imaging methods paper. Obviously test data and training data need to be kept separate within each cross-validation experiment.