
posted by janrinok on Monday June 29 2015, @03:17PM
from the I'm-glad-you-asked dept.

I'm a neuroscientist in a doctoral program with a growing interest in deep learning methods (e.g., http://deeplearning.net/ ). As a neuroscientist using MR imaging methods, I often rely on tools to help me classify and define brain structures and functional activations. Some of the most advanced tools for image segmentation are built on techniques with magical-sounding names like AdaBoosted weak learners, auto-encoders, Support Vector Machines, and the like.

While I do not have the time to become a computer-science expert in artificial intelligence methods, I would like to establish a basic skill level in the application of some of these methods. Soylenters, "Do I need to know the mathematical foundation of these methods intimately to be able to employ them effectively or intelligently?" and "What would be a good way of becoming more familiar with these methods, given my circumstances?"
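As a rough illustration of what "applying" one of these methods can look like in practice, here is a minimal, hypothetical sketch: an off-the-shelf Support Vector Machine from scikit-learn classifying synthetic voxel features. The library choice, the feature set, and the data are all assumptions for illustration, not anything drawn from the question above.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Pretend each voxel is described by 10 features (intensity, local texture,
# spatial coordinates, ...) and labeled 0 = background, 1 = structure.
X = rng.normal(size=(5000, 10))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # synthetic labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Feature scaling plus an RBF-kernel SVM; the library handles the
# optimization details, which is largely the point.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))

The mathematical machinery stays inside the library; what the user supplies is sensible features, labels, and a held-out evaluation.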


Original Submission

 
  • (Score: 2) by Non Sequor on Monday June 29 2015, @07:30PM

    by Non Sequor (1005) on Monday June 29 2015, @07:30PM (#202962) Journal

    Using biology as a guide, any methodology that results in a decent decision a decent fraction of the time is a winner. 100% success rates are generally too expensive to be worthwhile. Heuristics outcompete algorithms when implementation cost is a factor.

    --
    Write your congressman. Tell him he sucks.
  • (Score: 3, Interesting) by TheLink on Tuesday June 30 2015, @08:08AM

    by TheLink (332) on Tuesday June 30 2015, @08:08AM (#203234) Journal
    You miss the point [1]. The point about 90% and the like is about understanding.

    If your simulation of the solar system is 10% off then you probably don't understand it so well and you need a new theory.

    So if your simulation of a testate amoeba is 10% off then you probably don't understand it so well either.

    Once you understand it, you can build on that understanding and achieve more. Just like when we went from Newtonian physics (which was quite good, and good enough in many cases) to relativity.

    [1] Or perhaps you're actually an AI that's replying based on heuristics. In which case congrats, you're better than most of those on Slashdot ;).
    • (Score: 1, Interesting) by Anonymous Coward on Tuesday June 30 2015, @09:37AM

      by Anonymous Coward on Tuesday June 30 2015, @09:37AM (#203256)

      A lot of this stuff does appear simple and was figured out between 1900 and 1950, but it goes largely ignored. Check out what this guy has been doing:

      More than a century has passed since Ramón y Cajal presented a set of fundamental biological laws of neuronal branching. He described how the shape of the core elements of the neural circuitry – axons and dendrites – are constrained by physical parameters such as space, cytoplasmic volume, and conduction time. The existence of these laws enabled him to organize his histological observations, to formulate the neuron doctrine, and to infer directionality in signal flow in the nervous system. We show that Cajal's principles can be used computationally to generate synthetic neural circuits. These principles rigorously constrain the shape of real neuronal structures, providing direct validation of his theories.

      http://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1000877 [plos.org]

      Complex neuronal branching patterns explained using a theory from 1912, with one parameter! This is the kind of research we need more of, and no, I'm not hyping my own work here.
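      To make the trade-off in that abstract concrete, here is a toy sketch (my own illustration, not the paper's actual model): grow a tree over random target points, attaching each point where the extra cable plus a balancing factor times the conduction path back to the root is smallest. The balancing factor bf and the random targets are pure assumptions.

      import numpy as np

      rng = np.random.default_rng(1)
      targets = rng.uniform(0, 100, size=(50, 2))   # hypothetical synapse sites
      bf = 0.5                                      # trade-off: cable vs. conduction time

      nodes = [np.array([50.0, 50.0])]              # start from a "soma" in the middle
      path_len = [0.0]                              # conduction distance back to the root
      edges = []                                    # (parent index, child index)
      remaining = list(range(len(targets)))

      while remaining:
          best = None
          for t in remaining:
              for n, node in enumerate(nodes):
                  wire = np.linalg.norm(targets[t] - node)
                  cost = wire + bf * path_len[n]    # cable added + weighted path to root
                  if best is None or cost < best[0]:
                      best = (cost, t, n, wire)
          _, t, n, wire = best
          nodes.append(targets[t])
          path_len.append(path_len[n] + wire)
          edges.append((n, len(nodes) - 1))
          remaining.remove(t)

      total_cable = sum(np.linalg.norm(nodes[b] - nodes[a]) for a, b in edges)
      print(f"{len(edges)} branches, total cable = {total_cable:.1f}")

      Raising bf favors short conduction paths back to the root; lowering it favors less total cable and more shared trunks. One parameter, qualitatively different branching shapes.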

      • (Score: 2) by TheLink on Sunday July 05 2015, @07:08PM

        by TheLink (332) on Sunday July 05 2015, @07:08PM (#205349) Journal

        But that's not really relevant to how single-celled creatures think or make decisions in the first place. Single-celled creatures do not have neurons.

        From what I see, it's not as if the simpler animals needed to be much smarter than those single-celled creatures, nor do they appear to be much smarter in practice: how much smarter are worms compared to testate amoebae or similar? White blood cells make decisions too, usually decent ones; if not, we would be dead. We know a fair bit about the hardware: https://www.youtube.com/watch?v=FzcTgrxMzZk [youtube.com]
        But it doesn't seem like we've figured out the software.

        And as you should see by now, single-celled creatures aren't really as simple as many assume. Hence my theory that nerves and neurons initially arose mainly to solve the problem of controlling a body made up of very many cells, not the problem of thinking. How do you control muscles and so on if the thinking stuff is just one cell? So those thinking cells had to work together.

        And if we haven't fully figured out how single-celled creatures think, we might be missing significant bits of how neurons think, just as Newtonian physics misses significant bits.