
posted by janrinok on Monday July 06 2015, @04:17AM
from the now-where-should-I-store-this-fact? dept.

BBC has a nice article:

Storing information so that you can easily find it again is a challenge. From purposefully messy desks to indexed filing cabinets, we all have our preferred systems. How does it happen inside our brains?

Somewhere within the dense, damp and intricate 1.5kg of tissue that we carry in our skulls, all of our experiences are processed, stored, and - sometimes more readily than others - retrieved again when we need them. It's what neuroscientists call "episodic memory" and for years, they have loosely agreed on a model for how it works. Gathering detailed data to flesh out that model is difficult.

But the picture is beginning to get clearer and more complete. A key component is the small, looping structure called the hippocampus, buried quite deep beneath the brain's wrinkled outer layer. It is only a few centimetres in length but is very well connected to other parts of the brain. People with damage to their hippocampus have profound memory problems and this has made it a major focus of memory research since the 1950s.


Original Submission

 
  • (Score: 2) by fritsd (4586) on Monday July 06 2015, @06:30PM (#205778) Journal

    If memory recall works by giving a piece of neural network a partial memory, from which it then retrieves the rest of that memory, does that mean Hopfield was onto something when he trained his auto-associative networks?

    I.e. if you train a network with input X and output X, then when you give it "the bit of X that you can remember at the moment", it would make sense for the output to be either X, or some other memory that was trained on something similar to that bit of X.
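
    To make the auto-associative idea concrete, here is a minimal numpy sketch of a Hopfield-style memory: store a few random patterns with the outer-product (Hebbian) rule, hand the network a half-corrupted copy of one of them, and let it settle. The network size, pattern count and update schedule are just illustrative choices, not taken from the article.

    # Minimal Hopfield-style auto-associative memory (illustrative sizes only)
    import numpy as np

    rng = np.random.default_rng(0)
    N = 64                                        # number of "neurons"
    patterns = rng.choice([-1, 1], size=(3, N))   # three memories X to store

    # Outer-product (Hebbian) learning rule, with the diagonal zeroed
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0)

    # "The bit of X you can remember at the moment": corrupt half of pattern 0
    cue = patterns[0].copy()
    cue[N // 2:] = rng.choice([-1, 1], size=N // 2)

    # Asynchronous threshold updates until the state settles into an attractor
    state = cue.copy()
    for _ in range(10):
        for i in rng.permutation(N):
            state[i] = 1 if W[i] @ state >= 0 else -1

    print("overlap with the stored memory:", (state == patterns[0]).mean())

    With only a few stored patterns relative to the network size, the corrupted cue should fall back into the attractor for pattern 0, which is exactly the "partial memory retrieves the whole memory" behaviour described above.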

    Also, is there evidence that Hebbian learning works in real life?
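
    For a flavour of what a Hebbian update looks like when written out, here is a small sketch of Oja's stabilised variant of Hebb's rule for a single linear neuron ("cells that fire together wire together", plus a decay term so the weights stay bounded). The learning rate, input dimension and step count are arbitrary illustration values, not biological estimates.

    # Oja's rule: Hebbian learning with a normalising decay term (toy values)
    import numpy as np

    rng = np.random.default_rng(1)
    eta = 0.01                                # learning rate
    w = rng.normal(scale=0.1, size=8)         # synaptic weights onto one neuron

    for _ in range(1000):
        pre = rng.normal(size=8)              # presynaptic activity
        post = w @ pre                        # postsynaptic activity (linear unit)
        w += eta * post * (pre - post * w)    # Hebb term minus weight decay

    print("learned weight vector:", np.round(w, 2))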

    Last question: should we give up on the convenient backprop algorithm and acquaint ourselves with more complicated spiking networks and spike-timing-dependent plasticity (STDP)? Is there a time dimension?
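
    On the STDP point, the usual pair-based rule is at least easy to write down: the weight change depends on the timing difference between pre- and postsynaptic spikes, potentiating when the presynaptic spike arrives first and depressing otherwise. A rough sketch, with made-up amplitudes and time constants:

    # Pair-based STDP window (amplitudes and time constants are placeholders)
    import numpy as np

    A_plus, A_minus = 0.01, 0.012             # potentiation / depression amplitudes
    tau_plus, tau_minus = 20.0, 20.0          # time constants in ms

    def stdp_dw(t_pre, t_post):
        """Weight change for one pre/post spike pair (times in ms)."""
        dt = t_post - t_pre
        if dt >= 0:                           # pre before post -> potentiation
            return A_plus * np.exp(-dt / tau_plus)
        return -A_minus * np.exp(dt / tau_minus)   # post before pre -> depression

    for dt in (-40, -10, 0, 10, 40):
        print(f"dt = {dt:+d} ms -> dw = {stdp_dw(0.0, float(dt)):+.4f}")

    So the time dimension is built into the rule itself, which is exactly what vanilla backprop on static inputs lacks.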
