posted by martyb on Monday December 05 2016, @04:13PM   Printer-friendly
from the learning-how-we-think dept.

MIT researchers and their colleagues have developed a new computational model of the human brain's face-recognition mechanism that seems to capture aspects of human neurology that previous models have missed.

The researchers designed a machine-learning system that implemented their model, and they trained it to recognize particular faces by feeding it a battery of sample images. They found that the trained system included an intermediate processing step that represented a face's degree of rotation—say, 45 degrees from center—but not the direction—left or right.

This property wasn't built into the system; it emerged spontaneously from the training process. But it duplicates an experimentally observed feature of the primate face-processing mechanism. The researchers consider this an indication that their system and the brain are doing something similar.
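A toy sketch (not the MIT model, whose internals aren't detailed here) of what such a mirror-symmetric intermediate code amounts to: a feature that keeps the magnitude of the head's rotation but discards its sign, so a 45-degree turn to the left and a 45-degree turn to the right map to the same value.

```python
# Illustrative sketch only -- the real network learned this invariance from
# training; here the reported property is simply hard-coded for clarity.

def mirror_symmetric_code(rotation_deg: float) -> float:
    """Encode a face's rotation magnitude while discarding its direction."""
    return abs(rotation_deg)

# A 45-degree left turn and a 45-degree right turn yield identical codes:
print(mirror_symmetric_code(-45.0) == mirror_symmetric_code(45.0))  # True
```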


Original Submission

This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 0) by Anonymous Coward on Monday December 05 2016, @06:28PM (#437298)

    Parsimony pretty much dictates factoring out the angle and lighting on a face* and comparing against what is essentially a 3D model. If you're dealing with thousands of faces, that's far more compact than storing many angles and many lighting conditions for each face.

    For example, if we consider 25 approximate possible lighting angles/types (including point source and hazy) and 25 approximate facial angles (including being tilted up and down for each horizontal angle), then 1000 faces would require storing 625,000 samplings (1000 x 25 x 25). That's a multiplicative mess. It's less info to first extract a 3D model, removing angle and lighting, and then compare to 1000 3D models (actual instances).

    Any score-based AI-ish training system that either punishes or limits storage is probably going to come to a similar "conclusion", and more or less store and construct 3D models for primary comparison. (Neural nets typically pre-limit storage, but genetic algorithms can be more flexible that way.)

    I suspect this will be found generally true of any AI that handles 3D objects. Factoring out the projection-to-2D combinations as a preliminary step results in less storage being needed: storing one 3D model takes far less memory than storing all (or enough) possible 2D projections (photos) of that model under enough possible lighting conditions.

    * Skin tones and eye-color should still be used in identification, but perhaps treated as a different process path.
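The storage comparison above can be sketched with the commenter's own illustrative numbers (1000 faces, 25 lighting conditions, 25 viewing angles):

```python
# Back-of-the-envelope arithmetic from the comment above (illustrative numbers).
faces = 1000
lighting_conditions = 25   # point source, hazy, various lighting angles, ...
viewing_angles = 25        # horizontal rotations plus up/down tilt

# Naive approach: store a 2D sample for every combination.
samples_2d = faces * lighting_conditions * viewing_angles
print(samples_2d)  # 625000

# Parsimonious approach: factor out angle and lighting, keep one 3D model per face.
models_3d = faces
print(models_3d)  # 1000
```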

  • (Score: 1) by Scruffy Beard 2 (6030) on Monday December 05 2016, @07:12PM (#437317)

    I would say that was a $5 word, except the definition says otherwise:

    I had to look it up [merriam-webster.com].

    • (Score: 0) by Anonymous Coward on Monday December 05 2016, @11:33PM (#437461)

      Mathematicians and linguists seem to use it often to mean the fewest words or symbols. In this case, it means less memory or fewer combinations.

      Or

      Parsimony: what a Parson pays in alimony ;-)

  • (Score: 0) by Anonymous Coward on Monday December 05 2016, @07:40PM (#437340)

    Facebook has been paying for this research for a while. Look up the sample work they did -- I think Sylvester Stallone was the example face model.

    The goal was for FB to be able to automatically detect and tag people in photos taken at an angle, poorly lit, or less than symmetrical (scars/jowls, etc.), since those factors can make one person look like someone else at an angle.

    Why they care is not something they discussed.