
posted by martyb on Monday December 05 2016, @04:13PM
from the learning-how-we-think dept.

MIT researchers and their colleagues have developed a new computational model of the human brain's face-recognition mechanism that seems to capture aspects of human neurology that previous models have missed.

The researchers designed a machine-learning system that implemented their model, and they trained it to recognize particular faces by feeding it a battery of sample images. They found that the trained system included an intermediate processing step that represented a face's degree of rotation—say, 45 degrees from center—but not the direction—left or right.

This property wasn't built into the system; it emerged spontaneously from the training process. But it duplicates an experimentally observed feature of the primate face-processing mechanism. The researchers consider this an indication that their system and the brain are doing something similar.
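
As a rough illustration of the kind of probe that could reveal such a property (a hypothetical sketch, not the MIT team's actual code: model_layer and face_at_angle are stand-ins for a trained network layer and an image source), one can check whether a layer's activations are mirror-symmetric, i.e. nearly identical for a face rotated the same number of degrees left or right:

    import numpy as np

    # Hypothetical probe for mirror-symmetric tuning: a layer that encodes
    # a face's degree of rotation but not its direction should respond
    # almost identically to views at +theta and -theta degrees.
    def mirror_symmetry_score(model_layer, face_at_angle, angles):
        scores = []
        for theta in angles:
            a_pos = np.ravel(model_layer(face_at_angle(+theta)))  # face rotated theta degrees one way
            a_neg = np.ravel(model_layer(face_at_angle(-theta)))  # same face, theta degrees the other way
            scores.append(np.corrcoef(a_pos, a_neg)[0, 1])        # correlation of the two activation vectors
        return float(np.mean(scores))                             # near 1.0: direction-blind, magnitude-tuned

A score near 1.0 across many angles would indicate the layer represents "how far from center" while discarding "which way", which is the property described above.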


Original Submission

 
  • (Score: 2) by ledow (5567) on Tuesday December 06 2016, @08:47AM (#437581)

    More importantly, what happens when you feed it an image that ISN'T a face?

    This seems, to me, to be the point where any form of "AI" falls down. Unexpected data basically throws up random results, whereas "intelligent" systems tend not to. And the AI is rarely given "I don't know, I can't see anything" as a possible response.

    Whereas even the youngest of children will just frown at you if you ask "which picture is Daddy" when neither of them is.

    I'm sure there's an algorithm for "is there a face-like feature in this image" and one for "is this face similar to any other", but actually conjoining them isn't straightforward, and training for both circumstances at once is harder.

    Even if there is, extrapolating doesn't produce "That's not a face, that's a butt" in preference to "No face detected".
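
    This missing "I don't know" is essentially a reject option on the classifier's output. A minimal sketch, assuming a hypothetical classifier that returns softmax probabilities over known identities (nothing from the paper itself):

        import numpy as np

        # Reject option: refuse to name a face when the classifier's best
        # guess is not confident enough, instead of returning a forced match.
        def recognise(probs, labels, threshold=0.6):
            best = int(np.argmax(probs))    # index of the most likely identity
            if probs[best] < threshold:
                return "I don't know"       # explicit reject, not a random-looking guess
            return labels[best]

        labels = ["Daddy", "Mummy"]
        print(recognise(np.array([0.55, 0.45]), labels))  # -> I don't know
        print(recognise(np.array([0.92, 0.08]), labels))  # -> Daddy

    The catch is that a non-face image can still yield a confidently wrong probability vector, so thresholding alone doesn't solve the problem.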

    The problem I have with this is that it's not AI, and realising there's an intermediate step that determines angle doesn't mean it's doing anything even vaguely similar to real intelligences, least of all us. In fact, if you feed in lots of faces at lots of angles, an intermediate compensating factor that works out what angle to perceive the features at seems like a necessary element for above-normal success, not evidence of brain-like processing.

    Feed it an image with an upside-down face. Humans are distinctly bad at recognising those - you can flip features like the lips, eyebrows, and nose in upside-down images, and humans rarely spot it until you rotate the image the right way up. If the system "detects" such a face, it's not recognising images the same way a human does.
