The Guardian is reporting that Google is trying to understand how its image-recognition neural network works by feeding it random noise, telling the network to look for certain features, and then feeding the resulting image back in. Apart from anything else, some of the images generated are astounding.
Link to original Google research article.
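The loop described above, amplifying whatever a feature detector "sees" in an image and feeding the result back, can be sketched as gradient ascent on the input. A minimal toy version, with a fixed random linear filter standing in for a layer of Google's trained network (all names and sizes here are illustrative, not from the article):

```python
import numpy as np

rng = np.random.default_rng(0)

def feature_response(image, filt):
    """Scalar 'activation' of the toy detector on the image."""
    return float(np.sum(image * filt))

def gradient_ascent_step(image, filt, step=0.1):
    """Nudge the image to increase the detector's response.

    For this linear detector the gradient with respect to the
    image is simply the filter itself; a real network would
    backpropagate to the input instead.
    """
    grad = filt
    return image + step * grad / (np.abs(grad).max() + 1e-8)

# Start from random noise, as the article describes, then
# repeatedly amplify whatever the detector responds to.
image = rng.normal(size=(8, 8))
filt = rng.normal(size=(8, 8))

before = feature_response(image, filt)
for _ in range(50):
    image = gradient_ascent_step(image, filt)
after = feature_response(image, filt)
```

After the loop, `after` is larger than `before`: the image has drifted toward whatever the detector is tuned to, which is the mechanism behind the dream-like pictures.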
(Score: 2) by acid andy on Sunday June 21 2015, @07:27PM
These could be self-associative (auto-associative) neural networks, e.g. Hopfield networks. I am not sure whether backprop is used with those, though; I'm no expert.
If a cat has kittens, does a rat have rittens, a bat bittens and a mat mittens?
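For what it's worth, a classic Hopfield network needs no backprop at all: a one-shot Hebbian rule sets the weights, and recall is just repeated thresholded updates. A minimal sketch (pattern and sizes are illustrative):

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian weight matrix: sum of outer products, zero diagonal."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)   # no self-connections
    return W / len(patterns)

def recall(W, state, steps=10):
    """Synchronous sign updates until the state settles."""
    for _ in range(steps):
        new = np.sign(W @ state)
        new[new == 0] = 1
        if np.array_equal(new, state):
            break
        state = new
    return state

# Store one +/-1 pattern, then recall it from a corrupted cue.
pattern = np.array([1, -1, 1, -1, 1, -1, 1, -1])
W = train_hopfield(pattern[None, :])

noisy = pattern.copy()
noisy[0] *= -1           # flip one bit
restored = recall(W, noisy)
```

With a single stored pattern, one update is enough to repair the flipped bit, so `restored` equals `pattern`.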
(Score: 0) by Anonymous Coward on Monday June 22 2015, @03:52AM
Their algorithm sucks if it is supposed to recognize bananas but does so even if there are no bananas... I think this is just spin on a failed project.
(Score: 2) by fritsd on Monday June 22 2015, @10:55AM
> Their algorithm sucks if it is supposed to recognize bananas but does so even if there are no bananas... I think this is just spin on a failed project.
Failed projects are sometimes the most interesting ones. Have you never heard of serendipity [wikipedia.org]? Like how Teflon [wikipedia.org] was invented? Tefal frying pans FTW.
(Score: 0) by Anonymous Coward on Monday June 22 2015, @07:19PM
No disagreement there. I just disagree with the (possible) obfuscation of the project's motivations.