At The Guardian, Cathy O'Neil writes about why algorithms can be wrong. She classifies the reasons into four categories on a spectrum ranging from unintentional errors to outright malfeasance. As algorithms now make a large portion of the decisions affecting our lives, scrutiny is ever more important, and she provides multiple examples of real-world impact in each category.
(Score: 2) by meustrus on Monday July 17 2017, @06:23PM
Normally you would isolate the learning data from the testing data, and only an isolated copy of the AI would be run on the test data and then terminated. The AI will never have seen the test data before.
There are two situations where this breaks down:
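The isolation the parent comment describes can be sketched in a few lines. This is a minimal toy illustration, not any particular library's API: the names and the trivial "model" are made up for the example. The point is only that the model is fit on the training portion, evaluated once on the held-out portion, and the test data never flows back into training.

```python
import random

random.seed(0)
data = [(x, 2 * x) for x in range(1, 101)]  # toy (input, target) pairs
random.shuffle(data)

# Split once; the test portion stays untouched until evaluation.
split = int(0.8 * len(data))
train, test = data[:split], data[split:]

def fit_mean_slope(pairs):
    """Toy 'model': average the ratio target/input over the training pairs."""
    return sum(y / x for x, y in pairs) / len(pairs)

slope = fit_mean_slope(train)  # learned only from training data

# Evaluate on the never-before-seen test set, then discard the model.
errors = [abs(y - slope * x) for x, y in test]
mean_error = sum(errors) / len(errors)
print(f"learned slope = {slope:.2f}, mean test error = {mean_error:.4f}")
```

In a real pipeline the split happens before any preprocessing, so that statistics computed for normalization or feature selection also never see the test data.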
If there isn't at least one reference or primary source, it's not +1 Informative. Maybe the underused +1 Interesting?