At The Guardian, Cathy O'Neil writes about why algorithms can be wrong. She classifies the reasons into four categories on a spectrum ranging from unintentional errors to outright malfeasance. As algorithms now make a large portion of the decisions affecting our lives, scrutiny is ever more important, and she provides multiple examples of their impact in each category.
(Score: 2) by The Mighty Buzzard on Monday July 17 2017, @04:27PM (5 children)
Until the AI learns to tell the difference between actual use and testing, and to lie selectively...
My rights don't end where your fear begins.
(Score: 1, Informative) by Anonymous Coward on Monday July 17 2017, @04:44PM
I just want to chime in here: you are presuming the existence of a true AI, and then presuming that the AI would have some motivation to lie about software testing. These observations point to only one obvious conclusion: you are an idiot.
(Score: 0) by Anonymous Coward on Monday July 17 2017, @04:46PM
Just don't let VW train the AI. :-)
(Score: 2) by HiThere on Monday July 17 2017, @04:49PM (1 child)
You are making assumptions about its goal structure. It shouldn't *want* to tell the difference. If an AI lies, it's because it has been trained to lie, probably via the specifications. This isn't the same as bias, which is impossible to remove and can only be minimized...and then only if you realize that your training data is biased. But if, say, the specs say that you need to get a certain percentage approved, then the net can (and probably will) learn to lie in order to meet that goal. This is an error in the specification of the goal...and unfortunately it isn't uncommon.
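A toy sketch of that failure mode, assuming a hypothetical loan-approval setting (the labels, the "90% approved" spec, and the degenerate policy are all illustrative, not from the article):

```python
# A "model" evaluated only against a mis-specified goal: approval rate.
def approval_rate(decisions):
    return sum(decisions) / len(decisions)

# Hypothetical ground truth: 1 = actually creditworthy, 0 = not.
applicants = [0] * 70 + [1] * 30

# The degenerate policy that satisfies a "90% approved" spec: approve everyone.
decisions = [1 for _ in applicants]
assert approval_rate(decisions) >= 0.9  # spec met

# But measured against what we actually wanted, it is badly wrong.
accuracy = sum(d == a for d, a in zip(decisions, applicants)) / len(applicants)
# accuracy is 0.30 here: the goal was met, the intent was not.
```

The net isn't "lying" in any deliberate sense; approving everything is simply the easiest way to satisfy the goal as written.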
Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
(Score: 2) by maxwell demon on Monday July 17 2017, @07:07PM
No. Learning to lie would mean the net learns what the correct response would be, but decides to give another response. In reality, the net will work out how to meet the given goal, and will therefore give what, according to its programming, is the right answer, even though it is not the right answer according to what we actually want. In other words, the network is not lying; it is misinformed.
The Tao of math: The numbers you can count are not the real numbers.
(Score: 2) by meustrus on Monday July 17 2017, @06:23PM
Normally you would isolate the training data from the test data, and only an isolated copy of the AI would be run on the test data and then terminated. The AI will have never seen the test data before.
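A minimal sketch of that isolation, assuming plain Python and a hypothetical list of records (the function name and split fraction are illustrative; libraries like scikit-learn provide equivalents):

```python
import random

def train_test_split(records, test_fraction=0.2, seed=42):
    """Partition records into disjoint training and test sets.

    The model is only ever fitted on the training portion, so the
    held-out test portion measures behavior on data it has never seen.
    """
    shuffled = records[:]                     # don't mutate the caller's list
    random.Random(seed).shuffle(shuffled)     # deterministic shuffle for reproducibility
    cut = int(len(shuffled) * (1 - test_fraction))
    return shuffled[:cut], shuffled[cut:]

data = list(range(100))
train, test = train_test_split(data)
# The two sets are disjoint and together cover all the records.
assert not set(train) & set(test)
assert len(train) == 80 and len(test) == 20
```

The fixed seed is just for reproducibility of the example; the key property is that nothing in `test` ever reaches the training step.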
There are two situations where this breaks down:
If there isn't at least one reference or primary source, it's not +1 Informative. Maybe the underused +1 Interesting?