Submitted via IRC for chromas
Trust but verify: Machine learning's magic masks hidden frailties
The idea sounded good in theory: Rather than giving away full-boat scholarships, colleges could optimize their use of scholarship money to attract students willing to pay most of the tuition costs.
So instead of offering a $20,000 scholarship to one needy student, they could divide the same amount into four scholarships of $5,000 each and dangle them in front of wealthier students who might otherwise choose a different school. Luring four paying students instead of one nonpayer would create $240,000 in additional tuition revenue over four years.
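That arithmetic is easy to verify. A quick sanity check, assuming tuition of $20,000 a year (so the $20,000 scholarship is the full boat), awards that renew annually, and admits who stay all four years:

```python
# Sanity check of the $240,000 figure. Assumptions (not spelled out in the
# article): tuition is $20,000/year, awards renew yearly, and every admit
# stays four years.
TUITION, YEARS = 20_000, 4

# One full-ride student pays nothing.
full_ride_revenue = 1 * (TUITION - 20_000) * YEARS      # $0

# Four students with $5,000 awards each pay $15,000/year.
leveraged_revenue = 4 * (TUITION - 5_000) * YEARS       # $240,000

print(leveraged_revenue - full_ride_revenue)            # 240000
```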
The widely used practice, called "financial aid leveraging," is a perfect application of machine learning, the form of predictive analytics that has taken the business world by storm. But it turned out that the long-term unintended consequence of this leveraging is an imbalance in the student population between economic classes, with wealthier applicants gaining admission at the expense of poorer but equally qualified peers.
[...] Financial aid leveraging is one of several examples of questionable machine-learning outcomes cited by Samir Passi and Solon Barocas of Cornell University in a recent paper about fairness in problem formulation. Misplaced assumptions, failure to agree on desired outcomes, and unintentional biases introduced by incomplete training data are just some of the factors that can cause machine learning programs to go off the rails, yielding results that are useless at best and misleading at worst.
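To see how problem formulation alone can produce that skew, consider a minimal sketch in which the objective handed to the optimizer is simply "expected net tuition revenue per aid dollar." Every number below is invented for illustration; nothing here comes from the paper:

```python
# Toy illustration of problem formulation gone wrong: the objective is
# revenue, so the allocator skews wealthy without anyone intending it.
applicants = [
    # (label, net tuition paid per year after a $5,000 award,
    #  probability the award tips their enrollment decision)
    ("needy-1",        0, 0.90),
    ("needy-2",        0, 0.90),
    ("wealthy-1", 15_000, 0.40),
    ("wealthy-2", 15_000, 0.40),
    ("wealthy-3", 15_000, 0.40),
    ("wealthy-4", 15_000, 0.40),
]

YEARS, AWARD, BUDGET = 4, 5_000, 20_000

def expected_revenue(net_pay_per_year, p_enroll):
    """Expected four-year tuition revenue from offering one $5,000 award."""
    return p_enroll * net_pay_per_year * YEARS

ranked = sorted(applicants, key=lambda a: expected_revenue(a[1], a[2]),
                reverse=True)
funded = [name for name, *_ in ranked[: BUDGET // AWARD]]
print(funded)  # ['wealthy-1', 'wealthy-2', 'wealthy-3', 'wealthy-4']
# Nobody told the model to exclude needy applicants; the objective did.
```

Same budget, same applicants, and a perfectly "correct" optimizer; the imbalance comes entirely from how the problem was posed.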
"People often think that bad machine learning systems are equated with bad actors, but I think the more common problem is unintended, undesirable side effects," Passi said in an interview with SiliconANGLE.
[...] Like most branches of artificial intelligence, machine learning has acquired a kind of black-box mystique that can easily mask some of its inherent frailties. Despite the impressive advances computers have made in tasks like playing chess and piloting driverless automobiles, their algorithms are only as good as the people who built them and the data they're given.
The upshot: Work on machine learning in coming years is likely to focus on cracking open that black box and devising more robust methods to make sure those algorithms do what they’re supposed to do and avoid collateral damage.
Any organization that's getting started with machine learning should be aware of the technology's limitations as well as its power.
(Score: 1, Insightful) by Anonymous Coward on Monday February 11 2019, @12:50PM (3 children)
These algorithms will just optimize for whatever you set as the target. If you choose the number of paper clips produced, they will gladly consume all the minerals in the entire solar system in order to make unused paper clips.
https://wiki.lesswrong.com/wiki/Paperclip_maximizer
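In miniature (numbers made up, obviously):

```python
# Toy proxy-objective failure: the optimizer sees only the stated target.
# Resource use is never penalized, so it spends everything on offer.

def score(plan):
    """Stated objective: paperclips produced. Nothing else is scored."""
    return plan["minerals_spent"] * 100   # 100 clips per unit of minerals

available_minerals = 1_000_000
plans = [{"minerals_spent": m} for m in (0, 10, 1_000, available_minerals)]

best = max(plans, key=score)
print(best)  # {'minerals_spent': 1000000} -- consume everything on offer
# Somebody wanted maybe 1,000 paperclips; the optimizer made 100,000,000,
# because "paperclips produced" was the target, not "paperclips needed".
```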
(Score: -1, Redundant) by Anonymous Coward on Monday February 11 2019, @01:34PM (2 children)
The flaw is not in the machine but, as usual, in the human.
A machine that works, say, 99.99% of the time gets a human supervisor to catch that 0.01% of catastrophic screwups. Problem averted, right?
But when the machine works almost, but not quite, perfectly, the laziness of the human supervisor always wins.
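Back of the envelope (decision volume and the supervisor's catch rate are assumptions, picked to make the point):

```python
# How often does that 0.01% show up, and what does a bored human miss?
failure_rate = 0.0001                # machine wrong 0.01% of the time
decisions_per_day = 10_000           # assumed volume for a busy system

expected_bad_calls = failure_rate * decisions_per_day
print(expected_bad_calls)            # ~1 catastrophe candidate per day

# If the supervisor, lulled by 9,999 good calls, catches only 90% of the
# bad ones, the chance at least one slips through on a given day:
p_escape = 1 - (1 - failure_rate * 0.10) ** decisions_per_day
print(round(p_escape, 3))            # ~0.095 -> one escape every ~10 days
```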
Press ENTER to continue.
(Score: 0) by Anonymous Coward on Monday February 11 2019, @02:42PM
Yea, these humans should be fired though. Programming a computer to give money to rich people but not poor people and then blaming the computer for doing that is just dumb.
(Score: 0) by Anonymous Coward on Monday February 11 2019, @10:33PM
Certainly. See the Tesla "autopilot" people. They're not gonna verify anything in any significant way before trusting the machine with their own, and other people's, lives.