Submitted via IRC for chromas
Trust but verify: Machine learning's magic masks hidden frailties
The idea sounded good in theory: Rather than giving away full-boat scholarships, colleges could optimize their use of scholarship money to attract students willing to pay most of the tuition costs.
So instead of offering a $20,000 scholarship to one needy student, they could divide the same amount into four scholarships of $5,000 each and dangle them in front of wealthier students who might otherwise choose a different school. Luring four paying students instead of one nonpayer would create $240,000 in additional tuition revenue over four years.
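The $240,000 figure can be checked with a quick back-of-the-envelope calculation. This sketch assumes an annual tuition of $20,000 and a four-year degree, numbers the example implies (a $20,000 scholarship is described as "full-boat") but does not state outright:

```python
# Hypothetical illustration of the "financial aid leveraging" arithmetic.
# Assumptions (not stated explicitly in the article): tuition is $20,000
# per year and a degree takes four years.

tuition = 20_000     # assumed annual tuition
years = 4            # assumed length of degree
aid_budget = 20_000  # annual scholarship money available, per the example

# Option A: one full scholarship to a needy student, who pays nothing.
revenue_full = (tuition - aid_budget) * years  # $0 over four years

# Option B: split the budget into four $5,000 scholarships for students
# who each pay the remaining $15,000 per year.
n_students = 4
discount = aid_budget // n_students            # $5,000 each
revenue_split = n_students * (tuition - discount) * years

print(revenue_split - revenue_full)  # → 240000
```

Four students each paying $15,000 a year for four years yields $240,000 that the single fully funded student would never have paid, which is exactly the gap the optimization chases.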
The widely used practice, called "financial aid leveraging," is a perfect application of machine learning, the form of predictive analytics that has taken the business world by storm. But it turned out that the long-term unintended consequence of this leveraging is an imbalance in the student population between economic classes, with wealthier applicants gaining admission at the expense of poorer but equally qualified peers.
[...] Financial aid leveraging is one of several examples of questionable machine-learning outcomes cited by Samir Passi of Princeton University and Solon Barocas of Cornell University in a recent paper about fairness in problem formulation. Misplaced assumptions, failure to agree on desired outcomes and unintentional biases introduced by incomplete training data are just some of the factors that can cause machine learning programs to go off the rails, yielding data that’s useless at best and misleading at worst.
"People often think that bad machine learning systems are equated with bad actors, but I think the more common problem is unintended, undesirable side effects," Passi said in an interview with SiliconANGLE.
[...] Like most branches of artificial intelligence, machine learning has acquired a kind of black-box mystique that can easily mask some of its inherent frailties. Despite the impressive advances computers have made in tasks like playing chess and piloting driverless automobiles, their algorithms are only as good as the people who built them and the data they're given.
The upshot: Work on machine learning in coming years is likely to focus on cracking open that black box and devising more robust methods to make sure those algorithms do what they’re supposed to do and avoid collateral damage.
Any organization that's getting started with machine learning should be aware of the technology's limitations as well as its power.
(Score: 2) by VLM on Monday February 11 2019, @03:32PM
Ah I found it, Sept 2000 regarding DVD sales, looks like Amazon violated Robinson-Patman regarding DVD prices (and Diamond Rio mp3 players) but they were never charged or punished, just issued an "oh shit we messed up" and a claim they'd no longer violate the act.
It's hard to search because, as you'd guess, the obvious search terms find many legal books for sale on Amazon, LOL.
Robinson-Patman is pretty lame because it requires proof of monopolizing the market; it's OK if the same behavior merely happens to increase profit. If it puts Sears out of business as a direct and obvious result, then it's a violation; if it merely makes more revenue, it's OK. The government has pretty much given up on enforcing it outside the very occasional politically oriented prosecution.
There are better, more easily applied antitrust laws to use against online behemoths.