
SoylentNews is people

posted by Fnord666 on Monday February 11 2019, @12:25PM   Printer-friendly
from the who-watches-the-watchers dept.

Submitted via IRC for chromas

Trust but verify: Machine learning's magic masks hidden frailties

The idea sounded good in theory: Rather than giving away full-boat scholarships, colleges could optimize their use of scholarship money to attract students willing to pay most of the tuition costs.

So instead of offering a $20,000 scholarship to one needy student, they could divide the same amount into four scholarships of $5,000 each and dangle them in front of wealthier students who might otherwise choose a different school. Luring four paying students instead of one nonpayer would create $240,000 in additional tuition revenue over four years.
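The arithmetic behind that figure can be sketched as follows. This is an illustrative back-of-the-envelope model, not from the article: the variable names are invented, and it assumes annual tuition equals the full scholarship amount, so the fully funded student contributes no tuition revenue.

```python
# Hypothetical sketch of the "financial aid leveraging" arithmetic.
# Assumption (not stated in the article): annual tuition equals the
# $20,000 full scholarship, so a full-ride student pays nothing.

TUITION = 20_000  # assumed annual tuition
YEARS = 4         # length of a degree program

def net_revenue(scholarship, n_students, tuition=TUITION, years=YEARS):
    """Tuition collected over `years` from `n_students`, each
    discounted by `scholarship` per year."""
    return (tuition - scholarship) * n_students * years

full_ride = net_revenue(scholarship=20_000, n_students=1)  # one needy student
leveraged = net_revenue(scholarship=5_000, n_students=4)   # four partial awards

print(leveraged - full_ride)  # 240000, matching the article's figure
```

Under these assumptions the full-ride student yields $0, while four students paying $15,000 a year for four years yield $240,000 — which is exactly why the optimization, left unexamined, tilts admissions toward wealthier applicants.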

The widely used practice, called "financial aid leveraging," is a perfect application of machine learning, the form of predictive analytics that has taken the business world by storm. But it turned out that the long-term unintended consequence of this leveraging is an imbalance in the student population between economic classes, with wealthier applicants gaining admission at the expense of poorer but equally qualified peers.

[...] Financial aid leveraging is one of several examples of questionable machine-learning outcomes cited by Samir Passi of Princeton University and Solon Barocas of Cornell University in a recent paper about fairness in problem formulation. Misplaced assumptions, failure to agree on desired outcomes and unintentional biases introduced by incomplete training data are just some of the factors that can cause machine learning programs to go off the rails, yielding data that’s useless at best and misleading at worst.

"People often think that bad machine learning systems are equated with bad actors, but I think the more common problem is unintended, undesirable side effects," Passi said in an interview with SiliconANGLE.

[...] Like most branches of artificial intelligence, machine learning has acquired a kind of black-box mystique that can easily mask some of its inherent frailties. Despite the impressive advances computers have made in tasks like playing chess and piloting driverless automobiles, their algorithms are only as good as the people who built them and the data they're given.

The upshot: Work on machine learning in coming years is likely to focus on cracking open that black box and devising more robust methods to make sure those algorithms do what they’re supposed to do and avoid collateral damage.

Any organization that's getting started with machine learning should be aware of the technology's limitations as well as its power.


Original Submission

  • (Score: 2, Insightful) by Anonymous Coward on Monday February 11 2019, @03:42PM (1 child)

    by Anonymous Coward on Monday February 11 2019, @03:42PM (#799542)

    Yeah, the fundamental mistake was to make the university a business, which brings with it a mandate to maximize profits. It was not the AI that made this decision.

  • (Score: 2) by Azuma Hazuki on Monday February 11 2019, @11:43PM

    by Azuma Hazuki (5086) on Monday February 11 2019, @11:43PM (#799834) Journal

    Bingo. This sort of thing is the purest and most dangerous expression of "garbage in, garbage out" it is possible to have.

    --
    I am "that girl" your mother warned you about...