
posted by janrinok on Saturday January 11 2020, @02:12AM
from the bite-my-shiny dept.

So it seems that AI is a big thing, especially the predictions about how it will kill us all. But the Boston Review currently has a rather interesting article on the ethics of algorithms.

A great deal of recent public debate about artificial intelligence has been driven by apocalyptic visions of the future. Humanity, we are told, is engaged in an existential struggle against its own creation. Such worries are fueled in large part by tech industry leaders and futurists, who anticipate systems so sophisticated that they can perform general tasks and operate autonomously, without human control. Stephen Hawking, Elon Musk, and Bill Gates have all publicly expressed their concerns about the advent of this kind of "strong" (or "general") AI—and the associated existential risk that it may pose for humanity. In Hawking's words, the development of strong AI "could spell the end of the human race."

These are legitimate long-term worries. But they are not all we have to worry about, and placing them center stage distracts from ethical questions that AI is raising here and now. Some contend that strong AI may be only decades away, but this focus obscures the reality that "weak" (or "narrow") AI is already reshaping existing social and political institutions. Algorithmic decision making and decision support systems are currently being deployed in many high-stakes domains, from criminal justice, law enforcement, and employment decisions to credit scoring, school assignment mechanisms, health care, and public benefits eligibility assessments. Never mind the far-off specter of doomsday; AI is already here, working behind the scenes of many of our social systems.

[...] For a concrete example, consider the machine learning systems used in predictive policing, whereby historical crime rate data is fed into algorithms in order to predict future geographic distributions of crime. The algorithms flag certain neighborhoods as prone to violent crime. On that basis, police departments make decisions about where to send their officers and how to allocate resources. While the concept of predictive policing is worrisome for a number of reasons, one common defense of the practice is that AI systems are uniquely "neutral" and "objective," compared to their human counterparts. On the face of it, it might seem preferable to take decision making power out of the hands of biased police departments and police officers. But what if the data itself is biased, so that even the "best" algorithm would yield biased results?
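
To make that worry concrete, here is a rough sketch (mine, not the article's; the neighborhoods, counts, and allocation rule are all invented for illustration) of the feedback loop such systems can create: if crime is only recorded where patrols are sent, and patrols are sent where crime was recorded, an initial bias in the records never washes out.

```python
import random

# Two neighborhoods with identical true crime rates...
TRUE_CRIME_RATE = {"A": 0.10, "B": 0.10}
# ...but a biased historical record that over-counts "A".
recorded = {"A": 12, "B": 8}

random.seed(0)
for year in range(20):
    # "Predictive policing": split 100 patrols in proportion to recorded counts.
    total = recorded["A"] + recorded["B"]
    patrols = {n: int(100 * recorded[n] / total) for n in recorded}

    # A crime only enters the record if a patrol is there to observe it.
    for n in recorded:
        recorded[n] += sum(
            random.random() < TRUE_CRIME_RATE[n] for _ in range(patrols[n])
        )

print(recorded)  # "A" keeps its lead in the records despite equal true rates
```

Both neighborhoods commit crime at the same rate, yet "A" stays flagged as the hot spot indefinitely, so even the "best" algorithm trained on those records keeps reproducing the original bias.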

Long article, good read. Conclusion?

[...] Rather than rushing to quick, top-down solutions aimed at quality control, optimization, and neutrality, we must first clarify what particular kind of problem we are trying to solve in the first place. Until we do so, algorithmic decision making will continue to entrench social injustice, even as tech optimists herald it as the cure for the very ills it exacerbates.

The path to this conclusion is worth considering.


Original Submission

 
This discussion has been archived. No new comments can be posted.
  • (Score: -1, Troll) by Anonymous Coward on Saturday January 11 2020, @04:07AM (#942158) (1 child)

    TMB's racism triggered! Against his wishes! This is morality rape!

  • (Score: 0, Insightful) by Anonymous Coward on Saturday January 11 2020, @06:47AM (#942179)

    The problem is that in this context being triggered is more of an ammosexual pleasure... *yuck*