
posted by janrinok on Saturday January 11 2020, @02:12AM
from the bite-my-shiny dept.

So it seems that AI is a big thing, especially predictions of how it will kill us all. But the Boston Review currently has a rather interesting article on the ethics of algorithms.

A great deal of recent public debate about artificial intelligence has been driven by apocalyptic visions of the future. Humanity, we are told, is engaged in an existential struggle against its own creation. Such worries are fueled in large part by tech industry leaders and futurists, who anticipate systems so sophisticated that they can perform general tasks and operate autonomously, without human control. Stephen Hawking, Elon Musk, and Bill Gates have all publicly expressed their concerns about the advent of this kind of "strong" (or "general") AI—and the associated existential risk that it may pose for humanity. In Hawking's words, the development of strong AI "could spell the end of the human race."

These are legitimate long-term worries. But they are not all we have to worry about, and placing them center stage distracts from ethical questions that AI is raising here and now. Some contend that strong AI may be only decades away, but this focus obscures the reality that "weak" (or "narrow") AI is already reshaping existing social and political institutions. Algorithmic decision making and decision support systems are currently being deployed in many high-stakes domains, from criminal justice, law enforcement, and employment decisions to credit scoring, school assignment mechanisms, health care, and public benefits eligibility assessments. Never mind the far-off specter of doomsday; AI is already here, working behind the scenes of many of our social systems.

[...] For a concrete example, consider the machine learning systems used in predictive policing, whereby historical crime rate data is fed into algorithms in order to predict future geographic distributions of crime. The algorithms flag certain neighborhoods as prone to violent crime. On that basis, police departments make decisions about where to send their officers and how to allocate resources. While the concept of predictive policing is worrisome for a number of reasons, one common defense of the practice is that AI systems are uniquely "neutral" and "objective," compared to their human counterparts. On the face of it, it might seem preferable to take decision making power out of the hands of biased police departments and police officers. But what if the data itself is biased, so that even the "best" algorithm would yield biased results?
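The feedback loop at work here is easy to make concrete. The article gives no code, so the following is a purely hypothetical toy simulation (the neighborhood names, rates, and patrol counts are all invented for illustration): two areas with identical true crime rates diverge in recorded crime once patrol allocation chases the recorded data, because patrol presence determines how much of the underlying crime ever enters the dataset in the first place.

    import random

    random.seed(0)

    # Toy model, not any real predictive-policing system. Two neighborhoods
    # share the SAME underlying crime rate, but "A" starts with more patrols,
    # so more of its crime gets *recorded* and fed back into the allocation.
    TRUE_CRIME_RATE = 0.05
    patrols = {"A": 10, "B": 2}     # initial, biased allocation of 12 officers
    recorded = {"A": 0, "B": 0}     # the "historical crime data" the model sees

    for year in range(10):
        for hood in ("A", "B"):
            residents = 10_000
            crimes = sum(random.random() < TRUE_CRIME_RATE for _ in range(residents))
            # Detection scales with patrol presence: more officers, and more
            # of the same amount of crime ends up in the dataset.
            detection = min(1.0, 0.05 * patrols[hood])
            recorded[hood] += sum(random.random() < detection for _ in range(crimes))
        # The "predictive" step: allocate next year's officers in proportion
        # to recorded crime.
        total = recorded["A"] + recorded["B"]
        patrols["A"] = round(12 * recorded["A"] / total)
        patrols["B"] = 12 - patrols["A"]

    print(recorded, patrols)
    # Recorded crime in A dwarfs B, and A absorbs most of the patrols, even
    # though the true crime rates were identical by construction.

Note that the allocation rule itself is perfectly "neutral"; the bias lives entirely in how the data was generated.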

Long article, good read. Conclusion?

[...] Rather than rushing to quick, top-down solutions aimed at quality control, optimization, and neutrality, we must first clarify what particular kind of problem we are trying to solve in the first place. Until we do so, algorithmic decision making will continue to entrench social injustice, even as tech optimists herald it as the cure for the very ills it exacerbates.

The path to this conclusion is worth considering.


Original Submission

 
  • (Score: 5, Insightful) by darkfeline (1030) on Saturday January 11 2020, @04:29AM (#942165) (4 children)

    That's because criminals are disproportionately black. This is going to blow your mind, but reality is inherently biased. If you were to build the simplest and most effective machine for recognizing criminals, doing it by black/not black is the best way. Evolution has proven it experimentally; this is why humans evolved the neural machinery for learning stereotypes.

Note that I'm not making any claims beyond that statement of fact, such as whether or not blacks are inherently evil by race, ethnicity, or culture, or whether poverty is a factor, etc.

    --
    Join the SDF Public Access UNIX System today!
  • (Score: 5, Insightful) by c0lo (156) on Saturday January 11 2020, @05:05AM (#942170) (2 children)

    That's because criminals are disproportionately black.

That's because blacks are disproportionately criminalized. Self-fulfilling prophecy, see? (grin)

    --
    https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford
    • (Score: 0) by Anonymous Coward on Saturday January 11 2020, @06:51AM (#942181)
      And nobody is doing anything about it.
    • (Score: -1, Troll) by Anonymous Coward on Saturday January 11 2020, @10:05AM (#942192)

That's because blacks are disproportionately criminalized. Self-fulfilling prophecy, see?

Criminals are disproportionately criminalized, and criminals are disproportionately black (and/or Muslim [independent.co.uk]). It's not because people live criminal lifestyles (or adhere to a 6th-century ideology); no, it's "systemic racism", and reality must be wrong if it doesn't conform to the left's desired outcome.

  • (Score: 0) by Anonymous Coward on Saturday January 11 2020, @08:04AM (#942186)

    the simplest and most effective machine for recognizing criminals, doing it by black/not black is the best way

What? Moron. I could build a machine that checks GPS location against known jails and have lower type I and type II errors. If you're going to spout garbage, at least use your imagination enough to spout garbage which isn't so trivially falsifiable.
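(For readers unfamiliar with the jargon in the parent: a type I error is a false positive, here flagging a non-criminal, and a type II error is a false negative, missing an actual criminal. A minimal sketch of the arithmetic in Python, with all counts invented purely for illustration:)

    # Confusion-matrix arithmetic; every count below is made up for illustration.
    def error_rates(tp, fp, fn, tn):
        """Return (type I, type II) error rates from confusion-matrix counts."""
        type_1 = fp / (fp + tn)  # false positive rate: non-criminals flagged
        type_2 = fn / (fn + tp)  # false negative rate: criminals missed
        return type_1, type_2

    # e.g. a hypothetical classifier that flags 2,300 people, 2,000 of them wrongly:
    print(error_rates(tp=300, fp=2000, fn=700, tn=7000))  # (0.222..., 0.7)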