
SoylentNews is people

posted by janrinok on Saturday January 11 2020, @02:12AM   Printer-friendly
from the bite-my-shiny dept.

So it seems that AI is a big thing, especially predictions on how it will kill us all. But the Boston Review currently has a rather interesting article on the ethics of algorithms.

A great deal of recent public debate about artificial intelligence has been driven by apocalyptic visions of the future. Humanity, we are told, is engaged in an existential struggle against its own creation. Such worries are fueled in large part by tech industry leaders and futurists, who anticipate systems so sophisticated that they can perform general tasks and operate autonomously, without human control. Stephen Hawking, Elon Musk, and Bill Gates have all publicly expressed their concerns about the advent of this kind of "strong" (or "general") AI—and the associated existential risk that it may pose for humanity. In Hawking's words, the development of strong AI "could spell the end of the human race."

These are legitimate long-term worries. But they are not all we have to worry about, and placing them center stage distracts from ethical questions that AI is raising here and now. Some contend that strong AI may be only decades away, but this focus obscures the reality that "weak" (or "narrow") AI is already reshaping existing social and political institutions. Algorithmic decision making and decision support systems are currently being deployed in many high-stakes domains, from criminal justice, law enforcement, and employment decisions to credit scoring, school assignment mechanisms, health care, and public benefits eligibility assessments. Never mind the far-off specter of doomsday; AI is already here, working behind the scenes of many of our social systems.

[...] For a concrete example, consider the machine learning systems used in predictive policing, whereby historical crime rate data is fed into algorithms in order to predict future geographic distributions of crime. The algorithms flag certain neighborhoods as prone to violent crime. On that basis, police departments make decisions about where to send their officers and how to allocate resources. While the concept of predictive policing is worrisome for a number of reasons, one common defense of the practice is that AI systems are uniquely "neutral" and "objective," compared to their human counterparts. On the face of it, it might seem preferable to take decision making power out of the hands of biased police departments and police officers. But what if the data itself is biased, so that even the "best" algorithm would yield biased results?
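The feedback loop at issue here can be sketched in a few lines of Python. This is a toy model of my own, not anything from the article: two neighborhoods with the same true crime rate, where patrols are sent wherever recorded crime is highest and crime is only recorded where patrols are present. The initial skew in the records, not any difference in actual crime, drives the outcome.

```python
def simulate(rounds=10, true_rate=100, initial_records=(80, 20)):
    """Both neighborhoods have the identical true crime rate, but
    neighborhood A starts with more *recorded* crime (e.g. from past
    over-policing). Each round, patrols go to the 'hot' neighborhood,
    and only patrolled crime gets recorded."""
    records = list(initial_records)
    for _ in range(rounds):
        hot = records.index(max(records))  # flag the "high-crime" area
        records[hot] += true_rate          # only patrolled crime is recorded
    return records

final = simulate()
print(final)  # A's small head start in the data hardens into a huge gap
```

Even the "best" algorithm here faithfully optimizes against the records it is given; the bias lives in the data-collection loop, not in the math.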

Long article, good read. Conclusion?

[...] Rather than rushing to quick, top-down solutions aimed at quality control, optimization, and neutrality, we must first clarify what particular kind of problem we are trying to solve in the first place. Until we do so, algorithmic decision making will continue to entrench social injustice, even as tech optimists herald it as the cure for the very ills it exacerbates.

The path to this conclusion is worth considering.


Original Submission

  • (Score: 2, Interesting) by Anonymous Coward on Saturday January 11 2020, @02:46AM (16 children)

    by Anonymous Coward on Saturday January 11 2020, @02:46AM (#942131)

    AI has been trained on data sets that are inherently biased. Books of mugshots are 80% black, so AI assumes blacks are typically criminals. But the real problem is the mugshots were taken by racist police. The only solution is for AI to be forced to start from the assumption that blacks are innocent.

  • (Score: -1, Troll) by Anonymous Coward on Saturday January 11 2020, @02:50AM (1 child)

    by Anonymous Coward on Saturday January 11 2020, @02:50AM (#942135)

    Willie Horton's victims would disagree. You stupid Democrats never learn from your past mistakes.

    • (Score: 1, Insightful) by Anonymous Coward on Saturday January 11 2020, @08:44AM

      by Anonymous Coward on Saturday January 11 2020, @08:44AM (#942189)

      Database of one criminal? Fallacy of Converse Accident, you stupid Republican git!

  • (Score: 3, Insightful) by The Mighty Buzzard on Saturday January 11 2020, @02:58AM (2 children)

    by The Mighty Buzzard (18) Subscriber Badge <themightybuzzard@proton.me> on Saturday January 11 2020, @02:58AM (#942141) Homepage Journal

    S'a pretty fair troll. Kudos.

    --
    My rights don't end where your fear begins.
    • (Score: -1, Troll) by Anonymous Coward on Saturday January 11 2020, @04:07AM (1 child)

      by Anonymous Coward on Saturday January 11 2020, @04:07AM (#942158)

      TMB's racism triggered! Against his wishes! This is morality rape!

      • (Score: 0, Insightful) by Anonymous Coward on Saturday January 11 2020, @06:47AM

        by Anonymous Coward on Saturday January 11 2020, @06:47AM (#942179)

        The problem is that in this context being triggered is more of an ammosexual pleasure... *yuck*

  • (Score: 1, Funny) by Anonymous Coward on Saturday January 11 2020, @03:44AM

    by Anonymous Coward on Saturday January 11 2020, @03:44AM (#942152)

    Is it better than the AI identifying them as gorillas?

  • (Score: 0, Troll) by Ethanol-fueled on Saturday January 11 2020, @04:01AM (2 children)

    by Ethanol-fueled (2792) on Saturday January 11 2020, @04:01AM (#942156) Homepage

    You think it's difficult getting a missile lock on Black faces, you should look at the Chink faces. The former is spooky, the latter, they all look the same. Guess America is bad in one respect: We're so mutted out we might just have unique facial identifiers. We're not just eyeballs and smiles in darkness, you know. We're not just slanted eyeballs upon yellow square-faced skin with noodles hanging out of our mouths, you know. We're people*.

    * for some value of "people"

    • (Score: 2) by JoeMerchant on Saturday January 11 2020, @04:14AM (1 child)

      by JoeMerchant (3937) on Saturday January 11 2020, @04:14AM (#942161)

      I had a number of Asian friends in college - they do all look similar, but not alike once you get to know them. However, if you're looking for an Asian face in a crowd that's 5% African, 20% Hispanic and 70% Anglo - with the straight black hair they stand out even more than the blacks.

      --
      🌻🌻 [google.com]
      • (Score: 0) by Anonymous Coward on Saturday January 11 2020, @06:39AM

        by Anonymous Coward on Saturday January 11 2020, @06:39AM (#942177)

        What's the other 4.9... %, lizards?

  • (Score: 1, Insightful) by Anonymous Coward on Saturday January 11 2020, @04:02AM (1 child)

    by Anonymous Coward on Saturday January 11 2020, @04:02AM (#942157)

    Have to exclude mugshots, FBI stats, etc. Can't let numbers get in the way of your feels.

    • (Score: 2, Informative) by Anonymous Coward on Saturday January 11 2020, @06:50AM

      by Anonymous Coward on Saturday January 11 2020, @06:50AM (#942180)

      Can't let knowledge get in the way of your racism feelz :|

      Cracks me up how when Trump gets in trouble the racism around here really spikes. Curious!

  • (Score: 5, Insightful) by darkfeline on Saturday January 11 2020, @04:29AM (4 children)

    by darkfeline (1030) on Saturday January 11 2020, @04:29AM (#942165) Homepage

    That's because criminals are disproportionately black. This is going to blow your mind, but reality is inherently biased. If you were to build the simplest and most effective machine for recognizing criminals, doing it by black/not black is the best way. Evolution has proven it experimentally; this is why humans evolved the neural machinery for learning stereotypes.

    Note that I'm not making any claims beyond that statement of fact, such as whether or not blacks are inherently evil by race, ethnicity, culture, whether poverty is a factor, etc.

    --
    Join the SDF Public Access UNIX System today!
    • (Score: 5, Insightful) by c0lo on Saturday January 11 2020, @05:05AM (2 children)

      by c0lo (156) Subscriber Badge on Saturday January 11 2020, @05:05AM (#942170) Journal

      That's because criminals are disproportionately black.

      That's because blacks are disproportionately criminalized. Self-fulfilling prophecy, see? (grin)

      --
      https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford
      • (Score: 0) by Anonymous Coward on Saturday January 11 2020, @06:51AM

        by Anonymous Coward on Saturday January 11 2020, @06:51AM (#942181)
        And nobody is doing anything about it.
      • (Score: -1, Troll) by Anonymous Coward on Saturday January 11 2020, @10:05AM

        by Anonymous Coward on Saturday January 11 2020, @10:05AM (#942192)

        That's because blacks are disproportionately criminalized. Self-fulfilling prophecy, see?

        Criminals are disproportionately criminalized and criminals are disproportionately black (and / or muslim [independent.co.uk]). It's not because people live criminal lifestyles (or adhere to a 6th Century ideology), no it's "systemic racism" and reality must be wrong if it doesn't conform to the left's desired outcome.

    • (Score: 0) by Anonymous Coward on Saturday January 11 2020, @08:04AM

      by Anonymous Coward on Saturday January 11 2020, @08:04AM (#942186)

      the simplest and most effective machine for recognizing criminals, doing it by black/not black is the best way

      What? Moron. I could build a machine that checks GPS location against known jails and have lower type I and type II errors. If you're going to spout garbage at least use your imagination enough to spout garbage which isn't so trivially falsifiable.