
posted by janrinok on Saturday January 11 2020, @02:12AM   Printer-friendly
from the bite-my-shiny dept.

So it seems that AI is a big thing, especially predictions of how it will kill us all. But the Boston Review currently has a rather interesting article on the ethics of algorithms.

A great deal of recent public debate about artificial intelligence has been driven by apocalyptic visions of the future. Humanity, we are told, is engaged in an existential struggle against its own creation. Such worries are fueled in large part by tech industry leaders and futurists, who anticipate systems so sophisticated that they can perform general tasks and operate autonomously, without human control. Stephen Hawking, Elon Musk, and Bill Gates have all publicly expressed their concerns about the advent of this kind of "strong" (or "general") AI—and the associated existential risk that it may pose for humanity. In Hawking's words, the development of strong AI "could spell the end of the human race."

These are legitimate long-term worries. But they are not all we have to worry about, and placing them center stage distracts from ethical questions that AI is raising here and now. Some contend that strong AI may be only decades away, but this focus obscures the reality that "weak" (or "narrow") AI is already reshaping existing social and political institutions. Algorithmic decision making and decision support systems are currently being deployed in many high-stakes domains, from criminal justice, law enforcement, and employment decisions to credit scoring, school assignment mechanisms, health care, and public benefits eligibility assessments. Never mind the far-off specter of doomsday; AI is already here, working behind the scenes of many of our social systems.

[...] For a concrete example, consider the machine learning systems used in predictive policing, whereby historical crime rate data is fed into algorithms in order to predict future geographic distributions of crime. The algorithms flag certain neighborhoods as prone to violent crime. On that basis, police departments make decisions about where to send their officers and how to allocate resources. While the concept of predictive policing is worrisome for a number of reasons, one common defense of the practice is that AI systems are uniquely "neutral" and "objective," compared to their human counterparts. On the face of it, it might seem preferable to take decision making power out of the hands of biased police departments and police officers. But what if the data itself is biased, so that even the "best" algorithm would yield biased results?
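
To make the bias feedback concrete, here is a toy simulation, entirely invented for illustration (the district sizes, rates, and allocation rule are assumptions, not any real department's system). Two districts have the same true crime rate, but one starts out over-represented in the historical record because it was patrolled more heavily; allocating patrols in proportion to recorded incidents then preserves that skew indefinitely, because crime is mostly recorded where officers already are:

    # Toy model only: no real predictive-policing product or dataset is implied.
    TRUE_CRIME_RATE = 0.1    # identical in both districts
    POPULATION = 10_000      # identical in both districts
    recorded = [200, 100]    # biased history: district 0 was over-patrolled

    for year in range(5):
        total = sum(recorded)
        # The "algorithm": send patrols in proportion to recorded incidents.
        patrol_share = [r / total for r in recorded]
        for d in range(2):
            # Crime is only recorded where officers are present, so new
            # records scale with patrol share, not with the true rate alone.
            recorded[d] += int(POPULATION * TRUE_CRIME_RATE * patrol_share[d])
        print(f"year {year}: patrol share = {[round(s, 2) for s in patrol_share]}")

    # Every year prints [0.67, 0.33]: the skewed record reproduces itself,
    # even though both districts offend at exactly the same rate.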

Long article, good read. Conclusion?

[...] Rather than rushing to quick, top-down solutions aimed at quality control, optimization, and neutrality, we must first clarify what particular kind of problem we are trying to solve in the first place. Until we do so, algorithmic decision making will continue to entrench social injustice, even as tech optimists herald it as the cure for the very ills it exacerbates.

The path to this conclusion is worth considering.


Original Submission

This discussion has been archived. No new comments can be posted.
  • (Score: 5, Insightful) by Anonymous Coward on Saturday January 11 2020, @02:18AM

    by Anonymous Coward on Saturday January 11 2020, @02:18AM (#942122)

    "we must first clarify what particular kind of problem we are trying to solve"

    Efficient redistribution of wealth upwards.

  • (Score: 1, Insightful) by Anonymous Coward on Saturday January 11 2020, @02:43AM (1 child)

    by Anonymous Coward on Saturday January 11 2020, @02:43AM (#942128)

    It's a socio-economic problem, not a tech problem.

    • (Score: -1, Troll) by Anonymous Coward on Saturday January 11 2020, @04:38AM

      by Anonymous Coward on Saturday January 11 2020, @04:38AM (#942167)

      Aren't programmers all incels? We should throw them into the military. That'll straighten them out. Support Bloomberg 2020! He'll know what to do about Trump.

  • (Score: 2, Interesting) by Anonymous Coward on Saturday January 11 2020, @02:46AM (16 children)

    by Anonymous Coward on Saturday January 11 2020, @02:46AM (#942131)

    AI has been trained on data sets that are inherently biased. Books of mugshots are 80% black, so AI assumes blacks are typically criminals. But the real problem is the mugshots were taken by racist police. The only solution is for AI to be forced to start from the assumption that blacks are innocent.

    • (Score: -1, Troll) by Anonymous Coward on Saturday January 11 2020, @02:50AM (1 child)

      by Anonymous Coward on Saturday January 11 2020, @02:50AM (#942135)

      Willie Horton's victims would disagree. You stupid Democrats never learn from your past mistakes.

      • (Score: 1, Insightful) by Anonymous Coward on Saturday January 11 2020, @08:44AM

        by Anonymous Coward on Saturday January 11 2020, @08:44AM (#942189)

        Database of one criminal? Fallacy of Converse Accident, you stupid Republican git!

    • (Score: 3, Insightful) by The Mighty Buzzard on Saturday January 11 2020, @02:58AM (2 children)

      by The Mighty Buzzard (18) Subscriber Badge <themightybuzzard@proton.me> on Saturday January 11 2020, @02:58AM (#942141) Homepage Journal

      S'a pretty fair troll. Kudos.

      --
      My rights don't end where your fear begins.
      • (Score: -1, Troll) by Anonymous Coward on Saturday January 11 2020, @04:07AM (1 child)

        by Anonymous Coward on Saturday January 11 2020, @04:07AM (#942158)

        TMB's racism triggered! Against his wishes! This is morality rape!

        • (Score: 0, Insightful) by Anonymous Coward on Saturday January 11 2020, @06:47AM

          by Anonymous Coward on Saturday January 11 2020, @06:47AM (#942179)

          The problem is that in this context being triggered is more of an ammosexual pleasure... *yuck*

    • (Score: 1, Funny) by Anonymous Coward on Saturday January 11 2020, @03:44AM

      by Anonymous Coward on Saturday January 11 2020, @03:44AM (#942152)

      Is it better than the AI identifying them as gorillas?

    • (Score: 0, Troll) by Ethanol-fueled on Saturday January 11 2020, @04:01AM (2 children)

      by Ethanol-fueled (2792) on Saturday January 11 2020, @04:01AM (#942156) Homepage

      You think it's difficult getting a missile lock on Black faces, you should look at the Chink faces. The former is spooky, the latter, they all look the same. Guess America is bad in one respect: We're so mutted out we might just have unique facial identifiers. We're not just eyeballs and smiles in darkness, you know. We're not just slanted eyeballs upon yellow square-faced skin with noodles hanging out of our mouths, you know. We're people*.

      * for some value of "people"

      • (Score: 2) by JoeMerchant on Saturday January 11 2020, @04:14AM (1 child)

        by JoeMerchant (3937) on Saturday January 11 2020, @04:14AM (#942161)

        I had a number of Asian friends in college - they do all look similar, but not alike once you get to know them. However, if you're looking for an Asian face in a crowd that's 5% African, 20% Hispanic and 70% Anglo - with the straight black hair they stand out even more than the blacks.

        --
        🌻🌻 [google.com]
        • (Score: 0) by Anonymous Coward on Saturday January 11 2020, @06:39AM

          by Anonymous Coward on Saturday January 11 2020, @06:39AM (#942177)

          What's the other 4.9... %, lizards?

    • (Score: 1, Insightful) by Anonymous Coward on Saturday January 11 2020, @04:02AM (1 child)

      by Anonymous Coward on Saturday January 11 2020, @04:02AM (#942157)

      Have to exclude mugshots, FBI stats, etc. Can't let numbers get in the way of your feels.

      • (Score: 2, Informative) by Anonymous Coward on Saturday January 11 2020, @06:50AM

        by Anonymous Coward on Saturday January 11 2020, @06:50AM (#942180)

        Can't let knowledge get in the way of your racism feelz :|

        Cracks me up how when Trump gets in trouble the racism around here really spikes. Curious!

    • (Score: 5, Insightful) by darkfeline on Saturday January 11 2020, @04:29AM (4 children)

      by darkfeline (1030) on Saturday January 11 2020, @04:29AM (#942165) Homepage

      That's because criminals are disproportionately black. This is going to blow your mind, but reality is inherently biased. If you were to build the simplest and most effective machine for recognizing criminals, doing it by black/not black is the best way. Evolution has proven it experimentally; this is why humans evolved the neural machinery for learning stereotypes.

      Note that I'm not making any claims beyond that statement of fact, such as whether or not blacks are inherently evil by race, ethnicity, culture, whether poverty is a factor, etc.

      --
      Join the SDF Public Access UNIX System today!
      • (Score: 5, Insightful) by c0lo on Saturday January 11 2020, @05:05AM (2 children)

        by c0lo (156) Subscriber Badge on Saturday January 11 2020, @05:05AM (#942170) Journal

        That's because criminals are disproportionately black.

        That's because blacks are disproportionately criminalized. Self-fulfilling prophecy, see? (grin)

        --
        https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford
        • (Score: 0) by Anonymous Coward on Saturday January 11 2020, @06:51AM

          by Anonymous Coward on Saturday January 11 2020, @06:51AM (#942181)

          And nobody is doing anything about it.
        • (Score: -1, Troll) by Anonymous Coward on Saturday January 11 2020, @10:05AM

          by Anonymous Coward on Saturday January 11 2020, @10:05AM (#942192)

          That's because blacks are disproportionately criminalized. Self-fulfilling prophecy, see?

          Criminals are disproportionately criminalized, and criminals are disproportionately black (and / or Muslim [independent.co.uk]). It's not because people live criminal lifestyles (or adhere to a 6th-century ideology); no, it's "systemic racism", and reality must be wrong if it doesn't conform to the left's desired outcome.

      • (Score: 0) by Anonymous Coward on Saturday January 11 2020, @08:04AM

        by Anonymous Coward on Saturday January 11 2020, @08:04AM (#942186)

        the simplest and most effective machine for recognizing criminals, doing it by black/not black is the best way

        What? Moron. I could build a machine that checks GPS location against known jails and have lower type I and type II errors. If you're going to spout garbage, at least use your imagination enough to spout garbage which isn't so trivially falsifiable.

  • (Score: 4, Insightful) by deimios on Saturday January 11 2020, @06:33AM (6 children)

    by deimios (201) Subscriber Badge on Saturday January 11 2020, @06:33AM (#942176) Journal

    But what if the data itself is biased

    What if reality itself is biased and life isn't fair? Shocking.

    • (Score: 0) by Anonymous Coward on Saturday January 11 2020, @06:42AM

      by Anonymous Coward on Saturday January 11 2020, @06:42AM (#942178)

      But what if the data itself is biased

      What if reality itself is biased and life isn't fair? Shocking.

      What if life itself is data and reality is a simulation? Shocking.

    • (Score: 1, Informative) by Anonymous Coward on Saturday January 11 2020, @07:38AM (4 children)

      by Anonymous Coward on Saturday January 11 2020, @07:38AM (#942182)

      Reality has a well-known liberal bias. But that is not what we are talking about here. Fricking racist cops. And Khallow.

      • (Score: 0) by Anonymous Coward on Saturday January 11 2020, @09:55AM (3 children)

        by Anonymous Coward on Saturday January 11 2020, @09:55AM (#942190)

        Reality has a well-known liberal bias.

        Liberals don't exist in a world the rest of us recognize as reality.

        • (Score: 1, Touché) by Anonymous Coward on Saturday January 11 2020, @06:43PM (2 children)

          by Anonymous Coward on Saturday January 11 2020, @06:43PM (#942278)

          Don't blame liberals for your living in a conservative fantasy world!

          • (Score: 1, Insightful) by Anonymous Coward on Saturday January 11 2020, @09:42PM (1 child)

            by Anonymous Coward on Saturday January 11 2020, @09:42PM (#942313)

            The US was founded in the liberal tradition; "life, liberty and the protection of private property" (later softened to "the pursuit of happiness"), and that is now a conservative position. The modern "liberal" believes in "Diversity, Inclusion and Equity", which is why they elect people like this. [pacificresearch.org] How are people supposed to live with day-to-day harassment from drug-addled, homeless criminals?

            As Solzhenitsyn explained:

            ‘Pluralism’ as a principle degenerates into indifference, superficiality: it spills over into relativism, into tolerance of the absurd, into a pluralism of errors and lies.

            Who is it that tolerates the absurdity of homeless people shooting drugs and shitting on their doorstep by electing a DA that stood on a platform of doing nothing about it?

            • (Score: -1, Troll) by Anonymous Coward on Saturday January 11 2020, @10:39PM

              by Anonymous Coward on Saturday January 11 2020, @10:39PM (#942327)

              I see the "liberal" mods are active. Instead of attempting to bury my comment by modding me troll, you are welcome to rebut it.

              Social liberalism is the extension of the liberal franchise to all, so please explain to me how the pursuit of happiness is reflected by the election of a DA who refuses to enforce "quality of life" laws? The concept of personal property arises from the extension of the concept of the self, and on that basis, what is the difference between a homeless person shitting on my doorstep and me shitting in his face? How is the refusal to prosecute "quality of life" laws anything other than anti-social? Is it not true that the DA is a socialist (==sociopath) who is literally intent on turning his world to shit? Given my previous questions, please explain under what definition the word "liberal" could possibly apply to the policies being pursued in SF?

              TIA

  • (Score: 4, Informative) by Lester on Saturday January 11 2020, @10:15AM (1 child)

    by Lester (6231) on Saturday January 11 2020, @10:15AM (#942195) Journal

    Programs don't do what we want them to do, but what we tell them to do. Such a deviation is called a bug.

    AI systems won't learn what we want them to learn, but what we taught them. And such a deviation is called... an AI bug? Biased AI?

    The main problem is that program bugs can be corrected, whereas deep learning models are almost impossible to analyze.
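
    A throwaway sketch of that distinction, with every number and name invented for the example: the "program" below is bug-free in the traditional sense, yet it faithfully learns the bias baked into its training labels.

        # Synthetic data only; no real records or system are implied.
        import random
        from collections import Counter

        random.seed(0)

        # Records of (neighborhood, was_arrested). The true offense rate is
        # 10% in both neighborhoods, but "A" was patrolled twice as hard,
        # so its offenders were caught (and labeled) far more often.
        data = []
        for hood, catch_prob in (("A", 0.8), ("B", 0.4)):
            for _ in range(10_000):
                offended = random.random() < 0.10
                arrested = offended and random.random() < catch_prob
                data.append((hood, arrested))

        # "Training": estimate P(arrested | neighborhood) from the labels.
        arrests = Counter(h for h, a in data if a)
        totals = Counter(h for h, _ in data)
        for hood in ("A", "B"):
            print(hood, round(arrests[hood] / totals[hood], 3))

        # Prints roughly "A 0.08" and "B 0.04": the learner dutifully
        # concludes that A is twice as criminal. The code has no bug;
        # the lesson was in the labels.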

    • (Score: 2) by maxwell demon on Saturday January 11 2020, @11:07AM

      by maxwell demon (1608) on Saturday January 11 2020, @11:07AM (#942200) Journal

      I would call it artificial prejudice.

      --
      The Tao of math: The numbers you can count are not the real numbers.
  • (Score: 5, Interesting) by EEMac on Saturday January 11 2020, @11:24AM

    by EEMac (6423) on Saturday January 11 2020, @11:24AM (#942202)

    While the concept of predictive policing is worrisome for a number of reasons . . .

    No, not really. Police are supposed to patrol crime-ridden neighborhoods to deter bad actors. It's part of the reason police forces exist.

    Jerry Pournelle [jerrypournelle.com] had experience here:

    More than forty years ago when I was a city official in the Mayor’s office, I was asked to sit in on a meeting with the precinct captain of a district that included both black middle class and some “Inner city” “ghetto” areas. The meeting consisted of the police officers and several black women who were tired of the lack of law and order in their neighborhood. The captain explained that he had no more resources: he had patrols on overtime as it was. There was nothing to be done. I offered to send some of the Metro units in. These were elite police patrols who strictly enforced the law. I warned the ladies that if we sent them in, they would come down hard on all criminal activity they saw. All of it. The ladies said that was very much what they wanted.

    We sent some of the elite Metro units into the neighborhood. They began enforcing the law as they had been trained: not as community police, but as strict enforcement officers looking for good arrests. This was before Wilson’s “Broken Windows” theory became widely known, but I knew Wilson, and this was in that spirit: you don’t ignore minor infractions because that leads people to think you will ignore major ones.

    The experiment lasted about a month, and the ladies reported they were really surprised at how much better conditions were; but there were black leaders who claimed that the district was being overpoliced. The LA Times talked about the invasion of the police. The mayor told me to get the Metro units out of there. Things went back to where they were before I attempted to intervene.

  • (Score: 4, Interesting) by khallow on Saturday January 11 2020, @01:50PM

    by khallow (3766) Subscriber Badge on Saturday January 11 2020, @01:50PM (#942218) Journal

    I think a key early example of this will be increasing the rent-seeking power of large legal organizations. As I've noted before, law at merely the federal level is increasing in the US at a faster rate than anyone can read, even if they were to devote their entire life to reading law. Other developed-world countries are probably in a similar situation.

    As that complexity continues to grow, that'll provide growing opportunities for AI systems to mine the law and regulation for loopholes, means to harass and shut down opponents, and all kinds of opportunities to insert middlemen into otherwise lawful activities.

    I think this has the potential to lead to a stagnant world where one has to either run one's own legal AI or pay a large tribute to a law firm for access to theirs in order to do anything. It won't matter much to the big businesses; it's just another fixed expense which they can spread across a lot of revenue. But everyone else is going to have a challenge.
  • (Score: 4, Interesting) by Thexalon on Saturday January 11 2020, @05:17PM (1 child)

    by Thexalon (636) on Saturday January 11 2020, @05:17PM (#942261)

    AI does what the humans controlling it want it to do. An AI only knows the data fed into it by humans, and only learns in the ways humans tell it to learn. If I control the data flowing in, and/or the program that's going to operate on that data, I control the output of the AI; it's as simple as that.

    For instance, to make an AI racist, I tell it that "race" is something that exists, or alternately feed it information containing racial markers and tell it that those racial markers are significant in some way beyond just "does this photograph look like it might be the same person as in this other photograph?" That's the ideology of racism in a nutshell: that you can look at somebody, see, say, tightly curled black hair and a wider-than-average nose, and correctly conclude anything at all about their behavior based on nothing more than that. Anybody with more granular data (i.e. anybody who has actually been around more than a couple of people matching that description) knows that is an inaccurate conclusion.

    And of course if your AI is just answering the question "does the person in photo A look like it might be the person in photo B", I'd expect humans to go over that data before acting on it. For instance, "photo A was from a security camera showing somebody fleeing a crime scene, photo B was from an annual symposium on astrophysics, just because a computer decided the faces are similar doesn't mean that you can conclude that Lawrence Krauss is going on a killing spree".

    --
    The only thing that stops a bad guy with a compiler is a good guy with a compiler.
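
    A contrived companion sketch to the parent comment's point (all data synthetic, every name an assumption): the same frequency count over the same records yields either a "racial" risk score or a behavioral one, depending purely on which column the operator declares to be a feature.

        # Synthetic data; no real system or dataset is implied.
        import random

        random.seed(1)

        # Records of (marker, prior_convictions, reoffended). Reoffending
        # depends ONLY on prior convictions; the marker merely correlates
        # with convictions because of sampling bias in this data set.
        data = []
        for marker in (0, 1):
            for _ in range(5_000):
                priors = random.randint(0, 2) + marker   # biased sampling
                reoffended = random.random() < 0.2 * priors
                data.append((marker, priors, reoffended))

        def rate_by(rows, key):
            # Average reoffense rate per group selected by `key`.
            groups = {}
            for row in rows:
                groups.setdefault(key(row), []).append(row[2])
            return {k: round(sum(v) / len(v), 2) for k, v in sorted(groups.items())}

        print("by marker:", rate_by(data, key=lambda r: r[0]))  # marker "predicts" risk
        print("by priors:", rate_by(data, key=lambda r: r[1]))  # the actual signal

        # Feed the learner the marker column and it will happily use it;
        # withhold it and the model must find the real driver instead.
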
    • (Score: -1, Offtopic) by Anonymous Coward on Sunday January 12 2020, @02:16AM

      by Anonymous Coward on Sunday January 12 2020, @02:16AM (#942382)

      Photons are also "racist"! [wired.com]

      Of course, current AI consists of statistical models; there's no inherent racial prejudice. This raises the question: are the phrases "blacks look like gorillas" or "they all look the same to me" actually racist? Society certainly classifies them as such, but the truth is that it always depends on context, and it's not realistic that AIs were trained on racist material (except Tay [theverge.com]).

      Perhaps instead of retarding the statistical model, we should be looking at root causes. High testosterone was variously posited as the cause of higher prostate cancer and criminality in black males, but research proved otherwise. [nih.gov] That line of inquiry should be no more "racist" than researching sickle cell, and I'm sure it has implications across races. I've posted this exact study here within the last 12 months; since then, Google has seemingly scrubbed its index of the fact that elevated levels of estrogen in males cause emotional dysregulation, anxiety and depression. Low testosterone (or an imbalance) causes aggression as males attempt to assert dominance. There is at least enough evidence to suggest that the statistics are not showing bias due to systemic racism, and that there are likely broader genetic and socioeconomic issues to be explored.

      ----

      Dear Google employees: here's another paper [plos.org] for you to remove from your index, and another (via a reddit result). [questia.com] The paper that used to be top of your index for identical search terms was on first-cousin marriage and centered on the Pakistani Muslim community in Bradford, UK. Here is a puff piece [bbc.co.uk] while the paper itself presented a 400% increase in congenital birth defects and an average 10-point IQ deficit -- why is that paper no longer in Google's index? I guess Google is pro mental retardation or something? "Don't be evil" - I think not! What an anti-humanitarian disgrace Google and, by extension, their employees have become.
