

posted by janrinok on Saturday January 11 2020, @02:12AM
from the bite-my-shiny dept.

So it seems that AI is a big thing these days, especially the predictions about how it will kill us all. But the Boston Review currently has a rather interesting article on the ethics of algorithms.

A great deal of recent public debate about artificial intelligence has been driven by apocalyptic visions of the future. Humanity, we are told, is engaged in an existential struggle against its own creation. Such worries are fueled in large part by tech industry leaders and futurists, who anticipate systems so sophisticated that they can perform general tasks and operate autonomously, without human control. Stephen Hawking, Elon Musk, and Bill Gates have all publicly expressed their concerns about the advent of this kind of "strong" (or "general") AI—and the associated existential risk that it may pose for humanity. In Hawking's words, the development of strong AI "could spell the end of the human race."

These are legitimate long-term worries. But they are not all we have to worry about, and placing them center stage distracts from ethical questions that AI is raising here and now. Some contend that strong AI may be only decades away, but this focus obscures the reality that "weak" (or "narrow") AI is already reshaping existing social and political institutions. Algorithmic decision making and decision support systems are currently being deployed in many high-stakes domains, from criminal justice, law enforcement, and employment decisions to credit scoring, school assignment mechanisms, health care, and public benefits eligibility assessments. Never mind the far-off specter of doomsday; AI is already here, working behind the scenes of many of our social systems.

[...] For a concrete example, consider the machine learning systems used in predictive policing, whereby historical crime rate data is fed into algorithms in order to predict future geographic distributions of crime. The algorithms flag certain neighborhoods as prone to violent crime. On that basis, police departments make decisions about where to send their officers and how to allocate resources. While the concept of predictive policing is worrisome for a number of reasons, one common defense of the practice is that AI systems are uniquely "neutral" and "objective," compared to their human counterparts. On the face of it, it might seem preferable to take decision making power out of the hands of biased police departments and police officers. But what if the data itself is biased, so that even the "best" algorithm would yield biased results?

Long article, good read. Conclusion?

[...] Rather than rushing to quick, top-down solutions aimed at quality control, optimization, and neutrality, we must first clarify what particular kind of problem we are trying to solve in the first place. Until we do so, algorithmic decision making will continue to entrench social injustice, even as tech optimists herald it as the cure for the very ills it exacerbates.

The path to this conclusion is worth considering.
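
The predictive-policing passage above is easy to make concrete. Below is a minimal, self-contained sketch -- the district names, the "true" crime rate, and the patrol counts are all invented for illustration, not taken from the article -- of how recorded-crime data can encode patrol allocation rather than crime, and how a naive "predict, then redeploy" loop amplifies the initial skew:

import random

random.seed(0)
TRUE_CRIME_RATE = 0.05                            # identical underlying rate in both districts
patrols = {"district_a": 80, "district_b": 20}    # uneven starting allocation of officers

for week in range(5):
    # Recorded incidents depend on where officers are looking, not only on where crime happens.
    recorded = {d: sum(random.random() < TRUE_CRIME_RATE for _ in range(n))
                for d, n in patrols.items()}
    hotspot = max(recorded, key=recorded.get)
    patrols[hotspot] += 10                        # naive "predictive" step: reinforce the flagged district
    print(f"week {week}: recorded={recorded}, flagged={hotspot}, patrols={patrols}")

Both districts have the same true rate, but the more heavily patrolled one generates more records, gets flagged, and receives still more patrols -- the "biased data in, biased results out" problem the article describes.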


Original Submission

 
This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 4, Interesting) by Thexalon (636) on Saturday January 11 2020, @05:17PM (#942261) (1 child)

    AI does what the humans controlling it want it to do. An AI only knows the data fed into it by humans, and only learns in the ways humans tell it to learn. If I control the data flowing in, and/or the program that's going to operate on that data, then I control the output of the AI; it's as simple as that.

    For instance, to make an AI racist, I tell it that "race" is something that exists, or alternatively feed it information containing racial markers and tell it that those racial markers are significant in some way beyond just "does this photograph look like it might be the same person as in this other photograph?" That's the ideology of racism in a nutshell: that you can look at somebody, see, say, tightly curled black hair and a wider-than-average nose, and correctly conclude anything at all about their behavior based on nothing more than that. Anybody with more granular data (i.e., who has actually been around more than a couple of people matching that description) knows that is an inaccurate conclusion.
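
    To put that in concrete terms, here is a toy scoring function (the feature names and weights are made up for illustration; no real system is being quoted). Two records identical in every behavioural respect diverge only once the modeller decides the racial marker carries weight:

    def score(record, weights):
        # Simple weighted sum; the "model" is whatever weights the humans chose.
        return sum(weights.get(k, 0.0) * v for k, v in record.items())

    person_a = {"prior_incidents": 0, "years_at_address": 5, "racial_marker": 0}
    person_b = {"prior_incidents": 0, "years_at_address": 5, "racial_marker": 1}

    ignores_marker = {"prior_incidents": 2.0, "years_at_address": -0.1}
    uses_marker    = {"prior_incidents": 2.0, "years_at_address": -0.1, "racial_marker": 1.5}

    for w in (ignores_marker, uses_marker):
        print(score(person_a, w), score(person_b, w))
    # First line: identical scores. Second line: the scores differ solely because the
    # marker was declared "significant" -- a choice made by whoever built the system.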

    And of course if your AI is just answering the question "does the person in photo A look like it might be the person in photo B", I'd expect humans to go over that data before acting on it. For instance, "photo A was from a security camera showing somebody fleeing a crime scene, photo B was from an annual symposium on astrophysics, just because a computer decided the faces are similar doesn't mean that you can conclude that Lawrence Krauss is going on a killing spree".
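
    A small sketch of that last point (the embedding vectors and the threshold below are made up): a face-matching system can only report a similarity score, and the sensible design routes anything above the threshold to a human reviewer instead of treating it as an identification:

    import math

    def cosine_similarity(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
        return dot / norm

    photo_a = [0.12, 0.80, 0.35, 0.41]   # hypothetical embedding of the crime-scene photo
    photo_b = [0.10, 0.78, 0.40, 0.45]   # hypothetical embedding of the symposium photo

    MATCH_THRESHOLD = 0.95
    similarity = cosine_similarity(photo_a, photo_b)

    if similarity >= MATCH_THRESHOLD:
        print(f"similarity {similarity:.3f}: looks alike -- queue for human review, do not act on it")
    else:
        print(f"similarity {similarity:.3f}: no match")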

    --
    The only thing that stops a bad guy with a compiler is a good guy with a compiler.
  • (Score: -1, Offtopic) by Anonymous Coward on Sunday January 12 2020, @02:16AM (#942382)

    Photons are also "racist"! [wired.com]

    Of course, current AI consists of statistical models; there's no inherent racial prejudice. This raises the question: are the phrases "blacks look like gorillas" or "they all look the same to me" actually racist? Society certainly classifies them as such, but the truth is that it always depends on context, and it's not realistic that AIs were trained on racist material (except Tay [theverge.com]).

    Perhaps instead of retarding the statistical model, we should be looking at root causes. High testosterone was variously posited as the cause of higher prostate cancer rates and criminality in black males, but research proved otherwise. [nih.gov] That line of inquiry should be no more "racist" than researching sickle cell, and I'm sure it has implications across races. I've posted this exact study here within the last 12 months; since then, Google has seemingly scrubbed from its index the fact that elevated levels of estrogen in males cause emotional dysregulation, anxiety, and depression. Low testosterone (or an imbalance) causes aggression as males attempt to assert dominance. There is at least enough evidence to suggest that the statistics are not showing bias due to systemic racism, and that there are likely broader genetic and socioeconomic issues to be explored.

    ----

    Dear Google employees: here's another paper [plos.org] for you to remove from your index, and another (via a reddit result). [questia.com] The paper that used to be at the top of your index for identical search terms was on first-cousin marriage and centered on the Pakistani Muslim community in Bradford, UK. Here is a puff piece [bbc.co.uk], while the paper itself presented a 400% increase in congenital birth defects and an average 10-point IQ deficit -- why is that paper no longer in Google's index? I guess Google is pro mental retardation or something? "Don't be evil" - I think not! What an anti-humanitarian disgrace Google, and by extension its employees, have become.