
posted by janrinok on Saturday January 11 2020, @02:12AM
from the bite-my-shiny dept.

So it seems that AI is a big thing, especially predictions about how it will kill us all. But the Boston Review currently has a rather interesting article on the ethics of algorithms.

A great deal of recent public debate about artificial intelligence has been driven by apocalyptic visions of the future. Humanity, we are told, is engaged in an existential struggle against its own creation. Such worries are fueled in large part by tech industry leaders and futurists, who anticipate systems so sophisticated that they can perform general tasks and operate autonomously, without human control. Stephen Hawking, Elon Musk, and Bill Gates have all publicly expressed their concerns about the advent of this kind of "strong" (or "general") AI—and the associated existential risk that it may pose for humanity. In Hawking's words, the development of strong AI "could spell the end of the human race."

These are legitimate long-term worries. But they are not all we have to worry about, and placing them center stage distracts from ethical questions that AI is raising here and now. Some contend that strong AI may be only decades away, but this focus obscures the reality that "weak" (or "narrow") AI is already reshaping existing social and political institutions. Algorithmic decision making and decision support systems are currently being deployed in many high-stakes domains, from criminal justice, law enforcement, and employment decisions to credit scoring, school assignment mechanisms, health care, and public benefits eligibility assessments. Never mind the far-off specter of doomsday; AI is already here, working behind the scenes of many of our social systems.

[...] For a concrete example, consider the machine learning systems used in predictive policing, whereby historical crime rate data is fed into algorithms in order to predict future geographic distributions of crime. The algorithms flag certain neighborhoods as prone to violent crime. On that basis, police departments make decisions about where to send their officers and how to allocate resources. While the concept of predictive policing is worrisome for a number of reasons, one common defense of the practice is that AI systems are uniquely "neutral" and "objective," compared to their human counterparts. On the face of it, it might seem preferable to take decision making power out of the hands of biased police departments and police officers. But what if the data itself is biased, so that even the "best" algorithm would yield biased results?
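
The "biased data in, biased decisions out" worry is easy to make concrete. Below is a minimal sketch of the feedback loop the article describes, using hypothetical numbers (not from the article): two districts with identical true crime rates, where recorded crime reflects where officers patrol rather than where crime occurs, so a naive predictor keeps reinforcing the initial patrol bias.

```python
# Toy simulation of the predictive-policing feedback loop.
# All numbers are hypothetical and chosen only for illustration.

import random

random.seed(0)  # fixed seed so the run is reproducible

TRUE_CRIME_RATE = [0.10, 0.10]   # two districts with identical underlying crime
patrols = [75, 25]               # but patrols start out unevenly allocated
recorded = [0, 0]                # crime counts as observed by police

for year in range(10):
    # Police only record crime where they patrol: observed counts are
    # proportional to patrol presence, not to the true crime rate.
    for d in range(2):
        for _ in range(patrols[d]):
            if random.random() < TRUE_CRIME_RATE[d]:
                recorded[d] += 1

    # "Predictive" step: allocate next year's 100 patrols in proportion
    # to the historical record -- exactly what a naive model would do.
    total = sum(recorded) or 1
    patrols = [round(100 * recorded[d] / total) for d in range(2)]

print("True crime rates:   ", TRUE_CRIME_RATE)
print("Recorded crime:     ", recorded)
print("Final patrol split: ", patrols)
```

Running it shows the initial 75/25 patrol split persisting year after year, even though the two districts are identical: the recorded data "confirms" the very allocation that produced it, which is the circularity the article is pointing at.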

Long article, good read. Conclusion?

[...] Rather than rushing to quick, top-down solutions aimed at quality control, optimization, and neutrality, we must first clarify what particular kind of problem we are trying to solve in the first place. Until we do so, algorithmic decision making will continue to entrench social injustice, even as tech optimists herald it as the cure for the very ills it exacerbates.

The path to this conclusion is worth considering.


Original Submission

 
  • (Score: 4, Interesting) by khallow (3766) on Saturday January 11 2020, @01:50PM (#942218)
    I think a key early example of this will be the increased rent-seeking power of large legal organizations. As I've noted before, US law at merely the federal level is growing faster than anyone could read it, even someone who devoted their entire life to reading law. Other developed countries are probably in a similar situation.
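
    A quick back-of-envelope check of that claim is sketched below; the figures are rough assumptions for illustration, not sourced statistics.

```python
# Back-of-envelope check: can one person keep up with new federal law?
# All figures are rough assumptions for illustration, not sourced data.

pages_per_year = 70_000      # assumed annual Federal Register output
pages_per_hour = 10          # assumed reading pace for dense legal text
hours_per_day = 8
days_per_year = 365

capacity = pages_per_hour * hours_per_day * days_per_year  # 29,200 pages
print(f"Reading capacity: {capacity:,} pages/year")
print(f"New pages:        {pages_per_year:,} pages/year")
print(f"Shortfall:        {pages_per_year - capacity:,} pages/year")
# Under these assumptions, even a full-time reader falls roughly
# 40,000 pages behind every year -- before touching state or local law.
```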

    As that complexity continues to grow, it'll provide growing opportunities for AI systems to mine law and regulation for loopholes, for means to harass and shut down opponents, and for ways to insert middlemen into otherwise lawful activities.

    I think this has the potential to produce a stagnant world where, to do anything at all, you either run your own legal AI or pay a large tribute to a law firm for access to theirs. It won't matter much to the big businesses; it's just another fixed expense they can spread across a lot of revenue. But everyone else is going to have a challenge.