posted by chromas on Tuesday March 05 2019, @07:07AM
from the open-the-pod-bay-doors-HAL dept.

Is Ethical A.I. Even Possible?

When a news article revealed that Clarifai was working with the Pentagon and some employees questioned the ethics of building artificial intelligence that analyzed video captured by drones, the company said the project would save the lives of civilians and soldiers.

"Clarifai's mission is to accelerate the progress of humanity with continually improving A.I.," read a blog post from Matt Zeiler, the company's founder and chief executive, and a prominent A.I. researcher. Later, in a news media interview, Mr. Zeiler announced a new management position that would ensure all company projects were ethically sound.

As activists, researchers, and journalists voice concerns over the rise of artificial intelligence, warning against biased, deceptive, and malicious applications, the companies building this technology are responding. From tech giants like Google and Microsoft to scrappy A.I. start-ups, many are creating corporate principles meant to ensure their systems are designed and deployed in an ethical way. Some have appointed ethics officers or set up review boards to oversee these principles.

But tensions continue to rise as some question whether these promises will ultimately be kept. Companies can change course. Idealism can bow to financial pressure. Some activists — and even some companies — are beginning to argue that the only way to ensure ethical practices is through government regulation.

"We don't want to see a commercial race to the bottom," Brad Smith, Microsoft's president and chief legal officer, said at the New Work Summit in Half Moon Bay, Calif., hosted last week by The New York Times. "Law is needed."

Possible != Probable. And the "needed law" could come in the form of a ban and/or surveillance of coding and hardware-building activities.

Original Submission

 
This discussion has been archived. No new comments can be posted.
  • (Score: 3, Insightful) by https (5248) on Tuesday March 05 2019, @07:27PM (#810381)

    A real problem with AI is that nobody knows, or can know, how it works beyond chanting "Matrices! Neural nets!" What the AIs do have going on is absolutely NOT a model of the world, so when they fail they can fail pretty spectacularly. You can't even discuss ethics (or morals) until the vendors are willing to admit, "we're 79% sure that this is a birdbath and not seventeen kids about to experience collateral damage. Oh, and there's a 1% chance that it's a hospital, and 0.5% that it's David Bowie's first bong."
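
    To see concretely what that admission would have to look like: all a classifier can hand you is a probability vector over labels, not an understanding of the scene. A minimal numpy sketch (the labels and raw scores here are entirely made up for illustration):

        import numpy as np

        # A trained classifier ends with one raw score ("logit") per label.
        labels = ["birdbath", "seventeen kids", "hospital", "David Bowie's first bong"]
        logits = np.array([2.1, 0.3, -0.1, -0.8])   # hypothetical network outputs

        # Softmax turns the scores into the probability vector that is
        # everything the model "believes" about the image.
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()

        for label, p in zip(labels, probs):
            print(f"{p:6.1%}  {label}")

    There is no world model behind those numbers, just a normalized ranking of how much the input resembles each training category.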

    It's a very different conversation from a bomber pilot asking, "what are the odds the Red Cross has just set up an emergency shelter inside this paper mill, or that an equipment malfunction has the place filled with tradespeople at 3 in the morning instead of empty?"

    --
    Offended and laughing about it.