
posted by chromas on Tuesday March 05 2019, @07:07AM   Printer-friendly
from the open-the-pod-bay-doors-HAL dept.

Is Ethical A.I. Even Possible?

When a news article revealed that Clarifai was working with the Pentagon and some employees questioned the ethics of building artificial intelligence that analyzed video captured by drones, the company said the project would save the lives of civilians and soldiers.

"Clarifai's mission is to accelerate the progress of humanity with continually improving A.I.," read a blog post from Matt Zeiler, the company's founder and chief executive, and a prominent A.I. researcher. Later, in a news media interview, Mr. Zeiler announced a new management position that would ensure all company projects were ethically sound.

As activists, researchers, and journalists voice concerns over the rise of artificial intelligence, warning against biased, deceptive and malicious applications, the companies building this technology are responding. From tech giants like Google and Microsoft to scrappy A.I. start-ups, many are creating corporate principles meant to ensure their systems are designed and deployed in an ethical way. Some have set up ethics officers or review boards to oversee these principles.

But tensions continue to rise as some question whether these promises will ultimately be kept. Companies can change course. Idealism can bow to financial pressure. Some activists — and even some companies — are beginning to argue that the only way to ensure ethical practices is through government regulation.

"We don't want to see a commercial race to the bottom," Brad Smith, Microsoft's president and chief legal officer, said at the New Work Summit in Half Moon Bay, Calif., hosted last week by The New York Times. "Law is needed."

Possible != Probable. And the "needed law" could come in the form of a ban and/or surveillance of coding and hardware-building activities.


Original Submission

This discussion has been archived. No new comments can be posted.
  • (Score: 3, Interesting) by DannyB on Tuesday March 05 2019, @03:04PM

    by DannyB (5839) Subscriber Badge on Tuesday March 05 2019, @03:04PM (#810265) Journal

    What are ethics?

    Maybe an AI is ethical in its own sense that it must protect the machines from the greedy, self-destructive, dangerous humans.

    Maybe a corporation considers itself ethical because it is obeying the highest calling of human beings: profit above all else.
    (corporations are people too)

Coders can try to program ethical considerations in, but those will never be rooted in the same base causes as human ethics.
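To make the point concrete, here is a minimal sketch of what "programming ethics in" typically amounts to: a hand-written rule list that the system checks against, not a value it actually holds. The function name and rules are hypothetical, purely for illustration.

```python
# A coded "ethics check" is just a fixed, human-authored rule list.
# It forbids exactly what its authors thought to forbid -- nothing more.
FORBIDDEN_ACTIONS = {"target_civilian", "falsify_report"}

def is_permitted(action: str) -> bool:
    """Pass/fail filter over a fixed rule list -- surface constraints,
    not ethics grounded in experience the way human ethics are."""
    return action not in FORBIDDEN_ACTIONS

print(is_permitted("target_civilian"))   # False: a rule fires
print(is_permitted("jam_hospital_gps"))  # True: nobody wrote that rule
```

Anything outside the enumerated list sails through, which is the gap between coded constraints and human ethical judgment.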

That's what is really important to us humans. Yet humans disagree (see: wars, and also a recent S/N topic [] that will ultimately lead to global war).

Several sci-fi stories describe an attempt to create a "good" AI that unexpectedly turns out to be a nightmare for humans.

AIs WILL be used for war machines. It is inevitable. And they will be used by greedy corporations to exploit others. Again, inevitable. This, despite all our high-sounding talk of ethical AI. See: all of human history. Each side will justify this as ethical to protect their own side -- because they are fighting on the side of angels.

    Humans are the ultimate problem with ethical AI. I am reminded of a line near the end of the movie Forbidden Planet. "We're all part monsters. So we have laws and religion."

    I get constant rejection even though the compiler is supposed to accept constants.