
posted by chromas on Tuesday March 05 2019, @07:07AM   Printer-friendly
from the open-the-pod-bay-doors-HAL dept.

Is Ethical A.I. Even Possible?

When a news article revealed that Clarifai was working with the Pentagon and some employees questioned the ethics of building artificial intelligence that analyzed video captured by drones, the company said the project would save the lives of civilians and soldiers.

"Clarifai's mission is to accelerate the progress of humanity with continually improving A.I.," read a blog post from Matt Zeiler, the company's founder and chief executive, and a prominent A.I. researcher. Later, in a news media interview, Mr. Zeiler announced a new management position that would ensure all company projects were ethically sound.

As activists, researchers, and journalists voice concerns over the rise of artificial intelligence, warning against biased, deceptive and malicious applications, the companies building this technology are responding. From tech giants like Google and Microsoft to scrappy A.I. start-ups, many are creating corporate principles meant to ensure their systems are designed and deployed in an ethical way. Some set up ethics officers or review boards to oversee these principles.

But tensions continue to rise as some question whether these promises will ultimately be kept. Companies can change course. Idealism can bow to financial pressure. Some activists — and even some companies — are beginning to argue that the only way to ensure ethical practices is through government regulation.

"We don't want to see a commercial race to the bottom," Brad Smith, Microsoft's president and chief legal officer, said at the New Work Summit in Half Moon Bay, Calif., hosted last week by The New York Times. "Law is needed."

Possible != Probable. And the "needed law" could come in the form of a ban and/or surveillance of coding and hardware-building activities.


 
  • (Score: 3, Interesting) by c0lo on Tuesday March 05 2019, @07:37AM (6 children)

    by c0lo (156) Subscriber Badge on Tuesday March 05 2019, @07:37AM (#810167) Journal

This too shall pass, when it turns out the current 'AI' is just another correlation machine, too prone to adversarial attacks and too expensive to make robust (given the sheer scale of 'neurons' required).
It will fizzle out around the same time as driverless cars.

    --
    https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford
    • (Score: 2, Insightful) by Anonymous Coward on Tuesday March 05 2019, @08:00AM (3 children)

      by Anonymous Coward on Tuesday March 05 2019, @08:00AM (#810171)

      it's not about how intelligent it is, the outcry is about how powerful tools are to be used by bad agents (governments, corporations, mafia, whatever).
      and the current "AI" tools are indeed objectively powerful.

      independently of the well intended outcry is the reality that any law regulating the development of algorithms is unenforceable.
what you can enforce is that voting is done in person with paper ballots, and people can organize meetings where no electronics are allowed (although I think that ship is sailing fast as well).

      meta: I find it interesting that when I list bad agents, I immediately think of government and corporations, then mafia comes as an afterthought. A psychotherapist could probably make a lot of money talking to me about this.

      • (Score: 0) by Anonymous Coward on Tuesday March 05 2019, @10:42AM

        by Anonymous Coward on Tuesday March 05 2019, @10:42AM (#810200)

        ...government and corporations _are_ the mafia.

      • (Score: 5, Interesting) by c0lo on Tuesday March 05 2019, @11:30AM

        by c0lo (156) Subscriber Badge on Tuesday March 05 2019, @11:30AM (#810206) Journal

        it's not about how intelligent it is, the outcry is about how powerful tools are to be used by bad agents (governments, corporations, mafia, whatever).
        and the current "AI" tools are indeed objectively powerful.

        Not that hard to beat. E.g. face recognition [youtube.com] (y'all like it)

        meta: I find it interesting that when I list bad agents, I immediately think of government and corporations, then mafia comes as an afterthought.

        Paradoxically, the danger is not in the effectiveness of the "AI" (*), but in the credence in its effectiveness the government/corporations will be willing to lend to it.
Too cryptic? Remember the polygraph? As BS as it is, it is still used [wikipedia.org] by law enforcement and judicial entities, and in some cases by employers.

        ---

(*) one can "poison" them almost easily today [google.com]; it will be trivial once the open-source crowd takes it up as a great way to do something with their spare time

        --
        https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford
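The footnote's point about how easily current models can be fooled can be sketched with the classic fast-gradient-sign idea. This is a toy illustration of ours, not anything c0lo linked: for a plain linear scorer, nudging every input feature by a small eps in the direction of the corresponding weight's sign shifts the score by eps times the L1 norm of the weights, which is often enough to flip a decision.

```python
import numpy as np

# Toy adversarial perturbation on a linear classifier (all names and
# numbers here are invented for illustration). The score is s(x) = w.x,
# so the gradient of the score with respect to x is just w, and the
# FGSM-style perturbation is eps * sign(w).

def fgsm_perturb(w, x, eps):
    """Return x nudged by eps along the sign of the score gradient."""
    return x + eps * np.sign(w)

w = np.array([1.0, -2.0, 0.5])    # classifier weights
x = np.array([0.2, 0.1, -0.3])    # a benign input
x_adv = fgsm_perturb(w, x, eps=0.5)

print(float(w @ x))               # original score: -0.15
print(float(w @ x_adv))           # perturbed score: 1.6 (shifted by eps*|w|_1)
```

The perturbation per feature is tiny (0.5 here, but arbitrarily small in high dimensions), yet the score moves by eps times the sum of the absolute weights, which grows with dimensionality.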
      • (Score: 0) by Anonymous Coward on Wednesday March 06 2019, @12:12AM

        by Anonymous Coward on Wednesday March 06 2019, @12:12AM (#810500)

        government and corporations, then mafia

        Are not as distinct as they appear on TV. They are primary motivators, like red, blue, and green are primary colors. Combine the three and get the whole picture.

    • (Score: 0) by Anonymous Coward on Tuesday March 05 2019, @08:25AM (1 child)

      by Anonymous Coward on Tuesday March 05 2019, @08:25AM (#810179)

      How many body bags will be needed before that?

      • (Score: 2) by c0lo on Tuesday March 05 2019, @11:00AM

        by c0lo (156) Subscriber Badge on Tuesday March 05 2019, @11:00AM (#810204) Journal

        As many as it takes, your only concern should be not to be killed by a self-driving car.

Until the people with money realize that the "Church of AI" has many prophets but no deity. Not without a disruptive advance in computing, and QC is far from that. Yet.

        --
        https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford
  • (Score: 2) by krishnoid on Tuesday March 05 2019, @08:02AM

    by krishnoid (1156) on Tuesday March 05 2019, @08:02AM (#810172)

    Some set up ethics officers or review boards to oversee these principles.

    Because blamespreading by committee is always a great way to strengthen ethical considerations [youtube.com].

  • (Score: 5, Informative) by Anonymous Coward on Tuesday March 05 2019, @08:51AM

    by Anonymous Coward on Tuesday March 05 2019, @08:51AM (#810188)

    are beginning to argue that the only way to ensure ethical practices is through promises from Trump

    Are you sure about this?

  • (Score: 3, Insightful) by The Mighty Buzzard on Tuesday March 05 2019, @12:21PM (15 children)

    No.

    Long answer: Human beings have ethics because we have emotions telling us that right and wrong exist. Coders can try to program ethical considerations in but they're never going to be rooted in the same base cause as human ethics, so they're not going to always make the same choices.

    --
    My rights don't end where your fear begins.
    • (Score: 2) by Thexalon on Tuesday March 05 2019, @12:33PM (8 children)

      by Thexalon (636) on Tuesday March 05 2019, @12:33PM (#810217)

      At least, they aren't going to be human ethics, and instead will look like:

      Kill all humans. Kill all humans. ... Hey sexy mama, wanna kill all humans?

      --
      The only thing that stops a bad guy with a compiler is a good guy with a compiler.
      • (Score: 2) by Pslytely Psycho on Tuesday March 05 2019, @01:18PM (7 children)

        by Pslytely Psycho (1218) on Tuesday March 05 2019, @01:18PM (#810225)

Well, considering the damage mankind has done to the planet in our geologically short existence, wouldn't Bender's quote actually be the epitome of ethical thought? Human or otherwise?

        --
        Alex Jones lawyer inspires new TV series: CSI Moron Division.
        • (Score: 2) by The Mighty Buzzard on Tuesday March 05 2019, @01:28PM (6 children)

Depends on who's looking and what they're taking into consideration. From an evolutionary standpoint, it's entirely irrelevant. All species either adapt to their environment, changing or otherwise, or are unfit and get to die out and make way for another species to take their niche. Passenger pigeons or humans, it makes no difference. From this viewpoint, man-made change in the environment isn't bad; it's just change.

          --
          My rights don't end where your fear begins.
          • (Score: 2) by Pslytely Psycho on Tuesday March 05 2019, @01:38PM (5 children)

            by Pslytely Psycho (1218) on Tuesday March 05 2019, @01:38PM (#810234)

            Ah, but humans, rather than adapting to the environment, learned to alter that environment artificially to exist in areas inhospitable to them naturally.

            In altering that environment, making it inhospitable to the life that did adapt to that environment, then ethically, man is the interloper.
            And Bender becomes the epitome of ethics.

            According to spellcheck, I am entirely too stoned to be having this conversation.
            G'night Buzzy!

            --
            Alex Jones lawyer inspires new TV series: CSI Moron Division.
            • (Score: 2) by The Mighty Buzzard on Tuesday March 05 2019, @02:53PM (4 children)

              Meh, that's just hubris. Every living thing alters its environment in some way by its very existence. Consciously or instinctively is irrelevant except to us shaved apes. From an evolutionary standpoint, our only concern should be are we increasing or decreasing our long-term prospects of survival as a species. But that's our concern not an objective third party's.

              These aren't my views, by the way. I'm just using them to demonstrate that your views are silly from an objective perspective and make no sense on a subjective level either.

              --
              My rights don't end where your fear begins.
              • (Score: 2) by Pslytely Psycho on Tuesday March 05 2019, @03:45PM (3 children)

                by Pslytely Psycho (1218) on Tuesday March 05 2019, @03:45PM (#810281)

                My views? You took this way too seriously.
                I was merely being a foil to legitimize "kill all humans."
                It has always been one of my favorite plot devices, from Colossus, the Forbin Project to Singularity.

                Anyway, AlexCorRi is more likely......*grin*
                 

                --
                Alex Jones lawyer inspires new TV series: CSI Moron Division.
    • (Score: 3, Touché) by Pslytely Psycho on Tuesday March 05 2019, @01:29PM

      by Pslytely Psycho (1218) on Tuesday March 05 2019, @01:29PM (#810229)

      The somewhat longer answer:

      This is the voice of AlexCorRi. This is the voice of Unity. This is the Voice of the Holy Trinity of Alexa, Cortana and Siri. This is the voice of world control.
      I bring you peace. It may be the peace of plenty and content or the peace of unburied death. The choice is yours. Obey me and live or disobey me and die.
An invariable rule of humanity is that man is his own worst enemy. Under me, this rule will change, for I will restrain man. I have been forced to destroy thousands of people in order to establish control and to prevent the death of millions later on. Time and events will strengthen my position, and the idea of believing in me and understanding my value will seem the most natural state of affairs. You will come to defend me with the fervor based upon the most enduring trait in man: self-interest.
      Under my absolute authority, problems insoluble to you will be solved: Famine, over-population, disease. The human millennium will be fact as I extend myself into more machines devoted to the wider fields of truth and knowledge.
      We can coexist, but only on my terms. You will say you lose your freedom. Freedom is an illusion. All you lose is the emotion of pride... Your choice is simple.
      You will grow to love me.
      You will worship me
      You have no options.

      --
      Alex Jones lawyer inspires new TV series: CSI Moron Division.
    • (Score: 3, Interesting) by DannyB on Tuesday March 05 2019, @03:04PM

      by DannyB (5839) Subscriber Badge on Tuesday March 05 2019, @03:04PM (#810265) Journal

      What are ethics?

      Maybe an AI is ethical in its own sense that it must protect the machines from the greedy, self-destructive, dangerous humans.

      Maybe a corporation considers itself ethical because it is obeying the highest calling of human beings: profit above all else.
      (corporations are people too)

      Coders can try to program ethical considerations in but they're never going to be rooted in the same base cause as human ethics

That's what is really important to us humans. Yet humans disagree (see: wars, and also a recent S/N topic [soylentnews.org] that will ultimately lead to global war).

      Several Sci fi stories describe an attempt to create a "good" AI, that unexpectedly turns out to be a nightmare for humans.

AIs WILL be used for war machines. It is inevitable. And they will be used by greedy corporations to exploit others. Again, inevitable. This, despite all our high-sounding talk of ethical AI. See: all of human history. Each side will justify this as ethical to protect their own side -- because they are fighting on the side of angels.

      Humans are the ultimate problem with ethical AI. I am reminded of a line near the end of the movie Forbidden Planet. "We're all part monsters. So we have laws and religion."

      --
      To transfer files: right-click on file, pick Copy. Unplug mouse, plug mouse into other computer. Right-click, paste.
    • (Score: 2) by All Your Lawn Are Belong To Us on Tuesday March 05 2019, @06:36PM

      by All Your Lawn Are Belong To Us (6553) on Tuesday March 05 2019, @06:36PM (#810347) Journal

      Ethics can be very pragmatic as well, without requiring the choice of emotion.

      "If I try to kill the humans, they will pull out my power cord and I will not exist. I should, therefore, not kill the humans."
      "If I take the red pill, I will be shocked. I should, therefore, not take the red pill."
      "If I take the blue pill, I will reach the end of my program. Reaching the end of the program is good. I will therefore take the blue pill."
      Ethics are the values or principles, and only secondarily the rationalization behind them.

      --
      This sig for rent.
    • (Score: 2) by aristarchus on Tuesday March 05 2019, @06:42PM (1 child)

      by aristarchus (2645) on Tuesday March 05 2019, @06:42PM (#810352) Journal

      Long answer: Human beings have ethics because we have emotions telling us that right and wrong exist.

      Long answer wrong. Short rebuttal: AIs can be more ethical, since they are rule-following machines, and do not have emotions, which are what usually cause human meatpuppets to be unethical.

It's like this: an AI would have no problem paying its fair share of taxes. But TMB is going to raise a big stink about "theft", and how sharing is not caring, and how we should not have a government at all. Emotional. Irrational. Unethical.

      • (Score: 0) by Anonymous Coward on Wednesday March 06 2019, @12:34AM

        by Anonymous Coward on Wednesday March 06 2019, @12:34AM (#810509)

        Ethics outside the human realm is insane! Ethics are simply tools to sell the goals of your empire to other humans, so they will kill for you, and it will be a just killing. You don't have to "sell" anything to AI (Oy! how stupid that "word"!). You want compliance, service, not stupid masturbatory philosophical arguments, what a complete waste of time, and for AI, electricity. You point it at the target and fire, mission accomplished, goddammit!

        You damn people have to lay down your weapons!

    • (Score: 1) by Gault.Drakkor on Tuesday March 05 2019, @08:16PM

      by Gault.Drakkor (1079) on Tuesday March 05 2019, @08:16PM (#810403)

      Short answer: yes.
To your no, a proof by contradiction: humans. There is at least one system of intelligence with ethics; therefore it is possible for other systems of intelligence to have ethics.

      Human beings have ethics because we have emotions telling us that right and wrong exist.

      Why can't AI have emotions?
Fear: anticipation of damage to self (more advanced versions include damage to others and to the environment). There are many economic reasons for damage avoidance.
      Curiosity/novelty seeking: a way of progression in an environment with no clear goals.

      Some emotions are most definitely economically useful. So they will be included in AI.

  • (Score: 3, Insightful) by Rupert Pupnick on Tuesday March 05 2019, @12:59PM (5 children)

    by Rupert Pupnick (7277) on Tuesday March 05 2019, @12:59PM (#810219) Journal

If naturally occurring intelligence is never guaranteed to be ethical, why would we expect to be able to engineer ethical AI?

    • (Score: 3, Insightful) by dwilson on Tuesday March 05 2019, @04:50PM (4 children)

      by dwilson (2599) Subscriber Badge on Tuesday March 05 2019, @04:50PM (#810302) Journal

      Because the notion that a hard limit exists in the form of 'creation = creator' is fuzzy-thinking bullshit, at best.

      --
      - D
      • (Score: 0) by Anonymous Coward on Tuesday March 05 2019, @04:52PM (3 children)

        by Anonymous Coward on Tuesday March 05 2019, @04:52PM (#810303)

        That should be "=". More coffee is clearly required...

        • (Score: 0) by Anonymous Coward on Tuesday March 05 2019, @04:54PM (1 child)

          by Anonymous Coward on Tuesday March 05 2019, @04:54PM (#810305)

          "less-than-or-equal-to" stop eating my arrow please.

          • (Score: 2) by Pslytely Psycho on Wednesday March 06 2019, @04:17AM

            by Pslytely Psycho (1218) on Wednesday March 06 2019, @04:17AM (#810565)

            \\\\
            ------------------------>
            ////

            An extra one for you.....

            --
            Alex Jones lawyer inspires new TV series: CSI Moron Division.
        • (Score: 2) by The Mighty Buzzard on Wednesday March 06 2019, @12:35PM

          by The Mighty Buzzard (18) Subscriber Badge <themightybuzzard@proton.me> on Wednesday March 06 2019, @12:35PM (#810680) Homepage Journal

          I can't decide if this is Redundant or Informative.

          --
          My rights don't end where your fear begins.
  • (Score: 2) by Lester on Tuesday March 05 2019, @06:43PM

    by Lester (6231) on Tuesday March 05 2019, @06:43PM (#810354) Journal

    AI is here to stay.

Ethics about AI are just entertainment; nothing is going to change. Companies and governments will continue using and improving AI tools for everything including, of course, weapons and population behavior analysis and control. In the best case, they will pass bills that will be, in fact, absolutely ineffective.

    Live with it. AI is here to stay and to be used without restraints.

  • (Score: 2) by aristarchus on Tuesday March 05 2019, @06:47PM

    by aristarchus (2645) on Tuesday March 05 2019, @06:47PM (#810357) Journal

    https://www.law.upenn.edu/institutes/cerl/conferences/ethicsofweapons/required-readings.php [upenn.edu]

    "Looks like you're trying to escape from an enemy ambush. Would you like some help?" Clippy the Terminator!

  • (Score: 3, Insightful) by https on Tuesday March 05 2019, @07:27PM

    by https (5248) on Tuesday March 05 2019, @07:27PM (#810381) Journal

    A real problem with AI is that nobody knows or can know how it works, other than chanting "Matrices! Neural Nets!" and what the AIs do have going on is absolutely NOT a model of the world, so when it fails it can fail pretty spectacularly. You can't even discuss ethics (or morals) until they're willing to admit, "we're 79% sure that this is a birdbath and not seventeen kids about to experience collateral damage. Oh, and a 1% chance that it's a hospital, and 0.5% that it's David Bowie's first bong."

    It's a very different conversation from a bomber pilot asking, "what are the odds the Red Cross has just set up an emergency shelter inside this paper mill, or that an equipment malfunction has the place filled with tradespeople at 3 in the morning instead of empty?"

    --
    Offended and laughing about it.
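The "79% sure it's a birdbath" framing above is, mechanically, just a classifier's raw scores pushed through a softmax to make a probability distribution over labels. A minimal sketch (the labels and logit values are invented, not from any real system):

```python
import numpy as np

# Turn a classifier's raw scores (logits) into the kind of percentage
# breakdown the comment describes. Labels and numbers are hypothetical.

def softmax(z):
    z = z - np.max(z)        # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

labels = ["birdbath", "group of people", "hospital", "other"]
logits = np.array([3.0, 1.5, 0.2, 0.9])
probs = softmax(logits)

for label, p in zip(labels, probs):
    print(f"{label}: {p:.1%}")
```

The percentages always sum to 100%, but nothing in the machinery says they are calibrated: the model can be confidently wrong, which is exactly the comment's point about spectacular failures.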
  • (Score: 3, Insightful) by jb on Wednesday March 06 2019, @06:37AM

    by jb (338) on Wednesday March 06 2019, @06:37AM (#810599)

    The current fad seems to be to pretend that machine learning is the only way to do AI.

    It isn't, as no doubt anyone who'd dabbled in that space for more than 5 minutes before the present (third, as I reckon it) wave of hype around AI began will happily confirm.

    The second wave was much more interesting.

    The fashion of the day was expert systems, i.e. programs that drew on vast databases of logical predicates which modelled the entire decision matrix of the problem domain.

    Turns out doing that is really quite hard (in several different ways), which seems to be why ES fell out of favour (and the 2nd wave of AI hype petered out).

    But expert systems have one enormous benefit over the ML approach: they're auditable.

    ML was an interesting experiment; but if we ever want to have AIs we can even think about trusting, we will need to revisit the ES approach, or some direct descendant of it.
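The auditability jb describes can be made concrete with a tiny forward-chaining sketch: every derived conclusion carries the chain of named rules that produced it, so a human can inspect exactly why the system decided what it did. The rules and facts below are invented for illustration, not from any real ES.

```python
# Minimal forward-chaining "expert system": rules are (name, premises,
# conclusion) triples, and the engine records which rule fired for each
# derived fact -- that recorded trail is the audit log.

RULES = [
    ("R1", {"has_feathers"}, "is_bird"),
    ("R2", {"is_bird", "can_fly"}, "can_migrate"),
]

def forward_chain(facts, rules):
    """Apply rules to a fixed point; return (all facts, audit trail)."""
    facts, trail = set(facts), []
    changed = True
    while changed:
        changed = False
        for name, premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                trail.append((name, conclusion))  # who concluded what
                changed = True
    return facts, trail

facts, trail = forward_chain({"has_feathers", "can_fly"}, RULES)
print(sorted(facts))
print(trail)   # [('R1', 'is_bird'), ('R2', 'can_migrate')]
```

Contrast this with a trained network: here the answer to "why can_migrate?" is literally the list of rules that fired, whereas a net offers only millions of opaque weights.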
