posted by janrinok on Friday April 06 2018, @05:14AM
from the not-with-my-work dept.

South Korean university boycotted over 'killer robots'

Leading AI experts have boycotted a South Korean university over a partnership with weapons manufacturer Hanwha Systems. More than 50 AI researchers from 30 countries signed a letter expressing concern about the university's plans to develop artificial intelligence for weapons. In response, the university said it would not be developing "autonomous lethal weapons". The boycott comes ahead of a UN meeting to discuss killer robots.

Shin Sung-chul, president of the Korea Advanced Institute of Science and Technology (Kaist), said: "I reaffirm once again that Kaist will not conduct any research activities counter to human dignity including autonomous weapons lacking meaningful human control. Kaist is significantly aware of ethical concerns in the application of all technologies including artificial intelligence." He went on to explain that the university's project was centred on developing algorithms for "efficient logistical systems, unmanned navigation and aviation training systems".

Also at The Guardian and CNN.

Related: U.N. Starts Discussion on Lethal Autonomous Robots
UK Opposes "Killer Robot" Ban


Original Submission

Related Stories

U.N. Starts Discussion on Lethal Autonomous Robots 27 comments

The U.N. has begun discussion on "lethal autonomous robots": killing machines that take the next step from today's operator-controlled drones to complete autonomy.

"Killer robots would threaten the most fundamental of rights and principles in international law," warned Steve Goose, arms division director at Human Rights Watch.

Are we too far down the rabbit hole, or can we come to reasonable and humane limits on this new world of death-by-algorithm?

UK Opposes "Killer Robot" Ban 39 comments

The UK is opposing international efforts to ban "lethal autonomous weapons systems" (Laws) at a week-long United Nations session in Geneva:

The meeting, chaired by a German diplomat, Michael Biontino, has also been asked to discuss questions such as: in what situations are distinctively human traits, such as fear, hate, a sense of honour and dignity, compassion and love, desirable in combat? And in what situations do machines lacking emotions offer distinct advantages over human combatants?

The Campaign to Stop Killer Robots, an alliance of human rights groups and concerned scientists, is calling for an international prohibition on fully autonomous weapons.

Last week Human Rights Watch released a report urging the creation of a new protocol specifically aimed at outlawing Laws. Blinding laser weapons were pre-emptively outlawed in 1995, and since 2008 combatant nations have been required to remove unexploded cluster bombs.

[...] The Foreign Office told the Guardian: "At present, we do not see the need for a prohibition on the use of Laws, as international humanitarian law already provides sufficient regulation for this area. The United Kingdom is not developing lethal autonomous weapons systems, and the operation of weapons systems by the UK armed forces will always be under human oversight and control. As an indication of our commitment to this, we are focusing development efforts on remotely piloted systems rather than highly automated systems."

Is Ethical A.I. Even Possible? 35 comments

When a news article revealed that Clarifai was working with the Pentagon and some employees questioned the ethics of building artificial intelligence that analyzed video captured by drones, the company said the project would save the lives of civilians and soldiers.

"Clarifai's mission is to accelerate the progress of humanity with continually improving A.I.," read a blog post from Matt Zeiler, the company's founder and chief executive, and a prominent A.I. researcher. Later, in a news media interview, Mr. Zeiler announced a new management position that would ensure all company projects were ethically sound.

As activists, researchers, and journalists voice concerns over the rise of artificial intelligence, warning against biased, deceptive and malicious applications, the companies building this technology are responding. From tech giants like Google and Microsoft to scrappy A.I. start-ups, many are creating corporate principles meant to ensure their systems are designed and deployed in an ethical way. Some set up ethics officers or review boards to oversee these principles.

But tensions continue to rise as some question whether these promises will ultimately be kept. Companies can change course. Idealism can bow to financial pressure. Some activists — and even some companies — are beginning to argue that the only way to ensure ethical practices is through government regulation.

"We don't want to see a commercial race to the bottom," Brad Smith, Microsoft's president and chief legal officer, said at the New Work Summit in Half Moon Bay, Calif., hosted last week by The New York Times. "Law is needed."

Possible != Probable. And the "needed law" could come in the form of a ban and/or surveillance of coding and hardware-building activities.

Original Submission

This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 0) by Anonymous Coward on Friday April 06 2018, @05:26AM (#663277)

    If they don't, someone else will.
    You already have autonomous cars that recognize pedestrians, so it's not a far-fetched idea to install a gun and train it on the same pedestrians.

    • (Score: 1, Touché) by Anonymous Coward on Friday April 06 2018, @05:52AM (#663281)

      If they don't, someone else will.

      Is that an argument along the lines of "If you don't go shooting kids in a US school, someone else would"?
      Yes, we know school shootings will still happen in the US.

      • (Score: 0) by Anonymous Coward on Friday April 06 2018, @06:25AM (#663294)

        Yes, unless you want to claim credit for it, that is. /me shrugs some more.

        • (Score: 0) by Anonymous Coward on Friday April 06 2018, @06:46AM (#663304)

          Ah, gotta just love chatting with myself. A dialogue between two AC-alike minds.

      • (Score: 0) by Anonymous Coward on Friday April 06 2018, @11:24AM (#663373)

        Is that an argument along the lines of "If you don't go shooting kids in a US school, someone else would"?

        Not exactly. By shooting kids in a US school, you don't reduce the probability of others doing so. But by taking a job in weapons research, you keep someone else from getting that job. Also, the result of shooting kids in US schools is always the same: dead kids. But the result of research depends very much on the researcher, therefore it matters who does the research.

        • (Score: 0) by Anonymous Coward on Friday April 06 2018, @02:33PM (#663428)

          Not exactly. By shooting kids in a US school, you don't reduce the probability of others doing so.

          What's the fun in repeating the massacre at Columbine?
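
The premise at the top of this thread, that commodity vision systems already recognize pedestrians off the shelf, is easy to illustrate. Below is a minimal sketch using OpenCV's stock HOG-based people detector; the input path street.jpg is a hypothetical placeholder, and a modern system would use a trained neural detector rather than this classic baseline.

    import cv2

    # Load an image to scan for people (hypothetical path).
    img = cv2.imread("street.jpg")

    # Built-in HOG descriptor with OpenCV's pre-trained pedestrian SVM.
    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

    # Sliding-window, multi-scale detection; returns boxes and confidence weights.
    boxes, weights = hog.detectMultiScale(img, winStride=(8, 8), padding=(8, 8), scale=1.05)

    # Draw a bounding box around each detected pedestrian.
    for (x, y, w, h) in boxes:
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)

    cv2.imwrite("detections.jpg", img)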

  • (Score: 2) by MichaelDavidCrawford (2339) on Friday April 06 2018, @06:47AM (#663306)

    You say that like they're a bad thing.

    --
    Yes I Have No Bananas. [gofundme.com]
  • (Score: 2, Insightful) by Anonymous Coward on Friday April 06 2018, @10:33AM (#663366)

    The problem with all ethical AI researchers boycotting research into autonomous lethal weapons is that the research will then be done by unethical AI researchers (who, obviously, have no ethical problem with it). But if there's one thing that, if it gets developed at all, I'd prefer to see developed by ethical researchers, it is autonomous lethal weapons.

    • (Score: 0) by Anonymous Coward on Friday April 06 2018, @12:28PM (#663395)

      Another version of the problem (and solution) can be found in the current arc on LICD, starting yesterday:
      http://leasticoulddo.com/comic/20180405 [leasticoulddo.com]
      ...and the punch line today.

    • (Score: 2) by c0lo (156) on Friday April 06 2018, @02:48PM (#663435)

      This argument sounds to my ears very much like installing an antivirus on voting machines [xkcd.com].

      It simply will not matter how ethically the autonomous lethal weapons are developed. What matters is how ethically they will be used once developed.
      And, believe it or not, it will not be the scientists using them.

      --
      https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford
  • (Score: 2) by acid andy (1683) on Friday April 06 2018, @01:12PM (#663408)

    I reaffirm once again that Kaist will not conduct any research activities counter to human dignity including autonomous weapons lacking meaningful human control.

    So, there's dignity so long as it was a human that pulled the trigger? Or pressed the button. Bullshit.

    --
    If a cat has kittens, does a rat have rittens, a bat bittens and a mat mittens?
    • (Score: 2) by c0lo (156) on Friday April 06 2018, @02:55PM (#663438)

      So, there's dignity so long as it was a human that pulled the trigger? Or pressed the button. Bullshit.

      It has to be a human, though.
      No other animal would feel dignified in any way by pressing a button, because that requires a dose of vanity only (some) humans are capable of.

      --
      https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford
    • (Score: 2) by Freeman (732) on Friday April 06 2018, @04:19PM (#663470)

      As in, you're not worth even thinking about, because we have our autonomous robot army "taking care of" the problem. So, yeah, there's a difference.

      --
      Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
      • (Score: 3, Insightful) by acid andy (1683) on Friday April 06 2018, @04:23PM (#663474)

        You're right, but death's the great leveler. In the end, for the one that dies, it won't much matter who or what initiated the act. In some cases they won't know anyway. If human operators are hidden in secret remote bases, how does one know if they even exist?

        --
        If a cat has kittens, does a rat have rittens, a bat bittens and a mat mittens?
        • (Score: 0) by Anonymous Coward on Friday April 06 2018, @07:15PM (#663521)

          The point is that many a government has been toppled by its own army deciding not to follow a direct order to fire on unarmed citizens. A robot, on the other hand, will do as ordered and mow down people as long as it has ammunition.

          • (Score: 2) by acid andy (1683) on Friday April 06 2018, @08:10PM (#663536)

            I guess I was too subtle. I wasn't supporting the robots. I was questioning the implication in the quote that there's any dignity in being murdered by a human.

            --
            If a cat has kittens, does a rat have rittens, a bat bittens and a mat mittens?