from the not-with-my-work dept.
Leading AI experts have boycotted a South Korean university over its partnership with weapons manufacturer Hanwha Systems. More than 50 AI researchers from 30 countries signed a letter expressing concern about the university's plans to develop artificial intelligence for weapons. In response, the university said it would not be developing "autonomous lethal weapons". The boycott comes ahead of a UN meeting to discuss killer robots.
Shin Sung-chul, president of the Korea Advanced Institute of Science and Technology (Kaist), said: "I reaffirm once again that Kaist will not conduct any research activities counter to human dignity including autonomous weapons lacking meaningful human control. Kaist is significantly aware of ethical concerns in the application of all technologies including artificial intelligence." He went on to explain that the university's project was centred on developing algorithms for "efficient logistical systems, unmanned navigation and aviation training systems".
The U.N. has begun discussion on "lethal autonomous robots": weapons that take the next step beyond today's operator-controlled drones to machines that select and engage targets entirely on their own.
"Killer robots would threaten the most fundamental of rights and principles in international law," warned Steve Goose, arms division director at Human Rights Watch.
Are we too far down the rabbit hole, or can we come to reasonable and humane limits on this new world of death-by-algorithm?
The meeting, chaired by the German diplomat Michael Biontino, has also been asked to discuss questions such as: in what situations are distinctively human traits, such as fear, hate, a sense of honour and dignity, compassion and love, desirable in combat? And in what situations do machines lacking emotions offer distinct advantages over human combatants?
The Campaign to Stop Killer Robots, an alliance of human rights groups and concerned scientists, is calling for an international prohibition on fully autonomous weapons.
Last week Human Rights Watch released a report urging the creation of a new protocol specifically aimed at outlawing lethal autonomous weapons systems (Laws). There is precedent: blinding laser weapons were pre-emptively outlawed in 1995, and since 2008 combatant nations have been required to clear away unexploded cluster bombs.
[...] The Foreign Office told the Guardian: "At present, we do not see the need for a prohibition on the use of Laws, as international humanitarian law already provides sufficient regulation for this area. The United Kingdom is not developing lethal autonomous weapons systems, and the operation of weapons systems by the UK armed forces will always be under human oversight and control. As an indication of our commitment to this, we are focusing development efforts on remotely piloted systems rather than highly automated systems."
When a news article revealed that Clarifai was working with the Pentagon, and some employees questioned the ethics of building artificial intelligence to analyze video captured by drones, the company said the project would save the lives of civilians and soldiers.
"Clarifai's mission is to accelerate the progress of humanity with continually improving A.I.," read a blog post from Matt Zeiler, the company's founder and chief executive, and a prominent A.I. researcher. Later, in a news media interview, Mr. Zeiler announced a new management position that would ensure all company projects were ethically sound.
As activists, researchers and journalists voice concerns over the rise of artificial intelligence, warning against biased, deceptive and malicious applications, the companies building this technology are responding. From tech giants like Google and Microsoft to scrappy A.I. start-ups, many are creating corporate principles meant to ensure their systems are designed and deployed ethically. Some are appointing ethics officers or setting up review boards to oversee these principles.
But tensions continue to rise as some question whether these promises will ultimately be kept. Companies can change course. Idealism can bow to financial pressure. Some activists — and even some companies — are beginning to argue that the only way to ensure ethical practices is through government regulation.
"We don't want to see a commercial race to the bottom," Brad Smith, Microsoft's president and chief legal officer, said at the New Work Summit in Half Moon Bay, Calif., hosted last week by The New York Times. "Law is needed."
Possible != Probable. And the "needed law" could come in the form of a ban and/or surveillance of coding and hardware-building activities.
U.N. Starts Discussion on Lethal Autonomous Robots
UK Opposes "Killer Robot" Ban
Robot Weapons: What's the Harm?
The UK Government Urged to Establish an Artificial Intelligence Ethics Board
Google Employees on Pentagon AI Algorithms: "Google Should Not be in the Business of War"
South Korea's KAIST University Boycotted Over Alleged "Killer Robot" Partnership
About a Dozen Google Employees Have Resigned Over Project Maven
Google Drafting Ethics Policy for its Involvement in Military Projects
Google Will Not Continue Project Maven After Contract Expires in 2019
Uproar at Google after News of Censored China Search App Breaks
"Senior Google Scientist" Resigns over Chinese Search Engine Censorship Project
Google Suppresses Internal Memo About China Censorship; Eric Schmidt Predicts Internet Split
Leaked Transcript Contradicts Google's Denials About Censored Chinese Search Engine
Senators Demand Answers About Google+ Breach; Project Dragonfly Undermines Google's Neutrality
Google's Secret China Project "Effectively Ended" After Internal Confrontation
Microsoft Misrepresented HoloLens 2 Field of View, Faces Backlash for Military Contract