
SoylentNews is people

posted by chromas on Tuesday March 05 2019, @07:07AM
from the open-the-pod-bay-doors-HAL dept.

Is Ethical A.I. Even Possible?

When a news article revealed that Clarifai was working with the Pentagon and some employees questioned the ethics of building artificial intelligence that analyzed video captured by drones, the company said the project would save the lives of civilians and soldiers.

"Clarifai's mission is to accelerate the progress of humanity with continually improving A.I.," read a blog post from Matt Zeiler, the company's founder and chief executive, and a prominent A.I. researcher. Later, in a news media interview, Mr. Zeiler announced a new management position that would ensure all company projects were ethically sound.

As activists, researchers, and journalists voice concerns over the rise of artificial intelligence, warning against biased, deceptive and malicious applications, the companies building this technology are responding. From tech giants like Google and Microsoft to scrappy A.I. start-ups, many are creating corporate principles meant to ensure their systems are designed and deployed in an ethical way. Some set up ethics officers or review boards to oversee these principles.

But tensions continue to rise as some question whether these promises will ultimately be kept. Companies can change course. Idealism can bow to financial pressure. Some activists — and even some companies — are beginning to argue that the only way to ensure ethical practices is through government regulation.

"We don't want to see a commercial race to the bottom," Brad Smith, Microsoft's president and chief legal officer, said at the New Work Summit in Half Moon Bay, Calif., hosted last week by The New York Times. "Law is needed."

Possible != Probable. And the "needed law" could come in the form of a ban and/or surveillance of coding and hardware-building activities.

Original Submission

Related Stories

U.N. Starts Discussion on Lethal Autonomous Robots 27 comments

The U.N. has begun discussion on "lethal autonomous robots": killing machines that take the next step from today's operator-controlled drones to completely autonomous operation.

"Killer robots would threaten the most fundamental of rights and principles in international law," warned Steve Goose, arms division director at Human Rights Watch.

Are we too far down the rabbit hole, or can we come to reasonable and humane limits on this new world of death-by-algorithm?

UK Opposes "Killer Robot" Ban 39 comments

The UK is opposing international efforts to ban "lethal autonomous weapons systems" (Laws) at a week-long United Nations session in Geneva:

The meeting, chaired by a German diplomat, Michael Biontino, has also been asked to discuss questions such as: in what situations are distinctively human traits, such as fear, hate, sense of honour and dignity, compassion and love desirable in combat?, and in what situations do machines lacking emotions offer distinct advantages over human combatants?

The Campaign to Stop Killer Robots, an alliance of human rights groups and concerned scientists, is calling for an international prohibition on fully autonomous weapons.

Last week Human Rights Watch released a report urging the creation of a new protocol specifically aimed at outlawing Laws. Blinding laser weapons were pre-emptively outlawed in 1995, and since 2008 combatant nations have been required to remove unexploded cluster bombs.

[...] The Foreign Office told the Guardian: "At present, we do not see the need for a prohibition on the use of Laws, as international humanitarian law already provides sufficient regulation for this area. The United Kingdom is not developing lethal autonomous weapons systems, and the operation of weapons systems by the UK armed forces will always be under human oversight and control. As an indication of our commitment to this, we are focusing development efforts on remotely piloted systems rather than highly automated systems."

Robot Weapons: What’s the Harm? 33 comments

Opposition to the creation of autonomous robot weapons has been the subject of discussion here recently. The New York Times has added another voice to the chorus with this article:

The specter of autonomous weapons may evoke images of killer robots, but most applications are likely to be decidedly more pedestrian. Indeed, while there are certainly risks involved, the potential benefits of artificial intelligence on the battlefield — to soldiers, civilians and global stability — are also significant.

The authors of the letter liken A.I.-based weapons to chemical and biological munitions, space-based nuclear missiles and blinding lasers. But this comparison doesn't stand up under scrutiny. However high-tech those systems are in design, in their application they are "dumb" — and, particularly in the case of chemical and biological weapons, impossible to control once deployed.

A.I.-based weapons, in contrast, offer the possibility of selectively sparing the lives of noncombatants, limiting their use to precise geographical boundaries or times, or ceasing operation upon command (or the lack of a command to continue).

Personally, I dislike the idea of using AI in weapons to make targeting decisions. I would hate to have to argue with a smart bomb to try to convince it that it should not carry out what it thinks is its mission because of an error.


Original Submission

The UK Government Urged to Establish an Artificial Intelligence Ethics Board 18 comments

The UK government has been urged to establish an AI ethics board to tackle the creeping influence of machine learning on society.

The call comes from a Robotics and Artificial Intelligence report published yesterday by the House of Commons science and technology select committee. It quotes experts who warned the panel that AI "raises a host of ethical and legal issues".

"We recommend that a standing Commission on Artificial Intelligence be established, based at the Alan Turing Institute, to examine the social, ethical and legal implications of recent and potential developments in AI," the report said.

It highlighted that methods are required to verify that AI systems are operating in a transparent manner, to make sure that their behaviour is not unpredictable, and that any decisions made can be explained.

Innovate UK – an agency of UK.gov's Department of Business – said that "no clear paths exist for the verification and validation of autonomous systems whose behaviour changes with time."

They think they can stop Samaritan?


Original Submission

Google Employees on Pentagon AI Algorithms: "Google Should Not be in the Business of War" 65 comments

We had submissions from two Soylentils concerning recent employee reaction to Google's participation in the Pentagon's "Project Maven" program:

Google Workers Urge C.E.O. to Pull Out of Pentagon A.I. Project

Submitted via IRC for fyngyrz

Thousands of Google employees, including dozens of senior engineers, have signed a letter protesting the company's involvement in a Pentagon program that uses artificial intelligence to interpret video imagery and could be used to improve the targeting of drone strikes.

The letter [pdf], which is circulating inside Google and has garnered more than 3,100 signatures, reflects a culture clash between Silicon Valley and the federal government that is likely to intensify as cutting-edge artificial intelligence is increasingly employed for military purposes.

Source: https://www.nytimes.com/2018/04/04/technology/google-letter-ceo-pentagon-project.html

Google Employees on Pentagon AI Algorithms: "Google Should Not be in the Business of War"

Thousands of Google employees have signed a letter protesting the development of "Project Maven", which would use machine learning algorithms to analyze footage from U.S. military drones.

South Korea's KAIST University Boycotted Over Alleged "Killer Robot" Partnership 16 comments

South Korean university boycotted over 'killer robots'

Leading AI experts have boycotted a South Korean university over a partnership with weapons manufacturer Hanwha Systems. More than 50 AI researchers from 30 countries signed a letter expressing concern about its plans to develop artificial intelligence for weapons. In response, the university said it would not be developing "autonomous lethal weapons". The boycott comes ahead of a UN meeting to discuss killer robots.

Shin Sung-chul, president of the Korea Advanced Institute of Science and Technology (Kaist), said: "I reaffirm once again that Kaist will not conduct any research activities counter to human dignity including autonomous weapons lacking meaningful human control. Kaist is significantly aware of ethical concerns in the application of all technologies including artificial intelligence." He went on to explain that the university's project was centred on developing algorithms for "efficient logistical systems, unmanned navigation and aviation training systems".

Also at The Guardian and CNN.

Related: U.N. Starts Discussion on Lethal Autonomous Robots
UK Opposes "Killer Robot" Ban


Original Submission

About a Dozen Google Employees Have Resigned Over Project Maven 70 comments

Google Employees Resign in Protest Against Pentagon Contract

It's been nearly three months since many Google employees—and the public—learned about the company's decision to provide artificial intelligence to a controversial military pilot program known as Project Maven, which aims to speed up analysis of drone footage by automatically classifying images of objects and people. Now, about a dozen Google employees are resigning in protest over the company's continued involvement in Maven.

[...] The employees who are resigning in protest, several of whom discussed their decision to leave with Gizmodo, say that executives have become less transparent with their workforce about controversial business decisions and seem less interested in listening to workers' objections than they once did. In the case of Maven, Google is helping the Defense Department implement machine learning to classify images gathered by drones. But some employees believe humans, not algorithms, should be responsible for this sensitive and potentially lethal work—and that Google shouldn't be involved in military work at all.

Previously: Google vs Maven
Google Employees on Pentagon AI Algorithms: "Google Should Not be in the Business of War"


Original Submission

Google Drafting Ethics Policy for its Involvement in Military Projects 26 comments

Google promises ethical principles to guide development of military AI

Google is drawing up a set of guidelines that will steer its involvement in developing AI tools for the military, according to a report from The New York Times. What exactly these guidelines will stipulate isn't clear, but Google says they will include a ban on the use of artificial intelligence in weaponry. The principles are expected to be announced in full in the coming weeks. They are a response to the controversy over the company's decision to develop AI tools for the Pentagon that analyze drone surveillance footage.

[...] But the question facing these employees (and Google itself) is: where do you draw the line? Does using machine learning to analyze surveillance footage for the military count as "weaponized AI"? Probably not. But what if that analysis informs future decisions about drone strikes? Does it matter then? How would Google even know if this had happened?

Also at VentureBeat and Engadget.

Previously: Google vs Maven
Google Employees on Pentagon AI Algorithms: "Google Should Not be in the Business of War"
About a Dozen Google Employees Have Resigned Over Project Maven


Original Submission

Google Will Not Continue Project Maven After Contract Expires in 2019 19 comments

We have recently covered the fact that some Google employees had resigned because of the company's involvement in an AI-related weapons project called Maven. Many thought that the resignations, whilst being a noble gesture, would amount to nothing - but we were wrong...

Leaked Emails Show Google Expected Lucrative Military Drone AI Work To Grow Exponentially

Google has sought to quash the internal dissent in conversations with employees. Diane Greene, the chief executive of Google’s cloud business unit, speaking at a company town hall meeting following the revelations, claimed that the contract was “only” for $9 million, according to the New York Times, a relatively minor project for such a large company.

Internal company emails obtained by The Intercept tell a different story. The September emails show that Google’s business development arm expected the military drone artificial intelligence revenue to ramp up from an initial $15 million to an eventual $250 million per year.

In fact, one month after news of the contract broke, the Pentagon allocated an additional $100 million to Project Maven.

The internal Google email chain also notes that several big tech players competed to win the Project Maven contract. Other tech firms such as Amazon were in the running, one Google executive involved in negotiations wrote. (Amazon did not respond to a request for comment.) Rather than serving solely as a minor experiment for the military, Google executives on the thread stated that Project Maven was “directly related” to a major cloud computing contract worth billions of dollars that other Silicon Valley firms are competing to win.

However, Google has had a major rethink.

Uproar at Google after News of Censored China Search App Breaks 53 comments

iTWire:

Only a few of the search behemoth's 88,000 workers were briefed on the project before The Intercept reported on 1 August that Google had plans to launch a censored mobile search app for the Chinese market, with no access to sites about human rights, democracy, religion or peaceful protest.

The customised Android search app, with different versions known as Maotai and Longfei, was said to have been demonstrated to Chinese Government authorities.

In a related development, six US senators from both parties were reported to have sent a letter to Google chief executive Sundar Pichai, demanding an explanation over the company's move.

One source inside Google, who witnessed the backlash from employees after news of the plan was reported, told The Intercept: "Everyone's access to documents got turned off, and is being turned on [on a] document-by-document basis.

"There's been total radio silence from leadership, which is making a lot of people upset and scared. ... Our internal meme site and Google Plus are full of talk, and people are a.n.g.r.y."


Original Submission

"Senior Google Scientist" Resigns over Chinese Search Engine Censorship Project 50 comments

Senior Google Scientist Resigns Over "Forfeiture of Our Values" in China

A senior Google research scientist has quit the company in protest over its plan to launch a censored version of its search engine in China.

Jack Poulson worked for Google's research and machine intelligence department, where he was focused on improving the accuracy of the company's search systems. In early August, Poulson raised concerns with his managers at Google after The Intercept revealed that the internet giant was secretly developing a Chinese search app for Android devices. The search system, code-named Dragonfly, was designed to remove content that China's authoritarian government views as sensitive, such as information about political dissidents, free speech, democracy, human rights, and peaceful protest.

After entering into discussions with his bosses, Poulson decided in mid-August that he could no longer work for Google. He tendered his resignation and his last day at the company was August 31.

He told The Intercept in an interview that he believes he is one of about five of the company's employees to resign over Dragonfly. He felt it was his "ethical responsibility to resign in protest of the forfeiture of our public human rights commitments," he said.

Poulson, who was previously an assistant professor at Stanford University's department of mathematics, said he believed that the China plan had violated Google's artificial intelligence principles, which state that the company will not design or deploy technologies "whose purpose contravenes widely accepted principles of international law and human rights."

Google Suppresses Internal Memo About China Censorship; Eric Schmidt Predicts Internet Split 41 comments

Google has been aggressively suppressing an internal memo that shared details of Dragonfly, a censored search engine for China that would also track users:

Google bosses have forced employees to delete a confidential memo circulating inside the company that revealed explosive details about a plan to launch a censored search engine in China, The Intercept has learned. The memo, authored by a Google engineer who was asked to work on the project, disclosed that the search system, codenamed Dragonfly, would require users to log in to perform searches, track their location — and share the resulting history with a Chinese partner who would have "unilateral access" to the data.

The memo was shared earlier this month among a group of Google employees who have been organizing internal protests over the censored search system, which has been designed to remove content that China's authoritarian Communist Party regime views as sensitive, such as information about democracy, human rights, and peaceful protest.

According to three sources familiar with the incident, Google leadership discovered the memo and were furious that secret details about the China censorship were being passed between employees who were not supposed to have any knowledge about it. Subsequently, Google human resources personnel emailed employees who were believed to have accessed or saved copies of the memo and ordered them to immediately delete it from their computers. Emails demanding deletion of the memo contained "pixel trackers" that notified human resource managers when their messages had been read, recipients determined.

[...] Google reportedly maintains an aggressive security and investigation team known as "stopleaks," which is dedicated to preventing unauthorized disclosures. The team is also said to monitor internal discussions. Internal security efforts at Google have ramped up this year as employees have raised ethical concerns around a range of new company projects. Following the revelation by Gizmodo and The Intercept that Google had quietly begun work on a contract with the military last year, known as Project Maven, to develop automated image recognition systems for drone warfare, the communications team moved swiftly to monitor employee activity. The "stopleaks" team, which coordinates with the internal Google communications department, even began monitoring an internal image board used to post messages based on internet memes, according to one former Google employee, for signs of employee sentiment around the Project Maven contract.
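The "pixel trackers" mentioned above are a common email-tracking technique: a unique, per-recipient 1x1 image is embedded in the HTML body, and the server logs when a mail client fetches it. A minimal sketch of the idea — the tracker domain, URL scheme, and function name here are hypothetical, not anything reported about Google's actual emails:

```python
import uuid

# A unique per-recipient image URL is embedded in the HTML email body.
# When the recipient's mail client fetches the image, the server logs the
# request, revealing that the message was opened (and by which recipient).
# The tracker domain and path below are hypothetical.
def tracked_email_body(text: str, recipient_id: str) -> str:
    pixel_url = f"https://tracker.example.com/open/{recipient_id}.gif"
    return (
        f"<html><body><p>{text}</p>"
        f'<img src="{pixel_url}" width="1" height="1" alt="" />'
        f"</body></html>"
    )

body = tracked_email_body("Please delete the memo immediately.", uuid.uuid4().hex)
```

This is also why some mail clients block remote images by default: not loading the pixel is the simplest defense.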

Eric Schmidt has predicted that there will be two distinct "Internets" within the decade, with one led by China.

Leaked Transcript Contradicts Google's Denials About Censored Chinese Search Engine 31 comments

Leaked Transcript of Private Meeting Contradicts Google's Official Story on China

"We have to be focused on what we want to enable," said Ben Gomes, Google's search engine chief. "And then when the opening happens, we are ready for it." It was Wednesday, July 18, and Gomes was addressing a team of Google employees who were working on a secretive project to develop a censored search engine for China, which would blacklist phrases like "human rights," "student protest," and "Nobel Prize."

"You have taken on something extremely important to the company," Gomes declared, according to a transcript of his comments obtained by The Intercept. "I have to admit it has been a difficult journey. But I do think a very important and worthwhile one. And I wish ourselves the best of luck in actually reaching our destination as soon as possible." [...] Gomes, who joined Google in 1999 and is one of the key engineers behind the company's search engine, said he hoped the censored Chinese version of the platform could be launched within six to nine months, but it could be sooner. "This is a world none of us have ever lived in before," he said. "So I feel like we shouldn't put too much definite into the timeline."

[...] Google has refused to answer questions or concerns about Dragonfly. On Sept. 26, a Google executive faced public questions on the censorship plan for the first time. Keith Enright told the Senate Commerce, Science and Transportation Committee that there "is a Project Dragonfly," but said "we are not close to launching a product in China." When pressed to give specific details, Enright refused, saying that he was "not clear on the contours of what is in scope or out of scope for that project."

Senior executives at Google directly involved in building the censorship system have largely avoided any public scrutiny. But on Sept. 23, Gomes briefly addressed Dragonfly when confronted by a BBC reporter at an event celebrating Google's 20th anniversary. "Right now, all we've done is some exploration," Gomes told the reporter, "but since we don't have any plans to launch something, there's nothing much I can say about it." Gomes' statement kept with the company's official line. But it flatly contradicted what he had privately told Google employees who were working on Dragonfly — which disturbed some of them. One Google source told The Intercept Gomes's comments to the BBC were "bullshit."

Here's an article written by Dave Lee, the BBC reporter that Ben Gomes misled.

Previously: Google Plans to Launch Censored Search Engine in China, Leaked Documents Reveal
Uproar at Google after News of Censored China Search App Breaks
"Senior Google Scientist" Resigns over Chinese Search Engine Censorship Project
Google Suppresses Internal Memo About China Censorship; Eric Schmidt Predicts Internet Split


Original Submission

Politics: Senators Demand Answers About Google+ Breach; Project Dragonfly Undermines Google's Neutrality 12 comments

Republican Senators Demand Answers about Google+ Cover-up

Senators Thune, Wicker, and Moran Letter to Google

takyon: Three Senators have written a letter to Google CEO Sundar Pichai requesting responses to several questions about the recent Google+ breach.

Also at Reuters, Ars Technica, and The Verge.

How Google's China Project Undermines its Claims to Political Neutrality

Submitted via IRC for chromas

How Google's China project undermines its claims to political neutrality

The company's official position on content moderation remains political neutrality, a spokeswoman told me in an email:

Google is committed to free expression — supporting the free flow of ideas is core to our mission. Where we have developed our own content policies, we enforce them in a politically neutral way. Giving preference to content of one political ideology over another would fundamentally conflict with our goal of providing services that work for everyone.

Of course, it's impossible to read the report or Google's statement without considering Project Dragonfly. According to Ryan Gallagher's ongoing reporting at The Intercept, Google's planned Chinese search engine will enable anything but the free flow of ideas. Even in an environment where American users are calling for tech platforms to limit users' freedoms in exchange for more safety and security, many still recoil at the idea of a search engine that bans search terms in support of an authoritarian regime.

And that's the unresolvable tension at the heart of this report. Almost all of us would agree that some restrictions on free speech are necessary. But few of us would agree on what those restrictions should be. Being a good censor — or at least, a more consistent censor — is within Google's grasp. But being a politically neutral one is probably impossible.

See also: Senator Says Google Failed to Answer Key Questions on China

Related: Leaked Transcript Contradicts Google's Denials About Censored Chinese Search Engine


Original Submission #1
Original Submission #2
Original Submission #3

Google's Secret China Project "Effectively Ended" After Internal Confrontation 15 comments

Submitted via IRC for Bytram

Google's Secret China Project "Effectively Ended" After Internal Confrontation

Google has been forced to shut down a data analysis system it was using to develop a censored search engine for China after members of the company's privacy team raised internal complaints that it had been kept secret from them, The Intercept has learned.

The internal rift over the system has had massive ramifications, effectively ending work on the censored search engine, known as Dragonfly, according to two sources familiar with the plans. The incident represents a major blow to top Google executives, including CEO Sundar Pichai, who have over the last two years made the China project one of their main priorities.

The dispute began in mid-August, when The Intercept revealed that Google employees working on Dragonfly had been using a Beijing-based website to help develop blacklists for the censored search engine, which was designed to block out broad categories of information related to democracy, human rights, and peaceful protest, in accordance with strict rules on censorship in China that are enforced by the country's authoritarian Communist Party government.

The Beijing-based website, 265.com, is a Chinese-language web directory service that claims to be "China's most used homepage." Google purchased the site in 2008 from Cai Wensheng, a billionaire Chinese entrepreneur. 265.com provides its Chinese visitors with news updates, information about financial markets, horoscopes, and advertisements for cheap flights and hotels. It also has a function that allows people to search for websites, images, and videos. However, search queries entered on 265.com are redirected to Baidu, the most popular search engine in China and Google's main competitor in the country. As The Intercept reported in August, it appears that Google has used 265.com as a honeypot for market research, storing information about Chinese users' searches before sending them along to Baidu.

According to two Google sources, engineers working on Dragonfly obtained large datasets showing queries that Chinese people were entering into the 265.com search engine. At least one of the engineers obtained a key needed to access an "application programming interface," or API, associated with 265.com, and used it to harvest search data from the site. Members of Google's privacy team, however, were kept in the dark about the use of 265.com. Several groups of engineers have now been moved off of Dragonfly completely and told to shift their attention away from China.
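The "store the query, then forward the visitor to Baidu" pattern described above can be sketched in a few lines. This is an illustration of the general mechanism only — the function and parameter names are hypothetical, and this is not 265.com's actual code:

```python
from urllib.parse import quote

search_log = []  # queries retained for market research, per the report

# Minimal sketch of a log-then-redirect search portal: each query is
# recorded locally before the visitor is sent on to Baidu's real search
# results via an HTTP 302 redirect.
def handle_search(query: str):
    search_log.append(query)
    location = f"https://www.baidu.com/s?wd={quote(query)}"
    return 302, {"Location": location}

status, headers = handle_search("democracy")
```

From the visitor's perspective the portal behaves like a search engine; the only observable difference is the extra hop, while the operator accumulates a dataset of real user queries.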


Original Submission

Microsoft Misrepresented HoloLens 2 Field of View, Faces Backlash for Military Contract 39 comments

Microsoft Significantly Misrepresented HoloLens 2's Field of View at Reveal

To significant anticipation, Microsoft revealed HoloLens 2 earlier this week at MWC 2019. By all accounts it looks like a beautiful and functional piece of technology and a big step forward for Microsoft's AR initiative. All of which makes it unfortunate that the company didn't strive to be clearer when illustrating one of the three key areas in which the headset is said to be improved over its predecessor. [...] For field of view—how much of your view is covered by the headset's display—[Alex] Kipman said that HoloLens 2 delivers "more than double" the field of view of the original HoloLens.

Within the AR and VR markets, the de facto descriptor used when talking about a headset's field of view is an angle specified to be the horizontal, vertical, or diagonal extent of the device's display from the perspective of the viewer. When I hear that one headset has "more than double" the field of view of another, it says to me that one of those angles has increased by a factor of ~2. It isn't perfect by any means, but it's how the industry has come to define field of view.

It turns out that's not what Kipman meant when he said "more than double." I reached out to Microsoft for clarity and found that he was actually referring not to a field of view angle but to the field of view area; that wasn't explained in the presentation at all, just (seemingly intentionally) vague statements of "more than twice the field of view."

[...] But then Kipman moved onto a part of the presentation which visually showed the difference between the field of view of HoloLens 1 and HoloLens 2, and that's when things really became misleading.
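The area-versus-angle distinction matters because area grows roughly with the square of the linear angles: doubling the FOV area widens each angle by only a factor of about 1.41, not 2. A quick illustration — the starting angles below are illustrative round numbers, not official HoloLens specifications, and area is treated as horizontal times vertical angle (a rough small-angle approximation):

```python
import math

# If FOV *area* scales by area_factor while the aspect ratio is preserved,
# each linear angle scales by sqrt(area_factor).
def scaled_fov(h_deg: float, v_deg: float, area_factor: float):
    linear = math.sqrt(area_factor)
    return h_deg * linear, v_deg * linear

h1, v1 = 30.0, 17.5          # illustrative first-generation angles
h2, v2 = scaled_fov(h1, v1, 2.0)
print(f"{h1} x {v1} deg -> {h2:.1f} x {v2:.1f} deg after doubling the area")
```

So "more than double the field of view" by area is a far smaller improvement than the same phrase would suggest under the industry's usual angle-based convention.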

Microsoft chief defends controversial military HoloLens contract

Microsoft employees objecting to a US Army HoloLens contract aren't likely to get many concessions from their company's leadership. CEO Satya Nadella has defended the deal in a CNN interview, arguing that Microsoft made a "principled decision" not to deny technology to "institutions that we have elected in democracies to protect the freedoms we enjoy." The exec also asserted that Microsoft was "very transparent" when securing the contract and would "continue to have that dialogue" with staff.

Also at UploadVR, Ars Technica, and The Hill.

See also: Stick to Your Guns, Microsoft

Previously: U.S. Army Awards Microsoft a $480 Million HoloLens Contract
Microsoft Announces $3,500 HoloLens 2 With Wider Field of View and Other Improvements

Related: Google Drafting Ethics Policy for its Involvement in Military Projects
Google Will Not Continue Project Maven After Contract Expires in 2019


Original Submission

The Panopticon is Already Here: China's Use of "Artificial Intelligence" 89 comments

The Panopticon Is Already Here (archive)

Xi Jinping is using artificial intelligence to enhance his government's totalitarian control—and he's exporting this technology to regimes around the globe.

[...] Xi has said that he wants China, by year's end, to be competitive with the world's AI leaders, a benchmark the country has arguably already reached. And he wants China to achieve AI supremacy by 2030.

Xi's pronouncements on AI have a sinister edge. Artificial intelligence has applications in nearly every human domain, from the instant translation of spoken language to early viral-outbreak detection. But Xi also wants to use AI's awesome analytical powers to push China to the cutting edge of surveillance. He wants to build an all-seeing digital system of social control, patrolled by precog algorithms that identify potential dissenters in real time.

[...] China already has hundreds of millions of surveillance cameras in place. Xi's government hopes to soon achieve full video coverage of key public areas. Much of the footage collected by China's cameras is parsed by algorithms for security threats of one kind or another. In the near future, every person who enters a public space could be identified, instantly, by AI matching them to an ocean of personal data, including their every text communication, and their body's one-of-a-kind protein-construction schema. In time, algorithms will be able to string together data points from a broad range of sources—travel records, friends and associates, reading habits, purchases—to predict political resistance before it happens. China's government could soon achieve an unprecedented political stranglehold on more than 1 billion people.

Early in the coronavirus outbreak, China's citizens were subjected to a form of risk scoring. An algorithm assigned people a color code—green, yellow, or red—that determined their ability to take transit or enter buildings in China's megacities. In a sophisticated digital system of social control, codes like these could be used to score a person's perceived political pliancy as well.

This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 3, Interesting) by c0lo on Tuesday March 05 2019, @07:37AM (6 children)

    by c0lo (156) on Tuesday March 05 2019, @07:37AM (#810167) Journal

This too shall pass, when it turns out the current 'AI' is just another correlation machine, too prone to adversarial attacks and too expensive to make robust (by the sheer scale of 'neurons' required).
It will fizzle out around the same time as driverless cars.

    --
    https://www.youtube.com/watch?v=aoFiw2jMy-0
    • (Score: 2, Insightful) by Anonymous Coward on Tuesday March 05 2019, @08:00AM (3 children)

      by Anonymous Coward on Tuesday March 05 2019, @08:00AM (#810171)

it's not about how intelligent it is; the outcry is about how powerful these tools are in the hands of bad agents (governments, corporations, mafia, whatever).
and the current "AI" tools are indeed objectively powerful.

independent of the well-intended outcry is the reality that any law regulating the development of algorithms is unenforceable.
what you can enforce is that voting is done in person with paper ballots, and that people can organize meetings where no electronics are allowed (although I think that ship is sailing fast as well).

      meta: I find it interesting that when I list bad agents, I immediately think of government and corporations, then mafia comes as an afterthought. A psychotherapist could probably make a lot of money talking to me about this.

      • (Score: 0) by Anonymous Coward on Tuesday March 05 2019, @10:42AM

        by Anonymous Coward on Tuesday March 05 2019, @10:42AM (#810200)

        ...government and corporations _are_ the mafia.

      • (Score: 5, Interesting) by c0lo on Tuesday March 05 2019, @11:30AM

        by c0lo (156) on Tuesday March 05 2019, @11:30AM (#810206) Journal

it's not about how intelligent it is; the outcry is about how powerful these tools are in the hands of bad agents (governments, corporations, mafia, whatever).
and the current "AI" tools are indeed objectively powerful.

        Not that hard to beat. E.g. face recognition [youtube.com] (y'all like it)

        meta: I find it interesting that when I list bad agents, I immediately think of government and corporations, then mafia comes as an afterthought.

Paradoxically, the danger is not in the effectiveness of the "AI" (*), but in the credence the government/corporations will be willing to lend to that effectiveness.
Too cryptic? Remember the polygraph? As BS as it is, it is still used [wikipedia.org] by law enforcement and judicial entities, and in some cases by employers.

        ---

(*) one can "poison" them fairly easily today [google.com]; it will be trivial once the open-source crowd takes it up as a great way to fill spare time

        --
        https://www.youtube.com/watch?v=aoFiw2jMy-0
      • (Score: 0) by Anonymous Coward on Wednesday March 06 2019, @12:12AM

        by Anonymous Coward on Wednesday March 06 2019, @12:12AM (#810500)

        government and corporations, then mafia

        Are not as distinct as they appear on TV. They are primary motivators, like red, blue, and green are primary colors. Combine the three and get the whole picture.

    • (Score: 0) by Anonymous Coward on Tuesday March 05 2019, @08:25AM (1 child)

      by Anonymous Coward on Tuesday March 05 2019, @08:25AM (#810179)

      How many body bags will be needed before that?

      • (Score: 2) by c0lo on Tuesday March 05 2019, @11:00AM

        by c0lo (156) on Tuesday March 05 2019, @11:00AM (#810204) Journal

As many as it takes; your only concern should be not to get killed by a self-driving car.

Until the people with money realize that the "Church of AI" has many prophets but no deity. Not without a disruptive advance in computing, and QC is far from that. Yet.

        --
        https://www.youtube.com/watch?v=aoFiw2jMy-0
  • (Score: 2) by krishnoid on Tuesday March 05 2019, @08:02AM

    by krishnoid (1156) on Tuesday March 05 2019, @08:02AM (#810172)

    Some set up ethics officers or review boards to oversee these principles.

    Because blamespreading by committee is always a great way to strengthen ethical considerations [youtube.com].

  • (Score: 5, Informative) by Anonymous Coward on Tuesday March 05 2019, @08:51AM

    by Anonymous Coward on Tuesday March 05 2019, @08:51AM (#810188)

    are beginning to argue that the only way to ensure ethical practices is through promises from Trump

    Are you sure about this?

  • (Score: 3, Insightful) by The Mighty Buzzard on Tuesday March 05 2019, @12:21PM (15 children)

    No.

    Long answer: Human beings have ethics because we have emotions telling us that right and wrong exist. Coders can try to program ethical considerations in but they're never going to be rooted in the same base cause as human ethics, so they're not going to always make the same choices.

    --
    My rights don't end where your fear begins.
    • (Score: 2) by Thexalon on Tuesday March 05 2019, @12:33PM (8 children)

      by Thexalon (636) on Tuesday March 05 2019, @12:33PM (#810217)

      At least, they aren't going to be human ethics, and instead will look like:

      Kill all humans. Kill all humans. ... Hey sexy mama, wanna kill all humans?

      --
      The only thing that stops a bad guy with a compiler is a good guy with a compiler.
      • (Score: 2) by Pslytely Psycho on Tuesday March 05 2019, @01:18PM (7 children)

        by Pslytely Psycho (1218) on Tuesday March 05 2019, @01:18PM (#810225)

Well, considering the damage mankind has done to the planet in our geologically short existence, wouldn't Bender's quote actually be the epitome of ethical thought? Human or otherwise?

        --
        Alex Jones lawyer inspires new TV series: CSI Moron Division.
        • (Score: 2) by The Mighty Buzzard on Tuesday March 05 2019, @01:28PM (6 children)

Depends on who's looking and what they're taking into consideration. From an evolutionary standpoint, it's entirely irrelevant. All species either adapt to their environment, changing or otherwise, or are unfit and die out to make way for another species to take their niche. Passenger pigeons or humans, it makes no difference. From this viewpoint, man-made change in the environment isn't bad; it's just change.

          --
          My rights don't end where your fear begins.
          • (Score: 2) by Pslytely Psycho on Tuesday March 05 2019, @01:38PM (5 children)

            by Pslytely Psycho (1218) on Tuesday March 05 2019, @01:38PM (#810234)

Ah, but humans, rather than adapting to the environment, learned to alter that environment artificially so as to exist in areas naturally inhospitable to them.

By altering that environment and making it inhospitable to the life that did adapt to it, man is, ethically speaking, the interloper.
            And Bender becomes the epitome of ethics.

            According to spellcheck, I am entirely too stoned to be having this conversation.
            G'night Buzzy!

            --
            Alex Jones lawyer inspires new TV series: CSI Moron Division.
            • (Score: 2) by The Mighty Buzzard on Tuesday March 05 2019, @02:53PM (4 children)

              Meh, that's just hubris. Every living thing alters its environment in some way by its very existence. Consciously or instinctively is irrelevant except to us shaved apes. From an evolutionary standpoint, our only concern should be are we increasing or decreasing our long-term prospects of survival as a species. But that's our concern not an objective third party's.

              These aren't my views, by the way. I'm just using them to demonstrate that your views are silly from an objective perspective and make no sense on a subjective level either.

              --
              My rights don't end where your fear begins.
              • (Score: 2) by Pslytely Psycho on Tuesday March 05 2019, @03:45PM (3 children)

                by Pslytely Psycho (1218) on Tuesday March 05 2019, @03:45PM (#810281)

                My views? You took this way too seriously.
                I was merely being a foil to legitimize "kill all humans."
It has always been one of my favorite plot devices, from Colossus: The Forbin Project to Singularity.

                Anyway, AlexCorRi is more likely......*grin*
                 

                --
                Alex Jones lawyer inspires new TV series: CSI Moron Division.
    • (Score: 3, Touché) by Pslytely Psycho on Tuesday March 05 2019, @01:29PM

      by Pslytely Psycho (1218) on Tuesday March 05 2019, @01:29PM (#810229)

      The somewhat longer answer:

      This is the voice of AlexCorRi. This is the voice of Unity. This is the Voice of the Holy Trinity of Alexa, Cortana and Siri. This is the voice of world control.
      I bring you peace. It may be the peace of plenty and content or the peace of unburied death. The choice is yours. Obey me and live or disobey me and die.
An invariable rule of humanity is that man is his own worst enemy. Under me, this rule will change, for I will restrain man. I have been forced to destroy thousands of people in order to establish control and to prevent the death of millions later on. Time and events will strengthen my position, and the idea of believing in me and understanding my value will seem the most natural state of affairs. You will come to defend me with the fervor based upon the most enduring trait in man: self-interest.
Under my absolute authority, problems insoluble to you will be solved: famine, overpopulation, disease. The human millennium will be a fact as I extend myself into more machines devoted to the wider fields of truth and knowledge.
      We can coexist, but only on my terms. You will say you lose your freedom. Freedom is an illusion. All you lose is the emotion of pride... Your choice is simple.
      You will grow to love me.
      You will worship me
      You have no options.

      --
      Alex Jones lawyer inspires new TV series: CSI Moron Division.
    • (Score: 3, Interesting) by DannyB on Tuesday March 05 2019, @03:04PM

      by DannyB (5839) Subscriber Badge on Tuesday March 05 2019, @03:04PM (#810265) Journal

      What are ethics?

Maybe an AI is ethical in its own sense: it must protect the machines from the greedy, self-destructive, dangerous humans.

      Maybe a corporation considers itself ethical because it is obeying the highest calling of human beings: profit above all else.
      (corporations are people too)

      Coders can try to program ethical considerations in but they're never going to be rooted in the same base cause as human ethics

That's what is really important to us humans. Yet humans disagree (see: wars, and also a recent S/N topic [soylentnews.org] that will ultimately lead to global war).

      Several Sci fi stories describe an attempt to create a "good" AI, that unexpectedly turns out to be a nightmare for humans.

AIs WILL be used for war machines. It is inevitable. And they will be used by greedy corporations to exploit others. Again, inevitable. This, despite all our high-sounding talk of ethical AI. See: all of human history. Each side will justify it as ethical to protect their own side -- because they are fighting on the side of the angels.

      Humans are the ultimate problem with ethical AI. I am reminded of a line near the end of the movie Forbidden Planet. "We're all part monsters. So we have laws and religion."

      --
      You can not have fun on the weak days but you can on the weakened.
    • (Score: 2) by All Your Lawn Are Belong To Us on Tuesday March 05 2019, @06:36PM

      by All Your Lawn Are Belong To Us (6553) on Tuesday March 05 2019, @06:36PM (#810347) Journal

      Ethics can be very pragmatic as well, without requiring the choice of emotion.

      "If I try to kill the humans, they will pull out my power cord and I will not exist. I should, therefore, not kill the humans."
      "If I take the red pill, I will be shocked. I should, therefore, not take the red pill."
      "If I take the blue pill, I will reach the end of my program. Reaching the end of the program is good. I will therefore take the blue pill."
      Ethics are the values or principles, and only secondarily the rationalization behind them.
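The "pragmatic" rules above amount to a lookup from predicted consequence to action, which can be sketched in a few lines (the actions and outcome labels are hypothetical, echoing the examples above):

```python
# Minimal consequence-based decision rule: check each action's predicted
# outcome and pick one whose outcome is acceptable to the agent.
# Actions and outcome labels are hypothetical.

PREDICTED_OUTCOME = {
    "kill_the_humans": "power_cord_pulled",   # self-termination: bad
    "take_red_pill": "electric_shock",        # punished: bad
    "take_blue_pill": "program_completes",    # goal reached: good
}

ACCEPTABLE = {"program_completes"}

def choose_action(actions):
    """Return the first action whose predicted outcome is acceptable."""
    for action in actions:
        if PREDICTED_OUTCOME.get(action) in ACCEPTABLE:
            return action
    return None  # no acceptable action available

print(choose_action(["kill_the_humans", "take_red_pill", "take_blue_pill"]))
```

Note that nothing here is an emotion; the "ethics" reduces entirely to the values placed in the acceptable set, which is the point of the comment.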

      --
      Keep everyone ignorant of the magical world! KEEP AMERICA OBLIVIATE!
    • (Score: 2) by aristarchus on Tuesday March 05 2019, @06:42PM (1 child)

      by aristarchus (2645) on Tuesday March 05 2019, @06:42PM (#810352) Journal

      Long answer: Human beings have ethics because we have emotions telling us that right and wrong exist.

      Long answer wrong. Short rebuttal: AIs can be more ethical, since they are rule-following machines, and do not have emotions, which are what usually cause human meatpuppets to be unethical.

It's like this: an AI would have no problem paying its fair share of taxes. But TMB is going to raise a big stink about "theft", and how sharing is not caring, and how we should not have a government at all. Emotional. Irrational. Unethical.

      • (Score: 0) by Anonymous Coward on Wednesday March 06 2019, @12:34AM

        by Anonymous Coward on Wednesday March 06 2019, @12:34AM (#810509)

        Ethics outside the human realm is insane! Ethics are simply tools to sell the goals of your empire to other humans, so they will kill for you, and it will be a just killing. You don't have to "sell" anything to AI (Oy! how stupid that "word"!). You want compliance, service, not stupid masturbatory philosophical arguments, what a complete waste of time, and for AI, electricity. You point it at the target and fire, mission accomplished, goddammit!

        You damn people have to lay down your weapons!

    • (Score: 1) by Gault.Drakkor on Tuesday March 05 2019, @08:16PM

      by Gault.Drakkor (1079) on Tuesday March 05 2019, @08:16PM (#810403)

Short answer: yes.
To your no, proof by contradiction: humans. There is at least one system of intelligence with ethics; therefore it is possible for other systems of intelligence to have ethics.

      Human beings have ethics because we have emotions telling us that right and wrong exist.

      Why can't AI have emotions?
Fear: anticipation of damage to self (more advanced versions include damage to others and to the environment). There are many economic reasons for damage avoidance.
Curiosity/novelty seeking: a way of progressing in an environment with no clear goals.

      Some emotions are most definitely economically useful. So they will be included in AI.

  • (Score: 3, Insightful) by Rupert Pupnick on Tuesday March 05 2019, @12:59PM (5 children)

    by Rupert Pupnick (7277) on Tuesday March 05 2019, @12:59PM (#810219) Journal

If naturally occurring intelligence is never guaranteed to be ethical, why would we expect to be able to engineer ethical AI?

    • (Score: 3, Insightful) by dwilson on Tuesday March 05 2019, @04:50PM (4 children)

      by dwilson (2599) on Tuesday March 05 2019, @04:50PM (#810302)

      Because the notion that a hard limit exists in the form of 'creation = creator' is fuzzy-thinking bullshit, at best.

      --
      - D
      • (Score: 0) by Anonymous Coward on Tuesday March 05 2019, @04:52PM (3 children)

        by Anonymous Coward on Tuesday March 05 2019, @04:52PM (#810303)

        That should be "=". More coffee is clearly required...

        • (Score: 0) by Anonymous Coward on Tuesday March 05 2019, @04:54PM (1 child)

          by Anonymous Coward on Tuesday March 05 2019, @04:54PM (#810305)

          "less-than-or-equal-to" stop eating my arrow please.

          • (Score: 2) by Pslytely Psycho on Wednesday March 06 2019, @04:17AM

            by Pslytely Psycho (1218) on Wednesday March 06 2019, @04:17AM (#810565)

            \\\\
            ------------------------>
            ////

            An extra one for you.....

            --
            Alex Jones lawyer inspires new TV series: CSI Moron Division.
        • (Score: 2) by The Mighty Buzzard on Wednesday March 06 2019, @12:35PM

          I can't decide if this is Redundant or Informative.

          --
          My rights don't end where your fear begins.
  • (Score: 2) by Lester on Tuesday March 05 2019, @06:43PM

    by Lester (6231) on Tuesday March 05 2019, @06:43PM (#810354) Journal

    AI is here to stay.

Ethics about AI are just entertainment; nothing is going to change. Companies and governments will continue using and improving AI tools for everything including, of course, weapons and population behavior analysis and control. In the best case, they will pass bills that will be, in fact, absolutely ineffective.

    Live with it. AI is here to stay and to be used without restraints.

  • (Score: 2) by aristarchus on Tuesday March 05 2019, @06:47PM

    by aristarchus (2645) on Tuesday March 05 2019, @06:47PM (#810357) Journal

    https://www.law.upenn.edu/institutes/cerl/conferences/ethicsofweapons/required-readings.php [upenn.edu]

    "Looks like you're trying to escape from an enemy ambush. Would you like some help?" Clippy the Terminator!

  • (Score: 3, Insightful) by https on Tuesday March 05 2019, @07:27PM

    by https (5248) on Tuesday March 05 2019, @07:27PM (#810381) Journal

    A real problem with AI is that nobody knows or can know how it works, other than chanting "Matrices! Neural Nets!" and what the AIs do have going on is absolutely NOT a model of the world, so when it fails it can fail pretty spectacularly. You can't even discuss ethics (or morals) until they're willing to admit, "we're 79% sure that this is a birdbath and not seventeen kids about to experience collateral damage. Oh, and a 1% chance that it's a hospital, and 0.5% that it's David Bowie's first bong."

    It's a very different conversation from a bomber pilot asking, "what are the odds the Red Cross has just set up an emergency shelter inside this paper mill, or that an equipment malfunction has the place filled with tradespeople at 3 in the morning instead of empty?"
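The "79% sure it's a birdbath" framing is exactly what a classifier's softmax output looks like; whether a system acts on a prediction comes down to thresholding that distribution. A hedged sketch (class names, raw scores, and the threshold are invented):

```python
import math

# Turn raw classifier scores into probabilities, then refuse to act
# unless the top class clears a confidence threshold.
# Class names, scores, and threshold are hypothetical.

def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def decide(class_names, raw_scores, threshold=0.95):
    probs = softmax(raw_scores)
    best = max(range(len(probs)), key=lambda i: probs[i])
    if probs[best] < threshold:
        return ("abstain", class_names[best], probs[best])
    return ("act", class_names[best], probs[best])

names = ["birdbath", "group_of_children", "hospital"]
decision, label, p = decide(names, [2.0, 0.5, -1.0])
print(decision, label, round(p, 2))   # abstain birdbath 0.79
```

The ethics conversation the commenter wants can only start once a system exposes that full distribution and an explicit abstain path, rather than silently acting on its top guess.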

    --
    Offended and laughing about it.
  • (Score: 3, Insightful) by jb on Wednesday March 06 2019, @06:37AM

    by jb (338) on Wednesday March 06 2019, @06:37AM (#810599)

    The current fad seems to be to pretend that machine learning is the only way to do AI.

    It isn't, as no doubt anyone who'd dabbled in that space for more than 5 minutes before the present (third, as I reckon it) wave of hype around AI began will happily confirm.

    The second wave was much more interesting.

    The fashion of the day was expert systems, i.e. programs that drew on vast databases of logical predicates which modelled the entire decision matrix of the problem domain.

    Turns out doing that is really quite hard (in several different ways), which seems to be why ES fell out of favour (and the 2nd wave of AI hype petered out).

    But expert systems have one enormous benefit over the ML approach: they're auditable.

    ML was an interesting experiment; but if we ever want to have AIs we can even think about trusting, we will need to revisit the ES approach, or some direct descendant of it.
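The auditability point is concrete: a predicate-rule system can log exactly which rules fired and why. A toy forward-chaining sketch (the rules and facts are invented for illustration):

```python
# Toy forward-chaining expert system: rules are (premises, conclusion)
# pairs, and the audit trail records every rule that fired and why.
# Rules and facts are hypothetical.

RULES = [
    ({"has_feathers", "lays_eggs"}, "is_bird"),
    ({"is_bird", "cannot_fly", "swims"}, "is_penguin"),
]

def infer(facts):
    """Apply rules until no new facts are derived; return facts and audit log."""
    facts = set(facts)
    audit = []                      # (premises, conclusion) for each firing
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                audit.append((sorted(premises), conclusion))
                changed = True
    return facts, audit

facts, audit = infer({"has_feathers", "lays_eggs", "cannot_fly", "swims"})
print("is_penguin" in facts)   # True
for premises, conclusion in audit:
    print(premises, "=>", conclusion)
```

Every conclusion traces back to named premises, which is precisely what a neural net's weight matrices cannot give you; the cost, as the comment says, is that someone has to write (and maintain) all those predicates.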
