SoylentNews is people


Google's five rules for AI safety

Accepted submission by exec at 2016-06-25 00:20:12
News

Story automatically generated by StoryBot Version 0.0.1f (Development).

Note: This is the complete story and will need further editing. It may also be covered
by copyright, and thus should be acknowledged and quoted rather than reprinted in its entirety.

FeedSource: [CNET] collected from rss-bot logs

Time: 2016-06-22 08:25:24 UTC

Original URL: http://www.cnet.com/news/google-goes-asimov-and-spells-out-concrete-ai-safety-concerns/#ftag=CAD590a51e [cnet.com]

Title: Google's five rules for AI safety - CNET

Suggested Topics by Probability (Experimental) : 37.5 science 25.0 software 12.5 hardware 12.5 careers 12.5 OS

--- --- --- --- --- --- --- Entire Story Below --- --- --- --- --- --- ---

In a blog post on Tuesday [googleblog.com], Chris Olah of Google Research spelled out five big questions about how to develop smarter, safer artificial intelligence.

The post accompanied a research paper Google released in collaboration with OpenAI, Stanford and Berkeley, called Concrete Problems in AI Safety [arxiv.org]. It attempts to move beyond abstract or hypothetical concerns about developing and using AI by giving researchers specific questions to apply in real-world testing.

"These are all forward thinking, long-term research questions -- minor issues today, but important to address for future systems," said Olah in the blog post.

The five points, as laid out in the paper, are:

- Avoiding negative side effects: ensuring a system doesn't disturb its environment in harmful ways while pursuing its goal.
- Avoiding reward hacking: preventing a system from gaming its reward function rather than genuinely completing its task.
- Scalable oversight: enabling a system to behave well even when human feedback is expensive or infrequent.
- Safe exploration: letting a system experiment without making catastrophic or irreversible mistakes.
- Robustness to distributional shift: ensuring a system behaves sensibly in environments that differ from the one it was trained in.

Google has made no secret of its commitment to AI and machine learning, and even has a dedicated research branch, Google DeepMind. Earlier this year, DeepMind's learning system AlphaGo challenged (and defeated) [cnet.com] one of the world's premier (human) players of the ancient strategy game Go, in what many considered one of the hardest tests for AI.

-- submitted from IRC


Original Submission