Alphabet CEO Sundar Pichai says there is 'no question' that AI needs to be regulated
Google and Alphabet CEO Sundar Pichai has called for new regulations in the world of AI, highlighting the dangers posed by technology like facial recognition and deepfakes, while stressing that any legislation must balance "potential harms ... with social opportunities."
"[T]here is no question in my mind that artificial intelligence needs to be regulated. It is too important not to," writes Pichai in an editorial for The Financial Times. "The only question is how to approach it."
Although Pichai says new regulation is needed, he advocates a cautious approach that might not see many significant controls placed on AI. He notes that for some products like self-driving cars, "appropriate new rules" should be introduced. But in other areas, like healthcare, existing frameworks can be extended to cover AI-assisted products.
Also at The Associated Press.
(Score: 2) by krishnoid on Monday January 20 2020, @11:11PM (2 children)
Maybe they could go with a religious decree instead, "Thou shalt not make a machine in the likeness of a human mind." [fandom.com]
On another note, what defines AI [wikipedia.org] from a legal perspective, and distinguishes it from plain old computing?
(Score: 3, Informative) by TheRaven on Tuesday January 21 2020, @09:13AM (1 child)
Aside from the trolling link, I came to ask a similar question. Currently, the terms Artificial Intelligence and Machine Learning are largely used to differentiate data-driven (inference-based) systems from rule-based systems. A lot of the most interesting AI research has led to the creation of rule-based systems that can efficiently do something that the inference-based systems were doing less efficiently (using the inference-based systems to discover what the rules should be). Are these considered AI from the perspective of regulation? Or do we just count 'deep' systems (i.e. ones that have a bunch of stuff in the middle that no one really understands)?
Do rule-based systems where the outcomes are the result of emergent properties, rather than directly from the encoded rules (such as a number of network routing mechanisms) count as AI systems? The problems that they're describing are to do with the applications of technology, not to do with the way that the technology is implemented. If someone comes up with a rule-based system for creating realistic video fakes, why would this be exempt from regulation? If someone creates realistic fake videos by painstakingly editing each frame by hand, is that exempt (possibly - often being able to do X at scale is considered significantly worse than being able to do X)?
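To make the blurriness concrete, here's a toy sketch (all names and thresholds are made up for illustration, not from any real system): one detector with a hand-coded rule, and one that "learns" the same cutoff from labelled examples. If the two end up behaving identically, a legal definition keyed to how the rule was produced is hard to pin down.

```python
# Hypothetical sketch: the same decision implemented two ways.
# The 0.8 cutoff and the sample data are illustrative only.

def rule_based_flag(score: float) -> bool:
    # Hand-coded rule: an engineer chose the 0.8 cutoff directly.
    return score > 0.8

def learn_threshold(samples: list[tuple[float, bool]]) -> float:
    # "Inference-based": derive the cutoff from labelled data instead.
    flagged = [x for x, label in samples if label]
    clean = [x for x, label in samples if not label]
    # Midpoint between the highest clean value and the lowest flagged value.
    return (max(clean) + min(flagged)) / 2

data = [(0.2, False), (0.5, False), (0.7, False), (0.9, True), (0.95, True)]
threshold = learn_threshold(data)  # lands at ~0.8 on this data

def learned_flag(score: float) -> bool:
    return score > threshold

# Both systems make identical decisions on every sample, so from the
# outside nothing distinguishes the "AI" one from the hand-coded one.
assert all(rule_based_flag(x) == learned_flag(x) for x, _ in data)
```

Any regulation that hinges on the implementation rather than the application would treat these two differently despite them being behaviourally indistinguishable.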
sudo mod me up
(Score: 2) by krishnoid on Tuesday January 21 2020, @08:02PM
I was tangentially referring to how, while we're sort of participating in the computing/AI revolution, the rest of the world is pretty much just having it brushed onto their lives like paint, without much choice. Who knows, when farmers talk about AI in a few years, there may start to be confusion in those discussions.