Microsoft cofounder Bill Gates says he's "scared" about artificial intelligence falling into the wrong hands, but unlike some fellow experts who have called for a pause on advanced A.I. development, he argues that the technology may already be on a runaway train:
The latest advancements in A.I. are revolutionary, Gates said in an interview with ABC published Monday, but the technology comes with many uncertainties, and U.S. regulators are failing to keep pace. With research into human-level artificial intelligence advancing fast, more than 1,000 technologists and computer scientists, including Twitter and Tesla CEO Elon Musk, signed an open letter in March calling for a six-month pause on advanced A.I. development until "robust A.I. governance systems" are in place.
But for Gates, A.I. isn't the type of technology you can just hit the pause button on.
"If you just pause the good guys and you don't pause everyone else, you're probably hurting yourself," he told ABC, adding that it is critical for the "good guys" to develop more powerful A.I. systems.
[...] "We're all scared that a bad guy could grab it. Let's say the bad guys get ahead of the good guys, then something like cyber attacks could be driven by an A.I.," Gates said.
The competitive nature of A.I. development means that a moratorium on new research is unlikely to succeed, he argued.
Originally spotted on The Eponymous Pickle.
Previously: Fearing "Loss of Control," AI Critics Call for 6-Month Pause in AI Development
Related: AI Weapons Among Non-State Actors May be Impossible to Stop
(Score: 2) by RamiK on Saturday May 20, @09:32AM
Lawyers and elected officials have a firmer understanding of gerrymandering-like discrimination-obfuscation issues than the average Joe. For example, many states contract specific proprietary software vendors to automate and obfuscate discriminatory sentencing and police surveillance that would otherwise get them into trouble. So, as the current perpetrators, they would likely rather ban their weapons of choice than have them pointed back at themselves.
Anyhow, this isn't going to get fixed in a day or two. To quote a headline, there will need to be "meaningful harm" before things get fixed, and laws will be written and rewritten over the years before the outcome I described comes to pass. Still, as long as the systems are impossible to analyze, it's only a matter of time before they're regulated in the manner I've described in most places, since it will become harder and harder to authorize spending on systems that can't prove results or offer QA without going into the details.