
posted by hubie on Friday May 19, @01:22AM
from the Mr.-President-we-must-not-allow-an-AI-gap! dept.

Microsoft cofounder Bill Gates says he's "scared" about artificial intelligence falling into the wrong hands, but unlike some fellow experts who have called for a pause on advanced A.I. development, he argues that the technology may already be on a runaway train:

The latest advancements in A.I. are revolutionary, Gates said in an interview with ABC published Monday, but the technology comes with many uncertainties. U.S. regulators are failing to stay up to speed, he said. With research into human-level artificial intelligence advancing fast, over 1,000 technologists and computer scientists, including Twitter and Tesla CEO Elon Musk, signed an open letter in March calling for a six-month pause on advanced A.I. development until "robust A.I. governance systems" are in place.

But for Gates, A.I. isn't the type of technology you can just hit the pause button on.

"If you just pause the good guys and you don't pause everyone else, you're probably hurting yourself," he told ABC, adding that it is critical for the "good guys" to develop more powerful A.I. systems.

[...] "We're all scared that a bad guy could grab it. Let's say the bad guys get ahead of the good guys, then something like cyber attacks could be driven by an A.I.," Gates said.

The competitive nature of A.I. development means that a moratorium on new research is unlikely to succeed, he argued.

Originally spotted on The Eponymous Pickle.

Previously: Fearing "Loss of Control," AI Critics Call for 6-Month Pause in AI Development

Related: AI Weapons Among Non-State Actors May be Impossible to Stop


Original Submission

  • (Score: 2) by gnuman on Friday May 19, @06:36PM (3 children)

    by gnuman (5013) on Friday May 19, @06:36PM (#1307046)

    The risk on Musk's mind is that Tesla isn't making any real progress in self-driving cars....

    Which probably should give people pause about the real capabilities of AI as we currently know it: it can barely drive and still needs constant supervision from the driver, yet after years there has been so little progress.

  • (Score: 3, Insightful) by RamiK on Friday May 19, @09:11PM (2 children)

    by RamiK (1813) on Friday May 19, @09:11PM (#1307065)

    Self-driving is an automotive safety regulations issue rather than anything specific to neural net based automation.

    That said, any software with potential for algorithmic discriminatory biases should be held to a "guilty until proven innocent" standard. That is, anyone processing records of people that include racial or economic information should be subject to regular and on-demand (following complaints) reviews on nothing more than loose suspicion, and public contractors should be required to open-source their products and open their data sets, with no allowances for trade secrets. Anything less will just be gatekeeping.

    --
    compiling...
    • (Score: 1) by khallow on Saturday May 20, @03:05AM (1 child)

      by khallow (3766) Subscriber Badge on Saturday May 20, @03:05AM (#1307095) Journal
      Self driving? It's good for it, just those pesky regulations in the way. But algorithmic discriminatory biases? Whoa Nelly, we need to review that carefully.

      My take is that people will care more about algorithmic discrimination that runs over children on the road.
      • (Score: 2) by RamiK on Saturday May 20, @09:32AM

        by RamiK (1813) on Saturday May 20, @09:32AM (#1307117)

        Lawyers and elected officials have a firmer understanding of gerrymandering-like discrimination-obfuscation issues than the average Joe. E.g., many states contract with specific proprietary software vendors to automate and obfuscate discriminatory sentencing and police surveillance that would otherwise get them into trouble. So, as the current perpetrators, they would likely rather ban their weapons of choice than have them pointed at themselves.

        Anyhow, this isn't going to get fixed in a day or two. To quote a headline, there will need to be "meaningful harm" before things get fixed, and laws will get written and rewritten over the years before the outcome I described comes to pass. Still, as long as the systems are impossible to analyze, it's only a matter of time before they're regulated in the manner I've described in most places, since it will become harder and harder to authorize spending on systems that can't prove results or offer QA without going into the details.

        --
        compiling...