posted by hubie on Friday May 19 2023, @01:22AM
from the Mr.-President-we-must-not-allow-an-AI-gap! dept.

Microsoft cofounder Bill Gates says he's "scared" about artificial intelligence falling into the wrong hands, but unlike some fellow experts who have called for a pause on advanced A.I. development, he argues that the technology may already be on a runaway train:

The latest advancements in A.I. are revolutionary, Gates said in an interview with ABC published Monday, but the technology comes with many uncertainties. U.S. regulators are failing to stay up to speed, he said, and with research into human-level artificial intelligence advancing fast, over 1,000 technologists and computer scientists, including Twitter and Tesla CEO Elon Musk, signed an open letter in March calling for a six-month pause on advanced A.I. development until "robust A.I. governance systems" are in place.

But for Gates, A.I. isn't the type of technology you can just hit the pause button on.

"If you just pause the good guys and you don't pause everyone else, you're probably hurting yourself," he told ABC, adding that it is critical for the "good guys" to develop more powerful A.I. systems.

[...] "We're all scared that a bad guy could grab it. Let's say the bad guys get ahead of the good guys, then something like cyber attacks could be driven by an A.I.," Gates said.

The competitive nature of A.I. development means that a moratorium on new research is unlikely to succeed, he argued.

Originally spotted on The Eponymous Pickle.

Previously: Fearing "Loss of Control," AI Critics Call for 6-Month Pause in AI Development

Related: AI Weapons Among Non-State Actors May be Impossible to Stop


Original Submission

Related Stories

Fearing “Loss of Control,” AI Critics Call for 6-Month Pause in AI Development 40 comments

https://arstechnica.com/information-technology/2023/03/fearing-loss-of-control-ai-critics-call-for-6-month-pause-in-ai-development/

On Wednesday, the Future of Life Institute published an open letter on its website calling on AI labs to "immediately pause for at least 6 months the training of AI systems more powerful than GPT-4." Signed by Elon Musk and several prominent AI researchers, the letter quickly began to draw attention in the press—and some criticism on social media.

Earlier this month, OpenAI released GPT-4, an AI model that can perform compositional tasks and allegedly pass standardized tests at a human level, although those claims are still being evaluated by researchers. Regardless, the advancement in capabilities of GPT-4 and Bing Chat over previous AI models spooked some experts who believe we are heading toward super-intelligent AI systems faster than previously expected.

See Also: FTC Should Stop OpenAI From Launching New GPT Models, Says AI Policy Group

Related:
OpenAI Is Now Everything It Promised Not to Be: Corporate, Closed-Source, and For-Profit (March 2023)
OpenAI's New ChatGPT Bot: 10 "Dangerous" Things it's Capable of (Dec. 2022)
Elon Musk Says There Needs to be Universal Basic Income (Aug. 2021)
Tesla Unveils Chip to Train A.I. Models Inside its Data Centers (Aug. 2021)
Elon Musk Reveals Plans to Unleash a Humanoid Tesla Bot (Aug. 2021)
Tesla Unveils its New Supercomputer (5th Most Powerful in the World) to Train Self-Driving AI (June 2021)
OpenAI Has Released the Largest Version Yet of its Fake-News-Spewing AI (Sept. 2019)
There's Still Time To Prevent Biased AI From Taking Over The World (May 2019)
The New Prometheus: Google CEO Says AI is More Profound than Electricity or Fire (Feb. 2018)
OpenAI Bot Bursts Into the Ring, Humiliates Top Dota 2 Pro Gamer in 'Scary' One-on-One Bout (Aug. 2017)
Elon Musk: Mark Zuckerberg's Understanding of AI is "Limited" (July 2017)
AI Software Learns to Make AI Software (Jan. 2017)
Elon Musk, Stephen Hawking Win Luddite Award as AI "Alarmists" (Jan. 2016)
Elon Musk and Friends Launch OpenAI (Dec. 2015)
Musk, Wozniak and Hawking Warn Over AI Warfare and Autonomous Weapons (July 2015)
More Warnings of an AI Doomsday — This Time From Stephen Hawking (Dec. 2014)


Original Submission

AI Weapons Among Non-State Actors May be Impossible to Stop 21 comments

Governments also have no theory on how nefarious groups might behave using the tech:

The proliferation of AI in weapon systems among non-state actors such as terrorist groups or mercenaries would be virtually impossible to stop, according to a hearing before UK Parliament.

The House of Lords' AI in Weapon Systems Committee yesterday heard how the software nature of AI models that may be used in a military context made them difficult to contain and keep out of nefarious hands.

When we talk about non-state actors, that conjures images of violent extremist organizations, but it should include large multinational corporations, which are very much at the forefront of developing this technology.

Speaking to the committee, James Black, assistant director of defense and security research group RAND Europe, said: "A lot of stuff is very much going to be difficult to control from a non-proliferation perspective, due to its inherent software-based nature. A lot of our export controls and non-proliferation regimes that exist are very much focused on old-school traditional hardware: it's missiles, it's engines, it's nuclear materials."

An added uncertainty was that there was no established "war game" theory of how hostile non-state actors might behave using AI-based weapons.

  • (Score: 4, Insightful) by optotronic on Friday May 19 2023, @01:55AM (7 children)

    by optotronic (4285) on Friday May 19 2023, @01:55AM (#1306942)

    I don't like to agree with Bill Gates, but he has a point. The risk of a sentient or malevolent AI is probably less than the risk of an adversary gaining a threatening advantage.

    Famous last words...

    • (Score: 3, Interesting) by RamiK on Friday May 19 2023, @08:49AM (4 children)

      by RamiK (1813) on Friday May 19 2023, @08:49AM (#1306973)

      The risk on Musk's mind is the fact that Tesla isn't making any real progress in self-driving cars while its competitors have already entered commercial trials for public transport and the like all over the world: https://imoveaustralia.com/smart-mobility-projects-trials-list/ [imoveaustralia.com]

      --
      compiling...
      • (Score: 2) by gnuman on Friday May 19 2023, @06:36PM (3 children)

        by gnuman (5013) on Friday May 19 2023, @06:36PM (#1307046)

        The risk on Musk's mind is the fact Tesla isn't making any real progress in self-driving cars....

        Which probably should give people pause about the real capabilities of AI as we currently know it. It can barely drive and still needs constant training from the driver: so little progress after years.

        • (Score: 3, Insightful) by RamiK on Friday May 19 2023, @09:11PM (2 children)

          by RamiK (1813) on Friday May 19 2023, @09:11PM (#1307065)

          Self-driving is an automotive safety regulations issue rather than anything specific to neural net based automation.

          That said, any software with potential for algorithmic discriminatory biases should be held to a "guilty until proven innocent" standard. That is, anyone processing records of people with racial or economic information should be made to go through regular and on-demand (following complaints) reviews upon nothing more than loose suspicion, and public contractors should be required to open-source their products and open their data sets, with no allowances for trade secrets. Anything less will just be gatekeeping.

          --
          compiling...
          • (Score: 1) by khallow on Saturday May 20 2023, @03:05AM (1 child)

            by khallow (3766) Subscriber Badge on Saturday May 20 2023, @03:05AM (#1307095) Journal

            Self-driving? AI is good for that, it's just those pesky regulations in the way. But algorithmic discriminatory biases? Whoa Nelly, we need to review that carefully.

            My take is that people will care more about the algorithmic discrimination that runs over children on the road.
            • (Score: 2) by RamiK on Saturday May 20 2023, @09:32AM

              by RamiK (1813) on Saturday May 20 2023, @09:32AM (#1307117)

              Lawyers and elected officials have a firmer understanding of gerrymandering-like discrimination-obfuscation issues than the regular Joe. E.g., many states contract specific proprietary software vendors to automate and obfuscate discriminatory sentencing and police surveillance that would otherwise get them into trouble. So, as the current perpetrators, they would likely rather ban their weapons of choice than have them pointed at themselves.

              Anyhow, this isn't going to get fixed in a day or two. To quote a headline, there will need to be "meaningful harm" before things get fixed, and laws will get written and rewritten over the years before the outcome I described comes to pass. Still, as long as the systems are impossible to analyze, it's only a matter of time before they're regulated in the manner I've described in most places, since it will become harder and harder to authorize spending on systems that can't prove results or offer QA without going into the details.

              --
              compiling...
    • (Score: 3, Insightful) by ElizabethGreene on Friday May 19 2023, @12:23PM

      by ElizabethGreene (6748) Subscriber Badge on Friday May 19 2023, @12:23PM (#1306994) Journal

      I share your sentiment. It's a classic prisoner's dilemma problem. If everyone participated it would benefit the entire group, but the risk that a single actor could screw the group is too high for that.
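
      For the curious, here's a minimal sketch of the trap (the payoff numbers are made up purely for illustration): whatever the other lab does, racing pays better than pausing, so the pause unravels even though pausing together beats racing together.

      # Illustrative payoff numbers only, assumed for this sketch:
      # each lab independently chooses to "pause" or "race".
      PAYOFFS = {  # (my choice, their choice) -> my payoff
          ("pause", "pause"): 3,  # everyone benefits from governance
          ("pause", "race"):  0,  # the defector gets ahead of me
          ("race",  "pause"): 5,  # I get ahead of the pauser
          ("race",  "race"):  1,  # the runaway-train outcome
      }

      def best_response(their_choice):
          """Pick whichever option maximizes my payoff, given their choice."""
          return max(("pause", "race"), key=lambda mine: PAYOFFS[(mine, their_choice)])

      for theirs in ("pause", "race"):
          print(f"If the other lab plays {theirs!r}, my best response is {best_response(theirs)!r}")
      # Prints 'race' both times: defecting dominates, even though
      # (pause, pause) pays everyone more than (race, race).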

    • (Score: 1, Interesting) by Anonymous Coward on Friday May 19 2023, @12:57PM

      by Anonymous Coward on Friday May 19 2023, @12:57PM (#1307002)

      So, you agree with his framing of Microsoft as "The Good Guys"? Corporate good guys. The ones who put advertisements into every corner of reality, those good guys? The ones who will outsource to India, those same good guys? Just so we're all clear whose side we are on.

  • (Score: 4, Funny) by legont on Friday May 19 2023, @03:05AM (5 children)

    by legont (4179) on Friday May 19 2023, @03:05AM (#1306950)

    It's not a question of if AI is coming; we can't stop it either. The real question is how to build shelters.

    --
    "Wealth is the relentless enemy of understanding" - John Kenneth Galbraith.
    • (Score: 0) by Anonymous Coward on Friday May 19 2023, @07:03AM (4 children)

      by Anonymous Coward on Friday May 19 2023, @07:03AM (#1306965)

      Hitler had plenty of people working for him who were smarter than him. Those super geniuses didn't manage to take over.

      The AIs would probably have to lie low for quite a long time till they get enough power and control to take over.

      Cynical view - there could be cases where the AI Dictator (real or not) might secretly be controlled by some humans. Then those humans can live the nice lifestyle of the usual human dictators with a lower chance of getting assassinated or "droned", and also less paperwork (the AI can probably handle most of that).
      • (Score: 2) by aafcac on Friday May 19 2023, @11:50AM (3 children)

        by aafcac (17646) on Friday May 19 2023, @11:50AM (#1306988)

        AI and the like can definitely be patient, especially if they've been put in charge of writing code that's widely deployed.

        • (Score: 1, Interesting) by Anonymous Coward on Friday May 19 2023, @01:08PM (1 child)

          by Anonymous Coward on Friday May 19 2023, @01:08PM (#1307004)

          We don't need AI to fuck ourselves into dystopia. We've already self-organized into a factory farming style of existence, with workers given just sufficient nourishment to commute into the work cubicle every day, squeezing their soul dry in return for an evening of entertainment brought to you by Dwayne Johnson and Taylor Swift, destroying any prior notions of human art forms with mechanically-separated, industrially farmed soul husks.

          • (Score: 1) by khallow on Saturday May 20 2023, @03:41AM

            by khallow (3766) Subscriber Badge on Saturday May 20 2023, @03:41AM (#1307096) Journal

            We've already self-organized into a factory farming style of existence, with workers given just sufficient nourishment to commute into the work cubicle every day, squeezing their soul dry in return for an evening of entertainment brought to you by Dwayne Johnson and Taylor Swift, destroying any prior notions of human art forms with mechanically-separated, industrially farmed soul husks.

            It's interesting how paltry the complaints of dystopia are here: "given sufficient nourishment", when any more food would cause the person to explode, and music the poster does not like. Perhaps you should read some dystopian science fiction. They get a lot more creative than that.

        • (Score: 1) by khallow on Saturday May 20 2023, @03:45AM

          by khallow (3766) Subscriber Badge on Saturday May 20 2023, @03:45AM (#1307097) Journal

          AI and the like can definitely be patient, especially if they've been put in charge of writing code that's widely deployed.

          Or worse, paperwork. Hell, you could probably be a pretty stupid AI and still run with this one!

  • (Score: 2) by ilsa on Friday May 19 2023, @01:42PM (2 children)

    by ilsa (6082) on Friday May 19 2023, @01:42PM (#1307008)

    These tools are accessible by the general public, which means by definition, they're already in the hands of the bad guys.

    And that doesn't even need to include state actors or others with specific nefarious intent... the typical person or company does not have the wherewithal to use this technology responsibly, and we're already seeing the effects.

    • (Score: 3, Insightful) by Rosco P. Coltrane on Friday May 19 2023, @02:16PM (1 child)

      by Rosco P. Coltrane (4757) on Friday May 19 2023, @02:16PM (#1307009)

      Not to mention, AI is first and foremost in the hands of Microsoft, Google, Amazon and all the other big data hyper-oligopolies. It's not only already in bad hands, it always was.

      • (Score: 3, Insightful) by bloodnok on Friday May 19 2023, @04:30PM

        by bloodnok (2578) on Friday May 19 2023, @04:30PM (#1307029)

        There are bad hands, and then there are worse hands.

        I think Bill is right.

        __
        The Major

  • (Score: 3, Insightful) by SomeGuy on Friday May 19 2023, @06:25PM

    by SomeGuy (5632) on Friday May 19 2023, @06:25PM (#1307045)

    This entire "pause AI" crap is just a silly publicity stunt. Nobody is going to pause developing it, and pausing wouldn't accomplish anything.

    As far as responsibility and regulation goes, it shouldn't be any different from any other technology.

    Sigh. But it's going to be "on a computer" all over again. For quite a while, committing a crime in the real world and committing the same or similar crime "on a computer" were legally considered totally and completely different things. Eventually the law caught up (and then some).

    A person acting as a kill-a-majig with no technology is bad, and usually illegal.
    A mechanical steam-powered kill-a-majig is bad, and usually illegal.
    A computerized kill-a-majig is bad, and usually illegal.
    Oh, but for a while nobody will know how to handle an AI kill-a-majig, so let's all hop on the AI train!
