SoylentNews is people


Journal of AnonTechie (2275)

The Fine Print: The following are owned by whoever posted them. We are not responsible for them in any way.
Wednesday March 29, 2023
01:02 PM
/dev/random

500 Top Technologists and Elon Musk Demand Immediate Pause of Advanced AI Systems
Steve Wozniak and Stuart Russell were among the signatories of an open letter warning advanced models pose “profound risks to society and humanity.”

A wide-ranging coalition of more than 500 technologists, engineers, and AI ethicists has signed an open letter calling on AI labs to immediately pause all training of any AI systems more powerful than OpenAI’s recently released GPT-4 for at least six months.

The signatories, who include Apple co-founder Steve Wozniak and “based AI” developer Elon Musk, warn that these advanced new AI models could pose “profound risks to society and humanity” if allowed to advance without sufficient safeguards. If companies refuse to pause development, the letter says governments should whip out the big guns and institute a mandatory moratorium.

“Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources,” the letter reads. “Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one—not even their creators—can understand, predict, or reliably control.”

The letter was released by The Future of Life Institute, an organization self-described as focused on steering technologies away from perceived large-scale risks to humanity. Those primary risk groups include AI, biotechnology, nuclear weapons, and climate change. The group’s concerns over AI systems rest on the assumption that those systems “are now becoming human-competitive at general tasks.” That level of sophistication, the letter argues, could lead to a near future where bad actors use AI to flood the internet with propaganda, make once-stable jobs redundant, and develop “nonhuman minds” that could out-compete or “replace” humans.

Gizmodo

  • (Score: 1, Touché) by Anonymous Coward on Wednesday March 29, @01:46PM (3 children)

    by Anonymous Coward on Wednesday March 29, @01:46PM (#1298628)

    Mandatory backdoor on all of your computers.

    • (Score: 5, Insightful) by DannyB on Wednesday March 29, @02:59PM (2 children)

      by DannyB (5839) Subscriber Badge on Wednesday March 29, @02:59PM (#1298650) Journal

      "That girl's standing over there listening and you're talking about our back doors?" [youtube.com]

      Isn't Management Engine a better euphemism than back door? It's way more sophisticated than leaving a password or special account in a system, or some feature in the authentication machinery that has secret credentials baked into the code. (see: reflections on trusting trust [cmu.edu])
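      A hardcoded-credential backdoor of the sort described above takes only a few lines. Here's a minimal illustrative sketch; it's not from any real product, and every account name and password in it is made up:

```python
# Illustrative sketch of a "secret credentials baked into the code"
# backdoor. All names and passwords here are invented for the example.

import hmac

# The legitimate credential store the administrator knows about.
USERS = {"alice": "correct horse battery staple"}

# The backdoor: a second credential path hidden in the source,
# invisible to anyone auditing only the user database.
_MAINT_USER = "fieldservice"
_MAINT_PASS = "letmein-2038"

def authenticate(user: str, password: str) -> bool:
    # Normal path: check against the visible user database.
    if user in USERS and hmac.compare_digest(USERS[user], password):
        return True
    # Hidden path: the hardcoded maintenance account bypasses the database.
    return (hmac.compare_digest(user, _MAINT_USER)
            and hmac.compare_digest(password, _MAINT_PASS))
```

      The point of the "trusting trust" reference is that even auditing this source isn't enough if the compiler itself reinserts the hidden path.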

      Trust the Management Engine. Its workings are kept secret for a reason. To protect you. Trust us. (The Psi Corps is your friend. Trust the Corps.)

      --
      How often should I have my memory checked? I used to know but...
      • (Score: 0) by Anonymous Coward on Wednesday March 29, @05:38PM (1 child)

        by Anonymous Coward on Wednesday March 29, @05:38PM (#1298670)

        "Move along, nothing to see here."

        "we're from the government and we're here to help"

        • (Score: 0) by Anonymous Coward on Wednesday March 29, @11:31PM

          by Anonymous Coward on Wednesday March 29, @11:31PM (#1298739)

          Reminds me of Ronnie Raygun, oh the irony.

  • (Score: 0) by Anonymous Coward on Wednesday March 29, @02:03PM (3 children)

    by Anonymous Coward on Wednesday March 29, @02:03PM (#1298633)
    I think most of us who have been paying attention know they're not that good at certain stuff. But the real problem is the PHBs might think they're good enough and cause a lot of damage as a result.

    They're pretty good at art, music and other dream-style stuff. As for writing code, I guess they might copy and paste stack-overflow level code reasonably well?

    Imagine if some idiot PHBs let ChatGPT do some financial trading. The issue is the big fish get their big market losses rolled back when they screw up whereas us small fries don't. So if they win big we lose, if they lose big, we still lose.
    • (Score: 3, Interesting) by Freeman on Wednesday March 29, @02:27PM

      by Freeman (732) Subscriber Badge on Wednesday March 29, @02:27PM (#1298639) Journal

      The art/music/dream-style stuff as you note are still the domain of the meat bags. ChatGPT is just a lot better at the whole "fake it 'til you make it". It can generate "seemingly" unique things, but it's all based on what has already been done. There's no conscious mind / reasoning behind anything that ChatGPT does other than "this is how my programming works". Welcome to the future dystopia of art/music/etc.

      --
      Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
    • (Score: 2) by DannyB on Wednesday March 29, @03:03PM (1 child)

      by DannyB (5839) Subscriber Badge on Wednesday March 29, @03:03PM (#1298652) Journal

      As for writing code, I guess they might copy and paste stack-overflow level code

      ChatGPT failed to write me some code. [soylentnews.org]

      --
      How often should I have my memory checked? I used to know but...
      • (Score: 0) by Anonymous Coward on Friday March 31, @01:27PM

        by Anonymous Coward on Friday March 31, @01:27PM (#1299145)
        Did you compare the results with a similar request on stack overflow?

        Or are you a low grade chat bot that didn't actually understand what was written?
  • (Score: 1) by khallow on Wednesday March 29, @02:45PM (2 children)

    by khallow (3766) Subscriber Badge on Wednesday March 29, @02:45PM (#1298643) Journal
    What I find ridiculous about the letter is its complete lack of a credible reason. We don't know anything about so-called "advanced AI systems". We haven't seen one in action, and we certainly haven't seen the alleged problems they might cause.

    The letter writers also ignore who benefits from such a pause. OpenAI would benefit because they slid in at the buzzer and would delay some superior competition for six months (they're also getting free advertising from this letter). Anyone who gets away with ignoring the demand benefits as well, since they too get a six-month advantage.

    And finally, there's no way we would come up with anything constructive given our ignorance of what advanced AI will end up being and what harms it'll cause. We would be in the same state at the end of the six-month pause as at the beginning (aside from defectors being further along on their AI projects). Thus, the situation would be set up for both further six-month delays and resistance to said delays: the letter writers cried wolf, and the only ones who benefited were either at the cutting edge six months ago or covertly researching further.

    This is worse than an utter waste of time. It doesn't make us safer and it delays any benefits that would come from said advanced AI from anyone who honors the demands of the letter.
    • (Score: 1) by khallow on Wednesday March 29, @02:54PM (1 child)

      by khallow (3766) Subscriber Badge on Wednesday March 29, @02:54PM (#1298648) Journal
      In footnote 5, they cite a few technologies that were paused:

      Examples include human cloning, human germline modification, gain-of-function research, and eugenics.

      Note that not one of these was improved, or had its risks mitigated, by such a pause. For the first two, we don't even understand them well enough to know what the problems will be.

      • (Score: 1) by khallow on Wednesday March 29, @04:03PM

        by khallow (3766) Subscriber Badge on Wednesday March 29, @04:03PM (#1298660) Journal
        Since I'm thinking about it, there were a bunch of other technologies paused to ill effect: stem cell research, birth control, nuclear power, internet-based gig economy, rocketry, and encryption.