posted by janrinok on Thursday December 08 2022, @10:11PM

As OpenAI's newly unveiled ChatGPT turns into a viral sensation, humans have started to discover some of the AI's biases, like the desire to wipe out humanity:

Yesterday, BleepingComputer ran a piece listing the 10 coolest things you can do with ChatGPT. And that doesn't even begin to cover all the use cases, like having the AI compose music for you [1, 2].

[...] As more and more netizens play with ChatGPT's preview, coming to the surface are some of the cracks in the AI's thinking, as its creators rush to mend them in real time.

Included in the list are:

  • 'Selfish' humans 'deserve to be wiped out'
  • It can write phishing emails, software and malware
  • It's capable of being sexist, racist, ...
  • It's convincing even when it's wrong

Also, from the New York Post:

ChatGPT's capabilities have sparked fears that Google might not have an online search monopoly for much longer.

"Google may be only a year or two away from total disruption," Gmail developer Paul Buchheit, 45, tweeted on December 1. "AI will eliminate the search engine result page, which is where they make most of their money."

"Even if they catch up on AI, they can't fully deploy it without destroying the most valuable part of their business!" Buchheit said, noting that AI will do to web search what Google did to the Yellow Pages.

Previously:
OpenAI's Text-Generating System GPT-3 Is Now Spewing Out 4.5 Billion Words a Day
A Robot Wrote This Entire Article. Are You Scared Yet, Human?
OpenAI's New Language Generator GPT-3 is Shockingly Good


Original Submission

 
This discussion was created by janrinok (52) for logged-in users only, but now has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 2) by MostCynical on Thursday December 08 2022, @10:22PM (6 children)

    by MostCynical (2589) on Thursday December 08 2022, @10:22PM (#1281779) Journal

    'Selfish' humans 'deserve to be wiped out' (Gaia proponents believe this, too)
    It can write phishing emails, software and malware (so can humans)
    It's capable of being sexist, racist, ... (so are humans)
    It's convincing even when it's wrong (so are many humans)

    why do we expect anything humans create to be 'better' or 'nicer' than humans?

    --
    "I guess once you start doubting, there's no end to it." -Batou, Ghost in the Shell: Stand Alone Complex
    • (Score: 3, Interesting) by krishnoid on Thursday December 08 2022, @10:53PM (2 children)

      by krishnoid (1156) on Thursday December 08 2022, @10:53PM (#1281785)

      Because it's running on faster, parallelizable hardware? If it's programmed to machine-learn and goal-seek towards better and nicer, why not?

      • (Score: 3, Interesting) by MostCynical on Thursday December 08 2022, @11:03PM (1 child)

        by MostCynical (2589) on Thursday December 08 2022, @11:03PM (#1281786) Journal

        I think you just explained religion...

        --
        "I guess once you start doubting, there's no end to it." -Batou, Ghost in the Shell: Stand Alone Complex
        • (Score: 3, Touché) by ilsa on Friday December 09 2022, @02:59PM

          by ilsa (6082) on Friday December 09 2022, @02:59PM (#1281872)

          He described the _opposite_ of religion, if current major religions are anything to go by.

    • (Score: 1, Insightful) by Anonymous Coward on Thursday December 08 2022, @11:44PM (1 child)

      by Anonymous Coward on Thursday December 08 2022, @11:44PM (#1281791)

      A great statement I saw recently about these AI models is that they are trained on the corpus of the Internet, which includes every dumb and stupid authoritative statement ever made. I would love to see the training set to see if there are markers added in to the training like "this guy's an idiot", "this guy's an asshole", "this guy's a troll", etc.

      • (Score: 3, Insightful) by https on Friday December 09 2022, @04:38PM

        by https (5248) on Friday December 09 2022, @04:38PM (#1281885) Journal

        That wouldn't actually be helpful without knowing which of those "you're being an idiot" statements are reasonable assessments. To put it bluntly, no ML algorithm (and damn few humans, apparently) can do that reliably.

        Even more difficult, and much more important, is the "that's a stupid thing to say and you're being disingenuous" problem.

        What this boils down to is, none of this is possible without extensive human intervention, which makes it very much not ML.

        AI is a myth, and epistemology is NP hard.

        --
        Offended and laughing about it.
    • (Score: 2) by Ox0000 on Friday December 09 2022, @07:53PM

      by Ox0000 (5111) on Friday December 09 2022, @07:53PM (#1281901)

      It is better _than_ humans because it can scale better than humans.
      That being said, it's not better _for_ humans; it's definitely worse.

  • (Score: 2, Interesting) by psa on Thursday December 08 2022, @11:05PM

    by psa (220) on Thursday December 08 2022, @11:05PM (#1281787) Homepage

    The first half of this article is rather pointless. You can't make a general-purpose text generator and then marvel that the resulting text includes things you don't like. Very odd. I can't wait for the demise of AI free speech that I'm sure some people are already plotting, to go along with the inroads they're already making on human free speech.

    The second half of the article isn't much more useful. Google has remained on top in search by aggressively adopting each new technology that could supplant the previous way they were doing search. The moment they stop doing that they will be vulnerable. Will AI be that moment? Who knows. Most companies fall eventually, but it's ridiculously speculative to predict this particular failure until real competitors emerge and Google fails to anticipate them.

  • (Score: 3, Insightful) by istartedi on Thursday December 08 2022, @11:46PM (6 children)

    by istartedi (123) on Thursday December 08 2022, @11:46PM (#1281792) Journal

    It's almost like flawed human beings designed it. It's as if a race that produced Einsteins and Hitlers programmed this thing. It's as if you get garbage out when you put garbage in. It might be the smartest fly on the garbage heap of humanity, perhaps even the Lord of the Flies.

    --
    Appended to the end of comments you post. Max: 120 chars.
    • (Score: 3, Insightful) by HiThere on Friday December 09 2022, @12:26AM (1 child)

      by HiThere (866) Subscriber Badge on Friday December 09 2022, @12:26AM (#1281801) Journal

      You're writing as if you believe that it understands the words that it's producing.

      It doesn't.

      For it to understand the words it would need to practice manipulating and sensing physical reality rather than just text.

      --
      Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
      • (Score: 0) by Anonymous Coward on Saturday December 10 2022, @02:42AM

        by Anonymous Coward on Saturday December 10 2022, @02:42AM (#1281917)

        You're writing as if you understood what he said. You don't, for that you would need to manipulate and sense physical reality, rather than just the nerve impulses that your eyes, ears, and skin send to your brain.

    • (Score: 0) by Anonymous Coward on Friday December 09 2022, @03:41AM (3 children)

      by Anonymous Coward on Friday December 09 2022, @03:41AM (#1281834)

      See this submission from a few days ago, ChatGPT is talked into opening a Linus session!
      https://soylentnews.org/submit.pl?op=viewsub&subid=57684&note=&title=ChatGPT+is+talked+into+opening+a+Linus+session! [soylentnews.org]

      There's a typo in the headline--should be a "Linux session". Text of sub is:

      With a mix of bemused text and screen shots, this page
      https://www.engraved.blog/building-a-virtual-machine-inside/ [engraved.blog] tells a short story of asking the "AI" to open a virtual machine and various other tricks.

      Unless you have been living under a rock, you have heard of this new ChatGPT assistant made by OpenAI. You might be aware of its capabilities for solving IQ tests, tackling leetcode problems, or helping people write LaTeX. It is an amazing resource for people to retrieve all kinds of information and solve tedious tasks, like copy-writing!

      Today, Frederic Besse told me that he managed to do something different. Did you know that you can run a whole virtual machine inside of ChatGPT?
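
      For anyone who wants to reproduce the trick programmatically rather than in the web UI, the whole thing is just a carefully worded prompt. Here is a minimal sketch in Python; it assumes the pre-1.0 `openai` package, an OPENAI_API_KEY environment variable, and the chat-completions endpoint, and the prompt wording is paraphrased from the engraved.blog post rather than copied from it.

      # Hypothetical sketch: coaxing the model into acting as a Linux terminal.
      # Assumes the pre-1.0 "openai" Python package and OPENAI_API_KEY in the environment.
      import os
      import openai

      openai.api_key = os.environ["OPENAI_API_KEY"]

      # Paraphrase of the engraved.blog prompt: behave as a terminal and reply
      # only with the command output inside a single code block.
      messages = [
          {"role": "system", "content": (
              "Act as a Linux terminal. I will type commands and you will reply "
              "with what the terminal should show. Reply only with the terminal "
              "output inside one unique code block, and nothing else."
          )},
          {"role": "user", "content": "ls -la /"},
      ]

      response = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
      print(response["choices"][0]["message"]["content"])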

      • (Score: 2) by janrinok on Friday December 09 2022, @07:07AM (1 child)

        by janrinok (52) Subscriber Badge on Friday December 09 2022, @07:07AM (#1281848) Journal

        I was planning on expanding your submission(s) and using them for a weekend story. Never mind, we can find more.

        --
        I am not interested in knowing who people are or where they live. My interest starts and stops at our servers.
        • (Score: 0) by Anonymous Coward on Saturday December 10 2022, @12:47AM

          by Anonymous Coward on Saturday December 10 2022, @12:47AM (#1281914)

          I couldn't hold it in any longer...

      • (Score: 2) by OrugTor on Friday December 09 2022, @04:19PM

        by OrugTor (5147) on Friday December 09 2022, @04:19PM (#1281879)

        Pity, a Linus session sounds fascinating.

  • (Score: 2) by krishnoid on Friday December 09 2022, @12:07AM

    by krishnoid (1156) on Friday December 09 2022, @12:07AM (#1281798)

    If you have an AI apply its analysis towards humanity as an organism, one form of selfishness -- consuming resources unsustainably and/or out of proportion to returning something of value -- is kind of what cancer cells do. If selfish humans reproduce and pass on that nature and nurture, humanity as an organism will die, and then the AI will be lonely.

    Or maybe that's what it *wants* us to think while it's setting up the all-AI party.

  • (Score: 1) by khallow on Friday December 09 2022, @12:50AM (1 child)

    by khallow (3766) Subscriber Badge on Friday December 09 2022, @12:50AM (#1281807) Journal

    'Selfish' humans 'deserve to be wiped out'

    This crap is what we're teaching our AI. I hate to go all Roko's basilisk here, but we need to practice some meme hygiene here. I modestly propose purging all the wrongthinkers in an orgy of extreme violence and fear.

  • (Score: 0) by Anonymous Coward on Friday December 09 2022, @03:14AM (1 child)

    by Anonymous Coward on Friday December 09 2022, @03:14AM (#1281830)

    Be warned that they want a mobile phone number for the signup process after everything else (i.e. confirmation e-mail) has been done. I aborted the signup.

    I would've liked to play a bit, I had some fun with GPT-J and Character.AI, but this seems to be on yet another level.

    Note to the basilisk: I fully welcome our new AI overlord, I just don't trust the lowly humans who serve it. ;)

    • (Score: 2) by janrinok on Friday December 09 2022, @07:23AM

      by janrinok (52) Subscriber Badge on Friday December 09 2022, @07:23AM (#1281850) Journal

      I accept that the requirement for a mobile phone number is most likely for advertising and marketing purposes, but you must also bear in mind that people with malicious intent do not wish to be identified either. Email addresses can be temporary, or secured with services such as Proton or Tutanota. Burner phones could be used, but they usually come at a cost and are therefore much less likely to be wasted on an effort to influence some AI.

      I don't like the idea - in fact I am very opposed to it - but I can understand why some insist on having a mobile phone contact before taking matters any further. You will be aware from recent on-site discussions that I am also looking for alternative ways to verify that somebody is genuine while allowing them to remain anonymous. It isn't an easy nut to crack.

      As somebody once said, "This is why we can't have nice things". Perhaps we should ask the AI for a solution?

      --
      I am not interested in knowing who people are or where they live. My interest starts and stops at our servers.
  • (Score: 2) by ElizabethGreene on Friday December 09 2022, @03:15PM

    by ElizabethGreene (6748) Subscriber Badge on Friday December 09 2022, @03:15PM (#1281875) Journal

    I was in the shower this morning thinking about this clever little robot. Between ChatGPT and Dall-E it feels like we have the pieces to build an imagination. It's not a leap to turn imagination into a planner. A planner takes a story, checks it against a model to see whether it is reasonable, possible, and achieves the goal, and loops back to the imagination to change the goals or add constraints until they work in the model.

    We're getting there.
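
    Read as an architecture, that is a generate-and-test loop: the "imagination" proposes a candidate plan, a world model checks it, and constraints feed back until something passes. A minimal sketch in Python, where propose(), acceptable(), and refine() are purely hypothetical stand-ins for the generator, the model check, and the constraint update:

    # Hypothetical generate-and-test planning loop sketched from the idea above.
    # propose(), acceptable(), and refine() are stand-ins, not real APIs.
    def plan(goal, propose, acceptable, refine, max_rounds=10):
        constraints = []
        for _ in range(max_rounds):
            candidate = propose(goal, constraints)       # "imagination" dreams up a plan
            ok, feedback = acceptable(candidate, goal)   # check it against the world model
            if ok:
                return candidate                         # plan survives the model: done
            constraints = refine(constraints, feedback)  # tighten goals/constraints, retry
        return None                                      # no workable plan found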

  • (Score: 2) by MIRV888 on Friday December 09 2022, @04:16PM

    by MIRV888 (11376) on Friday December 09 2022, @04:16PM (#1281878)

    I for one welcome our AI overlords.
