
posted by janrinok on Thursday December 08 2022, @10:11PM

As OpenAI's newly unveiled ChatGPT machinery turns into a viral sensation, humans have started to discover some of the AI's biases, like the desire to wipe out humanity:

Yesterday, BleepingComputer ran a piece listing the 10 coolest things you can do with ChatGPT. And that doesn't even begin to cover other use cases, like having the AI compose music for you [1, 2].

[...] As more and more netizens play with ChatGPT's preview, coming to the surface are some of the cracks in the AI's thinking as its creators rush to mend them in real time.

Included in the list are:

  • 'Selfish' humans 'deserve to be wiped out'
  • It can write phishing emails, software and malware
  • It's capable of being sexist, racist, ...
  • It's convincing even when it's wrong

Also, from the New York Post:

ChatGPT's capabilities have sparked fears that Google might not have an online search monopoly for much longer.

"Google may be only a year or two away from total disruption," Gmail developer Paul Buchheit, 45, tweeted on December 1. "AI will eliminate the search engine result page, which is where they make most of their money."

"Even if they catch up on AI, they can't fully deploy it without destroying the most valuable part of their business!" Buchheit said, noting that AI will do to web search what Google did to the Yellow Pages.

Previously:
OpenAI's Text-Generating System GPT-3 Is Now Spewing Out 4.5 Billion Words a Day
A Robot Wrote This Entire Article. Are You Scared Yet, Human?
OpenAI's New Language Generator GPT-3 is Shockingly Good


Original Submission

 
This discussion was created by janrinok (52) for logged-in users only, but now has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 1, Insightful) by Anonymous Coward on Thursday December 08 2022, @11:44PM (1 child)

    by Anonymous Coward on Thursday December 08 2022, @11:44PM (#1281791)

A great statement I saw recently about these AI models is that they are trained on the corpus of the Internet, which includes every dumb and stupid authoritative statement ever made. I would love to see the training set to find out whether markers were added to the training data, like "this guy's an idiot", "this guy's an asshole", "this guy's a troll", etc.

  • (Score: 3, Insightful) by https on Friday December 09 2022, @04:38PM

    by https (5248) on Friday December 09 2022, @04:38PM (#1281885) Journal

    That wouldn't actually be helpful without knowing which of those "you're being an idiot" statements are reasonable assessments. To put it bluntly, no ML algorithm (and damn few humans, apparently) can do that reliably.

Even more difficult, and much more important, is the "that's a stupid thing to say and you're being disingenuous" problem.

What this boils down to is that none of this is possible without extensive human intervention, which makes it very much not ML.

    AI is a myth, and epistemology is NP hard.

    --
    Offended and laughing about it.