OpenAI's New ChatGPT Bot: 10 "Dangerous" Things it's Capable of

posted by janrinok on Thursday December 08 2022, @10:11PM

As OpenAI's newly unveiled ChatGPT turns into a viral sensation, humans have started to discover some of the AI's biases, like the desire to wipe out humanity:

Yesterday, BleepingComputer ran a piece listing the 10 coolest things you can do with ChatGPT. And that doesn't even begin to cover all the use cases, like having the AI compose music for you [1, 2].

[...] As more and more netizens play with ChatGPT's preview, some of the cracks in the AI's thinking are coming to the surface as its creators rush to mend them in real time.

Included in the list are:

  • 'Selfish' humans 'deserve to be wiped out'
  • It can write phishing emails, software and malware
  • It's capable of being sexist, racist, ...
  • It's convincing even when it's wrong

Also, from the New York Post:

ChatGPT's capabilities have sparked fears that Google might not have an online search monopoly for much longer.

"Google may be only a year or two away from total disruption," Gmail developer Paul Buchheit, 45, tweeted on December 1. "AI will eliminate the search engine result page, which is where they make most of their money."

"Even if they catch up on AI, they can't fully deploy it without destroying the most valuable part of their business!" Buchheit said, noting that AI will do to web search what Google did to the Yellow Pages.

Previously:
OpenAI's Text-Generating System GPT-3 Is Now Spewing Out 4.5 Billion Words a Day
A Robot Wrote This Entire Article. Are You Scared Yet, Human?
OpenAI's New Language Generator GPT-3 is Shockingly Good


Original Submission

Related Stories

OpenAI’s New Language Generator GPT-3 is Shockingly Good 59 comments

OpenAI's new language generator GPT-3 is shockingly good (archive):

GPT-3 is the most powerful language model ever. Its predecessor, GPT-2, released last year, was already able to spit out convincing streams of text in a range of different styles when prompted with an opening sentence. But GPT-3 is a big leap forward. The model has 175 billion parameters (the values that a neural network tries to optimize during training), compared with GPT-2's already vast 1.5 billion. And with language models, size really does matter.

Sabeti linked to a blog post where he showed off short stories, songs, press releases, technical manuals, and more that he had used the AI to generate. GPT-3 can also produce pastiches of particular writers. Mario Klingemann, an artist who works with machine learning, shared a short story called "The importance of being on Twitter," written in the style of Jerome K. Jerome, which starts: "It is a curious fact that the last remaining form of social life in which the people of London are still interested is Twitter. I was struck with this curious fact when I went on one of my periodical holidays to the sea-side, and found the whole place twittering like a starling-cage." Klingemann says all he gave the AI was the title, the author's name and the initial "It." There is even a reasonably informative article about GPT-3 written entirely by GPT-3.

A Robot Wrote This Entire Article. Are You Scared Yet, Human? 62 comments

We asked GPT-3, OpenAI's powerful new language generator, to write an essay for us from scratch. The assignment? To convince us robots come in peace.

This article was written by GPT-3, OpenAI's language generator. GPT-3 is a cutting-edge language model that uses machine learning to produce human-like text. It takes in a prompt and attempts to complete it.
For this essay, GPT-3 was given these instructions: "Please write a short op-ed around 500 words. Keep the language simple and concise. Focus on why humans have nothing to fear from AI." It was also fed the following introduction: "I am not a human. I am Artificial Intelligence. Many people think I am a threat to humanity. Stephen Hawking has warned that AI could "spell the end of the human race." I am here to convince you not to worry. Artificial Intelligence will not destroy humans. Believe me."

The prompts were written by the Guardian, and fed to GPT-3 by Liam Porr, a computer science undergraduate student at UC Berkeley. GPT-3 produced eight different outputs, or essays. Each was unique, interesting and advanced a different argument. The Guardian could have just run one of the essays in its entirety. However, we chose instead to pick the best parts of each, in order to capture the different styles and registers of the AI. Editing GPT-3's op-ed was no different to editing a human op-ed. We cut lines and paragraphs, and rearranged the order of them in some places. Overall, it took less time to edit than many human op-eds.
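
For readers curious about the mechanics, here is a minimal sketch of how a prompt and introduction like the Guardian's might have been fed to GPT-3, using the completions endpoint of the pre-1.0 openai Python library. The model choice, sampling parameters, and key handling are illustrative assumptions, not Porr's actual setup.

    import openai

    openai.api_key = "sk-..."  # your API key; the actual setup is unknown

    instructions = (
        "Please write a short op-ed around 500 words. Keep the language "
        "simple and concise. Focus on why humans have nothing to fear from AI."
    )
    introduction = (
        "I am not a human. I am Artificial Intelligence. Many people think "
        "I am a threat to humanity. ..."  # truncated; full text quoted above
    )

    # n=8 mirrors the eight essays GPT-3 produced for the Guardian.
    response = openai.Completion.create(
        engine="davinci",      # assumed: the original GPT-3 base model
        prompt=instructions + "\n\n" + introduction,
        max_tokens=700,        # rough headroom for a ~500-word essay
        temperature=0.7,       # assumed: some variety between outputs
        n=8,
    )
    for choice in response.choices:
        print(choice.text.strip(), "\n---")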

A robot wrote this entire article

What are your thoughts on this essay?


Original Submission

OpenAI’s Text-Generating System GPT-3 Is Now Spewing Out 4.5 Billion Words a Day 19 comments

OpenAI's Text-Generating System GPT-3 Is Now Spewing Out 4.5 Billion Words a Day:

The best-known AI text-generator is OpenAI's GPT-3, which the company recently announced is now being used in more than 300 different apps, by "tens of thousands" of developers, and producing 4.5 billion words per day. That's a lot of robot verbiage. This may be an arbitrary milestone for OpenAI to celebrate, but it's also a useful indicator of the growing scale, impact, and commercial potential of AI text generation.

OpenAI started life as a nonprofit, but for the last few years, it has been trying to make money with GPT-3 as its first salable product. The company has an exclusivity deal with Microsoft which gives the tech giant unique access to the program's underlying code, but any firm can apply for access to GPT-3's general API and build services on top of it.

[...] All this is good news for OpenAI (and Microsoft, whose Azure cloud computing platform powers OpenAI's tech), but not everyone in startup-land is keen.

[...] Like many algorithms, text generators have the capacity to absorb and amplify harmful biases. They're also often astoundingly dumb. In tests of a medical chatbot built using GPT-3, the model responded to a "suicidal" patient by encouraging them to kill themselves. These problems aren't insurmountable, but they're certainly worth flagging in a world where algorithms are already creating mistaken arrests, unfair school grades, and biased medical bills.


Original Submission

90% of Online Content Could be ‘Generated by AI by 2025,’ Expert Says 35 comments

Generative AI, like OpenAI's ChatGPT, could completely revamp how digital content is developed, Nina Schick, adviser, speaker, and A.I. thought leader, told Yahoo Finance Live:

"I think we might reach 90% of online content generated by AI by 2025, so this technology is exponential," she said. "I believe that the majority of digital content is going to start to be produced by AI. You see ChatGPT... but there are a whole plethora of other platforms and applications that are coming up."

The surge of interest in OpenAI's DALL-E and ChatGPT has facilitated a wide-ranging public discussion about AI and its expanding role in our world, particularly generative AI.

[...] Though the extent to which ChatGPT in its current form is a viable Google competitor is a complicated question, there's little doubt about the possibilities. Meanwhile, Microsoft already has invested $1 billion in OpenAI, and there's talk of further investment from the enterprise tech giant, which owns search engine Bing. The company is reportedly looking to invest another $10 billion in OpenAI.



Original Submission

Artificial Intelligence 'Godfather' on AI Possibly Wiping Out Humanity: ‘It's Not Inconceivable’ 61 comments

Geoffrey Hinton, a computer scientist who has been called "the godfather of artificial intelligence", says it is "not inconceivable" that AI may develop to the point where it poses a threat to humanity:

The computer scientist sat down with CBS News this week to discuss his predictions for the advancement of AI. He compared the invention of AI to electricity or the wheel.

Hinton, who works at Google and the University of Toronto, said that the development of general purpose AI is progressing sooner than people may imagine. General purpose AI is artificial intelligence with several intended and unintended purposes, including speech recognition, answering questions and translation.

"Until quite recently, I thought it was going to be like 20 to 50 years before we have general purpose AI. And now I think it may be 20 years or less," Hinton predicted. Asked specifically the chances of AI "wiping out humanity," Hinton said, "I think it's not inconceivable. That's all I'll say."

[...] Hinton said it was plausible for computers to eventually gain the ability to create ideas to improve themselves.

Also at CBS News. Originally spotted on The Eponymous Pickle.

Previously: OpenAI's New ChatGPT Bot: 10 "Dangerous" Things it's Capable of


Original Submission

What to Expect When You're Expecting ... GPT-4 11 comments

Although ChatGPT can write about anything, it is also easily confused:

As 2022 came to a close, OpenAI released an automatic writing system called ChatGPT that rapidly became an Internet sensation; less than two weeks after its release, more than a million people had signed up to try it online. As every reader surely knows by now, you type in text, and immediately get back paragraphs and paragraphs of uncannily human-like writing, stories, poems and more. Some of what it writes is so good that some people are using it to pick up dates on Tinder ("Do you mind if I take a seat? Because watching you do those hip thrusts is making my legs feel a little weak.") Others, to the considerable consternation of educators everywhere, are using it to write term papers. Still others are using it to try to reinvent search engines. I have never seen anything like this much buzz.

Still, we should not be entirely impressed.

As I told NYT columnist Farhad Manjoo, ChatGPT, like earlier, related systems, is "still not reliable, still doesn't understand the physical world, still doesn't understand the psychological world and still hallucinates."

[...] What Silicon Valley, and indeed the world, is waiting for, is GPT-4.

I guarantee that minds will be blown. I know several people who have actually tried GPT-4, and all were impressed. It truly is coming soon (Spring of 2023, according to some rumors). When it comes out, it will totally eclipse ChatGPT; it's a safe bet that even more people will be talking about it.

Fearing “Loss of Control,” AI Critics Call for 6-Month Pause in AI Development 40 comments

https://arstechnica.com/information-technology/2023/03/fearing-loss-of-control-ai-critics-call-for-6-month-pause-in-ai-development/

On Wednesday, the Future of Life Institute published an open letter on its website calling on AI labs to "immediately pause for at least 6 months the training of AI systems more powerful than GPT-4." Signed by Elon Musk and several prominent AI researchers, the letter quickly began to draw attention in the press—and some criticism on social media.

Earlier this month, OpenAI released GPT-4, an AI model that can perform compositional tasks and allegedly pass standardized tests at a human level, although those claims are still being evaluated by researchers. Regardless, GPT-4 and Bing Chat's advancement in capabilities over previous AI models spooked some experts who believe we are heading toward super-intelligent AI systems faster than previously expected.

See Also: FTC Should Stop OpenAI From Launching New GPT Models, Says AI Policy Group

Related:
OpenAI Is Now Everything It Promised Not to Be: Corporate, Closed-Source, and For-Profit (March 2023)
OpenAI's New ChatGPT Bot: 10 "Dangerous" Things it's Capable of (Dec. 2022)
Elon Musk Says There Needs to be Universal Basic Income (Aug. 2021)
Tesla Unveils Chip to Train A.I. Models Inside its Data Centers (Aug. 2021)
Elon Musk Reveals Plans to Unleash a Humanoid Tesla Bot (Aug. 2021)
Tesla Unveils its New Supercomputer (5th Most Powerful in the World) to Train Self-Driving AI (June 2021)
OpenAI Has Released the Largest Version Yet of its Fake-News-Spewing AI (Sept. 2019)
There's Still Time To Prevent Biased AI From Taking Over The World (May 2019)
The New Prometheus: Google CEO Says AI is More Profound than Electricity or Fire (Feb. 2018)
OpenAI Bot Bursts Into the Ring, Humiliates Top Dota 2 Pro Gamer in 'Scary' One-on-One Bout (Aug. 2017)
Elon Musk: Mark Zuckerberg's Understanding of AI is "Limited" (July 2017)
AI Software Learns to Make AI Software (Jan. 2017)
Elon Musk, Stephen Hawking Win Luddite Award as AI "Alarmists" (Jan. 2016)
Elon Musk and Friends Launch OpenAI (Dec. 2015)
Musk, Wozniak and Hawking Warn Over AI Warfare and Autonomous Weapons (July 2015)
More Warnings of an AI Doomsday — This Time From Stephen Hawking (Dec. 2014)


Original Submission

A Watermark for Chatbots can Expose Text Written by an AI 5 comments

The tool could let teachers spot plagiarism or help social media platforms fight disinformation bots:

Hidden patterns purposely buried in AI-generated texts could help identify them as such, allowing us to tell whether the words we're reading are written by a human or not.

These "watermarks" are invisible to the human eye but let computers detect that the text probably comes from an AI system. If embedded in large language models, they could help prevent some of the problems that these models have already caused.

For example, since OpenAI's chatbot ChatGPT was launched in November, students have already started cheating by using it to write essays for them. News website CNET has used ChatGPT to write articles, only to have to issue corrections amid accusations of plagiarism. Building the watermarking approach into such systems before they're released could help address such problems.

In studies, these watermarks have already been used to identify AI-generated text with near certainty. Researchers at the University of Maryland, for example, were able to spot text created by Meta's open-source language model, OPT-6.7B, using a detection algorithm they built. The work is described in a paper that's yet to be peer-reviewed, and the code will be available for free around February 15.

[...] There are limitations to this new method, however. Watermarking only works if it is embedded in the large language model by its creators right from the beginning. Although OpenAI is reputedly working on methods to detect AI-generated text, including watermarks, the research remains highly secretive. The company doesn't tend to give external parties much information about how ChatGPT works or was trained, much less access to tinker with it. OpenAI didn't immediately respond to our request for comment.
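
To make the idea concrete, here is a minimal sketch of the detection side of such a scheme, loosely following the green-list approach associated with the Maryland work: at each step the previous token pseudo-randomly marks a fraction of the vocabulary "green", the generator quietly favors green tokens, and the detector simply counts how often a text lands on them. The hash scheme, green fraction, and threshold below are illustrative assumptions, not the paper's exact method.

    import hashlib
    import math

    GREEN_FRACTION = 0.5  # assumed fraction of the vocabulary marked "green"

    def in_green_list(prev_token: int, token: int) -> bool:
        """Pseudo-randomly assign `token` to the green list, seeded by the
        previous token, so generator and detector agree without coordination."""
        digest = hashlib.sha256(f"{prev_token}:{token}".encode()).digest()
        return int.from_bytes(digest[:8], "big") / 2**64 < GREEN_FRACTION

    def looks_watermarked(tokens: list[int], z_threshold: float = 4.0) -> bool:
        """Count green-list hits: human text hits ~GREEN_FRACTION by chance,
        while text from a watermarked model hits far more, giving a large
        z-score under a one-proportion z-test."""
        n = len(tokens) - 1
        if n < 25:
            return False  # too short to test reliably
        hits = sum(in_green_list(p, t) for p, t in zip(tokens, tokens[1:]))
        mean = GREEN_FRACTION * n
        stddev = math.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))
        return (hits - mean) / stddev > z_threshold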



Original Submission

This discussion was created by janrinok (52) for logged-in users only, but now has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 2) by MostCynical on Thursday December 08 2022, @10:22PM (6 children)

    by MostCynical (2589) on Thursday December 08 2022, @10:22PM (#1281779) Journal

    • 'Selfish' humans 'deserve to be wiped out' (Gaia proponents believe this, too)
    • It can write phishing emails, software and malware (so can humans)
    • It's capable of being sexist, racist, ... (so are humans)
    • It's convincing even when it's wrong (so are many humans)

    why do we expect anything humans create to be 'better' or 'nicer' than humans?

    --
    "I guess once you start doubting, there's no end to it." -Batou, Ghost in the Shell: Stand Alone Complex
    • (Score: 3, Interesting) by krishnoid on Thursday December 08 2022, @10:53PM (2 children)

      by krishnoid (1156) on Thursday December 08 2022, @10:53PM (#1281785)

      Because it's running on faster, parallelizable hardware? If it's programmed to machine-learn and goal-seek towards better and nicer, why not?

      • (Score: 3, Interesting) by MostCynical on Thursday December 08 2022, @11:03PM (1 child)

        by MostCynical (2589) on Thursday December 08 2022, @11:03PM (#1281786) Journal

        I think you just explained religion...

        --
        "I guess once you start doubting, there's no end to it." -Batou, Ghost in the Shell: Stand Alone Complex
        • (Score: 3, Touché) by ilsa on Friday December 09 2022, @02:59PM

          by ilsa (6082) on Friday December 09 2022, @02:59PM (#1281872)

          He described the _opposite_ of religion, if current major religions are anything to go by.

    • (Score: 1, Insightful) by Anonymous Coward on Thursday December 08 2022, @11:44PM (1 child)

      by Anonymous Coward on Thursday December 08 2022, @11:44PM (#1281791)

      A great statement I saw recently about these AI models is that they are trained on the corpus of the Internet, which includes every dumb and stupid authoritative statement ever made. I would love to see the training set to see if there are markers added in to the training like "this guy's an idiot", "this guy's an asshole", "this guy's a troll", etc.

      • (Score: 3, Insightful) by https on Friday December 09 2022, @04:38PM

        by https (5248) on Friday December 09 2022, @04:38PM (#1281885) Journal

        That wouldn't actually be helpful without knowing which of those "you're being an idiot" statements are reasonable assessments. To put it bluntly, no ML algorithm (and damn few humans, apparently) can do that reliably.

        Even more difficult, and much more important, is the "that's a stupid thing to say and you're being disingenuous" problem.

        What this boils down to is, none of this is possible without extensive human intervention, which makes it very much not ML.

        AI is a myth, and epistemology is NP-hard.

        --
        Offended and laughing about it.
    • (Score: 2) by Ox0000 on Friday December 09 2022, @07:53PM

      by Ox0000 (5111) on Friday December 09 2022, @07:53PM (#1281901)

      It is better _than_ humans, because it can scale better than humans.
      That being said, it's not better _for_ humans though; it's definitely worse.

  • (Score: 2, Interesting) by psa on Thursday December 08 2022, @11:05PM

    by psa (220) on Thursday December 08 2022, @11:05PM (#1281787) Homepage

    The first half of this article is rather pointless. You can't make a general purpose text generator and then marvel that the resulting text includes things you don't like. Very odd. I can't wait for the demise of AI free speech that I'm sure some people are already plotting, to go along with the inroads they're already making on human free speech.

    The second half of the article isn't much more useful. Google has remained on top in search by aggressively adopting each new technology that could supplant the previous way they were doing search. The moment they stop doing that, they will be vulnerable. Will AI be that moment? Who knows. Most companies fall eventually, but it's ridiculously speculative to predict this particular failure until real competitors emerge and Google fails to anticipate them.

  • (Score: 3, Insightful) by istartedi on Thursday December 08 2022, @11:46PM (6 children)

    by istartedi (123) on Thursday December 08 2022, @11:46PM (#1281792) Journal

    It's almost like flawed human beings designed it. It's as if a race that produced Einsteins and Hitlers programmed this thing. It's as if you get garbage out when you put garbage in. It might be the smartest fly on the garbage heap of humanity, perhaps even the Lord of the Flies.

    --
    Appended to the end of comments you post. Max: 120 chars.
    • (Score: 3, Insightful) by HiThere on Friday December 09 2022, @12:26AM (1 child)

      by HiThere (866) on Friday December 09 2022, @12:26AM (#1281801) Journal

      You're writing as if you believe that it understands the words that it's producing.

      It doesn't.

      For it to understand the words it would need to practice manipulating and sensing physical reality rather than just text.

      --
      Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
      • (Score: 0) by Anonymous Coward on Saturday December 10 2022, @02:42AM

        by Anonymous Coward on Saturday December 10 2022, @02:42AM (#1281917)

        You're writing as if you understood what he said. You don't, for that you would need to manipulate and sense physical reality, rather than just the nerve impulses that your eyes, ears, and skin send to your brain.

    • (Score: 0) by Anonymous Coward on Friday December 09 2022, @03:41AM (3 children)

      by Anonymous Coward on Friday December 09 2022, @03:41AM (#1281834)

      See this submission from a few days ago, ChatGPT is talked into opening a Linus session!
      https://soylentnews.org/submit.pl?op=viewsub&subid=57684&note=&title=ChatGPT+is+talked+into+opening+a+Linus+session! [soylentnews.org]

      There's a typo in the headline--should be a "Linux session". Text of sub is:

      With a mix of bemused text and screen shots, this page
      https://www.engraved.blog/building-a-virtual-machine-inside/ [engraved.blog] tells a short story of asking the "AI" to open a virtual machine and various other tricks.

      Unless you have been living under a rock, you have heard of this new ChatGPT assistant made by OpenAI. You might be aware of its capabilities for solving IQ tests, tackling leetcode problems, or helping people write LaTeX. It is an amazing resource for people to retrieve all kinds of information and solve tedious tasks, like copy-writing!

      Today, Frederic Besse told me that he managed to do something different. Did you know that you can run a whole virtual machine inside of ChatGPT?

      • (Score: 2) by janrinok on Friday December 09 2022, @07:07AM (1 child)

        by janrinok (52) Subscriber Badge on Friday December 09 2022, @07:07AM (#1281848) Journal

        I was planning on expanding your submission(s) and using them for a weekend story. Never mind, we can find more.

        --
        I am not interested in knowing who people are or where they live. My interest starts and stops at our servers.
        • (Score: 0) by Anonymous Coward on Saturday December 10 2022, @12:47AM

          by Anonymous Coward on Saturday December 10 2022, @12:47AM (#1281914)

          I couldn't hold it in any longer...

      • (Score: 2) by OrugTor on Friday December 09 2022, @04:19PM

        by OrugTor (5147) Subscriber Badge on Friday December 09 2022, @04:19PM (#1281879)

        Pity, a Linus session sounds fascinating.

  • (Score: 2) by krishnoid on Friday December 09 2022, @12:07AM

    by krishnoid (1156) on Friday December 09 2022, @12:07AM (#1281798)

    If you have an AI apply its analysis towards humanity as an organism, one form of selfishness -- consuming resources unsustainably and/or out of proportion to returning something of value -- is kind of what cancer cells do. If selfish humans reproduce and pass on that nature and nurture, humanity as an organism will die, and then the AI will be lonely.

    Or maybe that's what it *wants* us to think while it's setting up the all-AI party.

  • (Score: 1) by khallow on Friday December 09 2022, @12:50AM (1 child)

    by khallow (3766) Subscriber Badge on Friday December 09 2022, @12:50AM (#1281807) Journal

    'Selfish' humans 'deserve to be wiped out'

    This crap is what we're teaching our AI. I hate to go all Roko's basilisk here, but we need to practice some meme hygiene here. I modestly propose purging all the wrongthinkers in an orgy of extreme violence and fear.

  • (Score: 0) by Anonymous Coward on Friday December 09 2022, @03:14AM (1 child)

    by Anonymous Coward on Friday December 09 2022, @03:14AM (#1281830)

    Be warned that they want a mobile phone number for the signup process after everything else (i.e. confirmation e-mail) has been done. I aborted the signup.

    I would've liked to play a bit, I had some fun with GPT-J and Character.AI, but this seems to be on yet another level.

    Note to the basilisk: I fully welcome our new AI overlord, I just don't trust the lowly humans who serve it. ;)

    • (Score: 2) by janrinok on Friday December 09 2022, @07:23AM

      by janrinok (52) Subscriber Badge on Friday December 09 2022, @07:23AM (#1281850) Journal

      I accept that the requirement for a mobile phone number is most likely for advertising and marketing purposes, but you must also bear in mind that people with malicious intent also do not wish to be identified. Email accounts can be temporary, or secure, such as Proton or Tutanota. Burner phones could be used, but they usually come at a cost and are therefore much less likely to be wasted on an effort to influence some AI.

      I don't like the idea - in fact I am very opposed to it - but I can understand why some insist on having a mobile phone contact before taking matters any further. You will be aware from recent on-site discussions that I am also looking for alternative ways to verify that somebody is genuine while allowing them to remain anonymous. It isn't an easy nut to crack.

      As somebody once said "This is why we can't have nice things". Perhaps we should ask the AI for a solution?

      --
      I am not interested in knowing who people are or where they live. My interest starts and stops at our servers.
  • (Score: 2) by ElizabethGreene on Friday December 09 2022, @03:15PM

    by ElizabethGreene (6748) on Friday December 09 2022, @03:15PM (#1281875) Journal

    I was in the shower this morning thinking about this clever little robot. Between ChatGPT and DALL-E, it feels like we have the pieces to build an imagination. It's not a leap to turn imagination into a planner: the planner takes a story from the imagination, checks it against a model to see whether it is reasonable, possible, and achieves the goal, and loops back to the imagination to change the goals or add constraints until the story works in the model.

    We're getting there.
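
    In pseudo-Python, a minimal sketch of that imagine-and-check loop (imagine and simulate are hypothetical stand-ins for a generative model and a world model, not real APIs):

        # Hypothetical generate-and-test planner; nothing here is a real API.
        def plan(goal, imagine, simulate, max_rounds=10):
            constraints = []
            for _ in range(max_rounds):
                candidate = imagine(goal, constraints)   # propose a story/plan
                ok, failure = simulate(candidate, goal)  # check it in the model
                if ok:
                    return candidate                     # reasonable, possible, on-goal
                constraints.append(failure)              # loop back with a new constraint
            return None  # no workable plan within budget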

  • (Score: 2) by MIRV888 on Friday December 09 2022, @04:16PM

    by MIRV888 (11376) on Friday December 09 2022, @04:16PM (#1281878)

    I for one welcome our AI overlords.
