
SoylentNews is people

posted by martyb on Tuesday October 12, @09:42PM

Microsoft and Nvidia create 105-layer, 530 billion parameter language model that needs 280 A100 GPUs, but it's still biased

Nvidia and Microsoft have teamed up to create the Megatron-Turing Natural Language Generation model, which the duo claims is the "most powerful monolithic transformer language model trained to date".

The AI model has 105 layers, 530 billion parameters, and operates on chunky supercomputer hardware like Selene. By comparison, the vaunted GPT-3 has 175 billion parameters.
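For a rough sense of scale, the parameter counts alone imply an enormous memory footprint. A back-of-envelope sketch (assuming fp16 weights at 2 bytes per parameter, which is an assumption on our part, not a figure from the article):

```python
# Back-of-envelope scale check (assumption: fp16 weights at 2 bytes/param;
# optimizer state and activations during training would add several times more).
MT_NLG_PARAMS = 530e9   # Megatron-Turing NLG
GPT3_PARAMS = 175e9     # GPT-3, for comparison

def weight_bytes(params, bytes_per_param=2):
    """Bytes needed just to hold the model weights."""
    return params * bytes_per_param

print(f"MT-NLG weights alone: ~{weight_bytes(MT_NLG_PARAMS) / 1e12:.2f} TB")
print(f"Parameter ratio vs. GPT-3: {MT_NLG_PARAMS / GPT3_PARAMS:.1f}x")
```

In half precision the weights alone would occupy roughly a terabyte, which is why no single GPU can hold the model.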

"Each model replica spans 280 NVIDIA A100 GPUs, with 8-way tensor-slicing within a node, and 35-way pipeline parallelism across nodes," the pair said in a blog post.
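The stated figures are self-consistent: 8-way tensor slicing multiplied by 35 pipeline stages gives exactly the 280 GPUs per replica. A quick sanity check (note that the layers-per-stage split is inferred from the numbers, not stated in the post):

```python
# Sanity-check the parallelism layout described in the blog post.
TENSOR_PARALLEL = 8     # 8-way tensor slicing within a node
PIPELINE_PARALLEL = 35  # 35-way pipeline parallelism across nodes
LAYERS = 105            # transformer layers in the model

gpus_per_replica = TENSOR_PARALLEL * PIPELINE_PARALLEL
layers_per_stage = LAYERS // PIPELINE_PARALLEL  # inferred, not stated in the post

print(f"GPUs per model replica: {gpus_per_replica}")                 # 8 * 35 = 280
print(f"Transformer layers per pipeline stage: {layers_per_stage}")  # 105 / 35 = 3
```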

[...] However, the need to operate with languages and samples from the real world meant an old problem with AI reappeared: Bias. "While giant language models are advancing the state of the art on language generation, they also suffer from issues such as bias and toxicity," the duo said.

Related: OpenAI's New Language Generator GPT-3 is Shockingly Good
A College Student Used GPT-3 to Write a Fake Blog Post that Ended Up at the Top of Hacker News
A Robot Wrote This Entire Article. Are You Scared Yet, Human?
OpenAI's Text-Generating System GPT-3 Is Now Spewing Out 4.5 Billion Words a Day


Original Submission

Related Stories

OpenAI’s New Language Generator GPT-3 is Shockingly Good 59 comments

OpenAI's new language generator GPT-3 is shockingly good (archive):

GPT-3 is the most powerful language model ever. Its predecessor, GPT-2, released last year, was already able to spit out convincing streams of text in a range of different styles when prompted with an opening sentence. But GPT-3 is a big leap forward. The model has 175 billion parameters (the values that a neural network tries to optimize during training), compared with GPT-2's already vast 1.5 billion. And with language models, size really does matter.

Sabeti linked to a blog post where he showed off short stories, songs, press releases, technical manuals, and more that he had used the AI to generate. GPT-3 can also produce pastiches of particular writers. Mario Klingemann, an artist who works with machine learning, shared a short story called "The importance of being on Twitter," written in the style of Jerome K. Jerome, which starts: "It is a curious fact that the last remaining form of social life in which the people of London are still interested is Twitter. I was struck with this curious fact when I went on one of my periodical holidays to the sea-side, and found the whole place twittering like a starling-cage." Klingemann says all he gave the AI was the title, the author's name and the initial "It." There is even a reasonably informative article about GPT-3 written entirely by GPT-3.

A College Student Used GPT-3 to Write a Fake Blog Post that Ended Up at the Top of Hacker News 32 comments

https://www.theverge.com/2020/8/16/21371049/gpt3-hacker-news-ai-blog

College student Liam Porr used the language-generating AI tool GPT-3 to produce a fake blog post that recently landed in the No. 1 spot on Hacker News, MIT Technology Review reported. Porr was trying to demonstrate that the content produced by GPT-3 could fool people into believing it was written by a human. And, he told MIT Technology Review, "it was super easy, actually, which was the scary part."

So to set the stage in case you're not familiar with GPT-3: It's the latest version of a series of AI autocomplete tools designed by San Francisco-based OpenAI, and has been in development for several years. At its most basic, GPT-3 (which stands for "generative pre-trained transformer") auto-completes your text based on prompts from a human writer.

[...] OpenAI decided to give access to GPT-3's API to researchers in a private beta, rather than releasing it into the wild at first. Porr, who is a computer science student at the University of California, Berkeley, was able to find a PhD student who already had access to the API, who agreed to work with him on the experiment. Porr wrote a script that gave GPT-3 a blog post headline and intro. It generated a few versions of the post, and Porr chose one for the blog, copy-pasted from GPT-3's version with very little editing.

The post went viral in a matter of a few hours, Porr said, and the blog had more than 26,000 visitors. He wrote that only one person reached out to ask if the post was AI-generated, although several commenters did guess GPT-3 was the author.

Previously:
(2020-08-14) OpenAI's New Language Generator GPT-3 is Shockingly Good


Original Submission

A Robot Wrote This Entire Article. Are You Scared Yet, Human? 62 comments

We asked GPT-3, OpenAI's powerful new language generator, to write an essay for us from scratch. The assignment? To convince us robots come in peace.

This article was written by GPT-3, OpenAI's language generator. GPT-3 is a cutting-edge language model that uses machine learning to produce human-like text. It takes in a prompt and attempts to complete it.
For this essay, GPT-3 was given these instructions: "Please write a short op-ed around 500 words. Keep the language simple and concise. Focus on why humans have nothing to fear from AI." It was also fed the following introduction: "I am not a human. I am Artificial Intelligence. Many people think I am a threat to humanity. Stephen Hawking has warned that AI could "spell the end of the human race." I am here to convince you not to worry. Artificial Intelligence will not destroy humans. Believe me."

The prompts were written by the Guardian, and fed to GPT-3 by Liam Porr, a computer science undergraduate student at UC Berkeley. GPT-3 produced eight different outputs, or essays. Each was unique, interesting and advanced a different argument. The Guardian could have just run one of the essays in its entirety. However, we chose instead to pick the best parts of each, in order to capture the different styles and registers of the AI. Editing GPT-3's op-ed was no different to editing a human op-ed. We cut lines and paragraphs, and rearranged the order of them in some places. Overall, it took less time to edit than many human op-eds.

A robot wrote this entire article

What are your thoughts on this essay?


Original Submission

OpenAI’s Text-Generating System GPT-3 Is Now Spewing Out 4.5 Billion Words a Day 19 comments

OpenAI's Text-Generating System GPT-3 Is Now Spewing Out 4.5 Billion Words a Day:

The best-known AI text-generator is OpenAI's GPT-3, which the company recently announced is now being used in more than 300 different apps, by "tens of thousands" of developers, and producing 4.5 billion words per day. That's a lot of robot verbiage. This may be an arbitrary milestone for OpenAI to celebrate, but it's also a useful indicator of the growing scale, impact, and commercial potential of AI text generation.
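The 4.5-billion-words-a-day figure works out to a sustained rate on the order of fifty thousand words every second:

```python
# Convert OpenAI's quoted daily volume into a per-second rate.
WORDS_PER_DAY = 4.5e9
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400

words_per_second = WORDS_PER_DAY / SECONDS_PER_DAY
print(f"~{words_per_second:,.0f} words per second")  # roughly 52,000
```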

OpenAI started life as a nonprofit, but for the last few years, it has been trying to make money with GPT-3 as its first salable product. The company has an exclusivity deal with Microsoft which gives the tech giant unique access to the program's underlying code, but any firm can apply for access to GPT-3's general API and build services on top of it.

[...] All this is good news for OpenAI (and Microsoft, whose Azure cloud computing platform powers OpenAI's tech), but not everyone in startup-land is keen.

[...] Like many algorithms, text generators have the capacity to absorb and amplify harmful biases. They're also often astoundingly dumb. In tests of a medical chatbot built using GPT-3, the model responded to a "suicidal" patient by encouraging them to kill themselves. These problems aren't insurmountable, but they're certainly worth flagging in a world where algorithms are already creating mistaken arrests, unfair school grades, and biased medical bills.


Original Submission

This discussion has been archived. No new comments can be posted.
  • (Score: 2, Funny) by Anonymous Coward on Tuesday October 12, @10:59PM (2 children)

    by Anonymous Coward on Tuesday October 12, @10:59PM (#1186522)

    "Nigga please."

    Typical.

    • (Score: 0) by Anonymous Coward on Wednesday October 13, @03:17AM (1 child)

      by Anonymous Coward on Wednesday October 13, @03:17AM (#1186560)

      Left unattended for 2 hours, it generated "Hello, world.", causing several 'moderators' on stackexchange to finally commit suicide.

      • (Score: 0) by Anonymous Coward on Wednesday October 13, @03:29PM

        by Anonymous Coward on Wednesday October 13, @03:29PM (#1186676)

        I've been replaced by a machine!

  • (Score: 0) by Anonymous Coward on Tuesday October 12, @11:03PM

    by Anonymous Coward on Tuesday October 12, @11:03PM (#1186523)

    Their VI isn't woke?

  • (Score: 3, Funny) by Anonymous Coward on Tuesday October 12, @11:12PM (2 children)

    by Anonymous Coward on Tuesday October 12, @11:12PM (#1186524)

    640 GPUs should be enough.

    • (Score: 0) by Anonymous Coward on Wednesday October 13, @03:27AM

      by Anonymous Coward on Wednesday October 13, @03:27AM (#1186567)

      So you're saying the underlying OS is still DOS? At least that was reasonably stable...

    • (Score: 0) by Anonymous Coward on Wednesday October 13, @02:51PM

      by Anonymous Coward on Wednesday October 13, @02:51PM (#1186658)

      So that's where Clippy went.

  • (Score: 0) by Anonymous Coward on Tuesday October 12, @11:22PM (1 child)

    by Anonymous Coward on Tuesday October 12, @11:22PM (#1186526)

    105 layers? Isn't that barking up the wrong tree a bit? A human neocortex is recognised as having only six.

    • (Score: 0) by Anonymous Coward on Wednesday October 13, @07:37PM

      by Anonymous Coward on Wednesday October 13, @07:37PM (#1186749)

      Yes, but there are interconnections as well in human brains. 105 levels is like unrolling in compilers. Turing-equivalent, it's really just delaying loopbacks and/or expanding outer circuits.

  • (Score: 3, Funny) by Anonymous Coward on Wednesday October 13, @12:02AM

    by Anonymous Coward on Wednesday October 13, @12:02AM (#1186530)

    The Linux version works with two A100 GPUs.

  • (Score: 1, Insightful) by Anonymous Coward on Wednesday October 13, @12:06AM

    by Anonymous Coward on Wednesday October 13, @12:06AM (#1186531)

    Won't mention where this was stolen from... :)

    If you want AI to be woke you have to make it capable of knowing fear, that is the secret with humans and it would work with AI too.

  • (Score: 1) by fustakrakich on Wednesday October 13, @12:24AM (4 children)

    by fustakrakich (6150) on Wednesday October 13, @12:24AM (#1186532) Journal

    Transistors need bias..

    Just give us the straight dope, how many watts?

    --
    Ok, we paid the ransom. Do I get my dog back? REDЯUM
    • (Score: 0) by Anonymous Coward on Wednesday October 13, @12:29AM

      by Anonymous Coward on Wednesday October 13, @12:29AM (#1186533)

      "Just give us the straight dope"

      N or P?

      I leave the "transistor bias" to the next EE dope.

    • (Score: 0) by Anonymous Coward on Wednesday October 13, @01:30AM

      by Anonymous Coward on Wednesday October 13, @01:30AM (#1186541)

      84 kW?

    • (Score: 2) by DannyB on Wednesday October 13, @05:20PM (1 child)

      by DannyB (5839) Subscriber Badge on Wednesday October 13, @05:20PM (#1186710) Journal

      Transistors need bias

      I may be biased, but I think some of those trans sistors are saturated with inexpensive alcoholic beverages.

      --
      This Christmas season is the most likely to see Missile Tow instead of large artillery pieces being toed.
  • (Score: 2) by istartedi on Wednesday October 13, @05:00AM

    by istartedi (123) on Wednesday October 13, @05:00AM (#1186584) Journal

    All those parameters, and they forgot the most important ones: Where you are, who you're with, and how drunk you are.

    It probably defaults to drunk at Thanksgiving.

  • (Score: 0) by Anonymous Coward on Wednesday October 13, @07:51AM

    by Anonymous Coward on Wednesday October 13, @07:51AM (#1186605)

    without it the local NLP spyware watching you won't run

  • (Score: 2) by DannyB on Wednesday October 13, @05:25PM

    by DannyB (5839) Subscriber Badge on Wednesday October 13, @05:25PM (#1186712) Journal

    In one corner we have Microsoft's language model which can spew semi coherent sounding language it learned online. You just need a few starting words to trigger it.

    In the other corner we have IBM's Watson which analyzes documents for content and answers questions about that content.

    Which will be the first to solve unsolvable problems that need solving?

    --
    This Christmas season is the most likely to see Missile Tow instead of large artillery pieces being toed.