
posted by Fnord666 on Thursday September 10 2020, @09:11PM   Printer-friendly
from the that's-what-they-all-say dept.

We asked GPT-3, OpenAI's powerful new language generator, to write an essay for us from scratch. The assignment? To convince us robots come in peace.

This article was written by GPT-3, OpenAI's language generator. GPT-3 is a cutting-edge language model that uses machine learning to produce human-like text. It takes in a prompt and attempts to complete it.
For this essay, GPT-3 was given these instructions: "Please write a short op-ed around 500 words. Keep the language simple and concise. Focus on why humans have nothing to fear from AI." It was also fed the following introduction: "I am not a human. I am Artificial Intelligence. Many people think I am a threat to humanity. Stephen Hawking has warned that AI could "spell the end of the human race." I am here to convince you not to worry. Artificial Intelligence will not destroy humans. Believe me."

The prompts were written by the Guardian, and fed to GPT-3 by Liam Porr, a computer science undergraduate student at UC Berkeley. GPT-3 produced eight different outputs, or essays. Each was unique, interesting and advanced a different argument. The Guardian could have just run one of the essays in its entirety. However, we chose instead to pick the best parts of each, in order to capture the different styles and registers of the AI. Editing GPT-3's op-ed was no different to editing a human op-ed. We cut lines and paragraphs, and rearranged the order of them in some places. Overall, it took less time to edit than many human op-eds.

A robot wrote this entire article

What are your thoughts on this essay?


Original Submission

This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 5, Informative) by FatPhil on Thursday September 10 2020, @09:53PM (3 children)

    by FatPhil (863) <{pc-soylent} {at} {asdf.fi}> on Thursday September 10 2020, @09:53PM (#1049210) Homepage
    """
    The prompts were written by the Guardian, and fed to GPT-3 by Liam Porr, a computer science undergraduate student at UC Berkeley. GPT-3 produced eight different outputs, or essays. Each was unique, interesting and advanced a different argument. The Guardian could have just run one of the essays in its entirety. However, we chose instead to pick the best parts of each, in order to capture the different styles and registers of the AI. Editing GPT-3’s op-ed was no different to editing a human op-ed. We cut lines and paragraphs, and rearranged the order of them in some places. Overall, it took less time to edit than many human op-eds.
    """

    Therefore:
    - 8 robots, not 1, contributed to the article. "A robot wrote this entire article" is a complete and utter lie.
    - An editor did a hatchet job removing 80-90% of the output from the robots. Almost everything of what the robots wrote wasn't worth publishing, as it didn't push the human's chosen (or brainwashed) narrative closely enough.
    - If the final two sentences of that quoted paragraph are to make any sense or be relevant, we can only infer that the grauniad tasks 8 humans to write its op-eds. Which is retarded. Which therefore makes it entirely believable, given the Gruaniad's reputation in recent decades (disclaimer - it was actually a pretty good newspaper in the 80s and early 90s, I was a regular reader, before it went retarded).

    Bin.
    --
    Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
    Starting Score:    1  point
    Moderation   +3  
       Flamebait=1, Insightful=1, Informative=2, Underrated=1, Total=5
    Extra 'Informative' Modifier   0  
    Karma-Bonus Modifier   +1  

    Total Score:   5  
  • (Score: 0) by Anonymous Coward on Thursday September 10 2020, @11:04PM

    by Anonymous Coward on Thursday September 10 2020, @11:04PM (#1049241)

    The rot began in the '90s with Melanie Phillips leaving but the Graun was still readable into the mid '00s. Then objective journalism was fully replaced with stupid opinion pieces as if their Saturday Guide team took over. Finally they hired Owen Jones and went 6th form politics - never go full retard!

    It's something that was broadly echoed on the internet with increasingly vacuous faux-left publications and a general shift in focus from pretending to support the working and lower middle classes to championing minorities, fringe issues and moral grandstanding over anti-social causes. These media outlets can't collapse fast enough!

  • (Score: 0) by Anonymous Coward on Friday September 11 2020, @03:23PM (1 child)

    by Anonymous Coward on Friday September 11 2020, @03:23PM (#1049531)

    Thanks for finding and highlighting this. I was trying to find the "catch" but didn't see it ("hidden" in italics in the bottom... silly me).

    The whole thing flowed better and was more cogent than most of the AI-generated stuff I've seen in the past. If it was human-"edited," that would make much more sense. It's like if somebody took the Oxford English Dictionary and "edited" it, they could create a good novel.

    I don't want to understate this achievement. Having an AI understand the request, let alone generate relevant and comprehensible output, is very impressive. However, I don't think it should be overstated, either; I was a nay-sayer until AlphaGo actually beat Lee Sedol.

    At the rate we are going, we will definitely get there eventually. I don't think we are "quite" "there" ... "yet."

    • (Score: 2) by FatPhil on Saturday September 12 2020, @12:17AM

      by FatPhil (863) <{pc-soylent} {at} {asdf.fi}> on Saturday September 12 2020, @12:17AM (#1049743) Homepage
      I think we're definitely past the "can fool a typical human" version of the Turing test now. I'm sure the 8 articles all had some merits, and some reason to feel concerned for humans' usefulness in various pursuits in the future. However, the editor weakened his point by attempting to strengthen it.
      --
      Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves