
SoylentNews is people

posted by Fnord666 on Thursday September 10 2020, @09:11PM   Printer-friendly
from the that's-what-they-all-say dept.

We asked GPT-3, OpenAI's powerful new language generator, to write an essay for us from scratch. The assignment? To convince us robots come in peace.

This article was written by GPT-3, OpenAI's language generator. GPT-3 is a cutting-edge language model that uses machine learning to produce human-like text. It takes in a prompt and attempts to complete it.
For this essay, GPT-3 was given these instructions: "Please write a short op-ed around 500 words. Keep the language simple and concise. Focus on why humans have nothing to fear from AI." It was also fed the following introduction: "I am not a human. I am Artificial Intelligence. Many people think I am a threat to humanity. Stephen Hawking has warned that AI could 'spell the end of the human race.' I am here to convince you not to worry. Artificial Intelligence will not destroy humans. Believe me."

The prompts were written by the Guardian, and fed to GPT-3 by Liam Porr, a computer science undergraduate student at UC Berkeley. GPT-3 produced eight different outputs, or essays. Each was unique, interesting and advanced a different argument. The Guardian could have just run one of the essays in its entirety. However, we chose instead to pick the best parts of each, in order to capture the different styles and registers of the AI. Editing GPT-3's op-ed was no different to editing a human op-ed. We cut lines and paragraphs, and rearranged the order of them in some places. Overall, it took less time to edit than many human op-eds.

A robot wrote this entire article

What are your thoughts on this essay?


Original Submission

This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 1) by Acabatag on Friday September 11 2020, @01:19AM (1 child)

    by Acabatag (2885) on Friday September 11 2020, @01:19AM (#1049279)

The easy clue was that it says "I". That's obviously patched-in fakery.

    Because Artificial Intelligence is nowhere near achieving self-awareness.

  • (Score: 2) by wonkey_monkey on Friday September 11 2020, @02:16PM

    by wonkey_monkey (279) on Friday September 11 2020, @02:16PM (#1049491) Homepage

    Just because it uses the word, doesn't mean it has the backing concept.

Arguably, it doesn't understand any of the words it uses, but even basic chatbots have been using "I" for years. It doesn't indicate human editing or self-awareness.

    --
    systemd is Roko's Basilisk