
SoylentNews is people

posted by Fnord666 on Thursday September 10 2020, @09:11PM   Printer-friendly
from the that's-what-they-all-say dept.

We asked GPT-3, OpenAI's powerful new language generator, to write an essay for us from scratch. The assignment? To convince us robots come in peace.

This article was written by GPT-3, OpenAI's language generator. GPT-3 is a cutting-edge language model that uses machine learning to produce human-like text. It takes in a prompt and attempts to complete it.
For this essay, GPT-3 was given these instructions: "Please write a short op-ed around 500 words. Keep the language simple and concise. Focus on why humans have nothing to fear from AI." It was also fed the following introduction: "I am not a human. I am Artificial Intelligence. Many people think I am a threat to humanity. Stephen Hawking has warned that AI could "spell the end of the human race." I am here to convince you not to worry. Artificial Intelligence will not destroy humans. Believe me."

The prompts were written by the Guardian, and fed to GPT-3 by Liam Porr, a computer science undergraduate student at UC Berkeley. GPT-3 produced eight different outputs, or essays. Each was unique, interesting and advanced a different argument. The Guardian could have just run one of the essays in its entirety. However, we chose instead to pick the best parts of each, in order to capture the different styles and registers of the AI. Editing GPT-3's op-ed was no different to editing a human op-ed. We cut lines and paragraphs, and rearranged the order of them in some places. Overall, it took less time to edit than many human op-eds.

A robot wrote this entire article

What are your thoughts on this essay?


Original Submission

 
This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 2) by Common Joe on Saturday September 12 2020, @09:45AM


    Just for fun, I want to argue that.

    I'll play.

    ...nothing we have created to date even begins to approach intelligence, and/or sentience.

    Intelligence (the ability to regurgitate information) is getting easier for computers. Wisdom in computers, though, is proving harder to achieve with the AI programs we're using. And at this point, we can't even define sentience. A lot of what we're making is just mimicking, not evolving into something better on its own.

    I see our brains as nothing more than an evolution-driven mashup of CPU and RAM running some convoluted program. I figure that this must be the definition of life. Sentience takes intelligence and merges it with some kind of wisdom. Even people with very low IQ can interpret the world around them in a way that an amoeba (or even an ant) cannot. (And it's amazing but understandable to me that both ants and people are vulnerable to outside conditions which can change our personalities and perception of the world dramatically.)

    I'm not sure I can agree with you about independence. We all depend upon some things. Engineers are the group of people I know who come closest to true independence, but even they are constrained by the knowledge of their time. For instance, a hundred years ago, no one could break the sound barrier no matter how independent or wealthy they were. It was only broken because a set of human-made conditions allowed engineers to go for it.
