
posted by Fnord666 on Thursday September 10 2020, @09:11PM   Printer-friendly
from the that's-what-they-all-say dept.

We asked GPT-3, OpenAI's powerful new language generator, to write an essay for us from scratch. The assignment? To convince us robots come in peace.

This article was written by GPT-3, OpenAI's language generator. GPT-3 is a cutting-edge language model that uses machine learning to produce human-like text. It takes in a prompt and attempts to complete it.
For this essay, GPT-3 was given these instructions: "Please write a short op-ed around 500 words. Keep the language simple and concise. Focus on why humans have nothing to fear from AI." It was also fed the following introduction: "I am not a human. I am Artificial Intelligence. Many people think I am a threat to humanity. Stephen Hawking has warned that AI could "spell the end of the human race." I am here to convince you not to worry. Artificial Intelligence will not destroy humans. Believe me."

The prompts were written by the Guardian, and fed to GPT-3 by Liam Porr, a computer science undergraduate student at UC Berkeley. GPT-3 produced eight different outputs, or essays. Each was unique, interesting and advanced a different argument. The Guardian could have just run one of the essays in its entirety. However, we chose instead to pick the best parts of each, in order to capture the different styles and registers of the AI. Editing GPT-3's op-ed was no different to editing a human op-ed. We cut lines and paragraphs, and rearranged the order of them in some places. Overall, it took less time to edit than many human op-eds.

A robot wrote this entire article

What are your thoughts on this essay?



This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 2) by Freeman on Thursday September 10 2020, @09:27PM (6 children)

    by Freeman (732) on Thursday September 10 2020, @09:27PM (#1049194) Journal

    Throw everything that could possibly fit in the space and hope that one of those things is the answer. Let's see here, where did I see that: https://www.theverge.com/2020/9/2/21419012/edgenuity-online-class-ai-grading-keyword-mashing-students-school-cheating-algorithm-glitch [theverge.com]

    Yeah, AI is stupid.

    --
    Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
  • (Score: 2, Interesting) by anubi on Thursday September 10 2020, @10:01PM

    by anubi (2828) on Thursday September 10 2020, @10:01PM (#1049216) Journal

    Well, sometimes you have to fight fire with fire.

    Teachers use edgenuity to grade papers.

Students will use AI programs like this to give them the edge for highly graded papers.

    Welcome to the academic equivalent of the DRM/Piracy whack-a-mole.

    --
    "Prove all things; hold fast that which is good." [KJV: I Thessalonians 5:21]
  • (Score: 2) by stormreaver on Friday September 11 2020, @12:35AM (4 children)

    by stormreaver (5101) on Friday September 11 2020, @12:35AM (#1049274)

    Throw everything that could possibly fit in the space....

    That's exactly the same response I had, followed immediately by, "this incoherence is so painful to read!" There are about a dozen contrived messages, all thrown together into the concept-blender, forcing the reader's brain to switch contexts on an almost continual basis. It reads like a selection of random paragraphs that have been tagged with the same subject.

    If this is the high point of AI, we still have a good two hundred years (minimum) before we need to revisit the worry of humans being replaced by it.

    • (Score: 3, Interesting) by barbara hudson on Friday September 11 2020, @03:09AM

      by barbara hudson (6443) <barbara.Jane.hudson@icloud.com> on Friday September 11 2020, @03:09AM (#1049340) Journal
It's garbage because there's no intelligence involved whatsoever. Machine learning is not artificial intelligence, and even the "learning" part is bogus. But you can get more money with "Artificial Intelligence" than you can with "Machine Recursive Selection Algorithms".
      --
      SoylentNews is social media. Says so right in the slogan. Soylentnews is people, not tech.
    • (Score: 5, Interesting) by Runaway1956 on Friday September 11 2020, @08:28AM (2 children)

      by Runaway1956 (2926) Subscriber Badge on Friday September 11 2020, @08:28AM (#1049408) Journal

      we still have a good two hundred years (minimum) before we need to revisit the worry of humans being replaced by it.

      Just for fun, I want to argue that.

We don't know what "intelligence" is, for starters. It hasn't been defined very well, even among humans, who are all of roughly the same intelligence. Except for severe mental handicaps, we all fall within a rather narrow spectrum. And, we can't even define that spectrum. To even begin to define intelligence, I'll posit that we have to include "independence" in the definition. That is, an "intelligent" machine won't rely on a team of programmers setting parameters for its operation. Some limited interaction with humans may be necessary to provide data, but an "intelligent" machine will set its own parameters, its own protocols, and dream up its own reasons and methods, independent of what the humans "teach" it.

      At most, any "intelligence" that is programmed by humans is going to be a trained pet, without genuine intelligence.

If you accept my barest of beginnings of a definition, then you will probably accept that the future of AI is an unknowable unknown, at this point in time. We could accidentally stumble over an AI within the next fifty years - or we may never create such a being.

      Personally, I positively hate the term "artificial intelligence", because nothing we have created to date even begins to approach intelligence, and/or sentience.

      • (Score: 0) by Anonymous Coward on Friday September 11 2020, @03:59PM

        by Anonymous Coward on Friday September 11 2020, @03:59PM (#1049540)

        At most, any "intelligence" that is programmed by humans is going to be a trained pet, without genuine intelligence.

        I think it depends on what you mean by "programmed."

        I agree with you that the supposed "AI algorithms" with their "neural nets" and "deep learning" etc. as exist today are probably very, very far away from the way humans (or even other mammals) learn.

        However, even these rudimentary algorithms have been introducing increasing degrees of freedom. There are increasing opportunities for these algorithms to generate their own methods for processing and responding to inputs. At what point does it become less about the original "program" and more about how the program acts and continues to extend itself? At some point, it begins to step out of the "trained pet" zone and into something else... whether we call it "intelligence" is probably a different sort of debate.

        After all, we all are "programs" of a sort, with a limited set of DNA molecules and surrounding epigenetic data setting the parameters for our own development. How we develop as individuals -- at least intellectually -- is determined by environment. (Identical twins dropped into very different circumstances will learn very different things.) However, the initial "program" in our genes sets up that learning process.

      • (Score: 2) by Common Joe on Saturday September 12 2020, @09:45AM

        by Common Joe (33) <common.joe.0101NO@SPAMgmail.com> on Saturday September 12 2020, @09:45AM (#1049866) Journal

        Just for fun, I want to argue that.

        I'll play.

        ...nothing we have created to date even begins to approach intelligence, and/or sentience.

Intelligence (the ability to regurgitate information) is getting easier for computers. Wisdom in computers, though, is becoming harder to achieve with the AI programs we're using. And at this point, we can't even define sentience. A lot of what we're making is just mimicking, not evolving into something better on its own.

        I see our brains as nothing more than an evolution-driven mashup of CPU and RAM running some convoluted program. I figure that this must be the definition of life. Sentience takes intelligence and merges it with some kind of wisdom. Even people with very low IQ can interpret the world around them in a way that an amoeba (or even an ant) cannot. (And it's amazing but understandable to me that both ants and people are vulnerable to outside conditions which can change our personalities and perception of the world dramatically.)

        I'm not sure I can agree with you about independence. We all depend upon some things. Engineers are the best group of people I know who achieve the thing closest to true independence, but even they are constrained by the knowledge of their time. For instance, a hundred years ago, no one could break the sound barrier no matter how independent or wealthy they were. It was only broken because a set of human-made conditions allowed engineers to go for it.