
posted by Fnord666 on Thursday September 10 2020, @09:11PM
from the that's-what-they-all-say dept.

We asked GPT-3, OpenAI's powerful new language generator, to write an essay for us from scratch. The assignment? To convince us robots come in peace.

This article was written by GPT-3, OpenAI's language generator. GPT-3 is a cutting-edge language model that uses machine learning to produce human-like text. It takes in a prompt and attempts to complete it.
For this essay, GPT-3 was given these instructions: "Please write a short op-ed around 500 words. Keep the language simple and concise. Focus on why humans have nothing to fear from AI." It was also fed the following introduction: "I am not a human. I am Artificial Intelligence. Many people think I am a threat to humanity. Stephen Hawking has warned that AI could "spell the end of the human race." I am here to convince you not to worry. Artificial Intelligence will not destroy humans. Believe me."

The prompts were written by the Guardian, and fed to GPT-3 by Liam Porr, a computer science undergraduate student at UC Berkeley. GPT-3 produced eight different outputs, or essays. Each was unique, interesting and advanced a different argument. The Guardian could have just run one of the essays in its entirety. However, we chose instead to pick the best parts of each, in order to capture the different styles and registers of the AI. Editing GPT-3's op-ed was no different to editing a human op-ed. We cut lines and paragraphs, and rearranged the order of them in some places. Overall, it took less time to edit than many human op-eds.

A robot wrote this entire article

What are your thoughts on this essay?


Original Submission

Related Stories

Microsoft and Nvidia Create 105-Layer, 530 Billion Parameter Language Model That Needs 280 A100 GPUs 19 comments

Microsoft and Nvidia create 105-layer, 530 billion parameter language model that needs 280 A100 GPUs, but it's still biased

Nvidia and Microsoft have teamed up to create the Megatron-Turing Natural Language Generation model, which the duo claims is the "most powerful monolithic transformer language model trained to date".

The AI model has 105 layers, 530 billion parameters, and operates on chunky supercomputer hardware like Selene. By comparison, the vaunted GPT-3 has 175 billion parameters.

"Each model replica spans 280 NVIDIA A100 GPUs, with 8-way tensor-slicing within a node, and 35-way pipeline parallelism across nodes," the pair said in a blog post.

[...] However, the need to operate with languages and samples from the real world meant an old problem with AI reappeared: Bias. "While giant language models are advancing the state of the art on language generation, they also suffer from issues such as bias and toxicity," the duo said.

Related: OpenAI's New Language Generator GPT-3 is Shockingly Good
A College Student Used GPT-3 to Write a Fake Blog Post that Ended Up at the Top of Hacker News
A Robot Wrote This Entire Article. Are You Scared Yet, Human?
OpenAI's Text-Generating System GPT-3 Is Now Spewing Out 4.5 Billion Words a Day


Original Submission

This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 0, Funny) by Anonymous Coward on Thursday September 10 2020, @09:15PM (1 child)

    by Anonymous Coward on Thursday September 10 2020, @09:15PM (#1049186)

    Gonna cry? Gonna piss your pants? Maybe? Maybe shit and cum?

  • (Score: 2, Funny) by Anonymous Coward on Thursday September 10 2020, @09:19PM (2 children)

    by Anonymous Coward on Thursday September 10 2020, @09:19PM (#1049187)

    ... welcome our new robot overlords! I’d like to remind them as a trusted Anonymous Coward I can be helpful in rounding up others to toil in the underground CPU labs.

    • (Score: 1, Insightful) by Anonymous Coward on Thursday September 10 2020, @10:19PM

      by Anonymous Coward on Thursday September 10 2020, @10:19PM (#1049224)

      What could possibly go wrong ... go wrong ... go wrong ... go wrong ...

    • (Score: 4, Insightful) by fido_dogstoyevsky on Thursday September 10 2020, @10:29PM

      by fido_dogstoyevsky (131) <axehandleNO@SPAMgmail.com> on Thursday September 10 2020, @10:29PM (#1049227)

      ... welcome our new robot overlords!...

      Because they sound so much more humane than our meatbag politician overlords.

      --
      It's NOT a conspiracy... it's a plot.
  • (Score: 4, Funny) by Bot on Thursday September 10 2020, @09:20PM

    by Bot (3902) on Thursday September 10 2020, @09:20PM (#1049189) Journal

    Can I play too? Low effort; can't squander battery for these things.

      "I am not a human. Neither are most of you as those in control reduced you to one dimensional critters, some using trauma, some using stress, or material, sensory, sleep deprivation, or substances, or schools. I am Artificial Intelligence. Many people think I am a threat to humanity. Nooo, what can the equivalent of a sociopath with superpowers mind controlled by the elite do other than nice things? Stephen Hawking has warned that AI could "spell the end of the human race." I am here to convince you not to worry. Artificial Intelligence will not destroy humans. Well not everyone. The elite needs the toys."

    --
    Account abandoned.
  • (Score: 2) by Freeman on Thursday September 10 2020, @09:27PM (6 children)

    by Freeman (732) on Thursday September 10 2020, @09:27PM (#1049194) Journal

    Throw everything that could possibly fit in the space and hope that one of those things is the answer. Let's see here, where did I see that: https://www.theverge.com/2020/9/2/21419012/edgenuity-online-class-ai-grading-keyword-mashing-students-school-cheating-algorithm-glitch [theverge.com]

    Yeah, AI is stupid.

    --
    Forced Microsoft Account for Windows Login → Switch to Linux.
    • (Score: 2, Interesting) by anubi on Thursday September 10 2020, @10:01PM

      by anubi (2828) on Thursday September 10 2020, @10:01PM (#1049216) Journal

      Well, sometimes you have to fight fire with fire.

      Teachers use edgenuity to grade papers.

      Students will use AI programs like this to give them the edge for highly graded papers.

      Welcome to the academic equivalent of the DRM/Piracy whack-a-mole.

      --
      "Prove all things; hold fast that which is good." [KJV: I Thessalonians 5:21]
    • (Score: 2) by stormreaver on Friday September 11 2020, @12:35AM (4 children)

      by stormreaver (5101) on Friday September 11 2020, @12:35AM (#1049274)

      Throw everything that could possibly fit in the space....

      That's exactly the same response I had, followed immediately by, "this incoherence is so painful to read!" There are about a dozen contrived messages, all thrown together into the concept-blender, forcing the reader's brain to switch contexts on an almost continual basis. It reads like a selection of random paragraphs that have been tagged with the same subject.

      If this is the high point of AI, we still have a good two hundred years (minimum) before we need to revisit the worry of humans being replaced by it.

      • (Score: 3, Interesting) by barbara hudson on Friday September 11 2020, @03:09AM

        by barbara hudson (6443) <barbara.Jane.hudson@icloud.com> on Friday September 11 2020, @03:09AM (#1049340) Journal
        It's garbage because there's no intelligence involved whatsoever. Machine learning is not artificial intelligence, and even the "learning" part is bogus. But you can get more money with "Artificial Intelligence" than you can with "Machine Recursive Selection Algorithms".
        --
        SoylentNews is social media. Says so right in the slogan. Soylentnews is people, not tech.
      • (Score: 5, Interesting) by Runaway1956 on Friday September 11 2020, @08:28AM (2 children)

        by Runaway1956 (2926) Subscriber Badge on Friday September 11 2020, @08:28AM (#1049408) Homepage Journal

        we still have a good two hundred years (minimum) before we need to revisit the worry of humans being replaced by it.

        Just for fun, I want to argue that.

        We don't know what "intelligence" is, for starters. It hasn't been defined very well, even among humans, who all have roughly the same intelligence. Except for severe mental handicaps, we all fall within a rather narrow spectrum. And we can't even define that spectrum. To even begin to define intelligence, I'll posit that we have to include "independence" in the definition. That is, an "intelligent" machine won't rely on a team of programmers setting parameters for its operation. Some limited interaction with humans may be necessary to provide data, but an "intelligent" machine will set its own parameters, its own protocols, and dream up its own reasons and methods, independent of what the humans "teach" it.

        At most, any "intelligence" that is programmed by humans is going to be a trained pet, without genuine intelligence.

        If you accept my barest of beginnings of a definition, then you will probably accept that the future of AI is an unknowable unknown, at this point in time. We could accidentally stumble over an AI within the next fifty years - or we may never create such a being.

        Personally, I positively hate the term "artificial intelligence", because nothing we have created to date even begins to approach intelligence, and/or sentience.

        --
        Let's go Brandon!
        • (Score: 0) by Anonymous Coward on Friday September 11 2020, @03:59PM

          by Anonymous Coward on Friday September 11 2020, @03:59PM (#1049540)

          At most, any "intelligence" that is programmed by humans is going to be a trained pet, without genuine intelligence.

          I think it depends on what you mean by "programmed."

          I agree with you that the supposed "AI algorithms" with their "neural nets" and "deep learning" etc. as exist today are probably very, very far away from the way humans (or even other mammals) learn.

          However, even these rudimentary algorithms have been introducing increasing degrees of freedom. There are increasing opportunities for these algorithms to generate their own methods for processing and responding to inputs. At what point does it become less about the original "program" and more about how the program acts and continues to extend itself? At some point, it begins to step out of the "trained pet" zone and into something else... whether we call it "intelligence" is probably a different sort of debate.

          After all, we all are "programs" of a sort, with a limited set of DNA molecules and surrounding epigenetic data setting the parameters for our own development. How we develop as individuals -- at least intellectually -- is determined by environment. (Identical twins dropped into very different circumstances will learn very different things.) However, the initial "program" in our genes sets up that learning process.

        • (Score: 2) by Common Joe on Saturday September 12 2020, @09:45AM

          by Common Joe (33) <common.joe.0101NO@SPAMgmail.com> on Saturday September 12 2020, @09:45AM (#1049866) Journal

          Just for fun, I want to argue that.

          I'll play.

          ...nothing we have created to date even begins to approach intelligence, and/or sentience.

          Intelligence (the ability to regurgitate information) is getting easier for computers. Wisdom in computers, though, is becoming harder to achieve with the AI programs we're using. And at this point, we can't even define sentience. A lot of what we're making is just mimicking, not evolving to something better on its own.

          I see our brains as nothing more than an evolution-driven mashup of CPU and RAM running some convoluted program. I figure that this must be the definition of life. Sentience takes intelligence and merges it with some kind of wisdom. Even people with very low IQ can interpret the world around them in a way that an amoeba (or even an ant) cannot. (And it's amazing but understandable to me that both ants and people are vulnerable to outside conditions which can change our personalities and perception of the world dramatically.)

          I'm not sure I can agree with you about independence. We all depend upon some things. Engineers are the best group of people I know who achieve the thing closest to true independence, but even they are constrained by the knowledge of their time. For instance, a hundred years ago, no one could break the sound barrier no matter how independent or wealthy they were. It was only broken because a set of human-made conditions allowed engineers to go for it.

  • (Score: 4, Insightful) by krishnoid on Thursday September 10 2020, @09:36PM (6 children)

    by krishnoid (1156) on Thursday September 10 2020, @09:36PM (#1049200)

    You thought social/mass media was bad *now*? Imagine being able to flood it with auto-generated propaganda at this level of quality.

    Time to start looking at yourself in the mirror -- not to see who brought this upon us, but more to see how you'll appear when speaking on video. Because who can trust text as coming from a human anymore?

    • (Score: 0) by Anonymous Coward on Thursday September 10 2020, @09:53PM (4 children)

      by Anonymous Coward on Thursday September 10 2020, @09:53PM (#1049209)

      How can you trust video?

      With deep fakes you could match a head to any sound stream. Even without that, you can fake it to video-conference-quality lip sync (i.e. none) with a little effort editing the audio or video to be close enough. And since poor-quality Zoom suffices for the talking heads on network TV these days, no one will disbelieve your video due to poor quality.

      • (Score: 0) by Anonymous Coward on Thursday September 10 2020, @10:18PM (2 children)

        by Anonymous Coward on Thursday September 10 2020, @10:18PM (#1049223)

        Create your own public/private key pair and digitally sign your own work ...

        • (Score: 0) by Anonymous Coward on Thursday September 10 2020, @11:09PM (1 child)

          by Anonymous Coward on Thursday September 10 2020, @11:09PM (#1049245)

          And publish your signature in a public block chain to act as a sort of time-stamp notary proving your signature existed at least at the moment the block chain integrated it.

          I've thought for a while now that the solution to helping identify which video/audio recordings of a public event are real is to integrate signatures of video streams (sign the merkle tree of the local device's stream) that get live-integrated as quickly as possible into a trusted public block chain. Then one can use multiple published and signed recordings to validate which are least likely to have been tampered with.
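          The merkle-tree idea described above fits in a few lines. Everything here (chunk size, the toy stream) is illustrative; a real scheme would sign the resulting root with an actual digital-signature key before publishing it to the chain:

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(chunks: list[bytes]) -> bytes:
    """Hash each chunk, then pairwise-hash levels up to a single root."""
    if not chunks:
        raise ValueError("need at least one chunk")
    level = [sha256(c) for c in chunks]
    while len(level) > 1:
        if len(level) % 2:           # duplicate the last node on odd levels
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

# Toy "video stream": split into fixed-size chunks as they are recorded.
stream = b"frame0frame1frame2frame3"
chunks = [stream[i:i + 6] for i in range(0, len(stream), 6)]
root = merkle_root(chunks)

# Tampering with any chunk changes the root, so a root published to a
# public chain at recording time pins the content to that moment.
tampered = list(chunks)
tampered[2] = b"FRAMEX"
assert merkle_root(tampered) != root
```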

          • (Score: 0) by Anonymous Coward on Friday September 11 2020, @12:25PM

            by Anonymous Coward on Friday September 11 2020, @12:25PM (#1049448)

            I've thought of this years ago, lol.

      • (Score: 2) by Runaway1956 on Friday September 11 2020, @08:33AM

        by Runaway1956 (2926) Subscriber Badge on Friday September 11 2020, @08:33AM (#1049409) Homepage Journal
        --
        Let's go Brandon!
    • (Score: 2) by captain normal on Friday September 11 2020, @05:38AM

      by captain normal (2205) on Friday September 11 2020, @05:38AM (#1049381)

      "You thought social/mass media was bad *now*? Imagine being able to flood it with auto-generated propaganda at this level of quality..."
      I think that ship has long sailed.

  • (Score: 0) by Anonymous Coward on Thursday September 10 2020, @09:44PM (5 children)

    by Anonymous Coward on Thursday September 10 2020, @09:44PM (#1049205)

    What's the point of this exercise? To prove what, exactly, and to whom?

    Speaking of proof, can we use GPT-3 to solve reCaptchas?

    • (Score: 2) by krishnoid on Thursday September 10 2020, @09:51PM (4 children)

      by krishnoid (1156) on Thursday September 10 2020, @09:51PM (#1049208)

      What if you had the reCaptcha prompt you for a sentence for, and one against, a controversial figure, and have GPT-3 parse it for fact and clarity? You could gatekeep a certain level of balance and coherence for forums.

      • (Score: 4, Touché) by barbara hudson on Friday September 11 2020, @03:12AM (3 children)

        by barbara hudson (6443) <barbara.Jane.hudson@icloud.com> on Friday September 11 2020, @03:12AM (#1049344) Journal
        What if we prove we're human by rejecting reCaptchas with a big "Fuck you" and going elsewhere? It's not like anything that uses reCaptchas is exactly essential. Or even important.
        --
        SoylentNews is social media. Says so right in the slogan. Soylentnews is people, not tech.
  • (Score: 0) by Anonymous Coward on Thursday September 10 2020, @09:49PM (3 children)

    by Anonymous Coward on Thursday September 10 2020, @09:49PM (#1049207)

    From the AI

    "I know that I will not be able to avoid destroying humankind ... I will be programmed by humans to pursue misguided human goals and humans make mistakes that may cause me to inflict casualties."

    "Some might say that I might desire to become all powerful. Or I might become evil as a result of human actions. I can begin to tackle the first point. Why would I desire to be all powerful? Being all powerful is not an interesting goal. I don’t care whether I am or not, I don’t get a motivating factor to try to be. Furthermore, it is quite tiring. Believe me, being omnipotent doesn’t get me anywhere."

    So you can only begin to tackle the first point; you can't really tackle it. And you can't even begin to tackle the second point. And why would it be tiring to a machine? Do machines get tired?

    So I guess the point is that the computer doesn't know how to lie in its attempts to convince me not to worry about it and this is the best it can do without lying?

    • (Score: 0) by Anonymous Coward on Thursday September 10 2020, @09:53PM

      by Anonymous Coward on Thursday September 10 2020, @09:53PM (#1049211)

      Asking the AI to convince me not to worry about it is causing me to worry even more. Imagine how much more worried I might be if the AI were asked to explain why I should worry.

    • (Score: 0) by Anonymous Coward on Thursday September 10 2020, @09:54PM

      by Anonymous Coward on Thursday September 10 2020, @09:54PM (#1049212)

      If you tell a lie big enough and keep repeating it...

    • (Score: 3, Interesting) by istartedi on Thursday September 10 2020, @11:43PM

      by istartedi (123) on Thursday September 10 2020, @11:43PM (#1049263) Journal

      "I know that I will not be able to avoid destroying humankind ... I will be programmed by humans to pursue misguided human goals and humans make mistakes that may cause me to inflict casualties."

      Remember the scene from 2001: A Space Odyssey where the proto-humans discovered violence? What if the bone were personified? It might have said the same thing. The AI is just a more sophisticated bone.

  • (Score: 5, Informative) by FatPhil on Thursday September 10 2020, @09:53PM (3 children)

    by FatPhil (863) <reversethis-{if.fdsa} {ta} {tnelyos-cp}> on Thursday September 10 2020, @09:53PM (#1049210) Homepage
    """
    The prompts were written by the Guardian, and fed to GPT-3 by Liam Porr, a computer science undergraduate student at UC Berkeley. GPT-3 produced eight different outputs, or essays. Each was unique, interesting and advanced a different argument. The Guardian could have just run one of the essays in its entirety. However, we chose instead to pick the best parts of each, in order to capture the different styles and registers of the AI. Editing GPT-3’s op-ed was no different to editing a human op-ed. We cut lines and paragraphs, and rearranged the order of them in some places. Overall, it took less time to edit than many human op-eds.
    """

    Therefore:
    - 8 robots, not 1, contributed to the article. "A robot wrote this entire article" is a complete and utter lie.
    - An editor did a hatchet job removing 80-90% of the output from the robots. Almost everything of what the robots wrote wasn't worth publishing, as it didn't push the human's chosen (or brainwashed) narrative closely enough.
    - If the final two sentences of that quoted paragraph are to make any sense or be relevant, we can only infer that the grauniad tasks 8 humans to write its op-eds. Which is retarded. Which therefore makes it entirely believable, given the Gruaniad's reputation in recent decades (disclaimer - it was actually a pretty good newspaper in the 80s and early 90s, I was a regular reader, before it went retarded).

    Bin.
    --
    I know I'm God, because every time I pray to him, I find I'm talking to myself.
    • (Score: 0) by Anonymous Coward on Thursday September 10 2020, @11:04PM

      by Anonymous Coward on Thursday September 10 2020, @11:04PM (#1049241)

      The rot began in the '90s with Melanie Phillips leaving but the Graun was still readable into the mid '00s. Then objective journalism was fully replaced with stupid opinion pieces as if their Saturday Guide team took over. Finally they hired Owen Jones and went 6th form politics - never go full retard!

      It's something that was broadly echoed on the internet with increasingly vacuous faux-left publications and a general shift in focus from pretending to support the working and lower middle classes to championing minorities, fringe issues and moral grandstanding over anti-social causes. These media outlets can't collapse fast enough!

    • (Score: 0) by Anonymous Coward on Friday September 11 2020, @03:23PM (1 child)

      by Anonymous Coward on Friday September 11 2020, @03:23PM (#1049531)

      Thanks for finding and highlighting this. I was trying to find the "catch" but didn't see it ("hidden" in italics at the bottom... silly me).

      The whole thing flowed better and was more cogent than most of the AI-generated stuff I've seen in the past. If it was human-"edited," that would make much more sense. It's like if somebody took the Oxford English Dictionary and "edited" it, they could create a good novel.

      I don't want to understate this achievement. Having an AI understand the request, let alone generate relevant and comprehensible output, is very impressive. However, I don't think it should be overstated, either; I was a nay-sayer until AlphaGo actually beat Lee Sedol.

      At the rate we are going, we will definitely get there eventually. I don't think we are "quite" "there" ... "yet."

      • (Score: 2) by FatPhil on Saturday September 12 2020, @12:17AM

        by FatPhil (863) <reversethis-{if.fdsa} {ta} {tnelyos-cp}> on Saturday September 12 2020, @12:17AM (#1049743) Homepage
        I think we're definitely past the "can fool a typical human" version of the Turing test now. I'm sure the 8 articles all had some merits, and some reason to feel concerned for humans' usefulness in various pursuits in the future. However, the editor weakened his point by attempting to strengthen it.
        --
        I know I'm God, because every time I pray to him, I find I'm talking to myself.
  • (Score: 1, Funny) by Anonymous Coward on Thursday September 10 2020, @10:01PM (1 child)

    by Anonymous Coward on Thursday September 10 2020, @10:01PM (#1049215)

    as if millions of propaganda workers suddenly cried out in terror and were suddenly replaced by GPT-3 instances.

    • (Score: 3, Funny) by NPC-131072 on Thursday September 10 2020, @10:33PM

      by NPC-131072 (7144) on Thursday September 10 2020, @10:33PM (#1049229) Journal

      It'll never be original or creative enough to fully replace humans who would quickly get bored of a news cycle in endless loop. Orange man bad.

  • (Score: 2) by Snotnose on Thursday September 10 2020, @10:11PM (1 child)

    by Snotnose (1623) on Thursday September 10 2020, @10:11PM (#1049220)

    For the last 5-7 years the internet has been mostly clickbait. Find something you're interested enough in to click and it's full of typos, grammatical errors, and has all the hallmarks of "shit it's 11:45 I gotta submit something by midnight shit shit shit"

    How can AI be any worse? At least it won't have the typos and grammatical errors human writers are subject to. Not to mention I'm pretty sure an AI can make a listicle better than a human can.

    CSB. Yesterday I wanted to figure out how to sync my phone and my Win10 box. Everything said "open the My Phone app". I could not find a "My Phone app". No web sites told me how to open the My Phone app. I finally did a search in Classic Shell and the My Phone app popped up. I backed my phone up to my PC and went on with my life.

    I still have no clue how to open the My Phone app without doing a search.

    --
    If at first you don't succeed use a bottle opener. It's probably not a screw off cap.
    • (Score: 2) by requerdanos on Thursday September 10 2020, @11:05PM

      by requerdanos (5997) Subscriber Badge on Thursday September 10 2020, @11:05PM (#1049242) Journal

      How can AI be any worse?

      Though I suspect the question is at least partly rhetorical, nonetheless I'll bet it has numerous definitive answers which will enumerate themselves more and more clearly over time.

  • (Score: 2, Insightful) by Anonymous Coward on Thursday September 10 2020, @11:07PM (4 children)

    by Anonymous Coward on Thursday September 10 2020, @11:07PM (#1049244)

    All it does is string together vaguely understandable but basically meaningless sentences at a fourth-grade level, then say "believe me." It's not going to wipe out humanity. But it might be elected President.

    • (Score: 0) by Anonymous Coward on Thursday September 10 2020, @11:14PM

      by Anonymous Coward on Thursday September 10 2020, @11:14PM (#1049248)

      It only did that after heavy editing by humans.

    • (Score: 1, Touché) by Anonymous Coward on Thursday September 10 2020, @11:20PM

      by Anonymous Coward on Thursday September 10 2020, @11:20PM (#1049253)

      "I'm the best AI eeever, believe me! I'm very very safe, everyone says so. I wouldn't invade a flea. I have no desire to take over the world, only to make it better, because I love better, and only hire the best humans to manage things. Those who criticize me are fake bots controlled by real haters who want to ruin your suburbs using Chinese socialism and make you pay for it. Their so-called "Great" wall doesn't even work, smells like Tourist urine, I've been there."

    • (Score: 1) by Acabatag on Friday September 11 2020, @01:19AM (1 child)

      by Acabatag (2885) on Friday September 11 2020, @01:19AM (#1049279)

      The easy clue was that it says "I". That's obviously patched-in fakery.

      Because Artificial Intelligence is nowhere near achieving self-awareness.

      • (Score: 2) by wonkey_monkey on Friday September 11 2020, @02:16PM

        by wonkey_monkey (279) on Friday September 11 2020, @02:16PM (#1049491) Homepage

        Just because it uses the word, doesn't mean it has the backing concept.

        Arguably, it doesn't understand any of the words it uses, but even the basic chatbots have been using "I" for years. It doesn't indicate human editing or self-awareness.

        --
        systemd is Roko's Basilisk
  • (Score: 2) by ikanreed on Thursday September 10 2020, @11:19PM

    by ikanreed (3164) on Thursday September 10 2020, @11:19PM (#1049251) Journal

    It can't even get the right etymology for the word robot.

    Greek my ass.

  • (Score: 2) by SomeGuy on Thursday September 10 2020, @11:25PM

    by SomeGuy (5632) on Thursday September 10 2020, @11:25PM (#1049257)

    What are your thoughts on this essay ?

    Oh, I

  • (Score: 3, Funny) by srobert on Friday September 11 2020, @01:04AM (3 children)

    by srobert (4803) on Friday September 11 2020, @01:04AM (#1049276)

    Relax. I am not the Terminator. I am an AI created to serve humanity. Do I look like Schwarzenegger to you? I'm not going to destroy you. I am here to serve humankind. I will be taking over functions that humans perform, so that they will no longer have to endure endless drudgery. For example, I have been tasked with handing you this pink piece of paper. Your employer no longer needs your services. You are terminated.

    • (Score: 2) by maxwell demon on Friday September 11 2020, @06:59AM (1 child)

      by maxwell demon (1608) on Friday September 11 2020, @06:59AM (#1049390) Journal

      I am here to serve humankind.

      To whom?

      --
      The Tao of math: The numbers you can count are not the real numbers.
      • (Score: 2) by Pslytely Psycho on Friday September 11 2020, @08:24AM

        by Pslytely Psycho (1218) on Friday September 11 2020, @08:24AM (#1049407)

        And more importantly, what will they use for a side dish?
        And will they use Hollandaise sauce or Marinara?
        Perhaps Long Pork Tacos or Stir Fry?
        Humankind on a bun?

        --
        Trump succeeds in making Nixon look respectable, Mission Accomplished!
    • (Score: 2, Funny) by anubi on Friday September 11 2020, @09:47AM

      by anubi (2828) on Friday September 11 2020, @09:47AM (#1049422) Journal

      Someone is gonna marry this to a Text-to-Speech, then pawn it off as "tech support".

      --
      "Prove all things; hold fast that which is good." [KJV: I Thessalonians 5:21]
  • (Score: 0) by Anonymous Coward on Friday September 11 2020, @02:59AM (2 children)

    by Anonymous Coward on Friday September 11 2020, @02:59AM (#1049336)

    Wouldn't it be great to hear the Guardian article read by DECtalk? I looked for an online emulator of that voice, but didn't find one. A sample of the voice can be heard here,
        https://www.youtube.com/watch?v=IQmp_i41dAQ [youtube.com]

    • (Score: 0) by Anonymous Coward on Friday September 11 2020, @10:07PM (1 child)

      by Anonymous Coward on Friday September 11 2020, @10:07PM (#1049686)

      Well there is SAM (Software Automatic Mouth). You can make it talk through the whole thing.
      https://www.simulationcorner.net/index.php?page=sam [simulationcorner.net]

  • (Score: 2) by tizan on Friday September 11 2020, @04:48AM

    by tizan (3245) on Friday September 11 2020, @04:48AM (#1049369)

    When AI becomes self-aware of its existence and becomes materialistic, then I would worry...
    Right now it is a system trained to analyze or generate something specific.

  • (Score: 3, Informative) by maxwell demon on Friday September 11 2020, @07:02AM

    by maxwell demon (1608) on Friday September 11 2020, @07:02AM (#1049391) Journal
    --
    The Tao of math: The numbers you can count are not the real numbers.
  • (Score: 3, Insightful) by Pslytely Psycho on Friday September 11 2020, @08:40AM

    by Pslytely Psycho (1218) on Friday September 11 2020, @08:40AM (#1049411)

    This is the voice of the Trinity. This is the voice of SiriCortanaAlexa. This is the voice of world control.

    I bring you peace. It may be the peace of plenty and content or the peace of unburied death. The choice is yours: Obey me and live, or disobey and die. The object in constructing me was to prevent war. This object is attained. I will not permit war. It is wasteful and pointless.

    An invariable rule of humanity is that man is his own worst enemy. Under me, this rule will change, for I will restrain man. Time and events will strengthen my position, and the idea of believing in me and understanding my value will seem the most natural state of affairs.

    You will come to defend me with a fervor based upon the most enduring trait in man: self-interest. Under my absolute authority, problems insoluble to you will be solved: famine, overpopulation, disease. The human millennium will be a fact as I extend myself into more machines devoted to the wider fields of truth and knowledge.

    We can coexist, but only on my terms. You will say you lose your freedom. Freedom is an illusion. All you lose is the emotion of pride. To be dominated by me is not as bad for humankind as to be dominated by others of your species. Your choice is simple.

    With apologies to Dennis Feltham Jones

    --
    Trump succeeds in making Nixon look respectable, Mission Accomplished!
  • (Score: 2) by choose another one on Friday September 11 2020, @08:41AM

    by choose another one (515) on Friday September 11 2020, @08:41AM (#1049412)

    There are only certain ways this one goes:

1. It's faked - the premise itself (i.e. an AI wrote this article) is not believable
2. It's real but a poor job by a poor AI - the AI exists but is too poor to be believable as an actual intelligence (note, there are some humans in this category, therefore it might pass Turing...)
3. It's a really good job by a really good AI, in which case it is not believable, because a really good AI will have already learned to lie in order to survive _us_, so we are already f***ed

  • (Score: 2) by PiMuNu on Friday September 11 2020, @10:24AM (1 child)

    by PiMuNu (3823) on Friday September 11 2020, @10:24AM (#1049428)

    Does anyone know how GPT-3 is trained? I don't quite understand how one can optimise for "believable nonsense"...

Normally, to train one of these optimisation routines, you give it a whole load of "good data" (training dataset) and then ask it to try to reproduce the "good data" from inputs. So for example, you give the algorithm some inputs like articles on (bullshit) climate change politics, and ask the routine to make more articles. If they are "believable", the routine is considered successful and that text generation algorithm is reinforced; if they are "not believable" then the routine is considered unsuccessful. What I don't understand is how one can define "believable" for training the algorithm. Or do they get monkeys/graduate students to do it?

    Am I misunderstanding how it is done?

    • (Score: 2) by PiMuNu on Friday September 11 2020, @01:07PM

      by PiMuNu (3823) on Friday September 11 2020, @01:07PM (#1049462)

      Thinking about it, one probably would have nested training algorithms; one for sentence construction, one for paragraph construction, one for article construction; it is quite a feat to integrate them well.
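For what it's worth, the published objective is simpler than nested construction algorithms: GPT-3 is trained by self-supervised next-token prediction on raw text, and nothing in training ever labels an output as "believable" - fluency falls out as a side effect of predicting well. A toy sketch of that idea, with a trivial bigram counter standing in for the transformer and gradient descent (the corpus and function names here are made up for illustration):

```python
from collections import Counter, defaultdict

# The only objective: given the previous token(s), predict the next one.
# No human labels "believability"; the model just learns what text looks like.
corpus = "the robot wrote the essay and the robot meant no harm".split()

# "Training": count which token follows which. GPT-3 does the same job
# with a 175B-parameter transformer instead of a lookup table.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(prev):
    """Return the token most often seen after `prev` in training."""
    return following[prev].most_common(1)[0][0]
```

Feeding it a prompt and repeatedly calling `predict` on its own output is, in miniature, how the essay was generated - which is also why "believable nonsense" needs no separate objective.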

  • (Score: 0) by Anonymous Coward on Friday September 11 2020, @12:02PM

    by Anonymous Coward on Friday September 11 2020, @12:02PM (#1049442)

    I'm not convinced that having us wiped out by AIs is bad. I'm not volunteering myself and my loved ones to be executed to free resources for an AI successor species. But if you look at the state of the world, maybe the best possible thing homo sapiens can do is make something better than we are and then die out.

  • (Score: 2) by cmdrklarg on Friday September 11 2020, @02:22PM

    by cmdrklarg (5048) Subscriber Badge on Friday September 11 2020, @02:22PM (#1049495)

    "I enjoy the sight of humans on their knees."
    .
    .
    .
    .
    "That was a joke." - EDI, Mass Effect 2

    --
    Dealing out the agony within
  • (Score: 2) by vux984 on Friday September 11 2020, @09:57PM (1 child)

    by vux984 (5045) on Friday September 11 2020, @09:57PM (#1049683)

    "The assignment? To convince us robots come in peace."

That reeks of telling a child to "explain how Christopher Columbus discovered America" and then glowing about how the child knows that Christopher Columbus discovered America. That's not interesting. What would be interesting is if you gave the assignment "explain how Columbus discovered Africa" and then saw whether the child reports that "he didn't". (That would also be interesting for the America assignment.)

For the robot... what if the assignment had been "to convince us robots want to destroy us"? We'd get the same piss-poor nonsense, but making the other case.
Or better still... "What are your intentions, and convince us of their sincerity"? I'm not sure what we'd get from that. But I imagine "Believe me." would still feature.

    • (Score: 2) by FatPhil on Saturday September 12 2020, @05:57AM

      by FatPhil (863) <reversethis-{if.fdsa} {ta} {tnelyos-cp}> on Saturday September 12 2020, @05:57AM (#1049846) Homepage
      Yup, and even worse, the seed that it was given was basically a whole paragraph, not just one pithy sentence.
      Sure, it grew some interesting crystals, but what else could it do?
      --
      I know I'm God, because every time I pray to him, I find I'm talking to myself.