
posted by martyb on Friday August 14 2020, @10:01AM
from the Voiced-by-Majel-Barrett-Roddenberry? dept.

OpenAI's new language generator GPT-3 is shockingly good (archive):

GPT-3 is the most powerful language model ever. Its predecessor, GPT-2, released last year, was already able to spit out convincing streams of text in a range of different styles when prompted with an opening sentence. But GPT-3 is a big leap forward. The model has 175 billion parameters (the values that a neural network tries to optimize during training), compared with GPT-2's already vast 1.5 billion. And with language models, size really does matter.
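For a sense of what "parameters" means here: they are just the trainable values (weights and biases) the network adjusts during training. A toy sketch counting the parameters of a small fully-connected network (layer sizes are illustrative only; GPT-3 is a transformer, not this MLP):

```python
# Count trainable parameters of a toy fully-connected network.
# Sizes are made up for illustration; GPT-3's 175 billion come from
# a much larger transformer architecture, not an MLP like this.
layer_sizes = [512, 2048, 2048, 512]

def count_parameters(sizes):
    """Each dense layer has (inputs * outputs) weights plus one bias per output."""
    total = 0
    for n_in, n_out in zip(sizes, sizes[1:]):
        total += n_in * n_out + n_out
    return total

print(count_parameters(layer_sizes))  # 6296064 -- ~6.3 million, dwarfed by 175 billion
```

The same bookkeeping, scaled up across attention and feed-forward blocks, is where the 175 billion figure comes from.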

Sabeti linked to a blog post where he showed off short stories, songs, press releases, technical manuals, and more that he had used the AI to generate. GPT-3 can also produce pastiches of particular writers. Mario Klingemann, an artist who works with machine learning, shared a short story called "The importance of being on Twitter," written in the style of Jerome K. Jerome, which starts: "It is a curious fact that the last remaining form of social life in which the people of London are still interested is Twitter. I was struck with this curious fact when I went on one of my periodical holidays to the sea-side, and found the whole place twittering like a starling-cage." Klingemann says all he gave the AI was the title, the author's name and the initial "It." There is even a reasonably informative article about GPT-3 written entirely by GPT-3.

[...] Others have found that GPT-3 can generate any kind of text, including guitar tabs or computer code. For example, by tweaking GPT-3 so that it produced HTML rather than natural language, web developer Sharif Shameem showed that he could make it create web-page layouts by giving it prompts like "a button that looks like a watermelon" or "large text in red that says WELCOME TO MY NEWSLETTER and a blue button that says Subscribe." Even legendary coder John Carmack, who pioneered 3D computer graphics in early video games like Doom and is now consulting CTO at Oculus VR, was unnerved: "The recent, almost accidental, discovery that GPT-3 can sort of write code does generate a slight shiver."
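Demos like Shameem's typically rest on few-shot prompting: show the model a couple of description-to-HTML pairs, then a new description, and let it continue the pattern. A sketch of just the prompt construction (the example pairs are invented for illustration, and no model is actually called here):

```python
# Few-shot prompt pattern behind description-to-HTML demos.
# The example pairs below are hypothetical; in the real demo the
# resulting prompt is sent to GPT-3, which continues the pattern.
EXAMPLES = [
    ("a red button that says Stop",
     '<button style="color: red;">Stop</button>'),
    ("large text that says Hello",
     "<h1>Hello</h1>"),
]

def build_prompt(description):
    """Concatenate description -> html pairs, then the new description,
    ending at 'html:' so the model's continuation is the generated markup."""
    parts = [f"description: {desc}\nhtml: {html}" for desc, html in EXAMPLES]
    parts.append(f"description: {description}\nhtml:")
    return "\n\n".join(parts)

print(build_prompt("a blue button that says Subscribe"))
```

No fine-tuning is involved in this pattern; the model infers the description-to-markup mapping from the examples in the prompt itself.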

[...] Yet despite its new tricks, GPT-3 is still prone to spewing hateful, sexist, and racist language. Fine-tuning the model helped limit this kind of output in GPT-2.


Original Submission

 
  • (Score: 3, Disagree) by takyon on Friday August 14 2020, @01:43PM (3 children)

    by takyon (881) <takyonNO@SPAMsoylentnews.org> on Friday August 14 2020, @01:43PM (#1036522) Journal

    I think it's more like there's nothing (much) special about what humans DO.

    A non-sapient dumb algorithm can be used to write text or drive Good Enough™ to eliminate or streamline a lot of jobs, even if it's not perfect, non-thinking, requires a huge database of pre-existing training data or Mechanical Turks to refine the results, etc. Refine the algorithms some more, throw terabytes of RAM into GPUs/AI accelerators, and the encroachment will continue.

    At some point, neuromorphic architectures will enable the creation of human-level, sapient machine intelligence, and those agents will be able to think many times faster than humans while using those dumb algorithms as tools simultaneously. And then even more humans will become obsolete.

    --
    [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
  • (Score: 1, Interesting) by Anonymous Coward on Friday August 14 2020, @02:53PM (2 children)

    by Anonymous Coward on Friday August 14 2020, @02:53PM (#1036546)

    Thinking is also something that humans DO.
    I don't really know the definitions, but you can talk about levels of abstraction:

    1. Throwing rocks is something humans do.
    It's a genius achievement as far as the animal kingdom is concerned.
    It's also a sign of abstract thought: we have a model of the objective universe, we use the model to make predictions, and we then set a physical process in motion, choosing its parameters such that the outcome is the one we desire (i.e. "rock hits mammoth on head and kills it").

    2. You can then model abstract thoughts. For instance, you can start comparing predictions of the rock trajectory done by Newtonian mechanics vs. general relativity, etc. While this particular example is stupid, the point is that you can compare models, design a method for building models, etc., which would be a "deeper" level of abstraction (or higher, or whatever you want to call it).

    3. In mathematics you can discuss first order logic or second order logic. In first order logic, predicates can only apply to individual statements. In second order logic, predicates can be applied to predicates (and I don't actually know anyone who uses second order logic for something practical, unless I have a severe misunderstanding of some high level programming languages, but I digress).

    4. I see no way to distinguish between extrapolations of current AI approaches and something that can achieve "1". And our achievement "2" is not qualitatively different from "1", since we have finite time and a finite brain. Even if it feels like we can think about thinking about thinking etc, what we can do can most likely be reduced, or replicated, by doing something smart with combinations of agents that can achieve "1".

    5. I don't see a way to refute those who say that the human brain is capable of infinite-order logic or whatever monstrous construction can be made starting from "3". And if they are right, then that would be an argument against current AI approaches ever leading to human-level intelligence. But personally I think human thought processes will ultimately be understood as elaborate, but finite, constructions/hierarchies of models of models. I've seen nothing that contradicts this yet.
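Point 1's "model of the objective universe" can be made literal. A minimal Newtonian sketch of the rock throw (flat ground, no air resistance, made-up numbers), where we choose the process's parameters to get the outcome we want:

```python
import math

def landing_distance(speed, angle_deg, g=9.81):
    """Range of a projectile launched from ground level, no air resistance:
    R = v^2 * sin(2*theta) / g."""
    theta = math.radians(angle_deg)
    return speed ** 2 * math.sin(2 * theta) / g

# Choose the throw's parameter (launch angle) so the outcome is the one
# we desire: for a fixed speed, a 45-degree launch maximizes the range.
best_angle = max(range(1, 90), key=lambda a: landing_distance(20.0, a))
print(best_angle, round(landing_distance(20.0, 45.0), 2))  # prints: 45 40.77
```

The point isn't the physics; it's that "predict, then pick parameters" is exactly the model-using loop described in point 1.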
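Setting aside the exact logical definitions debated above, the jump in point 3 from "predicates over individuals" to "predicates over predicates" has a concrete programming analogue in higher-order functions. A minimal sketch:

```python
# First-order style: a predicate applies to individuals.
def is_even(n):
    return n % 2 == 0

# Second-order style: a predicate that applies to predicates,
# asking whether a given predicate holds across an entire domain.
def holds_everywhere(pred, domain):
    return all(pred(x) for x in domain)

print(holds_everywhere(is_even, [2, 4, 6]))  # True
print(holds_everywhere(is_even, [2, 3, 6]))  # False
```

Quantifying over predicates like this is the move that takes you past first order, whatever one's preferred formalization.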

    • (Score: 0) by Anonymous Coward on Friday August 14 2020, @08:59PM

      by Anonymous Coward on Friday August 14 2020, @08:59PM (#1036755)

      Perhaps thinking is just the perception of the brain muscle. Perhaps machines will think just as we do if we create a similar model. You are incapable of knowing that I think, so how would you know if a machine does? Feel free to use that one next time you light a spliff or lick a stamp.

    • (Score: 0) by Anonymous Coward on Saturday August 15 2020, @09:35PM

      by Anonymous Coward on Saturday August 15 2020, @09:35PM (#1037250)

      As SN's resident philosophy professor, I am wounded by your definition of "first order logic" and your assertion that higher-order predicate logics aren't used for practical applications.