
posted by martyb on Friday August 14 2020, @10:01AM   Printer-friendly
from the Voiced-by-Majel-Barrett-Roddenberry? dept.

OpenAI's new language generator GPT-3 is shockingly good (archive):

GPT-3 is the most powerful language model ever. Its predecessor, GPT-2, released last year, was already able to spit out convincing streams of text in a range of different styles when prompted with an opening sentence. But GPT-3 is a big leap forward. The model has 175 billion parameters (the values that a neural network tries to optimize during training), compared with GPT-2's already vast 1.5 billion. And with language models, size really does matter.

Sabeti linked to a blog post where he showed off short stories, songs, press releases, technical manuals, and more that he had used the AI to generate. GPT-3 can also produce pastiches of particular writers. Mario Klingemann, an artist who works with machine learning, shared a short story called "The importance of being on Twitter," written in the style of Jerome K. Jerome, which starts: "It is a curious fact that the last remaining form of social life in which the people of London are still interested is Twitter. I was struck with this curious fact when I went on one of my periodical holidays to the sea-side, and found the whole place twittering like a starling-cage." Klingemann says all he gave the AI was the title, the author's name and the initial "It." There is even a reasonably informative article about GPT-3 written entirely by GPT-3.

[...] Others have found that GPT-3 can generate any kind of text, including guitar tabs or computer code. For example, by tweaking GPT-3 so that it produced HTML rather than natural language, web developer Sharif Shameem showed that he could make it create web-page layouts by giving it prompts like "a button that looks like a watermelon" or "large text in red that says WELCOME TO MY NEWSLETTER and a blue button that says Subscribe." Even legendary coder John Carmack, who pioneered 3D computer graphics in early video games like Doom and is now consulting CTO at Oculus VR, was unnerved: "The recent, almost accidental, discovery that GPT-3 can sort of write code does generate a slight shiver."

[...] Yet despite its new tricks, GPT-3 is still prone to spewing hateful, sexist, and racist language. Fine-tuning the model helped limit this kind of output in GPT-2.


Original Submission

 
This discussion has been archived. No new comments can be posted.
  • (Score: 4, Interesting) by ledow on Saturday August 15 2020, @12:36AM (8 children)

    by ledow (5567) on Saturday August 15 2020, @12:36AM (#1036854) Homepage

    Thanks for reading, conversations are so much more rewarding when people read the opposing argument! I honestly wasn't expecting you to, I just wanted to make clear that there are arguments for "humans are special".

    To complete the explanation as quickly as I can, Turing-completeness is a feature of computing that means any computer can be boiled down to a Turing machine (a very simple logical computer, in essence). A Turing-complete machine has the same ability as every other Turing-complete machine. They can all do the same things, all "emulate" each other, but maybe one would take a billion years to do the same job. Every computer in existence is Turing-complete.
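The "emulate each other" idea can be made concrete. As a toy sketch (not from the comment; the rule table is a made-up example), here is a minimal Turing machine simulator in Python, running a little machine that increments a binary number:

```python
# Minimal Turing machine simulator (illustrative sketch only).
# A machine is a dict: (state, symbol) -> (symbol_to_write, move L/R, next_state).

def run_tm(tape, rules, state="start", blank="_", max_steps=1000):
    """Run a Turing machine on a string tape; return the tape when it halts."""
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    cells = [tape[i] for i in sorted(tape)]
    return "".join(cells).strip(blank)

# Example machine: binary increment. Walk to the rightmost digit,
# then add 1 with carry, then walk back and halt.
rules = {
    ("start", "0"): ("0", "R", "start"),
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "add"),
    ("add",   "0"): ("1", "L", "done"),
    ("add",   "1"): ("0", "L", "add"),
    ("add",   "_"): ("1", "L", "done"),
    ("done",  "0"): ("0", "L", "done"),
    ("done",  "1"): ("1", "L", "done"),
    ("done",  "_"): ("_", "R", "halt"),
}

print(run_tm("1011", rules))  # 1011 + 1 = 1100
```

Any real computer can run this simulator, and the simulator can in turn run a description of any real computer (given enough tape and time), which is the sense in which they are all the same class of machine.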

    But it also puts a limit on their abilities (as well as a minimum requirement on abilities - to be Turing-complete you have to have certain abilities. And when you are only Turing-complete there are some things you can't do).

    Turing used this model of a "universal machine" to boil absolutely everything that a machine is capable of doing down to the bare minimum of logic. You don't need to multiply if you can add many times over, for instance; that's the kind of reduction involved. And any machine capable of adding is thus capable of multiplying. (You have to do the usual science thing of imagining an infinitely fast, infinitely large machine, but let's skip that - speed is NOT important, ability is... you can either do something or not.)
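The add-implies-multiply reduction above can be sketched in a few lines (an illustrative toy, not anything from the comment):

```python
# Multiplication built only from repeated addition: any machine that can
# add and loop can therefore also multiply.

def multiply(a, b):
    total = 0
    for _ in range(b):
        total += a  # the only arithmetic used is addition
    return total

print(multiply(6, 7))  # 42
```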

    Using that logic, and his theoretical machine, Turing and others linked it to mathematics and to something called "computability" (i.e. whether a Turing machine *could* ever solve a problem, given its limitations and an infinite amount of time). Others (Gödel, etc.) had also come up with problems that, in essence, a T-C machine *could not* solve. One of those is the halting problem. Basically stated: if I feed a description of one Turing machine *into* another Turing machine, could that latter machine tell me whether or not the first machine ever stops? And not just for one particular example, but for ANY Turing machine that I feed in? Could it "process", "simulate", "analyse" another Turing machine and say "Yep, eventually that machine will definitely come to a halt" or "Nope, that machine will just run forever and never, ever stop"?

    (The answer, by the way, is No. No Turing machine can solve the halting problem.)
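The standard proof of that "No" can even be sketched as runnable code. This is only an illustration of Turing's diagonal argument: assume some decider halts() exists (the stub below is a stand-in for any claimed implementation), and build a program it must get wrong:

```python
# Sketch of the diagonal argument against a halting decider.
# Suppose halts(f, x) could correctly decide whether f(x) ever stops.

def halts(f, x):
    # Stand-in for any claimed decider; the contradiction below works
    # no matter whether it answers True or False.
    return True

def contrary(f):
    # Do the opposite of whatever the decider predicts about f(f).
    if halts(f, f):
        while True:        # predicted to halt -> loop forever
            pass
    else:
        return "halted"    # predicted to loop -> halt immediately

# If halts(contrary, contrary) says True, then contrary(contrary) loops
# forever; if it says False, contrary(contrary) halts at once. Either way
# the decider is wrong about this one input, so no correct halts() exists.
prediction = halts(contrary, contrary)
print("decider predicts contrary(contrary) halts:", prediction)
```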

    Yet problems like this, and many related simpler and more difficult problems? Humans routinely solve such things. Or so we believe. It's hard to prove without an infinite number of immortal humans. But humans can *definitely* do things that Turing-complete machines cannot. Not just "yet" or "in a reasonable time", but things that are a proven impossibility for a T-C machine. It's simply beyond their capacity, no matter how fast or big or powerful they get, or how long they are left to run.

    This leads us to "humans are special". Everything suggests (though proving it mathematically is impossible) that humans are *not* bound by those limits. For a start - we CAME UP with the halting problem as an example of something that a Turing machine could never solve, knowing and proving mathematically that it could never solve it!

    So humans are Turing-complete AND MORE. We're not limited to Turing-complete. We can do everything that any machine can do (well, we invented them!) and things that they can never do.

    And, as yet, we don't have any machines that do MORE than Turing-complete machines can do. They are all the same "class" of machine, if you want to think of it that way. None of them have the "Quantum Computing" plug-in card that lets them do things that no Turing-complete machine could ever do on its own. But we think humans do. It might not be "Quantum", but we have something that they do not. Thus we can do things they cannot. Thus, chances are, AI will never be capable of doing the things that humans can do.

    Of course, they are incredibly useful tools nonetheless, and may well get convincingly close to emulating what we do, but that's an entirely different question. And one I think isn't possible beyond a point. When I read this stuff, and understood it, and got over the initial excitement of it, I remained convinced that a human can do things that no machine can currently do. That leaves a large gap. And it seems to explain why AI never gets much closer than it historically has. It's trying to do things that are impossible for it to do.

    It's by no means "proven" but a lot of mathematics isn't, and it's really deep maths (of which computer science is just a small subset). However, when the mathematical community has a hunch for nearly a century that something is provable, or not provable, or has a particular answer we believe to be "the only" answer... chances are that it works out that we're right and eventually a proof will come along to clean that up once and for all. We don't have that proof yet, but the consensus seems to be that it's just a formality.

    Humans do things that computers couldn't do if left to do it with all the resources in the universe for all the time that there is. Emulating a human, or even a particular given simple human trait, with an AI may never be possible. And everything we've ever done in the history of AI seems to trip up at that point, which is convincingly confirming that hypothesis.

  • (Score: 2) by maxwell demon on Saturday August 15 2020, @07:41PM (7 children)

    by maxwell demon (1608) on Saturday August 15 2020, @07:41PM (#1037214) Journal

    None of them have the "Quantum Computing" plug-in card that lets them do things that no Turing-complete machine could ever do on its own.

    Quantum computers can solve exactly the same class of problems classical computers can. The only difference is that for some problems, they are exponentially faster. But a Turing machine, as a theoretical concept, has an infinite tape and can run for unlimited time. Therefore a Turing machine can calculate anything that a quantum computer can. It's just that it might take a billion years for something that the quantum computer spits out in seconds.
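That point - a classical machine can simulate a quantum one, just slowly - can be illustrated with a toy sketch (not from the comment, and nothing like an efficient simulator): a single qubit under a Hadamard gate, in plain Python:

```python
# Classical simulation of a one-qubit quantum computation: track the
# state vector of amplitudes and multiply by gate matrices by hand.

import math

def apply_gate(gate, state):
    """Multiply a 2x2 gate matrix by a 2-component state vector."""
    (a, b), (c, d) = gate
    s0, s1 = state
    return (a * s0 + b * s1, c * s0 + d * s1)

h = 1 / math.sqrt(2)
H = ((h, h), (h, -h))        # Hadamard gate

state = (1.0, 0.0)            # the |0> state
state = apply_gate(H, state)  # put it in superposition
probs = [abs(amp) ** 2 for amp in state]
print(probs)                  # ~[0.5, 0.5]: equal chance of measuring 0 or 1

state = apply_gate(H, state)  # applying H twice is the identity
print([round(abs(amp) ** 2, 6) for amp in state])  # back to [1.0, 0.0]
```

The catch is cost: an n-qubit state needs 2^n amplitudes, so this approach blows up exponentially - which is exactly the "billion years versus seconds" gap, a difference of speed rather than of what is computable.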

    Also, only a certain class of problems can be sped up exponentially by quantum computers. Which might not include those problems that arise in intelligent behaviour.

    And BTW, it is highly unlikely that our brain uses any form of quantum computing. At 37 degrees Celsius and high matter density, it is a highly decoherent environment.

    And yes, I'm a quantum physicist, so I know what I'm talking about here.

    --
    The Tao of math: The numbers you can count are not the real numbers.
    • (Score: 0) by Anonymous Coward on Saturday August 15 2020, @09:37PM

      by Anonymous Coward on Saturday August 15 2020, @09:37PM (#1037251)

      I would hope so from your name.

    • (Score: 2) by ledow on Saturday August 15 2020, @11:28PM (5 children)

      by ledow (5567) on Saturday August 15 2020, @11:28PM (#1037282) Homepage

      I mention your top arguments several times in the larger post. I use quantum computing as an example of something radically different in physics from a traditional computer - but as you (and I!) pointed out, it's still considered Turing-complete. And without a QC of any significant size, it's hard to generalise at this point in time; there may well be unexpected (maybe useful) effects that we start detecting later, but that's another whole area.

      I don't have the original sources to hand, but there are quantum *effects* in neurons. Not quantum computing. We're talking non-classical physics at play on quantum scales, not a QC. Things that aren't explained with just the "neuron is just a transistor/switch" kind of idea. In the same way, quantum effects are the only thing that lets us understand the physics affecting nanoscale silicon chip interactions (without quantum knowledge, processors could not be made that small/fast, because we have to compensate for non-classical effects at that scale). Something that isn't accounted for in a binary computer and may allow "deeper" abilities than we currently have. https://www.sciencedaily.com/releases/2014/01/140116085105.htm [sciencedaily.com] is an example of what I mean, but it's literally the first thing that came to hand, and too new to be the material I've read before. If I find the previous links, I'll post them.

      You're right, absolutely. But it's an example of a radically different way of working that we may not take account of in our basic computer knowledge, but which has something we "need" to break out of T-C computing. And it highlights my argument - if even quantum computers are Turing-complete, it's another barrier between what we believe computers and humans are capable of. So how are we doing it? And what makes us think that a T-C machine can emulate that?

      • (Score: 2) by maxwell demon on Sunday August 16 2020, @10:24AM (3 children)

        by maxwell demon (1608) on Sunday August 16 2020, @10:24AM (#1037414) Journal

        There is absolutely zero evidence that humans can solve any problem that Turing machines cannot (and no, humans cannot solve the halting problem; figuring out that the halting problem exists and proving that it cannot be solved is not the same as solving it).

        --
        The Tao of math: The numbers you can count are not the real numbers.
        • (Score: 2) by ledow on Sunday August 16 2020, @06:23PM (2 children)

          by ledow (5567) on Sunday August 16 2020, @06:23PM (#1037562) Homepage

          Mathematically, we have no proof that I would be happy to assert as a mathematician. You are right in that respect.

          But we do things that we cannot get Turing machines of any size, shape or complexity to do. Again - this is the unknown part, where mathematicians "know" whether there's an answer or not and work towards proving it. We "believe" that humans are capable of doing things that pure TC machines cannot. Such things include positing the problem of whether there's a problem that a TC machine cannot solve, and proving as much. It's meta, sure, but it's a strong hint that we're doing things that TMs cannot do. We can formulate a problem which TC machines are literally unable to solve, and mathematically prove that this is so, beyond a doubt. That, not in a mathematical-proof way but certainly in a "body-of-evidence" way, goes some way to showing we're doing things that they cannot do.

          I've yet to see a TC machine that can arrive at a mathematical proof like that... especially given that we literally have mathematical proof engines, working from first principles and used all the time, that ought to be able to do things in that same class.

          That doesn't mean it doesn't exist, but I don't claim it's either certain or impossible. I quite clearly delimit my responses to "believe", "looks like", "probably" etc.

          I'm a mathematician. That gives me an absolute sense of certainty that's unwavering. And an instinct for what is and is not possible. The latter, with professional mathematicians, more often than not turns out to be correct.

          Fact is, it's going to be an open question for centuries, but arguing with me about what I *believe* the outcome to be is nonsensical.

          • (Score: 2) by maxwell demon on Sunday August 16 2020, @08:53PM (1 child)

            by maxwell demon (1608) on Sunday August 16 2020, @08:53PM (#1037601) Journal

            OK, let's get more formal. At which point(s) in the following do you disagree?

            Premise 1: The human mind is entirely physical; there is no non-physical component to it.

            Premise 2: Anything relevant going on in humans (including the mind) can in principle be described by quantum physics. Note that I don't claim that we know that description; indeed, it is well possible that we'll never know. The point in this premise is that no post-quantum physics is needed to describe the processes leading to our mind.

            Premise 3: Any quantum system can be simulated with arbitrary precision by a Turing machine. Again, note that this doesn't imply that we can build such a machine; indeed, already completely simulating a relatively small quantum system can exceed the capacity of a supercomputer, and should it be needed to really simulate the brain quantum mechanically at the atomic level (which I don't believe), then probably the resources of the whole universe would not suffice for a classical computer that can simulate it. But again, this is not the point of the premise; the point of the premise is that a Turing machine (with its infinite tape and unlimited running time) can do it.

            Conclusion: Anything the human mind can solve, a Turing machine can solve as well, by simulating the human mind as it solves that problem.

            So if you want to argue that the premise is false, you must either argue that at least one of the premises is false, or that the conclusion does not follow from the premises. So which is it? And if it is the latter, please also tell me why you think the conclusion doesn't follow from the premises.

            --
            The Tao of math: The numbers you can count are not the real numbers.
            • (Score: 2) by maxwell demon on Sunday August 16 2020, @08:55PM

              by maxwell demon (1608) on Sunday August 16 2020, @08:55PM (#1037604) Journal

              So if you want to argue that the premise is false

              Of course I meant: “to argue that the conclusion is false.”

              --
              The Tao of math: The numbers you can count are not the real numbers.
      • (Score: 2) by Bot on Wednesday August 19 2020, @01:19PM

        by Bot (3902) on Wednesday August 19 2020, @01:19PM (#1038788) Journal

        >I don't have the original sources to hand

        isn't the fact that the brain is analog enough? Personally I consider the brain not necessarily only a computer; but, be it calculator or smart antenna or who knows what, it is a massively parallel analog machine. Massive parallelism and analog operation mean that the inherent fuzziness at smaller scales, where interactions are better described using quantum theories, can result in huge output perturbations.

        BTW I'd decouple the way some matter x time becomes life from religion. The divine is the hypothetical abstraction level above ours, easily hypothesized by "we can create abstractions, therefore we may be an abstraction ourselves". Atheist arguments against this range from the merely illogical to the pathetic. Now, the divine plane is unbounded by time, so cause-and-effect concepts do not make sense there. So the way matter x time becomes life is independent of it being a design or happenstance. So to me "I'm agnostic, I believe we evolved from apes in no special way" is logically equivalent to "I like tea, I believe we evolved from apes in no special way", or "I am catholic, I believe we evolved from apes in no special way [god doesn't need to live-patch]". Spirituality too is needlessly associated with religion.

        --
        Account abandoned.