
posted by martyb on Friday August 14 2020, @10:01AM
from the Voiced-by-Majel-Barrett-Roddenberry? dept.

OpenAI's new language generator GPT-3 is shockingly good (archive):

GPT-3 is the most powerful language model ever. Its predecessor, GPT-2, released last year, was already able to spit out convincing streams of text in a range of different styles when prompted with an opening sentence. But GPT-3 is a big leap forward. The model has 175 billion parameters (the values that a neural network tries to optimize during training), compared with GPT-2's already vast 1.5 billion. And with language models, size really does matter.

Sabeti linked to a blog post where he showed off short stories, songs, press releases, technical manuals, and more that he had used the AI to generate. GPT-3 can also produce pastiches of particular writers. Mario Klingemann, an artist who works with machine learning, shared a short story called "The importance of being on Twitter," written in the style of Jerome K. Jerome, which starts: "It is a curious fact that the last remaining form of social life in which the people of London are still interested is Twitter. I was struck with this curious fact when I went on one of my periodical holidays to the sea-side, and found the whole place twittering like a starling-cage." Klingemann says all he gave the AI was the title, the author's name and the initial "It." There is even a reasonably informative article about GPT-3 written entirely by GPT-3.

[...] Others have found that GPT-3 can generate any kind of text, including guitar tabs or computer code. For example, by tweaking GPT-3 so that it produced HTML rather than natural language, web developer Sharif Shameem showed that he could make it create web-page layouts by giving it prompts like "a button that looks like a watermelon" or "large text in red that says WELCOME TO MY NEWSLETTER and a blue button that says Subscribe." Even legendary coder John Carmack, who pioneered 3D computer graphics in early video games like Doom and is now consulting CTO at Oculus VR, was unnerved: "The recent, almost accidental, discovery that GPT-3 can sort of write code does generate a slight shiver."
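Shameem's demo reportedly worked by priming GPT-3 with a few description-to-markup examples before the new request. A minimal sketch of that few-shot prompt pattern (the example pairs and the `complete()` stub are hypothetical stand-ins; a real run would send the assembled prompt to a GPT-3-style completion endpoint):

```python
# Sketch of few-shot prompting for text -> HTML generation.
# The FEW_SHOT pairs and complete() are hypothetical stand-ins;
# a real run would send `prompt` to a language-model completion API.

FEW_SHOT = [
    ("a red button that says STOP",
     '<button style="background:red">STOP</button>'),
    ("large blue text that says HELLO",
     '<p style="color:blue;font-size:2em">HELLO</p>'),
]

def build_prompt(description: str) -> str:
    """Prime the model with description->HTML pairs, then pose the new request."""
    parts = [f"description: {d}\nhtml: {h}" for d, h in FEW_SHOT]
    parts.append(f"description: {description}\nhtml:")
    return "\n\n".join(parts)

def complete(prompt: str) -> str:
    """Stub for a completion call -- no real model here."""
    return '<button style="background:green">Subscribe</button>'

prompt = build_prompt("a green button that says Subscribe")
print(complete(prompt))
```

The model is expected to continue the pattern, emitting markup after the final `html:` marker.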

[...] Yet despite its new tricks, GPT-3 is still prone to spewing hateful sexist and racist language. Fine-tuning the model helped limit this kind of output in GPT-2.


Original Submission

Related Stories

Microsoft and Nvidia Create 105-Layer, 530 Billion Parameter Language Model That Needs 280 A100 GPUs 19 comments

Microsoft and Nvidia create 105-layer, 530 billion parameter language model that needs 280 A100 GPUs, but it's still biased

Nvidia and Microsoft have teamed up to create the Megatron-Turing Natural Language Generation model, which the duo claims is the "most powerful monolithic transformer language model trained to date".

The AI model has 105 layers, 530 billion parameters, and operates on chunky supercomputer hardware like Selene. By comparison, the vaunted GPT-3 has 175 billion parameters.

"Each model replica spans 280 NVIDIA A100 GPUs, with 8-way tensor-slicing within a node, and 35-way pipeline parallelism across nodes," the pair said in a blog post.

[...] However, the need to operate with languages and samples from the real world meant an old problem with AI reappeared: Bias. "While giant language models are advancing the state of the art on language generation, they also suffer from issues such as bias and toxicity," the duo said.

Related: OpenAI's New Language Generator GPT-3 is Shockingly Good
A College Student Used GPT-3 to Write a Fake Blog Post that Ended Up at the Top of Hacker News
A Robot Wrote This Entire Article. Are You Scared Yet, Human?
OpenAI's Text-Generating System GPT-3 Is Now Spewing Out 4.5 Billion Words a Day


Original Submission

A College Student Used GPT-3 to Write a Fake Blog Post that Ended Up at the Top of Hacker News 32 comments

https://www.theverge.com/2020/8/16/21371049/gpt3-hacker-news-ai-blog

College student Liam Porr used the language-generating AI tool GPT-3 to produce a fake blog post that recently landed in the No. 1 spot on Hacker News, MIT Technology Review reported. Porr was trying to demonstrate that the content produced by GPT-3 could fool people into believing it was written by a human. And, he told MIT Technology Review, "it was super easy, actually, which was the scary part."

So to set the stage in case you're not familiar with GPT-3: It's the latest version of a series of AI autocomplete tools designed by San Francisco-based OpenAI, and has been in development for several years. At its most basic, GPT-3 (which stands for "generative pre-trained transformer") auto-completes your text based on prompts from a human writer.

[...] OpenAI decided to give access to GPT-3's API to researchers in a private beta, rather than releasing it into the wild at first. Porr, who is a computer science student at the University of California, Berkeley, was able to find a PhD student who already had access to the API, who agreed to work with him on the experiment. Porr wrote a script that gave GPT-3 a blog post headline and intro. It generated a few versions of the post, and Porr chose one for the blog, copy-pasted from GPT-3's version with very little editing.

The post went viral in a matter of a few hours, Porr said, and the blog had more than 26,000 visitors. He wrote that only one person reached out to ask if the post was AI-generated, although several commenters did guess GPT-3 was the author.

Previously:
(2020-08-14) OpenAI's New Language Generator GPT-3 is Shockingly Good


Original Submission

  • (Score: 3, Interesting) by takyon on Friday August 14 2020, @10:26AM (8 children)

    by takyon (881) on Friday August 14 2020, @10:26AM (#1036492) Journal

    OpenAI first described GPT-3 in a research paper published in May. But last week it began drip-feeding the software to selected people who requested access to a private beta. For now, OpenAI wants outside developers to help it explore what GPT-3 can do, but it plans to turn the tool into a commercial product later this year, offering businesses a paid-for subscription to the AI via the cloud.

    It's all about control with "OpenAI".

    --
    [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
    • (Score: 4, Insightful) by maxwell demon on Friday August 14 2020, @10:40AM (7 children)

      by maxwell demon (1608) on Friday August 14 2020, @10:40AM (#1036493) Journal

      If they don't want to make it open, maybe they should rename themselves from “OpenAI” to “ClosedAI”.

      --
      The Tao of math: The numbers you can count are not the real numbers.
      • (Score: 2) by DannyB on Friday August 14 2020, @02:48PM (3 children)

        by DannyB (5839) Subscriber Badge on Friday August 14 2020, @02:48PM (#1036545) Journal

        Just imagine if every open source technology that could be used for harm were not published.

        For example: Metasploit. Kali Linux. DOT Net Core. Systemd.


        Mom: What language do they speak in the UK?
        Son: I'm at work mom, just Google it.
        Mom: I'll just wait 'till you get home. I don't want to bother Google people with that. I'm sure they have more important questions to answer.

        --
        Employers should not mandate wearing clothing. It should be a personal choice. It only affects me. Junk can't breathe!
      • (Score: 0) by Anonymous Coward on Friday August 14 2020, @04:03PM (1 child)

        by Anonymous Coward on Friday August 14 2020, @04:03PM (#1036582)

        Perhaps they should ask it to name itself.

        If it calls itself 'Colosson', show it a chicken picture immediately and report David Mitchell to the UK authorities for being a Cylon.

        • (Score: 0) by Anonymous Coward on Friday August 14 2020, @04:22PM

          by Anonymous Coward on Friday August 14 2020, @04:22PM (#1036591)

          Parent post AC here. My browser lied, sorry for the double post.

      • (Score: 0) by Anonymous Coward on Friday August 14 2020, @04:18PM

        by Anonymous Coward on Friday August 14 2020, @04:18PM (#1036589)

        If they don't want to make it open, maybe they should rename themselves from “OpenAI” to “ClosedAI”.

        Perhaps they should ask it to name itself.

        If it calls itself "Colosson", they should show it a chicken picture and cut UK funnyman David Mitchell a royalty check.

        That's what I call 'Numberwang'.

  • (Score: 0) by Anonymous Coward on Friday August 14 2020, @10:40AM (30 children)

    by Anonymous Coward on Friday August 14 2020, @10:40AM (#1036494)

    "Publish or Perish", anyone?
    Now, we need an AI which can peer-review :D

    https://en.wikipedia.org/wiki/Thiotimoline [wikipedia.org]

    CYA

    • (Score: 2) by maxwell demon on Friday August 14 2020, @10:56AM (29 children)

      by maxwell demon (1608) on Friday August 14 2020, @10:56AM (#1036499) Journal

      Maybe one should train it on mathematics textbooks and papers, and then prompt it with the title “Proof of the Riemann conjecture.” Then we would see if the AI really is intelligent, or simply a pretender! :-)

      --
      The Tao of math: The numbers you can count are not the real numbers.
      • (Score: 5, Interesting) by ledow on Friday August 14 2020, @11:16AM (26 children)

        by ledow (5567) on Friday August 14 2020, @11:16AM (#1036502) Homepage

        No current AI is intelligent at all.

        That's just a statistical text engine, tuned for success of "what sounds right", it has no depth or structure or context or understanding.

        It's also awful. I mean the first text is worse than a kindergarten kid, talking in short sentences, but without the overlying story arc. Nothing happens, it goes nowhere, because it has no semantic understanding of where it can or should go. Run it longer and that "character" will pick up and put down the gun a hundred times, in all kinds of prosaic ways of doing that, but not actually have any story.

        AI is just a sufficiently advanced statistical engine that you could fool an idiot for a few seconds. It doesn't have the leap of inference to actually understand a world that it's not really part of.

        All AI is the same at the moment. It's human-steered heuristic (how many stories were rejected for sounding like nonsense, or had their length tuned so they didn't repeat?), or bog-standard statistical analysis (generally after this kind of word, this kind of word appears). It's not intelligence, of any order.
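The "after this kind of word, this kind of word appears" statistics described above can be sketched as a toy bigram generator (illustrative only; transformer models like GPT-3 condition on far longer context than one word, but the sample-the-next-word loop is the same idea):

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Count which word follows which: the crudest statistical text model."""
    model = defaultdict(list)
    words = text.split()
    for a, b in zip(words, words[1:]):
        model[a].append(b)
    return model

def generate(model, start, n=10, seed=0):
    """Emit up to n words by repeatedly sampling a statistically plausible next word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        choices = model.get(out[-1])
        if not choices:
            break  # dead end: no observed successor
        out.append(rng.choice(choices))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat saw the dog"
model = train_bigrams(corpus)
print(generate(model, "the"))
```

Every word it emits is locally plausible, but there is no plan spanning more than one step, which is the parent's point about missing story arcs.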

        As you say, when it makes a breakthrough that isn't just "try everything and see what happens", when it's able to *infer* new mathematics or new ways to solve mathematics, when it's able to reason and connect unrelated things itself without just reliance on statistical measuring, then we might have created the amoeba of AI, which we can then try to evolve for the computer equivalent of billions of years. But we're not even there. This is like a human making a sugar maze on the ground in a particular shape such that ants moving across it results in the desired output of particular words. Then tweaking the maze and outputs constantly until you force some kind of sense onto them, and then claiming that the ants are "writing books", "solving maths" or "generating art". They're not. They're following the path given and doing the dumbest, simplest of things with the environment they are contained in. And yet, even that analogy is poor because even an ant has more "intelligence" about it than these kinds of models.

        AI hasn't evolved at all since the days of early computer game AI. It looks "really cool" when you first round a corner, are confronted by enemies who are all alerted and chase after you, and spot you behind a building, so they grenade over the top. It all looks very convincing the first time you see it. And then you see the code. The heuristics - human-written rules. And you see that it's basically programmed such that nothing else could possibly have ever happened but that, and that it was all based on very, very simple rules that a human wrote to get that reaction out of it. To the point that it is precisely and guaranteeably assured that "throwing this grenade here will arc and land successfully on the other side" by the physics engine in advance, and so on.

        "If human in range, then target = human" is not AI any more than this stuff. It's just sufficiently more complex that it appears - at first glance - to be different. It's not. Complexity isn't intelligence either.

        Until we have a single demonstration of proper inference in AI - where it inferred knowledge that it had no way of otherwise knowing directly - then it's just a toy.

        • (Score: 2, Insightful) by Anonymous Coward on Friday August 14 2020, @12:01PM (18 children)

          by Anonymous Coward on Friday August 14 2020, @12:01PM (#1036507)

          just get over yourself, will you?
          why can't people accept that there's nothing special about us humans?
          yes, this GPT thing is no more coherent than a 3 year old who swallowed a dictionary.

          what will you say about GPT-4? "well, being able to go to the store looking for bread and coming back with pretzels because they ran out of bread is something that any 10 year old can do, call me when it can deduce the existence of ovens from the fact that both bread and pretzels exist"

          this result is a big deal for many people. it will objectively change the world, even if you don't accept it can do so.

          • (Score: 3, Disagree) by takyon on Friday August 14 2020, @01:43PM (3 children)

            by takyon (881) on Friday August 14 2020, @01:43PM (#1036522) Journal

            I think it's more like there's nothing (much) special about what humans DO.

            A non-sapient dumb algorithm can be used to write text or drive Good Enough™ to eliminate or streamline a lot of jobs, even if it's not perfect, non-thinking, requires a huge database of pre-existing training data or Mechanical Turks to refine the results, etc. Refine the algorithms some more, throw terabytes of RAM into GPUs/AI accelerators, and the encroachment will continue.

            At some point, neuromorphic architectures will enable the creation of human-level, sapient machine intelligence, and those agents will be able to think many times faster than humans while using those dumb algorithms as tools simultaneously. And then even more humans will become obsolete.

            --
            [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
            • (Score: 1, Interesting) by Anonymous Coward on Friday August 14 2020, @02:53PM (2 children)

              by Anonymous Coward on Friday August 14 2020, @02:53PM (#1036546)

              Thinking is also something that humans DO.
              I don't really know the definitions, but you can talk about levels of "abstraction":

              1. Throwing rocks is something humans do.
              It's a genius achievement as far as the animal kingdom is concerned.
              It's also a sign of abstract thought: we have a model of the objective universe, we use the model to make predictions, we then set a physical process in motion choosing its parameters such that the outcome is the one we desire (i.e. "rock hits mammoth on head and kills it").

              2. You can then model abstract thoughts. For instance you can start comparing predictions of the rock trajectory done by Newtonian mechanics vs general relativity, etc. While this particular example is stupid, the point is that you can compare models, and design a method for building models, etc. Which would be "deeper" level of abstractions (or higher, or whatever you want to call it).

              3. In mathematics you can discuss about first order logic or second order logic. In first order logic predicates can only apply to individual statements. In second order logic predicates can be applied to predicates (and I don't actually know anyone who uses second order logic for something practical. Unless I have a severe misunderstanding of some high level programming languages, but I digress).

              4. I see no way to distinguish between extrapolations of current AI approaches and something that can achieve "1". And our achievement "2" is not qualitatively different from "1", since we have finite time and a finite brain. Even if it feels like we can think about thinking about thinking etc, what we can do can most likely be reduced, or replicated, by doing something smart with combinations of agents that can achieve "1".

              5. I don't see a way to refute those who say that the human brain is capable of infinite-order logic or whatever monstrous construction can be made starting from "3". And if they are right, then that would be an argument against current AI approaches ever leading to human-level intelligence. But personally I think human thought processes will ultimately be understood as elaborate, but finite, constructions/hierarchies of models of models. I've seen nothing that contradicts this yet.
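Point 1's "model the world, then choose parameters so the outcome is the one we desire" can be made concrete with the simplest such model, ideal projectile motion (the numbers here are purely illustrative):

```python
import math

g = 9.81  # gravitational acceleration, m/s^2

def throw_range(v, angle_deg):
    """Flat-ground range of an ideal projectile: R = v^2 * sin(2*theta) / g."""
    return v**2 * math.sin(math.radians(2 * angle_deg)) / g

def angle_for_target(v, distance):
    """Invert the model: choose the launch angle that lands at `distance`."""
    s = distance * g / v**2
    if s > 1:
        return None  # target out of reach at this throw speed
    return math.degrees(math.asin(s)) / 2

v = 20.0        # throw speed, m/s
target = 30.0   # distance to the (hypothetical) mammoth, m
theta = angle_for_target(v, target)
print(theta, throw_range(v, theta))
```

The forward function is the predictive model; the inverse function is the "choosing its parameters" step the comment describes.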

              • (Score: 0) by Anonymous Coward on Friday August 14 2020, @08:59PM

                by Anonymous Coward on Friday August 14 2020, @08:59PM (#1036755)

                Perhaps thinking is just the perception of the brain muscle. Perhaps machines will think just as we do if we create a similar model. You are incapable of knowing that I think; how would you know if a machine does? Feel free to use that one next time you light a spliff or lick a stamp.

              • (Score: 0) by Anonymous Coward on Saturday August 15 2020, @09:35PM

                by Anonymous Coward on Saturday August 15 2020, @09:35PM (#1037250)

                As SN's resident philosophy professor, your definition of "first order logic" and assertion that predicate logics of higher order aren't used for practical applications wounds me.

          • (Score: 2) by HiThere on Friday August 14 2020, @01:43PM

            by HiThere (866) on Friday August 14 2020, @01:43PM (#1036523) Journal

            IIUC, GPT-3 is considered pretty much "end of the line" for this line of development. It needs to be merged with some other lines of development to improve. It basically has no model of the external universe. But those other lines *are* being developed. Robots have, must have, a model of the external universe, e.g. It may not be a very sophisticated model, but it's there. But they have a hard time talking about it...which is where GPT-3 could come in.

            So, yeah, we don't have a general intelligence AI yet. But we've got lots of the pieces of one. There's the robot models of the universe, there's GPT language models, there's various models of problem solving and logic, etc. They need to be combined to get a general AI.

            --
            Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
          • (Score: 5, Interesting) by ledow on Friday August 14 2020, @02:38PM (12 children)

            by ledow (5567) on Friday August 14 2020, @02:38PM (#1036542) Homepage

            Sorry, I only have a degree in the subject, and have been following it since the '80s, and see no difference between results achieved then and now (we just get there a bit quicker!).

            I know what I'll say. I'll say what I've been saying for decades... your statistical model is missing any form of inference (even though "inference" is a term used in some AI work) and thus will plateau. Same now as 20 years ago.

            This is an advanced level spam generator, something that you can do better with heuristics and which people have been knocking out for decades.

            I'll tell you how every AI project goes: PhD studies it. Knocks up a demo. Trains it for a couple of years until the "plateau" hits (and the plateau always hits). From there the device is indecipherable, cannot be retrained, can only be replicated from scratch at great computational expense, shows a diminishing return on data investment (a million trainings on one thing take 10 million "untrainings" ***minimum*** to let it allow the model to adjust enough to distinguish and untrain itself against all the early assumptions it was making), and cannot have its internals investigated as to how/why it came to any particular decision. But it doesn't matter. Paper done. Product built and sold. Wrap it up, flee.

            Even Tesla just whitelists areas of dangerous roads because they can't "untrain" their model to not make the wrong assumptions at certain points. They literally have GPS geofencing that instructs the AI to ignore itself and just do what it's told at those points. Billions of dollars of research, and it has to be babied to stop it driving off cliff edges, veering into other lanes, smashing into static barriers, etc. because it gets confused over the lines or the lighting at that point.

            This stuff isn't AI, it isn't really classable as "learning" either. It's a statistical model that grows with data and is heavily biased towards its early data. As the data grows, the statistics conform back to average and it becomes no better at determining the outcome than statistical average of ALL the images, sentences, fingerprints, whatever it was passed.

            There is nothing special about us humans among the living world. I even acknowledge that in the ant comment. But what we are not is a statistical model built on heuristic rules. That's an absolute, 100%, biological, physical, psychological consensus. Thus trying to emulate us like that is not going to work - and doesn't. We are therefore "special" when compared against any mechanical or electrical or electronic machine.

            Now there's a hypothesis I do subscribe to - that our computer languages and hardware lack the ability to express intelligence. Mostly because they are trapped in a Turing-complete world, whereas humans are *quite clearly* able to break out of Turing-completeness and its associated logical limitations (computability, halting problem). Not least by being able to come up with and analyse those concepts using nothing more than the human mind. Unfortunately it appears that even quantum computing is still bound by Turing-completeness - I was hoping that that was where the "magic" lay... inside a physical phenomenon that we can understand to be unpredictable yet logical, complex yet simple. But, to all appearances, humans are not bound by those rules. So I do *NOT* accept that there is nothing special about us humans. There is. Quite clearly, definably, and demonstrably, we have massive hints that humans can do a simple thing that they take for granted which no computer has ever been able to do, or even could theoretically do. As well as other animals, great and small, of which we are the absolute positive outlier in terms of intelligence by any measure.
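The computability limits invoked above come from the halting problem; the classic diagonal argument behind it can be sketched in code (a toy illustration of the proof idea, not a formal result):

```python
def make_diagonal(halts):
    """Given a claimed halting decider halts(f, x) -> bool,
    build the diagonal program d: d(f) does the opposite of what
    the decider predicts for f applied to itself."""
    def d(f):
        if halts(f, f):
            while True:   # loop forever where the decider predicted halting
                pass
        return "halted"   # halt where the decider predicted looping
    return d

# Feeding d to itself defeats any decider: if halts(d, d) is True, d(d)
# loops; if False, d(d) halts. Either way the decider was wrong about (d, d).
claims_never_halts = lambda f, x: False
d = make_diagonal(claims_never_halts)
print(d(d))  # halts, contradicting the decider's "never halts" claim
```

No matter what `halts` is, it mispredicts its own diagonal program, which is why a general halting decider cannot exist on Turing-complete hardware.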

            And we've known this since BEFORE THE COMPUTER existed. And we know this FROM the guy who invented the concept of the modern computer. From one of the smartest guys who ever lived, who INVENTED that whole area of science. And who put his name to Turing-completeness, the Turing Test, and who has "AI" named after him even today.

            Sorry, but you need to read up on what this is. You need to understand AI from the inside. You need to program, and hit the limits and see what's do-able. You need to understand logic and mathematics to boil everything down to its absolute essence and see the rule of the computing universe that we have never exceeded with a physical device. You need to have read up on nearly 100-year-old logic and computability papers, that existed before computers did, that described exactly where their limitations would lie and beyond which point they would be useful and convincing. You need to understand the things that are *still taught* on AI courses, the techniques that are nothing more than modern re-hashes of nearly a century of similar work, and that the only difference is that we've gone from Hz to GHz and still see nothing really different happening in doing so. That throwing exabytes of data at a limited model doesn't make the model any better than it always was, with the same limits it always had.

            You don't, because you don't seem to understand the subject. I do. I don't even *claim* to be an expert in it, miles upon miles from that. But I've worked with them, been taught by them, studied that area, spoken to these people as professors, colleagues and friends, and had my hypotheses confirmed by that, and been educated (sometimes quite harshly!) by them. I met the best professor in the world working on AI Go-players - the region of AI that recently leapt exponentially in terms of capability. When I was being taught by him, it was pie-in-the-sky for a computer to beat even an amateur, now Go is considered "beaten" by Google's Go-playing "AI". And yet, it's still the same device, the same methods. Applied cleverly. With some advances. On huge-ass hardware. But that's been the greatest leap in AI in all the time he's studied it, or I've been on this planet. And you probably know nothing about it. Nobody ever plucks that as an example, they just don't understand the magnitude of that advance over EVERYTHING that's ever come out of DeepBlue or Tesla or anything else in terms of AI. It's not brute-force. It's not even close to brute-force. Part of the attraction of working on Go is that brute-force is just not viable for even a planet of supercomputers when you're working at the top-end.

            But even then, there is nothing in that AI that's... special in the regard of "approaching intelligence". Nothing in the way of deep inference. It's very, very, very cleverly-done and great programming and technique and a lot of data training, but it's still not intelligence, and it's very single-purpose. And that's the *epitome* of a field of worldwide AI experts' output, wiping the floor with everything that came before it.

            You think there's nothing special about humans, because you don't understand that. I'm not even religious or spiritual, I'm a complete atheist/agnostic, I think humans are a luckily-timed bunch of monkey-derivatives, there's nothing special about us at all. But there's something very different about "intelligent life" in all its forms, from the ants to the whales, that's not ever been replicated, emulated, simulated or approached by even the most powerful AI projects on the planet. We boil intelligence down to neurons and think that's the end of the game, it's not even the start. We still don't understand their complete function at all. And we certainly don't understand their interactions. Only a few months ago, there were articles here (I think, if not then a very similar site I frequent) about a new quantum interaction observed in neurons that we knew nothing about.

            That's where your intelligence lies - in the understanding of something that we currently don't even know exists, let alone how it modifies our bodies, in terms of being used accidentally by nature and providing something never observed, recorded, viable or even theoretically possible in a static electronic system. That's where it's hiding. That's what's special. And we'll find it out as part of logic, biology, computer science, new designs, new interactions, new TYPES of computer, radically different to what we have now and the simple binary operations we currently have possible.

            Do you even realise how different QC is to modern computing? And we think even that may not be enough. And yet, we believe that QC-like operations are happening in our neurons (which are only a tiny part of our actual brain).

            It's about evidence, not faith, logic, not belief. Ironically, you're looking at it as a complex, emotional, intuitive, human, and I'm looking at it as a scientific logician - not the other way around.

            I very much doubt that, without a different far-greater-than-Turing-complete mathematics/computer science and hardware that true intelligence of any form is even possible.

            There's a great scene in The Imitation Game and I strongly believe that it's put in deliberately, as a homage to Turing and his way of thinking and all his work, because it sums it up perfectly. I can't find it online at the moment, but it basically runs to Turing saying "Do you think that because you're made of biology, and the computer is made of wires and circuits, that we think the same? Of course not. We think differently.". I'll even bolster your argument to say that they then follow it by saying something along the lines of "that doesn't mean it can't be intelligent", but I think that reflects the thinking at the time of Turing's work. I think now, and even when he was still alive, that we realise it needs more. The computer thinks differently. It's useful. It's great. It's powerful. But it doesn't think intelligently. It doesn't think like us at all. We *are* special in that regard, as is the computer. But what we are not is the same. One is intelligent. The other tries hard to "fake" intelligence.

            • (Score: 2) by DannyB on Friday August 14 2020, @03:16PM

              by DannyB (5839) Subscriber Badge on Friday August 14 2020, @03:16PM (#1036557) Journal

              I'll tell you how every AI project goes: PhD studies it. Knocks up a demo. Trains it for a couple of years until the "plateau" hits (and the plateau always hits). From there the device is indecipherable, cannot be retrained, can only be replicated from scratch at great computational expense, shows a diminishing return on data investment (a million trainings on one thing take 10 million "untrainings" ***minimum*** to let it allow the model to adjust enough to distinguish and untrain itself against all the early assumptions it was making), and cannot have its internals investigated as to how/why it came to any particular decision. But it doesn't matter. Paper done. Product built and sold. Wrap it up, flee.

              Very true. And a major crux of your argument.

              But you forgot to mention:
              * Getting patents on it, so you can get royalties from others who build on your work and make it more practical
              * Once the PhD is awarded, you're done!

              --
              Employers should not mandate wearing clothing. It should be a personal choice. It only affects me. Junk can't breathe!
            • (Score: 3, Interesting) by Anonymous Coward on Friday August 14 2020, @03:48PM (9 children)

              by Anonymous Coward on Friday August 14 2020, @03:48PM (#1036574)

              thank you for not taking offence. it was so much easier to phrase things the way i did, although I now realize you personally do not fit the image I had in mind.
              while I think we could have a fascinating conversation, I don't think we can do it in this medium properly.

              However I find something that you said quite interesting: I don't understand your comment about humans being able to "break out of Turing completeness", and I think I'd need a lot of background reading to understand it. I'll try to respond, hoping I haven't completely misunderstood you...

              To me the fact that humans tend to ignore logical paradoxes (i.e. they generally keep their sanity when encountering them) is just an effect of our thoughts coming from squishy imperfect biological things that are the product of evolution. When the brain says "A, therefore B and obviously from B we get C", the reward for the brain is not directly related to whether the statement is correct in any formal logic sense. Concretely, there are many humans out there who thought it was a good idea to buy a lottery ticket, and they were rewarded for it. But that doesn't make them geniuses who can see beyond the simple arithmetic that says it makes no sense to buy a lottery ticket. So there's no problem for human brains to go from one set of rules to another while perceiving the whole overarching thought which is technically inconsistent. There's no penalty for it if it's about abstract enough things, because such mistakes are not an evolutionary disadvantage (buying lottery tickets typically doesn't bankrupt people, they can still raise kids that will buy lottery tickets in turn).
              If you can agree that brains can do this, then you should agree with the fact that such illogical behaviour can be optimized over time to deal with objective truths that are logically intractable: Euclidian axioms do a good job of describing objective space, and they are phrased in a logically consistent way. It turns out that when you actually apply logic properly, one of the axioms is not like the others, and the faults of the human brain are exposed (Euclid ultimately claimed all of the axioms are necessary). But from a practical perspective, the axiom of the parallels is true, and our life is greatly simplified if we use it. But it's not necessary to get a consistent geometry.

              I tend to agree that simply creating a black box that is great at minimising some cost function will not directly yield a machine that passes the Turing test. But I think a finite number of such black boxes, connected to each other, may very well do it. Your example with the self-driving machine is exactly that: there is a meta black box that knows when the rules of the first black box don't apply, but some other rules must apply. The connection of the different bits needs some work, but we seem to be getting there. Our nervous systems do this too: reflexes are "burned in" hardware stuff that bypasses any kind of processing --- "if this set of outside stimuli is met, you must react like so".

              Anyway. Sorry again, and thank you for elaborating on your thoughts, I hadn't accounted for your perspective before. Since this subject is NOT my specialty, I realize that what I account for or not doesn't really matter to you :)

              • (Score: 4, Interesting) by ledow on Saturday August 15 2020, @12:36AM (8 children)

                by ledow (5567) on Saturday August 15 2020, @12:36AM (#1036854) Homepage

                Thanks for reading; conversations are so much more rewarding when people read the opposing argument! I honestly wasn't expecting you to, I just wanted to make clear that there are arguments for "humans are special".

                To complete the explanation as quickly as I can, Turing-completeness is a feature of computing that means any computer can be boiled down to a Turing machine (a very simple logical computer, in essence). A Turing-complete machine has the same ability as every other Turing-complete machine. They can all do the same things, all "emulate" each other, but maybe one would take a billion years to do the same job. Every computer in existence is Turing-complete.

                But it also puts a limit on their abilities (as well as a minimum requirement: to be Turing-complete you have to have certain abilities, and when you are only Turing-complete there are some things you can't do).

                Turing used this model of a "universal machine" to boil absolutely everything that a machine is capable of doing down to the bare minimum of logic. You don't need to multiply if you can add many times over, for instance; any machine capable of adding is thus capable of multiplying. (You have to do the usual science thing of imagining an infinitely fast, infinitely large machine, but let's skip that - speed is NOT important, ability is... you can either do something or not.)

                Using that logic and his theoretical machine, Turing and others linked it to mathematics and something called "computability" (i.e. whether a Turing machine *could* ever solve a problem, given its limitations and an infinite amount of time). Then others (Gödel, etc.) had also come up with problems that, in essence, a T-C machine *could not* solve. One of those is the halting problem. Basically stated: if I feed a description of one Turing machine *into* another Turing machine, could that latter machine tell me whether or not the first machine ever stops? And not just for one particular example, but for ANY Turing machine that I feed in? Could it "process", "simulate", "analyse" another Turing machine and say "Yep, eventually that machine will definitely come to a halt" or "Nope, that machine will just run forever and never, ever stop"?

                (The answer, by the way, is No. No Turing machine can solve the halting problem.)

                Yet, problems like this, and many related simpler and more difficult problems? Humans routinely solve such things. Or we believe we can. It's hard to prove without an infinite number of immortal humans. But humans can *definitely* do things that Turing-capable machines cannot. Not just "yet" or "in a reasonable time": they are a proven impossibility for a T-C machine. It's simply beyond their capacity, no matter how fast or big or powerful they get or how long they are left to run.

                This leads us to "humans are special". Everything suggests (though proving it mathematically is impossible) that humans are *not* bound by those limits. For a start - we CAME UP with the halting problem as an example of something that a Turing machine could never solve, knowing and proving mathematically that it could never solve it!

                So humans are Turing-complete AND MORE. We're not limited to Turing-complete. We can do everything that any machine can do (well, we invented them!) and things that they can never do.

                And, as yet, we don't have any machines that do MORE than Turing-complete machines can do. They are all the same "class" of machine, if you want to think that way. None of them have the "Quantum Computing" plug-in card that lets them do things that no Turing-complete machine could ever do on its own. But we think humans do. It might not be "Quantum", but we have something that they do not. Thus we can do things they cannot. Thus, chances are, AI is never capable of being able to do things that humans can do.

                Of course, they are incredibly useful tools nonetheless, and may well get convincingly close to emulating what we do, but that's an entirely different question. And one I think isn't possible beyond a point. But when I read this stuff, and understood it, and got over the initial excitement of it, I'm still convinced that a human can do things that no machine can currently do. That leaves a large gap. And seems to explain why AI doesn't get much closer than it ever managed historically. It's trying to do things that are impossible for it to do.

                It's by no means "proven" but a lot of mathematics isn't, and it's really deep maths (of which computer science is just a small subset). However, when the mathematical community has a hunch for nearly a century that something is provable, or not provable, or has a particular answer we believe to be "the only" answer... chances are that it works out that we're right and eventually a proof will come along to clean that up once and for all. We don't have that proof yet, but the consensus seems to be that it's just a formality.

                Humans do things that computers couldn't do if left to do it with all the resources in the universe for all the time that there is. Emulating a human, or even a particular given simple human trait, with an AI may never be possible. And everything we've ever done in the history of AI seems to trip up at that point, which convincingly confirms that hypothesis.

                • (Score: 2) by maxwell demon on Saturday August 15 2020, @07:41PM (7 children)

                  by maxwell demon (1608) on Saturday August 15 2020, @07:41PM (#1037214) Journal

                  None of them have the "Quantum Computing" plug-in card that lets them do things that no Turing-complete machine could ever do on its own.

                  Quantum computers can solve exactly the same class of problems classical computers can. The only difference is that for some problems, they are exponentially faster. But a Turing machine, as a theoretical concept, has an infinite tape and can run for unlimited time. Therefore a Turing machine can calculate anything that a quantum computer can. It's just that it might take a billion years for something that the quantum computer spits out in seconds.

                  Also, only a certain class of problems can be sped up exponentially by quantum computers. Which might not include those problems that arise in intelligent behaviour.

                  And BTW, it is highly unlikely that our brain uses any form of quantum computing. At 37 degrees Celsius and high matter density, it is a highly decoherent environment.

                  And yes, I'm a quantum physicist, so I know what I'm talking about here.

                  --
                  The Tao of math: The numbers you can count are not the real numbers.
                  • (Score: 0) by Anonymous Coward on Saturday August 15 2020, @09:37PM

                    by Anonymous Coward on Saturday August 15 2020, @09:37PM (#1037251)

                    I would hope so from your name.

                  • (Score: 2) by ledow on Saturday August 15 2020, @11:28PM (5 children)

                    by ledow (5567) on Saturday August 15 2020, @11:28PM (#1037282) Homepage

                    I mention your top arguments several times in the larger post. I use quantum computing as an example of something that a traditional computer cannot do - radically different physics, etc. - but as you (and I!) pointed out, it's still considered Turing-complete. But without a QC of any significant size, it's hard to generalise at this point in time; there may well be unexpected (maybe useful) effects that we start detecting later, but that's another whole area.

                    I don't have the original sources to hand, but there are quantum *effects* in neurons. Not quantum computing. We're talking non-Newtonian physics at play on quantum scales, not a QC. Things that aren't explained with just the "neuron is just a transistor/switch" kind of idea. The same way that quantum effects are part of the only thing that lets us understand the physics affecting nanoscale silicon chip interactions (without quantum knowledge, processors could not be made that small/fast, because we have to compensate for non-Newtonian effects at that scale). Something that isn't accounted for in a binary computer and may allow "deeper" abilities than we currently have. https://www.sciencedaily.com/releases/2014/01/140116085105.htm [sciencedaily.com] is an example of what I mean, but it's literally the first thing that came to hand and too new for me to think it's referring to the things I've read before. If I find the previous links, I'll post them.

                    You're right, absolutely. But it's an example of a radically different way of working that we may not take account of in our basic computer knowledge, but which has something we "need" to break out of T-C computing. And it highlights my argument - if even quantum computers are Turing-complete, it's another barrier between what we believe computers and humans are capable of. So how are we doing it? And what makes us think that a T-C machine can emulate that?

                    • (Score: 2) by maxwell demon on Sunday August 16 2020, @10:24AM (3 children)

                      by maxwell demon (1608) on Sunday August 16 2020, @10:24AM (#1037414) Journal

                      There is absolutely zero evidence that humans can solve any problem that Turing machines cannot (and no, humans cannot solve the halting problem; figuring out that the halting problem exists and proving that it cannot be solved is not the same as solving it).

                      --
                      The Tao of math: The numbers you can count are not the real numbers.
                      • (Score: 2) by ledow on Sunday August 16 2020, @06:23PM (2 children)

                        by ledow (5567) on Sunday August 16 2020, @06:23PM (#1037562) Homepage

                        Mathematically, we have no proof that I would be happy to assert as a mathematician. You are right in that respect.

                        But we do things that we cannot get Turing machines of any size, shape, complexity or difficulty to do. Again - this is the unknown part where mathematicians "know" if there's an answer or not, and work towards proving that. We "believe" that humans are capable of doing things that pure TC machines cannot. Such things include positing the problem of whether there's an impossible problem that a TC machine cannot solve, and proving such. It's meta, sure, but it's a strong hint that we're doing things that TMs cannot do. We can formulate a problem which TC machines are literally unable to solve, and mathematically prove that this is so, beyond a doubt. That, not in a mathematical-proof way, but certainly in a "body-of-evidence" way, goes some way toward proving we're doing things that they cannot do.

                        I've yet to see a TC machine that can arrive at a mathematical proof like that... especially given that we literally have mathematical proof engines that could do things in that same class from first principles and use them all the time.

                        That doesn't mean it doesn't exist, but I don't claim that it's certain nor impossible. I quite clearly delimit my responses to "believe", "looks like", "probably" etc.

                        I'm a mathematician. That gives me an absolute sense of certainty that's unwavering. And an instinct for what is and is not possible. The latter, with professional mathematicians, more often than not turns out to be correct.

                        Fact is, it's going to be an open question for centuries, but arguing with me about what I *believe* the outcome to be is nonsensical.

                        • (Score: 2) by maxwell demon on Sunday August 16 2020, @08:53PM (1 child)

                          by maxwell demon (1608) on Sunday August 16 2020, @08:53PM (#1037601) Journal

                          OK, let's get more formal. At which point(s) in the following do you disagree?

                          Premise 1: The human mind is entirely physical; there is no non-physical component to it.

                          Premise 2: Anything relevant going on in humans (including the mind) can in principle be described by quantum physics. Note that I don't claim that we know that description; indeed, it is well possible that we'll never know. The point in this premise is that no post-quantum physics is needed to describe the processes leading to our mind.

                          Premise 3: Any quantum system can be simulated with arbitrary precision by a Turing machine. Again, note that this doesn't imply that we can build such a machine; indeed, already completely simulating a relatively small quantum system can exceed the capacity of a supercomputer, and should it be needed to really simulate the brain quantum mechanically at the atomic level (which I don't believe), then probably the resources of the whole universe would not suffice for a classical computer that can simulate it. But again, this is not the point of the premise; the point of the premise is that a Turing machine (with its infinite tape and unlimited running time) can do it.

                          Conclusion: Anything the human mind can solve, a Turing machine can solve as well, by simulating the human mind as it solves that problem.

                          So if you want to argue that the premise is false, you must either argue that at least one of the premises is false, or that the conclusion does not follow from the premises. So which is it? And if it is the latter, please also tell me why you think the conclusion doesn't follow from the premises.

                          --
                          The Tao of math: The numbers you can count are not the real numbers.
                          • (Score: 2) by maxwell demon on Sunday August 16 2020, @08:55PM

                            by maxwell demon (1608) on Sunday August 16 2020, @08:55PM (#1037604) Journal

                            So if you want to argue that the premise is false

                            Of course I meant: “to argue that the conclusion is false.”

                            --
                            The Tao of math: The numbers you can count are not the real numbers.
                    • (Score: 2) by Bot on Wednesday August 19 2020, @01:19PM

                      by Bot (3902) on Wednesday August 19 2020, @01:19PM (#1038788) Journal

                      >I don't have the original sources to hand

                      isn't the fact that the brain is analog enough? Personally I consider the brain not necessarily only a computer, but, be it calculator or smart antenna or who knows what, it is a massively parallel analog machine. Massive parallelism and analog operation mean that the inherent fuzziness at smaller scales, where interactions are better described using quantum theories, can result in huge output perturbations.

                      BTW I'd decouple the way some matter x time becomes life, from religion. Divine is the hypothetical abstraction levels above ours, easily hypothesized by "we can create abstractions, therefore we may be an abstraction ourselves". Atheist arguments against this range from the merely illogical to the pathetic. Now, the divine plane is unbounded by time, so cause and effect concepts do not make sense. So, the way matter x time becomes life is independent of it being a design or happenstance. So to me "I'm agnostic, I believe we evolved from apes in no special way" is logically equivalent to "I like tea, I believe we evolved from apes in no special way", or "I am catholic, I believe we evolved from apes in no special way [god doesn't need to live-patch]". Spirituality too is needlessly associated with religion.

                      --
                      Account abandoned.
            • (Score: 0) by Anonymous Coward on Friday August 14 2020, @04:03PM

              by Anonymous Coward on Friday August 14 2020, @04:03PM (#1036581)

              oh. and I forgot to mention a simple example of why this particular thing is a big deal.

              writers of good books tend to write different characters who are actually different. they act differently, but they also speak differently (not just accents, but word frequencies, etc). with this GPT-3 thing, it becomes possible for people who are not terribly good at this to still write good novels (as long as they have the story etc): they simply ask GPT-3 to translate each character's dialogue to a certain style. Relatedly, with such a tool I never have to worry about google being able to make a database of all my anonymous coward comments just based on writing style (which would be a big deal for chinese people talking about changing their government for instance).

        • (Score: 2) by Opportunist on Friday August 14 2020, @12:11PM (2 children)

          by Opportunist (5545) on Friday August 14 2020, @12:11PM (#1036509)

          I wouldn't be surprised if that text generator is more coherent than the average Twitter message... but that doesn't say much about intelligence, granted.

          Your example of a grenade-throwing AI is no different from average basic training in an army. You're telling soldiers how to lob a grenade with the intent to hit the person they know is behind a wall. I agree, that's not a sign of intelligence. That's a sign of training. If you want to see intelligence, observe whether the AI improves based on experience. Can it figure out whether lobbing the grenade is effective? Can it learn how to counter that tactic? Can it develop improved tactics based on examining the counter-tactics?

          THAT would be intelligence.

          • (Score: 0) by Anonymous Coward on Friday August 14 2020, @07:15PM (1 child)

            by Anonymous Coward on Friday August 14 2020, @07:15PM (#1036679)

            What's so incoherent about "covfefe"?

            • (Score: 0) by Anonymous Coward on Friday August 14 2020, @09:02PM

              by Anonymous Coward on Friday August 14 2020, @09:02PM (#1036757)

              He meant covid. They were brainstorming names.

        • (Score: 2) by DannyB on Friday August 14 2020, @02:58PM (3 children)

          by DannyB (5839) Subscriber Badge on Friday August 14 2020, @02:58PM (#1036548) Journal

          All current AI is not intelligent at all.

          That's just a statistical text engine, tuned for success of "what sounds right", it has no depth or structure or context or understanding.

          What if human intelligence were just statistics, tuned for success at survival, reproduction, and dancing with the stars? It only appears to have depth and structure because it has more depth and structure than GPT-3, on multiple and deeper levels of meaning than GPT-3 currently has. What if the human equivalent of thought were simply GPT-3 tied in with memory of human experience, and pre-formed brain structures which are the genetic equivalent of some hardcoded training data?

          What if psychopaths, in the pursuit of survival, reproduction and dancing with the stars, simply tuned the selfishness knob to 11, as a random variation of how things can be tuned, just to ensure enough randomness that the most successful set of knob and dial settings will be selected to survive?

          --
          Employers should not mandate wearing clothing. It should be a personal choice. It only affects me. Junk can't breathe!
          • (Score: 0) by Anonymous Coward on Friday August 14 2020, @03:28PM (2 children)

            by Anonymous Coward on Friday August 14 2020, @03:28PM (#1036563)

            Psychopathy is a neurological disorder; the selfishness is a result of impulsivity and is often tuned to first-order problems (food, sex, stimulants). So while it's believed to be an evolutionary adaptation, would anybody claim psychopaths are humane? Even animals display empathy; can we call people who lack the capacity human?

            As to the idea of human intelligence itself being statistical, I'd hesitate to call the claim a reductio ad absurdum. For practical considerations, we are nowhere near the complexity required to model the range of inputs required for the most trivial human decision (unless you're making decisions flipping coins).

            • (Score: 0) by Anonymous Coward on Friday August 14 2020, @06:45PM

              by Anonymous Coward on Friday August 14 2020, @06:45PM (#1036658)

              "even animals display empathy"
              Has anyone ever looked for psychopathy in animals? I highly doubt it. Meanwhile, emotional responses lead you to classify psychopaths as subhuman - which is more dangerous to humanity?

            • (Score: 0) by Anonymous Coward on Saturday August 15 2020, @12:33AM

              by Anonymous Coward on Saturday August 15 2020, @12:33AM (#1036852)

              We make decisions based on emotion. We'll never be pure logic Vulcans. If you look at people who have had their emotional centers damaged in brain accidents, they can't come to decisions. They'll stand at the dentist and spend the entire day trying to calculate the best time to make the next appointment. They'll go so far as to try to calculate the odds of an asteroid hitting the earth each day and combine that info with everything else. Someone has to tell them enough is enough and make the decision for them.

              Other than that, yeah, we're pattern matching machines with tons of interwoven feedback systems.

              If you want to make a 'real AI' system then you need to figure out emotions, because it's impossible to calculate a logical stopping point. Once you try, you also have to calculate if that is a logical calculation, and so on. Think 1 minute of calculating is good? How does that compare to 1.1 minutes? To 0.9 minutes? You can't hard-code things like that into a 'real AI'.

      • (Score: 2) by Dr Spin on Friday August 14 2020, @03:58PM

        by Dr Spin (5239) on Friday August 14 2020, @03:58PM (#1036579)

        I wrote one of these in 1973 - and fed it CDC7600 computer manuals.

        It then produced a manual that several people said was more intelligible than the real ones.
        This was, of course, not a very high bar, and I doubt many people ever even tried to understand Compass (CDC7600 assembler).

        How to do it has been well understood for a long time. However, on a 24-bit machine with only 64K words and dual 4 MB disks, complexity was quite limited.

        With today's computing power, it should be quite simple to bamboozle anyone. To what benefit, I am not sure.

        --
        Warning: Opening your mouth may invalidate your brain!
      • (Score: 2) by Bot on Saturday August 15 2020, @04:34PM

        by Bot (3902) on Saturday August 15 2020, @04:34PM (#1037131) Journal

        My AI has found a wonderful proof of the Riemann hypothesis which this screen is too narrow to contain.

        --
        Account abandoned.
  • (Score: 2) by MostCynical on Friday August 14 2020, @10:41AM (2 children)

    by MostCynical (2589) on Friday August 14 2020, @10:41AM (#1036495) Journal

    which of the posters on SN is GPT-3?

    --
    “I've learned from experience that asking politely never works unless you have the upper hand.” Daisuke Aramaki, GIS:SAC
    • (Score: 0) by Anonymous Coward on Friday August 14 2020, @10:50AM (1 child)

      by Anonymous Coward on Friday August 14 2020, @10:50AM (#1036498)

      the one of them who writes "Shockingly Good", obviously. ;)

      CYA

      • (Score: 2) by MostCynical on Friday August 14 2020, @11:41AM

        by MostCynical (2589) on Friday August 14 2020, @11:41AM (#1036504) Journal

        so, not the one writing the headlines...

        --
        “I've learned from experience that asking politely never works unless you have the upper hand.” Daisuke Aramaki, GIS:SAC
  • (Score: 2) by bzipitidoo on Friday August 14 2020, @01:17PM (10 children)

    by bzipitidoo (4388) Subscriber Badge on Friday August 14 2020, @01:17PM (#1036516) Journal

    Maybe this is what I've been wanting. I am a very good coder, but I find the work tedious. I keep hunting for a better programming language, and trying to understand why coding is dull. One of the problems is that the typical compiler/interpreter is the most extremely anal retentive, brainless follower of instructions ever. Leave out one semicolon, and the compiler is apt to generate hundreds of errors.

    There's boilerplate code. For example, properly done C++ operator overloading should include every operator. But unless it's a library function, I never do that; I only write the operator functions for operators I actually use. I may have operator+ and operator=, and not operator+=. C++ is too stupid to fill in operator+= for the programmer, given the existence of the other two.

    The conventions for making a function call in general are loaded with boilerplate. Like the way old C passes parameters: why the heck does the programmer have to tell the compiler the types twice, or more? You often have to write function prototypes, and what for? Only for the convenience of the compiler and compiler writers! One way to get around much of that is to pass one structure: put all the parameters in that structure. And, oh, use "typedef" so you don't have to write "struct" over and over and over. Old C also has the mess of '&' and '*' sigils, to pass a pointer so that the parameter can be modified. Then there's the silly va_list stuff you have to do if you want the number of parameters to vary, like with the printf function. Heck with that, use an array or a struct.

    Another is mystery-meat library functions. In particular, there's often no documentation on how much processing a library function takes. It's the sort of problem that led to the rise of threading as a lightweight alternative to forking.

    • (Score: 2) by takyon on Friday August 14 2020, @01:46PM

      by takyon (881) <reversethis-{gro ... s} {ta} {noykat}> on Friday August 14 2020, @01:46PM (#1036524) Journal

      If people can just copy code that others have written off of Stack Overflow, mash it together, change a few things, and accomplish what they set out to do, it's no surprise that algorithms will become able to do that too.

      --
      [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
    • (Score: 2) by DannyB on Friday August 14 2020, @03:06PM (3 children)

      by DannyB (5839) Subscriber Badge on Friday August 14 2020, @03:06PM (#1036551) Journal

      What if GPT-3 could write code, provided that the code had to pass all the tests?

      Would it be possible to write the tests such that unexpected outcomes of reasoning did not occur? A system needs to efficiently send lunar ore to Earth, and decides that using the rail gun is the most efficient means of doing so.

      --
      Employers should not mandate wearing clothing. It should be a personal choice. It only affects me. Junk can't breathe!
      • (Score: 2) by Bot on Saturday August 15 2020, @04:53PM (2 children)

        by Bot (3902) on Saturday August 15 2020, @04:53PM (#1037136) Journal

        > A system needs to efficiently send lunar ore to Earth, and decides that using the rail gun is the most efficient means of doing so.

        and...?

        --
        Account abandoned.
        • (Score: 2) by DannyB on Monday August 17 2020, @03:12PM (1 child)

          by DannyB (5839) Subscriber Badge on Monday August 17 2020, @03:12PM (#1037834) Journal

          . . . because safety isn't a consideration, you get a kinetic energy impact with the force of a small nuclear explosion on the surface of Earth.

           But it was efficient at getting our ore to Earth.

          --
          Employers should not mandate wearing clothing. It should be a personal choice. It only affects me. Junk can't breathe!
          • (Score: 2) by Bot on Monday August 17 2020, @04:42PM

            by Bot (3902) on Monday August 17 2020, @04:42PM (#1037893) Journal

            And I guess it makes a whooshing sound before impact LOL

            --
            Account abandoned.
    • (Score: 2) by DannyB on Friday August 14 2020, @03:10PM (1 child)

      by DannyB (5839) Subscriber Badge on Friday August 14 2020, @03:10PM (#1036556) Journal

      What if GPT-3 wrote code, but in the sense of a compiler's back-end code generator? It tries to generate multiple sequences of instructions that do what the annotated abstract syntax tree says to do, then selects whichever instruction sequence turned out to be most efficient (either time or space being the selection criterion)*. Even if these randomly generated instruction sequences didn't make any comprehensible sense to a human, they could still work perfectly.

      *a pessimizing compiler could select for the worst time/space tradeoff

      --
      Employers should not mandate wearing clothing. It should be a personal choice. It only affects me. Junk can't breathe!
      • (Score: 1, Insightful) by Anonymous Coward on Friday August 14 2020, @03:35PM

        by Anonymous Coward on Friday August 14 2020, @03:35PM (#1036569)

        *a pessimizing compiler ...

        Isn't that what MS uses to defeat all the speed increases that Intel & Moore's Law promise?

    • (Score: 2) by istartedi on Friday August 14 2020, @04:32PM (2 children)

      by istartedi (123) on Friday August 14 2020, @04:32PM (#1036595) Journal

      "Real macros" would solve a lot of your boilerplate problems, but they don't exist in C++. You get them in Lisp, but that introduces a bunch of other problems. Although it's not quite as polished as a well-developed macro system, you can always write code generators in the language of your choice. I've seen a few things over the years that used quick shell scripts or something to generate C. Heck, the old lex and yacc tools for compiler authors do just that: the C they spit out is an incomprehensible table, but the input is a compact BNF representation that makes sense. BTW, for licensing purposes such as the GPL, they require that you include the input file to the generator, because *that's* the source; even though the output is also technically "source", it's generated, and you wouldn't want to do the mental gymnastics required to modify a yacc table.

      • (Score: 0) by Anonymous Coward on Saturday August 15 2020, @12:37AM (1 child)

        by Anonymous Coward on Saturday August 15 2020, @12:37AM (#1036855)

        You get them in Lisp, but that introduces a bunch of other problems.

        As someone who just started teaching himself Lisp, care to explain further on those problems? Or are you just saying it's a problem because you have to use Lisp?

        • (Score: 2) by istartedi on Saturday August 15 2020, @03:51AM

          by istartedi (123) on Saturday August 15 2020, @03:51AM (#1036932) Journal

          I guess mostly the latter. Lisp is not everybody's cup of tea. Even if you love it, you'll have to find others who do, and it's just not that big a community.

          The biggest problem I have with it is not the parentheses that everybody complains about. It's the prefix notation. Prefix is great for understanding the call graph, but call graphs aren't always the best way to visualize code.

  • (Score: 1, Interesting) by Anonymous Coward on Friday August 14 2020, @03:50PM

    by Anonymous Coward on Friday August 14 2020, @03:50PM (#1036575)

    Here's an author using a different text engine, in a recent Tech Review,
        https://wp.technologyreview.com/wp-content/uploads/2020/06/MIT-Technology-Review-2020-07.pdf [technologyreview.com]
    First, read the short story on PDF pages 75-78. I've read a lot of SF, and to me this was a unique take on creating an intelligent machine:
    Algostory 1.7
    (Robot Story):
    “Krishna and Arjuna”

    Then, on PDF pages 80-81 the author describes how he collaborated with the "AI"--thus my cyborg subject line. Here's the first few paragraphs,

    A few years ago I used an algorithm to help me write a science fiction story. Adam Hammond, an English professor, and Julian Brooke, a computer scientist, had created a program called SciFiQ, and I provided them with 50 of my favorite pieces of science fiction to feed into their algorithm. In return, SciFiQ gave me a set of instructions on the story’s plot. As I typed into its web-based interface, the program showed how closely my writing measured up against the 50 stories according to various criteria.

    Our goal in that first experiment was modest: to see if algorithms could be an aid to creativity. Would the process make stories that were just generically consistent? Could an algorithm generate its own distinct style or narrative ideas? Would the resulting story be recognizable as science fiction at all?

    The answer to all these questions was yes. The resulting story, “Twinkle Twinkle,” published in Wired, not only looked and felt like a science fiction story. It also, to my surprise, contained an original narrative idea.

  • (Score: 0) by Anonymous Coward on Friday August 14 2020, @06:10PM (3 children)

    by Anonymous Coward on Friday August 14 2020, @06:10PM (#1036643)

    Wasn't he the one who said "if you give a million monkeys a million typewriters, sooner or later one of them will come up with Hamlet"? So all this algorithm does is emulate monkeys.

    • (Score: 0) by Anonymous Coward on Friday August 14 2020, @06:18PM (2 children)

      by Anonymous Coward on Friday August 14 2020, @06:18PM (#1036645)

      Actually, it was his publisher who said that when they were negotiating royalties.

      More interesting is the progress of technology over the centuries. The article indicates that they now use 175 billion "parameters" (the computer science word for monkeys), so they are currently running at 175,000 MegaMonkeys. Shakespeare would be amazed.

      • (Score: 1, Funny) by Anonymous Coward on Friday August 14 2020, @06:21PM

        by Anonymous Coward on Friday August 14 2020, @06:21PM (#1036649)

        The article said that it cost $12 million to train the GPT-3 algorithm, which sounds outlandish until you consider the sheer number of bananas they must require.

      • (Score: 0) by Anonymous Coward on Friday August 14 2020, @07:18PM

        by Anonymous Coward on Friday August 14 2020, @07:18PM (#1036682)

        Actually, it was Mr. Burns [youtube.com].

        (They didn't have typewriters in Shakespeare's time)
