
posted by hubie on Wednesday January 11 2023, @11:13PM
from the GPT-3-is-so-last-month dept.

Although ChatGPT can write about anything, it is also easily confused:

As 2022 came to a close, OpenAI released an automatic writing system called ChatGPT that rapidly became an Internet sensation; less than two weeks after its release, more than a million people had signed up to try it online. As every reader surely knows by now, you type in text, and immediately get back paragraphs and paragraphs of uncannily human-like writing, stories, poems and more. Some of what it writes is so good that some people are using it to pick up dates on Tinder ("Do you mind if I take a seat? Because watching you do those hip thrusts is making my legs feel a little weak."). Others, to the considerable consternation of educators everywhere, are using it to write term papers. Still others are using it to try to reinvent search engines. I have never seen anything like this much buzz.

Still, we should not be entirely impressed.

As I told NYT columnist Farhad Manjoo, ChatGPT, like earlier, related systems, is "still not reliable, still doesn't understand the physical world, still doesn't understand the psychological world and still hallucinates."

[...] What Silicon Valley, and indeed the world, is waiting for, is GPT-4.

I guarantee that minds will be blown. I know several people who have actually tried GPT-4, and all were impressed. It truly is coming soon (Spring of 2023, according to some rumors). When it comes out, it will totally eclipse ChatGPT; it's a safe bet that even more people will be talking about it.

[...] In technical terms, GPT-4 will have more parameters inside of it, requiring more processors and memory to be tied together, and be trained on more data. GPT-1 was trained on 4.6 gigabytes of data, GPT-2 was trained on 46 gigabytes, and GPT-3 was trained on 750 gigabytes. GPT-4 will be trained on considerably more, a significant fraction of the internet as a whole. As OpenAI has learned, bigger in many ways means better, with outputs more and more humanlike with each iteration. GPT-4 is going to be a monster.
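For a rough sense of that scaling, here is a minimal Python sketch using only the corpus sizes quoted above (GPT-4's training-set size has not been disclosed, so it is omitted):

    # Growth in GPT training-corpus size, using only the figures quoted
    # above. GPT-4's corpus size has not been disclosed, so it is omitted.
    corpus_gb = {"GPT-1": 4.6, "GPT-2": 46.0, "GPT-3": 750.0}

    versions = list(corpus_gb)
    for prev, curr in zip(versions, versions[1:]):
        factor = corpus_gb[curr] / corpus_gb[prev]
        print(f"{prev} -> {curr}: {factor:.0f}x more training data")

    # Prints:
    # GPT-1 -> GPT-2: 10x more training data
    # GPT-2 -> GPT-3: 16x more training data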

The article goes on to list 7 "dark predictions" that, if realized, may signal it's time to move on.

Original Submission

 
This discussion was created by hubie (1068) for logged-in users only, but now has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 4, Insightful) by looorg on Wednesday January 11 2023, @11:54PM (3 children)

    by looorg (578) on Wednesday January 11 2023, @11:54PM (#1286426)

    The article goes on to list 7 "dark predictions" that, if realized, may signal it's time to move on.

    Seems the predictions are just more of the same. It will have all the flaws of ChatGPT etc. So what exactly is scary and new again? Is it that the version number incremented to a FOUR?!

    The stack of training data is increasing at a rapid rate per version. But it appears to have all the same faults. It's just parroting text back that sometimes looks like genius and sometimes makes no sense and feels like it was written by an alien. It still can't reason and it understands nothing of the world and humans. It's just burping up words. That might do for a lot of things, but beyond that it's just as flat as the current version. It's like a gazillion little monkeys with typewriters -- eventually they'll produce something solid among all the crap.
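
    (To make the "parroting" point concrete, here's a toy word-level Markov chain in Python. This is a crude sketch -- GPT is a transformer, not a bigram model -- but it shows what purely statistical regurgitation looks like: plausible word sequences with no model of the world behind them.)

        import random
        from collections import defaultdict

        # Toy bigram "parrot": learns which word follows which, then
        # regurgitates statistically plausible continuations. No meaning,
        # no world model -- just burping up words.
        def train(text):
            model = defaultdict(list)
            words = text.split()
            for a, b in zip(words, words[1:]):
                model[a].append(b)
            return model

        def generate(model, start, length=15):
            word, out = start, [start]
            for _ in range(length):
                followers = model.get(word)
                if not followers:   # no observed continuation: stop
                    break
                word = random.choice(followers)
                out.append(word)
            return " ".join(out)

        model = train("the cat sat on the mat and the dog sat on the rug")
        print(generate(model, "the"))  # e.g. "the dog sat on the mat and the cat ..."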

    • (Score: 5, Interesting) by RamiK on Thursday January 12 2023, @01:10AM

      by RamiK (1813) on Thursday January 12 2023, @01:10AM (#1286431)

      It's just parroting text back that sometimes looks like genius and sometimes makes no sense and feels like it was written by an alien...It's like a gazillion little monkeys with typewriters -- eventually they'll produce something solid among all the crap.

      It doesn't work like that, but I won't waste your time explaining it, since you can get a good idea of the iterative creative process involved when working with these types of systems by playing a few rounds of AI Dungeon: https://play.aidungeon.io/ [aidungeon.io]

      Once you do, you'll understand how CNET shaved the "creative writing" requirement off their financial pieces by having an AI do the parroting and a human do the fact-checking and editing: https://futurism.com/the-byte/cnet-publishing-articles-by-ai [futurism.com]

      --
      compiling...
    • (Score: 2, Interesting) by Anonymous Coward on Thursday January 12 2023, @09:53AM (1 child)

      by Anonymous Coward on Thursday January 12 2023, @09:53AM (#1286464)
      The field of AI is still stuck in the alchemy age. Hasn't hit the chemistry age yet.

      Do note that alchemists in the past still managed to accomplish many useful things despite them not really understanding what's going on, or having a good model.

      If scientists have really solved the problem of creating a system that uses low-IQ building blocks to reliably produce higher-IQ outputs, then shouldn't they be able to create systems that get groups/committees of humans to produce significantly more intelligent results than even the smartest individual member can achieve?

      Or could it be possible that some human neurons are actually very smart... And the brain systems are more like averaging/collating their outputs for redundancy?
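
      (A toy illustration of that averaging idea, via the Condorcet jury theorem: if each unit is right only 55% of the time but errors are independent, a majority vote becomes far more reliable than any single unit. A Python sketch, not a claim about real neurons or committees -- the independence assumption is exactly what's in doubt:)

          import random

          # Majority vote over n independent "weak" units, each correct with
          # probability p. Reliability comes from independence and numbers,
          # not from any single unit being smart.
          def majority_accuracy(p=0.55, n_units=101, trials=10_000):
              wins = 0
              for _ in range(trials):
                  correct = sum(random.random() < p for _ in range(n_units))
                  wins += correct > n_units // 2
              return wins / trials

          print(majority_accuracy())               # ~0.84 for 101 units at 55%
          print(majority_accuracy(n_units=1001))   # approaches 1.0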

      A very large brain is not required for intelligence. A crow with a walnut sized brain can be pretty smart. Comparing the mistakes that crows make vs the mistakes that ChatGPT makes, it seems to me that the crows actually have a better working model of the world - they actually have some "understanding".
  • (Score: 4, Interesting) by istartedi on Thursday January 12 2023, @12:48AM (5 children)

    by istartedi (123) on Thursday January 12 2023, @12:48AM (#1286429) Journal

    I'm given to understand that microprocessors got a *huge* shot in the arm when they went to computer-aided design. Prior to that, chips were laid out on paper like big blueprints. Once they got that into a computer, positive feedback loops ensued that got us to the multi-million transistor designs we have now. That would be impossible with hand-drawn masks!

    I'm wondering what the crossover point is here--where you can ask GPT why it sucks, what it needs to suck less, and get an answer that doesn't suck.

    If that happens it could be an exciting and scary time of us asking increasingly better versions how to do things, and being amazed at the answers. Most of us can't comprehend quantum physics. At some point, this technology could produce something like the holy grail of theoretical physics; but even the brightest human might not be able to understand the theory. The whole premise of that endeavor, along with many others, has been that humans should be able to read the paper and understand it. IMHO, there's nothing that says the world has to be elegant and comprehensible, except our human bias.

    --
    Appended to the end of comments you post. Max: 120 chars.
    • (Score: 2, Interesting) by khallow on Thursday January 12 2023, @06:42AM (4 children)

      by khallow (3766) Subscriber Badge on Thursday January 12 2023, @06:42AM (#1286454) Journal

      IMHO, there's nothing that says the world has to be elegant and comprehensible, except our human bias.

      And the fact that the assumption has worked so far. There really have been elegant and comprehensible bases to the fundamental dynamics of the world. We're just now at the point where we're trying to unite these disparate descriptions. Should be yet another elegant and comprehensible description, right? And even if it isn't, we can always improve the human intellect till that does happen.

      • (Score: 1, Touché) by Anonymous Coward on Thursday January 12 2023, @09:59AM (3 children)

        by Anonymous Coward on Thursday January 12 2023, @09:59AM (#1286465)

        Should be yet another elegant and comprehensible description,

        FWIW I've noticed one elegant incomprehensible so far. Scientists are unable to explain the very first observation I suspect[1] all of them have made - consciousness.

        So far there's no proof that a certain algorithm or mathematical operation will necessarily generate the phenomenon of consciousness.

        [1] I can't prove that any of them, or anyone else, actually experiences consciousness. I only know for certain that I do. All of you could just be bio-machines that have self-aware behaviors but don't actually experience any consciousness.

        • (Score: 2) by Freeman on Thursday January 12 2023, @02:54PM

          by Freeman (732) on Thursday January 12 2023, @02:54PM (#1286474) Journal

          I.e., prove that we're not all genetically engineered guinea pigs that will be wiped off the face of the planet once our makers come back in their space ships.

          Generally, it's easy to prove something is true, if it is. Or at least to be reasonably sure that X thing is true, because it's well documented. Proving that something is false is much, much harder.

          --
          Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
        • (Score: 1) by khallow on Thursday January 12 2023, @02:58PM

          by khallow (3766) Subscriber Badge on Thursday January 12 2023, @02:58PM (#1286477) Journal
          What is there to explain about consciousness?
        • (Score: 0) by Anonymous Coward on Thursday January 12 2023, @07:21PM

          by Anonymous Coward on Thursday January 12 2023, @07:21PM (#1286546)

          What does it even mean for something to be conscious? It seems the Turing test is just to be able to mimic a human. Sounds pretty dumb, if you ask me, to mimic a human.

  • (Score: 0) by Anonymous Coward on Thursday January 12 2023, @07:14PM

    by Anonymous Coward on Thursday January 12 2023, @07:14PM (#1286543)

    The article reads like it was garbled out in 0.04s by a ChatGPT.
