
posted by janrinok on Tuesday May 16, @06:09PM   Printer-friendly
from the playing-with-fire dept.

https://arstechnica.com/information-technology/2023/05/openai-peeks-into-the-black-box-of-neural-networks-with-new-research/

On Tuesday, OpenAI published a new research paper detailing a technique that uses its GPT-4 language model to write explanations for the behavior of neurons in its older GPT-2 model, albeit imperfectly. It's a step forward for "interpretability," which is a field of AI that seeks to explain why neural networks create the outputs they do.
[...]
In the first sentence of OpenAI's paper, the authors write, "Language models have become more capable and more widely deployed, but we do not understand how they work."

For outsiders, that likely sounds like a stunning admission from a company that not only depends on revenue from LLMs but also hopes to accelerate them to beyond-human levels of reasoning ability.

But this property of "not knowing" exactly how a neural network's individual neurons work together to produce its outputs has a well-known name: the black box. You feed the network inputs (like a question), and you get outputs (like an answer), but whatever happens in between (inside the "black box") is a mystery.
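The "black box" point can be illustrated with a toy network. The following is a generic sketch in Python/NumPy, not OpenAI's code; every weight and name here is made up. The intermediate activations are fully visible as numbers, yet those numbers don't explain why a particular output came out — which is exactly the gap interpretability research tries to close.

```python
# A minimal "black box": a tiny feed-forward network whose intermediate
# activations we can record but not readily interpret.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8))   # input -> hidden (the "neurons" live here)
W2 = rng.normal(size=(8, 2))   # hidden -> output

def forward(x):
    hidden = np.maximum(0, x @ W1)   # ReLU activations: the box's interior
    return hidden, hidden @ W2

x = rng.normal(size=4)           # some input (a question, once encoded)
hidden, out = forward(x)
print(hidden)                    # 8 raw activation values, visible but opaque
print(out)                       # the network's "answer"
```

OpenAI's paper, roughly speaking, asks GPT-4 to read activation records like `hidden` above (from GPT-2, at far larger scale) and propose a natural-language explanation for each neuron.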

My thoughts were always that you just didn't get to look into the black box of goodies, as opposed to no one even knowing how this magic thing works. As the kids say, YOLO, because "hold my beer" is old-fashioned?


Original Submission

 
  • (Score: 2) by Freeman on Tuesday May 16, @07:17PM (5 children)

    by Freeman (732) Subscriber Badge on Tuesday May 16, @07:17PM (#1306600) Journal

    You say that, and I constantly talk to my wife about all the giant lists of memes / lists of "my kid did this dumb thing" / etc.

    You have literally no idea if any of it is actually real. At first it was just something that people made up. Now it's literally just spam generated by an LLM. We can forgo calling any of them "Artificial Intelligences".

    --
    Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
  • (Score: 2) by crm114 on Tuesday May 16, @07:24PM

    by crm114 (8238) Subscriber Badge on Tuesday May 16, @07:24PM (#1306601)

    I, for one, believe Freeman is an actual human, with X number of years of experience on this planet.

    For me, Respect is earned, not expected.

    You have my respect.

  • (Score: 1, Interesting) by Anonymous Coward on Tuesday May 16, @07:39PM (3 children)

    by Anonymous Coward on Tuesday May 16, @07:39PM (#1306605)

    TFS ends with,
    > My thoughts were always that you didn't get to look into the black box of goodies.

    Stephen Wolfram gives a sampling of "what's in the box" in this recent talk, https://youtu.be/flXrLGPY3SU?t=592 [youtu.be]
    He demonstrates as he talks, using a somewhat less complex LLM that is built into his Mathematica.

    I don't claim that I understood all of it the first time through, but I intend to watch it again and maybe get a little more out of it...something about "the devil you know"?

    • (Score: 3, Interesting) by Freeman on Tuesday May 16, @07:46PM (2 children)

      by Freeman (732) Subscriber Badge on Tuesday May 16, @07:46PM (#1306606) Journal

      The problem is that they're dealing with such large sets of data that no one can actually say, "yeah, the LLM gave you that output because X+Y+Z." The best we can hope for is for them to develop a tool that can help with that kind of interpretation.

      --
      Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
      • (Score: 3, Insightful) by Beryllium Sphere (r) on Tuesday May 16, @11:11PM (1 child)

        by Beryllium Sphere (r) (5062) on Tuesday May 16, @11:11PM (#1306635)

        That may be the same thought I just had.

        Even if you have a neural network "programmed" to explain another neural network, the results may not match reality.

        Humans are an example. We are sophisticated neural networks with self-explanatory capabilities. Time and again, when disciplined research studies how our memories or decision making operate, the facts contradict our self-reports.

        • (Score: 1, Insightful) by Anonymous Coward on Wednesday May 17, @10:40AM

          by Anonymous Coward on Wednesday May 17, @10:40AM (#1306679)
          Actually, what I got from the Wolfram video is that it's basically a very advanced auto-complete that picks the next word based on probabilities AND some "voodoo" (Wolfram's word). You don't simply use the highest probability, because doing that doesn't work that well, hence you need some "voodoo".
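          At least part of that "voodoo" corresponds to sampling with a temperature instead of always taking the single most likely word. A rough, generic sketch (not Wolfram's or OpenAI's actual code; the logits below are made-up scores):

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_next(logits, temperature=0.8):
    """Pick a next-word index; temperature < 1 sharpens, > 1 flattens."""
    logits = np.asarray(logits, dtype=float) / temperature
    probs = np.exp(logits - logits.max())   # numerically stable softmax
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))

# Greedy (argmax) would repeat the same most-likely word every time;
# sampling occasionally takes less likely words, which empirically
# yields less stilted text.
logits = [2.0, 1.5, 0.3, -1.0]          # scores for 4 candidate words
greedy = int(np.argmax(logits))          # always picks index 0
sampled = [sample_next(logits) for _ in range(10)]
print(greedy, sampled)
```

          Why "highest probability doesn't work well" is, as the commenter notes, not fully understood — it's an empirical observation, which fits the alchemy comparison below.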

          To me, the field of AI is still at the alchemy stage. The alchemists of old could still get lots of useful things done, but they didn't fully understand what was going on. They hadn't got to the stage of chemistry yet.