On Tuesday, OpenAI published a new research paper detailing a technique that uses its GPT-4 language model to write explanations for the behavior of neurons in its older GPT-2 model, albeit imperfectly. It's a step forward for "interpretability," a field of AI research that seeks to explain why neural networks produce the outputs they do.
[...]
In the first sentence of OpenAI's paper, the authors write, "Language models have become more capable and more widely deployed, but we do not understand how they work." For outsiders, that likely sounds like a stunning admission from a company that not only depends on revenue from LLMs but also hopes to accelerate them to beyond-human levels of reasoning ability.
But this property of "not knowing" exactly how a neural network's individual neurons work together to produce its outputs has a well-known name: the black box. You feed the network inputs (like a question), and you get outputs (like an answer), but whatever happens in between (inside the "black box") is a mystery.
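The paper's core loop is easy to sketch. Below is a minimal, self-contained Python sketch of an explain-then-score pipeline of the kind the summary describes: record a GPT-2 neuron's activations, have GPT-4 propose a natural-language explanation, simulate activations from that explanation alone, and score the explanation by how well the simulation correlates with the real activations. All of the helper names (get_neuron_activations, explain_with_gpt4, simulate_from_explanation) and the toy numbers are hypothetical placeholders for illustration, not OpenAI's code or API.

    # Sketch of an "explain, then simulate, then score" loop for neuron
    # interpretability. The stubs below are hypothetical stand-ins, NOT
    # OpenAI's actual API; the toy activation values are invented.

    from statistics import correlation  # Pearson correlation (Python 3.10+)

    def get_neuron_activations(tokens: list[str]) -> list[float]:
        # Placeholder: the real pipeline runs GPT-2 and records one neuron's
        # activation on each token. Here we hard-code toy values.
        toy = {"the": 0.1, "cat": 0.9, "sat": 0.2, "on": 0.1, "mat": 0.8}
        return [toy.get(t, 0.0) for t in tokens]

    def explain_with_gpt4(tokens: list[str], activations: list[float]) -> str:
        # Placeholder: the real pipeline asks GPT-4 to summarize what the
        # neuron fires on, given (token, activation) pairs.
        return "fires on nouns referring to concrete objects or animals"

    def simulate_from_explanation(explanation: str, tokens: list[str]) -> list[float]:
        # Placeholder: the real pipeline asks GPT-4 to predict activations
        # from the explanation alone. Here we pretend it flags a fixed set.
        nouns = {"cat", "mat", "dog"}
        return [1.0 if t in nouns else 0.0 for t in tokens]

    def score_explanation(real: list[float], simulated: list[float]) -> float:
        # Correlation between simulated and real activations: near 1.0 means
        # the explanation predicts the neuron well; near 0 means it doesn't.
        return correlation(real, simulated)

    if __name__ == "__main__":
        tokens = ["the", "cat", "sat", "on", "the", "mat"]
        real = get_neuron_activations(tokens)
        explanation = explain_with_gpt4(tokens, real)
        simulated = simulate_from_explanation(explanation, tokens)
        print(explanation, "->", round(score_explanation(real, simulated), 3))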
My thoughts were always that you didn't get to look into the black box of goodies. As opposed to no one even knowing how this magic thing works. As the kids say, YOLO, because "hold my beer" is old-fashioned?
(Score: 1, Touché) by Anonymous Coward on Tuesday May 16, @07:08PM (10 children)
What's the point of a human reading stuff generated by the language models?
If there's no human writer, what's the point? Nothing is actually being conveyed. It's just mechanical, statistically generated words. It could just as well have been complete gibberish in a made-up language.
I ask this genuinely: what is the actual point of a human reading stuff generated by these models?
(Score: 2) by Freeman on Tuesday May 16, @07:17PM (5 children)
You say that, and yet I constantly talk to my wife about all the giant lists of memes / lists of "my kid did this dumb thing" / etc.
You have literally no idea if any of it is actually real. At first it was just something that people made up. Now it's literally just spam generated by an LLM. We can forgo calling any of them "Artificial Intelligences".
Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
(Score: 2) by crm114 on Tuesday May 16, @07:24PM
I, for one, believe Freeman is an actual human, with X number of years of experience on this planet.
For me, Respect is earned, not expected.
You have my respect.
(Score: 1, Interesting) by Anonymous Coward on Tuesday May 16, @07:39PM (3 children)
TFS ends with,
> My thoughts were always that you didn't get to look into the black box of goodies.
Steve Wolfram gives a sampling of "what's in the box" in this recent talk, https://youtu.be/flXrLGPY3SU?t=592 [youtu.be]
He demonstrates as he talks, using a somewhat less complex LLM built into his Mathematica.
I don't claim that I understood all of it the first time through, but I intend to watch it again and maybe get a little more...something about "the devil you know"??
(Score: 3, Interesting) by Freeman on Tuesday May 16, @07:46PM (2 children)
The problem is that they're dealing with such large sets of data that no one can actually know, "yeah, the LLM gave you that output, because X+Y+Z." The best we can hope for is for them to develop a tool that can help with that kind of interpretation.
Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
(Score: 3, Insightful) by Beryllium Sphere (r) on Tuesday May 16, @11:11PM (1 child)
That may be the same thought I just had.
Even if you have a neural network "programmed" to explain another neural network, the results may not match reality.
Humans are an example. We are sophisticated neural networks with the ability to explain ourselves. Time and again, when disciplined research studies how our memories or decision-making operate, the facts contradict our self-reports.
(Score: 1, Insightful) by Anonymous Coward on Wednesday May 17, @10:40AM
To me, the field of AI is still at the alchemy stage. The alchemists of old could still get lots of useful stuff done, but they didn't fully understand what was going on. They hadn't gotten to the stage of chemistry yet.
(Score: 0) by Anonymous Coward on Tuesday May 16, @07:49PM (1 child)
What's the point of whining about humans reading stuff generated by language models if you can't tell whether the stuff was generated by language models? You seem to think you can, but I'm betting you won't be able to tell the difference most of the time a few years from now. Maybe you didn't hear, but they are working on making the language models better.
(Score: 4, Interesting) by Freeman on Tuesday May 16, @08:30PM
At what point will there be a Discord server that's run entirely by bots, with only bots as members, except for the target? Or some other platform; I don't mean to single out Discord per se. Are we literally going to be living in our own bubbles, surrounded by our own happy little AIs, with no actual human interaction? Aside from the targeted manipulation by the big players?
Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
(Score: 3, Informative) by Beryllium Sphere (r) on Tuesday May 16, @11:05PM (1 child)
The proof of the pudding ...
I've been able to make practical use of the results at work more than once.
Even without that, they'd have value as games to explore what they can and cannot do.
(Score: 2) by Freeman on Wednesday May 17, @01:16PM
I've made practical use of ChatGPT at work as well. It's a tool and can be useful, but an "Artificial Intelligence" it is not.
Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"