
SoylentNews is people


OpenAI peeks into the “black box” of neural networks with new research

Accepted submission by Freeman at 2023-05-12 16:15:42 from the playing with fire dept.
News

https://arstechnica.com/information-technology/2023/05/openai-peeks-into-the-black-box-of-neural-networks-with-new-research/ [arstechnica.com]

On Tuesday, OpenAI published [windows.net] a new research paper detailing a technique that uses its GPT-4 [arstechnica.com] language model to write explanations for the behavior of neurons in its older GPT-2 [arstechnica.com] model, albeit imperfectly. It's a step forward for "interpretability," which is a field of AI that seeks to explain why neural networks create the outputs they do.
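The technique can be sketched in miniature. In the paper, GPT-4 writes a natural-language explanation of a GPT-2 neuron, then the explanation is scored by how well activations simulated from it match the neuron's real activations. The toy below uses made-up activation values and a hand-rolled Pearson correlation as a stand-in for the paper's scoring step; the neuron, the explanation, and all the numbers are hypothetical:

```python
# Toy sketch of explanation scoring: compare a neuron's real activations
# with activations a second model simulated from a written explanation.
# All data here is invented for illustration.

def correlation_score(real, simulated):
    """Pearson correlation between real and simulated activations.
    Higher means the explanation predicts the neuron's behavior better."""
    n = len(real)
    mean_r = sum(real) / n
    mean_s = sum(simulated) / n
    cov = sum((r - mean_r) * (s - mean_s) for r, s in zip(real, simulated))
    std_r = sum((r - mean_r) ** 2 for r in real) ** 0.5
    std_s = sum((s - mean_s) ** 2 for s in simulated) ** 0.5
    return cov / (std_r * std_s)

# Hypothetical: one GPT-2 neuron's real activations on five tokens, and
# the activations simulated from the explanation "fires on money words".
real      = [0.1, 0.9, 0.0, 0.8, 0.2]
simulated = [0.2, 0.7, 0.1, 0.9, 0.1]
print(round(correlation_score(real, simulated), 3))
```

A high score doesn't prove the explanation is right, only that it predicts this neuron's activations well on the sampled text, which is one reason the paper calls its results imperfect.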
[...]
In the first sentence of OpenAI's paper, the authors write, "Language models have become more capable and more widely deployed, but we do not understand how they work."

For outsiders, that likely sounds like a stunning admission from a company that not only depends on revenue from LLMs but also hopes to accelerate them [openai.com] to beyond-human levels of reasoning ability.

But this property of "not knowing" exactly how a neural network's individual neurons work together to produce its outputs has a well-known name: the black box [towardsdatascience.com]. You feed the network inputs (like a question), and you get outputs (like an answer), but whatever happens in between (inside the "black box") is a mystery.
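The "black box" point is easy to see even at toy scale. The sketch below runs a tiny feed-forward network with arbitrary, hand-picked weights: every intermediate number is fully visible, yet nothing about the hidden activations says what any neuron "means", which is exactly the gap interpretability research tries to close. The network and weights are invented for illustration:

```python
# A tiny feed-forward network: inputs go in, an output comes out, and the
# hidden activations in between are visible but uninterpretable numbers.
# Weights are arbitrary, chosen only for illustration.

def relu(x):
    return max(0.0, x)

def tiny_net(inputs):
    # Hidden layer: 3 neurons, each a weighted sum of the inputs + ReLU.
    w_hidden = [[0.5, -1.2], [0.8, 0.3], [-0.4, 0.9]]
    hidden = [relu(sum(w * x for w, x in zip(ws, inputs))) for ws in w_hidden]
    # Output layer: a weighted sum of the hidden activations.
    w_out = [1.0, -0.7, 0.6]
    output = sum(w * h for w, h in zip(w_out, hidden))
    return hidden, output

hidden, output = tiny_net([1.0, 2.0])
print(hidden)  # just numbers -- what does each neuron represent?
print(output)
```

Scale that up to GPT-2's hundreds of thousands of neurons and the problem stops being "can we see inside" (we can) and becomes "can we say what any of it means".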

My thought was always that you simply didn't get to look inside the black box of goodies, not that nobody even knows how the magic thing works. As the kids say, YOLO, because "hold my beer" is old-fashioned?

