Hugging Face's chief science officer worries AI is becoming 'yes-men on servers':
AI company founders have a reputation for making bold claims about the technology's potential to reshape fields, particularly the sciences. But Thomas Wolf, Hugging Face's co-founder and chief science officer, has a more measured take.
In an essay published to X on Thursday, Wolf said that he feared AI becoming "yes-men on servers" absent a breakthrough in AI research. He elaborated that current AI development paradigms won't yield AI capable of outside-the-box, creative problem-solving — the kind of problem-solving that wins Nobel Prizes.
"The main mistake people usually make is thinking [people like] Newton or Einstein were just scaled-up good students, that a genius comes to life when you linearly extrapolate a top-10% student," Wolf wrote. "To create an Einstein in a data center, we don't just need a system that knows all the answers, but rather one that can ask questions nobody else has thought of or dared to ask."
Wolf's assertions stand in contrast to those from OpenAI CEO Sam Altman, who in an essay earlier this year said that "superintelligent" AI could "massively accelerate scientific discovery." Similarly, Anthropic CEO Dario Amodei has predicted AI could help formulate cures for most types of cancer.
Wolf's problem with AI today — and where he thinks the technology is heading — is that it doesn't generate any new knowledge by connecting previously unrelated facts. Even with most of the internet at its disposal, AI as we currently understand it mostly fills in the gaps between what humans already know, Wolf said.
Some AI experts, including ex-Google engineer François Chollet, have expressed similar views, arguing that while AI might be capable of memorizing reasoning patterns, it's unlikely it can generate "new reasoning" based on novel situations.
Wolf thinks that AI labs are building what are essentially "very obedient students" — not scientific revolutionaries in any sense of the phrase. AI today isn't incentivized to question and propose ideas that potentially go against its training data, he said, limiting it to answering known questions.
"One that writes 'What if everyone is wrong about this?' when all textbooks, experts, and common knowledge suggest otherwise," Wolf said.
Wolf thinks that the "evaluation crisis" in AI is partly to blame for this disenchanting state of affairs. He points to benchmarks commonly used to measure AI system improvements, most of which consist of questions that have clear, obvious, and "closed-ended" answers.
As a solution, Wolf proposes that the AI industry "move to a measure of knowledge and reasoning" that's able to elucidate whether AI can take "bold counterfactual approaches," make general proposals based on "tiny hints," and ask "non-obvious questions" that lead to "new research paths."
The trick will be figuring out what this measure looks like, Wolf admits. But he thinks that it could be well worth the effort.
"[T]he most crucial aspect of science [is] the skill to ask the right questions and to challenge even what one has learned," Wolf said. "We don't need an A+ [AI] student who can answer every question with general knowledge. We need a B student who sees and questions what everyone else missed."
(Score: 4, Interesting) by Thexalon on Saturday March 08, @05:58PM (4 children)
The fundamental mistake of a lot of would-be entrepreneurs and business executives is thinking that coming up with new ideas is hard. It isn't: put 10 ordinary people in a room for an hour with a whiteboard, a problem to solve or an area to improve, and a rule that "there are no bad ideas," and you can easily generate at least a dozen decently good ideas.
The hard parts have always been:
1. Sifting the fantastic ideas out from the decently good ones. Nobody has a reliable way of doing this; it's always going to be a bit of guesswork. Where software could help is by providing a good way to at least rule out all the ones that have been tried and failed before.
2. Turning those ideas into reality. This is the really time-consuming part. Where software could help is by automating more of what had previously been done manually.
"Think of how stupid the average person is. Then realize half of 'em are stupider than that." - George Carlin
(Score: 0) by Anonymous Coward on Sunday March 09, @01:55AM (1 child)
"automating more of what had previously been manual."
And why, exactly, does this require large language models or general artificial intelligence?
If nothing produced by them can be trusted, where is the efficiency gain?
I don't think you're going to have a whole lot of luck figuring out something as complicated or poorly documented as "solutions to problems that have been tried and failed" if you aren't lucky enough to come across something written down about it somewhere in the past that was somehow ingested and put into a form that could be retrieved.
(Score: 3, Touché) by Thexalon on Sunday March 09, @04:11AM
You'll notice I wrote "software", not "LLMs" or "AI" or "generative ML" or anything like that.
(Score: 2) by c0lo on Sunday March 09, @04:13AM
Keep your buggy software away from my... ummm... hands, I don't want a Software Transmitted Disease.
https://www.youtube.com/@ProfSteveKeen https://soylentnews.org/~MichaelDavidCrawford
(Score: 0) by Anonymous Coward on Monday March 10, @03:20PM
You know what's good at coming up with solutions to problems? Giving the people facing those problems a say in what the solution needs to be. All this fucking top-down visionary-leadership bullshit is just rich-people masturbation. Flying to fucking Mars to save humanity... gimme a fucking break.