The Morning After: The Godfather of AI leaves Google amid ethical concerns:
Geoffrey Hinton, nicknamed the Godfather of AI, told The New York Times he resigned as Google VP and engineering fellow in April to freely warn of the risks associated with the technology. The researcher is concerned Google is giving up its previous restraint on public AI releases to compete with ChatGPT, Bing Chat and similar models. In the near term, Hinton says he's worried that generative AI could lead to a wave of misinformation. You might "not be able to know what is true anymore," he says. He's also concerned it might not just eliminate "drudge work," but outright replace some jobs – which I think is a valid worry already turning into a reality.
AI 'godfather' Geoffrey Hinton warns of dangers as he quits Google:
A man widely seen as the godfather of artificial intelligence (AI) has quit his job, warning about the growing dangers from developments in the field.
Geoffrey Hinton, 75, announced his resignation from Google in a statement to the New York Times, saying he now regretted his work.
He told the BBC some of the dangers of AI chatbots were "quite scary". "Right now, they're not more intelligent than us, as far as I can tell. But I think they soon may be."
Dr Hinton also accepted that his age had played into his decision to leave the tech giant, telling the BBC: "I'm 75, so it's time to retire." Dr Hinton's pioneering research on neural networks and deep learning has paved the way for current AI systems like ChatGPT.
In artificial intelligence, neural networks are systems loosely modelled on the human brain in the way they learn and process information. They enable AIs to learn from experience, as a person would; training such networks, built from many layers of simple units, is what is known as deep learning. The British-Canadian cognitive psychologist and computer scientist told the BBC that chatbots could soon overtake the amount of information that a human brain holds.
"Right now, what we're seeing is things like GPT-4 eclipses a person in the amount of general knowledge it has and it eclipses them by a long way. In terms of reasoning, it's not as good, but it does already do simple reasoning," he said. "And given the rate of progress, we expect things to get better quite fast. So we need to worry about that."
[...] He added: "I've come to the conclusion that the kind of intelligence we're developing is very different from the intelligence we have.
"We're biological systems and these are digital systems. And the big difference is that with digital systems, you have many copies of the same set of weights, the same model of the world. "And all these copies can learn separately but share their knowledge instantly. So it's as if you had 10,000 people and whenever one person learnt something, everybody automatically knew it. And that's how these chatbots can know so much more than any one person."
[...] Dr Hinton joins a growing number of experts who have expressed concerns about AI - both the speed at which it is developing and the direction in which it is going.
Geoffrey Hinton tells us why he's now scared of the tech he helped build:
"I have suddenly switched my views on whether these things are going to be more intelligent than us. I think they're very close to it now and they will be much more intelligent than us in the future," he says. "How do we survive that?"
He is especially worried that people could harness the tools he himself helped breathe life into to tilt the scales of some of the most consequential human experiences, especially elections and wars. "Look, here's one way it could all go wrong," he says. "We know that a lot of the people who want to use these tools are bad actors [...]. They want to use them for winning wars or manipulating electorates."
Hinton believes that the next step for smart machines is the ability to create their own subgoals, interim steps required to carry out a task. What happens, he asks, when that ability is applied to something inherently immoral?
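What "creating subgoals" means computationally can be sketched in a few lines. The recipe below is entirely hypothetical: a real system would generate the decomposition itself rather than look it up in a hand-written table, but the control flow, breaking a goal into interim steps until only directly executable actions remain, is the capability Hinton is pointing at.

    PRIMITIVES = {"boil water", "add tea bag", "pour water"}  # directly executable

    # Hand-written stand-in for the decomposition a real model would generate.
    SUBGOALS = {
        "make tea": ["boil water", "prepare cup", "pour water"],
        "prepare cup": ["add tea bag"],
    }

    def plan(goal):
        """Recursively expand a goal into a flat list of primitive steps."""
        if goal in PRIMITIVES:
            return [goal]
        steps = []
        for subgoal in SUBGOALS.get(goal, []):
            steps.extend(plan(subgoal))
        return steps

    print(plan("make tea"))  # ['boil water', 'add tea bag', 'pour water']

Note that nothing in the control flow inspects what the steps actually do; that indifference is precisely the worry when the top-level goal is an immoral one.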
(Score: 0) by Anonymous Coward on Wednesday May 03 2023, @09:17PM (2 children)
that's a bit different than where we'll be at soon, where you'll have complete videos of people doing and saying things that didn't really happen. What you're talking about is maybe you have doubts about what you've heard, or have doubts about how it was interpreted, so you can go track down the speech itself and listen to what was or wasn't said and see if you come to your own conclusion. Now we're going to be at the point where you'll see and hear the speech, but you won't know if it is true or was computer generated.
That would be Paul Harvey . . . . Gd-Day
(Score: 0) by Anonymous Coward on Wednesday May 03 2023, @09:49PM (1 child)
I wonder what (if any) part the Internet Archive will play in all this? I'm a big fan of librarians, maybe these internet librarians will have some clever ways of sorting the "reasonably real" from the "most likely AI generated"--after the fact.
(Score: 0) by Anonymous Coward on Thursday May 04 2023, @03:40AM
Adversarial AI means that the methods to detect AI content will just be trained against until they fail.
Internet Archive? They're busy trying to not get demolished by copyright lawsuits.
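The arms race the last comment describes is the generative-adversarial dynamic: train a detector, then train the generator directly against the detector's score until the detector's output carries no signal. A toy sketch on one-dimensional data, using logistic regression as a stand-in detector (real detector evasion targets text and video, not scalars):

    import numpy as np

    rng = np.random.default_rng(2)
    REAL_MEAN = 3.0                 # "real" content clusters here

    gen_mean = 0.0                  # generator's current output location
    det_w, det_b = 0.0, 0.0         # detector: P(real) = sigmoid(w*x + b)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for rnd in range(500):
        real = rng.normal(REAL_MEAN, 1.0, size=64)
        fake = rng.normal(gen_mean, 1.0, size=64)
        # Detector step: logistic-regression gradient ascent on telling
        # real (label 1) from fake (label 0).
        x = np.concatenate([real, fake])
        t = np.concatenate([np.ones(64), np.zeros(64)])
        p = sigmoid(det_w * x + det_b)
        det_w += 0.1 * np.mean((t - p) * x)
        det_b += 0.1 * np.mean(t - p)
        # Generator step: shift output toward whatever the detector
        # currently scores as "real".
        p_fake = sigmoid(det_w * fake + det_b)
        gen_mean += 0.1 * np.mean((1 - p_fake) * det_w)

    real = rng.normal(REAL_MEAN, 1.0, size=1000)
    fake = rng.normal(gen_mean, 1.0, size=1000)
    print(round(gen_mean, 2))       # drifts toward REAL_MEAN
    print(round(sigmoid(det_w * real + det_b).mean(), 2),
          round(sigmoid(det_w * fake + det_b).mean(), 2))  # nearly equal

Once the generator's output distribution matches the real one, retraining the detector no longer helps: there is nothing left to detect.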