posted by janrinok on Wednesday May 03 2023, @05:27PM

The Morning After: The Godfather of AI Leaves Google Amid Ethical Concerns

Geoffrey Hinton, nicknamed the Godfather of AI, told The New York Times he resigned as Google VP and engineering fellow in April to freely warn of the risks associated with the technology. The researcher is concerned Google is giving up its previous restraint on public AI releases to compete with ChatGPT, Bing Chat and similar models. In the near term, Hinton says he's worried that generative AI could lead to a wave of misinformation. You might "not be able to know what is true anymore," he says. He's also concerned it might not just eliminate "drudge work," but outright replace some jobs – which I think is a valid worry already turning into a reality.

AI 'Godfather' Geoffrey Hinton Warns of Dangers as He Quits Google

A man widely seen as the godfather of artificial intelligence (AI) has quit his job, warning about the growing dangers from developments in the field.

Geoffrey Hinton, 75, announced his resignation from Google in a statement to the New York Times, saying he now regretted his work.

He told the BBC some of the dangers of AI chatbots were "quite scary". "Right now, they're not more intelligent than us, as far as I can tell. But I think they soon may be."

Dr Hinton also accepted that his age had played into his decision to leave the tech giant, telling the BBC: "I'm 75, so it's time to retire." Dr Hinton's pioneering research on neural networks and deep learning has paved the way for current AI systems like ChatGPT.

In artificial intelligence, neural networks are systems that are similar to the human brain in the way they learn and process information. They enable AIs to learn from experience, as a person would. This is called deep learning. The British-Canadian cognitive psychologist and computer scientist told the BBC that chatbots could soon overtake the level of information that a human brain holds.
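
To make "learning from experience" concrete, here is a minimal sketch in Python (illustrative only; the network size, data, and learning rate are arbitrary choices for the example, not anything from Hinton's work). A small network repeatedly adjusts its weights to shrink its error on examples, using the backpropagation procedure Hinton helped popularize:

    # Illustrative sketch only: a tiny two-layer neural network learning XOR,
    # to show what "learning from experience" means mechanically.
    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
    y = np.array([[0], [1], [1], [0]], dtype=float)              # targets (XOR)

    W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # input -> hidden weights
    W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # hidden -> output weights

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for _ in range(10000):
        # Forward pass: weighted sums followed by nonlinearities.
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)

        # Backward pass (backpropagation): nudge every weight downhill
        # on the squared error, which is how the network "learns".
        err = out - y
        g_out = err * out * (1 - out)
        g_h = (g_out @ W2.T) * h * (1 - h)
        W2 -= 0.5 * h.T @ g_out
        b2 -= 0.5 * g_out.sum(axis=0)
        W1 -= 0.5 * X.T @ g_h
        b1 -= 0.5 * g_h.sum(axis=0)

    print(out.round(2))  # should approach [[0], [1], [1], [0]]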

"Right now, what we're seeing is things like GPT-4 eclipses a person in the amount of general knowledge it has and it eclipses them by a long way. In terms of reasoning, it's not as good, but it does already do simple reasoning," he said. "And given the rate of progress, we expect things to get better quite fast. So we need to worry about that."

[...] He added: "I've come to the conclusion that the kind of intelligence we're developing is very different from the intelligence we have.

"We're biological systems and these are digital systems. And the big difference is that with digital systems, you have many copies of the same set of weights, the same model of the world. "And all these copies can learn separately but share their knowledge instantly. So it's as if you had 10,000 people and whenever one person learnt something, everybody automatically knew it. And that's how these chatbots can know so much more than any one person."

[...] Dr Hinton joins a growing number of experts who have expressed concerns about AI - both the speed at which it is developing and the direction in which it is going.

Geoffrey Hinton Tells Us Why He's Now Scared of the Tech He Helped Build

"I have suddenly switched my views on whether these things are going to be more intelligent than us. I think they're very close to it now and they will be much more intelligent than us in the future," he says. "How do we survive that?"

He is especially worried that people could harness the tools he himself helped breathe life into to tilt the scales of some of the most consequential human experiences, especially elections and wars. "Look, here's one way it could all go wrong," he says. "We know that a lot of the people who want to use these tools are bad actors [...]. They want to use them for winning wars or manipulating electorates."

Hinton believes that the next step for smart machines is the ability to create their own subgoals: the interim steps required to carry out a task. What happens, he asks, when that ability is applied to something inherently immoral?


Original Submission #1 | Original Submission #2

 
This discussion was created by janrinok (52) for logged-in users only, but now has been archived. No new comments can be posted.
  • (Score: 3, Interesting) by legont (4179) on Thursday May 04 2023, @04:52AM (#1304658)

    I tried to press an AI into saying politically and legally incorrect things. It was quite sneaky about avoiding it, but once I got it into a corner, it simply started lying outright. This looks very human to me.
    So, I agree with you, and I'd trust AI more than any other human over the internet; with limitations, of course, but less strict ones than those I apply to humans.

    --
    "Wealth is the relentless enemy of understanding" - John Kenneth Galbraith.