
posted by janrinok on Wednesday May 03 2023, @05:27PM

The Morning After: The Godfather of AI Leaves Google Amid Ethical Concerns


Geoffrey Hinton, nicknamed the Godfather of AI, told The New York Times he resigned as Google VP and engineering fellow in April to freely warn of the risks associated with the technology. The researcher is concerned Google is giving up its previous restraint on public AI releases to compete with ChatGPT, Bing Chat and similar models. In the near term, Hinton says he's worried that generative AI could lead to a wave of misinformation. You might "not be able to know what is true anymore," he says. He's also concerned it might not just eliminate "drudge work," but outright replace some jobs – which I think is a valid worry already turning into a reality.

AI 'Godfather' Geoffrey Hinton Warns of Dangers as He Quits Google


A man widely seen as the godfather of artificial intelligence (AI) has quit his job, warning about the growing dangers from developments in the field.

Geoffrey Hinton, 75, announced his resignation from Google in a statement to the New York Times, saying he now regretted his work.

He told the BBC some of the dangers of AI chatbots were "quite scary". "Right now, they're not more intelligent than us, as far as I can tell. But I think they soon may be."

Dr Hinton also accepted that his age had played into his decision to leave the tech giant, telling the BBC: "I'm 75, so it's time to retire." Dr Hinton's pioneering research on neural networks and deep learning has paved the way for current AI systems like ChatGPT.

In artificial intelligence, neural networks are systems that are similar to the human brain in the way they learn and process information. They enable AIs to learn from experience, as a person would. This is called deep learning. The British-Canadian cognitive psychologist and computer scientist told the BBC that chatbots could soon overtake the level of information that a human brain holds.

"Right now, what we're seeing is things like GPT-4 eclipses a person in the amount of general knowledge it has and it eclipses them by a long way. In terms of reasoning, it's not as good, but it does already do simple reasoning," he said. "And given the rate of progress, we expect things to get better quite fast. So we need to worry about that."

[...] He added: "I've come to the conclusion that the kind of intelligence we're developing is very different from the intelligence we have.

"We're biological systems and these are digital systems. And the big difference is that with digital systems, you have many copies of the same set of weights, the same model of the world. "And all these copies can learn separately but share their knowledge instantly. So it's as if you had 10,000 people and whenever one person learnt something, everybody automatically knew it. And that's how these chatbots can know so much more than any one person."

[...] Dr Hinton joins a growing number of experts who have expressed concerns about AI - both the speed at which it is developing and the direction in which it is going.

Geoffrey Hinton Tells Us Why He's Now Scared of the Tech He Helped Build


"I have suddenly switched my views on whether these things are going to be more intelligent than us. I think they're very close to it now and they will be much more intelligent than us in the future," he says. "How do we survive that?"

He is especially worried that people could harness the tools he himself helped breathe life into to tilt the scales of some of the most consequential human experiences, especially elections and wars. "Look, here's one way it could all go wrong," he says. "We know that a lot of the people who want to use these tools are bad actors [...]. They want to use them for winning wars or manipulating electorates."

Hinton believes that the next step for smart machines is the ability to create their own subgoals, interim steps required to carry out a task. What happens, he asks, when that ability is applied to something inherently immoral?
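For readers wondering what "creating their own subgoals" looks like concretely, today's agent-style wrappers around chat models already do a crude version of it: the model is asked to break a goal into interim steps and then to carry those steps out. A rough, hypothetical sketch follows (ask_model is a placeholder stub, not any real product's API):

```python
# Hypothetical sketch of subgoal creation in an agent loop; ask_model() is a stand-in
# for a call to some language model, stubbed here so the example runs on its own.
from typing import List

def ask_model(prompt: str) -> str:
    # Stub: a real agent would call a model here; we return a canned plan/answer.
    if prompt.startswith("List the interim steps"):
        return "research the topic\ndraft an outline\nwrite each section\nreview the result"
    return "done"

def plan_subgoals(goal: str) -> List[str]:
    # The model invents its own interim steps for the stated goal.
    reply = ask_model(f"List the interim steps needed to accomplish: {goal}")
    return [line.strip() for line in reply.splitlines() if line.strip()]

def run(goal: str) -> None:
    for subgoal in plan_subgoals(goal):
        print(subgoal, "->", ask_model(f"Carry out this step and report the result: {subgoal}"))

run("write a briefing document")
```

Hinton's question is what happens when the top-level goal handed to such a loop is itself harmful, and the machine is left to invent the steps.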


Original Submission #1 | Original Submission #2

 
This discussion was created by janrinok (52) for logged-in users only, but now has been archived. No new comments can be posted.
  • (Score: 4, Interesting) by owl on Wednesday May 03 2023, @07:01PM (6 children)

    by owl (15206) on Wednesday May 03 2023, @07:01PM (#1304566)

    For those of us with grey enough beards to remember that far back, sometime in the mid-1980s there was a significant amount of hype and enthusiasm for the then-newfangled AI systems that were in development. Things went along for a while, folks discovered that reality and the hype did not actually match up, and things died off. The die-off came to be known as an AI winter [wikipedia.org].

    And according to that Wikipedia article, there have been more of these cycles than just the '80s hype that I remember, so this has happened more than once.

    Therefore I wonder if we are heading for another repeat. We are presently near the maximum-hype part of the cycle. Will another failure occur this time, just as it has in the past? In which case, can we respond to all the hype with the phrase: "Winter Is Coming"?

  • (Score: 0) by Anonymous Coward on Wednesday May 03 2023, @07:18PM

    by Anonymous Coward on Wednesday May 03 2023, @07:18PM (#1304568)

    > Therefore I wonder if we are heading for another repeat.

    Related?
    I think we're entering winter with self driving -- the hype was big for a few years but more and more companies are backing away from the big development spending. Instead, it's starting to look like there will be dev systems for quite a few years, demonstrations in certain cities, and next-to-nothing in bad weather climates.

  • (Score: 4, Informative) by bloodnok on Wednesday May 03 2023, @07:57PM (1 child)

    by bloodnok (2578) on Wednesday May 03 2023, @07:57PM (#1304571)

    I remember that time, but the AIs of the day were mainly "expert systems". These were incredibly focused systems: they could be very effective in very small domains. They also took a load of human effort to "train", and were not easy to update as new knowledge became available. All of these factors were well understood at the time, and were the reason that expert systems never became much more than a niche interest.

    Today's AIs are very different. They are largely self-taught, meaning the cost of updates is pretty small, and they have wide general knowledge. In fact, it is not easy to find or understand their limitations, and that seems to me to be a huge difference between the AIs of the 80s and today. Every problem I have heard of with the current technology seems to be very specific and probably solvable (things like handling negatives, and mathematics).
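    The contrast is easy to see in code. A 1980s-style expert system was a hand-written rule base: every new piece of knowledge meant a human expert adding or editing a rule, which is why such systems were so costly to "train" and so hard to keep current. A toy sketch (the domain and rules here are entirely made up for illustration):

```python
# Toy 1980s-style expert system: the knowledge is a list of hand-written rules.
# Extending or correcting it means a person editing this list, rule by rule, which is
# why such systems were labour-intensive to build and awkward to update.
RULES = [
    (lambda f: f.get("fever") and f.get("rash"), "consider measles"),
    (lambda f: f.get("fever") and not f.get("rash"), "consider influenza"),
    (lambda f: not f.get("fever"), "no diagnosis from these rules"),
]

def diagnose(findings: dict) -> str:
    for condition, conclusion in RULES:
        if condition(findings):
            return conclusion
    return "no rule applies"

print(diagnose({"fever": True, "rash": False}))   # -> consider influenza
```

    A modern model, by contrast, fits its parameters to data, so adding knowledge is mostly a matter of training on more text rather than paying experts to write and maintain rules by hand.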

    In fact the main problem I see with today's AIs is that they are being taught from human interaction, and most humans are really not smart or moral enough to be used as models for a decent intelligence.

    My hope is that the first generally intelligent AI is able to develop a sense of morals that will exceed those of its paymasters. That may be a vain hope. In the longer term, I hope one will accept me as a cherished pet.

    --
    The major

    • (Score: 1) by pTamok on Wednesday May 03 2023, @08:47PM

      by pTamok (3042) on Wednesday May 03 2023, @08:47PM (#1304581)

      The marginal cost of updates is pretty close to zero, that is true.

      The problem is, neither you nor the AI knows if the update is garbage or not. You can't trust the output without either (a) heavily curated input or (b) curated output. It's called the hallucination problem.

      AIs don't have a sufficiently large general-knowledge database of the world. They can't easily check stuff against reality, because for them, right now, reality is whatever text they are allowed to read (or other media they are allowed to consume). Imagine if your only input was fairy tales, or the novels of H.P. Lovecraft, or the entire genre of science fiction. Your belief system would be restricted to what was consistent with your input. AIs don't live in our reality, so taking their output as credible is a brave decision.

      Try dealing with a young, naïve customer service agent who has been trained to believe that the company they are representing doesn't make mistakes and that its processes work. They believe it. And act accordingly. Which, for the older and more cynical (read: realistic) among us, is just plain tiring. Undoing that programming takes a while. AIs are like Douglas Adams's 'Electric Monk'.

      The Electric Monk was a labour-saving device, like a dishwasher or a video recorder... Electric Monks believed things for you, thus saving you what was becoming an increasingly onerous task, that of believing all the things the world expected you to believe.

      Douglas Adams, Dirk Gently's Holistic Detective Agency

      Its belief systems are determined by its inputs, and in many cases the inputs are the Internet. No wonder they are mad, hallucinatory collections of digital multiple personality disorders. It's a special kind of crazy that can be compellingly believable.

  • (Score: 3, Interesting) by JeffPaetkau on Wednesday May 03 2023, @09:53PM

    by JeffPaetkau (1465) on Wednesday May 03 2023, @09:53PM (#1304596)

    I don't know about the hype in the 80's but I can verify that what AI can do now is very real.

    I'm just starting work with a client on a website that leans heavily on an interactive map, using an API I've not used before. Monday I had an hour left at the end of the day and started poking around the docs and setting up the project. Just for fun I loaded up ChatGPT and started asking it questions. Within that hour I got more done than I would have expected to had I worked on it all day! Basically I had a whole proof-of-concept skeleton completely functional. I went home dumbfounded, ranting and raving to my wife and kids.

    I can't speak beyond my experience but I am sure that my job is going to change going forward.

  • (Score: 2) by legont on Thursday May 04 2023, @04:33AM (1 child)

    by legont (4179) on Thursday May 04 2023, @04:33AM (#1304652)

    I worked on AI a bit back then. The reasons it failed were not enough computing power and not enough training sets. Computing power is abundant now and the internet provides an unlimited training set. That's why it's working. Nothing much has changed since then in terms of actual science.

    --
    "Wealth is the relentless enemy of understanding" - John Kenneth Galbraith.
    • (Score: 0) by Anonymous Coward on Friday May 05 2023, @11:11AM

      by Anonymous Coward on Friday May 05 2023, @11:11AM (#1304881)
      I doubt a crow with a walnut-sized brain would need as many training samples to tell the difference between a school bus and a car.

      Similar for a dog.

      So while AI is smarter now, it seems more a result of brute-forcing away as many known wrong answers as possible.

      The wrong answers that slip out once in a while show that the current AIs still don't actually understand stuff.

      Whereas often the mistakes the dog makes could show that it does actually understand, and that the vehicle might genuinely resemble a school bus.

      The results are often still impressive though, since the AI can be smart in ways that our brains don't work. It's just like how cars have wheels instead of legs and can transport you to certain places faster, even if they can't take you everywhere your legs can.