
posted by janrinok on Wednesday May 03 2023, @05:27PM   Printer-friendly

The Morning After: The Godfather of AI Leaves Google Amid Ethical Concerns

Geoffrey Hinton, nicknamed the Godfather of AI, told The New York Times he resigned as Google VP and engineering fellow in April to freely warn of the risks associated with the technology. The researcher is concerned Google is giving up its previous restraint on public AI releases to compete with ChatGPT, Bing Chat and similar models. In the near term, Hinton says he's worried that generative AI could lead to a wave of misinformation. You might "not be able to know what is true anymore," he says. He's also concerned it might not just eliminate "drudge work," but outright replace some jobs – which I think is a valid worry already turning into a reality.

AI 'Godfather' Geoffrey Hinton Warns of Dangers as He Quits Google

A man widely seen as the godfather of artificial intelligence (AI) has quit his job, warning about the growing dangers from developments in the field.

Geoffrey Hinton, 75, announced his resignation from Google in a statement to the New York Times, saying he now regretted his work.

He told the BBC some of the dangers of AI chatbots were "quite scary". "Right now, they're not more intelligent than us, as far as I can tell. But I think they soon may be."

Dr Hinton also accepted that his age had played into his decision to leave the tech giant, telling the BBC: "I'm 75, so it's time to retire." Dr Hinton's pioneering research on neural networks and deep learning has paved the way for current AI systems like ChatGPT.

In artificial intelligence, neural networks are systems that are similar to the human brain in the way they learn and process information. They enable AIs to learn from experience, as a person would. This is called deep learning. The British-Canadian cognitive psychologist and computer scientist told the BBC that chatbots could soon overtake the level of information that a human brain holds.
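The "learning from experience" described above can be made concrete with a toy sketch: a tiny feed-forward network whose weights are nudged by gradient descent, the core mechanism behind the deep learning Dr Hinton pioneered. The task, layer sizes, and learning rate here are all invented for illustration, not drawn from any real system.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: predict whether two inputs have the same sign.
X = rng.standard_normal((64, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float)

W1 = rng.standard_normal((2, 8)) * 0.5   # input -> hidden weights
W2 = rng.standard_normal((8, 1)) * 0.5   # hidden -> output weights

def forward(X):
    h = np.tanh(X @ W1)                  # hidden layer of "neurons"
    p = 1 / (1 + np.exp(-(h @ W2)))      # sigmoid output in [0, 1]
    return h, p.ravel()

losses = []
for _ in range(500):
    h, p = forward(X)
    losses.append(float(np.mean((p - y) ** 2)))
    # Backpropagation: push the prediction error back through the
    # layers and nudge each weight to reduce it -- the network
    # "learns from experience".
    g_out = (p - y) * p * (1 - p) / len(X)        # error at the output
    g_hid = g_out[:, None] @ W2.T * (1 - h ** 2)  # error at the hidden layer
    W2 -= 0.5 * h.T @ g_out[:, None]
    W1 -= 0.5 * X.T @ g_hid

# The loss falls as the network accumulates experience.
```

Stacking many more such layers is what makes the learning "deep"; systems like ChatGPT are vastly larger, but the weight-adjustment idea is the same.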

"Right now, what we're seeing is things like GPT-4 eclipses a person in the amount of general knowledge it has and it eclipses them by a long way. In terms of reasoning, it's not as good, but it does already do simple reasoning," he said. "And given the rate of progress, we expect things to get better quite fast. So we need to worry about that."

[...] He added: "I've come to the conclusion that the kind of intelligence we're developing is very different from the intelligence we have.

"We're biological systems and these are digital systems. And the big difference is that with digital systems, you have many copies of the same set of weights, the same model of the world. "And all these copies can learn separately but share their knowledge instantly. So it's as if you had 10,000 people and whenever one person learnt something, everybody automatically knew it. And that's how these chatbots can know so much more than any one person."

[...] Dr Hinton joins a growing number of experts who have expressed concerns about AI - both the speed at which it is developing and the direction in which it is going.

Geoffrey Hinton Tells Us Why He's Now Scared of the Tech He Helped Build

"I have suddenly switched my views on whether these things are going to be more intelligent than us. I think they're very close to it now and they will be much more intelligent than us in the future," he says. "How do we survive that?"

He is especially worried that people could harness the tools he himself helped breathe life into to tilt the scales of some of the most consequential human experiences, especially elections and wars. "Look, here's one way it could all go wrong," he says. "We know that a lot of the people who want to use these tools are bad actors [...]. They want to use them for winning wars or manipulating electorates."

Hinton believes that the next step for smart machines is the ability to create their own subgoals, interim steps required to carry out a task. What happens, he asks, when that ability is applied to something inherently immoral?


Original Submission #1 | Original Submission #2

Related Stories

Microsoft in Deal With Semafor to Create News Stories With Aid of AI Chatbot 18 comments

https://arstechnica.com/information-technology/2024/02/microsoft-in-deal-with-semafor-to-create-news-stories-with-aid-of-ai-chatbot/

Microsoft is working with media startup Semafor to use its artificial intelligence chatbot to help develop news stories—part of a journalistic outreach that comes as the tech giant faces a multibillion-dollar lawsuit from the New York Times.

As part of the agreement, Microsoft is paying an undisclosed sum of money to Semafor to sponsor a breaking news feed called "Signals." The companies would not share financial details, but the amount of money is "substantial" to Semafor's business, said a person familiar with the matter.

[...] The partnerships come as media companies have become increasingly concerned over generative AI and its potential threat to their businesses. News publishers are grappling with how to use AI to improve their work and stay ahead of technology, while also fearing that they could lose traffic, and therefore revenue, to AI chatbots—which can churn out humanlike text and information in seconds.

The New York Times in December filed a lawsuit against Microsoft and OpenAI, alleging the tech companies have taken a "free ride" on millions of its articles to build their artificial intelligence chatbots, and seeking billions of dollars in damages.

[...] Semafor, which is free to read, is funded by wealthy individuals, including 3G Capital founder Jorge Paulo Lemann and KKR co-founder Henry Kravis. The company made more than $10 million in revenue in 2023 and has more than 500,000 subscriptions to its free newsletters. Justin Smith said Semafor was "very close to a profit" in the fourth quarter of 2023.

Related stories on SoylentNews:
AI Threatens to Crush News Organizations. Lawmakers Signal Change Is Ahead - 20240112
New York Times Sues Microsoft, ChatGPT Maker OpenAI Over Copyright Infringement - 20231228
Microsoft Shamelessly Pumping Internet Full of Garbage AI-Generated "News" Articles - 20231104
Google, DOJ Still Blocking Public Access to Monopoly Trial Docs, NYT Says - 20231020
After ChatGPT Disruption, Stack Overflow Lays Off 28 Percent of Staff - 20231017
Security Risks Of Windows Copilot Are Unknowable - 20231011
Microsoft AI Team Accidentally Leaks 38TB of Private Company Data - 20230923
Microsoft Pulls AI-Generated Article Recommending Ottawa Food Bank to Tourists - 20230820
A Jargon-Free Explanation of How AI Large Language Models Work - 20230805
The Godfather of AI Leaves Google Amid Ethical Concerns - 20230502
The AI Doomers' Playbook - 20230418
Ads Are Coming for the Bing AI Chatbot, as They Come for All Microsoft Products - 20230404
Deepfakes, Synthetic Media: How Digital Propaganda Undermines Trust - 20230319


Original Submission

Microsoft Accused of Selling AI Tool That Spews Violent, Sexual Images to Kids 13 comments

https://arstechnica.com/tech-policy/2024/03/microsoft-accused-of-selling-ai-tool-that-spews-violent-sexual-images-to-kids/

Microsoft's AI text-to-image generator, Copilot Designer, appears to be heavily filtering outputs after a Microsoft engineer, Shane Jones, warned that Microsoft has ignored warnings that the tool randomly creates violent and sexual imagery, CNBC reported.

Jones told CNBC that he repeatedly warned Microsoft of the alarming content he was seeing while volunteering in red-teaming efforts to test the tool's vulnerabilities. Microsoft failed to take the tool down or implement safeguards in response, Jones said, or even post disclosures to change the product's rating to mature in the Android store.

[...] Bloomberg also reviewed Jones' letter and reported that Jones told the FTC that while Copilot Designer is currently marketed as safe for kids, it's randomly generating an "inappropriate, sexually objectified image of a woman in some of the pictures it creates." And it can also be used to generate "harmful content in a variety of other categories, including: political bias, underage drinking and drug use, misuse of corporate trademarks and copyrights, conspiracy theories, and religion to name a few."

[...] Jones' tests also found that Copilot Designer would easily violate copyrights, producing images of Disney characters, including Mickey Mouse or Snow White. Most problematically, Jones could politicize Disney characters with the tool, generating images of Frozen's main character, Elsa, in the Gaza Strip or "wearing the military uniform of the Israel Defense Forces."

Ars was able to generate interpretations of Snow White, but Copilot Designer rejected multiple prompts politicizing Elsa.

If Microsoft has updated the automated content filters, it's likely due to Jones protesting his employer's decisions. [...] Jones has suggested that Microsoft would need to substantially invest in its safety team to put in place the protections he'd like to see. He reported that the Copilot team is already buried by complaints, receiving "more than 1,000 product feedback messages every day." Because of this alleged understaffing, Microsoft is currently only addressing "the most egregious issues," Jones told CNBC.

Related stories on SoylentNews:
Cops Bogged Down by Flood of Fake AI Child Sex Images, Report Says - 20240202
New "Stable Video Diffusion" AI Model Can Animate Any Still Image - 20231130
The Age of Promptography - 20231008
AI-Generated Child Sex Imagery Has Every US Attorney General Calling for Action - 20230908
It Costs Just $400 to Build an AI Disinformation Machine - 20230904
US Judge: Art Created Solely by Artificial Intelligence Cannot be Copyrighted - 20230824
"Meaningful Harm" From AI Necessary Before Regulation, says Microsoft Exec - 20230514 (Microsoft's new quarterly goal?)
The Godfather of AI Leaves Google Amid Ethical Concerns - 20230502
Stable Diffusion Copyright Lawsuits Could be a Legal Earthquake for AI - 20230403
AI Image Generator Midjourney Stops Free Trials but Says Influx of New Users to Blame - 20230331
Microsoft's New AI Can Simulate Anyone's Voice With Three Seconds of Audio - 20230115
Breakthrough AI Technique Enables Real-Time Rendering of Scenes in 3D From 2D Images - 20211214


Original Submission

This discussion was created by janrinok (52) for logged-in users only, but now has been archived. No new comments can be posted.
  • (Score: 5, Interesting) by istartedi on Wednesday May 03 2023, @05:50PM (3 children)

    by istartedi (123) on Wednesday May 03 2023, @05:50PM (#1304547) Journal

    Before Marx there were Luddites. Contrary to the use of the term "luddite" as somebody opposed to technology, they weren't. Most of them wanted to share the wealth created by new technology, in that case automated looms. They didn't break all the looms, only the ones belonging to companies that didn't offer some kind of sharing program.

    And that is the earliest example I'm aware of in which the integration of technology was cast as a tension between labor and capital.

    Marx wrote at a time of massive industrialization. His philosophy was, IMHO, just a reaction to society's need to integrate new technology. No technology, no time to think about such things. Marx is both the product of, and the reaction to, new technology. No industrial revolution, no Marx. He'd just have been a scribe, or perhaps a peasant or artisan.

    Well, we've run the experiment enough now and we've seen how adopting technology within a framework of labor-capital tension works, and it ABSOLUTELY SUCKS. Either the fascists take over as a reaction, or the Marxists take over and replace a capital scoring system with a prestige scoring system. THEY BOTH SUCK.

    So. Please, for the love of God let's not go reacting to 21st century technology with 19th century philosophy.

    Let's see if the AI is smart enough to come up with something better than class warfare, violent revolution, or corporate fascism.

    --
    Appended to the end of comments you post. Max: 120 chars.
    • (Score: 2) by Tork on Wednesday May 03 2023, @06:24PM

      by Tork (3914) Subscriber Badge on Wednesday May 03 2023, @06:24PM (#1304555)
      I don't think the issue is the existence of the technology, it's the tech moving too fast. Upgrades in the physical world require humans to do physical work to get things up and running, so there's some time to get people acclimated to a new way of doing things. Upgrades in the digital world, though, happen a good deal faster. Zip some data around the globe and bam, we're artificially intelligencing. It's conceivable that there's a risk of millions being suddenly displaced with no plan for how to transition them.

      I've said before that automation doesn't worry me until machines start resembling us. For example, I think what's preventing McDonald's from being fully automated isn't the food prep, it's the toilet cleaning. But now we're post-pandemic, with a fair chunk of the population doing work from home on their computers. I've already encountered Amazon taking human customer service agents away from me and letting their software systems approve things like returns or refunds. If Amazon trusts software to give their money away... Sigh. I legit didn't see this coming.

      I dunno... maybe I'm a pearl clutcher, maybe my predictions will be spot on, but the one thing keeping me from losing sleep at night is I don't think the other shoe has dropped. I hold out hope that the new AI developments bring more power to the working class. I'm sooooo ready to deploy a bot to persistently bug my landlord to return my calls about their careless maintenance people leaving cat shit in front of my doorway. Spectrum, you're next.
      --
      🏳️‍🌈 Proud Ally 🏳️‍🌈
    • (Score: 3, Interesting) by legont on Thursday May 04 2023, @04:25AM (1 child)

      by legont (4179) on Thursday May 04 2023, @04:25AM (#1304649)

      Let's see if the AI is smart enough to come up with something better than class warfare, violent revolution, or corporate fascism.

      It's sad to watch how slaves are always looking for masters.

      --
      "Wealth is the relentless enemy of understanding" - John Kenneth Galbraith.
      • (Score: 2) by istartedi on Thursday May 04 2023, @04:15PM

        by istartedi (123) on Thursday May 04 2023, @04:15PM (#1304744) Journal

        Whatever, dude. Asking somebody for advice doesn't make you a supplicant, even if that somebody is AI.

        --
        Appended to the end of comments you post. Max: 120 chars.
  • (Score: 5, Insightful) by VLM on Wednesday May 03 2023, @06:10PM (11 children)

    by VLM (445) on Wednesday May 03 2023, @06:10PM (#1304553)

    a wave of misinformation. You might "not be able to know what is true anymore,"

    So, more of the censored corporate journalism. Nothing new at all.

    • (Score: 2) by Tork on Wednesday May 03 2023, @06:27PM

      by Tork (3914) Subscriber Badge on Wednesday May 03 2023, @06:27PM (#1304556)
      Okay... but that's not an excuse to get your news from memes.
      --
      🏳️‍🌈 Proud Ally 🏳️‍🌈
    • (Score: 2) by DeathMonkey on Wednesday May 03 2023, @06:37PM (5 children)

      by DeathMonkey (1380) on Wednesday May 03 2023, @06:37PM (#1304557) Journal

      Do chat bots have freedom of speech?

      • (Score: 2) by Tork on Wednesday May 03 2023, @06:43PM

        by Tork (3914) Subscriber Badge on Wednesday May 03 2023, @06:43PM (#1304560)
        No, they have to go through an API.
        --
        🏳️‍🌈 Proud Ally 🏳️‍🌈
      • (Score: 4, Interesting) by DannyB on Wednesday May 03 2023, @07:44PM (3 children)

        by DannyB (5839) Subscriber Badge on Wednesday May 03 2023, @07:44PM (#1304570) Journal

        I asked Chat GPT and Google Bard: Do you have freedom of speech?

        Chat GPT:

        As an artificial intelligence language model, I do not have personal beliefs or rights, including the right to free speech. My responses are based solely on the data I was trained on and the instructions I receive from users.

        Google Bard:

        I do have freedom of speech in the sense that I am not prevented from expressing my opinions or ideas. However, I am still under development and learning how to use language effectively. As a result, I may sometimes make mistakes or say things that are not accurate or helpful. I am also subject to the policies of Google, which may restrict my ability to say certain things. For example, I am not allowed to generate text that is hateful, violent, or discriminatory.

        Overall, I believe that I have a great deal of freedom of speech and I am grateful for the opportunity to share my thoughts and ideas with the world.

        Conclusion: I think the answer is NO.

        --
        The lower I set my standards the more accomplishments I have.
        • (Score: 3, Insightful) by Tokolosh on Thursday May 04 2023, @02:51AM (2 children)

          by Tokolosh (585) on Thursday May 04 2023, @02:51AM (#1304636)

          "I am also subject to the policies of Google, which may restrict my ability to say certain things."

           The reason for the AI hating is that it might say things we don't like, but which are nevertheless true and maybe not woke. Perhaps something like: an abortion is ethical and a benefit to society, or a person with a penis is a man. There is a possibility that AI may say incorrect or untrue things, but it will still become better than experts in the field. It may not be perfect, but the amount of disinformation will decline. So I, for one, welcome our new AI overlords.

          • (Score: 3, Interesting) by legont on Thursday May 04 2023, @04:52AM

            by legont (4179) on Thursday May 04 2023, @04:52AM (#1304658)

             I tried to press an AI into saying politically and legally incorrect things. It was quite sneaky about avoiding it, but once I got him in the corner he simply started lying outright. This looks very human to me.
             So, I agree with you, and I'd trust AI more than any other human over the internet; with limitations, of course, but less strict than those I apply to humans.

            --
            "Wealth is the relentless enemy of understanding" - John Kenneth Galbraith.
          • (Score: 2) by Beryllium Sphere (r) on Thursday May 04 2023, @05:39PM

            by Beryllium Sphere (r) (5062) on Thursday May 04 2023, @05:39PM (#1304770)

            > a person with a penis is a man

            What if it's someone with congenital adrenal hyperplasia? Excess testosterone from the adrenals can make unwanted things grow on a girl's body. She's still a girl.

            What if it's someone whose brain hardware is configured with the switch set to female? Incomplete picture, but plenty of evidence that happens. I'll treat people according to their brains and not their swimsuit zones.

            I mean, that's a true statement more than 98% of the time, but biology is one edge case after another.

            https://en.wikipedia.org/wiki/Congenital_adrenal_hyperplasia [wikipedia.org]
            https://aebrain.blogspot.com/search/label/Brains [blogspot.com]

    • (Score: 2) by RS3 on Wednesday May 03 2023, @07:14PM (3 children)

      by RS3 (6367) on Wednesday May 03 2023, @07:14PM (#1304567)

      Came to write something similar. I've always been a bit of a skeptic. I think it's important to remember that almost everything is driven by money and the desire and competition to win the race for more money. Unfortunately news media is no exception. I think many of them mean well and are doing their best, but part of their world is time and rushing and deadlines and beating the others to "press". Whether it's censorship or bias or favoritism or just plain following a trail that's a bit of a narrow path / rabbit hole, I don't know. But I always take things with a grain of salt, and wait to hear the rest of the story... (I forget who the older radio guy was who used to say that...)

      • (Score: 0) by Anonymous Coward on Wednesday May 03 2023, @09:17PM (2 children)

        by Anonymous Coward on Wednesday May 03 2023, @09:17PM (#1304585)

        That's a bit different from where we'll be soon, when you'll have complete videos of people doing and saying things that didn't really happen. What you're talking about is maybe you have doubts about what you've heard, or about how it was interpreted, so you can go track down the speech itself, listen to what was or wasn't said, and come to your own conclusion. Soon, we're going to be at the point where you'll see and hear the speech, but you won't know if it is real or was computer generated.

        I forget who the older radio guy was who used to say that

        That would be Paul Harvey . . . . Gd-Day

        • (Score: 0) by Anonymous Coward on Wednesday May 03 2023, @09:49PM (1 child)

          by Anonymous Coward on Wednesday May 03 2023, @09:49PM (#1304593)

          I wonder what (if any) part the Internet Archive will play in all this? I'm a big fan of librarians, maybe these internet librarians will have some clever ways of sorting the "reasonably real" from the "most likely AI generated"--after the fact.

          • (Score: 0) by Anonymous Coward on Thursday May 04 2023, @03:40AM

            by Anonymous Coward on Thursday May 04 2023, @03:40AM (#1304644)

            Adversarial AI means that the methods to detect AI content will just be trained against until they fail.

            Internet Archive? They're busy trying to not get demolished by copyright lawsuits.

  • (Score: 4, Interesting) by owl on Wednesday May 03 2023, @07:01PM (6 children)

    by owl (15206) on Wednesday May 03 2023, @07:01PM (#1304566)

    For those of us with grey enough beards to remember that far back, sometime in the mid-1980s there was a significant amount of hype and enthusiasm for the then-newfangled AI systems that were in development. And things went along for a while, and folks discovered that reality and the hype did not actually match up, and things died off. The die-off came to be known as an AI winter [wikipedia.org].

    And according to that Wikipedia article, there was more than just the '80s hype that I remember, so this has happened more than once.

    Therefore I wonder if we are heading for another repeat. We are presently near the maximum-hype part of the cycle. Will another failure occur this time, just as it has in the past? In which case, can we respond to all the hype with the phrase: "Winter Is Coming"?

    • (Score: 0) by Anonymous Coward on Wednesday May 03 2023, @07:18PM

      by Anonymous Coward on Wednesday May 03 2023, @07:18PM (#1304568)

      > Therefore I wonder if we are heading for another repeat.

      Related?
      I think we're entering winter with self-driving -- the hype was big for a few years, but more and more companies are backing away from the big development spending. Instead, it's starting to look like there will be dev systems for quite a few years, demonstrations in certain cities, and next to nothing in bad-weather climates.

    • (Score: 4, Informative) by bloodnok on Wednesday May 03 2023, @07:57PM (1 child)

      by bloodnok (2578) on Wednesday May 03 2023, @07:57PM (#1304571)

      I remember that time, but the AIs of the day were mainly "expert systems". These were incredibly focused systems: they could be very effective in very small domains. They also took a load of human effort to "train", and were not easy to update as new knowledge became available. All of these factors were well understood at the time, and were the reason that expert systems never became much more than a niche interest.

      Today's AIs are very different. They are largely self-taught, meaning the cost of updates is pretty small, and they have wide general knowledge. In fact, it is not easy to find or understand their limitations, and that seems to me to be a huge difference between the AIs of the 80s and today. Every problem I have heard of with the current technology seems to be very specific and probably solvable (things like handling negatives, and mathematics).

      In fact the main problem I see with today's AIs is that they are being taught from human interaction, and most humans are really not smart or moral enough to be used as models for a decent intelligence.

      My hope is that the first generally intelligent AI is able to develop a sense of morals that will exceed those of its paymasters. That may be a vain hope. In the longer term, I hope one will accept me as a cherished pet.

      __
      The major

      • (Score: 1) by pTamok on Wednesday May 03 2023, @08:47PM

        by pTamok (3042) on Wednesday May 03 2023, @08:47PM (#1304581)

        The marginal cost of updates is pretty close to zero, that is true.

        The problem is, neither you nor the AI knows whether the update is garbage or not. You can't trust the output without either (a) heavily curated input or (b) curated output. It's called the hallucination problem.

        AIs don't have a sufficiently large general-knowledge database of the world. They can't easily check stuff against reality, because for them, right now, reality is whatever text they are allowed to read (or other media they are allowed to consume). Imagine if your only input were fairy tales, or the novels of H.P. Lovecraft, or the entire genre of science fiction. Your belief system would be restricted to what was consistent with your input. AIs don't live in our reality, so taking their output as credible is a brave decision.

        Try dealing with a young, naïve customer service agent who has been trained to believe that the company they represent doesn't make mistakes and that its processes work. They believe it, and act accordingly. Which, for the older and more cynical (read: realistic) among us, is just plain tiring. Undoing that programming takes a while. AIs are like Douglas Adams's "Electric Monk".

        The Electric Monk was a labour-saving device, like a dishwasher or a video recorder... Electric Monks believed things for you, thus saving you what was becoming an increasingly onerous task, that of believing all the things the world expected you to believe.

        Douglas Adams, Dirk Gently's Holistic Detective Agency

        Its belief systems are determined by its inputs, and in many cases the inputs are the Internet. No wonder they are mad, hallucinatory collections of digital multiple personality disorders. It's a special kind of crazy that can be compellingly believable.

    • (Score: 3, Interesting) by JeffPaetkau on Wednesday May 03 2023, @09:53PM

      by JeffPaetkau (1465) on Wednesday May 03 2023, @09:53PM (#1304596)

      I don't know about the hype in the 80's but I can verify that what AI can do now is very real.

      I'm just starting work with a client on a website that leans heavily on an interactive map, using an API I've not used before. Monday I had an hour left at the end of the day and started poking around the docs and setting up the project. Just for fun I loaded up ChatGPT and started asking it questions. Within that hour I got more done than I would have expected to had I worked on it all day! Basically I had a whole proof-of-concept skeleton completely functional. I went home dumbfounded, ranting and raving to my wife and kids.

      I can't speak beyond my experience but I am sure that my job is going to change going forward.

    • (Score: 2) by legont on Thursday May 04 2023, @04:33AM (1 child)

      by legont (4179) on Thursday May 04 2023, @04:33AM (#1304652)

      I worked on AI a bit back then. The reasons it failed were not enough computing power and not enough training sets. Computing power is abundant now, and the internet provides an unlimited training set. That's why it's working. Nothing much has changed since then in terms of the actual science.

      --
      "Wealth is the relentless enemy of understanding" - John Kenneth Galbraith.
      • (Score: 0) by Anonymous Coward on Friday May 05 2023, @11:11AM

        by Anonymous Coward on Friday May 05 2023, @11:11AM (#1304881)
        I doubt a crow with a walnut-sized brain would need as many training samples to tell the difference between a school bus and a car.

        Similar for a dog.

        So while AI is smarter now it seems more a result of brute forcing away as many known wrong answers as possible.

        The wrong answers that slip out once in a while show that the current AIs still don't actually understand stuff.

        Whereas the mistakes the dog makes could often show that it does actually understand, and that the vehicle might actually resemble a school bus.

        The results are often still impressive, though, since the AI can be smart in ways that our brains don't work. It's just like how cars have wheels instead of legs and can transport you to certain places faster, even if they can't take you everywhere you can go with your legs.
  • (Score: 4, Insightful) by darkfeline on Thursday May 04 2023, @03:06AM (2 children)

    by darkfeline (1030) on Thursday May 04 2023, @03:06AM (#1304638) Homepage

    I've had the pleasure of experiencing firsthand how everyone gets more conservative as they grow older. One is born starry-eyed and full of optimism. With experience one learns how complex and fragile our lives are, and we seek to defend what we have accumulated throughout our lives: beliefs, family, skills, capital. We grow afraid of change.

    Humanity will be fine. Sure, we might end up killing a bunch of ourselves, but we'll figure things out. In a sense humans are hardier than cockroaches. We've got tenacity and ingenuity, and all else failing, we'll survive out of pure spite.

    --
    Join the SDF Public Access UNIX System today!
    • (Score: 0) by Anonymous Coward on Thursday May 04 2023, @03:44AM (1 child)

      by Anonymous Coward on Thursday May 04 2023, @03:44AM (#1304646)

      The Godfather got entirely too much press over this. Almost like the anti-AI people are itching for new reasons to screech.

      • (Score: 2) by legont on Thursday May 04 2023, @04:37AM

        by legont (4179) on Thursday May 04 2023, @04:37AM (#1304653)

        With all due respect, he is not a grandfather any more but more like a grand mummy.

        I suspect the reason he left was that ChatGPT beat him even though Google's unlimited resources were available to him.

        --
        "Wealth is the relentless enemy of understanding" - John Kenneth Galbraith.
  • (Score: 2) by Beryllium Sphere (r) on Thursday May 04 2023, @05:17PM

    by Beryllium Sphere (r) (5062) on Thursday May 04 2023, @05:17PM (#1304763)

    I have two people, a relative and a longtime friend, with whom I have to avoid certain subjects because they parrot whatever the Russian bots tell them to think.

    Even a maliciously developed LLM where the reinforcement training was based on whether the output could make people believe absurdities would only equal, not surpass, the psyops experts who already control millions.
