posted by janrinok on Thursday May 25 2023, @10:41PM

Eric Schmidt wants to prevent potential abuse of AI:

Add Eric Schmidt to the list of tech luminaries concerned about the dangers of AI. The former Google chief tells guests at The Wall Street Journal's CEO Council Summit that AI represents an "existential risk" that could get many people "harmed or killed." He doesn't feel that threat is serious at the moment, but he sees a near future where AI could help find software security flaws or new biology types. It's important to ensure these systems aren't "misused by evil people," the veteran executive says.

Schmidt doesn't have a firm solution for regulating AI, but he believes there won't be an AI-specific regulator in the US. He participated in a National Security Commission on AI that reviewed the technology and published a 2021 report determining that the US wasn't ready for the tech's impact.

Schmidt doesn't have direct influence over AI. However, he joins a growing number of well-known moguls who have argued for a careful approach. Current Google CEO Sundar Pichai has cautioned that society needs to adapt to AI, while OpenAI leader Sam Altman has expressed concern that authoritarians might abuse these algorithms. In March, numerous industry leaders and researchers (including Elon Musk and Steve Wozniak) signed an open letter calling on companies to pause AI experiments for six months while they rethought the safety and ethical implications of their work.


Original Submission

 
This discussion was created by janrinok (52) for logged-in users only, but now has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 3, Interesting) by Snotnose on Thursday May 25 2023, @11:18PM (5 children)

    by Snotnose (1623) on Thursday May 25 2023, @11:18PM (#1308227)

    Leading stable owner says cars pose an existential risk that puts lives in danger
    Leading train operator says airplanes pose an existential risk that puts lives in danger
    Leading doctor says vaccines pose an existential risk that puts lives in danger
    Leading NASA calculator says computers pose an existential risk that puts lives in danger
    Need I go on?

    --
    Bad decisions, great stories
  • (Score: 1) by Runaway1956 on Friday May 26 2023, @12:05AM (1 child)

    by Runaway1956 (2926) Subscriber Badge on Friday May 26 2023, @12:05AM (#1308231) Journal

    Don't forget that electricity is an existential threat that has killed at least thousands! I don't even know if there is an official body count. Add up all the professional electricians who made a mistake, all the amateurs who really screwed up badly, all the people accidentally killed by downed power lines, plus those felons who were invited to sit in a chair that would give them the charge of their lives.

    I have little doubt that idiots putting their faith in AI will manage to kill themselves. Eventually, people will begin to sort things out.

    --
    “I have become friends with many school shooters” - Tampon Tim Walz
    • (Score: 2) by ewk on Friday May 26 2023, @07:40AM

      by ewk (5923) on Friday May 26 2023, @07:40AM (#1308273)

      I think the main fear is that the idiots will not just kill themselves.
      Hopefully enough people will be left to sort things out.

      --
      I don't always react, but when I do, I do it on SoylentNews
  • (Score: 4, Insightful) by SomeGuy on Friday May 26 2023, @01:55AM (2 children)

    by SomeGuy (5632) on Friday May 26 2023, @01:55AM (#1308244)

    The thing is, in the early days of each of those, there WERE things that could put lives in danger.

    But what happened is society identified the risks, adapted, and regulated where needed.

    During the early days of cars, before the Model T and even after, you might have preferred to walk instead. Probably the same with airplanes. Getting into classically programmed computers, I could tell you about many f-ups.

    But what ARE the dangers of AI? Except for deepfakes, nobody seems to want to answer that. I think one of the main dangers is privacy-sucking.

    A huge danger, as with many of those previous technologies, is becoming overly reliant on them and expecting them to always work perfectly. A vaccine does not guarantee you won't die of a disease, so the correct thing to do is take other precautions in addition. Even though we have airplanes, you can still live in Fresno (Airplane operator says: Fresno? Nobody goes to Fresno anymore.) Thankfully we are still allowed to walk, even though we have cars.

    • (Score: 1) by Mezion on Friday May 26 2023, @04:42AM

      by Mezion (18509) on Friday May 26 2023, @04:42AM (#1308253)

      But what ARE the dangers of AI? Except for deepfakes, nobody seems to want to answer that.

      Because it is FUD for an agenda. My first thought on this article is that he was part of what made AI development the current "next big thing".
      This is the next stage, where the early birds attempt to block each other and to block new competition from starting up.
      He doesn't want regulation of his own AI products; he wants roadblocks created for anyone else who might attempt to get started in the same field.

      What we really need are methods to prevent companies from turning their "marketing features" into requirements.
      The amount of "required" information that is in fact only required for the marketing or sale of the information you supply is ridiculous.

    • (Score: 2) by Tork on Friday May 26 2023, @04:19PM

      by Tork (3914) Subscriber Badge on Friday May 26 2023, @04:19PM (#1308328)

      But what ARE the dangers of AI? Except for deepfakes, nobody seems to want to answer that.

      Nobody seems to want to answer that? The danger is millions of people suddenly going unemployed because trainable software can reduce headcount.

      --
      🏳️‍🌈 Proud Ally 🏳️‍🌈