
posted by janrinok on Thursday May 25 2023, @10:41PM   Printer-friendly

Eric Schmidt wants to prevent potential abuse of AI:

Add Eric Schmidt to the list of tech luminaries concerned about the dangers of AI. The former Google chief tells guests at The Wall Street Journal's CEO Council Summit that AI represents an "existential risk" that could get many people "harmed or killed." He doesn't feel that threat is serious at the moment, but he sees a near future where AI could help find software security flaws or discover new forms of biology. It's important to ensure these systems aren't "misused by evil people," the veteran executive says.

Schmidt doesn't have a firm solution for regulating AI, but he believes there won't be an AI-specific regulator in the US. He participated in a National Security Commission on AI that reviewed the technology and published a 2021 report determining that the US wasn't ready for the tech's impact.

Schmidt doesn't have direct influence over AI. However, he joins a growing number of well-known moguls who have argued for a careful approach. Current Google CEO Sundar Pichai has cautioned that society needs to adapt to AI, while OpenAI leader Sam Altman has expressed concern that authoritarians might abuse these algorithms. In March, numerous industry leaders and researchers (including Elon Musk and Steve Wozniak) signed an open letter calling on companies to pause AI experiments for six months while they rethought the safety and ethical implications of their work.

Original Submission

This discussion was created by janrinok (52) for logged-in users only, but now has been archived. No new comments can be posted.
  • (Score: 4, Insightful) by Immerman on Thursday May 25 2023, @11:56PM (5 children)

    by Immerman (3985) on Thursday May 25 2023, @11:56PM (#1308229)

    In the entire history of the world there's been only one method that's had any success at keeping a technology from being abused by evil people:

    Never develop the fucking technology in the first place.

    If it exists it will be abused. Full stop.

    And because that should be obvious to anyone with two honest brain cells to rub together, that imposes upon every inventor the moral responsibility to not create anything whose risks outweigh its benefits.

    And yes, you can't really tell that for sure beforehand - but if you're smart enough to build it, you're smart enough to at least take a good hard look at the ways it could be used and abused and make an educated guess.

    Starting Score:    1  point
    Moderation   +2  
       Insightful=2, Total=2
    Extra 'Insightful' Modifier   0  
    Karma-Bonus Modifier   +1  

    Total Score:   4  
  • (Score: 4, Insightful) by RS3 on Friday May 26 2023, @12:05AM

    by RS3 (6367) on Friday May 26 2023, @12:05AM (#1308232)

    I agree as a philosophical ideal, but reality is: many others are working on it and will develop it too, much like the "atom bomb". So now what do you do? If you don't develop it, and another country that is aggressively hell-bent on world domination does, you may never catch up and may end up conquered and assimilated.

  • (Score: 3, Insightful) by Tork on Friday May 26 2023, @12:58AM (2 children)

    by Tork (3914) Subscriber Badge on Friday May 26 2023, @12:58AM (#1308241)
    I don't think the issue is that it'll be used for harm, I think the issue will be that it's used carelessly. The nuance here is that a business might use AI to make a lot of money but that business wouldn't care at all if its product puts millions out of work. I'm not sure how an inventor could reliably be held accountable for developing the foundation to get a business there. Kinda reminds me of Silicon Valley. "Who knew an enhancement in data compression would lead to SkyNet?"
    🏳️‍🌈 Proud Ally 🏳️‍🌈
    • (Score: 2) by MIRV888 on Friday May 26 2023, @02:04AM

      by MIRV888 (11376) on Friday May 26 2023, @02:04AM (#1308246)

      Putting millions out of work for profit is just capitalism. It's been done before. Safe bet using tech to replace paid workers will happen again.

    • (Score: 2) by legont on Saturday May 27 2023, @01:10AM

      by legont (4179) on Saturday May 27 2023, @01:10AM (#1308408)

      The issue is that the big guys fall behind.

      "Wealth is the relentless enemy of understanding" - John Kenneth Galbraith.
  • (Score: 2) by Thexalon on Friday May 26 2023, @11:03AM

    by Thexalon (636) on Friday May 26 2023, @11:03AM (#1308291)

    The bad news is that it's pretty much guaranteed that if the technology can be developed, it will be developed: the people who want to use it for evil will always be able to find technologists desperate or greedy enough to take the money and build it.

    If you need a good real-life example of this, there have been people working on facial recognition for a while in an effort to do fairly innocuous things, and plenty of others working on general-purpose databases. The Chinese Communist Party is very interested in both, because it wants to combine them with its widespread network of surveillance cameras to build a database of what everybody is doing all day. I'm sure government agencies in other countries are as well, with or without the permission of the politicians.

    The only thing that stops a bad guy with a compiler is a good guy with a compiler.