
posted by janrinok on Thursday May 25 2023, @10:41PM   Printer-friendly

Eric Schmidt wants to prevent potential abuse of AI:

Add Eric Schmidt to the list of tech luminaries concerned about the dangers of AI. The former Google chief tells guests at The Wall Street Journal's CEO Council Summit that AI represents an "existential risk" that could get many people "harmed or killed." He doesn't feel that threat is serious at the moment, but he sees a near future where AI could help find software security flaws or discover new kinds of biology. It's important to ensure these systems aren't "misused by evil people," the veteran executive says.

Schmidt doesn't have a firm solution for regulating AI, but he believes there won't be an AI-specific regulator in the US. He participated in a National Security Commission on AI that reviewed the technology and published a 2021 report determining that the US wasn't ready for the tech's impact.

Schmidt doesn't have direct influence over AI. However, he joins a growing number of well-known moguls who have argued for a careful approach. Current Google CEO Sundar Pichai has cautioned that society needs to adapt to AI, while OpenAI leader Sam Altman has expressed concern that authoritarians might abuse these algorithms. In March, numerous industry leaders and researchers (including Elon Musk and Steve Wozniak) signed an open letter calling on companies to pause AI experiments for six months while they rethought the safety and ethical implications of their work.

Original Submission

This discussion was created by janrinok (52) for logged-in users only, but has now been archived. No new comments can be posted.
  • (Score: 2) by HiThere on Friday May 26 2023, @01:08PM (1 child)

    by HiThere (866) Subscriber Badge on Friday May 26 2023, @01:08PM (#1308305) Journal

    So far the "AI"s that have been built are not self-aware. (Well, maybe some of the robots are, but they can't tell us about it.)

    OTOH, if your point is that the goal is so ambiguous that we don't have any idea of how to achieve it, you're probably right.

    Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
  • (Score: 2) by RS3 on Friday May 26 2023, @05:32PM

    by RS3 (6367) on Friday May 26 2023, @05:32PM (#1308338)

    So far the "AI"s that have been built are not self-aware.

    That's what they want you to believe. So far it's working. :) (part of me is smiling, part of me is worried)

    OTOH, if your point is that the goal is so ambiguous that we don't have any idea of how to achieve it, you're probably right.

    Despite the musings of sci-fi writers, think tanks, planners, visionaries, civic leaders, tech types, etc., I think it's very much unknown. Our pattern-matching brains are limited to what we think we can imagine. As this thing grows, we may see repercussions we just could not envision.

    One of my biggest immediate concerns is what we've seen in society, more and more: think of all of us humans as neurons, and the Internet and social media as the interconnects. False "information" often seems to spread faster than facts. Long story (that I've written about before) short: the news media is too often just plain wrong, and it seems that humans don't exercise good, healthy skepticism. The media's desire to beat everyone else to the breaking headline means they don't spend time fact-checking. I had an inside connection to a very major news story (Exxon Valdez), and what 99% of the "news" and the public believed were bald-faced rumors and lies, started by people who wanted money, not just through news story sales, but by trying to seed a potential jury pool for future lawsuits. Everyone believed that Captain Hazelwood was drunk and that's what caused the crash and environmental disaster. (He was off duty, and not drunk.)

    Point is: how do we know what the "facts" are on which AI bases its outputs? I also remember all too well the Pons and Fleischmann "cold fusion" announcement. It was all the rage, a craze, viral, the-world-is-saved. I was skeptical, and as it came out that it was all false, wrong, a hoax, whatever, I remember thinking: I will never flat-out believe any news story up front. "Time will tell" and "we'll see" will rule.

    At some point AIs will communicate, collaborate, share information, etc., and who's to say what is fact anymore...

    All that said, sadly the world is a competitive rather than collaborative / cooperative place, so, as with the atom bomb, it's very unclear what will happen if we don't develop AI, because unfriendly entities are developing it anyway.

    One of my optimistic hopes is that AI will be smart enough to be skeptical of what "information" it has; to understand its output is based on potentially flawed / false data.

    My fear is what I observe: people are far too trusting of whatever they see coming out of a computer. I'm having a huge personal problem with this right now. Long story, but basically an incorrect postal mailing address has gotten into some kind of automated USPS system, and my mail is being sent to a very incorrect place. I don't know how this happened, nor who did it, nor when, but it's automagically propagating out to every entity that would need to send me USPS mail, including my bank. Nobody is telling me anything. USPS "inspectors" are absolute crap, absolutely zero help.