
posted by janrinok on Thursday May 25 2023, @10:41PM

Eric Schmidt wants to prevent potential abuse of AI:

Add Eric Schmidt to the list of tech luminaries concerned about the dangers of AI. The former Google chief tells guests at The Wall Street Journal's CEO Council Summit that AI represents an "existential risk" that could get many people "harmed or killed." He doesn't feel that threat is serious at the moment, but he sees a near future where AI could help find software security flaws or new biology types. It's important to ensure these systems aren't "misused by evil people," the veteran executive says.

Schmidt doesn't have a firm solution for regulating AI, but he believes there won't be an AI-specific regulator in the US. He participated in a National Security Commission on AI that reviewed the technology and published a 2021 report determining that the US wasn't ready for the tech's impact.

Schmidt doesn't have direct influence over AI. However, he joins a growing number of well-known moguls who have argued for a careful approach. Current Google CEO Sundar Pichai has cautioned that society needs to adapt to AI, while OpenAI leader Sam Altman has expressed concern that authoritarians might abuse these algorithms. In March, numerous industry leaders and researchers (including Elon Musk and Steve Wozniak) signed an open letter calling on companies to pause AI experiments for six months while they rethought the safety and ethical implications of their work.


Original Submission

Related Stories

The Dangers of a Superintelligent AI is Fiction 19 comments

Talk of the existential threat of AI is science fiction, and bad science fiction at that, because it is not based on anything we know about science or logic, nor on anything we even know about ourselves:

Despite their apparent success, LLMs are not (really) 'models of language' but are statistical models of the regularities found in linguistic communication. Models and theories should explain a phenomenon (e.g., F = ma) but LLMs are not explainable because explainability requires structured semantics and reversible compositionality that these models do not admit (see Saba, 2023 for more details). In fact, and due to the subsymbolic nature of LLMs, whatever 'knowledge' these models acquire about language will always be buried in billions of microfeatures (weights), none of which is meaningful on its own. In addition to the lack of explainability, LLMs will always generate biased and toxic language since they are susceptible to the biases and toxicity in their training data (Bender et al., 2021). Moreover, and due to their statistical nature, these systems will never be trusted to decide on the "truthfulness" of the content they generate (Borji, 2023) – LLMs ingest text and they cannot decide which fragments of text are true and which are not. Note that none of these problematic issues is a function of scale; they are paradigmatic issues that are a byproduct of the architecture of deep neural networks (DNNs) and their training procedures. Finally, and contrary to some misguided narrative, these LLMs do not have human-level understanding of language (for lack of space we do not discuss here the limitations of LLMs regarding their linguistic competence, but see this for some examples of problems related to intentionality and commonsense reasoning that these models will always have problems with). Our focus here is on the now popular theme of how dangerous these systems are to humanity.
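
To make the "buried in billions of microfeatures" point concrete, here is a minimal sketch (assuming Python with PyTorch; the toy network is purely illustrative and stands in for an LLM's billions of parameters). Inspecting any individual weight tells you nothing about what the model "knows":

    # Illustrative toy network standing in for an LLM's parameters.
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
    print("parameters:", sum(p.numel() for p in model.parameters()))  # 178

    # Any single weight is just a number shaped by gradient descent,
    # an uninterpretable "microfeature" with no standalone meaning.
    print(model[0].weight[0, :4])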

The article goes on to provide a statistical argument as to why we are many, many years away from AI being an existential threat, ending with:

So enjoy the news about "the potential danger of AI". But watch and read this news like you're watching a really funny sitcom. Make a nice drink (or a nice cup of tea), listen and smile. And then please, sleep well, because all is OK, no matter what some self-appointed godfathers say. They might know about LLMs, but they have apparently never heard of BDIs.

The author's conclusion seems to be that although AI may pose a threat to certain professions, it doesn't endanger the existence of humanity.



Original Submission

This discussion was created by janrinok (52) for logged-in users only, but now has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 4, Insightful) by Rosco P. Coltrane on Thursday May 25 2023, @11:08PM (1 child)

    by Rosco P. Coltrane (4757) on Thursday May 25 2023, @11:08PM (#1308226)

    Eric Schmidt is a major cause of what's happening: for many years he ran the beast that's busy stealing all the data AI is trained on. He literally made AI happen on the backs of everyone else. And now he's concerned that some people may get harmed? The Google man loves his fellow man now? Go fuck yourself, Eric.

    • (Score: 2) by sjames on Friday May 26 2023, @08:07PM

      by sjames (2882) on Friday May 26 2023, @08:07PM (#1308368) Journal

      More likely, he has seen that others will probably get a really good AI working before Google can (especially the way they're shedding development projects now) and so it must be stopped while they catch up.

  • (Score: 3, Interesting) by Snotnose on Thursday May 25 2023, @11:18PM (5 children)

    by Snotnose (1623) on Thursday May 25 2023, @11:18PM (#1308227)

    Leading stable owner says cars pose an existential risk that puts lives in danger
    Leading train operator says airplanes pose an existential risk that puts lives in danger
    Leading doctor says vaccines pose an existential risk that puts lives in danger
    Leading NASA calculator says computers pose an existential risk that puts lives in danger
    Need I go on?

    --
    When the dust settled America realized it was saved by a porn star.
    • (Score: 1) by Runaway1956 on Friday May 26 2023, @12:05AM (1 child)

      by Runaway1956 (2926) Subscriber Badge on Friday May 26 2023, @12:05AM (#1308231) Journal

      Don't forget that electricity is an existential threat, that has killed thousands at least! I don't even know if there is an official body count. Add up all the professional electricians who made a mistake, all the amateurs who really screwed up badly, all those people who have accidentally been killed by downed power lines, plus those felons who were invited to sit in a chair that would give them the charge of their lives.

      I have little doubt that idiots putting their faith in AI will manage to kill themselves. Eventually, people will begin to sort things out.

      • (Score: 2) by ewk on Friday May 26 2023, @07:40AM

        by ewk (5923) on Friday May 26 2023, @07:40AM (#1308273)

        I think the main fear is that the idiots will not just kill themselves.
        Hopefully enough people will be left to sort things out.

        --
        I don't always react, but when I do, I do it on SoylentNews
    • (Score: 4, Insightful) by SomeGuy on Friday May 26 2023, @01:55AM (2 children)

      by SomeGuy (5632) on Friday May 26 2023, @01:55AM (#1308244)

      The thing is, in the early days of each of those, there WERE things that could put lives in danger.

      But what happened is society identified the risks, adapted, and regulated where needed.

      During the early days of cars, before the Model T and even after, you might have preferred to walk instead. Probably the same with airplanes. Getting into classically programmed computers, I could tell you about many f-ups.

      But what ARE the dangers of AI? Except for deepfakes, nobody seems to want to answer that. I think one of the main dangers is privacy-sucking.

      A huge danger, as with many of those previous technologies, is becoming overly reliant and expecting them to always work perfectly. A vaccine does not guarantee you won't die of a disease, so the correct thing to do is use other precautions in addition. Even though we have airplanes, you can still live in Fresno (Airplane operator says: Fresno? Nobody goes to Fresno any more.) Thankfully we are still allowed to walk, even though we have cars.

      • (Score: 1) by Mezion on Friday May 26 2023, @04:42AM

        by Mezion (18509) on Friday May 26 2023, @04:42AM (#1308253)

        But what ARE the dangers of AI? Except for deepfakes, nobody seems to want to answer that.

        Because it is FUD for an agenda. My first thought on this article is that he was part of what made AI development the current "next big thing".
        This is the next stage, where the early birds attempt to block each other, and to block new competition from starting up.
        He doesn't want regulation of their AI products; he wants roadblocks created for anyone else who might attempt to get started in the same field.

        What we really need are ways to prevent companies from making their "marketing features" mandatory.
        The amount of "required" information that is really only required for the marketing or sale of the supplied information is ridiculous.

      • (Score: 2) by Tork on Friday May 26 2023, @04:19PM

        by Tork (3914) Subscriber Badge on Friday May 26 2023, @04:19PM (#1308328)

        But what ARE the dangers of AI? Except for deepfakes, nobody seems to want to answer that.

        Nobody seems to want to answer that? The danger is millions of people suddenly going unemployed because trainable software can reduce headcount.

        --
        🏳️‍🌈 Proud Ally 🏳️‍🌈
  • (Score: 3, Insightful) by HiThere on Thursday May 25 2023, @11:20PM (4 children)

    by HiThere (866) Subscriber Badge on Thursday May 25 2023, @11:20PM (#1308228) Journal

    The problem is that you need to assess relative risks. There are risks in both directions. To me it seems as if the larger risk is for humans to continue running power games with an increasing array of omnilethal weapons. The AI is a risk, but once a superhuman AI takes control, that risk is past. If people keep running things, there's no bound, and if you keep playing games of bluff and counter-bluff, at some point there will be a showdown and everyone dies.

    So, yeah, it's a risk. And we should do what we can to minimize it. My favored plan is to have the AI like people, and not want them to really get hurt.

    --
    Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
    • (Score: 2) by RS3 on Friday May 26 2023, @12:09AM (2 children)

      by RS3 (6367) on Friday May 26 2023, @12:09AM (#1308236)

      My favored plan is to have the AI like people, and not want them to really get hurt.

      Sounds good! Do you know if anyone has asked an AI about the value of humans? Like, are we worth keeping around? Do we have any redeeming qualities, or are we not worth all the mess that many of us cause?

      • (Score: 2) by HiThere on Friday May 26 2023, @01:08PM (1 child)

        by HiThere (866) Subscriber Badge on Friday May 26 2023, @01:08PM (#1308305) Journal

        So far the "AI"s that have been built are not self-aware. (Well, maybe some of the robots are, but they can't tell us about it.)

        OTOH, if your point is that the goal is so ambiguous that we don't have any idea of how to achieve it, you're probably right.

        --
        Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
        • (Score: 2) by RS3 on Friday May 26 2023, @05:32PM

          by RS3 (6367) on Friday May 26 2023, @05:32PM (#1308338)

          So far the "AI"s that have been built are not self-aware.

          That's what they want you to believe. So far it's working. :) (part of me is smiling, part of me is worried)

          OTOH, if your point is that the goal is so ambiguous that we don't have any idea of how to achieve it, you're probably right.

          Despite sci-fi, thinktanks, planners, visionaries, civic leaders, tech-types, musings, etc., I think it's very unknown. Our pattern-matching brains are limited by what we think we can imagine. As this thing grows we may see repercussions we just could not envision.

          One of my biggest immediate concerns is what we've seen in society, more and more: think of all of us humans as neurons, and the Internet and social media as the interconnects. It seems like false "information" sometimes spreads faster than facts. Long story (that I've written about before) short: the news media is too often just plain wrong, but it seems that humans don't exercise good healthy skepticism. The news outlets' desire to beat everyone else with the breaking headline means they don't spend time fact-checking. I had an inside connection to a very major news story (Exxon Valdez), and what 99% of the "news" and people believed was bald-faced rumors and lies, started by people who wanted $. Not just through news story sales, but by trying to seed a potential jury for future lawsuits. But everyone believed that Captain Hazelwood was drunk and that's what caused the crash and environmental disaster. (He was off duty, and not drunk.)

          Point is: how do we know what the "facts" are on which AI bases its outputs? I also remember all too well: Pons and Fleischmann "cold fusion". It was all the rage, craze, viral, the world is saved. I was skeptical, and as things came out that it was all false, wrong, hoax, whatever, I remember thinking: I will never flat-out believe any news story up front. "Time will tell" and "we'll see" will rule.

          At some point AIs will communicate, collaborate, share information, etc., and who's to say what is fact anymore...

          All that said, sadly the world is a competitive, rather than collaborative / cooperative place, so like the "atom bomb", if we don't develop AI, it's very unclear what will happen (because unfriendly entities are developing AI.)

          One of my optimistic hopes is that AI will be smart enough to be skeptical of what "information" it has; to understand its output is based on potentially flawed / false data.

          My fear is what I observe: people are far too trusting of whatever they see coming out of a computer. I'm having a huge personal problem with it right now. Long story short: an incorrect postal mailing address has gotten into some kind of automated USPS system, and my mail is being sent to a very incorrect place. I don't know how this happened, nor who did it, nor when, but it's automagically propagating out to every entity that would need to send me USPS mail, including my bank. Nobody is telling me anything. USPS "inspectors" are absolute crap, absolutely zero help.

    • (Score: 0) by Anonymous Coward on Friday May 26 2023, @02:05PM

      by Anonymous Coward on Friday May 26 2023, @02:05PM (#1308313)

      So you think the superhuman AI is going to spare us humans?

  • (Score: 4, Insightful) by Immerman on Thursday May 25 2023, @11:56PM (5 children)

    by Immerman (3985) on Thursday May 25 2023, @11:56PM (#1308229)

    In the entire history of the world there's been only one method that's had any success at keeping a technology from being abused by evil people:

    Never develop the fucking technology in the first place.

    If it exists it will be abused. Full stop.

    And because that should be obvious to anyone with two honest brain cells to rub together, that imposes upon every inventor the moral responsibility to not create anything whose risks outweigh its benefits.

    And yes, you can't really tell that for sure beforehand - but if you're smart enough to build it, you're smart enough to at least take a good hard look at the ways it could be used and abused and make an educated guess.

    • (Score: 4, Insightful) by RS3 on Friday May 26 2023, @12:05AM

      by RS3 (6367) on Friday May 26 2023, @12:05AM (#1308232)

      I agree as a philosophical ideal, but reality is: many others are working on it and will develop it too, much like the "atom bomb". So now what do you do? If you don't develop it, and another country that is aggressively hell-bent on world domination does, you may never catch up and may end up conquered and assimilated.

    • (Score: 3, Insightful) by Tork on Friday May 26 2023, @12:58AM (2 children)

      by Tork (3914) Subscriber Badge on Friday May 26 2023, @12:58AM (#1308241)
      I don't think the issue is that it'll be used for harm, I think the issue will be that it's used carelessly. The nuance here is that a business might use AI to make a lot of money but that business wouldn't care at all if its product puts millions out of work. I'm not sure how an inventor could reliably be held accountable for developing the foundation to get a business there. Kinda reminds me of Silicon Valley. "Who knew an enhancement in data compression would lead to SkyNet?"
      --
      🏳️‍🌈 Proud Ally 🏳️‍🌈
      • (Score: 2) by MIRV888 on Friday May 26 2023, @02:04AM

        by MIRV888 (11376) on Friday May 26 2023, @02:04AM (#1308246)

        Putting millions out of work for profit is just capitalism. It's been done before. Safe bet using tech to replace paid workers will happen again.

      • (Score: 2) by legont on Saturday May 27 2023, @01:10AM

        by legont (4179) on Saturday May 27 2023, @01:10AM (#1308408)

        The issue is that the big guys fall behind.

        --
        "Wealth is the relentless enemy of understanding" - John Kenneth Galbraith.
    • (Score: 2) by Thexalon on Friday May 26 2023, @11:03AM

      by Thexalon (636) on Friday May 26 2023, @11:03AM (#1308291)

      The bad news is that it's pretty much guaranteed that if the technology can be developed, it will be developed, because the people who want to use it for evil will be able to find technologists desperate enough or greedy enough to take the money and develop it.

      If you need a good real-life example of this, there have been people working on facial recognition for a while in an effort to do fairly innocuous things. And lots of people working on general purpose databases. The Chinese Communist Party is very interested in that, because they want to combine it with their widespread network of surveillance cameras to build a database of what everybody is doing all day. I'm sure government agencies in other countries are as well, with or without the permission of the politicians.

      --
      The only thing that stops a bad guy with a compiler is a good guy with a compiler.
  • (Score: 0) by Anonymous Coward on Saturday May 27 2023, @07:06AM

    by Anonymous Coward on Saturday May 27 2023, @07:06AM (#1308436)

    Seriously though, most of these so-called risks of AI are not really unique to AI, and have mostly existed before. The issue of automation taking away jobs has been a thing since at least the beginning of the Industrial Revolution. It didn't take AI to flood our brains with propaganda and lies: that was happening long before there were even telecommunications. All of these things Schmidt says he is worried about, like finding software flaws or new biology types, are possible even without AI. What answer does our society continue to collectively give to these issues? We don't fucking care! There are real concerns with the advent of any new technology, but not because the technology itself is problematic: the problem is how people will misuse it. These expressions of concern by tech luminaries seem like just more hypocritical posturing, a distraction from the real issues we should be worrying about.
