Eric Schmidt wants to prevent potential abuse of AI:
Add Eric Schmidt to the list of tech luminaries concerned about the dangers of AI. The former Google chief tells guests at The Wall Street Journal's CEO Council Summit that AI represents an "existential risk" that could get many people "harmed or killed." He doesn't feel that threat is serious at the moment, but he sees a near future where AI could help find software security flaws or new biology types. It's important to ensure these systems aren't "misused by evil people," the veteran executive says.
Schmidt doesn't have a firm solution for regulating AI, but he believes there won't be an AI-specific regulator in the US. He participated in a National Security Commission on AI that reviewed the technology and published a 2021 report determining that the US wasn't ready for the tech's impact.
Schmidt doesn't have direct influence over AI. However, he joins a growing number of well-known moguls who have argued for a careful approach. Current Google CEO Sundar Pichai has cautioned that society needs to adapt to AI, while OpenAI leader Sam Altman has expressed concern that authoritarians might abuse these algorithms. In March, numerous industry leaders and researchers (including Elon Musk and Steve Wozniak) signed an open letter calling on companies to pause AI experiments for six months while they rethought the safety and ethical implications of their work.
Eric Schmidt is a major cause of what's happening. For many years he ran the beast that's busy stealing all the data that AI is trained on. He literally made AI happen on the backs of everyone else. And now he's concerned that some people may get harmed? The Google man loves his fellow man now? Go fuck yourself Eric.
More likely, he has seen that others will probably get a really good AI working before Google can (especially the way they're shedding development projects now) and so it must be stopped while they catch up.
Leading stable owner says cars pose an existential risk that puts lives in danger
Leading train operator says airplanes pose an existential risk that puts lives in danger
Leading doctor says vaccines pose an existential risk that puts lives in danger
Leading NASA calculator says computers pose an existential risk that puts lives in danger
Need I go on?
Don't forget that electricity is an existential threat that has killed thousands at least! I don't even know if there is an official body count. Add up all the professional electricians who made a mistake, all the amateurs who really screwed up badly, all those people who have accidentally been killed by downed power lines, plus those felons who were invited to sit in a chair that would give them the charge of their lives.
I have little doubt that idiots putting their faith in AI will manage to kill themselves. Eventually, people will begin to sort things out.
I think the main fear is that the idiots will not just kill themselves. Hopefully enough people will be left to sort things out.
The thing is, in the early days of each of those, there WERE things that could put lives in danger.
But what happened is society identified the risks, adapted, and regulated where needed.
During the early days of cars, before the Model T and even after, you might have preferred to walk instead. Probably the same with airplanes. Getting into classically programmed computers, I could tell you about many f-ups.
But what ARE the dangers of AI? Except for deepfakes, nobody seems to want to answer that. I think one of the main dangers is privacy-sucking.
A huge danger, as with many of those previous technologies, is becoming overly-reliant and expecting them to always work perfectly. A vaccine does not guarantee you won't die of a disease, so the correct thing to do is use other precautions in addition. Even though we have airplanes, you can still live in Fresno (Airplane operator says: Fresno? Nobody goes to Fresno any more.) Thankfully we are still allowed to walk, even though we have cars.
But what ARE the dangers of AI? Except for deepfakes, nobody seems to want to answer that.
Because it is FUD for an agenda. First thought on this article is that he (was part of what) made AI development the current "next big thing". This is the next stage, where the early birds attempt to block each other, and to block new competition from starting up. He doesn't want regulation of their AI products; he wants roadblocks created for anyone else that might attempt to get started in the same field.
What we really need are methods to prevent companies from making requirements for their "marketing features". The current amount of "required" information that is really only required for marketing/sale of supplied information is ridiculous.
Nobody seems to want to answer that? The danger is millions of people suddenly going unemployed because trainable software can reduce headcount.
The problem is you need to assess relative risks. There are risks in both directions. To me it seems as if the larger risk is for humans to continue running power games with an increasing array of omnilethal weapons. The AI is a risk, but once a superhuman AI takes control, the risk is passed. If people keep running things, there's no bound, and if you keep playing games of bluff and counter-bluff, at some point there will be a showdown and everyone dies.
So, yeah, it's a risk. And we should do what we can to minimize it. My favored plan is to have the AI like people, and not want them to really get hurt.
Sounds good! Do you know if anyone has asked an AI about the value of humans? Like, are we worth keeping around? Do we have any redeeming qualities, or not worth all the mess that many cause?
So far the "AI"s that have been built are not self-aware. (Well, maybe some of the robots are, but they can't tell us about it.)
OTOH, if your point is that the goal is so ambiguous that we don't have any idea of how to achieve it, you're probably right.
So far the "AI"s that have been built are not self-aware.
That's what they want you to believe. So far it's working. :) (part of me is smiling, part of me is worried)
Despite sci-fi, thinktanks, planners, visionaries, civic leaders, tech-types, musings, etc., I think it's very unknown. Our pattern-matching brains are limited by what we think we can imagine. As this thing grows we may see repercussions we just could not envision.
One of my biggest immediate concerns is what we've seen in society, more and more: think of all of us humans as neurons, and the Internet and social media as the interconnects. It seems like false "information" sometimes spreads faster than facts. Long story (that I've written about before) short: the news media is too often just plain wrong, but it seems that humans don't exercise good healthy skepticism. News' desire / goal to beat everyone else with the breaking headlines means they don't spend time fact-checking. I had an inside connection to a very major news story (Exxon Valdez) and what 99% of the "news" and people believed was bald-faced rumors and lies, started by people who wanted $. Not just through news story sales, but trying to seed a potential jury for future lawsuits. But everyone believed that Captain Hazelwood was drunk and that's what caused the crash and environmental disaster. (He was off duty. And not drunk.)
Point is: how do we know what the "facts" are on which AI bases its outputs? I also remember all too well: Pons and Fleischmann "cold fusion". It was all the rage, craze, viral, the world is saved. I was skeptical, and as things came out that it was all false, wrong, hoax, whatever, I remember thinking: I will never flat-out believe any news story up front. "Time will tell" and "we'll see" will rule.
At some point AIs will communicate, collaborate, share information, etc., and who's to say what is fact anymore...
All that said, sadly the world is a competitive, rather than collaborative / cooperative place, so like the "atom bomb", if we don't develop AI, it's very unclear what will happen (because unfriendly entities are developing AI.)
One of my optimistic hopes is that AI will be smart enough to be skeptical of what "information" it has; to understand its output is based on potentially flawed / false data.
My fear is what I observe: people are far too trusting of whatever they see coming out of a computer. I'm having a huge personal problem with it right now, long story but basically an incorrect postal mail address has gotten into some kind of automated USPS system, and my mail is being sent to a very incorrect place. I don't know how this happened, nor who, nor when, but it's automagically propagating out to every entity that would need to send me USPS mail, including my bank. Nobody is telling me anything. USPS "inspectors" are absolute crap- absolute zero help.
So you think the superhuman AI is going to spare us humans?
In the entire history of the world there's been only one method that's had any success at keeping a technology from being abused by evil people:
Never develop the fucking technology in the first place.
If it exists it will be abused. Full stop.
And because that should be obvious to anyone with two honest brain cells to rub together, that imposes upon every inventor the moral responsibility to not create anything whose risks outweigh its benefits.
And yes, you can't really tell that for sure beforehand - but if you're smart enough to build it, you're smart enough to at least take a good hard look at the ways it could be used and abused and make an educated guess.
I agree as a philosophical ideal, but reality is: many others are working on it and will develop it too, much like the "atom bomb". So now what do you do? If you don't develop it, and another country that is aggressively hell-bent on world domination does, you may never catch up and may end up conquered and assimilated.
Putting millions out of work for profit is just capitalism. It's been done before. Safe bet using tech to replace paid workers will happen again.
The issue is that the big guys fall behind.
The bad news is that it's pretty much guaranteed that if the technology can be developed, it will be developed, because the people who want to use it for evil will be able to find technologists desperate enough or greedy enough to take the money and develop it.
If you need a good real-life example of this, there have been people working on facial recognition for a while in an effort to do fairly innocuous things. And lots of people working on general purpose databases. The Chinese Communist Party is very interested in that, because they want to combine it with their widespread network of surveillance cameras to build a database of what everybody is doing all day. I'm sure government agencies in other countries are as well, with or without the permission of the politicians.
Seriously though, most of these so-called risks of AI are not really unique to AI, and have mostly existed before. The issue of automation taking away jobs has been a thing since at least the beginning of the Industrial Revolution. It didn't take AI to flood our brains with propaganda and lies: that was happening long before there were even telecommunications. All of these things Schmidt says he is worried about, like finding software flaws or new biological types, are possible even without AI. What answer does our society continue to collectively give to these issues? We don't fucking care! There are real concerns with the advent of any new technology, but it is not the technology itself that is problematic, but how people will misuse it. These expressions of concern by tech luminaries seem like just more hypocritical posturing that serves as a distraction from the real issues we should be worrying about.