Eric Schmidt wants to prevent potential abuse of AI:
Add Eric Schmidt to the list of tech luminaries concerned about the dangers of AI. The former Google chief told guests at The Wall Street Journal's CEO Council Summit that AI represents an "existential risk" that could get many people "harmed or killed." He doesn't feel that threat is serious at the moment, but he sees a near future where AI could help find software security flaws or discover new kinds of biology. It's important to ensure these systems aren't "misused by evil people," the veteran executive says.
Schmidt doesn't have a firm solution for regulating AI, but he believes there won't be an AI-specific regulator in the US. He participated in a National Security Commission on AI that reviewed the technology and published a 2021 report determining that the US wasn't ready for the tech's impact.
Schmidt doesn't have direct influence over AI. However, he joins a growing number of well-known moguls who have argued for a careful approach. Current Google CEO Sundar Pichai has cautioned that society needs to adapt to AI, while OpenAI leader Sam Altman has expressed concern that authoritarians might abuse these algorithms. In March, numerous industry leaders and researchers (including Elon Musk and Steve Wozniak) signed an open letter calling on companies to pause AI experiments for six months while they rethought the safety and ethical implications of their work.
(Score: 3, Interesting) by Snotnose on Thursday May 25 2023, @11:18PM (5 children)
Leading stable owner says cars pose an existential risk that puts lives in danger
Leading train operator says airplanes pose an existential risk that puts lives in danger
Leading doctor says vaccines pose an existential risk that puts lives in danger
Leading NASA calculator says computers pose an existential risk that puts lives in danger
Need I go on?
Bad decisions, great stories
(Score: 1) by Runaway1956 on Friday May 26 2023, @12:05AM (1 child)
Don't forget that electricity is an existential threat that has killed thousands, at least! I don't even know if there is an official body count. Add up all the professional electricians who made a mistake, all the amateurs who really screwed up badly, all the people who have accidentally been killed by downed power lines, plus those felons who were invited to sit in a chair that would give them the charge of their lives.
I have little doubt that idiots putting their faith in AI will manage to kill themselves. Eventually, people will begin to sort things out.
“I have become friends with many school shooters” - Tampon Tim Walz
(Score: 2) by ewk on Friday May 26 2023, @07:40AM
I think the main fear is that the idiots will not just kill themselves.
Hopefully enough people will be left to sort things out.
I don't always react, but when I do, I do it on SoylentNews
(Score: 4, Insightful) by SomeGuy on Friday May 26 2023, @01:55AM (2 children)
The thing is, in the early days of each of those, there WERE things that could put lives in danger.
But what happened is society identified the risks, adapted, and regulated where needed.
During the early days of cars, before the Model T and even after, you might have preferred to walk instead. Probably the same with airplanes. Getting into classically programmed computers, I could tell you about many f-ups.
But what ARE the dangers of AI? Except for deepfakes, nobody seems to want to answer that. I think one of the main dangers is privacy-sucking.
A huge danger, as with many of those previous technologies, is becoming overly reliant on them and expecting them to always work perfectly. A vaccine does not guarantee you won't die of a disease, so the correct thing to do is use other precautions in addition. Even though we have airplanes, you can still live in Fresno (Airplane operator says: Fresno? Nobody goes to Fresno any more.) Thankfully we are still allowed to walk, even though we have cars.
(Score: 1) by Mezion on Friday May 26 2023, @04:42AM
Because it is FUD for an agenda. My first thought on this article is that he was part of what made AI development the current "next big thing".
This is the next stage, where the early birds attempt to block each other, and to block new competition from starting up.
He doesn't want regulation of their AI products, he wants roadblocks created for anyone else that might attempt to get started in the same field.
What we really need are ways to prevent companies from making their "marketing features" mandatory.
The amount of "required" information that is really only required for the marketing or sale of the data you supply is ridiculous.
(Score: 2) by Tork on Friday May 26 2023, @04:19PM
Nobody seems to want to answer that? The danger is millions of people suddenly going unemployed because trainable software can reduce headcount.
🏳️🌈 Proud Ally 🏳️🌈