Eric Schmidt wants to prevent potential abuse of AI:
Add Eric Schmidt to the list of tech luminaries concerned about the dangers of AI. The former Google chief tells guests at The Wall Street Journal's CEO Council Summit that AI represents an "existential risk" that could get many people "harmed or killed." He doesn't consider that threat serious at the moment, but he sees a near future where AI could help find software security flaws or discover new kinds of biology. It's important to ensure these systems aren't "misused by evil people," the veteran executive says.
Schmidt doesn't have a firm solution for regulating AI, but he believes there won't be an AI-specific regulator in the US. He chaired the National Security Commission on AI, which reviewed the technology and published a 2021 report concluding that the US wasn't ready for its impact.
Schmidt doesn't have direct influence over AI. However, he joins a growing number of well-known moguls who have argued for a careful approach. Current Google CEO Sundar Pichai has cautioned that society needs to adapt to AI, while OpenAI leader Sam Altman has expressed concern that authoritarians might abuse these algorithms. In March, numerous industry leaders and researchers (including Elon Musk and Steve Wozniak) signed an open letter calling on companies to pause AI experiments for six months while they rethought the safety and ethical implications of their work.
(Score: 4, Insightful) by Immerman on Thursday May 25 2023, @11:56PM (5 children)
In the entire history of the world there's been only one method that's had any success at keeping a technology from being abused by evil people:
Never develop the fucking technology in the first place.
If it exists it will be abused. Full stop.
And because that should be obvious to anyone with two honest brain cells to rub together, that imposes upon every inventor the moral responsibility to not create anything whose risks outweigh its benefits.
And yes, you can't really tell that for sure beforehand - but if you're smart enough to build it, you're smart enough to at least take a good hard look at the ways it could be used and abused and make an educated guess.
(Score: 4, Insightful) by RS3 on Friday May 26 2023, @12:05AM
I agree as a philosophical ideal, but reality is: many others are working on it and will develop it too, much like the "atom bomb". So now what do you do? If you don't develop it, and another country that is aggressively hell-bent on world domination does, you may never catch up and may end up conquered and assimilated.
(Score: 3, Insightful) by Tork on Friday May 26 2023, @12:58AM (2 children)
🏳️🌈 Proud Ally 🏳️🌈
(Score: 2) by MIRV888 on Friday May 26 2023, @02:04AM
Putting millions out of work for profit is just capitalism. It's been done before. It's a safe bet that using tech to replace paid workers will happen again.
(Score: 2) by legont on Saturday May 27 2023, @01:10AM
The issue is that the big guys fall behind.
"Wealth is the relentless enemy of understanding" - John Kenneth Galbraith.
(Score: 2) by Thexalon on Friday May 26 2023, @11:03AM
The bad news is that it's pretty much guaranteed that if the technology can be developed, it will be developed, because the people who want to use it for evil will be able to find technologists desperate enough or greedy enough to take the money and develop it.
If you need a good real-life example of this, people have been working on facial recognition for a while in an effort to do fairly innocuous things, and lots of people have been working on general-purpose databases. The Chinese Communist Party is very interested in both, because it wants to combine them with its widespread network of surveillance cameras to build a database of what everybody is doing all day. I'm sure government agencies in other countries are as well, with or without the permission of the politicians.
"Think of how stupid the average person is. Then realize half of 'em are stupider than that." - George Carlin