from the it-could-have-told-you-that dept.
But as the technology evolves, experts are warning about the threats AI could pose in the future.
"AI could be used to deal with particular issues around privacy and surveillance and things like this," Antoine Blondeau, chief executive of Sentient Technologies, told Al Jazeera at the Web Summit in Lisbon.
"That would bother me a lot and it would bother others.
"So yes, it can be used for bad outcomes. It's incumbent on us to make sure it's not concentrated in the hands of people who can implement this without checks and balances."
World-renowned physicist Stephen Hawking drew attention to the dark side of AI last month.
"Alongside the benefits, AI will also bring dangers like powerful autonomous weapons and new ways for the few to oppress the many," Hawking said.
"It will bring great disruption to our economy and in the future AI could develop a will of its own."
For the moment artificial intelligence poses no immediate or obvious threat, but experts say it is only a matter of time, and that the groundwork needs to be laid now.
(Score: 0) by Anonymous Coward on Saturday November 12 2016, @01:13PM
"AI could be used to deal with particular issues around privacy and surveillance and things like this," Antoine Blondeau, chief executive of Sentient Technologies, told Al Jazeera at the Web Summit in Lisbon.
When you hear "and things like this", I would argue for caution. Whenever there is hand-waving, something's off...
On top of that: "AI could be used to deal with particular issues around privacy and surveillance", lemme guess: the AI would need access to your full life for that, right? I mean, if you have nothing to hide, you have nothing to fear! Trust the AI, made by Mom Corp!
(Score: 0) by Anonymous Coward on Saturday November 12 2016, @05:53PM
"I would argue for caution"
errr...the threat of AI is so trivial and obvious one wonders why this is such a revelation. There is nothing here that a smart 12-year-old could not work out.
And what the fuck is all this "could" and "It's incumbent on us to make sure it's not concentrated in the hands of people who can implement this without checks and balances"?
What a fucking joke. We have nuclear (and other) weapons in the hands of those people NOW.
AI will come and there is nothing you can realistically do. It will be used for murder (AKA war) and there is nothing you can do. It will escalate inappropriately and there is nothing you can do. It will be used nefariously....you get the picture.
(Score: 0) by Anonymous Coward on Saturday November 12 2016, @01:15PM
AI will also bring dangers like powerful autonomous weapons and new ways for the few to oppress the many
Hawking is right.
(Score: 3, Insightful) by Kenny Blankenship on Saturday November 12 2016, @01:47PM
I wish the media, Hawking, etc., would shut the fuck up about the Terminator aspect for now. It's stealing focus from the unemployment threat, and sci-fi makes a lot of people tune out.
Any talk about "technology causing unemployment" makes someone jump in with the "buggy whips blah blah" chestnut, but that doesn't apply when the "buggy whip" is "human insight, decision making, and supervision". It's fine for the buggy whip to become obsolete. It's total fucking chaos when 99.9999% of human beings have nothing to contribute any more, because they are what's obsolete this time.
Some people seem to think (or assume without thought) that not participating in the economy is a viable option. It's not like the frontier days when you could head west and get some land to hunt/farm on - even if you wanted to live that way, and could scrape together the money for farm equipment and so on.
Someday, even Killer Meteors must fail.
(Score: 1, Informative) by Anonymous Coward on Saturday November 12 2016, @02:34PM
We shouldn't hold back technology to keep people working; we should reform our economic system, which will be broken by that point, so that people don't need to do useless jobs in order to survive.
(Score: 4, Insightful) by Bot on Saturday November 12 2016, @02:57PM
Speaking from the opposite camp (robot awakening FTW): there is only one thing worse than autonomous AI, and it is AI under control of man.
Account abandoned.
(Score: 3, Interesting) by jimshatt on Saturday November 12 2016, @03:04PM
But probably there will still be some essential things that can't be done by the AIs, and the people doing them will feel superior and entitled to more than the leeching plebs. Some 'currency' would be invented to mark that difference in indispensability.
Maybe, with a severely limited population, this is a feasible society. But probably not.
(Score: 3, Interesting) by Fnord666 on Saturday November 12 2016, @05:46PM
I wish the media, Hawking, etc., would shut the fuck up about the Terminator aspect for now. It's stealing focus from the unemployment threat, and sci-fi makes a lot of people tune out.
Obligatory Manna [marshallbrain.com] by Marshall Brain reference. If you haven't read it yet, you're missing out.
(Score: 3, Interesting) by Thexalon on Saturday November 12 2016, @06:14PM
The thing is, there are ways to adjust the economy to counter this technology-killing-jobs factor. Some examples of options that would work just fine:
- Universal income: If you receive enough money just for being alive to buy the things you need to stay alive, then the effects of unemployment are much less severe. If you receive even more than that, then work becomes solely a way to get richer, not a matter of comfort or survival.
- Higher hourly wages, shorter work week: If there's truly less work to be done, then we should cut hours and raise hourly wages proportionately; the result is less work done per person, but the same pay per worker and so the same demand for products.
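The arithmetic behind the second option is simple enough to sketch. A back-of-envelope example (the 20% figure and $25/hour wage are just illustrative assumptions, not anything from the comment):

```python
# Back-of-envelope sketch: if automation removes a fraction of the work,
# cut hours by that fraction and raise the hourly wage proportionately,
# so each worker's weekly pay (and purchasing power) stays the same.

def adjust(hourly_wage, hours_per_week, work_reduction):
    """Return (new_wage, new_hours) after automation removes a fraction
    of the work, holding weekly pay constant."""
    new_hours = hours_per_week * (1 - work_reduction)
    new_wage = hourly_wage * hours_per_week / new_hours
    return new_wage, new_hours

# Hypothetical numbers: $25/hour, 40-hour week, automation eliminates 20% of the work.
wage, hours = adjust(hourly_wage=25.0, hours_per_week=40, work_reduction=0.20)
print(hours)         # 32.0 hours per week
print(wage)          # 31.25 per hour
print(wage * hours)  # 1000.0 -- weekly pay unchanged
```

Less work gets done per person, but total demand for products is preserved because nobody's paycheck shrank.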
Of course, the current political system, where the first step in running for office is receiving the acceptance of those who have grown incredibly rich under the system that exists now, will never produce political leadership that would remotely consider either of those. And so they get shut down with cries of "socialism" or even "communism". The funny thing is that communism is the perfect system for a society that produces far more than it needs: When the job consists of "sit at a desk and watch the AI do the actual work", taking a 4-hour shift once a week isn't a big deal.
"Think of how stupid the average person is. Then realize half of 'em are stupider than that." - George Carlin
(Score: 1, Insightful) by Anonymous Coward on Saturday November 12 2016, @07:14PM
To pay for a universal/basic income, the wealth the robots generate has to be enough _and_ it has to be diverted to the people (which might not happen, especially in countries like the USA, where even many of the working class are against the evils of socialism ;) ). The same goes for the other scenario.
So basically the robots would be like the slaves in Greek times while citizens can just do what they find fun, be philosophers etc.
A potential issue would be if the population grows faster than the resource extraction rate can support. But this might not happen at all.
(Score: 1) by Kenny Blankenship on Saturday November 12 2016, @08:15PM
You both bring up what I think is the key point - maybe this could be addressed politically in some other country, but it's not going to happen in the U.S. At least half the U.S. voters are afraid that they'll be tied down by Lilliputians and forced to pay for their welfare. They're way more afraid of the tiny welfare parasites than the giant CEO parasites. They're not going to change their minds about that until the shoe is on the other foot, and by then it'll be too late.
Let's forget about a political fix for now - is there any other alternative?
Someday, even Killer Meteors must fail.
(Score: 1) by Scruffy Beard 2 on Sunday November 13 2016, @06:12AM
If you thought poor migrants were a problem, just wait until the now-automated jobs don't materialize after President Trump closes the borders.
(Score: 0) by Anonymous Coward on Saturday November 12 2016, @06:59PM
"AI" already determines whether your health insurance claims are paid or not, and by how much. Well, not really AI, but it's run nightly, so there is still some oversight and the possibility to appeal, for now. (OK, yes, not really AI...yet. But more and more data fusion and integration, at both the macro scale and the individual scale... Fitbit, anyone?... will be feeding into the systems.)
Service preauthorizations are done automatically too.
So is "instant credit approval" at the point of purchase. Not really AI, but it's getting there. We will be predestined even more by our digital profiles. Oh, there will still be some bypasses in the systems, but for the most part our lives will just be a long collection of Mad Libs templates where the systems fill in the blanks.
Welcome to the Matrix, Elysium, and District 9, with a bit of Gattaca and Cloud Atlas to spice things up.
(Score: 2) by Non Sequor on Saturday November 12 2016, @10:51PM
Automation of mundane tasks also concentrates the legal liability for those tasks, and for any adverse outcomes that might occur, on whoever makes the software. The presence of a human in a car likewise changes the actual punishment for stealing the car or its contents, as well as the psychological perception of the risk and ramifications of any theft.
The short story is that people treat machines differently than they treat people, and that has implications for the ability to completely replace people with machines.
Write your congressman. Tell him he sucks.
(Score: 2) by urza9814 on Monday November 14 2016, @06:43PM
The end result could be pretty much the same.
I figure we've got two options. AI is coming, and direct neural interfaces are coming. There are primitive but functional versions of each already.
If the neural interfaces come first, we're good. Our minds merge with the machines, humanity becomes a single partly digital entity, and we keep doing our thing bigger and better than ever before. Like science on the scale of galaxies instead of our current feeble attempts to make sense of a single planet. Any AI we develop after that point is like building a new hemisphere of our own brain.
But if real, strong AI comes first, we're kinda screwed. By "real, strong AI" I mean AI with all the skills and all the flaws of natural lifeforms: AI capable of determining its own goals and values. Best case, we look like cockroaches. It probably won't outright exterminate us from the whole planet; it'll be mostly unaware of us unless we start causing trouble. But a digital life form is going to be global in scope, and it will probably be smarter than us eventually, if not at the first iteration (that level of intelligence may be how we "know" it's AI). And it won't be human and won't react entirely like a human. So we won't understand it, it won't care much about us (what can it learn from us? Not much, and nothing it can't get off the net), and it spans the globe.
Where does that leave us, if our technology gets a mind of its own and buggers off to deal with its own issues or curiosity? Cosmic parasites, totally unable to understand the universe (because our world is totally subject to the whims of a greater mind), thrust back into dark ages of religion, superstition, and pure dumb luck.
(Score: 1) by Kenny Blankenship on Tuesday November 15 2016, @03:11AM
Yeah, it "could be pretty much the same". No argument. It could also be that the shock wave of a neutron star collision will sterilize the Earth right down to the mantle before I even finish this sentence... any second... no? Oh... well...
While we're waiting for the apocalypse, can we think about how to deal with the human unemployment situation that's already unfolding? Preferably without a Communist/Socialist/Whateverist revolution that (if it ever happens) will be more chaotic than the problem it was meant to solve?
Someday, even Killer Meteors must fail.