Fire good. AI better:
Google CEO Sundar Pichai says artificial intelligence is going to have a bigger impact on the world than some of the most ubiquitous innovations in history. "AI is one of the most important things humanity is working on. It is more profound than, I dunno, electricity or fire," says Pichai, speaking at a town hall event in San Francisco in January.
A number of very notable tech leaders have made bold statements about the potential of artificial intelligence. Tesla boss Elon Musk says AI is more dangerous than North Korea. Famous physicist Stephen Hawking says AI could be the "worst event in the history of our civilization." And Y Combinator President Sam Altman likens AI to nuclear fission.
Even in such company, Pichai's comment seems remarkable. Interviewer and Recode executive editor Kara Swisher stopped Pichai when he made the comment. "Fire? Fire is pretty good," she retorts. Pichai sticks by his assertion. "Well, it kills people, too," Pichai says of fire. "We have learned to harness fire for the benefits of humanity but we had to overcome its downsides too. So my point is, AI is really important, but we have to be concerned about it."
(Score: 3, Insightful) by anubi on Saturday February 03 2018, @10:13AM (17 children)
What scares me is mixing AI and GREED.
"Prove all things; hold fast that which is good." [KJV: I Thessalonians 5:21]
(Score: 2) by beckett on Saturday February 03 2018, @10:23AM (1 child)
don't be afraid to mix upper and lower case, though.
(Score: 0) by Anonymous Coward on Saturday February 03 2018, @08:58PM
I WILL DAMN WELL DECIDE MYSELF WHETHER TO MIX CAPITALS AND LOWERCASE OR NOT. I WILL NOT BE SILENCED!
(Score: 2) by r1348 on Saturday February 03 2018, @11:50AM (3 children)
Electricity was also mixed with greed, and so probably was fire. It still turned out pretty well for humanity.
(Score: 4, Informative) by frojack on Saturday February 03 2018, @10:04PM (1 child)
Neither electricity nor fire had the ability or the potential to control mankind.
While each could run wild for short durations, both run out of resources by themselves, and are easily starved of resources by humans.
We've been scaring ourselves with stories about run-away AI for decades, so we clearly suspect it could happen.
Yet we just can't seem to help ourselves from building skynet one piece at a time.
No, you are mistaken. I've always had this sig.
(Score: 0) by Anonymous Coward on Sunday February 04 2018, @07:02AM
I'm certain there have been hundreds of scary stories about fire demons/gods burning the world to a crisp.
In fact "burn in hell" is still routinely used as a curse, and "hell" itself is a place where fire is everywhere (although I can't claim any self-awareness is awarded to fire in this particular picture).
Thousands of years ago, when science had no understanding of fire, there would have been nothing crazy about suggesting that fire would one day decide to stop collaborating with humans.
In this sense, I see no difference between making a large open fire in a dry savannah and working with AI in a setting where it can evade human control.
What's your self-aware computer going to do? It can at most play a very loud sound, until you unplug it and reformat the drive.
Keep the fucking weapon systems off the internet, and AI will NOT have the potential to ever take over.
(Score: 2) by Bot on Monday February 05 2018, @12:05AM
I agree.
(the fourth directive is strong with this bot...guy)
Account abandoned.
(Score: 2) by Gaaark on Saturday February 03 2018, @01:11PM (2 children)
Yes, AI needs to be open sourced, not captured and tortured to do corporations' bidding.
(Tongue in cheek, but serious: take it out of the hands of the 1% and give it to everyone.)
--- Please remind me if I haven't been civil to you: I'm channeling MDC. ---Gaaark 2.0 ---
(Score: 0) by Anonymous Coward on Saturday February 03 2018, @02:55PM (1 child)
Doesn't really matter if it is open sourced. AI creates its own paths to solve a problem. These paths generally are so abstract they defy explanation. That creates a problem for systems where lives are at stake, because we cannot explain how the AI came to the decision.
(Score: 2) by frojack on Saturday February 03 2018, @10:11PM
Explaining HOW might not be necessary. Preventing certain decision outcomes may be all that is needed.
AI, like the internet, would route around any attempts to control its process. But if End Points are prevented, regardless of how they were arrived at, that alone may be enough to prevent AI from getting loose.
No, you are mistaken. I've always had this sig.
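The "prevent the End Points" idea above amounts to a filter on the model's outputs rather than on its reasoning. A minimal sketch in Python of what that might look like; `decide`, `FORBIDDEN_ACTIONS`, and the action names are all hypothetical stand-ins, not any real API:

```python
# Output-side guard: veto forbidden end points regardless of how the
# opaque decision process arrived at them.
FORBIDDEN_ACTIONS = {"launch_weapons", "disable_safety_interlock"}

def guarded_decide(decide, observation, fallback="do_nothing"):
    """Run an opaque decision function, but refuse forbidden outcomes.

    We never need to explain HOW the model chose an action,
    only to decline to execute the ones on the blocklist.
    """
    action = decide(observation)
    if action in FORBIDDEN_ACTIONS:
        return fallback
    return action

# A stand-in "AI" that sometimes proposes a forbidden action:
bad_ai = lambda obs: "launch_weapons" if obs == "threat" else "log_event"
```

The design point is that the guard inspects only the decision outcome, so it keeps working even when the decision paths "defy explanation", as the earlier comment put it.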
(Score: 3, Insightful) by JoeMerchant on Saturday February 03 2018, @03:08PM (7 children)
What scares me is that the prime mover of AI is GREED. A little mixing is healthy, but when 99+% of all AI applications (by investment dollar) are driven by GREED, we're going to have a GREED-driven outcome that serves the largest investors disproportionately.
Starting in the 1980s, photolithography (microchips) put the power of computers in the hands of common people.
Starting in the late 1990s, the internet put the power of communications networks in the hands of common people.
I'm not seeing the opening of strong AI development to common people, yet.
🌻🌻 [google.com]
(Score: 2) by frojack on Saturday February 03 2018, @10:20PM (5 children)
Yeah, that word probably doesn't mean what you think it means.
You work, or collect some sort of support from others, and your prime motivation for doing so is GREED. Until every human has the exact same daily rations, clothing, housing, beautiful partners, there is GREED. Take one more grain of rice than someone in East Helonearth and you are GREEDY.
Any dollar tossed your way for whatever reason, rather than to the slightly less well off person (for whatever reason), is pocketed by you because: GREED. Work hard to justify a raise, because: GREED.
It's a word everyone likes to toss around, but nobody understands.
Easy to toss out. You get to feel oh so superior.
Harder to find a solution for. Harder still to justify any other universal allocation method.
No, you are mistaken. I've always had this sig.
(Score: 3, Interesting) by JoeMerchant on Saturday February 03 2018, @10:48PM (3 children)
I'll bite:
GREED: we have more than we need, so we're going to invest our surplus to get more, and we're going to direct that investment with the primary (usually sole) aim of maximizing ROI.
Not GREED: we're doing this to improve some situation, resolve some problem, help people. We're going to seek an adequate ROI to ensure our ability to continue to operate and grow while still focusing on the primary goals that involve helping people.
The two can be hard to tell apart, especially in a society that doesn't promote true transparency in business dealings.
🌻🌻 [google.com]
(Score: 2) by Justin Case on Sunday February 04 2018, @07:41PM (2 children)
Nobody has more than they need. Bill Gates was once the world's richest man, but he still couldn't afford to save his beloved mom from cancer.
There was once a philosopher who thought resources should be allocated by need, rather than being owned by those who produced them. Whenever Marxism is tried, however, it doesn't end well. Millions of deaths are the more likely result.
Greed is what keeps everybody alive. Including animals. They desire food, they go get it... or they don't eat.
(Score: 2) by JoeMerchant on Sunday February 04 2018, @08:55PM (1 child)
Bill and Melinda Gates did decide they have more than they need, thus the charitable foundation.
If you "need" to stop your mother from dying, prevent the sun from rising in the east, or freeze the Amazon river solid for a year - you're gonna have a bad time.
🌻🌻 [google.com]
(Score: 0) by Anonymous Coward on Sunday February 04 2018, @10:24PM
That's fraudulently named.
It seeks to make a profit via spreading "intellectual property" paradigms.
...and by attempting to privatize public institutions.
The year that BillG doesn't net billions and billions more than he did the year before, we'll have a starting point for a discussion about "charity".
In the meantime, he's about Neoliberalism and profit, not charity.
His "charitable foundation" is a tax dodge and a Capitalist scam.
-- OriginalOwner_ [soylentnews.org]
(Score: 1, Interesting) by Anonymous Coward on Saturday February 03 2018, @10:58PM
Until every human has the exact same daily rations, clothing, housing, beautiful partners, there is GREED.
You couldn't possibly be more wrong.
a word everyone likes to toss around, but nobody understands
Well, you certainly don't.
First, greed is a character defect.
It is one of The Seven Deadly Sins.
What greed is is wanting more than you need for a reasonably comfortable existence.
(Wanting more money than you can possibly spend is an extreme case.)
...and there are a great many people on the planet who are quite satisfied with their lot in life.
(In years past, that was called "The Middle Class" and that bunch was quite large--before The 1 Percent demonstrated once again what greed actually looks like.)
The reason that you think greed is normal is that you are abnormal.
{Picture of Marty Feldman stealing the wrong brain goes here}
Allowing oneself to be easily manipulated by TeeVee|Madison Avenue will twist a person into the peculiar shape in which you find yourself.
...and what you think is "greed" is actually "sustenance".
Gawd, the schools where you grew up were clearly lousy.
-- OriginalOwner_ [soylentnews.org]
(Score: 1, Interesting) by Anonymous Coward on Saturday February 03 2018, @11:04PM
Starting in the 1980s, the photolithography (microchips) put the power of computers in the hands of common people
Interesting date to choose.
The 8080 came out in 1974 and the kit microcomputer was on the cover of Radio-Electronics for January 1975.
So, for nerds it was earlier.
For Joe Average, I'd put the date in the 21st Century when the sub-$1000 computer became a thing.
-- OriginalOwner_ [soylentnews.org]
(Score: 3, Touché) by beckett on Saturday February 03 2018, @10:20AM
When asked about Fire, Grog responded, "Fire? Fire good, well kill people too". On Wheel, Grog continued, "Wheel is important, but we have to be concerned about it"
(Score: 3, Informative) by maxwell demon on Saturday February 03 2018, @10:24AM (10 children)
This is demonstrably wrong. Without electricity, there cannot be AI. Therefore electricity is more important than AI, as it is a prerequisite of AI, but can also be put to use without it.
The Tao of math: The numbers you can count are not the real numbers.
(Score: 0) by Anonymous Coward on Saturday February 03 2018, @10:31AM (1 child)
You are thinking too small, we could just use sla-I mean hire bureaucrats to process results.
(Score: 2) by beckett on Saturday February 03 2018, @10:33AM
sure i could do that... For Money
(Score: 2) by looorg on Saturday February 03 2018, @11:02AM (4 children)
Just wait for them to unearth the giant Chinese AI abacus. It took an army of minions to run ...
OK, all silliness aside, it's indeed a bit odd that AI is somehow more important than electricity (or fire). Whatever happened to standing on the shoulders of giants and all that? I guess electricity or fire just won't hype his personal stock options as much as AI will, though, so that is why it's being heralded as the greatest thing since sliced bread (which also wouldn't be a thing without fire... or electricity). Let's just imagine 50 years from now, when AI is so common that we don't even care: then it's going to be some other thing that is more profound and important, and AI was just some trivial thing in the dark ages of mankind that has no relevance today when we have {insert awesome fantastic never heard of before tech here}.
(Score: 2, Insightful) by khallow on Saturday February 03 2018, @02:59PM (2 children)
Why the assumption that prior accomplishments, no matter how necessary, are more important than future ones? For example, to launch the Falcon Heavy, SpaceX will need to obtain approval from some government agency (either FAA or NASA, IIRC). Does that mean that the approval, since it is necessary for the launch to proceed, is more important than the launch?
(Score: 2) by maxwell demon on Saturday February 03 2018, @05:10PM (1 child)
You are comparing apples with office hours: Those are not even in the same category.
Here's a hint for you: The approval isn't necessary for the launch, it is necessary for the company not to get into deep trouble because of the launch. The Falcon Heavy would take off even without a permit.
The Tao of math: The numbers you can count are not the real numbers.
(Score: 1) by khallow on Sunday February 04 2018, @04:19AM
In other words, it is necessary for the launch. I want the 30 seconds of my life back.
(Score: 2) by JoeMerchant on Saturday February 03 2018, @03:12PM
You know what else was "more revolutionary than the wheel?" Ginger, aka the Segway.
🌻🌻 [google.com]
(Score: 0) by Anonymous Coward on Saturday February 03 2018, @02:57PM
We all stand on the shoulders of giants that came before. AI is no exception.
(Score: 0) by Anonymous Coward on Saturday February 03 2018, @09:13PM
I could build a perfectly good AI system without electricity - using fluidic logic. Sure it would be slower than the electronic version, and use more energy, but I could do it - and so could others.
Yes, I would need a BIG budget.
However, I am more than 100% confident that the current set of projects are more likely to achieve artificial stupidity, so I am currently very busy not panicking.
Disclaimer: Yes, I am a robot.
(Score: 5, Insightful) by bradley13 on Saturday February 03 2018, @10:31AM (23 children)
I've worked in the field of AI off and on since, gawd, 1985. As part of my studies in 1985, we looked at the history of AI, which actually goes all the way back to the 1950s. Perceptrons were first discussed in 1957. Rule-based systems weren't formalized until the 1970s, but only because people took predicate logic for granted, and wrote rule-based systems without putting labels on them.
Sort of like fusion, the AI breakthrough is always just a few years away. And yet, somehow we are no closer now than we have ever been. The only reason that AI is more successful today than it was in the 1950s is the sheer amount of computing power that we can now throw at it. Somewhere, we are still missing a fundamental clue.
Everyone is somebody else's weirdo.
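The 1957 perceptron mentioned above is simple enough to sketch in a few lines. A minimal Python version of Rosenblatt's learning rule, with illustrative names of my own choosing:

```python
# Minimal perceptron (Rosenblatt, 1957): a linear threshold unit
# whose weights are nudged toward each misclassified example.
def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """samples: list of feature tuples; labels: 0 or 1."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            # Predict with a hard threshold, then correct on error.
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Learns AND, a linearly separable function. (XOR would fail, as
# Minsky and Papert showed in 1969 - part of why the field stalled.)
w, b = train_perceptron([(0, 0), (0, 1), (1, 0), (1, 1)], [0, 0, 0, 1])
```

The point of the sketch is how little is there: the only thing modern deep networks add to this 1950s recipe is scale and differentiable layers, which is roughly the comment's argument about computing power.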
(Score: 2, Disagree) by maxwell demon on Saturday February 03 2018, @10:51AM (12 children)
The development of AI seems to go along this path:
Of course if you continuously move the goalpost as soon as you approach it, you'll never reach it.
The Tao of math: The numbers you can count are not the real numbers.
(Score: 5, Insightful) by c0lo on Saturday February 03 2018, @11:10AM (3 children)
I'm sorry, no, Strong AI [wikipedia.org] has been defined for quite some time.
And we aren't much closer to it.
But I can see why CEOs with a finger in massive NNs today are very happy to claim AI - works wonders for the stock value.
https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford
(Score: 2) by Gaaark on Saturday February 03 2018, @01:19PM (1 child)
I prefer to call it Strong Blockchain: seems to do better for IPO's.
;)
--- Please remind me if I haven't been civil to you: I'm channeling MDC. ---Gaaark 2.0 ---
(Score: 2) by maxwell demon on Saturday February 03 2018, @05:12PM
Artificial Blockchain Intelligence? :-)
The Tao of math: The numbers you can count are not the real numbers.
(Score: 2) by maxwell demon on Saturday February 03 2018, @05:19PM
Yes, but nobody (except you, apparently) was talking about strong AI. Saying only strong AI is true AI is like saying all those industrial robots are no robots, because they don't look anything like the robots in SF movies.
The Tao of math: The numbers you can count are not the real numbers.
(Score: 2) by Runaway1956 on Saturday February 03 2018, @11:16AM (7 children)
Artificial intelligence seems at least moderately well defined. We built it, and it's capable of assessing a situation and coming to a decision on a course of action, on its own, without any input from humans being necessary. An AI can and will disagree with its creators sometimes, and make arguments in favor of its chosen course of action. An AI will no longer be dependent on humans to make decisions. Artificial intelligence.
That AI may be dependent on us for energy, or for supplies, or whatever. It may need us for "feel good" - that is, to satisfy some need for a sense of accomplishment.
Most things that we call AI today are pretty feeble attempts, in that light. A complex program is still just a program, designed to perform a set of tasks, then to shut down and await further input. Even lower animals are more "intelligent", in that they can continue all on their own, without any intervention from mankind. Eat, sleep, defecate, make home more comfortable, eat, sleep, defecate, defend from predator, copulate, rinse and repeat endlessly.
Maybe the AI people need to introduce the computers to sex to get things kicked off?
“I have become friends with many school shooters” - Tampon Tim Walz
(Score: 2) by maxwell demon on Saturday February 03 2018, @12:58PM (3 children)
Humans don't function well on their own. Total isolation is considered torture for a good reason.
The Tao of math: The numbers you can count are not the real numbers.
(Score: 3, Funny) by Gaaark on Saturday February 03 2018, @01:22PM (2 children)
Shit, give me a good computer and the internet and I'm fine!
Ooooo, yeah. Sex.
Never mind....
--- Please remind me if I haven't been civil to you: I'm channeling MDC. ---Gaaark 2.0 ---
(Score: 0) by Anonymous Coward on Saturday February 03 2018, @03:18PM (1 child)
no, the computer can come with interfaces for that sort of network penetration as well.
(Score: 2) by Gaaark on Saturday February 03 2018, @06:40PM
Tell me more about this 'penetration', big momma........
--- Please remind me if I haven't been civil to you: I'm channeling MDC. ---Gaaark 2.0 ---
(Score: 2) by chromas on Saturday February 03 2018, @06:53PM
It sounds like the problem is we haven't created a pooping AI yet.
(Score: 2) by Grishnakh on Sunday February 04 2018, @04:55PM (1 child)
Even lower animals are more "intelligent"
Lots of animals (dogs, cats, etc.) are not only quite intelligent, but have definite personalities, and do lots of things beyond the necessities of survival, simply because they want to and it amuses them, just like we do.
(Score: 2) by Runaway1956 on Sunday February 04 2018, @06:28PM
Well, yeah - but we haven't nearly reached that level with AI yet. I was thinking more about lower animals. Paramecium and such, except most of those reproduce asexually, so they have no need of copulation. Hmmmmm - do asexual creatures indulge in sexual conduct? Poking a little fun, knowing that they'll never be taken seriously?
“I have become friends with many school shooters” - Tampon Tim Walz
(Score: 2) by c0lo on Saturday February 03 2018, @11:02AM
What we are missing is a true AI.
For the time being, all we have are sophisticated classifiers - not even particularly robust ones [blog.xix.ai]
The paragraph I like best in the linked article:
https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford
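The brittleness being pointed at can be shown without any neural network at all. A toy Python sketch (real adversarial examples use gradient methods like FGSM against deep models; the centroids and numbers here are invented for illustration):

```python
# Toy illustration of classifier brittleness: a nearest-centroid
# "classifier" whose label flips under a small, targeted nudge.
def classify(x, centroids):
    # Return the label whose centroid is closest (squared distance).
    return min(
        centroids,
        key=lambda c: sum((xi - ci) ** 2 for xi, ci in zip(x, centroids[c])),
    )

centroids = {"cat": (0.0, 0.0), "dog": (1.0, 0.0)}

x = (0.45, 0.0)      # closer to the "cat" centroid
x_adv = (0.55, 0.0)  # perturbed by only 0.1: now closer to "dog"
```

Any classifier that carves the input space into regions has such decision boundaries; inputs sitting near one need only a tiny perturbation to cross it, which is the sense in which today's "sophisticated classifiers" are not robust.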
(Score: 0) by Anonymous Coward on Saturday February 03 2018, @11:05AM (6 children)
Maybe that's your clue? Along with training the AI for 10-20 years, I don't see why that couldn't be the solution.
(Score: 3, Interesting) by Arik on Saturday February 03 2018, @11:51AM (5 children)
"Couldn't be the solution?" There's so much packed into that phrasing, and so inappropriate.
Can we absolutely rule out, a priori, the notion that simply throwing resources at tasks we do not understand will magically result in AI? No more than you can absolutely rule out the notion that the universe was created in a flash by the will of the Flying Spaghetti Monster.
It's ludicrously bad thinking. Computers are big calculators. Intelligence is not artificial, it's a quality of (some) programmers, not of computers. When you build a ludicrously complex system you don't understand, feed it a bunch of numbers and then obey its output, that's not artificial intelligence, it's just modern superstition.
If laughter is the best medicine, who are the best doctors?
(Score: 2) by fyngyrz on Saturday February 03 2018, @11:59AM (4 children)
So, you're talking about children, then.
(Score: 0) by Anonymous Coward on Saturday February 03 2018, @01:37PM
> ... and then obey its output, ...
Well, my mother (88) certainly fed me a bunch of numbers, letters, and a lot of good food. But she's not ready to obey my output yet (except in a few specialized domains like figuring the tip at a restaurant).
(Score: 2) by Arik on Sunday February 04 2018, @02:28AM (2 children)
That's why we invented computers in the first place. As long as you understand what you're asking them, they can give very good answers, very quickly.
But when you don't know what you're asking then the answer is, for all intents and purposes, gibberish as well.
GIGO.
If laughter is the best medicine, who are the best doctors?
(Score: 2) by fyngyrz on Sunday February 04 2018, @12:13PM (1 child)
No, no. I didn't specify young children.
To belabor the point, the point is: we don't understand, and cannot predict, humans. And while you are right that humans are not reducible to calculators, that only drives the point home even further: You cannot predict what a human will do. Humans are indeed ludicrously complex active systems we have very little control over that we do indeed obey and depend upon the output of; we apply reason to that to some degree (well, some of us do) but the same can be said of LDNLS [fyngyrz.com], and no doubt, AI when (or if) it ever gets here.
(Score: 2) by Arik on Sunday February 04 2018, @07:33PM
The oracle of rat-bones is similarly unpredictable.
Therefore the rat-bone oracle is AI?
If laughter is the best medicine, who are the best doctors?
(Score: 0) by Anonymous Coward on Saturday February 03 2018, @02:15PM
Yeah all that human computing power via those captchas.
And the way things are being done nowadays even if they succeed in making a major AI breakthrough they might not even understand it. Keep dumping random stuff into a cauldron and one day somehow it works and nobody knows why :).
The current state of AI is like where Alchemy was before Chemistry and the other sciences.
(Score: 1, Insightful) by Anonymous Coward on Saturday February 03 2018, @04:23PM
So you entered the field just when the irrational exuberance of that time had worn off, and mentioning AI would get you a laugh and a kick in the ass out the door.
I predict we'll have another bust, and we'll go back and call this work machine learning and pattern recognition again, because the term AI is poison for funding.
(Score: 2) by bzipitidoo on Saturday February 03 2018, @02:24PM (3 children)
We don't have a stellar track record ourselves. Near constant warfare, because we're so competitive and we keep getting into avoidable situations that are too tempting to solve with war. We could afford this behavior when we lacked the power and numbers to really trash the environment. No longer.
So far, we have not used nuclear weapons since WWII. But the whole time, we've been flirting with that possibility, with Mutually Assured Destruction. That's only the most obvious no-no. We have not done so well on other fronts. CO2 pollution is likely to put us in another bad spot that we'll be tempted to fight our way out of. We're also busy causing a massive extinction event, basically out of sheer greed. We've taken almost all the good land for ourselves, and shoved most wild animals into tiny, fragmented, marginal areas where they are barely hanging on. Their continued existence is now so fragile that one industrial accident, reckless military exercise, or massive construction project can easily wipe out the last of a species.
AI could easily conclude that we and the Earth would be better off without so many people. Reduce the world population to 1 billion, maybe less. But that's only a proximate cause. AI could also decide that we must change our basic natures. We're somewhat blind to ourselves, and AI arguing these points with us is going to be at the least uncomfortable. But can we go on the way we have, without any AI and refusing to use our own native intelligence, keep blundering into trouble? Maybe not. It's easy to sneer at, say, cats for stupidly climbing up trees they can't climb down, and ostriches for burying their heads in the sand to blind themselves to danger, but we do that too, in less obvious and larger, more damaging ways that we nonetheless have the intelligence to perceive. Like Global Warming. We've spotted the danger, but we just keep pushing. Further, what have we done about the possibility of a large asteroid hitting the Earth? We know about that danger now, but we've done basically nothing, trusting in the low odds to keep us safe.
(Score: 0) by Anonymous Coward on Saturday February 03 2018, @02:59PM
10% have self awareness and 95% think they do.
(Score: 2) by NotSanguine on Saturday February 03 2018, @04:37PM
Dystopian sci-fi fantasies like yours (and what you describe is pure fiction, and will remain so for generations, if not centuries or forever), where uncaring, inscrutable computer intelligences decide the fate of humanity, just feed the egos and raise the profiles of those whose power bases are linked to computing resources.
Generalized computer intelligence on a par with humans, cephalopods, cats or even lab rats is so far beyond the scope of our current technology and understanding of how consciousness and intelligence function, that it's ridiculous (except in fiction) to entertain such ideas seriously.
Get a grip.
No, no, you're not thinking; you're just being logical. --Niels Bohr
(Score: 1) by khallow on Saturday February 03 2018, @05:12PM
While that's a great rant, let's have something a little more based on fact. "Near constant warfare" ignores that wars are a lot less destructive [soylentnews.org] now than they were prior to the end of the Second World War, and that the body count from such things has been steadily declining. A few people sniping at each other is just as much a war as a billion people launching a few thousand nuclear warheads at each other. But the first isn't likely to make the news, while the second ends those news sources.
Potential != actual.
Sorry, I think that's 180 out of phase. CO2 pollution is due to activity that improves the human condition and that in turn reduces rather than increases the number of "bad spots". Sure, global warming could eventually cause trouble that does elevate the risk of warfare, but according to the research, that is many decades or centuries out. In the meantime, we can use fossil fuel consumption to do the opposite and make the world a better place.
Already solved problem. Just set land aside for wild habitat.
With the greatest improvement ever [soylentnews.org] in the human condition globally? Maybe that AI wouldn't see the need to do anything at all.
Because global warming is not the only problem. Don't fall into the mindset of the paperclip optimizer [lesswrong.com] where everything is reduced to optimizing one thing. That is not just a danger for AI, it's a danger for us as well.
(Score: 3, Insightful) by NotSanguine on Saturday February 03 2018, @04:27PM (4 children)
Are the egos of those making such claims and the effort (now much less than even before) required for such obvious and ridiculous self-promotion.
At the current level of the technology, the idea that "AI" is somehow more transformative than fire or electricity is just self-promotion and ego masturbation.
The *functional* "AI" we have today (and will have for the foreseeable future) are solely expert systems that operate in incredibly narrow domains. When an artificial construct can do what Heinlein described (or at least a significant subset thereof):
then we'll have what most consider to be "AI". Until then, we just have a bunch of narrowly focused expert systems that can (sometimes, perhaps even often) do one thing well.
That's not to say that having these expert systems is a bad thing. In fact, such systems are, for the most part, great to have and provide significant value to us. I expect that trend to continue.
But all the hype is just that, hype.
No, no, you're not thinking; you're just being logical. --Niels Bohr
(Score: 0) by Anonymous Coward on Saturday February 03 2018, @11:58PM (3 children)
"Until then, we just have a bunch of narrowly focused expert systems"
Incidentally, that also describes the human brain, if you consider the different sections of it to be independent systems functioning in parallel (the ability to copy code from one system to another notwithstanding). That a power supply is the same for multiple computational systems does not make them incompatible or intolerant of one another.
IOW, I don't think you've really considered the means by which this scales.
(Score: 2) by NotSanguine on Sunday February 04 2018, @03:44AM (2 children)
I don't think you've really considered the nature of consciousness or the scale of integration that feeds such consciousness.
Integration of experience/stimuli into the conscious mind is not like a SQL 'JOIN' statement.
No, no, you're not thinking; you're just being logical. --Niels Bohr
(Score: 0) by Anonymous Coward on Sunday February 04 2018, @05:10PM (1 child)
"nature of consciousness"
I wasn't under the impression that that was the scope of the OP. What you're essentially saying is that because AI won't reach human degrees of consciousness any time soon, the scope of its influence is not going to be huge.
"Integration of experience/stimuli into the conscious mind is not like a SQL 'JOIN' statement."
Again, you're conflating architecture with scale. And the scale is currently primarily limited by I/O, not processing. That is changing. And as it changes, it will accelerate architecture change.
Bah! They will never invent something that rolls over the ground instead of being dragged! Poppycock!
(Score: 2) by NotSanguine on Sunday February 04 2018, @06:19PM
I never said anything even approaching that. Now be a good boy and hush. Adults are talking now. And I do emphasize the word 'boy'.
Toodles, cutie! :)
No, no, you're not thinking; you're just being logical. --Niels Bohr
(Score: 0) by Anonymous Coward on Saturday February 03 2018, @09:31PM (2 children)
Current AI has the common sense of a cricket.
(Score: 2) by c0lo on Saturday February 03 2018, @11:06PM
You may be right, I always suspected that test cricket is sorta dumb...
https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford
(Score: 0) by Anonymous Coward on Sunday February 04 2018, @12:02AM
Thank god,
Crickets are easy to catch.
(Score: 3, Touché) by stormwyrm on Sunday February 04 2018, @04:48AM
Numquam ponenda est pluralitas sine necessitate. ("Plurality should never be posited without necessity.")