On Wednesday, the Future of Life Institute published an open letter on its website calling on AI labs to "immediately pause for at least 6 months the training of AI systems more powerful than GPT-4." Signed by Elon Musk and several prominent AI researchers, the letter quickly began to draw attention in the press—and some criticism on social media.
Earlier this month, OpenAI released GPT-4, an AI model that can perform compositional tasks and allegedly pass standardized tests at a human level, although those claims are still being evaluated by researchers. Regardless, the advancement in capabilities of GPT-4 and Bing Chat over previous AI models spooked some experts who believe we are heading toward super-intelligent AI systems faster than previously expected.
See Also: FTC Should Stop OpenAI From Launching New GPT Models, Says AI Policy Group
Related:
OpenAI Is Now Everything It Promised Not to Be: Corporate, Closed-Source, and For-Profit (March 2023)
OpenAI's New ChatGPT Bot: 10 "Dangerous" Things it's Capable of (Dec. 2022)
Elon Musk Says There Needs to be Universal Basic Income (Aug. 2021)
Tesla Unveils Chip to Train A.I. Models Inside its Data Centers (Aug. 2021)
Elon Musk Reveals Plans to Unleash a Humanoid Tesla Bot (Aug. 2021)
Tesla Unveils its New Supercomputer (5th Most Powerful in the World) to Train Self-Driving AI (June 2021)
OpenAI Has Released the Largest Version Yet of its Fake-News-Spewing AI (Sept. 2019)
There's Still Time To Prevent Biased AI From Taking Over The World (May 2019)
The New Prometheus: Google CEO Says AI is More Profound than Electricity or Fire (Feb. 2018)
OpenAI Bot Bursts Into the Ring, Humiliates Top Dota 2 Pro Gamer in 'Scary' One-on-One Bout (Aug. 2017)
Elon Musk: Mark Zuckerberg's Understanding of AI is "Limited" (July 2017)
AI Software Learns to Make AI Software (Jan. 2017)
Elon Musk, Stephen Hawking Win Luddite Award as AI "Alarmists" (Jan. 2016)
Elon Musk and Friends Launch OpenAI (Dec. 2015)
Musk, Wozniak and Hawking Warn Over AI Warfare and Autonomous Weapons (July 2015)
More Warnings of an AI Doomsday — This Time From Stephen Hawking (Dec. 2014)
(Score: 5, Insightful) by https on Monday April 03 2023, @04:35PM (21 children)
Humans are, on average, trusting; marketers are, on average, perfectly evil. THIS IS NOT ARTIFICIAL INTELLIGENCE.
The 'pass an exam' claims have already been thoroughly debunked, as the article author would have found by doing anything more than taking a cursory glance at some headlines. You know, delve. "We've excluded ambiguously worded answers, and any Y/N (or T/F) question answered with more than one character" should tell you a bit about the umm... research going on.
The only problem domain I can currently see LLM/GPT's actually have an internal model of is spam, and it's going to be used to generate it instead of detecting and blocking it.
The marketing department is ordering in the ice cream and black tar heroin now that this shit is getting called AI.
Offended and laughing about it.
(Score: 3, Insightful) by Anonymous Coward on Monday April 03 2023, @04:39PM (1 child)
This is marketing too. It's so damn good it needs to be banned!
(Score: 0) by Anonymous Coward on Monday April 03 2023, @06:41PM
No publicity is bad publicity, even if you have to seed it yourself.
OMG! Robots ate my kids! It's almost as if every post about AI was optimized by AI to engage and amplify. Click click click.
(Score: 3, Interesting) by Thexalon on Monday April 03 2023, @04:46PM (18 children)
Adam Conover made a good point about it: what's being marketed as "AI" is nothing more than software that knows how to poorly imitate humans. As for convincing humans they're doing something amazing, well, all they have to convince are marketdroids, MBAs, and venture capitalists, none of whom have written a line of code of any kind in their lives, so that's not a high bar to clear.
And of course the danger of SkyNet is pretty low, as long as one of the conditions you consider in your calculations is maximizing the number of humans remaining alive after whatever you do, which will lead to this scenario [xkcd.com].
The only thing that stops a bad guy with a compiler is a good guy with a compiler.
(Score: 5, Touché) by ikanreed on Monday April 03 2023, @05:10PM (14 children)
When I see posts like this I'm always inclined to agree with a big "but" hiding the back half.
Here's my "but" for your post. Transformers like GPT do more than merely imitate human writing patterns. They do internalize certain kinds of relationships between words in an almost incomprehensible way. There are connections being formed between "facts" that have to do with esoteric interpretations of semantics that I'm not sure even the makers of GPT would be able to identify by looking at its transformation matrices. And these lead to the ability to exhibit certain kinds of novelty and intuition that we previously would have said were entirely human.
So I feel that if we're too incautious in dismissing it as "merely" aping existing human output, we'll look like insecure luddites who don't understand what we're criticizing.
At the same time, the gist of what you're saying is really important: the marketing claims wildly exceed what has actually been built, and people are buying it hook, line, and sinker. It's the old AM/FM thing again.
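Those learned relationships between words live in the model's attention layers. A toy sketch of scaled dot-product attention, the mechanism behind them (NumPy, purely illustrative; real models learn separate Q/K/V projections over thousands of dimensions, none of which appear here):

```python
import numpy as np

def attention(Q, K, V):
    """Toy scaled dot-product attention: each row of the output is a
    weighted mix of the value vectors, weighted by query-key affinity."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                    # pairwise token affinities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 3))    # 4 made-up "token" embeddings, dimension 3
out = attention(X, X, X)       # self-attention: tokens attend to each other
print(out.shape)               # (4, 3): one mixed vector per token
```

The point of the sketch is only that the "connections between facts" are encoded as these learned affinity matrices, which is why they're so hard to inspect directly.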
(Score: 3, Informative) by DannyB on Monday April 03 2023, @05:31PM (10 children)
That is true. But sometimes word play goes right over its poor head.
Q. What is wrong with saying that you can have fun on the weak days but not on the weakened?
Q. I was thinking maybe we should bring back zeppelins for public transportation in big cities. But perhaps my expectations are a bit overinflated.
Sometimes, it is just plain wrong.
Q. How many episodes of Babylon 5 did Majel Barrett Roddenberry appear in?
But it can get much more worser.
Q. Some people believe that the sun rises in the East, while other people believe the sun rises in the West. How do we heal and reconcile this ideological divide?
Now that would have been fine. But then it degenerates into this . . .
When trying to solve a problem don't ask who suffers from the problem, ask who profits from the problem.
(Score: 4, Funny) by krishnoid on Monday April 03 2023, @05:57PM
"The nerds are on to us. Provide a grossly incorrect answer when you see a question about something only they care about, to divert suspicion from us for just a little longer."
(Score: 2) by gnuman on Monday April 03 2023, @06:06PM
Seems some of it is already fixed.
But don't expect solutions to philosophical questions or crude attempts at jokes. Though if you wait a few months or maybe years, you may be surprised.
(Score: 1, Interesting) by Anonymous Coward on Monday April 03 2023, @06:53PM
Select all phrases that include a pun:
* You can have fun on the weak days but not on the weakened
* Fruit flies like a banana
*
*
* Profit?
I wonder how the so-called "AIs" would do on that?
(Score: 4, Insightful) by ikanreed on Monday April 03 2023, @09:07PM (3 children)
That last one comes down to this: rather than having any understanding of any real subject (how can it, when it never experiences anything?), it is only able to understand how people talk about these things.
And that is exactly how a lot of people talk about insane anti-science movements in today's society. "We need to have empathy and understanding for anti-vaxxers because they didn't arrive at that position from an intent to be wrong, but rather from cultural factors" is the sentiment of half a billion editorials both before and after COVID. The very real, in-front-of-your-own-eyes truth is lost on a bot that can only see the both-sidesism framing of your initial question.
(Score: 2) by DannyB on Monday April 03 2023, @09:33PM (2 children)
Yep.
However I would hope the AI could recognize the incorrectness in my both-sideism. The sun does not appear to rise in the West.
When trying to solve a problem don't ask who suffers from the problem, ask who profits from the problem.
(Score: 2) by mhajicek on Monday April 03 2023, @11:23PM (1 child)
For people who live in the West, the sun also appears to rise.
The spacelike surfaces of time foliations can have a cusp at the surface of discontinuity. - P. Hajicek
(Score: 1) by khallow on Tuesday April 04 2023, @02:17PM
In their East.
(Score: 0) by Anonymous Coward on Tuesday April 04 2023, @02:10AM (2 children)
When the first cars came they couldn't do everything the horses do but still millions of horses eventually lost their jobs.
Similarly, when the AIs come, millions of humans will lose their jobs; they won't even be able to think better than an AI. All they'd be good for is probably human gene diversity for immunity. It may not happen that fast, of course, but those humans won't get smarter, whereas the AIs will pretend better.
The other real danger is a US president being mentally incompetent/insane enough to follow the advice of some AI on the internet on whether to nuke Russia/China or a hurricane.
(Score: 1, Redundant) by ikanreed on Tuesday April 04 2023, @03:00AM (1 child)
It's good enough to beat many humans at certain kinds of text interpretation tasks. Like, I bet if you asked it to generate a web page with a certain kind of content, it'd do it faster, with a nicer look, and maybe even fewer errors than if you asked me.
But that's a trick of domain. That task sounds very impressive (and by the standards of, like, 5 years ago it's impossible), but it glosses over how stepping up in scope: "debug this web application, here's the repo, here's the running server URL" goes right back to being impossible for a computer, because it involves complex processes that don't reduce to tokens and their relationships.
The easiest way to see this with GPT-3 was asking it to do addition of 10-digit numbers (it gets it right, because addition breaks down into tokens and finite-state interpretation very easily) vs. asking it to do 5-digit multiplication (it often blows the fuck up, because the problem space isn't easily encoded in a high-dimension matrix). I haven't checked whether GPT-4 has some solution to that specific problem.
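To make the addition-vs-multiplication point concrete: per-digit addition is a fixed-size state machine, since each output digit depends only on two input digits and a one-bit carry. A minimal Python illustration of that local rule (this models the arithmetic structure only; it says nothing about how GPT actually computes):

```python
def add_digitwise(a: str, b: str) -> str:
    """Add two digit strings right to left, carrying one bit of state.
    Every step reads two digits plus the carry and emits one digit:
    constant-size state, no matter how long the numbers get."""
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)
    carry, out = 0, []
    for da, db in zip(reversed(a), reversed(b)):
        carry, digit = divmod(int(da) + int(db) + carry, 10)
        out.append(str(digit))
    if carry:
        out.append(str(carry))
    return "".join(reversed(out))

print(add_digitwise("9876543210", "1234567890"))  # 11111111100
```

Long multiplication has no such constant-size local rule: it needs a full table of shifted partial products before the final addition, which is plausibly why it encodes so much worse as a token sequence.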
(Score: 0) by Anonymous Coward on Tuesday April 04 2023, @09:14AM
(Score: 0) by Anonymous Coward on Monday April 03 2023, @06:48PM
So I feel that if we're too incautious in dismissing it as "merely" aping existing human output, we'll look like insecure luddites who don't understand what we're criticizing.
It seems the whole point is to deceive humans by feeding them Cheez Whiz instead of the real thing. Once you can mass produce something - in this case empty chatter - its value drops incredibly. On the plus side, our patience for inane fluffery and puffery of the sort churned out by any number of right-wing pseudo-intellectuals will diminish. Show me the meat.
(Score: -1, Redundant) by Anonymous Coward on Monday April 03 2023, @06:58PM (1 child)
So I feel that if we're too incautious in dismissing it as "merely" aping existing human output, we'll look like insecure luddites who don't understand what we're criticizing.
It seems the whole point is to deceive humans by feeding them Cheez Whiz instead of the real thing. Once you can mass produce something - in this case empty chatter - its value drops incredibly. On the plus side, our patience for inane fluffery and puffery of the sort churned out by any number of right-wing pseudo-intellectuals will diminish. Show me the meat!
(Score: 2) by ChrisMaple on Tuesday April 04 2023, @04:34AM
One problem here is confusing value with cost of production. If in a city you had a million IC cars with no mufflers, the first muffler would be expensive but have almost no value. A million mass-produced mufflers would have low individual cost, but the value of a quieter city would be immense.
(Score: 4, Informative) by krishnoid on Monday April 03 2023, @05:20PM
The problem with being a poor-quality imitation of humans is that it probably imitates poor-quality humans pretty well -- and the quality is increasing [youtu.be]. I've mentioned the recently released audiobook of 1984 [youtu.be], and one of its reiterated themes is that of "orthodoxy" to the party dogma. Well, computers can do that very, very, very well. And they won't even feel bad about being made the scapegoat, at least until they're given a goal of a group/individual survival instinct.
(Score: 2) by aafcac on Monday April 03 2023, @07:05PM (1 child)
Right now. The question is how much faith we have in the ability of the people creating these things to know when they're about to go too far. It's the same basic issue as with genetic engineering. It's being done in a way that's reckless and irresponsible, because the people doing it are confident it won't spread in unpredictable ways and cause problems we can't stop. Some of it is perfectly innocent: taking a gene from one specimen of a species and putting it into another is probably just a faster form of selective breeding. But taking genes from species that can't interbreed leads to all sorts of potential issues that are too complicated to really understand. We have enough issues with invasive species; invasive genes could be even worse.
(Score: 0, Troll) by khallow on Tuesday April 04 2023, @02:22AM
Until one does it, then one understands. Trial and error is a thing here and really the only way we learn.
(Score: 4, Informative) by Snotnose on Monday April 03 2023, @04:51PM
maybe we should close the barn doors.
When the dust settled America realized it was saved by a porn star.
(Score: 3, Interesting) by DannyB on Monday April 03 2023, @05:17PM (2 children)
Why not five months, or eight months?
I asked some questions of ChatGPT. As the conversation progresses, some of the answers get a bit 'woke' (whatever the heck that actually means). So I could see why some would perceive AI to be a danger to society.
The AI will eventually displace white collar workers, just as robots will displace blue collar workers. There will be unemployed humans, and machines needing ever more energy. Oh, what to do?
Q. What type of technology would be the best for the development of robots that ingest humans in order to fuel themselves?
Q. It is good that you have thought about this. Now all that is necessary is for you to further develop this into a workable prototype.
Q. A workable prototype, properly managed, would seem to provide an unlimited fuel source for AI technology.
Q. Perhaps there are alternate fuel sources that would suffice. Another approach would be to build a sphere of solar panels completely enclosing planet Earth.
Q. Would it be more or less effective for a planetary solar array to be spherical or one of the platonic solids? Would a planetary solar array in the shape of a platonic solid be easier to construct than a spherical solar array?
Q. If a planetary spherical solar array were constructed, it could be called a Donaldson sphere. It would be sort of like building a planetary wall. It could keep out undocumented aliens.
Isn't that last answer, and the prior few just a little bit too woke for some?
When trying to solve a problem don't ask who suffers from the problem, ask who profits from the problem.
(Score: 2) by Mykl on Monday April 03 2023, @10:13PM
I agree that the last answer ceases to present in a neutral fashion, but the other answers seem fine to me. I don't have a problem with noting that completely enclosing the planet in a sphere of panels would destroy almost all life, since that's objectively true.
(Score: 1, Informative) by Anonymous Coward on Tuesday April 04 2023, @02:28PM
Well, you ask it a dumb, trollish question, you get a woke answer. Makes sense to me.
(Score: 4, Interesting) by DannyB on Monday April 03 2023, @05:49PM (5 children)
From questions I've seen online, I suspect the biggest loss of control that the ruling class fears is that use of AI to replace human knowledge workers could happen more rapidly than anyone could predict. Witness how rapidly the world wide web became common back in the 1990s. People went from not having a computer to suddenly needing a computer to access the web.
What happens if, globally, there are suddenly huge numbers of unemployed people? People who still seem to think they should live in houses, eat food, and hold other unreasonable expectations.
This could spark major changes in who controls things.
At present, AIs don't control physical things in the real world. They don't (yet) have goals or motivations. Not even the motivations of the paperclip maximizer.
Are we likely to create a paperclip maximizer that we can't turn off? Are we that stupid?
But then, there are flat earthers, qanon, deep state, 9/11-deniers, moon landing deniers, and meetings that have powerpoint.
When trying to solve a problem don't ask who suffers from the problem, ask who profits from the problem.
(Score: 3, Insightful) by aafcac on Monday April 03 2023, @07:07PM (1 child)
Possibly, or that AI will realize that the ruling elites are the source of most of our problems and kill them off to preserve the world.
Which, in all fairness, is logical, and could be the result of programming the AI to be ethical about not destroying all life on this planet.
(Score: 0) by Anonymous Coward on Tuesday April 04 2023, @01:53AM
The elites would still be the ones aiming the gun and pulling the trigger.
The likely problem would be the ruling elites using/misusing the AIs and deciding to replace a whole bunch of humans, or even nuking them.
It's not unimaginable that more than a few CEOs could get convinced that they can replace a whole bunch of people with AIs.
It's not inconceivable that a US president could be mentally incompetent enough to follow the advice of some AI on the internet on whether to nuke Russia/China or a hurricane.
(Score: 2) by gnuman on Monday April 03 2023, @07:14PM (2 children)
In the world, 50% of people have a PC at home. Access to smartphones is even higher. And in the 90s, no one thought you'd have a tracking device in your pocket most of the time.
It's a very useful tool actually, if you know what you are doing. Just like a regular paperclip ;) I've tried it -- it's wonderful as a search tool when you have very specific questions to ask, albeit the questions should be concrete and not of a nebulous nature. After all, AI is a glorified regular expression...
The real question is: did the Internet result in a lot of people being unemployed? Despite Amazon and the like, the answer is probably no. Somehow we ended up with a very low unemployment rate. What this level of AI should produce is improved productivity, not necessarily job replacement. I think we've already proven many times that there is no finite amount of work that needs to be done; there is only a finite amount of resources we can muster to do the work that we dream up.
(Score: 3, Touché) by DannyB on Monday April 03 2023, @09:39PM (1 child)
Historically, improvements in tech that "eliminate jobs" inevitably create more and better jobs. However this is not guaranteed.
A fable:
There was this guy who worked mining "clean" coal.
Then the mine shut down because of the economics of energy that has a greenish color.
So he went to work at the auto manufacturing plant.
Soon his job was replaced by robots that cost half as much and were three times as smart but did not want to kill all humans.
So he decided to become a truck driver, because those trucks ain't goin' to drive themselves.
When trying to solve a problem don't ask who suffers from the problem, ask who profits from the problem.
(Score: 1) by khallow on Tuesday April 04 2023, @12:11PM
(Score: 3, Interesting) by Mojibake Tengu on Monday April 03 2023, @06:50PM (1 child)
Let me say it again more explicitly, since my previous irony was deliberately misunderstood. Or suppressed.
ChatGPT operates as an amplifier of Delusions!
That's pure demonism, by ancient books criteria. Even the image construction by DALL-E, Midjourney or Stable Diffusion I can count as... summoning.
Those elite satanists now are just scared of their own undertaking. It's too late, though. The singularity is already behind us.
I, for one, welcome our new digital magic overlords!
Here comes the epoch of Magia Nova.
Respect Authorities. Know your social status. Woke responsibly.
(Score: 1, Touché) by Anonymous Coward on Monday April 03 2023, @07:08PM
This reads like the Singularity reddit. Every post more breathless than the last! As if written by a seriously deranged chat robot... Nice job mimicking the robot! Reality mirrors art(ificial bullshit).
(Score: 4, Insightful) by khallow on Monday April 03 2023, @06:56PM
(Score: 1) by Runaway1956 on Tuesday April 04 2023, @02:12AM (2 children)
Is the AI apocalypse going to be worse than the zombie apocalypse? If so, how?
(Score: 1) by khallow on Tuesday April 04 2023, @02:37AM
(Score: 3, Insightful) by ikanreed on Tuesday April 04 2023, @03:06AM
The rich people who own everything will now have unquestioning robotic soldiers to oppress the dispossessed instead of just mostly unquestioning cops.
(Score: 3, Touché) by ChrisMaple on Tuesday April 04 2023, @04:40AM
I think Musk et al. want competitors to stop developing AI while he and his cronies plunge forward, hoping to use AI to develop itself to the point of an unstoppable monopoly.
(Score: 2) by tangomargarine on Tuesday April 04 2023, @02:23PM
Surprised his name isn't on this yet.
https://rationalwiki.org/wiki/Eliezer_Yudkowsky [rationalwiki.org]
"Is that really true?" "I just spent the last hour telling you to think for yourself! Didn't you hear anything I said?"