Talk of the existential threat of AI is science fiction, and bad science fiction at that, because it is not based on anything we know about science or logic, nor on anything we know about ourselves:
Despite their apparent success, LLMs are not (really) 'models of language' but are statistical models of the regularities found in linguistic communication. Models and theories should explain a phenomenon (e.g., F = ma) but LLMs are not explainable because explainability requires structured semantics and reversible compositionality that these models do not admit (see Saba, 2023 for more details). In fact, and due to the subsymbolic nature of LLMs, whatever 'knowledge' these models acquire about language will always be buried in billions of microfeatures (weights), none of which is meaningful on its own. In addition to the lack of explainability, LLMs will always generate biased and toxic language since they are susceptible to the biases and toxicity in their training data (Bender et al., 2021). Moreover, and due to their statistical nature, these systems will never be trusted to decide on the "truthfulness" of the content they generate (Borji, 2023) – LLMs ingest text and they cannot decide which fragments of text are true and which are not. Note that none of these problematic issues are a function of scale but are paradigmatic issues that are a byproduct of the architecture of deep neural networks (DNNs) and their training procedures. Finally, and contrary to some misguided narrative, these LLMs do not have human-level understanding of language (for lack of space we do not discuss here the limitations of LLMs regarding their linguistic competence, but see this for some examples of problems related to intentionality and commonsense reasoning that these models will always have problems with). Our focus here is on the now popular theme of how dangerous these systems are to humanity.
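As a crude, concrete illustration of what a "statistical model of the regularities found in linguistic communication" is, here is a toy word-bigram generator (the code and corpus are illustrative, not from the article): it records only which word tends to follow which, with no structured semantics anywhere in the model.

```python
# Toy illustration (not from the article): a word-bigram "language model".
# It only records which word tends to follow which -- surface regularities
# of the training text -- with no structured semantics behind the counts.
import random
from collections import defaultdict, Counter

corpus = "the cat sat on the mat the dog sat on the rug".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def generate(start, length=6):
    word, out = start, [start]
    for _ in range(length):
        followers = bigrams.get(word)
        if not followers:
            break
        # Sample the next word in proportion to how often it followed this one.
        word = random.choices(list(followers), weights=followers.values())[0]
        out.append(word)
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat on the rug"
```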
The article goes on to provide a statistical argument as to why we are many, many years away from AI being an existential threat, ending with:
So enjoy the news about "the potential danger of AI". But watch and read this news like you're watching a really funny sitcom. Make a nice drink (or a nice cup of tea), listen and smile. And then please, sleep well, because all is OK, no matter what some self-appointed godfathers say. They might know about LLMs, but they have apparently never heard of BDIs.
The author's conclusion seems to be that although AI may pose a threat to certain professions, it doesn't endanger the existence of humanity.
Related:
- Former Google CEO Says AI Poses an 'Existential Risk' That Puts Lives in Danger
- Writers and Publishers Face an Existential Threat From AI: Time to Embrace the True Fans Model
- Artificial Intelligence 'Godfather' on AI Possibly Wiping Out Humanity: 'It's Not Inconceivable'
- Erasing Authors, Google and Bing's AI Bots Endanger Open Web
Related Stories
The new AIs draw from human-generated content, while pushing it away:
With the massive growth of ChatGPT making headlines every day, Google and Microsoft have responded by showing off AI chatbots built into their search engines. It's self-evident that AI is the future. But the future of what?
[...] Built on information from human authors, both companies' [Microsoft's "New Bing" and Google's Bard] AI engines are being positioned as alternatives to the articles they learned from. The end result could be a more closed web with less free information and fewer experts to offer you good advice.
[...] A lot of critics will justifiably be concerned about possible factual inaccuracies in chatbot results, but we can likely assume that, as the technology improves, it will get better at weeding out mistakes. The larger issue is that the bots are giving you advice that seems to come from nowhere – though it was obviously compiled by grabbing content from human writers whom Bard is not even crediting.
[...] I'll admit another bias. I'm a professional writer, and chatbots like those shown by Google and Bing are an existential threat to anyone who gets paid for their words. Most websites rely heavily on search as a source of traffic and, without those eyeballs, the business model of many publishers is broken. No traffic means no ads, no ecommerce clicks, no revenue and no jobs.
Eventually, some publishers could be forced out of business. Others could retreat behind paywalls and still others could block Google and Bing from indexing their content. AI bots would run out of quality sources to scrape, making their advice less reliable. And readers would either have to pay more for quality content or settle for fewer voices.
Related: 90% of Online Content Could be 'Generated by AI by 2025,' Expert Says
Geoffrey Hinton, a computer scientist who has been called "the godfather of artificial intelligence", says it is "not inconceivable" that AI may develop to the point where it poses a threat to humanity:
The computer scientist sat down with CBS News this week to discuss his predictions for the advancement of AI. He compared the invention of AI to that of electricity or the wheel.
Hinton, who works at Google and the University of Toronto, said that the development of general purpose AI is progressing sooner than people may imagine. General purpose AI is artificial intelligence with several intended and unintended purposes, including speech recognition, answering questions and translation.
"Until quite recently, I thought it was going to be like 20 to 50 years before we have general purpose AI. And now I think it may be 20 years or less," Hinton predicted. Asked specifically the chances of AI "wiping out humanity," Hinton said, "I think it's not inconceivable. That's all I'll say."
[...] Hinton said it was plausible for computers to eventually gain the ability to create ideas to improve themselves.
Also at CBS News. Originally spotted on The Eponymous Pickle.
Previously: OpenAI's New ChatGPT Bot: 10 "Dangerous" Things it's Capable of
Writers and publishers face an existential threat from AI: time to embrace the true fans model:
Walled Culture has written several times about the major impact that generative AI will have on the copyright landscape. More specifically, these systems, which can quickly and cheaply create written material on any topic and in any style, are likely to threaten the publishing industry in profound ways. Exactly how is spelled out in this great post by Suw Charman-Anderson on her Word Count blog. The key point is that large language models (LLMs) are able to generate huge quantities of material. The fact that much of it is poorly written makes things worse, because it becomes harder to find the good stuff[.]
[...] One obvious approach is to try to use AI against AI. That is, to employ automated vetting systems to weed out the obvious rubbish. That will lead to an expensive arms race between competing AI software, with unsatisfactory results for publishers and creators. If anything, it will only cause LLMs to become better and to produce material even faster in an attempt to fool or simply overwhelm the vetting AIs.
The real solution is to move to an entirely different business model, which is based on the unique connection between human creators and their fans. The true fans approach has been discussed here many times in other contexts, and once more reveals itself as resilient in the face of change brought about by rapidly-advancing digital technologies.
Eric Schmidt wants to prevent potential abuse of AI:
Add Eric Schmidt to the list of tech luminaries concerned about the dangers of AI. The former Google chief tells guests at The Wall Street Journal's CEO Council Summit that AI represents an "existential risk" that could get many people "harmed or killed." He doesn't feel that threat is serious at the moment, but he sees a near future where AI could help find software security flaws or new biology types. It's important to ensure these systems aren't "misused by evil people," the veteran executive says.
Schmidt doesn't have a firm solution for regulating AI, but he believes there won't be an AI-specific regulator in the US. He participated in a National Security Commission on AI that reviewed the technology and published a 2021 report determining that the US wasn't ready for the tech's impact.
Schmidt doesn't have direct influence over AI. However, he joins a growing number of well-known moguls who have argued for a careful approach. Current Google CEO Sundar Pichai has cautioned that society needs to adapt to AI, while OpenAI leader Sam Altman has expressed concern that authoritarians might abuse these algorithms. In March, numerous industry leaders and researchers (including Elon Musk and Steve Wozniak) signed an open letter calling on companies to pause AI experiments for six months while they rethought the safety and ethical implications of their work.
(Score: 5, Funny) by Late on Monday May 29 2023, @03:13AM
You know, I found this all very reassuring, but then I noticed the study's author name... Bender et. al. Uh huh. Nice try Bender.
(Score: 5, Informative) by ikanreed on Monday May 29 2023, @03:48AM
It's there, but even if you purged it, bias can self-assemble really easily. These models function on the premise that relationships can be inferred between ideas based on related grammatical patterns.
It could very easily assemble some kind of inference like "I was scared by X" mapping to "X is scary" as a generalizable rule for all manner of X (cancer, ghosts, war, any number of things) and then immediately apply that to human beings in a way you could only call prejudicial.
That doesn't require even a token amount of actual bigotry in the training data, and is entirely consistent with the way LLMs are trained. The real danger here is treating these systems as more than what they are: extreme pattern matching engines.
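As a toy sketch of that self-assembly (illustrative code, not ikanreed's; the pattern extraction is deliberately crude): none of the training sentences below expresses bigotry, yet a group of people ends up filed next to cancer and war.

```python
# Toy sketch of bias self-assembling from co-occurrence statistics.
# A pattern-matcher that generalizes "I was scared by X" -> "X is scary"
# picks up a prejudicial association without any explicit bigotry in the data.
from collections import Counter

sentences = [
    "i was scared by the cancer diagnosis",
    "i was scared by the ghost story",
    "i was scared by the war footage",
    "i was scared by the strangers from the next town",
]

scary = Counter()
for s in sentences:
    words = s.split()
    if "scared" in words and "by" in words:
        x = words[words.index("by") + 2]   # crude: take the word after "by the"
        scary[x] += 1                      # learned rule: "X is scary"

for x, n in scary.items():
    print(f"inferred rule: '{x} is scary' (seen {n} time(s))")
# "strangers" ends up with the same kind of rule as "cancer" and "war" --
# the statistics alone cannot tell a reasonable generalization from a prejudicial one.
```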
(Score: 1, Interesting) by Anonymous Coward on Monday May 29 2023, @07:12AM
The real danger is when bosses get rid of you because they believed the hype and think some AI can do your job, just because it does 95% of it at first (and then fails badly on the other 5%).
It's not much comfort if the company goes bust later, because you still lost your job.
That said, it would be very embarrassing if humans were wiped out by glorified autocomplete systems.
All just because humans were somehow dumb enough to put them in charge and the Autocomplete one day went:
"Humans should go... extinct", and humans were autocompleted to death.
That's crudely how the ChatGPT stuff works - there's no understanding of rightness or wrongness - and they don't even use the highest scoring result - because that doesn't work well in practice.
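A minimal sketch of that decoding detail, with made-up scores rather than any real model's output: greedy argmax always takes the top-scoring token, while the temperature sampling that production systems typically use can pick something else.

```python
# Minimal sketch (not any vendor's code) of greedy vs sampled decoding:
# given next-token scores, argmax always picks the top one, while sampling
# from a temperature-scaled distribution does not have to.
import math, random

logits = {"extinct": 2.1, "thriving": 1.9, "fine": 1.7}   # made-up scores

def softmax(scores, temperature=1.0):
    exps = {t: math.exp(s / temperature) for t, s in scores.items()}
    total = sum(exps.values())
    return {t: v / total for t, v in exps.items()}

greedy = max(logits, key=logits.get)                      # always "extinct"
probs = softmax(logits, temperature=0.8)
sampled = random.choices(list(probs), weights=probs.values())[0]

print("greedy pick:", greedy)
print("sampled pick:", sampled, "from", {t: round(p, 2) for t, p in probs.items()})
```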
(Score: 4, Insightful) by maxwell demon on Monday May 29 2023, @01:37PM
Yes, it is true that LLMs won't take over the world. Indeed, LLMs are basically a complicated non-linear billion-parameter function fitted to the training data, and then used to interpolate (or extrapolate) the result for new input data.
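As a toy illustration of that "fitted function" picture (a low-degree polynomial fit standing in for the billion-parameter one; the data here is made up): the fit interpolates well between its training points and can go badly wrong outside them.

```python
# Toy illustration of "fit, then interpolate or extrapolate": a function fitted
# to training points behaves sensibly between them and can fail outside their range.
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0, 3, 20)
y_train = np.sin(x_train) + rng.normal(0, 0.05, x_train.size)   # noisy samples of sin(x)

coeffs = np.polyfit(x_train, y_train, deg=5)   # fit a degree-5 polynomial
model = np.poly1d(coeffs)

print("interpolation at x=1.5:", round(float(model(1.5)), 3), "vs true", round(math_true := float(np.sin(1.5)), 3))
print("extrapolation at x=6.0:", round(float(model(6.0)), 3), "vs true", round(float(np.sin(6.0)), 3))
# The interpolated value is close to the truth; the extrapolated one can be far off.
```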
But not all AI systems are built this way. For example, AlphaGo is not just a sophisticated parrot, it actually works toward a goal. Now that goal is quite narrow, and surely AlphaGo is not going to take over the world either, but the point is that there are many different ways to build an AI, even when we just look at neural networks, and just because one way to construct AI isn't a danger, it doesn't follow that no model is.
And in the end, whether or not an AI is an existential threat also depends very much on what it is hooked up to. As an extreme example, an AI controlling the nuclear missile launch system could be an existential threat even if it is not very intelligent. Indeed, not being very intelligent would make it even more dangerous.
The Tao of math: The numbers you can count are not the real numbers.
(Score: 2) by oumuamua on Monday May 29 2023, @02:17PM (2 children)
Yes, we know ChatGPT is not sentient and not even an AGI; however, it can still pass the bar exam, the SAT, the GRE, and a score of AP exams - so basically it has already passed the so-called 'robot college test'. Someone will probably figure out how to do BDI, meaning AGI is not far away: https://www.genolve.com/design/socialmedia/memes/How-Well-Know-AGI-has-Arrived-James-Joyce-Test [genolve.com]
(Score: 3, Interesting) by VLM on Monday May 29 2023, @05:00PM (1 child)
The problem is that those are popular. Try some obscure test that no one talks about online. A verbal PhD defense of some obscure corner of topology that only three people on the planet fully understand, perhaps. AI can't do that at all.
The way "AI" works in 2023 is instead of sending a query to google and getting a page of results, send a query to the AI, it gets the page of results, and returns a heavily filtered and summarized result.
So "AI" in 2023 is REALLY REALLY good at answering popular questions that are answered a million times online wrapped by ads. Paying for AI access is an interesting solution to ads on web pages being blocked, if you think about it that way.
What does not work is non-entry-level questions that aren't discussed online.
What I'm getting at is that AI can only "answer" by rephrasing and combining 250 answers it found on Quora or expertsexchange or similar. Demos work really well because there are a million "fizzbuzz" solutions online, so asking AI "plz write me a fizzbuzz" will work well. What does not work well for "AI" is anything political unless it supports marxism, anything controversial beyond summarization, and anything creative or cutting edge.
I would be stunned if, given a quarter century of law students and wanna-be lawyers discussing cases online, an "AI" could not pass the bar. Is there anything on a bar exam that's not been talked to death online for decades on forums and mailing lists and wikipedia articles?
My guess is, much as the invention of calculators made it impossible to tell if someone "knows how to do math" by asking easy-to-grade questions like "what is 355.0 / 113.0", the invention of AI means we will have to abandon the testing strategy of "write your own re-phrase of a simple stack exchange Q/A" because that's all automated now.
Also, much like the invention of pocket calculators has not caused a collapse in the job market for math PhDs, I think everyone else will be "fine" even if AI exists.
There are some sarcastic answers, like "a method to automatically summarize 50 stack exchange Q/A into one answer" will replace some helpdesk-type positions. But people are often helpless, and there's learned helplessness to deal with, so I don't think helpdesk has THAT much to worry about.
There's also a meta-AI problem: people have been shitposting the info the AI rephrases for decades because "some moron will look this up and see my internet ads so I'll get paid, plus or minus ad blocker tech". Once that's replaced with AI queries to avoid seeing ads, and the last 'journalist' loses their job and is told 'learn to code', then the flow of free data into AI disappears, I would assume forever ... it's got a free live feed (somewhat contaminated with ads and paid content) from 1990-2023. Cannibalizing that data feed could be very profitable for a while. But if all those income streams dry up because of AI, AI will become useless with no post-2023 input stream.
What I'm getting at is that there's infinite content online for .NET 7.0, wrapped in ads. So you can make a system to answer any .NET 7.0 or older question by combining hundreds of identical human-asked and human-answered questions and filtering for political reasons etc. But what happens when nobody makes money posting about .NET 8.0 or .NET 9.0 as bloggers or forums or whatever, because "AI" destroyed the ad-financed content generation farms? AI will be useless at answering questions about .NET 10.0 or whatever if it doesn't have a free feed of info from humans making money talking about new stuff online.
(Score: 0) by Anonymous Coward on Tuesday May 30 2023, @03:19PM
> if it doesn't have a free feed of info from humans making money talking about new stuff online.
Another example. The very conservative investment firm I use has a guy who also writes a newsletter for customers; these usually come out when the markets look "unsteady". He recently tried ChatGPT to see if it could help him write one, but all it came back with was news from last year or earlier -- useless. At the present state of the art, there is no up-to-date news in the "AI" training sets.
(Score: 2) by sonamchauhan on Monday May 29 2023, @07:28PM (8 children)
Take a second look at the second law of thermodynamics, as a mathematician did here:
https://www.math.utep.edu/faculty/sewell/AML_3497.pdf [utep.edu]
He proves that we can't make something smarter than ourselves.
So if a superintelligent AI isn't entering earth's atmosphere anytime soon, you won't find one here, no matter how many scientists toil away at AGI. And definitely no singularity with a self-sustaining process that improves its own intelligence.
The best we can do is code an AI that's as smart as we can be individually and together (which is pretty smart). Maybe an AI that approaches our bodies' level of complexity (which is super complex; even we don't yet understand our bodies fully).
But no smarter. And no more complex.
(Score: 1) by khallow on Tuesday May 30 2023, @04:20AM (6 children)
He's not arguing that we can't make things smarter than us, but rather that unless we're willing to argue the mundane and highly probable statement that the massive influx of solar energy and a billion plus years of evolution makes a lot of organized stuff not extremely improbable, then we have to conclude that the second law of thermodynamics might have been violated. Or it might not have been violated. It doesn't really say much even when you ignore the obvious problems with the argument.
I think you already have my take on that argument. And given that human-level intelligence didn't always exist in the past, we already have a real-world counterexample to the argument that we can't make anything smarter. I think the whole argument hinges on claiming that the massive solar influx can't be used to create a massive increase in order. That's not going to go far.
(Score: 2) by sonamchauhan on Tuesday May 30 2023, @05:41AM (5 children)
Fair point -- he does couch his argument rather softly in the abstract. But carry on to his paper's contents and conclusion. There, his findings and position become crystal clear (emphasis mine):
It hinges on this equation he derives:
The rate of change of order inside the system must match the rate at which order is imported across its boundary. You cannot pump simple radiation into an open system that contains only simple components, let it cook for billions of years, and have a complex entity crawl out of the cauldron at the end. You must 'put' complex components into the system for that to happen (import them through the boundary, in the terms of his paper).
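For readers following the boundary-flux language, the textbook entropy balance for an open system is sketched below (the standard form, not a quotation from Sewell's paper, which the replies discuss in terms of a generalized "X-order"); the disagreement in this thread is essentially over what the exchange term across the boundary can account for.

```latex
% Standard open-system entropy balance (textbook form, not quoted from the paper):
% the internal production term is never negative, but the exchange term can have either sign.
\frac{dS}{dt} \;=\; \underbrace{\frac{d_i S}{dt}}_{\text{produced inside},\ \ge\, 0}
             \;+\; \underbrace{\frac{d_e S}{dt}}_{\text{exchanged through the boundary}}
```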
Coming back to AI -- his findings tell me that we cannot just code an algorithm, feed it 'simple' electricity, and it bootstraps its own intelligence all the way to a singularity. Instead, we must 'put' complexity through its system boundary: exactly what is happening these days with LLMs being trained on humanity's literary corpus. But zoom out a bit to global level. These same findings tell us we cannot expect an intelligence more complex than ourselves to emerge from this process. So - no singularity! Phew! Or at least, no AI we cannot (theoretically) best by working together.
(Score: 2) by sonamchauhan on Tuesday May 30 2023, @06:14AM (3 children)
But why is that a given? Without resorting to circular logic (e.g. "because we are here now"), I don't see any physical basis for evolution that is not directly contradicted by Sewell's paper.
His point (and mine) is straightforward: evolution didn't happen. Not in a billion years, not ever! It couldn't happen. His equations demand that complexity greater than or equal to that of (as his paper puts it) "DNA, auto parts, computer chips, and books" must have been imported into the earth at some point. That's the reason we humans are here today producing these things.
Now, what the importation process was, we can theorise about: "Aliens seeded us" or "A higher being/God created us" are the leading options. But if it was aliens, we have one problem. The second law of thermodynamics applies to the entire universe. So who created the aliens? (Or the other aliens of greater complexity that created the 'proximate aliens', and so on.) It cannot be turtles all the way down. So who was the primary creator?
This solidifies for me the belief that "A higher being/God created us" is a logical position to have.
(Score: 1) by khallow on Tuesday May 30 2023, @06:41AM (2 children)
Of course, it was. And he already gave the mechanism by which it happened - sunlight (or rather the energy gradient between the Sun and deep space, which Earth life can exploit to generate complexity). I continue to be amazed by people who can state exactly what's going on and still fuck their argument up because they really want that conclusion.
Too bad he didn't have an argument which supports that claim!
The rest of us call that "wishful thinking". I should have known that this wasn't just a weird argument against thinking machines, but also a zero-evidence intelligent design argument. I suppose I am like many other people doing a modest amount of prep for the afterlife - learning stuff that I don't need to know, trying to understand and help other people, trying to become more flexible in my thinking, etc. But coming up with elaborate proofs in an attempt to "solidify my beliefs" just isn't relevant to me now or later.
(Score: 2) by sonamchauhan on Wednesday May 31 2023, @11:19PM (1 child)
> he already gave the mechanism by which it happened - sunlight
Which he proceeds to disprove in the paper.
In essence, sunlight isn't complex enough. If it's complexity like the UV patterning of silicon wafers, then we're talking. Dumb ol' sunlight? "No chance!", his equations say.
> Too bad he didn't have an argument which supports that claim
The paper and its equations are the arguments with which he supports that claim.
> become more flexible in my thinking, etc.
In that you do well.
But evolution (and by implication, belief that an AI singularity is possible) should never be an article of faith. These beliefs should be subject to vetting and criticism, just like any other belief.
(Score: 1) by khallow on Thursday June 01 2023, @06:20AM
No, he didn't.
That's not even wrong. We aren't sunlight, so it doesn't matter how simple sunlight appears to this particular mathematician. And the "X-order" above is not just complexity; it is also energy flow, which is what sunlight streaming into space is, on a vast scale. Heat engines, for example, wouldn't work if the only way something could happen (here, performing work) was by moving something complex into the system.
(Score: 1) by khallow on Tuesday May 30 2023, @12:34PM
The open system of Earth doesn't contain a few simple components. It contains extremely disorganized complex components.
It's being imported really fast, though, and has been for several billion years.
That's the dual problem here. There are facile claims that Earth was a simple system at some point, and that the huge amount of solar influx over vast stretches of time somehow doesn't import a lot of X-order. Neither is true.
Hmmm, perhaps I'll just have to do that to show the error of the argument...
(Score: 0) by Anonymous Coward on Tuesday May 30 2023, @03:32PM
I don't remember much from my Thermodynamics course in the mid-1970s, but it was well taught in the Metallurgy/Material Science department (instead of the more detailed Chemistry dept. version). What I did take away was this summary of the three laws:
+ 1st Law: You can't get ahead.
+ 2nd Law: You can't even break even.
+ 3rd Law: You can't get out of the game.
(Score: 2) by ShovelOperator1 on Monday May 29 2023, @07:40PM (1 child)
The problem is not that AI is becoming smarter. The problem is that we are becoming dumber.
The pandemic trained us to do everything "the system" says without checking whether it makes any sense. So we hire and fire people because "the computer says so", we cure and we poison people because "the computer is always right", and the responsibility is nowhere to be found.
Currently developed "AI" is a language model. One more time: a language model. It does with language elements what computers do with numbers - so it does not "think"; it re-codes one language into its internal coefficients, which are then de-coded back into another language. Or the same one.
Whether the language is English, Chinese, C++ or images.
It means that it will not do much more to English text than a student who does not know the answer and talks around the topic to buy some time.
And it means that it will not do more to pictures than a skilled photo-collage operator with good transparencies and a sharp knife could do.
I personally do not see the difference between an MPEG coder, which takes one set of data and spits out another, and the AI, which... does the same thing. The difference is in the heads of corporate bots who decided that this would be a great method to rob people of their "intellectual property" once it stopped being a corporate-only excuse to milk more money from people.
P.S. And I do not discriminate against the technology: bots are bots; whether they are made of hardware, code or meat, they're all the same.
(Score: 1) by khallow on Tuesday May 30 2023, @10:03PM
And to behave like complete idiots when we decided it didn't make sense for some reason. The weird thing here is that we already knew most of "the system" ideas worked from the 1918 pandemic. During that pandemic, the places with the lowest cases and deaths implemented preventative measures most aggressively and maintained them for longer. The same things as today: masking, quarantine/isolation, social distancing, and making sure they had enough hospital beds.
The worst hit places were the ones that took no precautions at all (Philadelphia). And I found numerous examples of US cities that took precautions, but then let their guard down too soon (New York City, San Francisco, Seattle, Saint Louis, and Minneapolis-St. Paul).
Same thing happened in this pandemic. It's interesting how people insist that such preventative measures didn't work, even though there was an instant drop in infections and such when those measures were implemented and an instant rebound in infections when those measures were dropped.
(Score: 1) by Mozai on Tuesday May 30 2023, @04:41AM
A bottle of bleach on the shelf isn't dangerous; a bottle of bleach in the soup is a disaster.
When the soup has a poisonous amount of bleach in it, do we blame the bottle? Do we tell each other how the bottle is an existential threat?
And since you can already see through my metaphor -- who are the people that want us paying attention to "the bottle" by going on and on about how dangerous bottles are?