Everyone should stop working on AI now

Accepted submission by Anonymous Coward at 2025-04-27 10:43:36 from the look before you leap dept.
/dev/random

I just saw this video, https://www.youtube.com/watch?v=uMwjKyAPR34 [youtube.com], explaining how the end of the world is coming in 2028 or so.
I work for the Department of Redundancy Department, so here's a summary:

Investors are pouring a lot of money into AI research. State-level investors, in many cases, are interested in surveillance and efficient warfare. Many experts say that humans cannot control a machine more intelligent than themselves, and experts have outlined a number of "AGI success" scenarios that end in human extinction. The video offers a relatively simple suggestion for circumventing the problem: keep "AI" simple for now, by always keeping three properties separate: "autonomy", "generality", and "intelligence". The video ends by encouraging viewers to spread the word and to put pressure on the EU to act on this (since it seems the US and China are going to ignore the warnings).

In a recent journal entry by AnonTechie, https://soylentnews.org/~AnonTechie/journal/19211 [soylentnews.org] (note: I am not AnonTechie), other experts say that we're not on the right path to general AI.
I asked there: "but why try to make AGI in the first place?"
And it's still not clear to me: what advantage does humanity get from "building AGI"? I can certainly see the money that Google, Amazon, and Facebook are making from improving their advertising, and I can certainly see the benefits of improved medical diagnosis, universal translation, and a bunch of other clearly defined use cases.
But why do we need one algorithmic/hardware entity that can do everything?
Why are the citizens of democracies allowing their governments to put money into "AGI"?

For what it's worth, humanity has recent experience with exponential growth (COVID), and with warnings of a catastrophic future being ignored by governments and populations (climate change).
We also have experience with a catastrophic future that was avoided: the ozone layer survived and is recovering because of actions taken in the 1980s.
In democracies, at least nominally, power is divided evenly among people through the universal vote.
How can we convince voters that the rate of progress in AI research is out of control?

In the case of ITER (hard theory), the LHC (hard theory and high precision), or LIGO (high precision), there is a human community that can ultimately explain every nut and bolt.
In the case of AI we do not have that: if we ask why the zeros and ones are arranged in a certain way, humanity as a whole cannot answer.
So why are we doing this?

Yes, it's fun to watch disaster movies and identify with the ones who are picking up the pieces at the end.
Given that people play the lottery every day, I have little hope for a rational answer... but why would anyone want to be part of a disaster movie in the first place?
And even if we succeed in building a non-violent AGI: why would it suddenly be OK to own slaves?
Because if a machine can pass the Turing test, I personally consider it as valuable as any human.

