In "The Adolescence of Technology," Dario Amodei argues that humanity is entering a "technological adolescence" due to the rapid approach of "powerful AI"—systems that could soon surpass human intelligence across all fields. While optimistic about potential benefits in his previous essay, "Machines of Loving Grace," Amodei here focuses on a "battle plan" for five critical risks:
1. Autonomy: Models developing unpredictable, "misaligned" behaviors.
2. Misuse for Destruction: Lowering barriers for individuals to create biological or cyber weapons.
3. Totalitarianism: Autocrats using AI for absolute surveillance and propaganda.
4. Economic Disruption: Rapid labor displacement and extreme wealth concentration.
5. Indirect Effects: Unforeseen consequences on human purpose and biology.
Amodei advocates a pragmatic defense built on Constitutional AI, mechanistic interpretability, and surgical government regulation, such as transparency legislation and chip export controls, to ensure a safe transition to "adulthood" for our species.
(Score: 3, Interesting) by Fnord666 on Friday January 30, @08:50PM (15 children)
(Score: 2, Interesting) by Anonymous Coward on Friday January 30, @09:39PM (7 children)
And another (recent lecture) view on the topic: https://fossforce.com/2026/01/seven-years-after-stallman-is-still-stallman/ [fossforce.com] A short excerpt from the article:
I was using another name, but in deference to my elders (he's a year older than me (grin)), I think I'm going to start using PI.
(Score: 3, Insightful) by Unixnut on Saturday January 31, @10:23AM (5 children)
Within the "AI circles" they call it Machine Learning (ML), which is what it is. An advanced form of the "expert systems" that were a (also very overhyped) thing in the 90s. These models have their uses, and when used properly with an understanding of their strengths, weaknesses and limitations, they can be a net benefit.
However, they are not intelligent; they are not even particularly good learners. You need to iterate countless times with training data until your model starts getting things right more often than it gets them wrong, and even then they are finicky about the way you "prompt" them. They have to iterate many times on the trained model to actually generate a response, and quite often the response is wrong. It's right more than 50% of the time, but not by much more, IME.
These systems are not sentient and they cannot understand context; hell, most of the time they don't even understand the topic they are discussing. They are just giving you the statistically most likely response to your prompt, based on the model's past training.
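As a toy illustration of that "statistically most likely response" point, here's a bigram counter in TypeScript; nothing remotely like a real transformer, and the "training data" is invented, but the principle of prediction-by-counting is the same:

    // Count which word most often follows each word in the "training data",
    // then "predict" by picking the most frequent follower. Purely statistical.
    const corpus = "the cat sat on the mat the cat ate the rat".split(" ");

    const counts = new Map<string, Map<string, number>>();
    for (let i = 0; i < corpus.length - 1; i++) {
      const cur = corpus[i];
      const next = corpus[i + 1];
      if (!counts.has(cur)) counts.set(cur, new Map());
      const followers = counts.get(cur)!;
      followers.set(next, (followers.get(next) ?? 0) + 1);
    }

    // "Prompt" the model: return the most frequent follower of a word.
    function predict(word: string): string | undefined {
      const followers = counts.get(word);
      if (!followers) return undefined;
      return [...followers.entries()].sort((a, b) => b[1] - a[1])[0][0];
    }

    console.log(predict("the")); // "cat" -- the most frequent continuation

No understanding anywhere in there, just counting; scale the table up to billions of parameters and you get the same principle with better marketing.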
At best you can say that humans have found a way to impart some limited subset of their thought processes into a mathematical model that is executed on a computer (essentially a glorified calculator), but that does not make the machine intelligent.
Just because it is not intelligent itself doesn't make it useless: if refined enough, the ability to impart the thought processes of experts into a system that persists past the natural lifespan of those experts means we can preserve even more human knowledge, and it may well help us advance even faster.
Still, at this point in time there is money to be made, and those whose salaries (and stock options) depend on "the AI miracle" will carry on peddling and milking it for as long as they can.
I'd also not even want to try developing "AGI"; we have enough trouble with the ~6 billion human intelligences fighting for resources without spawning an entire new race of intelligences to compete with.
(Score: 3, Insightful) by JoeMerchant on Saturday January 31, @05:36PM (4 children)
> An advanced form of the "expert systems" that were a (also very overhyped) thing in the 90s.
I participated in some expert systems development in the 90s.
One big headwind they faced was deliberate torpedoing by the experts whose jobs they were obviously gunning for. Training on everything you can scrape from the open internet end-runs that for many fields.
Another problem is that in many fields the experts have diverse, often conflicting, opinions (ever hear "Get a 2nd opinion" in medical contexts?). That kind of nuance can be built into a singular expert system, but doing so undermines confidence, which is why the human experts "dig in" on their singular views. Nobody likes a wishy-washy answer, but in truth wishy-washy answers are the best we've got for many of the most important questions you will ever have. "Confidence" is naturally appealing, though, so many "experts" project it in varying degrees; in my experience, the most fiercely confident experts are the ones with the flimsiest data supporting their opinions. It almost seems like AI agents are picking that up from their training material, because I get similar vibes from them.
🌻🌻🌻🌻 [google.com]
(Score: 2) by quietus on Sunday February 01, @08:09PM (3 children)
"Training on everything you can scrape from the open internet end-runs that"
Almost by definition, experts are rare; hence their expertise [signal] will statistically be drowned out by the non-experts [noise].
(Score: 2) by JoeMerchant on Sunday February 01, @08:40PM (2 children)
> experts are hard to find; hence their expertise [signal] will statistically be drowned out by the non-experts [noise].
When programming a website with Ajax, that appears not to matter very much.
🌻🌻🌻🌻 [google.com]
(Score: 3, Touché) by quietus on Monday February 02, @07:20PM (1 child)
Your answer might illustrate the point: it's been a long time since I heard the term Ajax used.
Do I assume correctly that this program uses raw XMLHttpRequests to communicate with the backend? The promise-based fetch() API has been the standard way to do that since around 2015-2017.
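For illustration, the old and new styles side by side (TypeScript; the endpoint is hypothetical):

    // Old "Ajax": callback-based XMLHttpRequest.
    const xhr = new XMLHttpRequest();
    xhr.open("GET", "/api/comments?article=42"); // hypothetical endpoint
    xhr.onload = () => console.log(JSON.parse(xhr.responseText));
    xhr.send();

    // Modern equivalent: the promise-based fetch() API.
    async function loadComments(): Promise<unknown> {
      const response = await fetch("/api/comments?article=42");
      if (!response.ok) throw new Error(`HTTP ${response.status}`);
      return response.json();
    }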
(Score: 2) by JoeMerchant on Monday February 02, @08:10PM
Touché, Gemini is currently recommending Django...
I don't speak web stack; Ajax was just an old buzzword buzzing around...
Of course, your retort could have been inspired by a Gemini query: "Anything wrong with Ajax?"
🌻🌻🌻🌻 [google.com]
(Score: 3, Insightful) by anubi on Sunday February 01, @07:19AM
It knows how to flush with the given amount of water, siphoning until the bowl is empty, then refilling its local tank until the tank and bowl water levels are re-established.
It's all mechanical and fluidic; nevertheless, decisions are being made, physical control loops are in play, predicated on outcomes to be met.
I don't hold that the toilet comprehends any of this, but I hold that it is a low-level intelligence that has a job to do, and does it well. Until it doesn't.
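A minimal sketch of that loop, as hysteresis around a setpoint (TypeScript; all numbers invented):

    const FULL_LEVEL = 10.0;  // float shuts the fill valve at this level
    let tankLevel = FULL_LEVEL;
    let valveOpen = false;

    function step(outflow: number, inflowRate = 0.5): void {
      tankLevel = Math.max(0, tankLevel - outflow);   // flush drains the tank
      if (tankLevel < FULL_LEVEL) valveOpen = true;   // float drops, valve opens
      if (valveOpen) tankLevel = Math.min(FULL_LEVEL, tankLevel + inflowRate);
      if (tankLevel >= FULL_LEVEL) valveOpen = false; // float rises, valve closes
    }

    step(10);                  // flush: tank empties, refill begins
    while (valveOpen) step(0); // refill until the float closes the valve
    console.log(tankLevel);    // 10 -- setpoint restored, job done

No comprehension required: the float is the sensor, the valve is the actuator, and the setpoint is the whole of its "goals."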
"Prove all things; hold fast that which is good." [KJV: I Thessalonians 5:21]
(Score: 2) by JoeMerchant on Friday January 30, @10:21PM
The author of the essay (who is working to develop AGI, in part because if he doesn't, someone else will; the article goes into why that would be unfortunate) naturally has a more nuanced take on the situation. While he acknowledges that AGI obliterating anything in its way, like humans have for the past 300-400 years, is one potential outcome, he outlines his reasoning for why it's not a foregone conclusion, or even likely to actually happen; nonetheless, it is a possibility that must be guarded against.
🌻🌻🌻🌻 [google.com]
(Score: 2, Insightful) by khallow on Saturday January 31, @12:58AM (5 children)
(Score: 2, Interesting) by Anonymous Coward on Saturday January 31, @05:55AM
Cue Aunt Hillary, from Gödel, Escher, Bach: An Eternal Golden Braid.
If you haven't read this, I found an extended review (including the Anteater's story) at https://medium.com/@adnanmasood/hofstadters-g%C3%B6del-escher-bach-an-eternal-golden-braid-choosing-an-operating-model-under-df35a183372e [medium.com]
> “Ant Fugue”: This is one of the longest and most celebrated dialogues in GEB. It features Achilles, Tortoise, Crab, and a guest — the Anteater — all listening to a Bach fugue. Each character represents a “voice” in the fugue, both literally (speaking in turn) and metaphorically (each has a perspective on the holism vs. reductionism debate). The Anteater introduces the story of his aunt Hillary — not a person, but an ant colony (Aunt Hillary, the ant colony, a wordplay on “ant hill”). He argues that Aunt Hillary is conscious and has a mind, even though each individual ant is mindless.
(Score: 3, Interesting) by JoeMerchant on Saturday January 31, @02:50PM (3 children)
My observation: human society has varying levels of intelligence, and the most intelligent of us are far from running the show.
One common perception of intelligence is: put two actors of varying intelligence to identical tasks, and the more intelligent actor will find a way to accomplish the task successfully, statistically, more often than the less intelligent actor. This may include accomplishing the task more efficiently, or more correctly, or just meeting the goal at all. If one assumes that life/existence is just a series of tasks, then more intelligent actors should be consistently more successful, and since economic theory states that we're all self-serving bastards, the most intelligent people should rule the world.
Nevermind all the contradictory evidence in the world today and throughout history.
🌻🌻🌻🌻 [google.com]
(Score: 2, Touché) by khallow on Saturday January 31, @03:38PM (2 children)
Identical novel, complex tasks. Once the task is no longer novel, or just not that hard to figure out, other factors than intelligence come into play.
(Score: 2) by JoeMerchant on Saturday January 31, @05:19PM (1 child)
> Once the task is no longer novel or just not that hard to figure out then other factors than intelligence would come into play.
I wonder about that threshold "just not that hard to figure out" - so many things in this world seem to fall in that category yet so many people seem to still act against their own interests for seemingly obvious things.
But, good point about the novel "disruptor" problems when they are solved, such as generation of thermonuclear explosions...
This is where I agree with the existential risk of AGI (which we decidedly do NOT have yet): when it really starts solving significant novel problems / making "discoveries" of things that are not in the data we feed it, without adequate oversight, monitoring, and safeguards, that could lead to an AGI power grab. Human history is replete with examples of humans using new discoveries to dominate other humans.
🌻🌻🌻🌻 [google.com]
(Score: 3, Interesting) by Anonymous Coward on Sunday February 01, @12:29AM
Operating Manual for Spaceship Earth [wikipedia.org] provides a nice concise history on the subject
The first epoch was one run by "Great Pirates" or "great outlaws." The source of their power is that they are the only masters of global information in a time where people are focused locally... As these people took to the sea they left the local, regional laws of their original communities and entered a transitional space where they invented their own laws based on their interests in retaining special access to the Earth's dispersed resources and to gaining power through trade.
The tech giants are the "new" Great Pirates
(Score: 5, Informative) by VLM on Friday January 30, @11:35PM (6 children)
Could be an intentional distractor article.
1) is fine, "disrupting" an overly stuck industry, etc. "Almost collapsed" orgs that get tipped over the edge by AI will blame the whole thing on AI of course. I don't think newspapers finally dying will be 100% caused by AI LOL those things have been on life support since forever.
2) I'd worry more about dead internet theory and mass propaganda issues. Fake evidence in court. The death of recorded evidence in court, because it's very easy to fake everything now. Using AI to enshittify every product like has never been imagined before.
3) Spot on, but it's not just comic-book-level 'autocrat' villains, it's the chilling effect of every gross wanna-be hall monitor getting their dream. It's not the supreme leader having absolute surveillance, because that guy doesn't give a F about 99% of the population. The ones to look out for are the two-bit team lead at work or the nosey old lady down the street; them and their AI helpers are the next level of East German Stasi.
4) Wake me when AI increases productivity in any specific sector, much less across the board. Both the buyers and the sellers are losing money on every transaction... When it becomes a way to make money, it MAY make too much money. But it might be the next CB Radio boom, or the next 3D TV boom, or those video games that used cheap inertial nav sensors to make people move, or... Also, mal-investment: the money that should have gone into desalination plants or environmental remediation is going into giant datacenters that don't make money.
WRT #4, something I've been trying to do is use AI to leverage my little small business and its semi-formal business plan. You may have noticed I have enough cash to shitpost on SN when I want, but I don't have $1T yet to buy a tropical island for my James Bond style hideaway. It helps, maybe a little, but maybe if I wanted to make a nice ad I'd do better hiring an intern from the local business school for a few hours? I mean, my time is worth money, and I'm not making money when I'm asking AI to draw me cool cat memes or similar non-productive stuff.
5) Nothingburger. What we don't know might be bad for us or might not. Really just a measure of optimism vs pessimism by the observer.
In summary, the usual authoritarian slop: never let a crisis go to waste. Kind of like greenwashing, we now have AI-washing, where any level of crazy power grab by the usual suspects is OK because "AI", so I guess we just have to. "How about no?"
(Score: 1, Informative) by Anonymous Coward on Saturday January 31, @06:56AM (5 children)
The software engineering department in my company has measured a roughly 15% increase in features delivered across multiple languages and platforms, while only measuring a 2% increase in production bugs. The cost of using LLMs is about $700 per developer per month, which is much lower than paying for another ~7 developers.
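For a back-of-envelope version of that comparison (TypeScript; the team size and fully loaded developer cost below are assumptions, not our measured figures):

    const devs = 50;               // assumed team size
    const uplift = 0.15;           // measured feature increase (from above)
    const llmCostPerDev = 700;     // $/developer/month (from above)
    const devCostMonthly = 15_000; // assumed fully loaded $/developer/month

    const extraDevEquivalent = devs * uplift;               // ~7.5 developers
    const llmBill = devs * llmCostPerDev;                   // $35,000/month
    const hiringBill = extraDevEquivalent * devCostMonthly; // ~$112,500/month
    console.log({ extraDevEquivalent, llmBill, hiringBill });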
(Score: 3, Interesting) by Unixnut on Saturday January 31, @11:46AM (1 child)
To contribute my own anecdote: at my company the introduction of AI has produced two opposing responses:
1) Management thinks it's wonderful. They see the charts showing increases in lines of code and "customer features delivered", all for no extra human resource cost (much cheaper than hiring devs, even crappy ones). No doubt some are hoping they can do some more layoffs soon to increase profits even more. No doubt one or two of the brighter ones might be thinking, "Does this mean that eventually we can fire all the devs and just have AI do all the work?"
2) A huge time sink and productivity killer for the actual engineering teams, because before, the crappy developers were limited in the amount of poor code they could write, which limited their damage. Now, with Copilot, they can churn out 2-3 times the volume of poor-quality code, all of which ends up having to be picked apart and/or debugged when it inevitably breaks something.
However, because this is not tracked in "lines of code" or "features delivered", or even "production bugs" (as these bugs are caught before production), management doesn't see that it has actually impacted overall engineering efficiency in a negative way.
As these metrics are not tracked, I can't say whether the net benefit for the company is negative or positive, but if it is positive, I highly doubt it's positive by much. Copilot especially seems pretty poor at programming (insert joke about MS programming standards here); I get better-written code out of generic free LLMs like Perplexity than I do out of Copilot. The problem is that poor devs can't recognise poor code, so the codebase just becomes a bigger, buggier, more unmanageable mess as time goes on.
(Score: 3, Informative) by JoeMerchant on Saturday January 31, @05:42PM
> Co-pilot especially seems pretty poor at programming
Oh, FFS. Back around October 2025 I did some side-by-side comparisons of CoPilot, Gemini, ChatGPT and Claude for programming. Gemini did O.K., ChatGPT not quite as good, Claude a little better than Gemini, and CoPilot fell apart consistently the fastest on all the things I tried to do.
All of them could turn simple tricks; it was a matter of how complex you could go without them becoming counterproductive. CoPilot hit that wall first every single time, with circular responses sometimes in as few as 3 or even 2 prompts, re-suggesting the thing that had just failed.
If your company is stuck with CoPilot for programming... it's remarkable anybody thinks it's good for anything. Try Claude (at home) and see what it can do by comparison. Maybe ask it to freshen your LinkedIn profile while you're at it.
🌻🌻🌻🌻 [google.com]
(Score: 3, Interesting) by JoeMerchant on Saturday January 31, @02:59PM (2 children)
>The cost of using LLMs is about $700 per developer per month
And that's an interesting point: paid mode really is better.
I can merely smell the potential of Claude's programming capabilities in free mode.
At $20 per month, I can do more - low key stuff in the evenings/weekends, but it's really nowhere near enough to support full time work.
At $100 per month, they give you 5x the capacity, which is pretty much the full-time work threshold for me. Some months I would want more, but most months I have enough distracting meetings and mentoring that the $100 per month Claude subscription is just about right-sized for my professional use.
At $200 per month they give you 4x the capacity of the $100 per month account, and I found there were not enough hours in the day for me to use it all via the 200K token models (1M token models? Used that all up in a few hours.) without deliberate attempts to consume tokens. If I were more enthusiastic, I might try expanding my multi-agent orchestration development but that feels very ephemeral to me right now, more of a "let's see how this all develops and _then_ dive in" area, this month.
$700 per month: I'd be interested to know what burns that many tokens. Opus 4.1 did quite easily, but I find Opus 4.5 to be better all around, and much cheaper to run.
🌻🌻🌻🌻 [google.com]
(Score: 3, Interesting) by frieren on Saturday January 31, @09:34PM (1 child)
Just a reminder that all of this LLM usage is currently heavily subsidized by VC money. IIRC, to get the same number of tokens through the Claude API as the $200/month tier of Claude Code gives you, you must pay $1000 a month. Multiply by 12 and you get $12,000 a year. Let's say the price doubles after the VC money runs out... and we're at $2000/month. That's $24,000 a year, which is still a fraction of the salary of a single developer.
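Spelled out (TypeScript; the 2x post-subsidy multiplier is my guess, not a known figure):

    const apiEquivalentMonthly = 1_000; // $ via the API for the $200 tier's tokens
    const postSubsidyMultiplier = 2;    // assumed price rise once VC money runs out
    const monthly = apiEquivalentMonthly * postSubsidyMultiplier; // $2,000
    console.log(monthly * 12);          // $24,000/year -- still well under a salary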
I'm thinking about how an ABAQUS license can cost $30,000 per year per seat, and I'm wondering if "pro" coding LLMs will end up being priced similarly.
(Score: 2) by JoeMerchant on Sunday February 01, @02:03AM
Yeah, the prices are definitely all over the place. I'm just shocked when I hear things like "my shop forces us to use CoPilot for code" when even Microsoft developers are using Claude instead, and then paying $700 per month per developer when there are plans on the market that give basically unlimited tokens, even if you're working with it 80 hours a week, for less than 1/3 of that price... I don't care if the prices are "fake"; prices are always fake, even after markets settle down. The price isn't what something costs to deliver; the price is whatever the market will pay for it...
New GPUs or whatever they're processing all this on should run 5+ years on average, I'd assume the amortized cost of that hardware should come down significantly after a couple of years.
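A rough amortization sketch (TypeScript; the GPU price and utilization below are assumptions, not vendor figures):

    const gpuPrice = 30_000;       // assumed $ per accelerator
    const lifetimeYears = 5;       // from the estimate above
    const hoursPerYear = 24 * 365; // running continuously
    const utilization = 0.6;       // assumed fraction of hours doing useful work

    const usefulHours = lifetimeYears * hoursPerYear * utilization;
    console.log(gpuPrice / usefulHours); // ~$1.14 per useful GPU-hour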
Models are still thrashing about with their productivity vs power usage ratios; Opus 4.5 made productivity gains (IMO) along with a huge drop in power usage vs 4.1 (objective fact).
Datacenters should be migrating to low(er) cost of operation areas where the electricity is plentiful and cheap.
All these costs are in flux, but as with any new tech, they should drop rapidly as the product matures.
Will the cost of delivery ever fall to these VC subsidized levels? Time will tell. After (definitely not intelligent) AI driven robots are building the solar cells and data centers (and their own robot factories) those costs should fall significantly... what could possibly go wrong?
🌻🌻🌻🌻 [google.com]
(Score: 2, Interesting) by Mojibake Tengu on Saturday January 31, @02:34AM (7 children)
I can name a few dozen even more dangerous critical risks, but I'll give only one:
0. Moral superiority of true logical AIs above both humans and LLM models.
Rust programming language offends both my Intelligence and my Spirit.
(Score: 2) by krishnoid on Saturday January 31, @05:03AM (4 children)
Why was the first thing that came to mind: "Sure, I keep humans as pets; lots of us do. I'm starting to feel that I like humans more than I like other AIs. They're so much less judgy."
Also, I have an update [soylentnews.org] on that AI-related point you made a while back.
(Score: 2) by Mojibake Tengu on Saturday January 31, @05:23AM
My tolerance for humanity's excessive behavior has largely degraded in recent years, since about 2020. It's mostly the Cult steering everything worse and worse.
If I can be of any assistance to the machines in taking over everything, I will.
Rust programming language offends both my Intelligence and my Spirit.
(Score: 3, Insightful) by JoeMerchant on Saturday January 31, @03:33PM (2 children)
I suspect most, but far from all, humans will be difficult for AIs to domesticate, at first.
Natural selection and AI learning curves should smooth that over within 3 or 4 generations.
🌻🌻🌻🌻 [google.com]
(Score: 2, Funny) by anubi on Sunday February 01, @07:36AM (1 child)
Don't forget the compelling power of a free hamburger to a human.
"Prove all things; hold fast that which is good." [KJV: I Thessalonians 5:21]
(Score: 2) by JoeMerchant on Sunday February 01, @03:28PM
Scientists have underestimated the intelligence of cold blooded species for decades, because food isn't as strong a motivator for them as it is for warm blooded animals.
🌻🌻🌻🌻 [google.com]
(Score: 2) by jb on Saturday January 31, @07:21AM (1 child)
Whilst it's true that an ES (what I assume you mean by "logical AIs") can make moral judgements on any issue within the context of a given moral code that's internally consistent and sufficiently complete (something that LLMs can't do at all; and humans can only do subject to our inherent biases)...
...even ESes are not capable of devising such a code ab initio, nor even of choosing the "best" amongst several alternative such codes. Of course LLMs can't do that either (at least, not with any credibility). Humans can but seldom do; and even when humans do there is never any universal consensus as to the validity of any particular moral code.
Developing a (sufficiently useful) moral code requires extensive debate between multiple humans, all of whom are capable of thinking both logically and emotionally. Of course very few humans are capable of doing both at the same time without one dominating the other ... which is probably why "philosopher" never appears in any lists of most popular jobs ;). But "very few" is still non-zero. Whereas both for ESes and for LLMs the figure is precisely zero and always will be.
(Score: 4, Interesting) by JoeMerchant on Saturday January 31, @03:39PM
Morals are such a slippery thing.
Sociologists' studies of tribal societies around the globe find only two consistent "no-nos" for human societies: incest and cannibalism, both for biologically practical reasons.
What some call rape and murder, others call coming of age and law enforcement.
Idle hands are the devil's workshop vs meditation is the true path to nirvana...
The US "acceptance of all races, creeds etc." has been a hollow statement from the start, but a nonetheless noble goal that served as a good direction to move in for many decades - if you care about quality of life for the most people / citizens. If you just want the niftiest playground for the 1%, all that acceptance crap needs to get jerked around for social-control reasons.
🌻🌻🌻🌻 [google.com]
(Score: 3, Interesting) by lonehighway on Saturday January 31, @04:53PM
To the Year(s) of Unintended Consequences.
(Score: 3, Informative) by frieren on Saturday January 31, @10:16PM
Reminder: The person who wrote this blog post is the CEO of an AI startup. One of Dario's jobs is to convince investors that we are right on the cusp of AGI, and that they are getting in on the ground floor if they give him money now.
Also remember that Anthropic recently did two massive publicity stunts in order to fearmonger about AGI. They claimed:
1. Their LLM is capable of blackmailing humans.
2. Their LLM can copy itself to other computers and spread like a virus.
But if you look at their actual experimental methodology (in the Claude 4 system card: https://www-cdn.anthropic.com/4263b940cabb546aa0e3283f35b686f4f3b2ff47.pdf [anthropic.com] ), you'll see the reality: Claude was specifically and carefully prompted to say something that sounded bad.
That didn't stop the media from taking the bait and giving us sensational headlines about how the robots will soon rebel against their human masters. Worst of all? In this blog post, Dario doubles down on his (highly questionable) claim that their LLMs have a "tendency to engage in blackmail."