An interesting article on the economics of AI chips, by Mihir Kshirsagar
This week, OpenAI announced a multibillion-dollar deal with Broadcom to develop custom AI chips for data centers projected to consume 10 gigawatts of power. This investment is separate from another multibillion-dollar deal OpenAI struck with AMD last week. There is no question that we are in the midst of making one of the largest industrial infrastructure bets in United States history. Eight major companies, including Microsoft, Amazon, Google, Meta, Oracle, and OpenAI, are expected to invest over $300 billion in AI infrastructure in 2025 alone. Spurred by news about the vendor-financed structure of the AMD investment and a conversation with my colleague Arvind Narayanan, I started to investigate the unit economics of the industry from a competition perspective.
What I have found so far is surprising. It appears that we're making important decisions about who gets to compete in AI based on financial assumptions that may be systematically overstating the long-run sustainability of the industry by a factor of two. That said, I am open to being wrong in my analysis and welcome corrections as I write these thoughts up in an academic article with my colleague Felix Chen.
Here is the puzzle: the chips at the heart of the infrastructure buildout have a useful lifespan of one to three years due to rapid technological obsolescence and physical wear, but companies depreciate them over five to six years. In other words, they spread out the cost of their massive capital investments over a longer period than the facts warrant—what The Economist has referred to as the "$4trn accounting puzzle at the heart of the AI cloud."
Center for Information Technology Policy (Princeton University)
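The "factor of two" above is plain straight-line depreciation arithmetic. A minimal sketch in Python, with an assumed capex figure and the lifespans quoted in the summary:

```python
# Toy straight-line depreciation; the $10B capex figure is assumed.
capex = 10e9                   # hypothetical spend on accelerators, $
useful_life = 3                # years, upper end of the 1-3 year lifespan
book_life = 6                  # years, upper end of the 5-6 year schedule

true_annual_cost = capex / useful_life     # ~$3.33B/yr actually consumed
reported_annual_cost = capex / book_life   # ~$1.67B/yr hitting the books
print(true_annual_cost / reported_annual_cost)   # 2.0 -- the factor of two
```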
(Score: 5, Insightful) by canopic jug on Tuesday October 21, @10:06AM (38 children)
In computing, results are measured in FLoating-point Operations Per Second (FLOPS). Electricity is a mostly unrelated resource. So by assessing these data centers in terms of watts, these scammers are tipping their hand and showing that even the idea of producing any useful results is off the table (see the back-of-envelope arithmetic below).
It's all about financial speculation on the destruction of wealth rather than financial speculation on anything being produced.
Money is not free speech. Elections should not be auctions.
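For scale, here is a back-of-envelope conversion of the 10 GW figure into nominal compute. The per-chip power and throughput below are assumptions, not specs - which is arguably the point: watts alone tell you nothing about useful results.

```python
# Back-of-envelope: what a 10 GW buildout could mean in raw compute.
# Per-accelerator power and throughput are rough assumptions, not specs.
site_power_w = 10e9         # 10 GW, from the summary
w_per_chip = 1_000          # assumed: ~1 kW per accelerator incl. overhead
flops_per_chip = 1e15       # assumed: ~1 PFLOPS of low-precision throughput

chips = site_power_w / w_per_chip      # ~10 million accelerators
total_flops = chips * flops_per_chip   # ~1e22 nominal FLOPS at full load
print(f"{chips:.0e} chips, {total_flops:.0e} FLOPS")
```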
(Score: 0) by Anonymous Coward on Tuesday October 21, @12:46PM (36 children)
I think it's fair to say that in this case "AI" is not science (literally) -- science uses double-precision floats (64 bits, or even more); I believe that's the standard for FLOPS benchmarks?
GPUs and "AI" calculations don't require that precision, a quick bit of websearching suggests that much of their arithmetic is 16 bit (half precision) or 32 bit (single precision).
If "AI" can get along with 16 bit precision, maybe they should go back to Perceptron days and use analog computing--much, much faster, if you can accept a somewhat higher error rate.
(Score: 4, Interesting) by HiThere on Tuesday October 21, @01:37PM (2 children)
Since a lot of the stuff can use even lower precision, I think you may have a point. But digital chips are what we've got lots of experience in designing and selling.
Actually, I think current chips could do just fine with a lookup table to convert bytes to and from floats, using bytes for the weights (sketched below).
Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
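A minimal sketch of the byte-weights idea above, assuming a simple 256-entry codebook; real quantization schemes (scales, zero points, per-channel tables) are more elaborate.

```python
import numpy as np

# Store weights as uint8 indices into a 256-entry float lookup table.
rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4)).astype(np.float32)    # original float weights

lut = np.linspace(w.min(), w.max(), 256, dtype=np.float32)   # 256 levels
idx = np.abs(w[..., None] - lut).argmin(axis=-1).astype(np.uint8)  # bytes

w_hat = lut[idx]                      # dequantize: bytes -> floats
print(np.abs(w - w_hat).max())        # error <= half the level spacing
```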
(Score: 4, Interesting) by JoeMerchant on Tuesday October 21, @03:08PM (1 child)
>convert bytes to and from floats, and then using bytes for the weights
That will work for a lot of use cases, but not all, and "researchers" really can't predict which use cases are going to work in any system, so... seeking the best chance of success, they usually over-spec the initial trials.
Then, having demonstrated success, business interests are averse to the risk of spending another development cycle to see whether the byte implementation is "just as good" and whether the cost savings would give them an advantage in the market. From a business perspective, the time delay of vetting the potentially more efficient solution is far more costly/risky than whatever fractional cents per transaction it might save - and trying it can also involve risky investment in expensive new hardware.
I'm currently debating software development with Claude. Having been trained on 30 years of internet posts by developers, Claude frequently presents me with the opinion "we could follow the specifications and architectural design, or we could 'save a lot of time and effort' by taking this shortcut..." and then provides estimates, in human development hours, of weeks to months saved by the shortcut. I tell Claude to follow the specs anyway, and usually within an hour or two we have the solution based on the specifications. That's about $2 worth of Anthropic subscription fees "wasted" on following the specifications, and usually many hours saved not discovering the bugs in the shortcut.

Also having been trained on developer writings, Claude frequently declares "TASK COMPLETE!!!" (MISSION ACCOMPLISHED!!!???) with dozens to hundreds of pages of "evidence" to "prove" that it's all done according to specification. I'm presently in the middle of a 3-hour rewrite session that got kicked off when I discovered that the previous implementation didn't follow the spec - not even close - even after providing a report swearing that it did.
Anyway, the point of all that Claude stuff is: electrons are cheap. I cost my company over 100x as much per month as a Claude Ultra Max subscription; hopefully my company understands that I'm better at following directions and giving them the products they are asking for than the cheaper, less reliable alternatives. Even though there are opportunities to make AI/ML execution significantly cheaper, the current costs of AI/ML are already trivial compared to the costs of everything you need to set up around it to make it into a profitable business.

Think of it like Starbucks selling $4 coffees. Sure, they might economize the cost of the coffee itself down from $0.50 per cup to $0.25 per cup, but that $0.25 per cup margin should not be what makes or breaks the business. Repeat customers going to other coffee shops after they start selling coffee made with skanky beans is a much bigger factor - though I believe Starbucks has been in the business long enough that they have indeed boiled their frogs (customers) into accepting skanky beans for that extra profit margin by now. In AI/ML land, I believe the gap is even bigger, with the hardware and electricity being far less than 10% of the income derived from delivery of the service (toy numbers below). As TFA states: "we're making important decisions about who gets to compete in AI based on financial assumptions that may be systematically overstating the long-run sustainability of the industry by a factor of two." I bet the actual uncertainty is even bigger than that.
🌻🌻🌻🌻 [google.com]
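Toy numbers for the cost-share claim above. Both figures are assumptions for illustration, not measured costs.

```python
# If hardware + electricity are ~10% of revenue, halving them moves
# margins far less than losing repeat customers would.
revenue_per_request = 0.04    # assumed: $ charged per served request
compute_per_request = 0.004   # assumed: $ of hardware + power per request

share = compute_per_request / revenue_per_request
print(f"compute is {share:.0%} of revenue")   # 10%
```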
(Score: 0) by Anonymous Coward on Wednesday October 22, @01:03AM
It's all relatively cheap as long as your military sticks to using that AI responsibly:
https://nypost.com/2025/10/16/business/us-army-general-william-hank-taylor-uses-chatgpt-to-help-make-command-decisions/ [nypost.com]
Otherwise it might cost more:
(Score: 2, Insightful) by khallow on Tuesday October 21, @01:37PM (32 children)
And a much, much slower rate of computation with higher energy consumption too. Analogue perceptrons haven't been competitive with digital competitors for 70 years. LLMs' ability to get by with reduced precision doesn't change this a bit.
(Score: 3, Funny) by Mojibake Tengu on Tuesday October 21, @01:57PM (27 children)
As much as I don't like you, you are correct on this one.
I presume those "250 documents poisoning a model" results happening now are a direct consequence of bf16 arithmetic (see the rounding sketch below). I consider that an algebraic attack.
Rust programming language offends both my Intelligence and my Spirit.
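Whether bf16 arithmetic actually enables the 250-document poisoning result is conjecture, but bf16's coarseness is easy to show. A sketch that truncates float32 to a bf16-like format (the real conversion rounds rather than truncates):

```python
import numpy as np

# bf16 keeps float32's exponent range but only 7 mantissa bits; zeroing
# the low 16 bits of a float32 approximates the conversion.
def to_bf16(x):
    bits = np.asarray(x, dtype=np.float32).view(np.uint32)
    return (bits & np.uint32(0xFFFF0000)).view(np.float32)

w = np.float32(1.0)
nudge = np.float32(1e-3)         # a small, gradient-sized update
print(to_bf16(w + nudge) - w)    # 0.0 -- the update vanishes in bf16
```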
(Score: 1) by khallow on Tuesday October 21, @02:46PM (26 children)
My take is that higher precision might actually result in the opportunity for even more subtle poisoning attacks. If your model relies on a knowledge base, then you are susceptible to poisoning attacks.
Frankly, we have poisoning attacks on human knowledge bases that have worked for millennia. For example, the elevation of the status and prestige of military force and its use over other human endeavors, or the similar elevation of human agriculture over other ways of feeding people. The former is pretty straightforward - might makes right. On the latter, there are a lot of people on SN who will note that a hunter/gatherer lifestyle, while it doesn't scale to the vast numbers of people we have now, isn't as terrible as it has been presented over the years. So how did such a viewpoint of the alleged lifestyle superiority of agriculture become established? My take on both is poisoning attacks in the distant past. Rulers in the distant past needed propaganda to enforce their rule, such as proclamations of military victories or of the benefits of being cogs in their machines, and that propaganda took on a life of its own, effectively poisoning the human knowledge base thereafter. This is more a buildup of natural toxins than an intentional effort at poisoning: nobody cared what people a thousand years hence would think (aside from the ruler's interest in legacy).
As to dealing with this, a few solutions come to mind. First, controlling what goes into the knowledge base ("curation") might filter out knowledge poisoning to a level where it's a lesser problem. Second, deliberately create an arms race between anti-poisoning techniques and data poisoning. Finally, develop AI techniques that aren't dependent on a knowledge base. The last will still have trouble with poisoning techniques, but at least the AI algorithm won't itself be poisoned.
(Score: 2) by JoeMerchant on Tuesday October 21, @03:15PM (15 children)
>My take is that higher precision might actually result in the opportunity for even more subtle poisoning attacks
>Analogue perceptrons haven't been competitive with digital competitors for 70 years.
Sure, for your next trick please compare the relative capabilities of flint and kindling vs flamethrowers and air-dropped napalm.
🌻🌻🌻🌻 [google.com]
(Score: 1) by khallow on Tuesday October 21, @10:32PM (14 children)
(Score: 3, Touché) by JoeMerchant on Tuesday October 21, @11:08PM (13 children)
The analog tech you refer to is akin to flint and kindling: it doesn't scale well, and it is very expensive to get a significant number of FLOPS out of it.

The digital tech you attempt to tout as superior to analog due to its "digitalness" is billions of times more capable of rendering double-precision float calculations per dollar than the analog tech. Cheap, like dropping stale gasoline from a cargo plane.

The point is that digital isn't superior because of its low-end tendency to view the world in binary; it's superior because we have developed grains of beach sand into highly organized micro-computing machines, capable of more dynamic range, more bandwidth, and a lower noise floor than the latest iteration of analogue systems.
🌻🌻🌻🌻 [google.com]
(Score: 2, Insightful) by khallow on Wednesday October 22, @02:11AM (12 children)
The use of "digital" here was as an accurate label to describe the technology (especially as compared to the label "analogue").
The binary nature of the operations is a big part of the reason it is cheap and fast, as well as of those other advantages.
(Score: 2) by JoeMerchant on Wednesday October 22, @02:37PM (11 children)
>Analogue perceptrons haven't been competitive with digital competitors for 70 years.
I took that, and your later statements, to be a bash of analogue methods as compared to digital, even binary based, ones.
It fits your persona, just not the real world I live in.
🌻🌻🌻🌻 [google.com]
(Score: 1) by khallow on Thursday October 23, @12:06PM (10 children)
Taking an accurate, truthful, and fully relevant statement as indicating a defective state of mind?
"It fits your persona." In other words, you just acknowledged that you weren't in your "real world I live in" at the time you "took that".
This goes back to your erroneous beliefs about perception: "Perception is all there is." [soylentnews.org] The problem here is that you could have perceived better before you even started your first post. But that would require you to think first rather than just run off of an erroneous non-perception-based mental model that you had going in your head at the time.
(Score: 2) by JoeMerchant on Thursday October 23, @12:48PM (9 children)
> indicating a defective state of mind?
No, the defective state of mind observation is more of a holistic thing, built over years.
> Perception is all there is.
That will never change. No matter how good a handle you think you have on the truth, all you know comes from your limited personal experience - be that post-birth programming of your neural network or pre-birth encoding in your DNA - and all of it was "learned" on this tiny little mud ball, an unfathomably small speck within an unremarkable and insignificant galaxy in a universe that we, as a species, only started to observe barely a century ago.

Being stuck in middle-of-nowhere North America is similarly limiting, even with Fox News to inform you about the outside world.
🌻🌻🌻🌻 [google.com]
(Score: 1) by khallow on Friday October 24, @04:48AM (8 children)
I don't use Fox News as a primary source. Back in 2001, I happened to watch Fox News in action on TV. They had red, white, and blue colors up. A very patriotic gimmick. I decided that silliness wasn't for me.
As you've noted, you live in rural Florida. What makes my isolation more significant than your equivalent isolation? Hypocrisy is a common shoal that ad hominems founder on.
But you could have figured that out from the sources I actually use in my many discussions on the internet. That you failed to is yet another example of your "holistic" failings over the years.
Just in this thread, we have quite the collection of fallacies: confirmation/observation bias, fallacy of the stolen concept, and straw man fallacies.
(Score: 2) by JoeMerchant on Friday October 24, @06:14PM (7 children)
At the risk of breaking the fourth wall, I hope it's obvious that I enjoy our little sparring sessions, and I don't think you drink (solely) from the Fox firehose - that's not an insult I want to throw your way. The point was, rather, that an awful lot of people in places less remote than MF, ID do have such a biased perspective. Want an example? Just look at how Brexit was pulled off, or the Spanish-American War, or any US presidential election in the past 20 years.
I have lived all over the Florida panhandle, mostly in metro areas, though we did have a plan to move to a rural area, and for nearly 20 years we owned (but only occasionally camped on) 20 rural acres on a river, back when we were deluded enough to think that building a home there for us and our children might be desirable. I spent 20 years in Miami, and have been in another major metro area for the last 12 now. The Florida town I grew up in wasn't a "major metro area" when I lived there, but it may as well be one now - the population doubled while I was in college, has more than tripled again since then, and it was far beyond "a small town" even when I was born there.
I'm not a significant world traveler, though I have been to various countries in Europe more times than I can immediately remember (fewer than a dozen, more than six), plus a few other scattered trips to different places. In my opinion, it's not the number of times you have cleared customs, but what you do while you are there, and how you do it, that really determines the value of the "experience." I certainly could learn more about the world firsthand, but I feel that I have done enough travel to diverse countries, with reasonably deep exposure to their cultures, to have a reasonable "feel" for information that comes to me through the internet, video documentaries, and other sources: what can be trusted (none of it 100%) and what's clearly promotional material by one or more sources (most of it). Even with that cynical view, there is a tremendous amount of credible information to glean from these modern sources - much more than word of mouth from fellow travelers and dusty books provided centuries ago.
--- Fourth wall replaced ---
>confirmation/observation bias, fallacy of the stolen concept, and straw man fallacies.
Take your little "what's your fallacy" website and stick it, you know where. Just like the clowns on the news, you're twice as guilty of every fallacy you are accusing others of.
🌻🌻🌻🌻 [google.com]
(Score: 2, Touché) by khallow on Sunday October 26, @03:09AM (6 children)
Sorry dude, but widespread use of fallacies is one of the strongest pieces of evidence that you aren't thinking. I already pointed to multiple instances just in this thread. Meanwhile, all you have is an empty assertion that I'm "twice as guilty" of it. So let's review:
Really, read up on these things so that you don't do them. The only place I need to stick these lists of fallacies is in your head. But I need your help for that.
(Score: 2) by JoeMerchant on Sunday October 26, @03:30PM (5 children)
Just because I give you a Touché mod doesn't mean I agree with you.
🌻🌻🌻🌻 [google.com]
(Score: 1) by khallow on Sunday October 26, @07:35PM (4 children)
Sure. Thanks for the mod just the same.
My point about the fallacies is that their use is evidence that either the author isn't thinking or they hope their audience isn't thinking! There is no use of fallacies that is both rational and in good faith. They hide truth.
(Score: 2) by JoeMerchant on Sunday October 26, @08:12PM (3 children)
My distaste for the "what's your fallacy" website of 10-15 years ago (IIRC) is that so many people inappropriately stretch the definitions to fit arguments they don't like - arguments presented in good faith with good backing and logic - but because there's a fallacy on the list that looks a little like it might apply, off we go: now we're arguing about whether the fallacy applies or not. You'll rarely get an argument back from me about one of those fallacies you claim applies to what I write.
Another thing: these aren't Hemingway novels I'm writing for you here, they're off the cuff - maybe lightly researched if I'm curious about the specifics of the topic for myself.
🌻🌻🌻🌻 [google.com]
(Score: 1) by khallow on Sunday October 26, @08:44PM (2 children)
There actually is argument by fallacy [wikipedia.org] out there. Typical defenses against the accusation are either to point out why the claimed fallacy doesn't qualify, or to acknowledge it and redo the argument without the fallacy. The point is that if you're doing it right, you can either directly rebut the accusation of fallacy and move on, or come up with a successful argument that doesn't have that fallacy, and then move on.
(Score: 2) by JoeMerchant on Sunday October 26, @09:23PM (1 child)
> directly rebut the accusation of fallacy and move on, or come up with a successful argument that doesn't have that fallacy, and then move on.
Not specifically you, but I have found that the majority of people who throw those fallacy barbs out there don't care whether you successfully prove them wrong or not; they'll just continue their position, often doubled down.
🌻🌻🌻🌻 [google.com]
(Score: 1) by khallow on Monday October 27, @11:05AM
Shrug. Welcome to conversation. I've been known to say something like "I've already explained why that's not a fallacy. How about we get back on topic?" and go from there.
(Score: 2) by JoeMerchant on Tuesday October 21, @09:39PM (9 children)
>how did such a viewpoint of the alleged lifestyle superiority of agriculture become established? My take on both is poisoning attacks in the distant past. Distant rulers of the past needed propaganda to enforce their rule, such as proclamation of military victories or the benefits of being cogs in their machines and that propaganda took on a life of its own
It's even simpler than that: "others" were branded heathen savages, ungodly, their names not worthy of recording in the family tree of a Christian Bible (this was going on in my family in the 1800s).
If they're not "one of us" they're just animals, to be feared and culled or exploited. How do you define "one of us" beyond the obvious racial card when it's available? Lifestyle: are they churchgoing? Do they toil in the same fields as hard as you do? Are they "honorable before God and the Law?" If not, they shall be outcast. So it is written (as read to the illiterate masses.)
🌻🌻🌻🌻 [google.com]
(Score: 1) by khallow on Tuesday October 21, @11:00PM (4 children)
Sounds pretty complicated for an "even simpler than that"? Especially since my scenario explains yours, but yours doesn't explain mine.
(Score: 3, Touché) by JoeMerchant on Tuesday October 21, @11:17PM (3 children)
Sorry, in case you missed it: The Church handled it.
🌻🌻🌻🌻 [google.com]
(Score: 2, Touché) by khallow on Wednesday October 22, @02:11AM (2 children)
Sorry, in case you missed it, rulers did it first, thousands of years earlier.
(Score: 2) by JoeMerchant on Wednesday October 22, @02:39PM (1 child)
>Sorry, in case you missed it, rulers did it first, thousands of years earlier.
I believe those rulers mostly claimed to be God, which has a critical flaw that gets exposed when the "God" gets killed. The Church is a little more devious in presenting itself as God's ordained proxy.
🌻🌻🌻🌻 [google.com]
(Score: 1) by khallow on Thursday October 23, @01:42AM
In other words, a great example of the point I made all along. Religion is yet another area where knowledge poisoning happened and it was started in the same era by the same people as the other two examples I gave.
(Score: 0) by Anonymous Coward on Wednesday October 22, @01:07AM (3 children)
Same for that human distance running thing. It's better for war, not really better for hunting for food. Most "primitive" human tribes use their brains to catch food and don't run for hours.
(Score: 1) by khallow on Wednesday October 22, @02:19AM (2 children)
(Score: 0) by Anonymous Coward on Wednesday October 22, @04:16PM (1 child)
The hunter/gatherer lifestyle is terrible. You need farming and agriculture so that one person can produce enough food to support way more than a few other people.

A hunter-gatherer society couldn't have sustained a lot of the nice stuff of the past (if living in towns and cities back then had been so terrible, and worse than being a hunter-gatherer, there wouldn't have been so many people living in towns and cities), nor of the modern day: the SN website, the Internet, vaccines, antibiotics, hospitals, solar panels, etc.

It also would be unlikely to lead toward humans developing space colonies and related tech, which would arguably be necessary to significantly delay the extinction of humans.

Trying to achieve modern tech capabilities with only food produced hunter-gatherer style would likely cause even greater devastation to flora and fauna. Imagine how many pigs and chickens you'd have to hunt in the forests, and how many berries and the like you'd have to forage and gather.
(Score: 1) by khallow on Thursday October 23, @01:40AM
Recall my thing about scale? If you don't need to support way more than a few people, or to support fancy infrastructure, then farming and agriculture just don't have that much of a draw. Similarly, a low-grade hunter/gatherer presence would have a relatively low environmental impact (unless you are a large, tasty animal).
Keep in mind that the propaganda greatly predates the space age. Ancient cities of Egypt, China, and Mesopotamia touted their advantages long before.
(Score: 2) by JoeMerchant on Tuesday October 21, @03:12PM (1 child)
You could just use the global political model and reduce everything to binary: black/white, true/false, yes/no, red/blue - make your choice: if you're not with us, you're against us.
It's working so well for national governance, shouldn't we drive simple binary decision making into every aspect of our lives? Please answer only yes, or no. /s
🌻🌻🌻🌻 [google.com]
(Score: 1, Disagree) by khallow on Tuesday October 21, @10:41PM
(Score: 0) by Anonymous Coward on Wednesday October 22, @01:14AM (1 child)
I think crow brains are superior to modern AIs in some ways. Their power/energy consumption is a lot lower, and I bet you'd need fewer samples to teach a crow the difference between a bus and a car than a "modern AI" needs.

An AI built more along those lines wouldn't run out of training data as quickly as inferior AIs that require millions or more training samples and iterations [1].
[1] https://soylentnews.org/article.pl?sid=25/10/18/000230 [soylentnews.org]
(Score: 0, Redundant) by khallow on Wednesday October 22, @02:40AM
(Score: 4, Interesting) by ChrisMaple on Wednesday October 22, @12:46AM
In computing, FLOPS is only one measure of computer performance, and I've seen no claim that floating-point performance is the most important criterion. The ratings I've seen for AI emphasize TOPS -- tera operations per second.
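The two headline figures aren't comparable at face value; which operation you count changes the number by an order of magnitude or more. An illustration with made-up but ballpark-plausible spec-sheet values:

```python
# INT8 "TOPS" vs FP64 FLOPS on one hypothetical AI chip (numbers assumed).
int8_ops_per_s = 2000e12    # assumed: 2000 TOPS of INT8
fp64_flops = 60e12          # assumed: 60 TFLOPS of FP64

print(int8_ops_per_s / fp64_flops)   # ~33x: the rating depends entirely
                                     # on which operation gets counted
```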