posted by hubie on Tuesday October 21, @09:15AM

An interesting article on the economics of AI Chips by Mihir Kshirsagar

This week, OpenAI announced a multibillion-dollar deal with Broadcom to develop custom AI chips for data centers projected to consume 10 gigawatts of power. This investment is separate from another multibillion-dollar deal OpenAI struck with AMD last week. There is no question that we are in the midst of making one of the largest industrial infrastructure bets in United States history. Eight major companies, including Microsoft, Amazon, Google, Meta, Oracle, and OpenAI, are expected to invest over $300 billion in AI infrastructure in 2025 alone. Spurred by news about the vendor-financed structure of the AMD investment and a conversation with my colleague Arvind Narayanan, I started to investigate the unit economics of the industry from a competition perspective.

What I have found so far is surprising. It appears that we're making important decisions about who gets to compete in AI based on financial assumptions that may be systematically overstating the long-run sustainability of the industry by a factor of two. That said, I am open to being wrong in my analysis and welcome corrections as I write these thoughts up in an academic article with my colleague Felix Chen.

Here is the puzzle: the chips at the heart of the infrastructure buildout have a useful lifespan of one to three years due to rapid technological obsolescence and physical wear, but companies depreciate them over five to six years. In other words, they spread out the cost of their massive capital investments over a longer period than the facts warrant—what The Economist has referred to as the "$4trn accounting puzzle at the heart of the AI cloud."
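The arithmetic behind the puzzle can be sketched with straight-line depreciation. The $10,000 unit cost below is a hypothetical figure chosen for illustration (not from the article); the two lifespans are drawn from the 1-3 year and 5-6 year ranges it cites:

```python
# Illustrative straight-line depreciation of an AI accelerator under two
# lifespan assumptions. The chip cost is hypothetical; the lifespans are
# taken from the ranges quoted in the article.
chip_cost = 10_000       # hypothetical cost per accelerator, USD

realistic_life = 2       # years, within the 1-3 year range cited
reported_life = 5        # years, within the 5-6 year range companies use

realistic_annual = chip_cost / realistic_life  # expense if chips wear out fast
reported_annual = chip_cost / reported_life    # expense companies book

factor = realistic_annual / reported_annual
print(f"Annual expense (realistic): ${realistic_annual:,.0f}")
print(f"Annual expense (reported):  ${reported_annual:,.0f}")
print(f"Annual costs understated by a factor of {factor:.1f}")
```

With these assumed inputs the reported annual expense is understated by a factor of 2.5; picking other points in the cited ranges gives similar multiples, which is the "factor of two" overstatement of sustainability the article describes.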

Center for Information Technology Policy (Princeton University)


Original Submission

 
This discussion was created by hubie (1068) for logged-in users only, but now has been archived. No new comments can be posted.
  • (Score: 0) by Anonymous Coward on Wednesday October 22, @01:03AM (#1421693)

    It's all relatively cheap as long as your military sticks to using that AI responsibly:
    https://nypost.com/2025/10/16/business/us-army-general-william-hank-taylor-uses-chatgpt-to-help-make-command-decisions/

    He added that he’s exploring how AI could support his decision-making processes — not in combat situations, but in managing day-to-day leadership tasks.

    Otherwise it might cost more:

    The US military has been pushing to integrate artificial intelligence into its operations at every level — from logistics and surveillance to battlefield tactics — as rival nations like China and Russia race to do the same.

    Officials say AI-driven systems could allow faster data processing and more precise targeting, though they have also raised concerns about reliability and accountability when software takes on roles traditionally reserved for human judgment.

    The Pentagon has said future conflicts could unfold at “machine speed,” requiring split-second decisions that exceed human capability.

    Former Air Force Secretary Frank Kendall warned last year that rapid advances in autonomous weapons mean “response times to bring effects to bear are very short,” and that commanders who fail to adapt “won’t survive the next battlefield.”