
posted by hubie on Tuesday December 24, @05:24PM

As a part of the Allen Lab's Political Economy of AI Essay Collection, David Gray Widder and Mar Hicks draw on the history of tech hype cycles to warn against the harmful effects of the current generative AI bubble.

Only a few short months ago, generative AI was sold to us as inevitable by AI company leaders, their partners, and venture capitalists. Certain media outlets promoted these claims, fueling online discourse about what each new beta release could accomplish with a few simple prompts. As AI became a viral sensation, every business tried to become an AI business. Some even added "AI" to their names to juice their stock prices, and companies that mentioned "AI" in their earnings calls saw similar increases.

Investors and consultants urged businesses not to get left behind. Morgan Stanley positioned AI as key to a $6 trillion opportunity. McKinsey hailed generative AI as "the next productivity frontier" and estimated gains of $2.6 trillion to $4.4 trillion, comparable to the annual GDP of the United Kingdom or the value of all the world's agricultural production. Conveniently, McKinsey also offers consulting services to help businesses "create unimagined opportunities in a constantly changing world." Readers of this piece can likely recall being exhorted by news media or their own industry leaders to "learn AI" while encountering targeted ads hawking AI "boot camps."

While some have long been wise to the hype, global financial institutions and venture capitalists are now beginning to ask if generative AI is overhyped. In this essay, we argue that even as the generative AI hype bubble slowly deflates, its harmful effects will last: carbon can't be put back in the ground, workers continue to face AI's disciplining pressures, and the poisonous effect on our information commons will be hard to undo.

An archival PDF of this essay can be found here.

[Source]: Harvard Kennedy School


Original Submission

This discussion was created by hubie (1068) for logged-in users only, but now has been archived. No new comments can be posted.
  • (Score: 5, Insightful) by FunkyLich on Tuesday December 24, @06:50PM (6 children)

    by FunkyLich (4689) on Tuesday December 24, @06:50PM (#1386351)

.. to be made when using LLMs, this deflating process will be a very long one.

People are too busy to see the big picture. Rent to pay, children's mouths to feed, and all the other things that translate into expenses. If they can ask an LLM to word an email for them (I have colleagues at work who do this regularly now), if they can ask it "how to add users in bulk in Active Directory" (again, colleagues at work do this regularly now), if they can ask ChatGPT about the endless small chores they face every day, none of which are deep, complex problems but mostly simple yet numerous ones... well, if they can do that, then damned will always be all those other problems they can't see, the ones just somewhere out there in the world.

  Carbon emissions? Billions wasted that could have been put to better use? Poisoning the information commons? The answer I get to all of these has literally been: "Yeah, right. This is not the time for ethical and moral awareness campaigns; ChatGPT gives results and cuts down the time I waste on small useless things. This is the future, and you are just delusional and weird." When words like these come from a colleague who is a programmer (I am a system administrator myself), I think it is a "genie is out of the bottle" situation for the broader, less tech-literate society at large.

I really have very little hope that anything will change for the better at this point. Greed will simply advance and take over, just like the ever-warming water in the pot with the frog in it.

    • (Score: 2, Touché) by Anonymous Coward on Tuesday December 24, @08:02PM (1 child)

      by Anonymous Coward on Tuesday December 24, @08:02PM (#1386362)

Interesting to hear that "AI" is actually being used in your company. Everyone I've discussed it with has been unhappy with the results, because the output is all about old news and the answers are poor.

* An investment advisor got only pat answers, nothing that could help him write newsletters on the current state of the markets.
* I tried summarizing a long bio page into a short blurb for a conference program.
* R&D engineers weren't inspired by the results; it gave them no new ways to look at their problems.
* Correspondents generally find that constructing a useful prompt takes more time than just writing the short email themselves.

Google search still seems to be the best search option for me, and I now click the "Web" button that eliminates the Gemini output (maybe I'm saving $0.01 of electricity by not querying that "AI"?).

      • (Score: 4, Insightful) by FunkyLich on Tuesday December 24, @08:23PM

        by FunkyLich (4689) on Tuesday December 24, @08:23PM (#1386364)

        Some do, some don't.

There are those who write the text of an email, originally 15-20 sentences, and feed it to the LLM with an instruction like "Make this email more official." They get back an email with a standard boring bureaucratic intro, a revised version of whatever the original text was, some closing lines that include "if you need further help please don't hesitate to contact me," and the "best regards, truly yours" ending... and so on.

As I said, most of them don't need to do research. They deal with simple, already-solved problems, for which they are happy to take something that already exists (the more popular, the better, because the probability of hallucination is lower) and then make small modifications to suit the particular problem. They use it to generate templates and skeletons for whatever they need, which they then fill in.

Very few people in this world get to do R&D. Very few get to be investment advisors. Not many are out there writing news for newsletters. For each of those people, there are probably 50-100 who do some boring, repetitive, uninteresting task that doesn't require anything new or any research, only small incremental modifications to what is already there.

Personally, I don't use it. I too prefer Google/DuckDuckGo for problem solving. But I have seen and worked with people who use it and enjoy not spending the time to search for and evaluate something, getting something almost ready instead. Literally with this reasoning behind it: "because I am not inventing anything new, the answers are most probably accurate."

    • (Score: 3, Interesting) by mhajicek on Tuesday December 24, @08:36PM (2 children)

      by mhajicek (51) on Tuesday December 24, @08:36PM (#1386365)

      My understanding is that these models tend to be very energy expensive to train, but relatively cheap to use.

      --
      The spacelike surfaces of time foliations can have a cusp at the surface of discontinuity. - P. Hajicek
      • (Score: 3, Informative) by Anonymous Coward on Tuesday December 24, @09:00PM

        by Anonymous Coward on Tuesday December 24, @09:00PM (#1386367)

        > ... these models tend to be very energy expensive to train, but relatively cheap to use.

Good question; I've seen a variety of answers. Here's one article from June 2024:
https://www.scientificamerican.com/article/what-do-googles-ai-answers-cost-the-environment/ [scientificamerican.com]
The article has several examples, and my tl;dr is that the energy cost of an AI query is going to be a moving target for some time. Here are a few paragraphs from the article:

        When compared to traditional search engines, AI uses “orders of magnitude more energy,” says Sasha Luccioni of the AI research company Hugging Face, who studies how these technologies impact the environment. “It just makes sense, right?” While a mundane search query finds existing data from the Internet, she says, applications like AI Overviews must create entirely new information; Luccioni’s team has estimated it costs about 30 times as much energy to generate text versus simply extracting it from a source.

        Big tech companies typically do not disclose the resources required to run their models, but outside researchers such as Luccioni have come up with estimates (though these numbers are highly variable and depend on an AI’s size and its task). She and her colleagues calculated that the large language model BLOOM emitted greenhouse gases equivalent to 19 kilograms of CO2 per day of use, or the amount generated by driving 49 miles in an average gas-powered car. They also found that generating two images with AI could use as much energy as the average smartphone charge. Others have estimated in research posted on the preprint server arXiv.org that every 10 to 50 responses from ChatGPT running GPT-3 evaporate the equivalent of a bottle of water to cool the AI’s servers.

        Such demands translate to financial costs. John Hennessy, chair of Google’s parent company Alphabet, told Reuters last year that an exchange with a large language model could cost 10 times more than a traditional search, though he predicted that those costs would decrease as the models are fine-tuned. Analysts at Morgan Stanley estimated that if AI generated 50-word answers in response to 50 percent of queries, it could cost Google $6 billion dollars per year. (Alphabet reported $300 billion in revenue in 2023.)

      • (Score: 5, Informative) by FunkyLich on Wednesday December 25, @09:05AM

        by FunkyLich (4689) on Wednesday December 25, @09:05AM (#1386401)

I found this, https://pragmaticai.substack.com/p/the-energy-consumption-of-large-language [substack.com], which has some information about energy consumption.

        From that, some quick summary figures:

  * The average U.S. household consumes approximately 10,500 kilowatt-hours (kWh) of electricity per year. In contrast, the energy consumption of a large language model during its training phase can reach up to 10 gigawatt-hours (GWh).

          * Data centers currently account for 1-1.5% of global electricity use

  * By 2027, AI could consume 85-134 terawatt-hours (TWh) annually, similar to countries like Argentina, the Netherlands, or Sweden. This would be a 26-36% compound annual growth in AI's energy use.

  * A typical ChatGPT query consumes 2.9 Wh of electricity, roughly 10x more than a Google search (0.3 Wh); see the quick arithmetic check after this list.

  * ChatGPT queries can consume around 1 GWh per day in aggregate, equivalent to the daily energy use of 33,000 U.S. households.

          * Training GPT-3 (175B parameters) consumed 1,287 MWh, emitting 552 tons of CO2e, equal to 123 gas-powered cars driven for a year

          * 60% of AI energy goes to inference (generating outputs), 40% to training. As AI models grow and usage increases, inference will consume even more energy.

          * Most data center electricity still comes from fossil fuels.

          * Water used for cooling data centers stresses watersheds.

          * E-waste from AI hardware is a growing concern.
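As a quick sanity check, here is some back-of-the-envelope Python using only the figures quoted above (a sketch over claimed numbers, not a measurement of anything):

    # Sanity checks using only the figures quoted in this comment.
    chatgpt_wh = 2.9           # Wh per ChatGPT query (claimed above)
    google_wh = 0.3            # Wh per Google search (claimed above)
    household_kwh_yr = 10_500  # average U.S. household, kWh/year (claimed above)
    gpt3_training_mwh = 1_287  # GPT-3 training energy (claimed above)

    # ChatGPT query vs. Google search: ~9.7x, i.e. roughly 10x.
    print(f"query ratio: {chatgpt_wh / google_wh:.1f}x")

    # 1 GWh/day expressed in average U.S. households: ~34,800,
    # close to the 33,000 quoted above.
    household_wh_day = household_kwh_yr * 1_000 / 365
    print(f"households per GWh/day: {1e9 / household_wh_day:,.0f}")

    # How many 2.9 Wh queries equal one GPT-3 training run? ~444 million,
    # which is why inference dominates once usage is high enough.
    print(f"break-even queries: {gpt3_training_mwh * 1e6 / chatgpt_wh:,.0f}")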

Elsewhere I found a PDF paper, https://arxiv.org/pdf/2211.02001 [arxiv.org], with a study of the BLOOM LLM. Among other things, it has a small table comparing the energy consumption and carbon footprint of different LLMs during the "Model Training" stage; a quick per-parameter comparison follows the table.

Name     Params   Training Energy
---------------------------------
GPT-3     175B       1,287 MWh
Gopher    280B       1,066 MWh
OPT       175B         324 MWh
BLOOM     176B         433 MWh
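Dividing the table through (just arithmetic on the numbers above, nothing more) gives a rough training-energy-per-parameter comparison:

    # Training energy per billion parameters, from the table above.
    models = {
        "GPT-3":  (175, 1287),  # (parameters in billions, training energy in MWh)
        "Gopher": (280, 1066),
        "OPT":    (175, 324),
        "BLOOM":  (176, 433),
    }
    for name, (params_b, mwh) in models.items():
        print(f"{name:7s} {mwh / params_b:4.1f} MWh per billion parameters")
    # GPT-3 ~7.4, Gopher ~3.8, OPT ~1.9, BLOOM ~2.5 MWh per billion parameters:
    # the later models were trained far more efficiently per parameter.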

    • (Score: 2) by corey on Saturday December 28, @11:27PM

      by corey (2202) on Saturday December 28, @11:27PM (#1386716)

      Agree.

The more people use these AI tools, the more quickly they will be replaced by AI systems. If people are worried about this tech taking their jobs, they should stop using it and push back.

  • (Score: 2) by darkfeline on Wednesday December 25, @12:25AM (2 children)

    by darkfeline (1030) on Wednesday December 25, @12:25AM (#1386373) Homepage

    Talk is cheap, listen to the money.

If GenAI is useless, why are so many actors and creatives afraid of it? Just let it train on your work; it will never get good enough to replace you, right?

If Harris were going to win, why did all of the betting markets favor Trump 2:1?

When someone says one thing but puts their money on another, well, you know what they say about fools and money.

    --
    Join the SDF Public Access UNIX System today!
    • (Score: 5, Informative) by driverless on Wednesday December 25, @03:18AM (1 child)

      by driverless (4770) on Wednesday December 25, @03:18AM (#1386386)

> If GenAI is useless, why are so many actors and creatives afraid of it?

      They're afraid of the hype, not the reality.

Also, don't forget that the entertainment industry has done everything in its power to fight every new technology that comes along, because it can't see anything beyond the short-term profit margins of its existing business models.

      • (Score: 2) by sjames on Monday December 30, @07:21PM

        by sjames (2882) on Monday December 30, @07:21PM (#1386911) Journal

Being afraid of the hype isn't necessarily wrong. Even if they don't believe the hype themselves, if the people who decide whether they remain employed do, they're gone. The fool who fired them in favor of AI might crash and burn as a result, but that puts no food on the table.

  • (Score: 2) by mcgrew on Wednesday December 25, @01:02PM (2 children)

    by mcgrew (701) <publish@mcgrewbooks.com> on Wednesday December 25, @01:02PM (#1386413) Homepage Journal

Nonsense. Where does the author think coal and oil came from? Plants are made from carbon in the air; ancient plants, built from that airborne carbon, are modern coal and oil.

The problem is that it took millions of years to turn those forests into coal, but we've been releasing millions of years' worth of carbon sequestration in a few short centuries.

    But that's YOUR problem; I'm too old to see the worst of it.

    --
    A Russian operative has infiltrated the highest level of our government. Where's Joe McCarthy when we need him?
    • (Score: 4, Touché) by Ox0000 on Wednesday December 25, @09:24PM (1 child)

      by Ox0000 (5111) on Wednesday December 25, @09:24PM (#1386442)

Après nous, le déluge [wikipedia.org] ("After us, the flood"), non?

      • (Score: 2) by mcgrew on Thursday December 26, @01:40AM

        by mcgrew (701) <publish@mcgrewbooks.com> on Thursday December 26, @01:40AM (#1386455) Homepage Journal

Excellent post. I salute you, sir! Also, beware of Officer Poe [wikipedia.org]; his law is often harsh.

        --
        A Russian operative has infiltrated the highest level of our government. Where's Joe McCarthy when we need him?
  • (Score: 2) by NotSanguine on Thursday December 26, @01:58AM

    From Angela Collier [youtube.com], my new favorite Physics (plus) vlogger....

Because AI does not exist but it will ruin everything anyway [youtube.com], in part because of the malicious optimism of AI-first companies [youtube.com].

    I recommend both videos above, as well as pretty much everything else Ms. Collier has done.

    --
    No, no, you're not thinking; you're just being logical. --Niels Bohr