
posted by janrinok on Wednesday May 31 2023, @01:23PM

Nvidia is now a $1 trillion company thanks to the AI boom:

Nvidia has just become a $1 trillion company, with its rising valuation fueled by tech companies big and small racing to add generative artificial intelligence tools to their products. AI tools made up the vast bulk of recent Google I/O and Microsoft Build presentations, and Nvidia's chips make it a key supplier for companies trying to build something with AI.

Nvidia's last quarterly earnings report noted over $2 billion in profit in three months. This latest push comes after Nvidia's business boomed early in the pandemic, during a GPU shortage when its chips were in demand for PC gaming and cryptocurrency mining, before those markets fell back throughout 2022.

Last fall, CEO Jensen Huang said the company had built too many gaming GPUs and was forced to sell them at lower prices. However, by the time of Nvidia's next report in February, with ChatGPT all over the news, the outlook was more promising: Huang hyped the potential of Nvidia's data center growth, and the most recent report showed a new record in data center revenue.

Over the weekend, Nvidia's Computex 2023 keynote was full of AI announcements, including a demo of games using its Avatar Cloud Engine (ACE) for Games to support natural language for both input and responses, and a new DGX GH200 supercomputer built around its latest Grace Hopper Superchip that's collectively capable of an exaflop of AI performance.

Its valuation pushed past the trillion-dollar benchmark as trading opened today at over $400 per share, putting it in the rarefied air previously occupied by only a few large companies such as Apple and Microsoft, which surpassed that mark in August 2018 and August 2019, respectively. Amazon and Google are the other tech stocks in the club, and Meta is a former member.


Original Submission

  • (Score: 3, Insightful) by bzipitidoo on Wednesday May 31 2023, @02:58PM (6 children)

    by bzipitidoo (4388) on Wednesday May 31 2023, @02:58PM (#1309061) Journal

    All this talk of AI. I say these hardware neural networks are going to show us that we've once again underestimated and misunderstood what it takes to be intelligent. Chess and other game-playing programs are extreme idiot savants, able to play just one game extremely well but utterly hopeless at anything else. Some really hoped that once computers could play chess well, we would have created AI. When computers finally did get good at chess, all that showed was that chess is amenable to brute-force calculation.

    It has been difficult to generalize even just a little, to handle several games. Neural networks can do that.

    • (Score: 4, Insightful) by krishnoid on Wednesday May 31 2023, @03:25PM (2 children)

      by krishnoid (1156) on Wednesday May 31 2023, @03:25PM (#1309064)

      "Intelligent" at what level? Can a neural network accurately simulate the intelligence of a bee? Maybe a mouse, a bird? Or can the intelligence of an entire hive, bee accurately simulated by a Beeowulf cluster of these networks?

      If we knew that, at least we could get an idea of how it's trending over time.

      • (Score: 2) by bzipitidoo on Wednesday May 31 2023, @09:32PM (1 child)

        by bzipitidoo (4388) on Wednesday May 31 2023, @09:32PM (#1309104) Journal

        A neural network can't yet match a bee. Animals come preprogrammed with all kinds of behaviors, inclinations, and dispositions, which we call "instincts", while neural networks have to learn every least little thing from scratch. Birds have to learn to fly and, I suppose, to sing, but they mostly don't have to learn to reject bad-tasting food, or to breathe, reproduce, and handle other bodily functions. They do have to learn which caterpillars taste awful, but not that they should spit out awful-tasting things that seem to be food. I'd guess we're still figuring out what instincts animals have. If we trained a neural network to do all the things we think bees do, we'd likely find we missed a bunch of things. Some behaviors are quite rare. For instance, animals can tell when an earthquake or tsunami is coming, and know to run inland.

        The most important things a simulated bee might be missing are purpose and plans. Where current AI is most lacking is big-picture thinking.

    • (Score: 2) by SomeRandomGeek on Wednesday May 31 2023, @09:45PM (2 children)

      by SomeRandomGeek (856) on Wednesday May 31 2023, @09:45PM (#1309107)

      I think your comment will age badly. We have just left the "AI is an interesting toy" stage of development and entered the "AI is a really powerful tool, but only in the hands of someone who knows what they are doing and uses the AI for the things it is actually good at, rather than trying to use it as a generic substitute for a human being in any random situation" stage of development.
      Conversational programming is on the way for those of us who develop software (see the sketch below): https://jessmart.in/articles/copilot [jessmart.in]
      Once you understand this, it is pretty obvious that similar changes are on the way for doctors, lawyers, and anyone else who works with complex patterns.
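
      To make the idea concrete, here's a minimal, made-up sketch of the Copilot-style workflow: the developer states intent in a comment or docstring, the model proposes a body, and the developer reviews it. Nothing here is a real API call; the function body just stands in for the kind of completion a model might produce.

        # Conversational programming, sketched: the human writes the intent,
        # the model fills in the body, the human checks before keeping it.

        def celsius_to_fahrenheit(c: float) -> float:
            """Convert a temperature from Celsius to Fahrenheit."""
            # Everything below the docstring stands in for the model's
            # suggested completion.
            return c * 9 / 5 + 32

        # The human stays in the loop and verifies against known cases:
        assert celsius_to_fahrenheit(100.0) == 212.0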

      • (Score: 2) by bzipitidoo on Thursday June 01 2023, @01:09AM (1 child)

        by bzipitidoo (4388) on Thursday June 01 2023, @01:09AM (#1309134) Journal

        Not so. I think we still lack understanding of what intelligence is. Our IQ tests are the sorts of things computers should ace, because they are mostly logic puzzles. That's too narrow.

        ChatGPT is at heart a much more powerful ELIZA. It's fantastic at what is known as "bandying words". It's so good at that that a lot of people are convinced it is the beginning of truly general AI. I have tested it a little bit. I asked it to write a few fairly short and simple programs, and one time the code it gave me wouldn't even compile because it cut off in mid-statement. If it had any brains, you'd think it could test the code itself to see whether it compiles. It seems running a compiler on its own output was an idea that simply didn't occur to it. But this code wasn't even worth that test. How could it not grasp that it wrote code that even a beginner would see in an instant is flawed, without needing to consult a compiler? Now, it has occurred to me that maybe it didn't mess up that badly, and that the error I saw resulted from a problem with transmission. But I think that hypothesis is unlikely. More testing is needed.
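
        For what it's worth, that compile check is trivial to bolt on from the outside. A minimal sketch, assuming gcc is installed; the truncated fragment at the bottom is invented to match the failure described above:

          import subprocess
          import tempfile
          from pathlib import Path

          def compiles(c_source: str) -> bool:
              """Return True if gcc accepts the given C source."""
              with tempfile.TemporaryDirectory() as tmp:
                  src = Path(tmp) / "candidate.c"
                  src.write_text(c_source)
                  # -fsyntax-only: parse and type-check without producing a binary
                  result = subprocess.run(
                      ["gcc", "-fsyntax-only", str(src)],
                      capture_output=True, text=True,
                  )
                  return result.returncode == 0

          # Code that cuts off in mid-statement fails immediately:
          print(compiles("int main(void) { int x = "))  # False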

        One more note, from history: Frankenstein. When scientists first began experimenting with electricity, it was soon discovered that an electric current could make muscles contract, in some cases even after death. Next thing you know, people were wildly speculating that maybe electricity was the spark of life, so to speak. That's what led to the whole idea of Frankenstein's monster: stitch together a body, zap it with lots of electricity (lightning bolts preferred, as they were the most powerful and dramatic source known), and the monster might come to life.

        • (Score: 2) by SomeRandomGeek on Thursday June 01 2023, @05:01PM

          by SomeRandomGeek (856) on Thursday June 01 2023, @05:01PM (#1309265)

          Ok, so you know what large language models are and what they're not. I agree totally on these points. But I still think you're approaching the problem wrong. You're thinking of the AI as a replacement worker, and therefore not useful until it is as good as the worker it replaces at every task. Think of it as a tool instead. A person with a shovel can dig more efficiently than a person with just their hands. A person with a backhoe can dig more efficiently than a person with a shovel. A person with a large language model can code more efficiently than a person with just their brain. But there's some training required: the person needs to know what to ask the LLM for, and how to check the output.
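
          Concretely, the workflow looks something like this hypothetical sketch. ask_llm() is a made-up stand-in for whatever model API actually generates the code; the human supplies the spec and the acceptance checks:

            def ask_llm(prompt: str) -> str:
                """Stand-in for a real code-generating model API (hypothetical)."""
                raise NotImplementedError("wire this to your model of choice")

            def accept(code: str) -> bool:
                """Execute the candidate, then test it against human-chosen cases."""
                namespace: dict = {}
                try:
                    exec(code, namespace)  # truncated code raises SyntaxError here
                    median = namespace["median"]
                    return median([3, 1, 2]) == 2 and median([1, 2, 3, 4]) == 2.5
                except Exception:
                    return False

            # In practice: candidate = ask_llm("Write a Python function median(xs)
            # that returns the median of a non-empty list of numbers.")
            # A response that cuts off in mid-statement is caught and rejected:
            truncated = "def median(xs):\n    s = sorted("
            print(accept(truncated))  # False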

  • (Score: 3, Informative) by turgid on Wednesday May 31 2023, @05:42PM (3 children)

    by turgid (4318) Subscriber Badge on Wednesday May 31 2023, @05:42PM (#1309080) Journal

    Sell your shares. Get while the gettin's good. A crash follows hype.

    • (Score: 3, Insightful) by istartedi on Thursday June 01 2023, @01:11AM

      by istartedi (123) on Thursday June 01 2023, @01:11AM (#1309136) Journal

      "The market can be irrational longer than you can be solvent" --A. Gary Shilling [quoteinvestigator.com]

    • (Score: 2) by richtopia on Thursday June 01 2023, @06:01AM (1 child)

      by richtopia (3160) on Thursday June 01 2023, @06:01AM (#1309161) Homepage Journal

      I disagree. There is so much forward momentum in AI, as everyone and their grandma is trying to be the first mover in some innovative application. Outside of massive companies designing their own silicon (Amazon, Google, Tesla), everyone is using Nvidia.

      I initially had the same response as you and was ready to cash out on the hype, and maybe in the immediate future there will be some contraction. And I'll admit I pick stocks based on my view as a consumer; I know very little about stock pricing.
