posted by hubie on Thursday July 03, @08:16AM   Printer-friendly
from the stronger-Arm dept.

Arthur T Knackerbracket has processed the following story:

Arm-based servers are rapidly gaining traction in the market, with shipments tipped to jump 70 percent in 2025. However, this remains well short of the chip designer's ambition to make up half of datacenter CPU sales worldwide by the end of the year.

Market watcher IDC says Arm servers are attracting mass interest thanks mainly to the launch of large rack-scale configurations, referring to systems such as Nvidia's DGX GB200 NVL72, designed for AI processing.

In its latest Worldwide Quarterly Server Tracker, IDC estimates that servers based on the Arm architecture will account for 21.1 percent of total global shipments this year - not the 50 percent touted by Arm infrastructure chief Mohamed Awad in April.

Servers with at least one GPU fitted, sometimes styled as AI-capable, are projected to grow 46.7 percent, representing almost half of the total market value for this year. The fast pace of adoption by hyperscale customers and cloud service providers is fueling the server market, which IDC says is set to triple in size over just three years.

[...] IDC's regional market projections anticipate the US having the highest expansion with a 59.7 percent jump over 2024, which would see it account for almost 62 percent of the total server revenue by the end of 2025.

China is the other region heating up in the sales stakes, with IDC forecasting growth of 39.5 percent to make up more than 21 percent of the quarterly revenue worldwide. EMEA and Latin America are in single-digit growth territory at 7 and 0.7 percent, respectively, while Canada is expected to decline 9.6 percent this year due to an unspecified "very large deal" that happened in 2024.


Original Submission

 
This discussion was created by hubie (1068) for logged-in users only.
  • (Score: 4, Interesting) by bzipitidoo on Thursday July 03, @02:03PM (6 children)

    by bzipitidoo (4388) on Thursday July 03, @02:03PM (#1409224) Journal

    The impression I have of the ARM architecture is one of minimalism. Seems these new servers are designed to have the GPU do the heavy lifting, while the CPU is relegated to housekeeping stuff. It's quite a change, but maybe it is a good direction to take computing. Been going that way for a while now, what with all this movement to employ GPUs for crypto and AI.

    The summary doesn't say, and I suppose doesn't have to say, that the loser in this is the x86 architecture. All the multicore and multithreading that current x86 CPUs offer looks kind of weak next to a GPU. 12 cores doesn't sound impressive when matched against a GPU with a hundred or more cores.

    I wonder how this affects RISC-V. Suppose the 'C' in CPU changes so that it is no longer central but instead a management subsystem, and the chip running it should be renamed from "CPU" to, say, "OSPU", much like storage and audio are handled by dedicated subsystems. If that change is lasting, then RISC-V needs a rethink too. One of the biggest weaknesses in the RISC-V ecosystem is the lack of graphics.

    • (Score: 3, Insightful) by hendrikboom on Thursday July 03, @03:04PM

      by hendrikboom (1125) on Thursday July 03, @03:04PM (#1409229) Homepage Journal

      I suspect, with little actual knowledge, that it would be possible to get a large number of RISC-V processors on a chip, and achieve graphics processing that way. I suppose the OS would have to be tuned appropriately for this.

      Having CPUs do graphics directly would eliminate the costs of having to transfer bulk data between CPU and GPU, provided that the CPUs were up to scratch in sufficient quantities.

      I seem to remember that this was the original plan for the libre-SOC [libre-soc.org] project before it switched CPU targets because of nondisclosure constraints.

      Their plan had been to have a highly parallel out-of-order execution engine fed by multiple instruction streams.

      The project looked promising until politics and personalities torpedoed its funding.

    • (Score: 3, Funny) by DannyB on Thursday July 03, @03:41PM (1 child)

      by DannyB (5839) Subscriber Badge on Thursday July 03, @03:41PM (#1409236) Journal

      the 'C' in CPU is changed so that it is no longer central

      Just like how EVs have a brake and a gas pedal.

      --
      The server will be down for replacement of vacuum tubes, belts, worn parts and lubrication of gears and bearings.
      • (Score: 2) by theluggage on Thursday July 03, @04:19PM

        by theluggage (1797) on Thursday July 03, @04:19PM (#1409247)

        Just like how EVs have a brake and a gas pedal.

        Probably why it's more logically called "the accelerator pedal" here in Blighty.
        ...but then we put our luggage in "the boot", so we probably shouldn't be chucking any rocks in that particular greenhouse :-)

    • (Score: 3, Interesting) by theluggage on Thursday July 03, @04:42PM

      by theluggage (1797) on Thursday July 03, @04:42PM (#1409250)

      Seems these new servers are designed to have the GPU do the heavy lifting, while the CPU is relegated to housekeeping stuff.

      Been that way in "supercomputing" for a while - lots of "ARM-based" supercomputers around - but they're mostly ARMs controlling specialised vector processors etc. which bring the "super" bit.

      Part of the ARM's success is probably due to the flexible licensing that lets people build it into custom special-purpose chips (like NVIDIA's Grace/Hopper) - just like it did in the mobile sector.

      Not that ARM CPUs are fundamentally slower than x86 - the early ARMs ran rings around a contemporary 286 but, at the time, no DOS/Windows == No Sale, so they were focussed on mobile & embedded applications while the MegaHertz wars happened. They're back in play thanks to the rise of Linux for serious computing & the evolution of smartphones into "heavy lifting" devices.

      Meanwhile, x86 is doomed to support a legacy instruction set & the extra transistors to decode those instructions - once that legacy software fades away, so will x86.

    • (Score: 2) by stormwyrm on Friday July 04, @06:15AM (1 child)

      by stormwyrm (717) on Friday July 04, @06:15AM (#1409290) Journal
      The idea is not really new, having been invented by none other than the great supercomputing pioneer Seymour Cray in the 1960s. The CPU, such as it is, still has a central role in coordinating and managing the system as a whole and doing much of the I/O work, since it is a general-purpose, flexible system, while the major computational work is done by specialized, much faster processors. It was first used in the CDC 6600 and was also a key feature of Cray's later supercomputer designs in the 1970s and 1980s. The same idea first reached consumer hardware as a PC paired with an i860 (or other high-performance RISC processor) expansion card. It has essentially become the logical architecture of modern PCs, in which most of the I/O is handled by the main CPU while more specialized silicon (usually external, but sometimes on the same die as the CPU itself) does compute-intensive tasks like graphics and AI processing. I first read about it in an article in the June 1992 edition of Dr. Dobb's Journal, "Personal Supercomputing [jacobfilipp.com]" by Ian Hirschsohn.
      --
      Numquam ponenda est pluralitas sine necessitate.
      • (Score: 2) by bzipitidoo on Friday July 04, @11:59PM

        by bzipitidoo (4388) on Friday July 04, @11:59PM (#1409344) Journal

        I have heard that supercomputer innovations show up in microcomputers 10 years later. I don't think it's entirely a case of nakedly copying the ideas. There's some convergent evolution involved.
