posted by martyb on Sunday March 31 2019, @10:18PM   Printer-friendly
from the never-mind-Moore's-law-what-about-Amdahl's-law? dept.

Intel has teased* plans to return to the discrete graphics market in 2020. Now, some of those plans have leaked. Intel's Xe-branded GPUs will apparently use an architecture capable of scaling to "any number" of GPU dies connected in a multi-chip module (MCM). The "e" in Xe is meant to represent the number of GPU dies, with one of the first products being called X2:

Developers won't need to worry about optimizing their code for multi-GPU; OneAPI will take care of all that. This will also allow the company to beat the usual foundry lithographic limit on die size, which is currently in the range of ~800 mm². Why have one 800 mm² die when you can have two 600 mm² dies (the smaller the die, the higher the yield) or four 400 mm² ones? Armed with OneAPI and the Xe macroarchitecture, Intel plans to ramp all the way up to octa-GPUs by 2024. From this roadmap, it seems like the first Xe class of GPUs will be X2.

The tentative timeline for the first X2 class of GPUs was also revealed: June 31st, 2020. This will be followed by the X4 class sometime in 2021. It looks like Intel plans to add two more cores [dies] every year, so we should have the X8 class by 2024. Assuming Intel has the scaling solution down pat, it should actually be very easy to scale these up. The only concern here would be the packaging yield – which Intel should be more than capable of handling – and binning should take care of any wastage issues quite easily. Neither NVIDIA nor AMD has yet gone down the MCM path, and if Intel can truly deliver on this design then the sky's the limit.
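For a concrete sense of the quote's yield argument, here is a first-order Poisson defect-yield sketch in C. The model (yield ≈ e^(-area × defect density)) is a standard first approximation; the defect density below is an assumed illustrative figure, not a number from the article:

    #include <math.h>
    #include <stdio.h>

    /* First-order Poisson yield model: Y = exp(-A * D), where A is the
     * die area in mm^2 and D is the defect density in defects per mm^2. */
    int main(void)
    {
        const double defect_density = 0.001;                /* assumed, for illustration */
        const double areas_mm2[] = { 800.0, 600.0, 400.0 }; /* die sizes from the quote */

        for (int i = 0; i < 3; i++) {
            double yield = exp(-areas_mm2[i] * defect_density);
            printf("%4.0f mm^2 die -> ~%.0f%% yield\n", areas_mm2[i], 100.0 * yield);
        }
        return 0;
    }

Under that assumed defect density, the model gives roughly 45% yield at 800 mm², 55% at 600 mm², and 67% at 400 mm² – which is the whole economic case for splitting a big GPU across smaller MCM dies, offset, as the quote notes, by packaging yield.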

AMD has made extensive use of MCMs in its Zen CPUs, but will reportedly not use an MCM-based design for its upcoming Navi GPUs. Nvidia has published research into MCM GPUs but has yet to introduce products using such a design.

Intel will use an MCM for its upcoming 48-core "Cascade Lake" Xeon CPUs. It is also planning to use "chiplets" in other CPUs, mixing big and small CPU cores and/or cores made on different process nodes.

*Previously: Intel Planning a Return to the Discrete GPU Market, Nvidia CEO Responds
Intel Discrete GPU Planned to be Released in 2020
Intel Announces "Sunny Cove", Gen11 Graphics, Discrete Graphics Brand Name, 3D Packaging, and More

Related: Intel Integrates LTE Modem Into Custom Multi-Chip Module for New HP Laptop
Intel Promises "10nm" Chips by the End of 2019, and More


Original Submission

 
  • (Score: 0) by Anonymous Coward on Sunday March 31 2019, @11:06PM (6 children)

    by Anonymous Coward on Sunday March 31 2019, @11:06PM (#822878)

    ...lithographic limit of dies that is currently in the range of ~800mm. Why have one 800mm die when you can have two 600mm dies (the lower the size of the die, the higher the yield) or four 400mm ones...

    Not the day for units on Soylent, is it? Diamond-forming pressures given in quasi-elephants per square fingernail, and now we have chips almost three feet across.
    800mm is 0.8 meters (0.91 meters is 3 feet). Either Intel has a truly astounding die process with 800m wafers from which they cut dozens of chips, or something has gone squirrelly. A single chip on an 800mm die would not quite fit inside my laptop, or any computer I've ever owned. And they named them "micro" processors... :)

    • (Score: 2) by takyon on Sunday March 31 2019, @11:18PM (1 child)

      by takyon (881) <reversethis-{gro ... s} {ta} {noykat}> on Sunday March 31 2019, @11:18PM (#822883) Journal

      I fixed it before you commented.

      --
      [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
      • (Score: 2) by EvilSS on Sunday March 31 2019, @11:40PM

        by EvilSS (1456) Subscriber Badge on Sunday March 31 2019, @11:40PM (#822886)
        We need a 'Pedantic' mod option
    • (Score: 2) by PartTimeZombie on Monday April 01 2019, @12:07AM

      by PartTimeZombie (4827) on Monday April 01 2019, @12:07AM (#822891)

      I thought that was just the "Texas" option, because everything's bigger in Texas.

    • (Score: 2) by Snotnose on Monday April 01 2019, @12:26AM (2 children)

      by Snotnose (1623) on Monday April 01 2019, @12:26AM (#822899)

      ...lithographic limit of dies that is currently in the range of ~800mm. Why have one 800mm die when you can have two 600mm dies (the lower the size of the die, the higher the yield) or four 400mm ones...

      Not in my experience. Granted, I'm a software engineer, but I spent a lot of my life verifying newborn silicon. The closer to the cutting edge – i.e. the smaller the die – the more weird problems you run across and, in general, the lower the yield for any given die in its first incarnations.

      You want weird? Had an SoC (system on a chip) that would trigger the dead-man timer every 1-5 days. Have fun troubleshooting that. Boss put me on it because I had the most hardware knowledge. I had the "golden" laptop that triggered the issue most, and a bunch of "do this, it dies" reports from assorted folks. Took me 2 weeks (mostly thumb twiddling), but I tracked it down to a write to a particular register. Nothing to do with the laptop, nothing to do with the "do this".

      The pisser on that one was we were short of JPEG debuggers, so while waiting for days for the problem to hit I literally had nothing to do. There was a flash game about mining stuff that I got really good at. My boss knew, his boss knew, and I spent 8 hours a day playing some stupid flash game because without a debugger I was useless.

      Best part? Commented out the offending line, then waited a week to see if the system crashed. It didn't (it was a debug register the hardware folks used, but it did nothing critical). I felt good that I'd found the problem, but wouldn't have bet anything on it. When a crash happens anywhere from 1 hour to 1 week in, it's hard to have confidence you've found the problem, even if you have rock-solid evidence.

      How did I find it? A rolling array of where the code went. 256 bytes. In the code I put checkpoints that wrote to the array. When the system crashed I could bring up the memory controller and read my array. That narrowed things down to one "you have got to be kidding me" write instruction; commenting that out solved the problem.
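      (A minimal sketch of that rolling-trace trick in C – my reconstruction with illustrative names, not the original code:)

          #include <stdint.h>

          /* 256-byte rolling trace of checkpoint IDs. Because the index is
           * a uint8_t, it wraps at 256 on its own, so the buffer always
           * holds the last 256 places the code passed through. */
          #define TRACE_SIZE 256
          static volatile uint8_t trace_buf[TRACE_SIZE];
          static volatile uint8_t trace_head;

          static inline void checkpoint(uint8_t id)
          {
              trace_buf[trace_head++] = id;   /* post-increment wraps mod 256 */
          }

          /* Sprinkle checkpoint(1), checkpoint(2), ... along suspect code
           * paths. After the dead-man timer fires, read trace_buf back out
           * of RAM (e.g. through the memory controller); trace_head points
           * at the oldest entry, so the trail can be replayed in order. */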

      --
      Bad decisions, great stories
      • (Score: 2) by takyon on Monday April 01 2019, @12:47AM

        by takyon (881) <reversethis-{gro ... s} {ta} {noykat}> on Monday April 01 2019, @12:47AM (#822907) Journal

        They are comparing a large GPU to smaller GPUs, not a Qualcomm SoC to whatever. From a linked older article:

        https://wccftech.com/nvidia-future-gpu-mcm-package/ [wccftech.com]

        NVIDIA currently has the two fastest GPU accelerators for the compute market: last year's Tesla P100, based on Pascal, and this year's Tesla V100, based on Volta. There's one thing in common about both chips: they are as big as a chip can get on their particular process node. The Pascal GP100 GPU measured a die size of 610 mm², while the Volta V100 GPU, even being based on a 12nm process from TSMC, is 33.1% larger at 815 mm². NVIDIA's CEO Jen-Hsun Huang revealed at GTC that this is the practical limit of what's possible with today's physics, and they cannot make a chip as dense or as big as GV100 today.

        Here's the Zen 2 chiplet + I/O die (estimated sizes):

        https://www.anandtech.com/show/13829/amd-ryzen-3rd-generation-zen-2-pcie-4-eight-core [anandtech.com]

        Doing some measurements on our imagery of the processor, and knowing that an AM4 processor is 40mm [per side] square, we measure the chiplet to be 10.53 x 7.67 mm = 80.80 mm2, whereas the IO die is 13.16mm x 9.32 mm = 122.63 mm2.

        So for a 64-core Epyc, there should be 8 of the chiplets plus an I/O die (a larger version of it, I think). CPU dies tend to be much smaller than GPU dies.
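        (A quick arithmetic check on those measurements – the dimensions are straight from the quote above, and the 8-chiplet count follows from 64 cores at 8 cores per chiplet:)

            #include <stdio.h>

            int main(void)
            {
                double chiplet_mm2 = 10.53 * 7.67;  /* ~80.8 mm^2 8-core chiplet */
                double io_die_mm2  = 13.16 * 9.32;  /* ~122.7 mm^2 I/O die (small
                                                     * rounding differences vs. the quote) */

                /* 64 cores / 8 cores per chiplet = 8 chiplets per package. */
                printf("chiplet %.2f mm^2, I/O die %.2f mm^2, 8 chiplets %.1f mm^2\n",
                       chiplet_mm2, io_die_mm2, 8.0 * chiplet_mm2);
                return 0;
            }

        Even eight chiplets together (~646 mm² of CPU silicon) land in the same ballpark as a single big GPU die like the 815 mm² GV100 quoted above.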

        --
        [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
      • (Score: 0) by Anonymous Coward on Tuesday April 02 2019, @12:01PM

        by Anonymous Coward on Tuesday April 02 2019, @12:01PM (#823566)

        >we were short of JPEG debuggers

        JTAG, hopefully

  • (Score: 0) by Anonymous Coward on Monday April 01 2019, @12:23AM (1 child)

    by Anonymous Coward on Monday April 01 2019, @12:23AM (#822895)

    Why settle for a single undocumented attack vector? Intel and partner NSA announce OneIME, which allows seamless scaling to 8 undocumented telemetry devices at once.

  • (Score: 2) by linkdude64 on Monday April 01 2019, @01:59AM (2 children)

    by linkdude64 (5482) on Monday April 01 2019, @01:59AM (#822942)

    Intel late to the game!
    Also just in: We are late to the game, also!

    Good luck winning back your consumer confidence after your years of stagnation, Intel.

    • (Score: 2) by driverless on Monday April 01 2019, @05:15AM (1 child)

      by driverless (4770) on Monday April 01 2019, @05:15AM (#822974)

      They're not late to the game; they've been trying to get in since the i740 twenty years ago (the 82720 doesn't really count, since it was a rebadged NEC design), and have failed to penetrate anything but the budget market every single time they've tried. This is another attempt that'll fail: they may be big in the CPU world, but they can't compete with nVidia/ATI-AMD, who have been doing this for their entire corporate lives.

      • (Score: 2) by takyon on Monday April 01 2019, @05:28AM

        by takyon (881) <reversethis-{gro ... s} {ta} {noykat}> on Monday April 01 2019, @05:28AM (#822977) Journal

        Intel Larrabee [wikipedia.org] was a failed Intel GPU effort that later became the basis of the "manycore" Xeon Phi [wikipedia.org] chips, which have seen use in supercomputers and machine learning.

        https://www.nextplatform.com/2018/07/27/end-of-the-line-for-xeon-phi-its-all-xeon-from-here/ [nextplatform.com]
        https://www.theregister.co.uk/2018/06/13/intel_gpus_2020/ [theregister.co.uk]

        Xeon Phi was discontinued. In its place, Intel will sell Xeons with lots of cores (like 48-core Cascade Lake, and more cores are sure to be added as Intel expands its use of MCMs to try to compete with AMD's Epyc, Threadripper, and Ryzen) and these new discrete GPUs. Intel sees Nvidia making a lot of money selling GPUs for machine learning, driverless vehicles, etc. and wants a piece of that pie. Even the market for high-end gaming GPUs has been pretty strong, and could remain so if high-spec VR becomes the driver of upgrades. MCMs consist of multiple dies; Intel can pick and choose which ones go into the server/enterprise products, and leave the scrappier ones for the gamers.

        --
        [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
  • (Score: 2) by shortscreen on Monday April 01 2019, @04:59AM (1 child)

    by shortscreen (2252) on Monday April 01 2019, @04:59AM (#822965) Journal

    A letter X with a number after it. What an original naming scheme. I'm so impressed.

    Now that Intel is going to make fancy discrete GPUs, does that mean they can also go back to making CPUs uninfested by their redundant rubbish graphics?
