Leaked Intel Discrete Graphics Roadmap Reveals Plans for "Seamless" Dual, Quad, and Octa-GPUs

posted by martyb on Sunday March 31 2019, @10:18PM
from the never-mind-Moore's-law-what-about-Amdahl's-law? dept.

Intel has teased* plans to return to the discrete graphics market in 2020. Now, some of those plans have leaked. Intel's Xe-branded GPUs will apparently use an architecture capable of scaling to "any number" of GPU dies connected within a multi-chip module (MCM). The "e" in Xe is meant to represent the number of GPU dies, with one of the first products being called X2/X2:

Developers won't need to worry about optimizing their code for multi-GPU; OneAPI will take care of all that. This will also allow the company to beat the usual lithographic limit on die size, which is currently in the range of ~800mm2. Why have one 800mm2 die when you can have two 600mm2 dies (the smaller the die, the higher the yield) or four 400mm2 ones? Armed with OneAPI and the Xe macroarchitecture, Intel plans to ramp all the way up to octa-GPUs by 2024. From this roadmap, it seems like the first Xe class of GPUs will be X2.

The tentative timeline for the first X2 class of GPUs was also revealed: June 31st[sic], 2020. This will be followed by the X4 class sometime in 2021. It looks like Intel plans to add two more cores [dies] every year, so we should have the X8 class by 2024. Assuming Intel has the scaling solution down pat, it should actually be very easy to scale these up. The only concern here would be the packaging yield – which Intel should be more than capable of handling – and binning should take care of any wastage issues quite easily. Neither NVIDIA nor AMD has yet gone down the MCM path, and if Intel can truly deliver on this design then the sky's the limit.
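The quoted yield argument can be made concrete with a standard defect-density model. Below is a minimal sketch in C, assuming a simple Poisson yield model (yield = e^(-D*A)) and a purely illustrative defect density; the numbers are hypothetical, not anything Intel or the leak has published:

    #include <math.h>
    #include <stdio.h>

    /* Poisson yield model: fraction of defect-free dies = exp(-D * A),
     * where D is defect density (defects/mm2) and A is die area (mm2).
     * D = 0.001 is an illustrative value, not a published fab figure. */
    static double die_yield(double defect_density, double area_mm2)
    {
        return exp(-defect_density * area_mm2);
    }

    int main(void)
    {
        const double D = 0.001;  /* hypothetical defects per mm2 */
        const double areas[] = { 800.0, 600.0, 400.0 };

        for (size_t i = 0; i < sizeof areas / sizeof areas[0]; i++) {
            double y = die_yield(D, areas[i]);
            /* Dies are tested before packaging, so only the defective
             * small dies are discarded, not whole MCM packages. */
            printf("%.0fmm2 die: yield %.1f%%, %.0fmm2 of wafer per good die\n",
                   areas[i], 100.0 * y, areas[i] / y);
        }
        return 0;
    }

Under this (simplistic) model a 400mm2 die yields roughly 67% against roughly 45% for a monolithic 800mm2 die, which is the whole economic argument for stitching big GPUs together out of smaller dies.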

AMD has made extensive use of MCMs in its Zen CPUs, but will reportedly not use an MCM-based design for its upcoming Navi GPUs. Nvidia has published research into MCM GPUs but has yet to introduce products using such a design.

Intel will use an MCM for its upcoming 48-core "Cascade Lake" Xeon CPUs, and is also planning to use "chiplets" in other CPUs, mixing big and small CPU cores and/or cores made on different process nodes.

*Previously: Intel Planning a Return to the Discrete GPU Market, Nvidia CEO Responds
Intel Discrete GPU Planned to be Released in 2020
Intel Announces "Sunny Cove", Gen11 Graphics, Discrete Graphics Brand Name, 3D Packaging, and More

Related: Intel Integrates LTE Modem Into Custom Multi-Chip Module for New HP Laptop
Intel Promises "10nm" Chips by the End of 2019, and More


Original Submission

Related Stories

Intel Planning a Return to the Discrete GPU Market, Nvidia CEO Responds 15 comments

Intel isn't just poaching a prominent AMD employee. Intel is planning a return to the discrete GPU market:

On Monday, Intel announced that it had penned a deal with AMD to have the latter provide a discrete GPU to be integrated onto a future Intel SoC. On Tuesday, AMD announced that their chief GPU architect, Raja Koduri, was leaving the company. Now today the saga continues, as Intel is announcing that they have hired Raja Koduri to serve as their own GPU chief architect. And Raja's task will not be a small one; with his hire, Intel will be developing their own high-end discrete GPUs.

[...] [In] perhaps the only news that can outshine the fact that Raja Koduri is joining Intel is what he will be doing for Intel. As part of today's revelation, Intel has announced that they are instituting a new top-to-bottom GPU strategy. At the bottom, the company wants to extend their existing iGPU market into new classes of edge devices, and while Intel doesn't go into much more detail than this, the fact that they use the term "edge" strongly implies that we're talking about IoT-class devices, where edge goes hand-in-hand with neural network inference. This is a field Intel already plays in to some extent with their Atom processors on the CPU side, and their Movidius neural compute engines on the dedicated silicon side.

However, in what's likely the most exciting part of this news for PC enthusiasts and the tech industry as a whole, in aiming at the top of the market Intel will once again be going back into developing discrete GPUs. The company has tried this route twice before; once in the early days with the i740 in the late 90s, and again with the aborted Larrabee project in the late 2000s. However, even though these efforts never panned out quite like Intel had hoped, the company has continued to develop their GPU architecture and GPU-like devices, the latter embodied by the massively parallel compute-focused Xeon Phi family.

Yet while Intel has GPU-like products for certain markets, the company doesn't have a proper GPU solution once you get beyond their existing GT4-class iGPUs, which are, roughly speaking, on par with $150 or so discrete GPUs. Which is to say that Intel doesn't have access to the midrange market or above with their iGPUs. With the hiring of Raja and Intel's new direction, the company is going to be expanding into full discrete GPUs for what the company calls "a broad range of computing segments."

Intel Discrete GPU Planned to be Released in 2020 8 comments

Intel's First (Modern) Discrete GPU Set For 2020

In a very short tweet posted to their Twitter feed yesterday, Intel revealed/confirmed the launch date for their first discrete GPU developed under the company's new dGPU initiative. The otherwise unnamed high-end GPU will be launching in 2020, a short two to two-and-a-half years from now.

[...] This new GPU would be the first GPU to come out of Intel's revitalized GPU efforts, which kicked into high gear at the end of 2017 with the hiring of former AMD and Apple GPU boss Raja Koduri. Intel of course is in the midst of watching sometimes-ally and sometimes-rival NVIDIA grow at a nearly absurd pace thanks to the machine learning boom, so Intel's third shot at dGPUs is ultimately an effort to establish themselves in a market for accelerators that is no longer niche but is increasingly splitting off customers who previously would have relied entirely on Intel CPUs.

[...] Intel isn't saying anything else about the GPU at this time. Though we do know from Intel's statements when they hired Koduri that they're starting with high-end GPUs, a fitting choice given the accelerator market Intel is going after. This GPU is almost certainly aimed at compute users first and foremost – especially if Intel adopts a bleeding edge-like strategy that AMD and NVIDIA have started to favor – but Intel's dGPU efforts are not entirely focused on professionals. Intel has also confirmed that they want to go after the gaming market as well, though what that would entail – and when – is another question entirely.

Previously: AMD's Radeon Technologies Group Boss Raja Koduri Leaves, Confirmed to be Defecting to Intel
Intel Planning a Return to the Discrete GPU Market, Nvidia CEO Responds


Original Submission

Intel Integrates LTE Modem Into Custom Multi-Chip Module for New HP Laptop 13 comments

Intel's Customized SoC for HP: Amber Lake-Y with On-Package LTE Modem

Announced earlier this week, HP's Spectre Folio convertible notebook already looks remarkable due to its leather exterior. As it appears, the system is as impressive inside as it is on the outside, as it incorporates a custom Intel Amber Lake-Y multi-chip module that features an LTE modem.

According to a report from PC World, the internal design of the Spectre Folio convertible notebook was co-developed by HP and Intel engineers under Intel's Innovation Excellence Program, which is aimed at enabling PC makers to bring state-of-the-art designs to market. The product uses a tiny, jointly-designed motherboard that measures only 12,000 mm2 and is based around a unique multi-chip module that carries Intel's Amber Lake-Y SoC, a PCH (platform controller hub), and Intel's XMM 7560 LTE Advanced Pro (Cat 16/Cat 13) modem.

[...] Intel is not new to selling complete platforms comprising a CPU, a chipset, and a communication module. Back in the 2000s the company made a fortune selling its Centrino-branded sets containing the aforementioned elements. By selling multiple chips at once, Intel naturally increases its revenue, whereas system vendors ensure compatibility. Therefore, platform-level integration is a win-win for all parties. That said, this is the first time we've seen Intel put a CPU, a PCH, and a cellular modem onto one multi-chip module in this fashion, so this may be the start of a trend for the company.

Related: Apple Could Switch From Qualcomm to Intel and MediaTek for Modems
Intel Announces Development of 5G Modems (Due in 2019)
AMD Creates Quad-Core Zen SoC for Chinese Console Maker
ARM Aims to Match Intel 15-Watt Laptop CPU Performance


Original Submission

Intel Announces 48-core Xeons Using Multiple Dies, Ahead of AMD Announcement 23 comments

Intel announces Cascade Lake Xeons: 48 cores and 12-channel memory per socket

Intel has announced the next family of Xeon processors that it plans to ship in the first half of next year. The new parts represent a substantial upgrade over current Xeon chips, with up to 48 cores and 12 DDR4 memory channels per socket, supporting up to two sockets.

These processors will likely be the top-end Cascade Lake processors; Intel is labelling them "Cascade Lake Advanced Performance," with a higher level of performance than the Xeon Scalable Processors (SP) below them. The current Xeon SP chips use a monolithic die, with up to 28 cores and 56 threads. Cascade Lake AP will instead be a multi-chip processor with multiple dies contained within a single package. AMD is using a similar approach for its comparable products; the Epyc processors use four dies in each package, with each die having 8 cores.

The switch to a multi-chip design is likely driven by necessity: as dies become bigger and bigger, it becomes more and more likely that they'll contain a defect. Using several smaller dies helps avoid these defects. Because Intel's 10nm manufacturing process isn't yet good enough for mass-market production, the new Xeons will continue to use a version of the company's 14nm process. Intel hasn't yet revealed what the topology within each package will be, so the exact distribution of those cores and memory channels between chips is as yet unknown. The enormous number of memory channels will demand an enormous socket, currently believed to be a 5903-pin connector.

Intel also announced tinier 4-6 core E-2100 Xeons with ECC memory support.

Meanwhile, AMD is holding a New Horizon event on Nov. 6, where it is expected to announce 64-core Epyc processors.

Related: AMD Epyc 7000-Series Launched With Up to 32 Cores
AVX-512: A "Hidden Gem"?
Intel's Skylake-SP vs AMD's Epyc
Intel Teases 28 Core Chip, AMD Announces Threadripper 2 With Up to 32 Cores
TSMC Will Make AMD's "7nm" Epyc Server CPUs
Intel Announces 9th Generation Desktop Processors, Including a Mainstream 8-Core CPU


Original Submission

AMD Previews Zen 2 Epyc CPUs with up to 64 Cores, New "Chiplet" Design 9 comments

AMD has announced the next generation of its Epyc server processors, with up to 64 cores (128 threads) each. Instead of an 8-core "core complex" (CCX), AMD's 64-core chips will feature 8 "chiplets" with 8 cores each:

AMD on Tuesday formally announced its next-generation EPYC processor code-named Rome. The new server CPU will feature up to 64 cores featuring the Zen 2 microarchitecture, thus providing at least two times higher performance per socket than existing EPYC chips.

As discussed in a separate story covering AMD's new 'chiplet' design approach, AMD EPYC 'Rome' processor will carry multiple CPU chiplets manufactured using TSMC's 7 nm fabrication process as well as an I/O die produced at a 14 nm node. As it appears, high-performance 'Rome' processors will use eight CPU chiplets offering 64 x86 cores in total.

Why chiplets?

Separating CPU chiplets from the I/O die has its advantages because it enables AMD to make the CPU chiplets smaller as physical interfaces (such as DRAM and Infinity Fabric) do not scale that well with shrinks of process technology. Therefore, instead of making CPU chiplets bigger and more expensive to manufacture, AMD decided to incorporate DRAM and some other I/O into a separate chip. Besides lower costs, the added benefit that AMD is going to enjoy with its 7 nm chiplets is ability to easier[sic] bin new chips for needed clocks and power, which is something that is hard to estimate in case of servers.

AMD also announced that Zen 4 is under development. It could be made on a "5nm" node, although that is speculation. The Zen 3 microarchitecture will be made on TSMC's N7+ process ("7nm" with more extensive use of extreme ultraviolet lithography).

AMD's Epyc CPUs will now be offered on Amazon Web Services.

AnandTech live blog of New Horizon event.

Previously: AMD Epyc 7000-Series Launched With Up to 32 Cores
TSMC Will Make AMD's "7nm" Epyc Server CPUs
Intel Announces 48-core Xeons Using Multiple Dies, Ahead of AMD Announcement

Related: Cray CS500 Supercomputers to Include AMD's Epyc as a Processor Option
Oracle Offers Servers with AMD's Epyc to its Cloud Customers


Original Submission

Intel Announces "Sunny Cove", Gen11 Graphics, Discrete Graphics Brand Name, 3D Packaging, and More 23 comments

Intel has announced new developments at its Architecture Day 2018:

Sunny Cove, built on 10nm, will come to market in 2019 and offer increased single-threaded performance, new instructions, and 'improved scalability'. Intel went into more detail about the Sunny Cove microarchitecture, which is in the next part of this article. To avoid doubt, Sunny Cove will have AVX-512. We believe that these cores, when paired with Gen11 graphics, will be called Ice Lake.

Willow Cove looks like it will be a 2020 core design, most likely also on 10nm. Intel lists the highlights here as a cache redesign (which might mean L1/L2 adjustments), new transistor optimizations (manufacturing-based), and additional security features, likely referring to further hardening against new classes of side-channel attacks. Golden Cove rounds out the trio, and is firmly in that 2021 segment in the graph. The process node here is a question mark, but we're likely to see it on 10nm and/or 7nm. Golden Cove is where Intel adds another slice of the serious pie onto its plate, with an increase in single-threaded performance, a focus on AI performance, and potential networking and AI additions to the core design. Security features also look like they get a boost.

Intel says that GT2 Gen11 integrated graphics with 64 execution units will reach 1 teraflops of performance. It compared the graphics solution to previous-generation GT2 graphics with 24 execution units, but did not mention Iris Plus Graphics GT3e, which already reached around 800-900 gigaflops with 48 execution units. The GPU will support Adaptive Sync, which is the standardized version of AMD's FreeSync, enabling variable refresh rates over DisplayPort and reducing screen tearing.
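The teraflops figures are easy to sanity-check. Here is a quick sketch in C, assuming each Gen EU executes 16 FP32 FLOPs per clock (two SIMD-4 FPUs, with an FMA counted as two operations) and illustrative clock speeds; the clocks are assumptions for the estimate, not Intel specifications:

    #include <stdio.h>

    /* Peak FP32 throughput for an Intel Gen-style iGPU: each EU has
     * two SIMD-4 FPUs and an FMA counts as 2 ops, so 2 * 4 * 2 = 16
     * FLOPs per EU per clock. */
    static double peak_gflops(int eus, double clock_ghz)
    {
        return eus * 16 * clock_ghz;
    }

    int main(void)
    {
        /* Clock speeds here are illustrative guesses, not official figures. */
        printf("Gen11 GT2, 64 EU @ ~1.00 GHz: %4.0f GFLOPS\n", peak_gflops(64, 1.00));
        printf("Gen9 GT3e, 48 EU @ ~1.15 GHz: %4.0f GFLOPS\n", peak_gflops(48, 1.15));
        printf("Gen9 GT2,  24 EU @ ~1.15 GHz: %4.0f GFLOPS\n", peak_gflops(24, 1.15));
        return 0;
    }

At about 1 GHz, 64 EUs work out to ~1024 GFLOPS, and 48 Gen9 EUs at ~1.15 GHz give ~883 GFLOPS, consistent with the 1 teraflops and 800-900 gigaflops figures above.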

Intel's upcoming discrete graphics cards, planned for release around 2020, will be branded Xe. Xe will cover configurations from integrated and entry-level cards all the way up to datacenter-oriented products.

Like AMD, Intel will also organize cores into "chiplets". But it also announced FOVEROS, a 3D packaging technology that will allow it to mix chips from different process nodes, stack DRAM on top of components, etc. A related development is Intel's demonstration of "hybrid x86" CPUs. Like ARM's big.LITTLE and DynamIQ heterogeneous computing architectures, Intel can combine its large "Core" cores with smaller Atom cores. In fact, it created a 12mm×12mm×1mm SoC (compare to a dime coin, which has a diameter of 17.91mm and a thickness of 1.35mm) with a single "Sunny Cove" core, four Atom cores, Gen11 graphics, and just 2 mW of standby power draw.


Original Submission

Intel Promises "10nm" Chips by the End of 2019, and More 6 comments

CES 2019 Quick Bytes: Consumer 10nm is Coming with Intel's Ice Lake

We've been on Intel's case for years to tell us when its 10nm parts are coming to the mass market. Technically Intel already shipped its first 10nm processor, Cannon Lake, but this was low volume and limited to specific geographic markets. This time Intel is promising that its first volume consumer processor on 10nm will be Ice Lake. It should be noted that Intel hasn't put a date on Ice Lake launching, but has promised 10nm on shelves by the end of 2019. It has several products that could qualify for that, but Ice Lake is the likely suspect.

At Intel's Architecture Day in December, we saw chips designated as 'Ice Lake-U', built for 15W TDPs with four cores using the new Sunny Cove microarchitecture and Gen11 graphics. Intel went into some details about this part, which we can share with you today.

The 15W processor is a quad core part supporting two threads per core, and will have 64 EUs of Gen11 graphics. 64 EUs will be the standard 'GT2' mainstream configuration for this generation, up from 24 EUs today. In order to drive that many execution units, Intel stated that they need 50-60 GB/s of memory bandwidth, which will come from LPDDR4X memory. In order for those numbers to line up, they will need LPDDR4X-3200 at a minimum, which gives 51.2 GB/s. [...] For connectivity, the chips will support Wi-Fi 6 (802.11ax) if the laptop manufacturer uses the correct interface module, but the support for Wi-Fi 6 is in the chip. The processor also supports native Thunderbolt 3 over USB Type-C, marking the first Intel chip with native TB3 support.
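The bandwidth arithmetic above is easy to reproduce. A minimal sketch in C, assuming the 128-bit (4×32-bit) LPDDR4X bus typical of this laptop class; the bus width is an assumption, not something the article confirms:

    #include <stdio.h>

    /* Peak DRAM bandwidth: transfer rate (MT/s) times bus width in bytes. */
    static double bandwidth_gb_s(double mt_per_s, int bus_bits)
    {
        return mt_per_s * (bus_bits / 8) / 1000.0;  /* MT/s * bytes -> GB/s */
    }

    int main(void)
    {
        /* Assuming a 128-bit bus (four 32-bit LPDDR4X channels). */
        printf("LPDDR4X-3200, 128-bit: %.1f GB/s\n", bandwidth_gb_s(3200, 128));
        printf("LPDDR4X-3733, 128-bit: %.1f GB/s\n", bandwidth_gb_s(3733, 128));
        return 0;
    }

LPDDR4X-3200 on a 128-bit bus gives 3200 MT/s × 16 bytes = 51.2 GB/s, matching the stated minimum, while LPDDR4X-3733 would land at ~59.7 GB/s, near the top of the quoted 50-60 GB/s window.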

Intel to Stop Selling Xeon Phi Processors This Year 7 comments

The Larrabee Chapter Closes: Intel's Final Xeon Phi Processors Now in EOL

Intel this week initiated its product discontinuance plan for its remaining Xeon Phi 7200-series processors codenamed Knights Mill (KML), bringing an end to a family of processors that has now been superseded by the likes of Intel's 56-core Xeon Platinum 9200 family. Xeon Phi parts have been used primarily in supercomputers during their lifetime.

Customers interested in final Intel Xeon Phi 7295, 7285 and 7235 processors will have to place their final orders on these devices by August 9, 2019. Intel will ship the final Xeon Phi CPUs by July 31, 2020. Intel's Knights Mill processors feature 64, 68, or 72 upgraded Silvermont x86 cores paired with AVX-512 units and MCDRAM. The parts were essentially Knights Landing parts optimized for Deep Learning applications.

Also to be superseded by Intel Xe GPUs.

Related: Intel Discrete GPU Planned to be Released in 2020
Leaked Intel Discrete Graphics Roadmap Reveals Plans for "Seamless" Dual, Quad, and Octa-GPUs


Original Submission

This discussion has been archived. No new comments can be posted.
  • (Score: 0) by Anonymous Coward on Sunday March 31 2019, @11:06PM (6 children)

    by Anonymous Coward on Sunday March 31 2019, @11:06PM (#822878)

    ...lithographic limit of dies that is currently in the range of ~800mm. Why have one 800mm die when you can have two 600mm dies (the lower the size of the die, the higher the yield) or four 400mm ones...

    Not the day for units on Soylent, is it? Diamond forming pressures given in quasi-elephants per square fingernail, and now we have chips almost three feet across.
    800mm is 0.8 meters (0.91 meters is 3 feet). Either Intel has a truly astounding process with 800mm wafers from which they cut dozens of chips, or something has gone squirrelly. A single chip on an 800mm die would not quite fit inside my laptop, or any computer I've ever owned. And they named them "micro" processors... :)

    • (Score: 2) by takyon on Sunday March 31 2019, @11:18PM (1 child)

      by takyon (881) <reversethis-{gro ... s} {ta} {noykat}> on Sunday March 31 2019, @11:18PM (#822883) Journal

      I fixed it before you commented.

      --
      [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
      • (Score: 2) by EvilSS on Sunday March 31 2019, @11:40PM

        by EvilSS (1456) Subscriber Badge on Sunday March 31 2019, @11:40PM (#822886)
        We need a 'Pedantic' mod option
    • (Score: 2) by PartTimeZombie on Monday April 01 2019, @12:07AM

      by PartTimeZombie (4827) on Monday April 01 2019, @12:07AM (#822891)

      I thought that was just the "Texas" option, because everything's bigger in Texas.

    • (Score: 2) by Snotnose on Monday April 01 2019, @12:26AM (2 children)

      by Snotnose (1623) on Monday April 01 2019, @12:26AM (#822899)

      ...lithographic limit of dies that is currently in the range of ~800mm. Why have one 800mm die when you can have two 600mm dies (the lower the size of the die, the higher the yield) or four 400mm ones...

      Not in my experience. Granted, I'm a software engineer. But I spent a lot of my life verifying newborn silicon. The closer to the cutting edge, i.e. the smaller the die, the more weird problems you run across and, in general, the lower the yield for any given die in its first incarnations.

      You want weird? Had a SOC (system on a chip) that would trigger the dead man timer every 1-5 days. Have fun troubleshooting that. Boss put me on it because I had the most hardware knowledge. I had the "golden" laptop that triggered the issue most, and a bunch of "do this, it dies" from assorted folks. Took me 2 weeks (mostly thumb twiddling), but I tracked it down to a write of a particular register. Nothing to do with the laptop, nothing to do with the "do this".

      The pisser on that one was we were short of JPEG debuggers, so while waiting for days for the problem to hit I literally had nothing to do. There was a flash game about mining stuff that I got really good at. My boss knew, his boss knew, and I spent 8 hours a day playing some stupid flash game because without a debugger I was useless.

      Best part? Commented out the offending line, then waited a week to see if the system crashed. It didn't (it was a debug register the hardware folks used, but did nothing critical). I felt good I'd found the problem but wouldn't have bet anything on it. When a crash happens within 1 hour to 1 week it's hard to have confidence you've found the problem, even if you have rock solid evidence.

      How did I find it? A rolling array of where the code went. 256 bytes. In the code I put checkpoints that wrote to the array. When the system crashed I could bring up the memory controller and read my array. Narrowed things down to one "you have got to be kidding me" write instruction; commenting that out solved the problem.
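      For reference, a minimal sketch of such a rolling checkpoint buffer in C; all names here are hypothetical, and the real code obviously depended on the SoC's memory map:

          #include <stdint.h>

          /* A 256-byte rolling trace of checkpoint IDs, placed in a RAM
           * region that survives a watchdog reset so it can be read back
           * afterwards (e.g. via the memory controller or a debugger). */
          #define TRACE_SIZE 256
          static volatile uint8_t trace_buf[TRACE_SIZE];
          static volatile uint8_t trace_pos;

          /* Drop a checkpoint ID into the ring; the newest entry
           * overwrites the oldest once the buffer wraps. */
          static inline void checkpoint(uint8_t id)
          {
              trace_buf[trace_pos] = id;
              trace_pos = (uint8_t)(trace_pos + 1); /* wraps at 256 */
          }

          /* Usage: sprinkle unique IDs along the suspect code paths. */
          void suspect_driver_path(void)
          {
              checkpoint(0x10);
              /* ... write to hardware register A ... */
              checkpoint(0x11);
              /* ... write to hardware register B ... */
              checkpoint(0x12);
          }

      After the watchdog fires, the last few IDs before the crash show which write the CPU never came back from.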

      --
      Why shouldn't we judge a book by it's cover? It's got the author, title, and a summary of what the book's about.
      • (Score: 2) by takyon on Monday April 01 2019, @12:47AM

        by takyon (881) <reversethis-{gro ... s} {ta} {noykat}> on Monday April 01 2019, @12:47AM (#822907) Journal

        They are comparing large GPU to smaller GPU, not Qualcomm SoC to whatever. From a linked older article:

        https://wccftech.com/nvidia-future-gpu-mcm-package/ [wccftech.com]

        NVIDIA currently has the two fastest GPU accelerators for the compute market: last year's Tesla P100, based on Pascal, and this year's Tesla V100, based on Volta. There's one thing in common between both chips: they are as big as a chip can get on their particular process node. The Pascal GP100 GPU measured at a die size of 610mm2, while the Volta V100 GPU, despite being based on a 12nm process from TSMC, is 33.1% larger at 815mm2. NVIDIA's CEO Jen-Hsun Huang revealed at GTC that this is the practical limit of what's possible with today's physics and they cannot make a chip as dense or as big as GV100 today.

        Here's the Zen 2 chiplet + I/O die (estimated sizes):

        https://www.anandtech.com/show/13829/amd-ryzen-3rd-generation-zen-2-pcie-4-eight-core [anandtech.com]

        Doing some measurements on our imagery of the processor, and knowing that an AM4 processor is 40mm [per side] square, we measure the chiplet to be 10.53 x 7.67 mm = 80.80 mm2, whereas the IO die is 13.16mm x 9.32 mm = 122.63 mm2.

        So for a 64-core Epyc, there should be 8 of the chiplets and an I/O die (larger size version I think). CPUs tend to be much smaller than GPUs.

        --
        [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
      • (Score: 0) by Anonymous Coward on Tuesday April 02 2019, @12:01PM

        by Anonymous Coward on Tuesday April 02 2019, @12:01PM (#823566)

        >we were short of JPEG debuggers

        JTAG, hopefully

  • (Score: 0) by Anonymous Coward on Monday April 01 2019, @12:23AM (1 child)

    by Anonymous Coward on Monday April 01 2019, @12:23AM (#822895)

    Why settle for a single undocumented attack vector? Intel and partner NSA announce OneIME, which allows seamless scaling to 8 undocumented telemetry devices at once.

  • (Score: 2) by linkdude64 on Monday April 01 2019, @01:59AM (2 children)

    by linkdude64 (5482) on Monday April 01 2019, @01:59AM (#822942)

    Intel late to the game!
    Also just in: We are late to the game, also!

    Good luck winning back your consumer confidence after your years of stagnation, Intel.

    • (Score: 2) by driverless on Monday April 01 2019, @05:15AM (1 child)

      by driverless (4770) on Monday April 01 2019, @05:15AM (#822974)

      They're not late to the game; they've been trying to get in since the i740 twenty years ago (the 82720 doesn't really count since it was a rebadged NEC design), and have failed to penetrate anything but the budget market every single time they've tried. This is another attempt that'll fail: they may be big in the CPU world, but they can't compete with nVidia/ATI-AMD, who have been doing this for their entire corporate lives.

      • (Score: 2) by takyon on Monday April 01 2019, @05:28AM

        by takyon (881) <reversethis-{gro ... s} {ta} {noykat}> on Monday April 01 2019, @05:28AM (#822977) Journal

        Intel Larrabee [wikipedia.org] was a failed Intel GPU effort that later became the basis of the "manycore" Xeon Phi [wikipedia.org] chips, that have seen use in supercomputers and machine learning.

        https://www.nextplatform.com/2018/07/27/end-of-the-line-for-xeon-phi-its-all-xeon-from-here/ [nextplatform.com]
        https://www.theregister.co.uk/2018/06/13/intel_gpus_2020/ [theregister.co.uk]

        Xeon Phi was discontinued. In its place, Intel will sell Xeons with lots of cores (like 48-core Cascade Lake, and more cores are sure to be added as Intel expands its use of MCMs to try to compete with AMD's Epyc, Threadripper, and Ryzen) and these new discrete GPUs. Intel sees Nvidia making a lot of money selling GPUs for machine learning, driverless vehicles, etc. and wants a piece of that pie. Even the market for high-end gaming GPUs has been pretty strong, and could remain so if high-spec VR becomes the driver of upgrades. MCMs consist of multiple dies; Intel can pick and choose which ones go into the server/enterprise products, and leave the scrappier ones for the gamers.

        --
        [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
  • (Score: 2) by shortscreen on Monday April 01 2019, @04:59AM (1 child)

    by shortscreen (2252) on Monday April 01 2019, @04:59AM (#822965) Journal

    A letter X with a number after it. What an original naming scheme. I'm so impressed.

    Now that Intel is going to make fancy discrete GPUs, does that mean they can also go back to making CPUs uninfested by their redundant rubbish graphics?
