Intel isn't just poaching a prominent AMD employee. Intel is planning a return to the discrete GPU market:
On Monday, Intel announced that it had penned a deal with AMD to have the latter provide a discrete GPU to be integrated onto a future Intel SoC. On Tuesday, AMD announced that their chief GPU architect, Raja Koduri, was leaving the company. Now today the saga continues, as Intel is announcing that they have hired Raja Koduri to serve as their own GPU chief architect. And Raja's task will not be a small one; with his hire, Intel will be developing their own high-end discrete GPUs.
[...] [In] perhaps the only news that can outshine the fact that Raja Koduri is joining Intel is what he will be doing for Intel. As part of today's revelation, Intel has announced that they are instituting a new top-to-bottom GPU strategy. At the bottom, the company wants to extend their existing iGPU market into new classes of edge devices, and while Intel doesn't go into much more detail than this, the fact that they use the term "edge" strongly implies that we're talking about IoT-class devices, where edge goes hand-in-hand with neural network inference. This is a field Intel already plays in to some extent with their Atom processors on the GPU side, and their Movidius neural compute engines on the dedicated silicon side.
However, in what's likely the most exciting part of this news for PC enthusiasts and the tech industry as a whole, in aiming at the top of the market Intel will once again be developing discrete GPUs. The company has tried this route twice before: once in the early days with the i740 in the late 90s, and again with the aborted Larrabee project in the late 2000s. However, even though these efforts never panned out quite like Intel had hoped, the company has continued to develop its GPU architecture and GPU-like devices, the latter embodied by the massively parallel compute-focused Xeon Phi family.
Yet while Intel has GPU-like products for certain markets, the company doesn't have a proper GPU solution once you get beyond their existing GT4-class iGPUs, which are, roughly speaking, on par with $150 or so discrete GPUs. Which is to say that Intel doesn't have access to the midrange market or above with their iGPUs. With the hiring of Raja and Intel's new direction, the company is going to be expanding into full discrete GPUs for what the company calls "a broad range of computing segments."
Nvidia CEO On Intel's GPU, AMD Partnership, And Raja Koduri
Yeah, there's a lot of news out there....first of all, Raja leaving AMD is a great loss for AMD, and it's a recognition by Intel probably that the GPU is just incredibly important now. The modern GPU is not a graphics accelerator, we just left the letter "G" in there, but these processors are domain-specific parallel accelerators, and they are enormously complex, they are the most complex processors built by anybody on the planet today. And that's the reason why IBM uses our processors for the [world's] largest supercomputers, [and] that's the reason why every single cloud, every major server around the world has adopted Nvidia GPUs.
[...] Huang also pressed the point that investing in five different architectures dilutes focus and makes it impossible to support them forever, which has long-term implications for customers. Earlier in the call, Huang had pressed another key point:
"If you have four or five different architectures to support, that you offer to your customers, and they have to pick the one that works the best, you are essentially are saying that you don't know which one is the best [...] If there's five architectures, surely over time, 80% of them will be wrong. I think that our advantage is that we are singularly focused."
Huang didn't specifically name Intel in this statement, but Nvidia's focus on a single architecture stands in stark contrast to Intel's approach of offering five (coincidence?) different solutions, such as CPUs, Xeon Phi, FPGAs, ASICs, and now GPUs, for parallel workloads.
Previously: Intel Announces Core H Laptop Chips With AMD Graphics and High Bandwidth Memory
Related Stories
Intel squeezed an AMD graphics chip, RAM and CPU into one module
the new processor integrates a "semi-custom" AMD graphics chip and second-generation High Bandwidth Memory (HBM2), which is comparable to the GDDR5 used in a traditional laptop.
Intel CPU and AMD GPU, together at last
Summary of Intel's news:
The new product, which will be part of our 8th Gen Intel Core family, brings together our high-performing Intel Core H-series processor, second generation High Bandwidth Memory (HBM2) and a custom-to-Intel third-party discrete graphics chip from AMD's Radeon Technologies Group* – all in a single processor package.
[...] At the heart of this new design is EMIB (Embedded Multi-Die Interconnect Bridge), a small intelligent bridge that allows heterogeneous silicon to quickly pass information in extremely close proximity. EMIB eliminates height impact as well as manufacturing and design complexities, enabling faster, more powerful and more efficient products in smaller sizes. This is the first consumer product that takes advantage of EMIB.
[...] Additionally, this solution is the first mobile PC to use HBM2, which consumes much less power and takes up less space compared to traditional discrete graphics-based designs using dedicated graphics memory, like GDDR5 memory.
takyon: This is more like an "integrated discrete GPU" than standard integrated graphics. It also avoids the need for Intel to license AMD's IP. AMD also needs to make a lot of parts since its wafer supply agreement with GlobalFoundries penalizes AMD if they buy less than a target number of wafers each year.
Also at AnandTech and Ars Technica.
Previously: AMD Stock Surges on Report of Intel Graphics Licensing Deal, 16-Core Ryzen Confirmed
Related: Samsung Increases Production of 8 GB High Bandwidth Memory 2.0 Stacks
The boss of AMD's Radeon Technologies Group is leaving the company:
Remember when we reported on the Radeon Technologies Group boss, Raja Koduri, taking a leave of absence with an intent to return to the fold in December? That isn't going to happen, according to a memo Raja has written to his team, because today is his last day in the job.
[...] Our sources tell us that Lisa Su, AMD CEO, will continue to oversee RTG for the foreseeable future. AMD appreciates that such an important role cannot be the sole domain of the CEO, and to this end is actively searching for a successor to Raja. We expect the appointment to be made within a few months.
The rumor mill suggests that Koduri will take a job at Intel, which would come at an interesting time now that Intel is including AMD graphics and High Bandwidth Memory in some of its products.
Update: Intel to Develop Discrete GPUs, Hires Raja Koduri as Chief Architect & Senior VP
Also at HotHardware and Fudzilla.
Previously: Interview With Raja Koduri, Head of the Radeon Technologies Group at AMD
Intel's First (Modern) Discrete GPU Set For 2020
In a very short tweet posted to their Twitter feed yesterday, Intel revealed/confirmed the launch date for their first discrete GPU developed under the company's new dGPU initiative. The otherwise unnamed high-end GPU will be launching in 2020, a short two to two-and-a-half years from now.
[...] This new GPU would be the first GPU to come out of Intel's revitalized GPU efforts, which kicked into high gear at the end of 2017 with the hiring of former AMD and Apple GPU boss Raja Koduri. Intel of course is in the midst of watching sometimes-ally and sometimes-rival NVIDIA grow at a nearly absurd pace thanks to the machine learning boom, so Intel's third shot at dGPUs is ultimately an effort to establish themselves in a market for accelerators that is no longer niche but is increasingly splitting off customers who previously would have relied entirely on Intel CPUs.
[...] Intel isn't saying anything else about the GPU at this time. Though we do know from Intel's statements when they hired Koduri that they're starting with high-end GPUs, a fitting choice given the accelerator market Intel is going after. This GPU is almost certainly aimed at compute users first and foremost – especially if Intel adopts a bleeding edge-like strategy that AMD and NVIDIA have started to favor – but Intel's dGPU efforts are not entirely focused on professionals. Intel has also confirmed that they want to go after the gaming market as well, though what that would entail – and when – is another question entirely.
Previously: AMD's Radeon Technologies Group Boss Raja Koduri Leaves, Confirmed to be Defecting to Intel
Intel Planning a Return to the Discrete GPU Market, Nvidia CEO Responds
Submitted via IRC for Bytram
Intel Linux Graphics Driver Adding Device Local Memory - Possible Start of dGPU Bring-Up
A big patch series was sent out today amounting to 42 patches and over four thousand lines of code for introducing the concept of memory regions to the Intel Linux graphics driver. The memory regions support is preparing for device local memory with future Intel graphics products.
The concept of memory regions is being added to the Intel "i915" Linux kernel DRM driver in "preparation for upcoming devices with device local memory." The concept is about having different "regions" of memory, for system memory as well as for any device-local memory (LMEM). Today's published code also introduces a simple allocator and allows the existing GEM memory management code to allocate memory from these different memory regions. Up to now, with Intel integrated graphics, they haven't had to worry about this functionality, not even with the eDRAM/L4 cache of select graphics processors.
This device-local memory for future Intel GPUs is almost surely for Intel's discrete graphics cards with dedicated vRAM expected to debut in 2020. For the past several generations of Iris Pro with eDRAM, the Intel Linux driver has already supported that functionality. The patch message itself makes it clear that this is for "upcoming devices" but without enabling any hardware support at this time. This memory region code doesn't touch any of the existing hardware support such as the already mainlined Icelake "Gen 11" graphics code.
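To make the "memory regions" idea a bit more concrete, here is a minimal C sketch of the concept the patches describe: distinct regions for system memory (SMEM) and device-local memory (LMEM), plus a trivial allocator that places a buffer in LMEM when it fits. The names (mem_region, region_alloc) are hypothetical; this is an illustration of the idea, not the actual i915 kernel interface.

/* Simplified illustration of the "memory regions" concept described above.
 * This is NOT the i915 kernel API; mem_region, region_alloc, etc. are
 * hypothetical names used only to sketch the idea of placing allocations
 * in either system memory (SMEM) or device-local memory (LMEM). */
#include <stdio.h>
#include <stdint.h>

enum region_type { REGION_SMEM, REGION_LMEM };

struct mem_region {
    enum region_type type;
    uint64_t size;   /* total capacity of the region, in bytes        */
    uint64_t used;   /* bytes handed out so far (simple bump pointer) */
};

/* Try to carve `size` bytes out of a region; return the offset, or -1 on failure. */
static int64_t region_alloc(struct mem_region *r, uint64_t size)
{
    if (r->used + size > r->size)
        return -1;
    int64_t offset = (int64_t)r->used;
    r->used += size;
    return offset;
}

int main(void)
{
    /* One region for ordinary system RAM, one for hypothetical on-card VRAM. */
    struct mem_region smem = { REGION_SMEM, 8ULL << 30, 0 };  /* 8 GiB */
    struct mem_region lmem = { REGION_LMEM, 4ULL << 30, 0 };  /* 4 GiB */

    /* Place a buffer in device-local memory if it fits, else fall back to SMEM. */
    uint64_t buf_size = 256ULL << 20;  /* 256 MiB */
    int64_t off = region_alloc(&lmem, buf_size);
    struct mem_region *placed = &lmem;
    if (off < 0) {
        off = region_alloc(&smem, buf_size);
        placed = &smem;
    }

    printf("buffer placed in %s at offset %lld\n",
           placed->type == REGION_LMEM ? "LMEM" : "SMEM", (long long)off);
    return 0;
}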
Previously: Intel Planning a Return to the Discrete GPU Market, Nvidia CEO Responds
Intel Discrete GPU Planned to be Released in 2020
Intel Announces "Sunny Cove", Gen11 Graphics, Discrete Graphics Brand Name, 3D Packaging, and More
Intel has teased* plans to return to the discrete graphics market in 2020. Now, some of those plans have leaked. Intel's Xe-branded GPUs will apparently use an architecture capable of scaling to "any number" of GPUs connected in a multi-chip module (MCM). The "e" in Xe is meant to represent the number of GPU dies, with one of the first products being called X2/X²:
Developers won't need to worry about optimizing their code for multi-GPU; OneAPI will take care of all that. This will also allow the company to beat the foundry's usual lithographic limit on die size, currently in the range of ~800mm². Why have one 800mm² die when you can have two 600mm² dies (the smaller the die, the higher the yield) or four 400mm² ones? Armed with OneAPI and the Xe macroarchitecture, Intel plans to ramp all the way up to octa-GPU configurations by 2024. From this roadmap, it seems like the first Xe class of GPUs will be X2.
The tentative timeline for the first X2 class of GPUs was also revealed: June 31st, 2020. This will be followed by the X4 class sometime in 2021. It looks like Intel plans to add two more cores [dies] every year, so we should have the X8 class by 2024. Assuming Intel has the scaling solution down pat, it should actually be very easy to scale these up. The only concern here would be the packaging yield, which Intel should be more than capable of handling, and binning should take care of any wastage issues quite easily. Neither NVIDIA nor AMD has yet gone down the MCM path, and if Intel can truly deliver on this design then the sky's the limit.
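To put rough numbers on the die-size/yield argument quoted above, here is a small C sketch using one common simple yield model, Y = exp(-A * D0). The defect density and the resulting percentages are illustrative assumptions only, not figures from Intel or from the article.

/* Rough illustration of the quoted die-size vs. yield argument, using the
 * simple Poisson yield model  Y = exp(-A * D0).  The defect density D0 is an
 * assumed, illustrative value.  Compile with -lm. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    const double d0 = 0.1;                        /* assumed defects per cm^2 */
    const double dies_mm2[] = { 800.0, 600.0, 400.0 };

    for (int i = 0; i < 3; i++) {
        double area_cm2 = dies_mm2[i] / 100.0;    /* mm^2 -> cm^2 */
        double yield = exp(-area_cm2 * d0);       /* Poisson yield model */
        printf("%4.0f mm^2 die: estimated yield %.0f%%\n",
               dies_mm2[i], yield * 100.0);
    }
    return 0;
}
/* With D0 = 0.1/cm^2 this prints roughly 45%, 55%, and 67%, showing why two
 * 600 mm^2 dies or four 400 mm^2 dies can yield better than one 800 mm^2 die. */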
AMD has made extensive use of MCMs in its Zen CPUs, but will reportedly not use an MCM-based design for its upcoming Navi GPUs. Nvidia has published research into MCM GPUs but has yet to introduce products using such a design.
Intel will use an MCM for its upcoming 48-core "Cascade Lake" Xeon CPUs. They are also planning on using "chiplets" in other CPUs and mixing big and small CPU cores and/or cores made on different process nodes.
*Previously: Intel Planning a Return to the Discrete GPU Market, Nvidia CEO Responds
Intel Discrete GPU Planned to be Released in 2020
Intel Announces "Sunny Cove", Gen11 Graphics, Discrete Graphics Brand Name, 3D Packaging, and More
Related: Intel Integrates LTE Modem Into Custom Multi-Chip Module for New HP Laptop
Intel Promises "10nm" Chips by the End of 2019, and More
(Score: 1, Insightful) by Anonymous Coward on Saturday November 11 2017, @08:31AM (10 children)
It would be interesting to see said giant get off its ass again and do some interesting things.
(Score: 2) by cubancigar11 on Saturday November 11 2017, @09:52AM (9 children)
It will be as interesting as Intel launching x86 phones. The upper-tier market is already taken and already dwindling. Unless Intel ends up doing some magic and establishes itself as a fierce competitor to NVidia and AMD, I doubt this venture is going anywhere.
Raja Koduri has a huge responsibility. Hope he is able to deliver. I have a sneaking suspicion that his departure is directly related to AMD betting against discrete graphics cards with their integrated CPU-GPU vision.
(Score: 2) by takyon on Saturday November 11 2017, @10:13AM (4 children)
The market is not dwindling.
AMD posted profits [soylentnews.org] due to cryptocurrencies driving up sales of GPUs. Nvidia posted record profits [anandtech.com], again.
Nvidia sells GPUs as a product for supercomputers, machine learning, driverless cars, etc. AMD is trying to do the same with a Tesla partnership. They can't afford to leave machine learning money on the table for Nvidia and Intel to snatch up.
Some GPUs limit the FP64 capability. Example: GTX 780 Ti vs. GTX Titan Black [arrayfire.com]. Both the GTX and GTX Titan lines of cards could be used by gamers, but the Titan line is more desirable for double-precision applications including supercomputing.
Intel already sells "manycore" chips for supercomputers, a spin-off of the failed Larrabee GPU. Maybe they can go back to the drawing board and either build a card that targets both gamers and enterprise/scientific users, or build one for enterprise/scientific use that can be easily neutered to work better for gamers. They can follow AMD's lead (Ryzen/Threadripper/Epyc) and use a multi-chip module to create different versions (such as a dual GPU).
(Score: 0) by Anonymous Coward on Saturday November 11 2017, @10:41AM (1 child)
Judging by all the other recent shit they have been producing lately, it will just be another obfuscated proprietary security hole ridden piece of bullshit, at a premium cost.
Just like AAA videogames, CPU/Chipset Combos, and the aforementioned Movidius hardware/software stack, it will be a bunch of proprietary bullshit that people can't trust to do what they need, and like the Edison, i740, etc it will just be shelved in a year or two as they indecisively choose to focus on another market because they can't actually get the personnel and resources focused on a particular problem to solve it.
They did this 20 years ago with the i752/754, again with the Larrabee/early Xeon Phi parts, and, as we shall soon see, they are just doing it again with the new dGPU parts, only this time with code signing and further lockdowns that make it untrustworthy for any sort of secure computing use.
(Score: 2) by cubancigar11 on Saturday November 11 2017, @12:25PM
CPU/Chipset combos are actually better performing and cheaper, which is why the next gen consoles are all using AMD instead of NVidia.
takyon: From your link (thanks!) I prepared this chart of NVidia's profits over the last 4 quarters [photobucket.com]. It is insightful. My major interest is in the PC gaming market, and sadly the numbers still don't look promising to me. The latest jump is most probably because AMD cards were simply out of the market and NVidia got to eat that lunch. That is my reading. Let's see. NVidia itself thinks the market might be down for the next 2 quarters: http://s388.photobucket.com/user/cubancigar11/media/NVidia_Profits.png.html [photobucket.com]
(Score: 0) by Anonymous Coward on Saturday November 11 2017, @06:58PM (1 child)
Actually, it's FP16 that is the bigger issue:
https://www.anandtech.com/show/10325/the-nvidia-geforce-gtx-1080-and-1070-founders-edition-review/5 [anandtech.com]
(Score: 2) by takyon on Saturday November 11 2017, @07:22PM
Whatever. It's the same story of Nvidia dumbing down its parts:
Stuff like that creates a POSSIBLE market opportunity for Intel. Although they will probably end up doing the same differentiation.
(Score: 3, Interesting) by takyon on Saturday November 11 2017, @10:42AM
The AMD "integrated discrete GPU" + HBM combo in 8th generation Core H-series chips looks like Intel's attempt to muscle out other discrete GPUs in laptops and some desktops. They could eventually replace the AMD component with their own. The on-die GPU will be able to communicate with the CPU much faster than a GPU not connected by the EMIB. It might be able to be used for compute purposes. Throw in another discrete GPU and Intel's iGPU can be used simultaneously with it, a feature of Vulkan [wikipedia.org] and Direct3D 12.
(Score: 0) by Anonymous Coward on Saturday November 11 2017, @11:13AM (1 child)
> It will be as interesting as Intel launching x86 phones
That could have been a game changer, had they considered the phone as a mere form factor for a PC.
Nokia did it with the N900.
The powers that be did not want people with SDR-enabled PCs in their pockets, and the N900 was not pushed by Nokia itself.
Intel does not need to innovate; it's another IBM.
(Score: 3, Insightful) by cubancigar11 on Saturday November 11 2017, @11:56AM
You talk of powers that be... I think it was more about how power-hungry x86 was.
(Score: 2) by frojack on Saturday November 11 2017, @08:17PM
So how do you square this:
With This:
Both of them are aiming at putting the GPU back into the CPU.
I see nothing but cooperation going on here. Quiet, under-the-table arrangements that could lead to a merger eventually. Remember that Intel has always tolerated AMD, with massive cross-license arrangements, initially because it needed competition to avoid government attention. Now, with ARM clones eating both their lunches, these two don't have to worry about that so much. Together they still own the performance market, and discrete GPUs are a bottleneck for both of them.
Koduri moving helps both companies. Nobody but Nvidia is worried here.
(Score: 2) by crafoo on Saturday November 11 2017, @07:33PM (1 child)
I'm only here to watch the demise of x86. It's always been trash. An open CPU implemented with an FPGA with a discrete GPU sounds great to me.
(Score: 2) by frojack on Saturday November 11 2017, @08:29PM
The x86 line has been the most successful computer line of all time.
Not because it was designed with the benefit of hindsight to be perfect, efficient, focused, and a power miser, but because it was always eclectic and offered enough of something to just about every market. The General Motors of computers.
But somehow, even GM is likely to transition to electric vehicles, while x86 is unlikely to just go away any time soon.
(Score: 2) by RamiK on Saturday November 11 2017, @08:13PM
Besides, companies don't need "focus". If there's enough overlap, a toy company can produce handgrips for rifles.
Moreover, how many console bids did Intel lose over its lack of good graphics? How much IP did they develop that overlapped with GPU workloads that they just didn't use or license? It's been 3 years since Alice v. CLS Bank, and Intel is finally waking up to the realization that discrete graphics has evolved into massively parallel compute RISC cores, so there's little stopping them from doing their own and doing it well.
(Score: 0) by Anonymous Coward on Sunday November 12 2017, @10:48AM
Because, you know, the ME in Intel chips already has that part of the market cornered