
posted by Fnord666 on Saturday November 11 2017, @08:27AM
from the staying-focused dept.

Intel isn't just poaching a prominent AMD employee. Intel is planning a return to the discrete GPU market:

On Monday, Intel announced that it had penned a deal with AMD to have the latter provide a discrete GPU to be integrated onto a future Intel SoC. On Tuesday, AMD announced that their chief GPU architect, Raja Koduri, was leaving the company. Now today the saga continues, as Intel is announcing that they have hired Raja Koduri to serve as their own GPU chief architect. And Raja's task will not be a small one; with his hire, Intel will be developing their own high-end discrete GPUs.

[...] [P]erhaps the only news that can outshine the fact that Raja Koduri is joining Intel is what he will be doing for Intel. As part of today's revelation, Intel has announced that they are instituting a new top-to-bottom GPU strategy. At the bottom, the company wants to extend their existing iGPU market into new classes of edge devices, and while Intel doesn't go into much more detail than this, the fact that they use the term "edge" strongly implies that we're talking about IoT-class devices, where edge goes hand-in-hand with neural network inference. This is a field Intel already plays in to some extent with their Atom processors on the GPU side, and their Movidius neural compute engines on the dedicated silicon side.

However, what's likely the most exciting part of this news for PC enthusiasts and the tech industry as a whole is that, in aiming at the top of the market, Intel will once again be developing discrete GPUs. The company has tried this route twice before; once in the early days with the i740 in the late 90s, and again with the aborted Larrabee project in the late 2000s. However, even though these efforts never panned out quite like Intel had hoped, the company has continued to develop their GPU architecture and GPU-like devices, the latter embodied by the massively parallel compute-focused Xeon Phi family.

Yet while Intel has GPU-like products for certain markets, the company doesn't have a proper GPU solution once you get beyond their existing GT4-class iGPUs, which are, roughly speaking, on par with discrete GPUs in the $150 range. Which is to say that Intel's iGPUs give them no access to the midrange market or above. With the hiring of Raja and Intel's new direction, the company is going to be expanding into full discrete GPUs for what the company calls "a broad range of computing segments."

Nvidia CEO On Intel's GPU, AMD Partnership, And Raja Koduri

Yeah, there's a lot of news out there... First of all, Raja leaving AMD is a great loss for AMD, and it's a recognition by Intel probably that the GPU is just incredibly important now. The modern GPU is not a graphics accelerator, we just left the letter "G" in there, but these processors are domain-specific parallel accelerators, and they are enormously complex, they are the most complex processors built by anybody on the planet today. And that's the reason why IBM uses our processors for the [world's] largest supercomputers, [and] that's the reason why every single cloud, every major server around the world has adopted Nvidia GPUs.

[...] Huang also pressed the point that investing in five different architectures dilutes focus and makes it impossible to support them forever, which has long-term implications for customers. Earlier in the call, Huang had made another key point:

"If you have four or five different architectures to support, that you offer to your customers, and they have to pick the one that works the best, you are essentially are saying that you don't know which one is the best [...] If there's five architectures, surely over time, 80% of them will be wrong. I think that our advantage is that we are singularly focused."

Huang didn't specifically name Intel in this statement, but Nvidia's focus on a single architecture stands in stark contrast to Intel's approach of offering five (coincidence?) different solutions, such as CPUs, Xeon Phi, FPGAs, ASICs, and now GPUs, for parallel workloads.

Previously: Intel Announces Core H Laptop Chips With AMD Graphics and High Bandwidth Memory


Original Submission #1   Original Submission #2

 
  • (Score: 2) by frojack (1554) on Saturday November 11 2017, @08:29PM (#595720) Journal

The x86 line has been the most successful computer line of all time.

    Not because it was designed with the benefit of hindsight to be perfect, efficient, focused, and a power miser, but because it was always eclectic, and offered enough of something to just about every market. The General Motors of computers.

But even as GM looks likely to transition to electric vehicles, x86 is unlikely to just go away any time soon.

    --
    No, you are mistaken. I've always had this sig.