
posted by Fnord666 on Saturday November 11 2017, @08:27AM
from the staying-focused dept.

Intel isn't just poaching a prominent AMD employee. Intel is planning a return to the discrete GPU market:

On Monday, Intel announced that it had penned a deal with AMD to have the latter provide a discrete GPU to be integrated onto a future Intel SoC. On Tuesday, AMD announced that their chief GPU architect, Raja Koduri, was leaving the company. Now today the saga continues, as Intel is announcing that they have hired Raja Koduri to serve as their own GPU chief architect. And Raja's task will not be a small one; with his hire, Intel will be developing their own high-end discrete GPUs.

[...] Perhaps the only news that can outshine the fact that Raja Koduri is joining Intel is what he will be doing for Intel. As part of today's revelation, Intel has announced that they are instituting a new top-to-bottom GPU strategy. At the bottom, the company wants to extend their existing iGPU market into new classes of edge devices, and while Intel doesn't go into much more detail than this, the fact that they use the term "edge" strongly implies that we're talking about IoT-class devices, where edge goes hand-in-hand with neural network inference. This is a field Intel already plays in to some extent with their Atom processors on the GPU side, and with their Movidius neural compute engines on the dedicated silicon side.

However, in what's likely the most exciting part of this news for PC enthusiasts and the tech industry as a whole, aiming at the top of the market means Intel will once again be developing discrete GPUs. The company has tried this route twice before; once in the early days with the i740 in the late 90s, and again with the aborted Larrabee project in the late 2000s. However, even though these efforts never panned out quite like Intel had hoped, the company has continued to develop their GPU architecture and GPU-like devices, the latter embodied by the massively parallel, compute-focused Xeon Phi family.

Yet while Intel has GPU-like products for certain markets, the company doesn't have a proper GPU solution once you get beyond their existing GT4-class iGPUs, which are, roughly speaking, on par with discrete GPUs in the $150 range, which is to say that Intel has no presence in the midrange market or above with their iGPUs. With the hiring of Raja and Intel's new direction, the company is going to be expanding into full discrete GPUs for what it calls "a broad range of computing segments."

Nvidia CEO On Intel's GPU, AMD Partnership, And Raja Koduri

Yeah, there's a lot of news out there....first of all, Raja leaving AMD is a great loss for AMD, and it's a recognition by Intel probably that the GPU is just incredibly important now. The modern GPU is not a graphics accelerator, we just left the letter "G" in there, but these processors are domain-specific parallel accelerators, and they are enormously complex, they are the most complex processors built by anybody on the planet today. And that's the reason why IBM uses our processors for the [world's] largest supercomputers, [and] that's the reason why every single cloud, every major server around the world has adopted Nvidia GPUs.

[...] Huang also pressed the point that investing in five different architectures dilutes focus and makes it impossible to support them forever, which has long-term implications for customers. Earlier in the call, Huang had made another key point:

"If you have four or five different architectures to support, that you offer to your customers, and they have to pick the one that works the best, you are essentially are saying that you don't know which one is the best [...] If there's five architectures, surely over time, 80% of them will be wrong. I think that our advantage is that we are singularly focused."

Huang didn't specifically name Intel in this statement, but Nvidia's focus on a single architecture stands in stark contrast to Intel's approach of offering five (coincidence?) different solutions for parallel workloads: CPUs, Xeon Phi, FPGAs, ASICs, and now GPUs.

Previously: Intel Announces Core H Laptop Chips With AMD Graphics and High Bandwidth Memory


Original Submission #1   Original Submission #2

 
  • (Score: 1, Insightful) by Anonymous Coward on Saturday November 11 2017, @08:31AM (10 children)

    by Anonymous Coward on Saturday November 11 2017, @08:31AM (#595534)

    It would be interesting to see said giant get off its ass again and do some interesting things.

    • (Score: 2) by cubancigar11 on Saturday November 11 2017, @09:52AM (9 children)

      by cubancigar11 (330) on Saturday November 11 2017, @09:52AM (#595541) Homepage Journal

      It will be as interesting as Intel launching x86 phones. The upper tier market is already taken and already dwindling. Unless Intel ends up doing some magic and establishes itself as a fierce competitor to Nvidia and AMD, I doubt this venture is going anywhere.

      Raja Koduri has a huge responsibility. I hope he is able to deliver. I have a fleeting suspicion that his departure is directly related to AMD betting against discrete graphics cards with their integrated-CPU-GPU vision.

      • (Score: 2) by takyon on Saturday November 11 2017, @10:13AM (4 children)

        by takyon (881) <takyonNO@SPAMsoylentnews.org> on Saturday November 11 2017, @10:13AM (#595543) Journal

        The upper tier market is already taken and already dwindling.

        The market is not dwindling.

        AMD posted profits [soylentnews.org] due to cryptocurrencies driving up sales of GPUs. Nvidia posted record profits [anandtech.com], again.

        Nvidia sells GPUs as a product for supercomputers, machine learning, driverless cars, etc. AMD is trying to do the same with a Tesla partnership. They can't afford to leave machine learning money on the table for Nvidia and Intel to snatch up.

        Some GPUs limit the FP64 capability. Example: GTX 780 Ti vs. GTX Titan Black [arrayfire.com]. Both the GTX and GTX Titan lines of cards could be used by gamers, but the Titan line is more desirable for double-precision applications including supercomputing.
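        The gap is easy to see for yourself. Below is a minimal CUDA sketch (illustrative only, not from the linked comparison; the kernel name, grid size, and iteration count are arbitrary choices) that times the same dependent multiply-add loop in FP32 and then in FP64:

        // fp_ratio.cu -- illustrative FP32 vs FP64 throughput sketch; names and
        // sizes are arbitrary. Build: nvcc -O3 fp_ratio.cu -o fp_ratio
        #include <cstdio>
        #include <cuda_runtime.h>

        template <typename T>
        __global__ void fma_loop(T* out, int iters) {
            T a = static_cast<T>(threadIdx.x) * static_cast<T>(0.001);
            T b = static_cast<T>(1.0000001);
            for (int i = 0; i < iters; ++i)
                a = a * b + b;                                   // dependent FMAs keep the FP units busy
            out[blockIdx.x * blockDim.x + threadIdx.x] = a;      // keep the result live
        }

        template <typename T>
        static float time_ms(int iters) {
            T* out;
            cudaMalloc(&out, 256 * 256 * sizeof(T));
            cudaEvent_t beg, end;
            cudaEventCreate(&beg);
            cudaEventCreate(&end);
            cudaEventRecord(beg);
            fma_loop<T><<<256, 256>>>(out, iters);               // 256 blocks x 256 threads
            cudaEventRecord(end);
            cudaEventSynchronize(end);
            float ms = 0.0f;
            cudaEventElapsedTime(&ms, beg, end);
            cudaFree(out);
            return ms;
        }

        int main() {
            float f32 = time_ms<float>(1 << 20);
            float f64 = time_ms<double>(1 << 20);
            printf("FP32 %.1f ms   FP64 %.1f ms   slowdown %.1fx\n", f32, f64, f64 / f32);
            return 0;
        }

        On a plain GeForce part, where FP64 runs at something like 1/24 or 1/32 of the FP32 rate, the printed slowdown is large; on a Titan Black with its 1/3 rate enabled (or on a Tesla card), the gap is far smaller, which is exactly the segmentation the linked comparison is about.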

        Intel already sells "manycore" chips for supercomputers, a spin-off of the failed Larrabee GPU. Maybe they can go back to the drawing board and either build a card that targets both gamers and enterprise/scientific users, or build one for enterprise/scientific use that can be easily neutered to work better for gamers. They can follow AMD's lead (Ryzen/Threadripper/Epyc) and use a multi-chip module to create different versions (such as a dual GPU).

        From the summary:

        it's a recognition by Intel probably that the GPU is just incredibly important now. The modern GPU is not a graphics accelerator, we just left the letter "G" in there, but these processors are domain-specific parallel accelerators, and they are enormously complex, they are the most complex processors built by anybody on the planet today. And that's the reason why IBM uses our processors for the world's largest supercomputers, [and] that's the reason why every single cloud, every major server around the world has adopted Nvidia GPUs.

        --
        [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
        • (Score: 0) by Anonymous Coward on Saturday November 11 2017, @10:41AM (1 child)

          by Anonymous Coward on Saturday November 11 2017, @10:41AM (#595547)

          Judging by all the other shit they have been producing lately, it will just be another obfuscated, proprietary, security-hole-ridden piece of bullshit, at a premium cost.

          Just like AAA videogames, CPU/chipset combos, and the aforementioned Movidius hardware/software stack, it will be a bunch of proprietary bullshit that people can't trust to do what they need, and like the Edison, i740, etc., it will just be shelved in a year or two as they indecisively choose to focus on another market, because they can't actually get the personnel and resources focused on a particular problem to solve it.

          They did this 20 years ago with the i752/754, again with the Larrabee/early Xeon Phi parts, and as we shall soon see, they are just doing it again with the new dGPU parts, only this time with code signing and further lockdowns that make it untrustworthy for any sort of secure computing use.

        • (Score: 0) by Anonymous Coward on Saturday November 11 2017, @06:58PM (1 child)

          by Anonymous Coward on Saturday November 11 2017, @06:58PM (#595686)

          Some GPUs limit the FP64 capability

          Actually it's FP16 that is the bigger issue:
          https://www.anandtech.com/show/10325/the-nvidia-geforce-gtx-1080-and-1070-founders-edition-review/5 [anandtech.com]

          • (Score: 2) by takyon on Saturday November 11 2017, @07:22PM

            by takyon (881) <takyonNO@SPAMsoylentnews.org> on Saturday November 11 2017, @07:22PM (#595694) Journal

            Whatever. It's the same story of Nvidia dumbing down its parts:

            As for why NVIDIA would want to make FP16 performance so slow on Pascal GeForce parts, I strongly suspect that the Maxwell 2 based GTX Titan X sold too well with compute users over the past 12 months, and that this is NVIDIA's reaction to that event. GTX Titan X's FP16 and FP32 performance was (per-clock) identical to its Tesla equivalent, the Tesla M40, and furthermore both cards shipped with 12GB of VRAM. This meant that other than Tesla-specific features such as drivers and support, there was little separating the two cards.

            The Titan series has always straddled the line between professional compute and consumer graphics users, however if it veers too far into the former then it puts Tesla sales at risk. Case in point: at this year’s NVIDIA GPU Technology Conference, I was approached twice by product vendors who were looking for more Titan X cards for their compute products, as at that time the Titan X was in short supply. Suffice it to say, Titan X has been very popular with the compute crowd.

            In any case, limiting the FP16 instruction rate on GeForce products is an easy way to ensure that these products don’t compete with the higher margin Tesla business. NVIDIA has only announced one Tesla so far – the high-end P100 – but even that sold out almost immediately. For now I suspect that NVIDIA wants to ensure that P100 and M40 sales are not impacted by the new GeForce cards.

            Stuff like that creates a POSSIBLE market opportunity for Intel, although they will probably end up doing the same differentiation.
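            To make the FP16 side concrete, here is a minimal CUDA sketch (illustrative only; names and sizes are arbitrary) using the packed __half2 intrinsics from cuda_fp16.h. The instructions exist on Pascal GeForce cards, but as the quote above explains, their issue rate is deliberately throttled there, so the same kernel runs dramatically faster on a Tesla P100:

            // fp16_fma.cu -- illustrative packed half-precision FMA loop.
            // Build: nvcc -arch=sm_60 -O3 fp16_fma.cu -o fp16_fma
            #include <cstdio>
            #include <cuda_fp16.h>

            __global__ void half2_fma(__half2* data, int n, int iters) {
                int i = blockIdx.x * blockDim.x + threadIdx.x;
                if (i >= n) return;
                __half2 a = data[i];
                __half2 b = __float2half2_rn(1.001f);   // broadcast a constant into both FP16 lanes
                for (int k = 0; k < iters; ++k)
                    a = __hfma2(a, b, b);               // 2-wide half-precision fused multiply-add
                data[i] = a;
            }

            int main() {
                const int n = 1 << 20;
                __half2* d = nullptr;
                cudaMalloc(&d, n * sizeof(__half2));
                cudaMemset(d, 0, n * sizeof(__half2));
                half2_fma<<<(n + 255) / 256, 256>>>(d, n, 1024);
                cudaDeviceSynchronize();
                printf("status: %s\n", cudaGetErrorString(cudaGetLastError()));
                cudaFree(d);
                return 0;
            }

            Per the AnandTech article linked above, GP104 (GTX 1080/1070) executes native FP16 at roughly 1/64 the FP32 rate, while GP100 runs it at twice the FP32 rate, so code like this is precisely where the GeForce/Tesla split shows up.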

            --
            [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
      • (Score: 3, Interesting) by takyon on Saturday November 11 2017, @10:42AM

        by takyon (881) <takyonNO@SPAMsoylentnews.org> on Saturday November 11 2017, @10:42AM (#595548) Journal

        The AMD "integrated discrete GPU" + HBM combo in 8th generation Core H-series chips looks like Intel's attempt to muscle out other discrete GPUs in laptops and some desktops. They could eventually replace the AMD component with their own. The on-die GPU will be able to communicate with the CPU much faster than a GPU not connected by the EMIB. It might be able to be used for compute purposes. Throw in another discrete GPU and Intel's iGPU can be used simultaneously with it, a feature of Vulkan [wikipedia.org] and Direct3D 12.

        --
        [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
      • (Score: 0) by Anonymous Coward on Saturday November 11 2017, @11:13AM (1 child)

        by Anonymous Coward on Saturday November 11 2017, @11:13AM (#595553)

        > It will be as interesting as Intel launching x86 phones

        That could have been a game changer, had they considered the phone as a mere form factor for a PC.
        Nokia did it with the N900.
        The powers that be did not want people with an SDR-enabled PC in their pockets, and the N900 was not pushed by Nokia itself.

        Intel does not need to innovate, it's another IBM.

      • (Score: 2) by frojack on Saturday November 11 2017, @08:17PM

        by frojack (1554) on Saturday November 11 2017, @08:17PM (#595713) Journal

        So how do you square this:

        I have a fleeting suspicion that his departure is directly related to AMD betting against discrete graphics cards with their integrated-CPU-GPU vision.

        With This:

        Intel announced that it had penned a deal with AMD to have the latter provide a discrete GPU to be integrated onto a future Intel SoC

        Both of them are aiming at putting the GPU back into the CPU.

        I see nothing but cooperation going on here. Quiet, under-the-table arrangements that could lead to a merger eventually. Remember that Intel has always tolerated AMD, with massive cross-license arrangements, initially because it needed competition to avoid government attention. Now, with ARM clones eating both their lunches, these two don't have to worry about that so much. Together they still own the performance market, and discrete GPUs are a bottleneck for both of them.

        Koduri moving helps both companies. Nobody but Nvidia is worried here.

        --
        No, you are mistaken. I've always had this sig.
  • (Score: 2) by crafoo on Saturday November 11 2017, @07:33PM (1 child)

    by crafoo (6639) on Saturday November 11 2017, @07:33PM (#595698)

    I'm only here to watch the demise of x86. It's always been trash. An open CPU implemented on an FPGA, paired with a discrete GPU, sounds great to me.

    • (Score: 2) by frojack on Saturday November 11 2017, @08:29PM

      by frojack (1554) on Saturday November 11 2017, @08:29PM (#595720) Journal

      The x86 line has been the most successful computer line of all time.

      Not because it was designed with the benefit of hindsight to be perfect, efficient, focused, and a power miser, but because it was always eclectic, and offered enough of something to just about every market. The General Motors of computers.

      But just as even GM is likely to transition to electric vehicles, x86 is unlikely to just go away any time soon.

      --
      No, you are mistaken. I've always had this sig.
  • (Score: 2) by RamiK on Saturday November 11 2017, @08:13PM

    by RamiK (1813) on Saturday November 11 2017, @08:13PM (#595709)

    Besides, companies don't need "focus". If there's enough overlap, a toy company can produce handgrips for rifles.

    Moreover, how many console bids did Intel lose over its lack of good graphics? How much IP did they develop that overlapped with GPU workloads but that they just didn't use or license? It's been 3 years since Alice v. CLS Bank, and Intel is finally waking up to realize that discrete graphics has evolved into massively parallel compute RISC cores, so there's little stopping them from doing their own and doing it well.

    --
    compiling...
  • (Score: 0) by Anonymous Coward on Sunday November 12 2017, @10:48AM

    by Anonymous Coward on Sunday November 12 2017, @10:48AM (#595875)

    Because, you know, the ME in Intel chips already has that part of the market cornered.
