posted by takyon on Sunday February 17 2019, @10:08PM   Printer-friendly
from the discreet-gpu dept.

Submitted via IRC for Bytram

Intel Linux Graphics Driver Adding Device Local Memory - Possible Start of dGPU Bring-Up

A big patch series was sent out today, amounting to 42 patches and over four thousand lines of code, introducing the concept of memory regions to the Intel Linux graphics driver. The memory regions support is preparation for device local memory with future Intel graphics products.

The concept of memory regions is being added to the Intel "i915" Linux kernel DRM driver in "preparation for upcoming devices with device local memory." The idea is to have distinct "regions" of memory: one for system memory and others for any device local memory (LMEM). Today's published code also introduces a simple allocator and allows the existing GEM memory management code to allocate buffers out of these different memory regions. Up to now, Intel integrated graphics haven't needed this functionality, not even with the eDRAM/L4 cache of select graphics processors.
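For a feel of what "memory regions" means in practice, here is a rough, hypothetical sketch in C. The names and structures below are not taken from the actual patch series; they only illustrate the general idea of tagging each buffer object with the region (system memory vs. device local memory) that backs it, together with a trivial per-region allocator:

    #include <stddef.h>
    #include <stdint.h>

    /* Illustrative only: hypothetical, greatly simplified stand-ins for the
     * driver's real structures. */
    enum mem_region_type {
        REGION_SMEM,    /* ordinary system memory */
        REGION_LMEM     /* device local memory, e.g. dedicated vRAM */
    };

    struct mem_region {
        enum mem_region_type type;
        size_t total;   /* size of the region in bytes */
        size_t used;    /* bytes already handed out */
    };

    struct gem_object {
        struct mem_region *region;  /* which region backs this buffer */
        size_t size;
        uint64_t offset;            /* placement within the region */
    };

    /* Trivial bump allocator; a real driver would need something far more
     * capable (eviction, alignment, per-region free lists, ...). */
    static int region_alloc(struct mem_region *r, struct gem_object *obj,
                            size_t size)
    {
        if (r->used + size > r->total)
            return -1;              /* region exhausted */
        obj->region = r;
        obj->size = size;
        obj->offset = r->used;
        r->used += size;
        return 0;
    }

The point is simply that once each buffer carries a region, the GEM code can decide per allocation whether it lives in system memory or in the GPU's own memory.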

This device-local memory for future Intel GPUs is almost surely for Intel's discrete graphics cards with dedicated vRAM, expected to debut in 2020. (For the past several generations of Iris Pro with eDRAM, the Intel Linux driver has already supported that functionality.) The patch message itself makes it clear that this is for "upcoming devices," but it enables no hardware support at this time. This memory region code doesn't touch any of the existing hardware support, such as the already mainlined Icelake "Gen 11" graphics code.

Previously: Intel Planning a Return to the Discrete GPU Market, Nvidia CEO Responds
Intel Discrete GPU Planned to be Released in 2020
Intel Announces "Sunny Cove", Gen11 Graphics, Discrete Graphics Brand Name, 3D Packaging, and More


Original Submission

Related Stories

Intel Planning a Return to the Discrete GPU Market, Nvidia CEO Responds 15 comments

Intel isn't just poaching a prominent AMD employee. Intel is planning a return to the discrete GPU market:

On Monday, Intel announced that it had penned a deal with AMD to have the latter provide a discrete GPU to be integrated onto a future Intel SoC. On Tuesday, AMD announced that their chief GPU architect, Raja Koduri, was leaving the company. Now today the saga continues, as Intel is announcing that they have hired Raja Koduri to serve as their own GPU chief architect. And Raja's task will not be a small one; with his hire, Intel will be developing their own high-end discrete GPUs.

[...] [P]erhaps the only news that can outshine the fact that Raja Koduri is joining Intel is what he will be doing for Intel. As part of today's revelation, Intel has announced that they are instituting a new top-to-bottom GPU strategy. At the bottom, the company wants to extend their existing iGPU market into new classes of edge devices, and while Intel doesn't go into much more detail than this, the fact that they use the term "edge" strongly implies that we're talking about IoT-class devices, where edge goes hand-in-hand with neural network inference. This is a field Intel already plays in to some extent with their Atom processors on the GPU side, and their Movidius neural compute engines on the dedicated silicon side.

However, in what's likely the most exciting part of this news for PC enthusiasts and the tech industry as a whole, aiming at the top of the market means Intel will once again be developing discrete GPUs. The company has tried this route twice before: once in the early days with the i740 in the late 90s, and again with the aborted Larrabee project in the late 2000s. However, even though these efforts never panned out quite like Intel had hoped, the company has continued to develop their GPU architecture and GPU-like devices, the latter embodied by the massively parallel, compute-focused Xeon Phi family.

Yet while Intel has GPU-like products for certain markets, the company doesn't have a proper GPU solution once you get beyond their existing GT4-class iGPUs, which are, roughly speaking, on par with discrete GPUs in the $150 range. Which is to say that Intel has no access to the midrange market or above with their iGPUs. With the hiring of Raja and Intel's new direction, the company is going to be expanding into full discrete GPUs for what the company calls "a broad range of computing segments."

Intel Discrete GPU Planned to be Released in 2020 8 comments

Intel's First (Modern) Discrete GPU Set For 2020

In a very short tweet posted to their Twitter feed yesterday, Intel revealed/confirmed the launch date for their first discrete GPU developed under the company's new dGPU initiative. The otherwise unnamed high-end GPU will be launching in 2020, a short two to two-and-a-half years from now.

[...] This new GPU would be the first GPU to come out of Intel's revitalized GPU efforts, which kicked into high gear at the end of 2017 with the hiring of former AMD and Apple GPU boss Raja Koduri. Intel of course is in the midst of watching sometimes-ally and sometimes-rival NVIDIA grow at a nearly absurd pace thanks to the machine learning boom, so Intel's third shot at dGPUs is ultimately an effort to establish themselves in a market for accelerators that is no longer niche but is increasingly splitting off customers who previously would have relied entirely on Intel CPUs.

[...] Intel isn't saying anything else about the GPU at this time. Though we do know from Intel's statements when they hired Koduri that they're starting with high-end GPUs, a fitting choice given the accelerator market Intel is going after. This GPU is almost certainly aimed at compute users first and foremost – especially if Intel adopts a bleeding edge-like strategy that AMD and NVIDIA have started to favor – but Intel's dGPU efforts are not entirely focused on professionals. Intel has also confirmed that they want to go after the gaming market as well, though what that would entail – and when – is another question entirely.

Previously: AMD's Radeon Technologies Group Boss Raja Koduri Leaves, Confirmed to be Defecting to Intel
Intel Planning a Return to the Discrete GPU Market, Nvidia CEO Responds


Original Submission

Intel Announces "Sunny Cove", Gen11 Graphics, Discrete Graphics Brand Name, 3D Packaging, and More 23 comments

Intel has announced new developments at its Architecture Day 2018:

Sunny Cove, built on 10nm, will come to market in 2019 and offer increased single-threaded performance, new instructions, and 'improved scalability'. Intel went into more detail about the Sunny Cove microarchitecture, which is in the next part of this article. To avoid doubt, Sunny Cove will have AVX-512. We believe that these cores, when paired with Gen11 graphics, will be called Ice Lake.

Willow Cove looks like it will be a 2020 core design, most likely also on 10nm. Intel lists the highlights here as a cache redesign (which might mean L1/L2 adjustments), new transistor optimizations (manufacturing based), and additional security features, likely referring to further protections against new classes of side-channel attacks. Golden Cove rounds out the trio, and is firmly in that 2021 segment in the graph. Process node here is a question mark, but we're likely to see it on 10nm and/or 7nm. Golden Cove is where Intel adds another slice of the serious pie onto its plate, with an increase in single-threaded performance, a focus on AI performance, and potential networking and AI additions to the core design. Security features also look like they get a boost.

Intel says that GT2 Gen11 integrated graphics with 64 execution units will reach 1 teraflops of performance. It compared the graphics solution to previous-generation GT2 graphics with 24 execution units, but did not mention Iris Plus Graphics GT3e, which already reached around 800-900 gigaflops with 48 execution units. The GPU will support Adaptive Sync, which is the standardized version of AMD's FreeSync, enabling variable refresh rates over DisplayPort and reducing screen tearing.
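The 1 teraflop figure is consistent with a back-of-the-envelope estimate: each Intel Gen execution unit can retire 16 single-precision FLOPs per clock (two SIMD-4 ALUs performing fused multiply-adds), so peak throughput is roughly EUs × 16 × clock. A minimal sketch, with the clock speeds below being illustrative assumptions rather than Intel's published specifications:

    #include <stdio.h>

    /* Rough peak FP32 estimate: EUs * 16 FLOPs/clock * clock (GHz) = GFLOPS.
     * Clock values are assumptions for illustration, not official specs. */
    static double peak_gflops(int eus, double clock_ghz)
    {
        return eus * 16.0 * clock_ghz;
    }

    int main(void)
    {
        printf("Gen11 GT2, 64 EU @ ~1.0 GHz: ~%.0f GFLOPS\n", peak_gflops(64, 1.0));
        printf("Gen9 GT3e, 48 EU @ ~1.1 GHz: ~%.0f GFLOPS\n", peak_gflops(48, 1.1));
        return 0;
    }

That works out to roughly 1024 GFLOPS for the 64-EU part and roughly 845 GFLOPS for the 48-EU part, in line with the figures quoted above.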

Intel's upcoming discrete graphics cards, planned for release around 2020, will be branded Xe. Xe will cover configurations from integrated and entry-level cards all the way up to datacenter-oriented products.

Like AMD, Intel will also organize cores into "chiplets". But it also announced FOVEROS, a 3D packaging technology that will allow it to mix chips from different process nodes, stack DRAM on top of components, etc. A related development is Intel's demonstration of "hybrid x86" CPUs. Like ARM's big.LITTLE and DynamIQ heterogeneous computing architectures, Intel can combine its large "Core" cores with smaller Atom cores. In fact, it created a 12mm×12mm×1mm SoC (compare to a dime coin, which has a diameter of 17.91mm and a thickness of 1.35mm) with a single "Sunny Cove" core, four Atom cores, Gen11 graphics, and just 2 mW of standby power draw.


Original Submission

  • (Score: 0) by Anonymous Coward on Sunday February 17 2019, @10:58PM (4 children)

    by Anonymous Coward on Sunday February 17 2019, @10:58PM (#802641)

    was sent

    It is important to say that the patches were sent by Intel people, from their work email, and not by some enthusiasts. This is clear from the diffs.

    • (Score: 1, Interesting) by Anonymous Coward on Sunday February 17 2019, @11:00PM (3 children)

      by Anonymous Coward on Sunday February 17 2019, @11:00PM (#802643)

      Why is this important?

      • (Score: 2) by tibman on Monday February 18 2019, @12:35AM (2 children)

        by tibman (134) Subscriber Badge on Monday February 18 2019, @12:35AM (#802686)

        Official Intel patches vs non-official. Seems important to me.

        --
        SN won't survive on lurkers alone. Write comments.
        • (Score: 1, Insightful) by Anonymous Coward on Monday February 18 2019, @12:41AM (1 child)

          by Anonymous Coward on Monday February 18 2019, @12:41AM (#802692)

          Seems important to me.

          Why?

          • (Score: 2) by tibman on Saturday February 23 2019, @01:38AM

            by tibman (134) Subscriber Badge on Saturday February 23 2019, @01:38AM (#805428)

            What a waste of a comment. A company officially supporting its products on linux is objectively better than not supporting its products.

            --
            SN won't survive on lurkers alone. Write comments.
  • (Score: 2) by krishnoid on Sunday February 17 2019, @11:35PM

    by krishnoid (1156) on Sunday February 17 2019, @11:35PM (#802662)

Here, I think it means Direct Rendering Manager, right? I don't know what that means, either, though :-)

  • (Score: 0) by Anonymous Coward on Monday February 18 2019, @01:02AM

    by Anonymous Coward on Monday February 18 2019, @01:02AM (#802703)

The existing solutions are enterprise grade and with prices over six thousand dollars, pretty far away from the realm where one just wants to have a little fun with a couple of virtual instances.
