
posted by martyb on Tuesday January 02 2018, @04:27PM   Printer-friendly
from the progress++ dept.

An Intel website leaked some details of the Intel Core i7-8809G, a "Kaby Lake" desktop CPU with on-package AMD Radeon graphics and High Bandwidth Memory 2.0. While it is listed as an 8th-generation part, 8th-generation "Coffee Lake" CPUs for desktop users have up to 6 cores (in other words, Intel has been releasing multiple microarchitectures as "8th-generation"). The i7-8809G may be officially announced at the Consumer Electronics Show next week.

The components are linked together using what Intel calls "embedded multi-die interconnect bridge technology" (EMIB). The thermal design power (TDP) of the entire package is around 100 Watts:

Intel at the original launch did state that they were using Core-H grade CPUs for the Intel with Radeon Graphics products, which would mean that the CPU portion is around 45W. This would leave ~55W for graphics, which would be around the RX 550 level: 8 CUs, 512 SPs, running at 1100 MHz. It is worth noting that AMD already puts up to 10 Vega CUs in its 15W processors, so with the i7-8809G, Intel has likely gone wider and slower: judging by the size of the silicon in the mockup, this could be more of a 20-24 CU design built within that 55W-75W window, depending on how the power budget is moved around between CPU and GPU. We await more information, of course.
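For a rough sense of how that estimate hangs together, here is a back-of-envelope sketch using only the figures from the quote; the per-CU power numbers are crude averages for illustration, not Intel or AMD data:

```c
/* Back-of-envelope check of the "wider and slower" guess above, using only
 * figures from the quoted estimate; per-CU power is a crude average, not
 * measured data. */
#include <stdio.h>

int main(void)
{
    const double package_tdp = 100.0;                  /* W, whole package        */
    const double cpu_tdp     = 45.0;                   /* W, Core-H class portion */
    const double gpu_budget  = package_tdp - cpu_tdp;  /* ~55 W for GPU + HBM2    */

    /* Desktop reference point: RX 550-level graphics, 8 CUs at 1100 MHz,
     * filling roughly that whole budget. */
    const double desktop_w_per_cu = gpu_budget / 8.0;  /* ~6.9 W per CU */

    /* Mobile reference point: up to 10 Vega CUs inside a 15 W APU
     * (CPU included), so well under 1.5 W per CU at low clocks. */
    const double mobile_w_per_cu = 15.0 / 10.0;

    printf("GPU budget: ~%.0f W\n", gpu_budget);
    printf("CUs at desktop (RX 550) clocks: ~%.0f\n", gpu_budget / desktop_w_per_cu);
    printf("CUs at mobile Vega clocks: up to ~%.0f\n", gpu_budget / mobile_w_per_cu);
    /* A 20-24 CU part clocked between those two extremes fits comfortably
     * inside a 55-75 W window. */
    return 0;
}
```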

It is rumored to include 4 GB of HBM2 on-package, while the CPU also supports DDR4-2400 memory. Two cheaper EMIB CPUs have been mentioned:

According to some other media, the 8809G will turbo to 4.1 GHz, while the graphics will feature 24 [compute units (CUs)] (1536 [stream processors (SPs)]) running at 1190 MHz, and the HBM2 is 4 GB running at 800 MHz. The same media are also listing the Core i7-8705G (20 CUs, 1000 MHz on 'Vega M GL', 700 MHz on HBM2) and a Core i7-8706G. None of the information from those sources has yet been verified by AnandTech or found on an official Intel webpage.
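As a sanity check, those rumored numbers hang together under the usual Vega/GCN and HBM2 rules of thumb (64 stream processors per CU, two FP32 operations per SP per clock, and a 1024-bit double-data-rate interface per HBM2 stack); here is a short sketch using only the leaked figures:

```c
/* Sanity check of the rumored i7-8809G graphics and memory figures using
 * generic Vega/GCN and HBM2 relationships; the formulas are rules of thumb,
 * not Intel or AMD statements. */
#include <stdio.h>

int main(void)
{
    const int    cus          = 24;      /* rumored compute units          */
    const double gpu_clock    = 1.190e9; /* Hz, rumored GPU clock          */
    const double hbm_clock    = 800.0e6; /* Hz, rumored HBM2 clock         */
    const int    hbm_bus_bits = 1024;    /* one HBM2 stack = 1024-bit bus  */

    int    sps    = cus * 64;                       /* 64 SPs per GCN/Vega CU  */
    double tflops = sps * 2.0 * gpu_clock / 1e12;   /* FMA = 2 FP32 ops/clock  */
    double gbs    = hbm_bus_bits * 2.0 * hbm_clock / 8.0 / 1e9; /* DDR, bytes  */

    printf("Stream processors: %d\n", sps);       /* 1536, matching the leak */
    printf("Peak FP32: ~%.1f TFLOPS\n", tflops);  /* ~3.7 TFLOPS             */
    printf("HBM2 bandwidth: ~%.0f GB/s\n", gbs);  /* ~205 GB/s per stack     */
    return 0;
}
```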

Currently available AMD Ryzen Mobile APUs only include 8-10 Vega CUs. These are mobile chips with a maximum TDP of 25 W; no desktop Ryzen chips with integrated graphics have been announced yet.

Previously: Intel Announces Core H Laptop Chips With AMD Graphics and High Bandwidth Memory


Original Submission

Related Stories

Intel Announces Core H Laptop Chips With AMD Graphics and High Bandwidth Memory 21 comments

Intel squeezed an AMD graphics chip, RAM and CPU into one module

the new processor integrates a "semi-custom" AMD graphics chip and the second generation of Intel's "High Bandwidth Memory (HBM2)", which is comparable to GDDR5 in a traditional laptop.

Intel CPU and AMD GPU, together at last

Summary of Intel's news:

The new product, which will be part of our 8th Gen Intel Core family, brings together our high-performing Intel Core H-series processor, second generation High Bandwidth Memory (HBM2) and a custom-to-Intel third-party discrete graphics chip from AMD's Radeon Technologies Group* – all in a single processor package.

[...] At the heart of this new design is EMIB (Embedded Multi-Die Interconnect Bridge), a small intelligent bridge that allows heterogeneous silicon to quickly pass information in extremely close proximity. EMIB eliminates height impact as well as manufacturing and design complexities, enabling faster, more powerful and more efficient products in smaller sizes. This is the first consumer product that takes advantage of EMIB.

[...] Additionally, this solution is the first mobile PC to use HBM2, which consumes much less power and takes up less space compared to traditional discrete graphics-based designs using dedicated graphics memory, like GDDR5 memory.

takyon: This is more like an "integrated discrete GPU" than standard integrated graphics. It also avoids the need for Intel to license AMD's IP. AMD also needs to make a lot of parts, since its wafer supply agreement with GlobalFoundries penalizes AMD if it buys fewer than a target number of wafers each year.

Also at AnandTech and Ars Technica.

Previously: AMD Stock Surges on Report of Intel Graphics Licensing Deal, 16-Core Ryzen Confirmed

Related: Samsung Increases Production of 8 GB High Bandwidth Memory 2.0 Stacks


Original Submission #1 | Original Submission #2

AMD at CES 2018 10 comments

At the Consumer Electronics Show, AMD confirmed details about products coming out in 2018:

  1. Ryzen 3 Mobile APUs: January 9th
  2. Ryzen Desktop APUs: February 12th
  3. Second Generation Ryzen Desktop Processors: April
  4. Ryzen Pro Mobile APUs: Q2 2018
  5. Second Generation Threadripper Processors: 2H 2018
  6. Second Generation Ryzen Pro Desktop Processors: 2H 2018

The second generation "Zen+" products use a "12nm" process. Zen 2 and Zen 3 will use a "7nm" and "7nm+" process and will be out around 2019-2020.

Two cheaper Ryzen-based mobile APUs have been released. The Ryzen 3 2300U has 4 cores, 4 threads, and the Ryzen 3 2200U has 2 cores, 4 threads, making it the first dual-core part in the entire Ryzen product line. All of the Ryzen mobile parts have a 15 W TDP so far.

AMD has also lowered the suggested pricing for many of its Ryzen CPUs. For example, the Ryzen 7 1700 drops to $299 from $329, and the Ryzen Threadripper 1900X is down to $449 from $549.

Intel has officially launched five new Kaby Lake CPUs with AMD Radeon Vega graphics and 4 GB of High Bandwidth Memory. Each CPU also includes Intel's HD 630 GT2 integrated graphics, which is expected to be used for lower power video encode/decode tasks.

Previously: AMD Launches First Two Ryzen Mobile APUs With Vega Graphics
Intel Core i7-8809G with Radeon Graphics and High Bandwidth Memory: Details Leaked


Original Submission

Intel Reveals Three New Packaging Technologies for Stitching Multiple Dies Into One Processor 12 comments

Intel will be using a few packaging technologies to connect CPU core "chiplets":

Intel revealed three new packaging technologies at SEMICON West: Co-EMIB, Omni-Directional Interconnect (ODI) and Multi-Die I/O (MDIO). These new technologies enable massive designs by stitching together multiple dies into one processor. Building upon Intel's 2.5D EMIB and 3D Foveros tech, the technologies aim to bring near-monolithic power and performance to heterogeneous packages. For the data-center, that could enable a platform scope that far exceeds the die-size limits of single dies.

[...] Compared to interposers, which can be reticle-sized (832 mm²) or even larger, [EMIB (Embedded Multi-die Interconnect Bridge)] is just a small (hence, cheap) piece of silicon. It provides the same bandwidth and energy-per-bit advantages of an interposer compared to standard package traces, which are traditionally used for multi-chip packages (MCPs), such as AMD's Infinity Fabric. (To some extent, because the PCH is a separate die, chiplets have actually been around for a very long time.)

[...] Intel showed off a concept product that contains four Foveros stacks, with each stack having eight small compute chiplets that are connected via TSVs to the base die. (So the role of Foveros there is to connect the chiplets as if it were a monolithic die.) Each Foveros stack is then interconnected via two (Co-)EMIB links with its two adjacent Foveros stacks. Co-EMIB is further used to connect the HBM and transceivers to the compute stacks.

Evidently, the cost of such a product would be enormous, as it essentially contains multiple traditional monolithic-class products in a single package. That's likely why Intel categorized it as a data-centric concept product, aimed mainly at the cloud players that are more than happy to absorb those costs in exchange for the extra performance.

[...] When they are ready, these technologies will provide Intel with powerful capabilities for the heterogeneous and data-centric era. On the client side, the benefits of advanced packaging include smaller package size and lower power consumption (for Lakefield, Intel claims a 10x SoC standby power improvement at 2.6mW). In the data center, advanced packaging will help to build very large and powerful platforms on a single package, with performance, latency, and power characteristics close to what a monolithic die would yield. The yield advantage of small chiplets and the establishment of a chiplet ecosystem are major drivers, too.

Also at The Register, VentureBeat, Guru3D, and PCWorld.

Related: Intel Core i7-8809G with Radeon Graphics and High Bandwidth Memory: Details Leaked
Intel Announces "Sunny Cove", Gen11 Graphics, Discrete Graphics Brand Name, 3D Packaging, and More
Intel Promises "10nm" Chips by the End of 2019, and More
Intel Details Lakefield CPU SoC With 3D Packaging and Big/Small Core Configuration
Intel's Jim Keller Promises That "Moore's Law" is Not Dead, Outlines 50x Improvement Plan


Original Submission

Samsung Develops 12-Layer 3D TSV DRAM 5 comments

Samsung has developed the first 12-layer High Bandwidth Memory stacks:

Samsung's 12-layer DRAM KGSDs (known good stack die) will feature 60,000 [through silicon via (TSV)] holes, which is why the manufacturer considers its technology one of the most challenging packaging approaches to mass produce. Despite the increase in the number of layers from eight to 12, the thickness of the package will remain at 720 microns, so Samsung's partners will not have to change anything on their side to use the new technology. It does mean that we're seeing DRAM layers getting thinner, with acceptable yields for high-end products.

One of the first products to use Samsung's 12-layer DRAM packaging technology will be the company's 24 GB HBM2 KGSDs, which will be mass produced shortly. These devices will allow developers of CPUs, GPUs, and FPGAs to install 48 GB or 96 GB of memory in the case of 2048-bit or 4096-bit buses, respectively. It also allows for 12 GB and 6 GB stacks in less dense configurations.
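The capacity and thickness figures follow directly from the per-die numbers quoted above; a quick sketch of the arithmetic (the 1024-bit-per-stack bus width is the standard HBM interface, an assumption not stated in the article):

```c
/* Arithmetic behind Samsung's 12-layer HBM2 figures quoted above. The
 * 1024-bit-per-stack interface width is a standard-HBM assumption, not
 * something stated in the article. */
#include <stdio.h>

int main(void)
{
    const int    layers          = 12;
    const double stack_height_um = 720.0;  /* package thickness, unchanged */
    const int    stack_gb        = 24;     /* 12-layer KGSD capacity       */

    printf("Per-layer budget: ~%.0f um (die plus bonding)\n",
           stack_height_um / layers);                     /* ~60 um per layer */
    printf("Per-die capacity: %d GB\n", stack_gb / layers);    /* 2 GB dies  */

    /* Total memory for 2048-bit and 4096-bit buses (2 and 4 stacks). */
    for (int bus = 2048; bus <= 4096; bus *= 2) {
        int stacks = bus / 1024;
        printf("%d-bit bus: %d stacks -> %d GB\n", bus, stacks, stacks * stack_gb);
    }
    return 0;
}
```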

"12-Hi" stacks were added to the HBM2 standard back in December, but there were no immediate plans by Samsung or SK Hynix to manufacture it.

Future AMD CPUs (particularly Epyc) may feature HBM stacks somewhere on the CPU package. Intel has already used its embedded multi-die interconnect bridge (EMIB) technology with HBM to create an advanced APU with AMD's own graphics, and is using HBM on field programmable gate arrays (FPGAs) and other products.

AMD's Radeon VII GPU has 16 GB of HBM2. Nvidia's V100 GPU has 16 or 32 GB on a 4,096-bit memory bus.

Also at Electronics Weekly.


Original Submission

  • (Score: 2, Informative) by Anonymous Coward on Tuesday January 02 2018, @04:51PM (6 children)

    by Anonymous Coward on Tuesday January 02 2018, @04:51PM (#616763)

    The term ["leak"] has started to lose its meaning. Sites that suck use it to attract viewers.

    I was going to blame AnandTech, but after going there to confirm things, the word "leak" isn't part of the content. The word appears where they reference information they had last year, but it doesn't relate to the Intel press release that the article is about. The info was posted on the other side of the world prior to being posted on this side of the world.

    Perhaps geographical regions that have differently timed day and night cycles due to, you know, actually not being in California, have scripts that publish authorized data on a schedule that doesn't follow Silicon Valley time? Strange and unusual, I know... Intel might even have cheap local resources posting it, too, who knows; it *IS* an Indian website where the info was first seen.

    But it's in no way a leak.

    • (Score: 2) by LoRdTAW on Tuesday January 02 2018, @05:10PM (1 child)

      by LoRdTAW (3755) on Tuesday January 02 2018, @05:10PM (#616774) Journal

      Agreed. This is a "sneak peek". Not a leak.

      • (Score: 3, Funny) by DannyB on Tuesday January 02 2018, @05:46PM

        by DannyB (5839) Subscriber Badge on Tuesday January 02 2018, @05:46PM (#616797) Journal

        If it were more info about Intel's Management Engine, it would be a whistleblower, not a "leak".

        Paid for by Americans for Renewable Complaining and Sustainable Whining.

        --
        The lower I set my standards the more accomplishments I have.
    • (Score: 2) by takyon on Tuesday January 02 2018, @05:39PM (3 children)

      by takyon (881) <takyonNO@SPAMsoylentnews.org> on Tuesday January 02 2018, @05:39PM (#616789) Journal

      The info was obviously published by accident before a regular press release:

      I imagine that this listing will come down fairly quickly. The product page that the link goes to for this chip gives a 404.

      And the very end of the article references leaks/rumors beyond what came from Intel's Indian website.

      --
      [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
      • (Score: 0) by Anonymous Coward on Tuesday January 02 2018, @06:22PM (2 children)

        by Anonymous Coward on Tuesday January 02 2018, @06:22PM (#616814)

        The "mistake" has saved Intel millions in advertisement costs.

        • (Score: 2) by takyon on Tuesday January 02 2018, @06:25PM (1 child)

          by takyon (881) <takyonNO@SPAMsoylentnews.org> on Tuesday January 02 2018, @06:25PM (#616817) Journal

          All they have to do is issue a press release to get coverage on these sites. One drone typing for an hour, cross-checked with marketing and legal. Probably several hundred dollars of expenditure, not millions. And they will still issue one even if ALL of the relevant details have already leaked.

          --
          [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
          • (Score: 2) by bob_super on Tuesday January 02 2018, @06:55PM

            by bob_super (1357) on Tuesday January 02 2018, @06:55PM (#616825)

            Dang, they still have much to learn from Apple, which gets full front page coverage on the potential that some rumor about the idea of a leak might be plausible, plus full-blown analysis of fanfiction photoshops.

  • (Score: 2, Interesting) by waximius on Tuesday January 02 2018, @08:31PM (6 children)

    by waximius (1136) on Tuesday January 02 2018, @08:31PM (#616875) Homepage

    Ok, I've seen this going on for a while and will now ask the dumb question - why is Intel integrating GPUs onto the same package as their processors?

    Don't misunderstand, I can see many benefits of on-package GPUs, but if we're going to disrupt motherboards and sockets again why are we not making faster/larger processors instead of trying to integrate GPUs? If we are creating more real estate on-package, I don't see that putting a GPU on it is the best decision for the additional space needed and heat produced.

    As far as I know, full video cards can run circles around on-package GPUs. They are modular (I can use an old CPU and upgrade my video card independently), and the technology can still be made faster (we're not done innovating in that space). So, what benefits are there to an on-chip GPU that it would ever be an attractive option for a buyer? It will increase cost, be useful for only a certain market segment, and right now I completely discount this combined package because I think a video card gives me more performance bang for my buck than on-chip GPUs.

    Lastly, why AMD? Their drivers are not near as good as NVidia's (IMHO), and they own less than half of the market share by comparison. I upgrade my rig infrequently, and I switched from AMD video cards back in 2012 because I had such a bad 4-year experience with AMD.

    I don't see anybody asking these questions, so I'm hoping the answer is obvious and I'm just missing it somewhere.

    Market share comparison:
    https://wccftech.com/nvidia-amd-discrete-gpu-market-share-report-q3-2017/ [wccftech.com]

    • (Score: 4, Interesting) by LoRdTAW on Tuesday January 02 2018, @09:25PM (5 children)

      by LoRdTAW (3755) on Tuesday January 02 2018, @09:25PM (#616900) Journal

      You're thinking small.

      Don't misunderstand, I can see many benefits of on-package GPUs, but if we're going to disrupt motherboards and sockets again why are we not making faster/larger processors instead of trying to integrate GPUs? If we are creating more real estate on-package, I don't see that putting a GPU on it is the best decision for the additional space needed and heat produced.

      Have you looked at the Intel and AMD server/HPC offerings? Plenty of big CPUs with lots of cores there.

      As far as I know, full video cards can run circles around on-package GPUs. They are modular (I can use an old CPU and upgrade my video card independently), and the technology can still be made faster (we're not done innovating in that space). So, what benefits are there to an on-chip GPU that it would ever be an attractive option for a buyer? It will increase cost, be useful for only a certain market segment, and right now I completely discount this combined package because I think a video card gives me more performance bang for my buck than on-chip GPUs.

      There's this thing called the internet, which is heavily driven by visual content delivered to screens. With all the new web technologies such as WebGL, streaming video, and all sorts of other stuff, why would you not include a GPU? It's a desktop necessity nowadays. Just because it can't play Crysis in 4K at 240Hz doesn't mean it's useless.

      Lastly, why AMD? Their drivers are not near as good as NVidia's (IMHO), and they own less than half of the market share by comparison. I upgrade my rig infrequently, and I switched from AMD video cards back in 2012 because I had such a bad 4-year experience with AMD.

      Because Intel and AMD already cross-license technologies (hello, x86-64!). Business-wise, Nvidia doesn't need Intel, as they are doing quite well in the mobile, HPC, AI, deep learning, and autonomous automotive markets. And HPC/AI/deep learning is very profitable, as you can sell shitloads of chips at once to big customers with DEEP pockets. Intel is gearing up in some of those areas with the Xeon Phi and the FPGA tech they got from Altera. So Intel and Nvidia are going to compete head to head in those markets, where AMD is pretty much absent. The enemy of my enemy is my friend, and AMD is more of a friend than Nvidia at this point. And as for your driver complaint, do you think Intel would let that be a problem? I mean, who better than Intel to get those damn drivers into the Linux kernel? Intel CPU with excellent GPU with mainlined kernel drivers: win-win in my book.

      • (Score: 0) by Anonymous Coward on Tuesday January 02 2018, @09:58PM (1 child)

        by Anonymous Coward on Tuesday January 02 2018, @09:58PM (#616916)

        And as for your driver complaint, you think Intel would let that be a problem?

        Ever hear of GMA500? Poulsbo? Yeah, considering how Intel screwed me once with integrated graphics licensed from a third party, I absolutely believe they'd do it again.

        • (Score: 2) by LoRdTAW on Wednesday January 03 2018, @07:41PM

          by LoRdTAW (3755) on Wednesday January 03 2018, @07:41PM (#617308) Journal

          Yeah, that was a hiccup. I had one of the Diamondvilles on an Intel ITX board with the GMA950. Terrible performance, but it was my first ITX/low-power system to play with. I had it hooked to a TV for a while as a media player, which it sucked at when it came to HD, but I didn't really care; then it was a small desktop before I shelved it.

      • (Score: 1) by waximius on Tuesday January 02 2018, @10:18PM (2 children)

        by waximius (1136) on Tuesday January 02 2018, @10:18PM (#616925) Homepage

        Thank you, great information. One follow up to your point:

        There's this thing called the internet, which is heavily driven by visual content delivered to screens. With all the new web technologies such as WebGL, streaming video, and all sorts of other stuff, why would you not include a GPU? It's a desktop necessity nowadays. Just because it can't play Crysis in 4K at 240Hz doesn't mean it's useless.

        I didn't mean to imply that I thought this configuration was useless, but that it applies to a limited market segment. Based on what you say though, I can see that the segment is much broader than I initially thought. If I understand right, the on-package GPU could be used for rendering lighter weight things, and a full video card could still be added and utilized for heavier weight applications like gaming. It's not an "either-or" situation, but a "yes-and".

        I like that as long as an Intel+GPU combo doesn't speed up obsolescence. My current configuration of CPU + video card runs just fine 10 years after I built it. I've upgraded to an SSD, and upgraded my video card multiple times, but am still using a Core-i7 920 and only have 6GB RAM. The longevity of that processor has been amazing, and I'm just now thinking I should upgrade the CPU. Having the GPU on-chip scares me only because I feel like I need to upgrade my video card somewhat regularly (based on need, but games require better hardware every year).

        In any case, thanks for the response, very informative.

        • (Score: 3, Informative) by takyon on Tuesday January 02 2018, @10:45PM

          by takyon (881) <takyonNO@SPAMsoylentnews.org> on Tuesday January 02 2018, @10:45PM (#616942) Journal

          The combination of an on-package GPU and the High Bandwidth Memory may have some advantages over discrete GPUs. Moving everything closer together helps overcome certain limits:

          http://www.nersc.gov/users/computational-systems/cori/application-porting-and-performance/using-on-package-memory/ [nersc.gov]

          Some users want smaller form factors, for Home Theater PCs (HTPCs) for example. This kind of on-package stuff might be cheaper than using a discrete GPU, with lower power consumption, but with better performance than integrated graphics. It might be worth it.

          Also, I was not sure when writing the summary, but I think both the Intel integrated graphics and the AMD Radeon Vega graphics may be included on these chips, in which case you might have a setup well suited to newer graphics APIs like Vulkan, which can take advantage of these disparate assets.
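          For illustration, a minimal sketch of how an application could enumerate both GPUs through Vulkan and pick one per workload (generic Vulkan, not anything specific to these chips; it assumes a Vulkan loader and drivers are installed):

```c
/* List every Vulkan physical device so an application could see both an
 * integrated Intel GPU and an on-package/discrete Radeon GPU and choose
 * one per workload. Generic Vulkan 1.0; error handling kept minimal.
 * Build (Linux): cc list_gpus.c -lvulkan -o list_gpus */
#include <stdio.h>
#include <vulkan/vulkan.h>

int main(void)
{
    VkApplicationInfo app = {
        .sType = VK_STRUCTURE_TYPE_APPLICATION_INFO,
        .pApplicationName = "gpu-list",
        .apiVersion = VK_API_VERSION_1_0,
    };
    VkInstanceCreateInfo create_info = {
        .sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO,
        .pApplicationInfo = &app,
    };
    VkInstance instance;
    if (vkCreateInstance(&create_info, NULL, &instance) != VK_SUCCESS) {
        fprintf(stderr, "could not create a Vulkan instance\n");
        return 1;
    }

    uint32_t count = 0;
    vkEnumeratePhysicalDevices(instance, &count, NULL);
    VkPhysicalDevice devices[8];
    if (count > 8)
        count = 8;
    vkEnumeratePhysicalDevices(instance, &count, devices);

    for (uint32_t i = 0; i < count; i++) {
        VkPhysicalDeviceProperties props;
        vkGetPhysicalDeviceProperties(devices[i], &props);
        /* deviceType distinguishes integrated from discrete GPUs */
        const char *type =
            props.deviceType == VK_PHYSICAL_DEVICE_TYPE_INTEGRATED_GPU ? "integrated" :
            props.deviceType == VK_PHYSICAL_DEVICE_TYPE_DISCRETE_GPU   ? "discrete"   :
            "other";
        printf("GPU %u: %s (%s, vendor 0x%04X)\n",
               i, props.deviceName, type, (unsigned)props.vendorID);
    }

    vkDestroyInstance(instance, NULL);
    return 0;
}
```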

          --
          [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
        • (Score: 3, Interesting) by LoRdTAW on Wednesday January 03 2018, @03:12AM

          by LoRdTAW (3755) on Wednesday January 03 2018, @03:12AM (#617051) Journal

          There are plenty of use cases where the on-die GPU is basically a necessity, even if it is low end. One I forgot to mention is that I think Chrome renders pages on the GPU if supported. Then there are GPU-accelerated desktop compositing window managers. All of that is used on business desktops daily.

          My Linux box is an AMD A10 APU, which has plenty of CPU and GPU for all the basics: development stuff and web browsing. I now wish it would morph into a Ryzen, but I'm not spending money on it just to have it, as I don't need all that CPU.

  • (Score: 2) by Azuma Hazuki on Tuesday January 02 2018, @10:27PM (3 children)

    by Azuma Hazuki (5086) on Tuesday January 02 2018, @10:27PM (#616932) Journal

    I don't like this. Intel poached Koduri essentially--we know *nothing* of how or why he joined Intel!--and now appears to be trying to create a mashup of Ryzen CPU performance with Ravenridge-level IGP. It's an interesting technical angle, but something about this smells as far as the corporate side goes. Intel fights fucking dirty, always did, and they seem to want to undermine AMD instead of competing with them.

    --
    I am "that girl" your mother warned you about...
    • (Score: 4, Interesting) by takyon on Tuesday January 02 2018, @10:39PM (2 children)

      by takyon (881) <takyonNO@SPAMsoylentnews.org> on Tuesday January 02 2018, @10:39PM (#616940) Journal

      AMD has done well with Ryzen and GPUs recently, but their easy cryptomining cash boost will probably dry up soon.

      This deal lets them tap into another source of revenue. It probably doesn't help them with market share, but they can leech off of Intel's.

      This is also less of a devastating self-flagellation for AMD than it might have been before they launched Ryzen. Ryzen has partially closed a huge gap with Intel's CPUs in IPC, and it has inserted AMD back into desktops, which they had all but abandoned. Future iterations of their hardware might be somewhat more competitive with Intel, since Intel has struggled to move past 14nm, and although GlobalFoundries, Samsung, et al. are said to have crappier process nodes than Intel, they are moving a little faster (for example, GlobalFoundries' 7nm might be comparable to Intel's 10nm, but 7nm will be around before Intel gets much 10nm stuff out).

      AMD's next big move may be to muscle into the machine learning and automotive territory where Nvidia is riding high. There was talk of a GPU made for Tesla (the car company, not the Nvidia product).

      --
      [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
      • (Score: 0) by Anonymous Coward on Tuesday January 02 2018, @10:57PM

        by Anonymous Coward on Tuesday January 02 2018, @10:57PM (#616951)

        With Ryzen and Vega, AMD has fully thrown itself into the 'Secured for government spying' Arena, with backdoored CPUs and GPUs with signed firmware and potential hypervisor level backdoors that co-opt what little of the mainstream computer market was left after Intel was done with it.

        If you consider this from the intelligence-technology complex mindset, throwing AMD a few bones to help provide access to 99 percent of non-cellular computer users *IN THE WORLD* is a bargain: it keeps them in business while also ensuring the entire x86 market follows, in lock-step, the controls being put in place to either control or surveil the public.

        Given that basically all major software requires x86 today, even the software that doesn't require windows, this gives a dramatic amount of potential (even if unused) surveillance power to the gatekeepers who control it. In this case the NSA, Mossad, GCHQ, and possibly Japan through SoftBank's newfound ownership of ARM.

        Without competitors coming onto the market, especially competitors from other regions/nationalities, and ideally from countries that still believe in privacy, if not free speech, we are rapidly approaching the sort of technological tipping point that Continuum depicted, with one technology co-opting control of the world toward one group's ideological control.

      • (Score: 2) by LoRdTAW on Wednesday January 03 2018, @03:24AM

        by LoRdTAW (3755) on Wednesday January 03 2018, @03:24AM (#617053) Journal

        And, let's just throw this in here: what if Intel worked with AMD to enable AMD discrete GPU cards to work in tandem with the on-package graphics in a laptop or desktop? You could have it either way and everything plays nicely: Ryzen APU + AMD GPU || Intel [AMD] APU + AMD GPU. Pick your CPU core of choice. Then the main GPU can be turned off when just browsing or watching Netflix, and throttle up for the latest mmor-fps-rpg-whatever or buttcoin mining.
