
posted by janrinok on Tuesday May 24 2022, @10:12AM   Printer-friendly
from the talk-to-me-of-Mendocino dept.

AMD has announced "Mendocino", a mid-range chip for Windows and ChromeOS laptops that will launch in Q4 2022. The Mendocino die has a quad-core Zen 2 CPU and an unspecified number of RDNA2 graphics cores, and uses LPDDR5 memory. It looks similar if not identical to the Van Gogh chip used in Valve's Steam Deck, except that it uses TSMC's "6nm" process instead of "7nm".

AMD minting a new Zen 2-based APU in late 2022 is, at first blush, an unusual move, especially since the company is already two generations into mobile Zen 3. But for the low-end market it makes a fair bit of sense. Architecturally, Zen 3's CPU complexes (CCXes) are optimized for 8C designs; when AMD needs fewer cores than that (e.g. the Ryzen 3 5400U), it has been using salvaged 8C dies. For Zen 2, on the other hand, the native CCX size is 4, which allows AMD to quickly (and cheaply) design an SoC from existing IP blocks, as opposed to engineering a proper 4C Zen 3 CCX.

AMD's Ryzen 7000 series of desktop CPUs will launch this fall on a new AM5 socket, with a Land Grid Array (LGA) design. The heat spreader for the CPUs has cutouts on the top for capacitors, while the back is completely covered with pads (not pins like on AM4 CPUs). AM5 CPUs will only use dual-channel DDR5 memory, with no mixed DDR4/DDR5 support like Intel's latest Alder Lake CPUs.

Three new chipsets have been announced for the first AM5 motherboards: X670E (the 'E' is for "Extreme"), X670, and B650. These are differentiated primarily by the guaranteed level of support for PCIe 5.0 devices. X670E should support up to two PCIe 5.0 graphics card slots and multiple PCIe 5.0 SSDs, whereas B650 may only support a single PCIe 5.0 SSD, using PCIe 3/4 elsewhere. PCIe 5.0 x4 supports theoretical sequential read speeds of 16 GB/s, with SSDs in the real world likely reaching 14 GB/s.
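
As a rough sketch of where that 16 GB/s figure comes from: PCIe 5.0 runs at 32 GT/s per lane with 128b/130b encoding, so four lanes carry just under 16 GB/s of raw payload in each direction. The arithmetic below ignores TLP/DLLP protocol overhead, which is why real SSDs top out closer to 14 GB/s; the helper function is illustrative, not from any particular library.

```python
# Back-of-the-envelope PCIe payload bandwidth (one direction).
# Assumes 128b/130b line coding (PCIe 3.0 and later) and ignores
# protocol overhead, so real-world throughput is somewhat lower.
def pcie_bandwidth_gbs(transfer_rate_gt_s: float, lanes: int) -> float:
    encoding_efficiency = 128 / 130
    return transfer_rate_gt_s * encoding_efficiency * lanes / 8  # bits -> bytes

print(f"PCIe 5.0 x4:  {pcie_bandwidth_gbs(32.0, 4):.1f} GB/s")   # ~15.8
print(f"PCIe 4.0 x4:  {pcie_bandwidth_gbs(16.0, 4):.1f} GB/s")   # ~7.9
print(f"PCIe 5.0 x16: {pcie_bandwidth_gbs(32.0, 16):.1f} GB/s")  # ~63.0
```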

The "6nm" I/O die inside Ryzen 7000 CPUs will include integrated RDNA2 graphics (again, an unspecified amount) and support up to 4 display outputs, including HDMI 2.1/DisplayPort 2.0. The move from a "14nm" GlobalFoundries I/O die down to TSMC "6nm" along with other improvements will likely lower idle power consumption.

L2 cache per Zen 4 core has been doubled to 1 MiB from Zen 3. A 16-core Ryzen 7000 chip was demonstrated boosting up to 5.5 GHz (single core), which could account for the majority of the CPU's performance increase given that AMD is currently only claiming a ">15% single-thread uplift" vs. Zen 3. The higher clock speeds could be due to the use of TSMC's "5nm" process for CPU cores, as well as a higher 170 Watt TDP/PPT. The CPUs will also include "expanded instructions" for "AI acceleration", which may refer to formats like bfloat16 and int8/int4, if not AVX-512.
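
Some quick arithmetic supports the idea that clocks do most of the work in that claim. Assuming the 5.5 GHz demo is being compared against the Ryzen 9 5950X's 4.9 GHz single-core boost (an assumption; AMD did not name the comparison chip), frequency alone accounts for roughly 12 of the 15-plus percentage points:

```python
# Rough decomposition of the ">15%" single-thread claim.
# Assumption: the Zen 3 baseline is the Ryzen 9 5950X at 4.9 GHz
# single-core boost; AMD has not named the comparison part.
zen3_boost_ghz = 4.9
zen4_boost_ghz = 5.5  # demonstrated on a 16-core Ryzen 7000 sample

clock_gain = zen4_boost_ghz / zen3_boost_ghz - 1
print(f"From frequency alone: {clock_gain:.1%}")  # ~12.2%

claimed_uplift = 0.15  # lower bound of AMD's claim
ipc_gain = (1 + claimed_uplift) / (1 + clock_gain) - 1
print(f"Implied IPC gain at exactly 15%: {ipc_gain:.1%}")  # ~2.5%
```

If the real uplift lands well above 15%, the implied IPC gain grows accordingly.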

See also: The Steam Deck APU gets a 6nm refresh to power AMD's best-in-class budget laptops


Original Submission

 
  • (Score: 1, Touché) by Anonymous Coward on Tuesday May 24 2022, @11:54AM (11 children)

    by Anonymous Coward on Tuesday May 24 2022, @11:54AM (#1247435)

    I have to buy an expensive new AM5 motherboard, expensive new DDR5 RAM and an expensive new Ryzen 7000.... to get a 15% single-thread uplift? I hope they throw in an "I Love Lisa Su" T-shirt to make the deal more worthwhile.

    • (Score: 2) by RamiK on Tuesday May 24 2022, @12:25PM (9 children)

      by RamiK (1813) on Tuesday May 24 2022, @12:25PM (#1247445)

      The current bottleneck for games is the GPU for ray-tracing so having another GPU / AI accelerator slot in the intermediate price tier is pretty good.

      --
      compiling...
      • (Score: 1, Insightful) by Anonymous Coward on Tuesday May 24 2022, @12:36PM

        by Anonymous Coward on Tuesday May 24 2022, @12:36PM (#1247446)

Dual GPU systems are out of fashion right now, partly because manufacturers don't like them (lower profit margins on mid-range GPUs compared to high end), partly because of performance problems (not all games work with them, plus frame synchronization/latency issues), and of course because of the general GPU shortage, which is only now subsiding.

        Multiple GPU slots remain useful for virtualization, people who want a lot of monitors, driver developers, people doing high performance GPU computing, and the dozen or so people still trying to do crypto mining on their main PC. All fairly niche stuff, but enough to support a high end motherboard option.

      • (Score: 3, Interesting) by takyon on Tuesday May 24 2022, @04:33PM (7 children)

        by takyon (881) <takyonNO@SPAMsoylentnews.org> on Tuesday May 24 2022, @04:33PM (#1247488) Journal

        From what I understand SLI/Crossfire gaming is dead, because developers don't want to deal with it anymore. Instead we will see single GPUs reach absurd power levels, such as 450-600 Watts for the upcoming RTX 4090 Ti, and we will see multi-chip module (chiplet) designs (e.g. RDNA3) that are internally similar to dual/quad GPUs, but are treated as a single GPU by the OS and games.

Details on these chipsets are subject to change, because below X670E there's still a bit of optionality, and there is a rumored B650E chipset, just to make things more confusing. But companies are already teasing their motherboards, and you can see an example of dual slots here:

        https://wccftech.com/asus-unveils-next-gen-x670e-x670-motherboards-including-its-flagship-rog-crosshair-x670e-extreme/ [wccftech.com]

        Despite having two PCIe 5.0 x16 slots, you might only get x8 if actually using both of them. Which is probably fine.

        --
        [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
        • (Score: 3, Interesting) by RamiK on Tuesday May 24 2022, @05:42PM (6 children)

          by RamiK (1813) on Tuesday May 24 2022, @05:42PM (#1247512)

Dual GPUs died because PCIe lanes (and the memory bus in general) became the segmentation point between HPC / server / professional workstation and commodity hardware. So, while it might not be as significant as offering ECC to everyone or the Threadripper releases, it's still AMD being willing to be disruptive on features instead of just on price.

Besides, it just so happens that AMD also manufactures graphics cards, so they might have a product in mind for those expansion slots. Whether it's a dual GPU setup or an accelerator, I wouldn't know. But I doubt they would have wasted lanes that could otherwise have been used elsewhere.

          --
          compiling...
          • (Score: 0) by Anonymous Coward on Tuesday May 24 2022, @06:05PM (5 children)

            by Anonymous Coward on Tuesday May 24 2022, @06:05PM (#1247527)

            The thing is, GPUs are just not limited by PCIe bandwidth, except in rare cases (like the AMD RX6400 attached to a PC that only has PCIe 3). In the typical dual GPU case of two 8x connections at the maximum speed the GPU supports, it is not a limitation.

            • (Score: 3, Interesting) by RamiK on Wednesday May 25 2022, @09:03AM (4 children)

              by RamiK (1813) on Wednesday May 25 2022, @09:03AM (#1247686)

              The thing is, GPUs are just not limited by PCIe bandwidth

              The memory bus throughput was already the bottleneck in computer vision and low-latency VR 2 years ago:

              ...
              Solving a practical computer vision problem in real time is usually a challenging task. The algorithms are much more complicated than running a single linear filter over an image, and image resolution is high enough to cause a bottleneck in a memory bus. In many cases, significant speedups can be reached by choosing the right algorithm. For example, the Viola–Jones face detector algorithm [3] enables detection of human faces in real time on relatively low-power hardware. The FAST [4] feature detector and BRIEF [5] and ORB [6] descriptors allow quick generation of features for tracking. However, in the most of cases, it is not possible to achieve the required performance by just algorithmic optimizations.
              ...

              ( OpenVX Programming Guide 1.1 [google.com] )
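
              To make the quoted point concrete, here's a minimal OpenCV sketch of the kind of pipeline the passage describes (FAST-based corner detection feeding ORB's binary descriptors). The image path is a placeholder and the feature cap is an arbitrary choice:

              ```python
              # Minimal ORB feature pipeline (FAST keypoints + binary
              # descriptors), per the quoted passage. Needs opencv-python.
              import cv2

              img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)  # placeholder input
              orb = cv2.ORB_create(nfeatures=500)  # cap the number of keypoints
              keypoints, descriptors = orb.detectAndCompute(img, None)
              # Each ORB descriptor is a compact 32-byte binary string, which
              # is what makes matching cheap enough for real-time tracking.
              print(len(keypoints), descriptors.shape)
              ```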

              Nowadays, with all the high res/refresh-rate monitors, the problems are showing up in desktop and even console games. It's not too obvious since the gaming industry got stuck in a recession feedback loop due to mining*... But it's starting to show.

              Anyhow, AMD's architecture interconnect is slower than Intel's, so it responds well to more lanes and bigger caches. Whether any of it matters in the present market is the gamble they're taking...

              * it started as:
              1. Miners drove GPU cards prices beyond the reach of gamers.
              2. Game developers failed to sell feature-full triple A games as is and were forced to delay releases and spend more time on optimizing for low-end.
              3. Game engines stopped racing after new features.

              now it's:
              1. Gamers can still play most of the new games since the engines and games are fairly optimized for low-end so they aren't buying new GPUs even when the prices go down.
              2. Game and game engine developers are noticing there's more demand for low-end so they aren't bothering with performance demanding features.
              3. High end GPU sales aren't good so nVidia and AMD are slowing down releases to a crawl while focusing on compute features for non-gamers.

              So, basically, the market is "stuck" on the current setup and will continue to do so unless a killer feature (like VR headsets or really clever AI) convinces people to buy new hardware. This has to come from the hardware industry since, unlike previous years, the gaming industry has many independent studios that continue to produce quality content targeting low-end GPUs and the console wars post Wii are fought and won on content alone.


              --
              compiling...
              • (Score: 1, Interesting) by Anonymous Coward on Wednesday May 25 2022, @07:37PM (1 child)

                by Anonymous Coward on Wednesday May 25 2022, @07:37PM (#1247806)

                You are conflating the memory bus with the PCIe bus. They are totally different things.

                There are no applications where an 8x PCIe 4.0 bus (much less a PCIe 5.0 bus) is a limit. Even synthetic benchmarks have trouble showing a 1% performance difference. If you want to see an example of how much spare bandwidth there is on the PCIe bus, take a look at Looking Glass: the final rendered frames are all sent back over the PCIe bus, which is running at half capacity because it's a dual GPU setup, while a game is running, and there is no performance hit.

                Monitor refresh rate does not tax memory bandwidth, and data being sent to the monitor does not travel over the PCIe bus. A 4k@120 signal is a lot of data for an external cable, but it's peanuts to a memory bus. Of course resolution matters in gaming, but even that is becoming less important as GPUs are moving toward dynamic resolution and upscaling.
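
                Some rough numbers behind that (DDR5-4800 and 4-byte RGBA pixels are assumptions picked for illustration): an uncompressed 4K@120 frame stream is about 4 GB/s, roughly a quarter of a PCIe 4.0 x8 link and a small fraction of dual-channel DDR5 bandwidth:

                ```python
                # Raw bandwidth of an uncompressed 4K@120 RGBA frame stream vs.
                # typical bus capacities (one direction, ignoring blanking and
                # protocol overhead). DDR5-4800 is an illustrative assumption.
                width, height, bytes_per_pixel, fps = 3840, 2160, 4, 120

                frames_gbs = width * height * bytes_per_pixel * fps / 1e9
                pcie4_x8_gbs = 16 * (128 / 130) * 8 / 8  # 16 GT/s/lane, x8
                ddr5_dual_gbs = 4800e6 * 8 * 2 / 1e9     # 64-bit channels x 2

                print(f"4K@120 frames:          {frames_gbs:.1f} GB/s")     # ~4.0
                print(f"PCIe 4.0 x8:            {pcie4_x8_gbs:.1f} GB/s")   # ~15.8
                print(f"Dual-channel DDR5-4800: {ddr5_dual_gbs:.1f} GB/s")  # ~76.8
                ```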

                You really just don't know what you are talking about.

                The reason the gaming industry has been advancing slowly is because the PS4 and Xbox One had unusually long lifespans, underperformed compared to PCs on launch day, and game makers had to target those platforms. It's hard to push the state of the art when your game has to run on hardware that's the equivalent of a mid range PC from 2011. Now that everything is limited by heat, consoles will probably underperform relative to PCs forever, but at least there's a new generation of consoles now so things can move forward at least a little bit. The GPU shortage didn't help, but even in 2018 everything was held back by weak consoles.

                • (Score: 1, Insightful) by Anonymous Coward on Thursday May 26 2022, @09:08AM

                  by Anonymous Coward on Thursday May 26 2022, @09:08AM (#1247957)

                  You are conflating the memory bus with the PCIe bus. They are totally different things.

                  Embedded GPUs share RAM with the CPU so their interconnect takes the place of PCIe in those statements.

                  There are no applications where an 8x PCIe 4.0 bus (much less a PCIe 5.0 bus) is a limit.

                  1. Then why do they require 16x?
                  2. Streaming pushes almost-raw frames back from the GPU to the CPU, and from there they're pushed out over the network.
                  3. Look up OpenCL and FPGA discussions, especially where machine vision and VR are concerned.
                  4. AI accelerators are very chatty.

                  Monitor refresh rate does not tax memory bandwidth, and data being sent to the monitor does not travel over the PCIe bus...

                  People aren't buying high refresh rate monitors thinking it's just for the GPU to draw the same frames twice. They want to render twice the frames so the motion will be smooth. Especially in VR.

                  The reason the gaming industry has been advancing slowly is because the PS4 and Xbox One had unusually long lifespans

                  Console lifespan is determined by the demand for better graphics, and people just didn't demand it or weren't willing to pay more for it. Fundamentally, you're ignoring how the bulk of the gaming market is shooters and casual smartphone games, with shit like Minecraft in between. That had nothing to do with what Sony and Microsoft did, and it's not going to change unless you can push something significantly better in both graphics and actual gameplay, as opposed to just more polygons.

              • (Score: 2) by takyon on Wednesday May 25 2022, @11:00PM (1 child)

                by takyon (881) <takyonNO@SPAMsoylentnews.org> on Wednesday May 25 2022, @11:00PM (#1247870) Journal

                Foveated rendering will alleviate VR's bandwidth problems.

                Game developers were always tied to the minimum specs of the latest and previous console generations. Which until recently meant a relatively weak 8-core AMD Jaguar CPU, 8 GB of total system memory, slow HDD, etc. The faster-than-SATA SSDs in the latest consoles are the most important new feature for game developers, as they can "stream in" assets rapidly, and games no longer have to be optimized for HDDs with tricks like duplicating data in several places. These SSDs will clearly benefit open world games like The Elder Scrolls VI. The Zen 2 8-core CPU is around 4-5 times faster than 8-core Jaguar, not counting various accelerators.

                With resolution, there is a lot of wiggle room. Console games are targeting 4K/60 in some cases, but can upscale from a lower resolution like 1440p/1800p if needed. The Steam Survey shows that 1080p is still the most popular resolution. 1080p/60 is the target for many people, and that's usually reachable with older GPUs and upcoming APUs (Rembrandt 12 CU and greater). If the going gets tough, some will settle for 1080p/30 or 720p/30.

                Games can easily target all of those at once, which is why you see Elden Ring running on a Steam Deck.

                Despite shortages, many XSX/PS5 units have been sold; the PS5 initially outsold the PS4 in its first year, though the rate has since fallen behind. We're still in a transition period with some games being released for the previous generation, but that won't last. SSDs will become a recommended/minimum spec to run certain games, if not strictly required (a large amount of RAM could get around the requirement).

                --
                [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
    • (Score: 2) by takyon on Tuesday May 24 2022, @04:18PM

      by takyon (881) <takyonNO@SPAMsoylentnews.org> on Tuesday May 24 2022, @04:18PM (#1247485) Journal

      There's speculation that AMD is sandbagging on performance claims ahead of the Raptor Lake launch, and you can find some suspicious elements around that Cinebench R23 claim, like the obvious one: the greater-than symbol. But that could just be a cooooooope.

      It's probably best to ignore this launch, and let others do the expensive beta testing for you. Rembrandt desktop APUs + A620 chipset motherboards might change that around late 2022, early 2023.

      --
      [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
  • (Score: 2, Interesting) by Anonymous Coward on Tuesday May 24 2022, @02:11PM (1 child)

    by Anonymous Coward on Tuesday May 24 2022, @02:11PM (#1247454)

    Reminds me of Intel's Celeron 300A. Nice easy overclock from 300MHz to 450MHz.

    https://en.wikipedia.org/wiki/Celeron#Mendocino [wikipedia.org]

  • (Score: 0) by Anonymous Coward on Wednesday May 25 2022, @05:53PM (1 child)

    by Anonymous Coward on Wednesday May 25 2022, @05:53PM (#1247787)

    "AMD has announced "Mendocino", a mid-range chip for Windows and ChromeOS laptops that will launch in Q4 2022."

    what a disgusting, Suited Whore way to not say "gnu+linox" or just "Linux".
