posted by Fnord666 on Sunday June 23 2019, @06:47AM
from the when-in-rome... dept.

Submitted via IRC for Bytram

7nm AMD EPYC "Rome" CPU w/ 64C/128T to Cost $8K (56 Core Intel Xeon: $25K-50K)

Yesterday, we shared the core and thread counts of AMD's Zen 2 based Epyc lineup, with the lowest-end chip offering as few as 8 cores and the top-end 7742 boasting 64 cores and 128 threads. Today, the prices of these server parts have also surfaced, and it looks like they are going to be quite a bit cheaper than the competing Intel Xeon Platinum processors.

The top-end Epyc 7742 with a TDP of 225W (128 threads @ 3.4GHz) is said to sell for a bit less than $8K, while the lower-clocked 7702 and the single-socket 7702P are going to cost $7,215 and just $4,955 respectively. That's quite impressive: you're getting 64 Zen 2 cores for about $5,000, while Intel's 28-core Xeon Platinum 8280 costs a whopping $18K and is half as powerful.
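
A quick back-of-the-envelope price-per-core comparison, using only the list prices quoted above (performance differences aren't modeled here, and the prices are approximate):

    # Rough price-per-core comparison from the figures quoted above.
    # List prices are approximate; this ignores clocks, IPC, and platform costs.
    chips = {
        "AMD Epyc 7742 (64 cores)":        (64, 8000),
        "AMD Epyc 7702P (64 cores, 1S)":   (64, 4955),
        "Intel Xeon Platinum 8280 (28c)":  (28, 18000),
    }

    for name, (cores, price) in chips.items():
        print(f"{name}: ~${price / cores:,.0f} per core")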


Original Submission

 
  • (Score: 3, Insightful) by takyon on Sunday June 23 2019, @03:33PM (7 children)

    Jump into the quantum supremacy [soylentnews.org] and universal memory articles.

    Node naming has been busted for many years now, which is why I always put it in quotes and typically name the foundry.

    https://semiengineering.com/how-many-nanometers/ [semiengineering.com]
    https://en.wikichip.org/wiki/technology_node [wikichip.org]

  • (Score: 2) by HiThere on Sunday June 23 2019, @04:33PM (3 children)

    OTOH, quantum computers still require cryogenic cooling. Until that problem is licked, or the requirement is at least relaxed to dry-ice temperatures, quantum computers will all live in datacenters.

    I'm currently most intrigued by 3-d chips. They've got a heat problem that just won't quit, though, so the chips need to be manufactured with built-in cooling systems. So far that's too difficult, but that's a design and engineering problem, not something basic. Or perhaps they could come up with a chip design that works better when it's hot? (The last one of those I saw worked on vacuum tubes, though.)

    • (Score: 3, Interesting) by takyon on Sunday June 23 2019, @05:39PM

      I see a few ways forward on 3D chips:

      • Move DRAM closer to cores as in the DARPA 3D SoC concept. Should be more news about that within the next year or two. There is an interim step where you just stack DRAM on top, but you can lower voltage and power further by tightly packing in RAM near the transistors.
      • Novel transistor designs or materials, like carbon nanotubes, TFETs, or the metal-based field-emission air-channel transistor. It's possible that today's transistors use orders of magnitude more energy, and dissipate far more waste heat, than future designs will.
      • Lower the clock speeds and voltage. You could use machine-learning algorithms to break threads up across multiple cores and predict how to utilize as many cores as possible. But for manycore chips with thousands of cores, you are already dealing with highly parallel code, and it may well be worth piling on layers of cores even if you have to lower each core's performance (see the toy power calculation after this list).
      • In the case of neuromorphic chips that operate with brain-like spikes from only a small number of cores/neurons at any given time, it may be possible to go 3D without worrying about power. See IBM's TrueNorth chip, which had 4,096 cores but only 70 mW power consumption. If they can stack 1,000 layers of that, they should be able to do it with only modest cooling measures. It could be even closer to mimicking the brain, since it would no longer be planar.
      • Go the quantum supremacy route, except at room temperature. More research needed.
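
      To put the clock/voltage point in rough numbers, here is a toy calculation using the standard dynamic-power approximation P ≈ C·V²·f; the capacitance, voltage, and frequency values below are made up purely for illustration:

      # Toy dynamic-power model: P is roughly proportional to C * V^2 * f.
      # The numbers are illustrative, not measurements of any real chip.

      def dynamic_power(cap, volts, freq_ghz):
          return cap * volts ** 2 * freq_ghz

      # One fast core vs. two slower, lower-voltage cores doing the same total work.
      fast = dynamic_power(cap=1.0, volts=1.2, freq_ghz=4.0)      # 1 core @ 4 GHz
      slow = 2 * dynamic_power(cap=1.0, volts=0.9, freq_ghz=2.0)  # 2 cores @ 2 GHz

      print(f"1 core  @ 4.0 GHz, 1.2 V: {fast:.2f} (arbitrary units)")
      print(f"2 cores @ 2.0 GHz, 0.9 V: {slow:.2f} (arbitrary units)")
      # Same nominal throughput if the workload parallelizes, but the two-core
      # setup burns ~44% less power because voltage enters the formula squared.
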
    • (Score: 2) by takyon on Monday June 24 2019, @10:52AM (1 child)

      I forgot to mention that the industry is moving towards a rudimentary form of stacking: Wafer-on-Wafer (WoW), which can be used to stack two processors. It has the advantage of bringing sets of cores (modules) closer together, which helps with latency. The articles don't mention how you would deal with the extra heat, which would rise from the bottom wafer to the top one.

      TSMC’s New Wafer-on-Wafer Process to Empower NVIDIA and AMD GPU Designs [engineering.com]
      TSMC Will Manufacture 3D Stacked WoW Chips In 2021 Claims Executive [wccftech.com]

      So you get a doubling of transistors within the same footprint, and up to a doubling of multi-core performance (I assume less than 2x if clocks drop). Maybe as soon as 2021 and tied to a TSMC "5nm" node.
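
      As a rough sanity check on the "less than 2x" guess: assume (purely for illustration) that keeping the stack inside the same thermal envelope forces each die to drop its clock by 20%, and that multi-core performance scales roughly with cores × clock:

      # Toy estimate of stacked-die speedup under a fixed thermal budget.
      # The 20% clock reduction is an assumption for illustration only.
      base_clock_ghz = 3.5
      stacked_clock_scale = 0.8   # assumed per-die clock drop when stacked

      single_die_perf = 1 * base_clock_ghz
      stacked_perf = 2 * base_clock_ghz * stacked_clock_scale

      print(f"Stacking speedup: {stacked_perf / single_die_perf:.2f}x")
      # => 1.60x rather than 2.00x, consistent with "less than 2x if clocks drop".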

      • (Score: 2) by HiThere on Monday June 24 2019, @08:10PM

        Yes. They've been edging towards 3-D chips since the early '80s, perhaps earlier. But I'm not sure (yet) that this "stacking" is primarily aimed at increasing 3-Dness. It seems like it may be aimed more at increasing yield, i.e. the percentage of good chips.
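
        On the yield angle, a toy Poisson defect model (yield ≈ e^(-D·A) for defect density D and die area A) shows why smaller dies help; the defect density and areas below are invented for illustration:

        # Toy Poisson yield model: Y = exp(-D * A).
        # Defect density and die areas are illustrative numbers only.
        from math import exp

        D = 0.1 / 100     # 0.1 defects per cm^2, converted to per mm^2
        big_die = 600     # mm^2 monolithic die
        half_die = 300    # mm^2, one layer of a two-die stack

        print(f"600 mm^2 monolithic die yield: {exp(-D * big_die):.1%}")
        print(f"300 mm^2 per-layer die yield:  {exp(-D * half_die):.1%}")
        # ~55% vs ~74%: if bad dies can be screened out before bonding
        # (known-good-die testing), much less silicon is wasted per working
        # product. Wafer-on-wafer bonding joins untested whole wafers, though,
        # so it largely forgoes that particular benefit.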

  • (Score: 0) by Anonymous Coward on Monday June 24 2019, @08:16AM (2 children)

    Quantum supremacy isn't all that supreme.

    The thing about quantum computers is that they don't actually solve that many problems. Everyone knows about factorization, but that mostly causes more real-world problems than it solves. I think much of the research into quantum computing has been driven by the need to make sure that nobody invents them in secret, so they can decrypt everyone's communications. But everyone will have switched to quantum-proof cryptography before that happens. So this ability is not very important.

    Mainly, the thing they would be useful for is simulation of quantum systems. It might be that the main thing we get out of quantum computers is... much better regular computers. And for that, you don't need an actual quantum computer on your desk to get the benefits from them.

    • (Score: 2) by takyon on Monday June 24 2019, @10:37AM (1 child)

      I thought the same exact thing, but:

      https://en.wikipedia.org/wiki/Quantum_algorithm [wikipedia.org]

      "Although all classical algorithms can also be performed on a quantum computer [...]"

      With the caveat that this would not apply to an annealer like D-Wave.

      If running classical code on quantum computers turns out to be slow and impractical, I think you could still see applications for quantum computing on home computers, such as simulating real-world systems within open-world video games, or machine learning. If a quantum computer can operate near room temperature, without significant cooling, you could see it integrated onto a smartphone SoC or offered as an add-on card for desktops. Make it available, and people will figure out what to do with it.

      • (Score: 2) by bzipitidoo on Monday June 24 2019, @12:56PM

        Not to be facetious, but there's a lot of uncertainty around quantum computing. We don't even have a firm grasp of just what problems they can solve quickly that classical computers cannot, and we won't until the famous question of whether P != NP is settled. Most people strongly suspect that P != NP, but if it somehow turns out the opposite, that P = NP, then quantum computing may be of little value. So far, all that is known is that BQP, the class of problems a quantum computer can solve with bounded error in polynomial time, contains P; it is thought to lie somewhere between P and NP-hard territory, but even whether BQP is contained in NP remains open.

        In the efforts to move closer to settling whether P != NP, researchers have come up with an awful lot of problem classifications. For instance, there's RP, the set of problems that can be solved in polynomial time by a randomized algorithm with one-sided error. RP is also somewhere between P and NP. RP might be equal to P. Whether it contains BQP, or BQP contains it, or neither, is not known. Primality testing was known to be in RP, until someone discovered a deterministic way to test for primality (the AKS algorithm), placing that problem firmly in P.
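
        To make the randomized-vs-deterministic contrast concrete, here is a short Miller-Rabin primality test, the classic polynomial-time randomized algorithm with one-sided error (strictly speaking it puts compositeness in RP and primality in co-RP); this is an illustrative sketch, not production crypto code:

        # Miller-Rabin: a polynomial-time randomized primality test with
        # one-sided error. A prime is never rejected; a composite slips
        # through one round with probability <= 1/4, so k rounds err with
        # probability <= 4**-k. Illustrative sketch only.
        import random

        def is_probable_prime(n, rounds=20):
            if n < 2:
                return False
            for p in (2, 3, 5, 7, 11, 13):
                if n % p == 0:
                    return n == p
            # Write n - 1 as d * 2**r with d odd.
            d, r = n - 1, 0
            while d % 2 == 0:
                d //= 2
                r += 1
            for _ in range(rounds):
                a = random.randrange(2, n - 1)
                x = pow(a, d, n)
                if x in (1, n - 1):
                    continue
                for _ in range(r - 1):
                    x = pow(x, 2, n)
                    if x == n - 1:
                        break
                else:
                    return False   # witness found: n is definitely composite
            return True            # no witness in any round: almost certainly prime

        print(is_probable_prime(2**61 - 1))  # True  (a Mersenne prime)
        print(is_probable_prime(2**61 + 1))  # False (divisible by 3)

        By contrast, a deterministic test like AKS gives a certain answer with no randomness at all, at the cost of a much larger (though still polynomial) running time.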