
posted by Fnord666 on Sunday June 23 2019, @06:47AM   Printer-friendly
from the when-in-rome... dept.

Submitted via IRC for Bytram

7nm AMD EPYC "Rome" CPU w/ 64C/128T to Cost $8K (56 Core Intel Xeon: $25K-50K)

Yesterday, we shared the core and thread counts of AMD's Zen 2 based Epyc lineup, with the lowest-end chip going as low as 8 cores and the top-end 7742 boasting 64 cores and 128 threads. Today, the prices of these server parts have also surfaced, and it seems like they are going to be quite a bit cheaper than the competing Intel Xeon Platinum processors.

The top-end Epyc 7742 with a TDP of 225W (128 threads @ 3.4GHz) is said to sell for a bit less than $8K, while the lower-clocked 7702 and 7702P (single-socket) are going to cost $7,215 and just $4,955, respectively. That's quite impressive: you're getting 64 Zen 2 cores for under $5,000, while on the other hand Intel's 28-core Xeon Platinum 8280 costs a whopping $18K and is half as powerful.
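The price gap is easier to see on a per-core basis. A back-of-envelope comparison using the figures quoted above (list prices as reported; real street prices vary):

```python
# Cost-per-core comparison using the list prices quoted in the summary.
epyc_cores, epyc_price = 64, 4955    # EPYC 7702P (single-socket)
xeon_cores, xeon_price = 28, 18000   # Xeon Platinum 8280, as quoted

epyc_per_core = epyc_price / epyc_cores
xeon_per_core = xeon_price / xeon_cores

print(f"EPYC 7702P: ${epyc_per_core:.0f}/core")        # ~$77/core
print(f"Xeon 8280:  ${xeon_per_core:.0f}/core")        # ~$643/core
print(f"Ratio: {xeon_per_core / epyc_per_core:.1f}x")  # ~8.3x
```

Even before accounting for per-core performance differences, that is roughly an 8x gap in cost per core.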


Original Submission

 
  • (Score: 2) by bzipitidoo on Sunday June 23 2019, @02:41PM (13 children)

    by bzipitidoo (4388) on Sunday June 23 2019, @02:41PM (#859074) Journal

    In other words, as I have heard hinted, marketing has seized on the process node and warped that once-reliable measurement to hype product. I suppose what TSMC calls 7nm is actually one feature that has been reduced to 7nm while the bulk of the die remains at 10nm or even 14nm?

    Novel tech? Silicon and the Von Neumann architecture have had a great run, and are still king. GaAs, optical circuitry, graphene, memristors ... nothing has made serious inroads into silicon's dominance. Perhaps neural network computing will. AlphaGo and its descendants were very impressive. Thus far, quantum computing has been vaporware. The big problem with stacked chips and other ventures into 3D is heat dissipation. That universal computing memory sounds great, but I thought memristors had great potential, and where are they now? Heck, I wonder if that universal memory is memristors!

  • (Score: 3, Insightful) by takyon on Sunday June 23 2019, @03:33PM (7 children)

    by takyon (881) <reversethis-{gro ... s} {ta} {noykat}> on Sunday June 23 2019, @03:33PM (#859083) Journal

    Jump into the quantum supremacy [soylentnews.org] and universal memory articles.

    Node naming has been busted for many years now, which is why I always put it in quotes and typically name the foundry.

    https://semiengineering.com/how-many-nanometers/ [semiengineering.com]
    https://en.wikichip.org/wiki/technology_node [wikichip.org]

    --
    [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
    • (Score: 2) by HiThere on Sunday June 23 2019, @04:33PM (3 children)

      by HiThere (866) Subscriber Badge on Sunday June 23 2019, @04:33PM (#859092) Journal

      OTOH, quantum computers still require cryogenic cooling. Until that problem is licked, or at least reduced to dry ice temperatures, quantum computers will all be at datacenters.

      I'm currently most intrigued by 3-d chips. They've got a heat problem that just won't quit, though, so the chips need to be manufactured with built-in cooling systems. So far that's too difficult, but that's a design and engineering problem, not something basic. Or perhaps they could come up with a chip design that works better when it's hot? (The last one of those I saw worked on vacuum tubes, though.)

      --
      Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
      • (Score: 3, Interesting) by takyon on Sunday June 23 2019, @05:39PM

        by takyon (881) <reversethis-{gro ... s} {ta} {noykat}> on Sunday June 23 2019, @05:39PM (#859107) Journal

        I see a few ways forward on 3D chips:

        • Move DRAM closer to cores as in the DARPA 3D SoC concept. Should be more news about that within the next year or two. There is an interim step where you just stack DRAM on top, but you can lower voltage and power further by tightly packing in RAM near the transistors.
        • Novel transistor design or material, like carbon nanotubes, TFETs, or the metal-based field emission air channel transistor. It's possible that today's transistors use orders of magnitude more energy, and shed orders of magnitude more waste heat, than future designs will.
        • Lower the clock speeds and voltage. You could use machine-learning algorithms to break up threads across multiple cores, and predict how to utilize as many cores as possible. But for manycore chips with thousands of cores, you are already dealing with highly parallel code, and it's possible that it will be worth it to pile on layers of cores even if you have to lower each core's performance.
        • In the case of neuromorphic chips that operate with brain-like spikes from only a small amount of cores/neurons at any given time, it may be possible to go 3D without worrying about power. See IBM's TrueNorth chip which had 4,096 cores but only 70 mW power consumption. If they can stack 1,000 layers of that, they should be able to do it with only modest cooling measures. It could be even closer to mimicking the brain since it is no longer planar.
        • Go the quantum supremacy route, except at room temperature. More research needed.
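        The power argument behind the lower-clocks bullet can be sketched with the first-order CMOS dynamic-power relation P ≈ C·V²·f. The numbers below are illustrative assumptions, not measurements of any real chip:

```python
# First-order CMOS dynamic power: P = C_eff * V^2 * f.
# Illustrative values only -- not measurements of any real chip.
def dynamic_power(c_eff, voltage, freq_hz):
    return c_eff * voltage**2 * freq_hz

base = dynamic_power(1e-9, 1.2, 3.4e9)  # a notional core at 1.2 V, 3.4 GHz
slow = dynamic_power(1e-9, 0.8, 1.7e9)  # same core at 0.8 V, half clock

# Halving the clock and dropping voltage cuts power superlinearly,
# which is what makes piling on extra layers of slower cores plausible.
print(f"power ratio: {slow / base:.2f}")  # ~0.22, i.e. ~4.5x less per core

# The TrueNorth arithmetic from the bullet above:
# 70 mW per layer * 1,000 layers = 70 W, roughly one desktop CPU's budget.
print(f"stacked neuromorphic: {0.070 * 1000:.0f} W")
```

        So a hypothetical 1,000-layer TrueNorth stack lands in the same thermal envelope as a single conventional CPU, which is why the spiking approach looks so attractive for 3D.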
        --
        [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
      • (Score: 2) by takyon on Monday June 24 2019, @10:52AM (1 child)

        by takyon (881) <reversethis-{gro ... s} {ta} {noykat}> on Monday June 24 2019, @10:52AM (#859304) Journal

        I forgot to mention that the industry is moving towards a rudimentary form of stacking: Wafer-on-Wafer (WoW), which can be used to stack two processors. It has the advantage of bringing sets of cores (modules) closer together, which helps with chip latency. Neither article mentions how you would deal with the extra heat, which has to escape from the bottom wafer through the one above it.

        TSMC’s New Wafer-on-Wafer Process to Empower NVIDIA and AMD GPU Designs [engineering.com]
        TSMC Will Manufacture 3D Stacked WoW Chips In 2021 Claims Executive [wccftech.com]

        So you get a doubling of transistors within the same footprint, and up to a doubling of multi-core performance (I assume less than 2x if clocks drop). Maybe as soon as 2021 and tied to a TSMC "5nm" node.
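        The "less than 2x if clocks drop" caveat is easy to quantify with a toy model, assuming throughput scales linearly with cores and clock (which only holds for embarrassingly parallel workloads):

```python
# Toy model for Wafer-on-Wafer stacking: transistor/core count doubles,
# but thermal limits may force a lower clock on each core.
# Assumption: throughput ~ cores * clock (embarrassingly parallel work).
def stacked_speedup(clock_scale):
    return 2 * clock_scale  # 2x cores, each running at a reduced clock

for scale in (1.0, 0.9, 0.75):
    print(f"clock at {scale:.0%}: {stacked_speedup(scale):.2f}x throughput")
# clock at 100%: 2.00x; at 90%: 1.80x; at 75%: 1.50x
```

        So even a 25% clock drop still leaves a healthy 1.5x gain in the same footprint, for code that can use the extra cores.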

        --
        [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
        • (Score: 2) by HiThere on Monday June 24 2019, @08:10PM

          by HiThere (866) Subscriber Badge on Monday June 24 2019, @08:10PM (#859488) Journal

          Yes. They've been edging towards 3-D chips since the early '80s, perhaps earlier. But I'm not sure (yet) that this "stacking" is primarily aimed at increasing 3-Dness. It seems like it may be aimed more at increasing the percentage of good chips (i.e. yield).

          --
          Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
    • (Score: 0) by Anonymous Coward on Monday June 24 2019, @08:16AM (2 children)

      by Anonymous Coward on Monday June 24 2019, @08:16AM (#859291)

      Quantum supremacy isn't all that supreme.

      The thing about quantum computers is that they don't actually solve that many problems. Everyone knows about factorization, but that mostly causes more real-world problems than it solves. I think much of the research into quantum computing has been driven by the need to make sure that nobody invents them in secret, so they can decrypt everyone's communications. But everyone will have switched to quantum-proof cryptography before that happens. So this ability is not very important.

      Mainly, the thing they would be useful for is simulation of quantum systems. It might be that the main thing we get out of quantum computers is... much better regular computers. And for that, you don't need an actual quantum computer on your desk to get the benefits from them.

      • (Score: 2) by takyon on Monday June 24 2019, @10:37AM (1 child)

        by takyon (881) <reversethis-{gro ... s} {ta} {noykat}> on Monday June 24 2019, @10:37AM (#859302) Journal

        I thought the same exact thing, but:

        https://en.wikipedia.org/wiki/Quantum_algorithm [wikipedia.org]

        Although all classical algorithms can also be performed on a quantum computer

        With the caveat that this would not apply to an annealer like D-Wave.

        If quantum computers running classical code turn out to be slow and impractical, I think you could still see applications for quantum computing on home computers, such as simulating real-world systems within open-world video games, or machine learning. If a quantum computer can be made to work near room temperature, without significant cooling, you could see it integrated onto a smartphone SoC or as an add-on card for desktops. Make it available, and people will figure out what to do with it.

        --
        [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
        • (Score: 2) by bzipitidoo on Monday June 24 2019, @12:56PM

          by bzipitidoo (4388) on Monday June 24 2019, @12:56PM (#859327) Journal

          Not to be facetious, but there's a lot of uncertainty around quantum computing. We don't even have a firm grasp of just what problems they can solve quickly that classical computers cannot, and we won't, until the famous question of whether P != NP is solved. Most people strongly suspect that P != NP, but if somehow it should turn out the opposite, that P = NP, then quantum computing may be of little value. So far, what is known is that BQP, the problems a quantum computer can solve in polynomial time with a bounded probability of error, contains P and is contained in PSPACE; whether BQP sits inside NP is itself open, and it is widely suspected that neither BQP nor NP contains the other.

          In the efforts to move closer to solving whether P != NP, researchers have come up with an awful lot of problem classifications. For instance, there's RP, the set of problems that can be solved in polynomial time by a randomized algorithm with one-sided error. RP lies somewhere between P and NP, and might be equal to P. RP is contained in BQP, since a quantum computer can simulate coin flips. Primality testing was long known to be solvable by such randomized algorithms, until the deterministic AKS algorithm (2002) placed that problem firmly in P.
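          The classic example of this kind of one-sided-error randomized algorithm is the Miller-Rabin primality test: a "composite" answer is always correct, while a "probably prime" answer can err with probability at most 4^-k over k rounds. A minimal sketch:

```python
import random

# Miller-Rabin: a one-sided-error randomized primality test.
# "Composite" verdicts are certain; "probably prime" errs with
# probability <= 4**-k. (The deterministic AKS algorithm of 2002
# later placed primality testing firmly in P.)
def is_probable_prime(n, k=20):
    if n < 2:
        return False
    if n in (2, 3):
        return True
    if n % 2 == 0:
        return False
    # Write n - 1 as d * 2^r with d odd.
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(k):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)  # modular exponentiation
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # witness found: n is definitely composite
    return True  # no witness found in k rounds: n is probably prime

print(is_probable_prime(2**61 - 1))  # True -- the Mersenne prime M61
print(is_probable_prime(91))         # False -- 91 = 7 * 13
```

          Repeating the random trial k times is what drives the error probability down exponentially, which is the defining trick of the RP/co-RP style of algorithm.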

  • (Score: 0) by Anonymous Coward on Sunday June 23 2019, @04:04PM (1 child)

    by Anonymous Coward on Sunday June 23 2019, @04:04PM (#859086)

    No, actually clocks will get slower as the components shrink further. The advantage is more in efficiency. No idea where you get this idea that they should double.

  • (Score: 2) by Immerman on Monday June 24 2019, @02:40AM (2 children)

    by Immerman (3985) on Monday June 24 2019, @02:40AM (#859224)

    >The big problem with stacked chips and other ventures into 3D is heat dissipation.

    Which would be an excellent use for diamond-based CPU wafers: we know how to N- and P-dope diamond, and it's a far better thermal conductor than copper, making it much easier for heat to migrate to the surface.

    Sadly, flawless wafer-sized lab grown diamond is well behind schedule - presumably they've either encountered problems, or decided the gemstone market is far more profitable than selling diamond wafers to semiconductor manufacturers.

    • (Score: 2) by takyon on Monday June 24 2019, @10:40AM (1 child)

      by takyon (881) <reversethis-{gro ... s} {ta} {noykat}> on Monday June 24 2019, @10:40AM (#859303) Journal

      the gemstone market is far more profitable than selling diamond wafers to semiconductor manufacturers

      If the production problems get worked out and there is an actual push to use diamond in various ways, you will see companies like Samsung build their own facilities.

      --
      [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
      • (Score: 2) by Immerman on Tuesday June 25 2019, @01:13AM

        by Immerman (3985) on Tuesday June 25 2019, @01:13AM (#859568)

        As I recall, the big problem is that for building reliable CPUs you want an atomically flawless diamond with no lattice discontinuities that would interfere with its electrical properties, so you need to grow a single wafer-diameter crystal from a tiny flawless seed. And, at least for the vapor deposition technologies being used 20-25 years ago, you could only grow flawless crystal in one direction, while the cross-section would slowly increase over time as it grew in a very slight "cone" shape, widening at only a fraction of a percent of the speed in the primary growth direction. At the time they were projecting that it would take around 15-20 years before their crystals were wide enough to be worth using as semiconductor wafers.

        Unless the technology has fundamentally changed, that means that the only way Samsung could build such a facility would be if they could get their hands on a flawless diamond wafer from one of the existing diamond-growing companies to act as a seed.