
posted by Fnord666 on Sunday June 23 2019, @06:47AM
from the when-in-rome... dept.

Submitted via IRC for Bytram

7nm AMD EPYC "Rome" CPU w/ 64C/128T to Cost $8K (56 Core Intel Xeon: $25K-50K)

Yesterday, we shared the core and thread counts of AMD's Zen 2 based Epyc lineup, with the lowest-end chip going as low as 8 cores while the top-end 7742 boasts 64 cores and double the threads. Today, the prices of these server parts have also surfaced, and it seems they are going to be quite a bit cheaper than the competing Intel Xeon Platinum processors.

The top-end Epyc 7742 with a TDP of 225W (128 threads @ 3.4GHz) is said to sell for a bit less than $8K, while the lower-clocked 7702 and the single-socket 7702P are going to cost $7,215 and just $4,955, respectively. That's quite impressive: you're getting 64 Zen 2 cores for around $5,000, while Intel's 28-core Xeon Platinum 8280 costs a whopping $18K and is half as powerful.
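As a rough illustration of the gap, here is a minimal cost-per-core sketch using the list prices quoted above (the ~$18K Xeon 8280 figure is the one given in this article; these are illustrative list prices, not benchmarks):

    # Cost per core, from the list prices quoted in the article.
    chips = {
        "Epyc 7702P (64C)": (4955, 64),
        "Xeon 8280 (28C)": (18000, 28),
    }

    for name, (price, cores) in chips.items():
        print(f"{name}: ${price / cores:,.0f} per core")

    # Epyc 7702P (64C): $77 per core
    # Xeon 8280 (28C): $643 per core  -> roughly 8x the per-core price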


Original Submission

 
  • (Score: 2) by Booga1 on Sunday June 23 2019, @08:58AM (20 children)

    by Booga1 (6333) on Sunday June 23 2019, @08:58AM (#859039)

    Core count, cost, and performance aren't everything for some companies. For some it's about being able to keep a reliable stock of replacement parts on hand. If your servers are 100% Intel, it's tough to justify switching to AMD, no matter how much better AMD's chips are.

    On the other hand, if you can set up three times your current Intel server count under a new service contract for half the price, you might get the bean counters on board. This is where AMD's wheels are going to hit pavement or not. If they can get 20% of the server market, they're going to be making big bucks for 5+ years at least. Nobody ever buys just a processor; it's about the platform and support contracts around it.

  • (Score: 4, Interesting) by Pax on Sunday June 23 2019, @09:38AM

    by Pax (5056) on Sunday June 23 2019, @09:38AM (#859041)

    " if you can setup three times your current Intel servers under a new service contract for half the price"
    Processor costs as stated are 1/4 to 1/6 cost alone making this a no brainer especially when these will not have the vulnerabilities of intel and thus won't have the brakes put on them by the software mitigation

  • (Score: 4, Interesting) by bzipitidoo on Sunday June 23 2019, @10:31AM (15 children)

    by bzipitidoo (4388) on Sunday June 23 2019, @10:31AM (#859045) Journal

    You have to upgrade some time. If you don't, all too soon you'll be running on creaky 10-year-old hardware that isn't supported any more and takes twice as much power and space to do a tenth of the work that new hardware can handle. Ten years ago, 45nm was state of the art, with 32nm about to become available. And every drive in the RAID has been replaced at least once, with all of them due for replacement again. The business case for upgrading from that to this new 7nm stuff, leaping from PCIe 2.0 with spinning rust to PCIe 4.0 with SSDs, is going to be irresistible. So why not jump from Intel to AMD when you upgrade?

    What another 10 years will bring, who can guess? The return of advancement in raw speed, with computers at last leaving the neighborhood of 4GHz in the dust? Seems 7nm should be able to handle close to twice the clock speed that 14nm can. Or are we going to see quantum computers on the desktop? Or neural network computers? That'll really shake things up like they haven't been shaken since the personal computer first appeared and displaced mainframe computing in the 1980s.

    • (Score: 2) by takyon on Sunday June 23 2019, @11:15AM (14 children)

      by takyon (881) on Sunday June 23 2019, @11:15AM (#859049) Journal

      "Seems 7nm should be able to handle close to twice the clock speed that 14nm can."

      That doesn't seem right. Intel's "14nm" is more like a TSMC "10nm" and is similar in clock speeds to TSMC "7nm". In fact, Intel has optimized its "14nm" process so much that it has trouble moving to "10nm" because clock speeds can take a hit. There's no doubling between TSMC "14nm" and TSMC "7nm" that I know of, and there won't be even if we take "7nm+" improvements into account.

      Obviously, there have been recent improvements and a push to get into the 4 GHz to 5 GHz range, either sustained or boost, in some scenarios. This can be a big improvement, especially if you were running an older chip. But the real clock speed improvements will require some novel technology [soylentnews.org] to unlock.

      It does look like there is a lot of room for improvement in computing even with Moore's law "dead". New transistors, stacked chips, DRAM on chip [darpa.mil], optical computing [insidehpc.com], neuromorphic [wikipedia.org], quantum, universal memory [soylentnews.org], etc. The idea that we could make things 100-1000x faster again is a bit mind-boggling.

      • (Score: 2) by bzipitidoo on Sunday June 23 2019, @02:41PM (13 children)

        by bzipitidoo (4388) on Sunday June 23 2019, @02:41PM (#859074) Journal

        In other words, as I have heard hinted, marketing has seized on the process node number and warped that once-reliable measurement to hype product. I suppose what TSMC calls 7nm is actually one feature that has been reduced to 7nm, while the bulk of the transistor geometry remains at 10nm or even 14nm?

        Novel tech? Silicon and the von Neumann architecture have had a great run, and are still king. GaAs, optical circuitry, graphene, memristors... nothing has made serious inroads into silicon's dominance. Perhaps neural network computing will; AlphaGo and its descendants were very impressive. Thus far, quantum computing has been vaporware. The big problem with stacked chips and other ventures into 3D is heat dissipation. That universal memory sounds great, but I thought memristors had great potential too, and where are they now? Heck, I wonder if that universal memory is memristors!

        • (Score: 3, Insightful) by takyon on Sunday June 23 2019, @03:33PM (7 children)

          by takyon (881) on Sunday June 23 2019, @03:33PM (#859083) Journal

          Jump into the quantum supremacy [soylentnews.org] and universal memory articles.

          Node naming has been busted for many years now, which is why I always put it in quotes and typically name the foundry.

          https://semiengineering.com/how-many-nanometers/ [semiengineering.com]
          https://en.wikichip.org/wiki/technology_node [wikichip.org]

          • (Score: 2) by HiThere on Sunday June 23 2019, @04:33PM (3 children)

            by HiThere (866) Subscriber Badge on Sunday June 23 2019, @04:33PM (#859092) Journal

            OTOH, quantum computers still require cryogenic cooling. Until that problem is licked, or at least reduced to dry-ice temperatures, quantum computers will all be at datacenters.

            I'm currently most intrigued by 3D chips. They've got a heat problem that just won't quit, though, so the chips need to be manufactured with built-in cooling systems. So far that's too difficult, but it's a design and engineering problem, not something fundamental. Or perhaps they could come up with a chip design that works better when it's hot? (The last one of those I saw was based on vacuum tubes, though.)

            • (Score: 3, Interesting) by takyon on Sunday June 23 2019, @05:39PM

              by takyon (881) on Sunday June 23 2019, @05:39PM (#859107) Journal

              I see a few ways forward on 3D chips:

              • Move DRAM closer to cores as in the DARPA 3D SoC concept. Should be more news about that within the next year or two. There is an interim step where you just stack DRAM on top, but you can lower voltage and power further by tightly packing in RAM near the transistors.
              • Novel transistor designs or materials, like carbon nanotubes, TFETs, or the metal-based field-emission air-channel transistor. It's possible that today's transistors burn orders of magnitude more energy as waste heat than future designs will.
              • Lower the clock speeds and voltage. You could use machine-learning algorithms to break threads up across multiple cores and predict how to utilize as many cores as possible. And for manycore chips with thousands of cores, you are already dealing with highly parallel code, so it may be worth piling on layers of cores even if you have to lower each core's performance (see the power-scaling sketch after this list).
              • In the case of neuromorphic chips that operate with brain-like spikes, with only a small fraction of cores/neurons active at any given time, it may be possible to go 3D without worrying about power. See IBM's TrueNorth chip, which had 4,096 cores but only 70 mW power consumption. If they can stack 1,000 layers of that, they should be able to do it with only modest cooling measures. It would also be even closer to mimicking the brain, since it is no longer planar.
              • Go the quantum supremacy route, except at room temperature. More research needed.
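              On the clocks-and-voltage point (third bullet above), a minimal sketch of why trading frequency for core count can win on power, using the standard dynamic-power relation P ~ C * V^2 * f; the voltage and frequency pairs are illustrative assumptions, not measured values:

                # Dynamic power scales roughly as P ~ C * V^2 * f.
                # Halving frequency typically permits a lower supply voltage.
                def dynamic_power(voltage, freq_ghz, capacitance=1.0):
                    return capacitance * voltage**2 * freq_ghz

                fast = dynamic_power(voltage=1.2, freq_ghz=4.0)  # one fast core
                slow = dynamic_power(voltage=0.8, freq_ghz=2.0)  # one slow core

                # For perfectly parallel work, two slow cores match the fast
                # core's aggregate throughput at well under half the power:
                print(f"1 fast core:  {fast:.2f} (arbitrary units)")  # 5.76
                print(f"2 slow cores: {2 * slow:.2f}")                # 2.56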
            • (Score: 2) by takyon on Monday June 24 2019, @10:52AM (1 child)

              by takyon (881) on Monday June 24 2019, @10:52AM (#859304) Journal

              I forgot to mention that the industry is moving towards a rudimentary form of stacking: Wafer-on-Wafer (WoW), which can be used to stack two processors. It has the advantage of bringing sets of cores (modules) closer together, which helps with latency. Neither article mentions how you would deal with the extra heat from the bottom wafer, which has to escape through the top one.

              TSMC’s New Wafer-on-Wafer Process to Empower NVIDIA and AMD GPU Designs [engineering.com]
              TSMC Will Manufacture 3D Stacked WoW Chips In 2021 Claims Executive [wccftech.com]

              So you get a doubling of transistors within the same footprint, and up to a doubling of multi-core performance (presumably less than 2x if clocks have to drop). Maybe as soon as 2021, tied to a TSMC "5nm" node.
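              A quick back-of-the-envelope for that "less than 2x" caveat; the clock figures here are invented for illustration, not taken from either article:

                # Two stacked dies double the core count, but if heat forces
                # a clock reduction the net multi-core gain falls short of 2x.
                def stacked_speedup(base_ghz, stacked_ghz, dies=2):
                    return dies * (stacked_ghz / base_ghz)

                print(f"{stacked_speedup(base_ghz=3.4, stacked_ghz=3.0):.2f}x")
                # 1.76x -- closer to ~1.8x throughput than a clean 2x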

              • (Score: 2) by HiThere on Monday June 24 2019, @08:10PM

                by HiThere (866) Subscriber Badge on Monday June 24 2019, @08:10PM (#859488) Journal

                Yes. They've been edging towards 3D chips since the early '80s, perhaps earlier. But I'm not sure (yet) that this stacking is primarily aimed at increasing 3D-ness. It seems like it may be aimed more at increasing the percentage of good chips, i.e. yield.

          • (Score: 0) by Anonymous Coward on Monday June 24 2019, @08:16AM (2 children)

            by Anonymous Coward on Monday June 24 2019, @08:16AM (#859291)

            Quantum supremacy isn't all that supreme.

            The thing about quantum computers is that they don't actually solve that many problems. Everyone knows about factorization, but that mostly causes more real-world problems than it solves. I think much of the research into quantum computing has been driven by the need to make sure that nobody invents them in secret, so they can decrypt everyone's communications. But everyone will have switched to quantum-proof cryptography before that happens. So this ability is not very important.

            Mainly, the thing they would be useful for is simulation of quantum systems. It might be that the main thing we get out of quantum computers is... much better regular computers. And for that, you don't need an actual quantum computer on your desk to get the benefits from them.

            • (Score: 2) by takyon on Monday June 24 2019, @10:37AM (1 child)

              by takyon (881) on Monday June 24 2019, @10:37AM (#859302) Journal

              I thought the same exact thing, but:

              https://en.wikipedia.org/wiki/Quantum_algorithm [wikipedia.org]

              "Although all classical algorithms can also be performed on a quantum computer [...]"

              With the caveat that this would not apply to an annealer like D-Wave.

              If quantum computers running classical code turn out to be slow and impractical, I think you could still see applications for quantum computing on home computers, such as simulating real-world systems within open-world video games, or machine learning. If a quantum computer can be run near room temperature, without significant cooling, you could see it integrated onto a smartphone SoC or as an add-on card for desktops. Make it available, and people will figure out what to do with it.

              • (Score: 2) by bzipitidoo on Monday June 24 2019, @12:56PM

                by bzipitidoo (4388) on Monday June 24 2019, @12:56PM (#859327) Journal

                Not to be facetious, but there's a lot of uncertainty around quantum computing. We don't even have a firm grasp of just what problems they can solve quickly that classical computers cannot, and we won't until famous questions like P vs NP are settled. Most people strongly suspect that P != NP, but if somehow it turned out that P = NP, then quantum computing might be of little value. So far, what's known is that BQP, the set of problems a quantum computer can solve in polynomial time within a bounded error probability, contains P; how BQP relates to NP is open, and it is suspected that neither class contains the other.

                In the efforts to move closer to settling P vs NP, researchers have come up with an awful lot of problem classifications. For instance, there's RP, the set of problems that can be solved in polynomial time by a randomized algorithm with one-sided error. RP also sits between P and NP, and might be equal to P; whether it contains BQP, or BQP contains it, or neither, is not known. Primality testing was long handled this way (strictly, compositeness is in RP and primality in coRP), until Agrawal, Kayal, and Saxena found a deterministic polynomial-time test (AKS, 2002), placing the problem firmly in P.
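                To make the RP idea concrete, here is a minimal Miller-Rabin sketch, the classic randomized test in this family (strictly, its one-sided error puts compositeness in RP and primality in coRP):

                  import random

                  def miller_rabin(n, rounds=20):
                      """'Composite' answers are always correct; 'probably prime'
                      errs with probability at most 4**-rounds."""
                      if n < 2:
                          return False
                      for p in (2, 3, 5, 7):
                          if n % p == 0:
                              return n == p
                      d, r = n - 1, 0          # write n - 1 = d * 2^r, d odd
                      while d % 2 == 0:
                          d //= 2
                          r += 1
                      for _ in range(rounds):
                          a = random.randrange(2, n - 1)
                          x = pow(a, d, n)
                          if x in (1, n - 1):
                              continue
                          for _ in range(r - 1):
                              x = pow(x, 2, n)
                              if x == n - 1:
                                  break
                          else:
                              return False     # definitely composite
                      return True              # probably prime

                  print(miller_rabin(2**61 - 1))  # True: a Mersenne prime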

        • (Score: 0) by Anonymous Coward on Sunday June 23 2019, @04:04PM (1 child)

          by Anonymous Coward on Sunday June 23 2019, @04:04PM (#859086)

          No, actually clocks will get slower as the components shrink further. The advantage is more in efficiency. No idea where you get this idea that they should double.

        • (Score: 2) by Immerman on Monday June 24 2019, @02:40AM (2 children)

          by Immerman (3985) on Monday June 24 2019, @02:40AM (#859224)

          >The big problem with stacked chips and other ventures into 3D is heat dissipation.

          That would be an excellent use for diamond-based CPU wafers: we know how to N- and P-dope diamond, and it's a better thermal conductor than copper, making it much easier for heat to migrate to the surface.

          Sadly, flawless wafer-sized lab grown diamond is well behind schedule - presumably they've either encountered problems, or decided the gemstone market is far more profitable than selling diamond wafers to semiconductor manufacturers.

          • (Score: 2) by takyon on Monday June 24 2019, @10:40AM (1 child)

            by takyon (881) on Monday June 24 2019, @10:40AM (#859303) Journal

            "the gemstone market is far more profitable than selling diamond wafers to semiconductor manufacturers"

            If the production problems get worked out and there is an actual push to use diamond in various ways, you will see companies like Samsung build their own facilities.

            • (Score: 2) by Immerman on Tuesday June 25 2019, @01:13AM

              by Immerman (3985) on Tuesday June 25 2019, @01:13AM (#859568)

              As I recall, the big problem is that for building reliable CPUs you want an atomically flawless diamond, with no lattice discontinuities to interfere with its electrical properties, so you need to grow a single wafer-diameter crystal from a tiny flawless seed. And, at least with the vapor-deposition techniques in use 20-25 years ago, you could only grow flawless crystal in one direction: the cross-section widened over time in a very slight cone shape, at a fraction of a percent of the speed of growth in the primary direction. At the time, the projection was that it would take around 15-20 years before the crystals were wide enough to be worth using as semiconductor wafers.

              Unless the technology has fundamentally changed, that means that the only way Samsung could build such a facility would be if they could get their hands on a flawless diamond wafer from one of the existing diamond-growing companies to act as a seed.

  • (Score: 2) by takyon on Sunday June 23 2019, @10:56AM

    by takyon (881) on Sunday June 23 2019, @10:56AM (#859047) Journal

    AMD All Set To Capture 10% of the Total Server CPU Market by 2020, Report Indicates – Will Secure More Deals With 7nm EPYC CPUs Due To Strong Price / Performance Leadership [wccftech.com]

    Cray, AMD to build 1.5 exaflops supercomputer for US government [arstechnica.com] (not "Rome", but the next Epyc after that)

    Indications are that their market share is trending upward. But even if AMD goes to 10-20%, it's not the end of the world for Intel. Intel gets to continue to print money.

    However, Intel does appear poised to lower prices:

    Intel Prepares 15% CPU Price Cut in Response to AMD’s Ryzen 3000 Series [wccftech.com]

    That's for desktop chips. If Intel also lowers Xeon prices to anywhere near what AMD's Epyc prices are, that could be reflected in their balance sheet.

  • (Score: 2, Interesting) by Anonymous Coward on Sunday June 23 2019, @10:28PM (1 child)

    by Anonymous Coward on Sunday June 23 2019, @10:28PM (#859172)

    I had this discussion at work and lost.

    Pros:
    License costs for all but one vendor go down by half, as they are socket-based, not core-based. We would be able to turn all our multi-processor systems into single-processor systems (the Epyc part has enough PCIe lanes) and keep the same host count while increasing capacity. License cost savings would pay for the hardware in less than three years (a rough payback sketch follows below).

    AMD is less expensive initially too.

    Cons:
    Can't live-migrate workloads from existing Xeon-based hardware to Epyc-based hardware. Inconvenient, but not a deal-killer for us.

    Moron managers can't wrap their heads around the concept that things have changed; turns out this one trumps everything.
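    To make the licensing arithmetic above concrete, here is a rough payback sketch; every number in it (host count, license fee, server cost) is a hypothetical placeholder, not the poster's actual figures:

      # Hypothetical consolidation: dual-socket Xeons -> single-socket Epycs,
      # with per-socket licensing. All inputs are invented placeholders.
      hosts = 20
      license_per_socket = 10_000   # annual cost per licensed socket
      sockets_before, sockets_after = 2, 1
      epyc_server_cost = 15_000     # per replacement host

      annual_savings = hosts * license_per_socket * (sockets_before - sockets_after)
      hardware_outlay = hosts * epyc_server_cost

      print(f"Annual license savings: ${annual_savings:,}")   # $200,000
      print(f"Hardware outlay:        ${hardware_outlay:,}")  # $300,000
      print(f"Payback: {hardware_outlay / annual_savings:.1f} years")  # 1.5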

    • (Score: 0) by Anonymous Coward on Monday June 24 2019, @02:47AM

      by Anonymous Coward on Monday June 24 2019, @02:47AM (#859226)

      Things will change when the first company goes bankrupt waiting on Intel to deliver on its promises.