Submitted via IRC for Bytram
7nm AMD EPYC "Rome" CPU w/ 64C/128T to Cost $8K (56 Core Intel Xeon: $25K-50K)
Yesterday, we shared the core and thread counts of AMD's Zen 2 based Epyc lineup, with the lowest-end chip having as few as 8 cores while the top-end 7742 boasts 64 cores and double the threads. Today, the prices of these server parts have also surfaced, and it seems they are going to be quite a bit cheaper than the competing Intel Xeon Platinum processors.
The top-end Epyc 7742, with a TDP of 225W (128 threads @ 3.4GHz), is said to sell for a bit less than $8K, while the lower-clocked 7702 and the single-socket 7702P are going to cost $7,215 and $4,955 respectively. That's quite impressive: you're getting 64 Zen 2 cores for just under $5,000, while on the other hand Intel's 28-core Xeon Platinum 8280 costs a whopping $18K and is half as powerful.
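For scale, the quoted figures work out to a large per-core price gap. A quick back-of-envelope sketch, using only the prices reported above (the $8K figure for the 7742 is the "a bit less than" ceiling, not an exact list price):

```python
# Rough price-per-core comparison using the figures quoted above.
# Prices are reported/rumored figures, not confirmed list prices.
chips = {
    "EPYC 7742 (64C)":           (8000, 64),
    "EPYC 7702 (64C)":           (7215, 64),
    "EPYC 7702P (64C, 1S-only)": (4955, 64),
    "Xeon Platinum 8280 (28C)":  (18000, 28),
}

for name, (price, cores) in chips.items():
    print(f"{name}: ${price / cores:.0f} per core")
```

Even at the 7742's ceiling price, AMD lands around $125 per core versus roughly $640 per core for the 8280.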
(Score: 2) by bzipitidoo on Sunday June 23 2019, @02:41PM (13 children)
In other words, as I have heard hinted, marketing has seized on the die size and warped that once reliable measurement to hype product. I suppose what TSMC calls 7nm is actually one component that has been reduced to 7nm while the bulk of the die remains at 10nm or even 14nm?
Novel tech? Silicon and the von Neumann architecture have had a great run, and are still king. GaAs, optical circuitry, graphene, memristors ... nothing has made serious inroads into silicon's dominance. Perhaps neural network computing will. AlphaGo and its descendants were very impressive. Thus far, quantum computing has been vaporware. The big problem with stacked chips and other ventures into 3D is heat dissipation. That universal computing memory sounds great, but I thought memristors had great potential, and where are they now? Heck, I wonder if that universal memory is memristors!
(Score: 3, Insightful) by takyon on Sunday June 23 2019, @03:33PM (7 children)
Jump into the quantum supremacy [soylentnews.org] and universal memory articles.
Node naming has been busted for many years now, which is why I always put it in quotes and typically name the foundry.
https://semiengineering.com/how-many-nanometers/ [semiengineering.com]
https://en.wikichip.org/wiki/technology_node [wikichip.org]
[SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
(Score: 2) by HiThere on Sunday June 23 2019, @04:33PM (3 children)
OTOH, quantum computers still require cryogenic cooling. Until that problem is licked, or at least reduced to dry-ice temperatures, quantum computers will all be at datacenters.
I'm currently most intrigued by 3-d chips. They've got a heat problem that just won't quit, though, so the chips need to be manufactured with built-in cooling systems. So far that's too difficult, but that's a design and engineering problem, not something basic. Or perhaps they could come up with a chip design that works better when it's hot? (The last one of those I saw worked on vacuum tubes, though.)
Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
(Score: 3, Interesting) by takyon on Sunday June 23 2019, @05:39PM
I see a few ways forward on 3D chips:
[SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
(Score: 2) by takyon on Monday June 24 2019, @10:52AM (1 child)
I forgot to mention that the industry is moving towards a rudimentary form of stacking: Wafer-on-Wafer (WoW), which can be used to stack two processors. It has the advantage of bringing sets of cores (modules) closer together which helps with chip latency. It isn't mentioned how you would deal with the extra heat, which would rise from the bottom wafer to the top one.
TSMC’s New Wafer-on-Wafer Process to Empower NVIDIA and AMD GPU Designs [engineering.com]
TSMC Will Manufacture 3D Stacked WoW Chips In 2021 Claims Executive [wccftech.com]
So you get a doubling of transistors within the same footprint, and up to a doubling of multi-core performance (I assume less than 2x if clocks drop). Maybe as soon as 2021 and tied to a TSMC "5nm" node.
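That "less than 2x if clocks drop" intuition can be sketched with a toy throughput model (my own back-of-envelope, assuming multi-core throughput scales roughly as cores times clock, which ignores memory bottlenecks):

```python
# Back-of-envelope: WoW stacking doubles the cores in the same footprint,
# but thermal limits may force a clock reduction on the stacked part.
# Assumes throughput ~ cores * clock (perfect multi-core scaling).
def wow_speedup(clock_ratio):
    """Relative throughput of a 2-wafer stack vs. a single wafer,
    where clock_ratio = stacked_clock / original_clock."""
    return 2 * clock_ratio

print(wow_speedup(1.00))  # no clock penalty: the ideal 2.0x
print(wow_speedup(0.85))  # a 15% clock drop still nets 1.7x
```

So even a fairly painful clock drop leaves stacking well ahead, under that (optimistic) scaling assumption.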
[SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
(Score: 2) by HiThere on Monday June 24 2019, @08:10PM
Yes. They've been edging towards 3-D chips since the early '80s, perhaps earlier. But I'm not sure (yet) that this "stacking" is primarily aimed at increasing 3-Dness. It seems like it may be aimed more at increasing the percentage of good chips.
Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
(Score: 0) by Anonymous Coward on Monday June 24 2019, @08:16AM (2 children)
Quantum supremacy isn't all that supreme.
The thing about quantum computers is that they don't actually solve that many problems. Everyone knows about factorization, but that mostly causes more real-world problems than it solves. I think much of the research into quantum computing has been driven by the need to make sure that nobody invents them in secret, so they can decrypt everyone's communications. But everyone will have switched to quantum-proof cryptography before that happens. So this ability is not very important.
Mainly, the thing they would be useful for is simulation of quantum systems. It might be that the main thing we get out of quantum computers is... much better regular computers. And for that, you don't need an actual quantum computer on your desk to get the benefits from them.
(Score: 2) by takyon on Monday June 24 2019, @10:37AM (1 child)
I thought the same exact thing, but:
https://en.wikipedia.org/wiki/Quantum_algorithm [wikipedia.org]
With the caveat that this would not apply to an annealer like D-Wave.
If quantum computers running classical code turn out to be slow and impractical, I think you could still see applications for quantum computing on home computers, such as simulating real-world systems within open-world video games, or machine learning. If a quantum computer can be made to work near room temperature, without significant cooling, you could see it integrated onto a smartphone SoC or as an add-on card for desktops. Make it available, and people will figure out what to do with it.
[SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
(Score: 2) by bzipitidoo on Monday June 24 2019, @12:56PM
Not to be facetious, but there's a lot of uncertainty around quantum computing. We don't even have a firm grasp of just which problems they can solve quickly that classical computers cannot, and we won't until the famous question of whether P!=NP is solved. Most people strongly suspect that P!=NP, but if somehow it should turn out the opposite, that P=NP, then quantum computing may be of no value. So far, what's known is that BQP, the problems a quantum computer can solve within a bounded amount of error in polynomial time, contains P and is contained in PSPACE; whether BQP is contained in NP, or vice versa, is itself an open question.
In the efforts to move closer to solving whether P!=NP, researchers have come up with an awful lot of problem classifications. For instance, there's RP, the set of problems that can be solved in polynomial time with a randomized algorithm that is allowed one-sided error. RP is also somewhere between P and NP. RP might be equal to P. Whether it contains BQP, or BQP contains it, or neither, is not known. Compositeness testing was known to be in RP (equivalently, primality in coRP) until 2002, when the deterministic AKS algorithm placed primality testing firmly in P.
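To make the RP-style one-sided error concrete, here's a minimal Miller-Rabin sketch (my illustration, not from the thread): a "composite" verdict is always certain, while a "prime" verdict carries a bounded error probability that shrinks with each round.

```python
import random

def miller_rabin(n, rounds=20):
    """Probabilistic primality test (Miller-Rabin).
    'Composite' answers are certain; 'prime' answers are wrong with
    probability at most 4**-rounds. This one-sided error is exactly
    what places compositeness testing in RP (primality in coRP)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # Write n - 1 as d * 2**r with d odd.
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a is a witness: n is definitely composite
    return True  # probably prime

print(miller_rabin(2**61 - 1))  # a Mersenne prime -> True
print(miller_rabin(2**61 + 1))  # composite (divisible by 3) -> False
```

AKS, by contrast, decides the same question with no randomness at all, which is what moved primality from "in coRP" to "in P".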
(Score: 0) by Anonymous Coward on Sunday June 23 2019, @04:04PM (1 child)
No, actually clocks will get slower as the components shrink further. The advantage is more in efficiency. No idea where you got the idea that they should double.
(Score: 0) by Anonymous Coward on Sunday June 23 2019, @04:15PM
Here is some discussion: https://i.reddit.com/r/intel/comments/a1wv7l/the_5ghz_barrier_and_cpus_10nm_and_lower_with/ [reddit.com]
(Score: 2) by Immerman on Monday June 24 2019, @02:40AM (2 children)
>The big problem with stacked chips and other ventures into 3D is heat dissipation.
Which would be an excellent use for diamond-based CPU wafers: we know how to N- and P-dope diamond, and it's a better thermal conductor than copper, making it much easier for heat to migrate to the surface.
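The conductivity gap is dramatic. A rough 1-D Fourier conduction sketch (my numbers: typical textbook conductivities, and an arbitrary slab geometry chosen for illustration):

```python
# Steady-state 1-D conduction: q = k * A * dT / L.
# Thermal conductivities are typical literature values in W/(m*K);
# real values vary with temperature and crystal quality.
k = {"silicon": 150, "copper": 400, "diamond": 2200}

def heat_flow(material, area_m2, delta_T, thickness_m):
    """Watts conducted through a slab of the given material."""
    return k[material] * area_m2 * delta_T / thickness_m

# Example: a 1 cm^2, 0.5 mm-thick slab with a 10 K gradient across it.
for m in k:
    print(f"{m}: {heat_flow(m, 1e-4, 10, 5e-4):.0f} W")
```

Same geometry, same gradient: diamond moves heat roughly 5.5x faster than copper and ~15x faster than silicon, which is why it keeps coming up for stacked-die cooling.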
Sadly, flawless wafer-sized lab-grown diamond is well behind schedule; presumably they've either encountered problems, or decided the gemstone market is far more profitable than selling diamond wafers to semiconductor manufacturers.
(Score: 2) by takyon on Monday June 24 2019, @10:40AM (1 child)
If the production problems get worked out and there is an actual push to use diamond in various ways, you will see companies like Samsung build their own facilities.
[SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
(Score: 2) by Immerman on Tuesday June 25 2019, @01:13AM
As I recall, the big problem is that for building reliable CPUs you want an atomically flawless diamond with no lattice discontinuities that would interfere with its electrical properties, so you need to grow a single wafer-diameter crystal from a tiny flawless seed. And, at least for the vapor deposition technologies being used 20-25 years ago, you could only grow flawless crystal in one direction, while the cross-section would slowly increase over time as it grew in a very slight "cone" shape, widening at only a fraction of a percent of the speed in the primary growth direction. At the time they were projecting that it would take around 15-20 years before their crystals were wide enough to be worth using as semiconductor wafers.
Unless the technology has fundamentally changed, that means that the only way Samsung could build such a facility would be if they could get their hands on a flawless diamond wafer from one of the existing diamond-growing companies to act as a seed.