AMD's Ryzen Threadripper 1950X will have 16 cores for $1,000, and the Threadripper 1920X will have 12 cores for $800. They will be available in early August:
Last night, out of the blue, we received an email from AMD sharing some of the specifications for the forthcoming Ryzen Threadripper CPUs to be announced today. Up until this point, we knew a few things – Threadripper would consist of two Zeppelin dies featuring AMD's latest Zen core and microarchitecture, and would essentially double up on the HEDT Ryzen launch. Doubling the dies means doubling pretty much everything: Threadripper would support up to 16 cores, up to 32 MB of L3 cache, and quad-channel memory, and would require a new socket/motherboard platform called X399, sporting a massive 4,094-pin socket (also marking AMD's first consumer LGA socket). By virtue of having sixteen cores, AMD is seemingly carving out a new consumer category above HEDT/High-End Desktop, which we've coined the 'Super High-End Desktop', or SHED for short.
[...] From what we do know, 16 Zen cores at $999 is about the ballpark price we were expecting. With clock speeds of 3.4 GHz base and 4.0 GHz turbo, this is essentially two Ryzen 7 1800X dies at $499 each stuck together, creating the $999 price (obviously it's more complicated than that). Given the frequencies and the performance of these dies, the TDP is likely in the 180W range, seeing as the Ryzen 7 1800X was a 95W CPU at slightly higher frequencies. The 1950X runs at 4.0 GHz turbo and also has access to AMD's XFR – which will boost the processor when temperature and power allow – in jumps of +25 MHz. AMD would not comment on the maximum frequency boost of XFR, though given our experience with the Ryzen silicon and previous Ryzen processor specifications, it is likely to be +100 MHz. We were not told if the CPUs would come with a bundled cooler, although if our 180W prediction is in the right area, substantial cooling would be needed. We expect AMD to use the same indium-tin solder as the Ryzen CPUs, although we were unable to get confirmation of this at this time.
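As a quick back-of-the-envelope check on the "two 1800X dies" framing, here is a small Python sketch; the +100 MHz XFR ceiling is the guess above, not an AMD-confirmed figure:

    # Treat the 1950X as two Ryzen 7 1800X dies and double the headline specs.
    r7_1800x = {"cores": 8, "l3_mb": 16, "price_usd": 499}
    tr_1950x = {k: 2 * v for k, v in r7_1800x.items()}
    print(tr_1950x)  # {'cores': 16, 'l3_mb': 32, 'price_usd': 998} -- near the $999 list

    # XFR boosts in +25 MHz steps above the 4.0 GHz turbo; the four-step
    # (+100 MHz) ceiling is an assumption based on earlier Ryzen parts.
    turbo_ghz, xfr_step_ghz, assumed_steps = 4.0, 0.025, 4
    print(f"peak with XFR: {turbo_ghz + assumed_steps * xfr_step_ghz:.3f} GHz")  # 4.100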
[...] Comparing the two, and what we know, AMD is going to do battle on many fronts. Coming in at $999 is aggressive, along with an all-core turbo of 3.4 GHz or above: Intel's chip at $1999 will likely turbo below this. Both chips will have quad-channel DRAM, supporting DDR4-2666 in 1 DIMM per channel mode (and DDR4-2400 at 2 DPC), but there are some tradeoffs. Intel Core parts do not support ECC, while AMD Threadripper parts are expected to (awaiting confirmation). Intel has the better microarchitecture in terms of pure IPC, though it will be interesting to see the real-world difference if AMD is clocked higher. AMD Threadripper processors will have access to 60 lanes of PCIe for accelerators such as GPUs, RAID cards, and other add-ins, with another 4 reserved for the chipset; Intel will likely be limited to 44 lanes for accelerators, but has the much better chipset in X299 for IO support and capabilities. We expect AMD to run a 180W TDP and Intel 165W, giving a slight advantage to Intel perhaps (depending on workload), and Intel will also offer AVX-512 support on its CPUs, whereas AMD has smaller FMA and AVX engines by comparison. The die-to-die latency of AMD's MCM will also be an interesting element to the story, depending on exactly where AMD is aiming this product.
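To put the lane counts in context, here is a hypothetical X399 lane-budget sketch in Python; the slot layout is our own illustration, not a published AMD allocation:

    # 64 lanes from the CPU, 4 reserved for the chipset link, 60 usable.
    total_lanes, chipset_lanes = 64, 4
    usable = total_lanes - chipset_lanes

    # One plausible build: two x16 GPUs, an x8 RAID card, three x4 NVMe drives.
    layout = {"gpu0": 16, "gpu1": 16, "raid": 8, "nvme0": 4, "nvme1": 4, "nvme2": 4}
    used = sum(layout.values())
    print(f"used {used} of {usable} lanes, {usable - used} to spare")  # used 52 of 60, 8 to spare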
There are also some details on the Ryzen 3 quad-cores, but no confirmed pricing yet.
Meanwhile, Intel's marketing department has badmouthed AMD, calling 32-core Naples server chips "4 glued-together desktop die". That could have something to do with AMD's chips matching Intel's performance on certain workloads at around half the price.
Also at CNET, The Verge, and Ars Technica.
Previously: CPU Rumor Mill: Intel Core i9, AMD Ryzen 9, and AMD "Starship"
Intel Announces 4 to 18-Core Skylake-X CPUs
Intel Core i9-7900X Reviewed: Hotter and More Expensive than AMD Ryzen 1800X for Small Gains
AMD Epyc 7000-Series Launched With Up to 32 Cores
(Score: 0) by Anonymous Coward on Friday July 14 2017, @12:11AM (11 children)
How are they gonna feed data to all them cores? Now they will have to strap huge cooling systems onto the memory modules, in addition to those for the CPUs, GPUs, and whatever coprocessors and bridge chips.
It all points to power efficiency as the supreme metric.
(Score: 2) by Runaway1956 on Friday July 14 2017, @12:33AM (3 children)
It's not that big a problem. I'm headed to Home Depot. I'll get a window air conditioner, and hang it on the side of my EATX tower, and park the tower in front of a window. I may have to build a short platform to sit the tower on - ten or twelve inches should make it line up nicely. Who cares about efficiency? I just want the biggest, baddest machine within 100 miles!!
(Score: 0) by Anonymous Coward on Friday July 14 2017, @12:40AM (1 child)
Some of us have to live in "proper" homes, not in a trailer. Sheeeit, trailers do have their advantages.
(Score: 0) by Anonymous Coward on Friday July 14 2017, @02:06PM
EATX does not spell "trailer". And, that shed in your mama's backyard doesn't constitute a "proper" home.
(Score: 2) by bob_super on Friday July 14 2017, @12:54AM
Save money: Just stop at the Auto Parts store and grab a couple gallons of 5W-30 for immersion, and a water pump to circulate.
(Score: 2) by kaszz on Friday July 14 2017, @01:03AM (2 children)
So people will have to install fluid cooling for the CPU, north bridge, GPU, and memory modules. Do you see any significant obstacle to doing that?
The one that comes to mind spontaneously is that memory module connectors tend to be mechanically weak perpendicular to their socket. And there's the usual condensation issue, but that can be handled by measuring the ambient temperature and the coolant temperature: as long as the coolant stays above the ambient dew point it should be alright (see the sketch below).
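A minimal dew-point check in Python, using the Magnus approximation; the ambient and coolant numbers are made up for illustration:

    import math

    def dew_point_c(temp_c, rel_humidity_pct):
        # Magnus approximation with the common Sonntag coefficients.
        a, b = 17.62, 243.12
        gamma = a * temp_c / (b + temp_c) + math.log(rel_humidity_pct / 100.0)
        return b * gamma / (a - gamma)

    ambient_c, humidity_pct, coolant_c = 25.0, 50.0, 12.0  # assumed conditions
    dp = dew_point_c(ambient_c, humidity_pct)
    print(f"dew point: {dp:.1f} C")  # ~13.9 C
    print("coolant will sweat" if coolant_c < dp else "no condensation expected")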
(Score: 0) by Anonymous Coward on Friday July 14 2017, @01:21AM (1 child)
A one-off supercomputer installation, maybe. For the server farms/data centers the chips are marketed to, liquid cooling is a no-go. Data centers are like freight-truck fleets; they are not Formula 1 racing teams.
(Score: 0) by Anonymous Coward on Friday July 14 2017, @06:53AM
Water can be easier to move around than air [ovh.com].
But OVH only does about 20 kW (144 servers) per rack.
If you want to go up to 500 kW per rack, you need to look into phase-change cooling [datatank-mining.com].
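The arithmetic behind those figures, as a quick Python sketch (the 500 kW rack is the what-if above, not a shipping product):

    # OVH's cited density: ~20 kW across 144 servers per rack.
    rack_w, servers = 20_000, 144
    per_server_w = rack_w / servers
    print(f"~{per_server_w:.0f} W per server")  # ~139 W

    # A 500 kW rack at the same per-server draw would mean:
    print(f"~{500_000 / per_server_w:.0f} servers per rack")  # ~3600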
(Score: 2) by jimtheowl on Friday July 14 2017, @01:21AM (2 children)
Your brain is extremely power efficient, yet that is not a metric I would rely on.
(Score: 1, Touché) by Anonymous Coward on Friday July 14 2017, @01:35AM (1 child)
A brilliant observation. Somebody call the Novel committee.
(Score: 2) by SpockLogic on Friday July 14 2017, @05:55PM
Why, you need permission to write a book?
(Score: 3, Informative) by WillR on Friday July 14 2017, @02:55PM
I'm gonna go way out on a limb here and guess that's what the 60 fucking PCIe lanes are for.
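For scale, a rough aggregate-bandwidth estimate for those lanes in Python; ~985 MB/s per lane per direction is the usual PCIe 3.0 figure after 128b/130b encoding overhead:

    # 60 usable PCIe 3.0 lanes, ~985 MB/s usable per lane per direction.
    lanes, mb_per_lane = 60, 985
    print(f"~{lanes * mb_per_lane / 1000:.0f} GB/s per direction")  # ~59 GB/s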