AMD's TR 1950X (16 cores) and TR 1920X (12 cores) CPUs will be released on August 10th:
The news at the top of the hour is the date on which AMD is making Threadripper and the associated TR4-based motherboards available at retail: August 10th. This is expected to be a full worldwide retail launch, so don't be surprised if your favorite retailer starts posting teaser images about how much stock it has. August 10th will see both the 1950X and 1920X in their retail packaging, along with motherboards from the four main motherboard vendors.
AMD has also announced an 8-core version of Threadripper, the TR 1900X, for $549. Why buy it instead of spending $300 on the Ryzen 7 1700 or $420 on the Ryzen 7 1800X, both of which have eight cores?
There are some questions around why AMD would release an 8-core Threadripper, given that the Ryzen 7 1800X also has eight cores and currently retails around $399 when distributor sales are factored in. The main differentiator here is going to be IO: specifically, the user gets access to quad-channel memory and all the PCIe lanes required for multi-GPU or multiple add-in cards, along with a super high-end motherboard that likely offers multiple CPU-connected PCIe x4 storage slots and/or 10G Ethernet, plus additional features.
Previously: CPU Rumor Mill: Intel Core i9, AMD Ryzen 9, and AMD "Starship"
AMD 16/12-Core Threadripper Details Confirmed
(Score: 2) by kaszz on Friday August 04 2017, @02:40AM (8 children)
It's the heat that kills faster processors, AFAIK, not the cost as such. But if there are fewer transistors to produce that heat, maybe it could work? I.e., something like a souped-up 6502 + MMU in a 64-bit version that clocks at 100 GHz.
(Score: 2) by Immerman on Friday August 04 2017, @03:13AM (7 children)
The problem is that reliability falls as speed increases. You can overcome that to a certain extent by making things smaller, so that electrons cross the transistor faster/in response to a lower signal, or by increasing the voltage so that you get a better "signal to noise ratio". Unfortunately we're about as small as silicon-based circuits can get without running afoul of quantum mechanical interference, and increasing the voltage dramatically increases heat (power increases with the square of voltage) while offering diminishing returns on signal quality.
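For reference, the "square of voltage" point is the standard CMOS dynamic-power relation, P ≈ C·V²·f. A minimal sketch of the scaling, with a purely illustrative capacitance figure rather than any real chip's spec:

    # Rough sketch of CMOS dynamic power scaling: P ~ C * V^2 * f.
    # The capacitance figure is purely illustrative, not a real chip spec.
    C = 1e-9      # effective switched capacitance per cycle, farads (illustrative)
    f = 4e9       # clock frequency, Hz (a contemporary high-end clock)

    for v in (1.0, 1.2, 1.5):
        p = C * v**2 * f
        print(f"V = {v:.1f} V -> dynamic power ~ {p:.2f} W")
    # Going from 1.0 V to 1.5 V raises power by 2.25x at the same clock.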
A sufficiently simple chip could potentially be pushed a lot faster than a modern CPU, but even immersing the thing in a tank of oil coolant and letting it boil off, à la the famous "Little Bubbles" Cray, is unlikely to get you to 100 GHz. Maybe with high-powered liquid nitrogen cooling, or liquid helium - but that's going to get very finicky and expensive. Plus, it won't run nearly as fast as you'd expect - consider that a modern high-end CPU runs at about the same clock speed as one from a decade ago, while even most single-threaded software will run considerably faster on the newer chip - all those extra transistors are buying you a lot of predictive optimizations as well.
(Score: 3, Interesting) by takyon on Friday August 04 2017, @03:59AM (6 children)
Transistors in the 0.5 THz to 1.0 THz range are possible:
http://www.news.gatech.edu/2014/02/17/silicon-germanium-chip-sets-new-speed-record [gatech.edu]
If I'm not mistaken, a 1000 GHz chip would have to have lots of tiny [physlink.com] cores, which means that GPUs would benefit more than CPUs from such high clock speeds, because your CPU would have to be Xeon Phi style with hundreds of cores.
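To make the "tiny cores" intuition concrete: in one clock period a signal can only cross so much silicon, which bounds how large a core at that clock can be. A back-of-envelope sketch, assuming on-chip signals propagate at roughly half the speed of light (an illustrative figure, not a measured one):

    # How far can a signal travel in one clock cycle?
    # Assumes on-chip signals propagate at ~half the speed of light (illustrative).
    c = 3.0e8                 # speed of light in vacuum, m/s
    signal_speed = 0.5 * c

    for clock in (4e9, 100e9, 1000e9):    # 4 GHz, 100 GHz, 1 THz
        reach_mm = signal_speed / clock * 1000
        print(f"{clock / 1e9:6.0f} GHz -> signal reach per cycle ~ {reach_mm:.2f} mm")
    # At 1 THz a signal covers well under a millimeter per cycle, so any core
    # clocked that fast must be physically tiny -- hence "lots of tiny cores".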
Good news for next-next-next-gen VR: if Silicon-Germanium, carbon nanotubes, or some other technology enables 100 GHz and above clock speeds, then we can easily see 1 petaflops [reddit.com] to 1 exaflops GPUs.
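The petaflops figure checks out as rough arithmetic: peak FLOPS ≈ shader cores × ops per clock × frequency. A sketch with an assumed 2017-class core count and one FMA (two ops) per clock, not any announced product:

    # Peak FLOPS ~ shader_cores * flops_per_clock * clock.
    # Core count and 2 FLOPs/clock (one fused multiply-add) are assumptions
    # loosely modeled on a 2017-era GPU, not an announced product.
    shader_cores = 4096
    flops_per_clock = 2       # one FMA counted as two floating-point ops

    for clock_ghz in (1.5, 100, 1000):
        flops = shader_cores * flops_per_clock * clock_ghz * 1e9
        print(f"{clock_ghz:7.1f} GHz -> {flops / 1e15:8.3f} PFLOPS")
    # ~1.5 GHz lands in today's ~12 TFLOPS class; 100 GHz would be ~0.8 PFLOPS,
    # and 1 THz roughly 8 PFLOPS from the same core count.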
(Score: 2) by fyngyrz on Friday August 04 2017, @07:35AM (2 children)
Memory remains a severe bottleneck - until/unless it's on the CPU die in significant amounts, the distance between the CPU and the memory will eat that speed like a pothead with a fresh bag of Fritos.
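To put numbers on how badly distance and DRAM latency hurt at high clocks, here's a rough estimate; the latency, trace length, and propagation speed below are assumed order-of-magnitude figures, not measurements:

    # Why off-chip memory "eats" clock speed: board-trace round trip plus DRAM's
    # own access latency, counted in core cycles. Figures are assumed/typical.
    dram_latency_ns = 70      # rough DDR4 access latency (assumed order of magnitude)
    trace_length_m = 0.10     # CPU-to-DIMM distance (assumed)
    signal_speed = 1.5e8      # ~half the speed of light in board traces (assumed)

    round_trip_ns = 2 * trace_length_m / signal_speed * 1e9
    total_ns = dram_latency_ns + round_trip_ns

    for clock_ghz in (4, 100):
        cycles = total_ns * clock_ghz
        print(f"{clock_ghz:4d} GHz core stalls ~{cycles:,.0f} cycles per uncached access")
    # ~285 cycles at 4 GHz becomes ~7,000 cycles at 100 GHz: the core mostly
    # idles unless the working set fits in on-die memory or cache.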
(Score: 2) by takyon on Friday August 04 2017, @10:49AM (1 child)
High Bandwidth Memory [wikipedia.org] (or the similar Hybrid Memory Cube [wikipedia.org]) and subsequent versions have helped massively on that front.
Samsung Increases Production of 8 GB High Bandwidth Memory 2.0 Stacks [soylentnews.org]
HBM3: Cheaper, up to 64GB on-package, and terabytes-per-second bandwidth [arstechnica.com]
Post-NAND replacements like Crossbar RRAM, PCM, or memristors were supposed to enable terabytes of memory with similar endurance and speed to DRAM. Instead we have gotten 3D XPoint. Maybe in 10 years that situation will change.
(Score: 2) by fyngyrz on Friday August 04 2017, @02:50PM
There's some hope there, all right. But a lot of the bandwidth comes from wide access; that's not going to help nearly as much when access is a word here and a word there. Until we get optical or some other inherently high-speed storage that works and is affordable, memory will likely remain slower than processors and cache will remain the word of the day.
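To illustrate the wide-access point: if only one word of each burst is actually wanted, effective bandwidth collapses regardless of interface width. Illustrative figures only; the peak bandwidth and burst size are assumptions:

    # Wide interfaces deliver bandwidth per burst, but a "word here and a word
    # there" pattern wastes most of each burst. Illustrative figures only.
    peak_bandwidth_gbs = 256  # e.g. one HBM2 stack's peak bandwidth (assumed)
    burst_bytes = 64          # minimum transfer granularity, one cache line (assumed)
    useful_bytes = 8          # one 64-bit word actually wanted

    effective_gbs = peak_bandwidth_gbs * useful_bytes / burst_bytes
    print(f"Streaming: ~{peak_bandwidth_gbs} GB/s, scattered words: ~{effective_gbs:.0f} GB/s")
    # Only 1/8 of every burst is useful, so the width advantage largely
    # evaporates for pointer-chasing access -- hence caches stay king.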
(Score: 2) by Immerman on Friday August 04 2017, @01:11PM (2 children)
Quite, but that's basically an entirely new technology - lots of improvements available if we're willing to pay through the nose for it. Nothing that can be mass-produced in the short term, though.
(Score: 2) by takyon on Friday August 04 2017, @01:58PM (1 child)
IBM used silicon-germanium in its 7nm [soylentnews.org] and 5nm demo chips [soylentnews.org].
3nm seems possible: TSMC Plans New Fab for 3nm [eetimes.com]
ASML is talking about 1-3nm [nextbigfuture.com].
TSMC could put out 3nm chips around 2022. So we have at least 5 years, possibly up to 10, before we need to explore raising clock rates, stacking cores in layers, or other crazy approaches to boosting performance.
(Score: 2) by Immerman on Friday August 04 2017, @02:26PM
Yep - they're looking great in the lab, and I'm looking forward to them hitting the streets. For now though they're basically irrelevant. Maybe in 5-10 years we'll be able to buy them, and maybe they'll reopen the traditional clock-increasing method of boosting performance (really hope you didn't intentionally include that in the "other crazy approaches"), but I've seen far too many promising technologies get neglected and abandoned over the years to give a whole lot of credence to demo units.
Heck, silicon-germanium processors were supposed to be right around the corner 17 years ago when CPU clock rates started seriously plateauing. 17 years later and rather than a thousandfold increase in keeping with the prior trend, clock speeds have barely more than doubled, and all the tricks we've thrown at them haven't yielded performance improvements all that much more impressive. And we're still waiting on germanium.