Recently, Intel was rumored to be releasing 10 and 12 core "Core i9" CPUs to compete with AMD's 10-16 core "Threadripper" CPUs. Now, Intel has confirmed these as well as 14, 16, and 18 core Skylake-X CPUs. Every CPU with 6 or more cores appears to support quad-channel DDR4:
Intel Core | Cores/Threads | Price | $/core |
---|---|---|---|
i9-7980XE | 18/36 | $1,999 | $111 |
i9-7960X | 16/32 | $1,699 | $106 |
i9-7940X | 14/28 | $1,399 | $100 |
i9-7920X | 12/24 | $1,199 | $100 |
i9-7900X | 10/20 | $999 | $100 |
i7-7820X | 8/16 | $599 | $75 |
i7-7800X | 6/12 | $389 | $65 |
i7-7740X | 4/8 | $339 | $85 |
i7-7640X | 4/4 | $242 | $61 (fewer threads) |
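The $/core column is just list price divided by core count, rounded to the nearest dollar; a quick sanity-check sketch (Python, with the prices and core counts copied straight from the table above):

```python
# Sanity check of the $/core column: list price divided by core count.
# Model names, prices, and core counts are copied from the table above.
lineup = [
    ("i9-7980XE", 1999, 18),
    ("i9-7960X", 1699, 16),
    ("i9-7940X", 1399, 14),
    ("i9-7920X", 1199, 12),
    ("i9-7900X", 999, 10),
    ("i7-7820X", 599, 8),
    ("i7-7800X", 389, 6),
    ("i7-7740X", 339, 4),
    ("i7-7640X", 242, 4),
]

for model, price, cores in lineup:
    print(f"{model}: {cores} cores at ${price} -> ${price / cores:.2f}/core")
```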
Last year at Computex, the flagship Broadwell-E enthusiast chip was launched: the 10-core i7-6950X at $1,723. Today at Computex, the 10-core i9-7900X costs $999, and the 16-core i9-7960X costs $1,699. Clearly, AMD's Ryzen CPUs have forced Intel to become competitive.
Although the pricing of AMD's 10-16 core Threadripper CPUs is not known yet, the 8-core Ryzen R7 launched at $500 (available now for about $460). The Intel i7-7820X has 8 cores for $599, and will likely have better single-threaded performance than the AMD equivalent. So while Intel's CPUs are still more expensive than AMD's, they may have similar price/performance.
For what it's worth, Intel also announced quad-core Kaby Lake-X processors.
Welcome to the post-quad-core era. Will you be getting any of these chips?
(Score: 4, Interesting) by bradley13 on Tuesday May 30 2017, @02:42PM (16 children)
Will you be getting any of these chips?
Honestly, there's not much point unless the chip is going into a server, or you have some really special applications. If you have a four-core processor, it already spends most of its time bored.
That said, it's pretty cool that Moore's Law lives on. If you add up the total compute capacity of one of these chips, it's pretty astounding. That article a while back, grousing about how we only have "incremental" improvements in technology? Sometimes, quantity has a quality all its own. Just consider all of the changes, both in technology and in society, that are directly attributable to increased computing power.
Yeah, also Facebook, but I guess it can't all be good stuff...
Everyone is somebody else's weirdo.
(Score: 3, Informative) by zocalo on Tuesday May 30 2017, @03:27PM (10 children)
UNIX? They're not even circumcised! Savages!
(Score: 2) by ikanreed on Tuesday May 30 2017, @04:18PM (6 children)
And of course, there's video games. You know, one of the primary reasons people get fancier, more powerful, and substantially more expensive computers. If you can't think of how the graphics pipeline, engine pipeline, resource-loading pipeline, and AI state updates might all be on different threads/cores, you're not very creative.
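For the skeptics, here's a toy illustration of that decomposition. The subsystem names, tick rates, and counts are made up, and Python threads won't actually buy you extra cores because of the GIL; it's only meant to show the shape of it, not how any real engine works:

```python
# Toy illustration of game subsystems each running on their own thread.
# The subsystem names, tick rates, and tick counts are invented for the
# example; a real engine's job/thread model is far more involved.
import threading
import time

def run_subsystem(name, tick_seconds, ticks):
    for i in range(ticks):
        # Stand-in for real work: rendering, physics, streaming assets, AI...
        time.sleep(tick_seconds)
        print(f"{name}: tick {i + 1}")

subsystems = [
    ("graphics pipeline", 1 / 60, 10),
    ("engine/physics", 1 / 60, 10),
    ("resource loading", 1 / 10, 3),
    ("AI state updates", 1 / 30, 5),
]

threads = [threading.Thread(target=run_subsystem, args=spec) for spec in subsystems]
for t in threads:
    t.start()
for t in threads:
    t.join()
```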
(Score: 2) by EvilSS on Tuesday May 30 2017, @05:10PM
(Score: 2) by zocalo on Tuesday May 30 2017, @05:23PM (4 children)
Even assuming an MMO, where you're going to have a few more options that could be readily farmed off to a dedicated thread/core on top of the four examples of execution threads you gave (team comms, for instance), and maybe making some of the main threads (graphics and engine, perhaps?) multi-threaded in their own right, you might get up towards 8-10 threads, but 18? Even allowing for a few threads left over for background OS tasks, I'm not sure we're going to be seeing many games that can take full advantage of the 18 core monster - let alone its 36 threads - for quite some time, if at all.
UNIX? They're not even circumcised! Savages!
(Score: 2) by ikanreed on Tuesday May 30 2017, @05:24PM
Basically only AAA games that explicitly have PC as a primary target; consoles tend to cap out at 4 cores.
Which is to say, not a lot.
(Score: 2) by tibman on Tuesday May 30 2017, @06:15PM
Like you pointed out, most people have other programs running at the same time, which is often overlooked in PC gaming benchmarks. VOIP is the big one, and I often have a browser open too. So while most games only make good use of 3-4 threads, that doesn't make an 8+ thread CPU useless for gaming. Zero random lurches or stutters from other processes is nice. Six cores is pretty much the next step for gamers. Eight cores is trying to future-proof, but probably not needed (yet).
SN won't survive on lurkers alone. Write comments.
(Score: 0) by Anonymous Coward on Tuesday May 30 2017, @07:02PM (1 child)
Why MMO? Most of the critical game logic, aside from display and control, is run on the server.
From my (admittedly genre-limited) experience, strategy games can make the most use of additional CPU resources. A modern strategy game AI can use as many cores as you can throw at it; here, the CPU (or sometimes the memory) is the bottleneck. Not to mention thousands or hundreds of thousands of individual units, each one requiring some low-level control routines.
In some strategy games I've played (Star Ruler 2...), late game on larger maps can become almost completely unplayable because the CPU just can't keep up. On the other hand, I've never had graphical stutter, even on combats with thousands of units shooting lasers and exploding all over the place (like on this screenshot [gog.com]).
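That per-unit work is close to embarrassingly parallel, which is why it scales with core count. A rough sketch of farming unit updates out across cores; the "unit" and its update loop here are made-up stand-ins, not anything from a real game:

```python
# Rough sketch: per-unit AI updates farmed out across all available cores.
# The unit count and the work done per unit are invented for illustration.
from concurrent.futures import ProcessPoolExecutor
import os

def update_unit(unit_id):
    # Stand-in for pathfinding, target selection, low-level control, etc.
    state = unit_id
    for _ in range(1_000):
        state = (state * 1103515245 + 12345) % 2**31
    return unit_id, state

if __name__ == "__main__":
    units = range(10_000)
    with ProcessPoolExecutor(max_workers=os.cpu_count()) as pool:
        results = list(pool.map(update_unit, units, chunksize=256))
    print(f"updated {len(results)} units across {os.cpu_count()} cores")
```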
TL;DR: AlphaGo is a computer-controlled AI for a strategy game. Think about how many cores it can use.
(Score: 2) by takyon on Tuesday May 30 2017, @07:19PM
Agreed on the MMO. Single player first-person games can have "complex" AI to the point where you can't have dozens or hundreds of NPCs on a single map without causing slowdowns (compare that amount to your strategy games... like Cossacks: Back to War), and tend to break maps up with loading zones (for example, cities in Oblivion or Skyrim). Having more cores and RAM allows more AI and stuff to be loaded, which is beneficial for single-map stealth games with no loading zones that have all of the AI loaded and patrolling at the beginning.
[SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
(Score: 2) by kaszz on Wednesday May 31 2017, @03:35AM (2 children)
I think you are doing the video processing wrong. When they did some of the later Star Wars movies, it was partly edited on a simple laptop using a low-quality version. But when the edit was completed, the edit file was sent to a server cluster that rendered the real thing. No creative wait there that mattered.
(Score: 2) by zocalo on Wednesday May 31 2017, @11:07AM (1 child)
I suppose if you were a really keen self-employed videographer then it might be viable to offload the final rendering to the cloud, but there are a number of problems with that. Firstly, I'm not actually sure how much of a time saving that's going to give you as you're introducing another potential bottleneck - your Internet link; my fibre internet connection is no slouch so it might work out, but my raw video footage is still typically tens of GB - and sometimes exceeds 100GB if I'm drawing on a lot of B-roll clips - so if you don't have a quick connection that may be an issue.

The likely show-stopper though is software support; other than CGI and 3DFX *very* few video production tools actually support render farms in the first place, let alone cloud-based ones (including Adobe Creative Cloud's Premiere Pro, somewhat ironically), so you're either going to be farming out the work manually or using higher-end tools like the AVID setup used by Disney for their Star Wars movies. Even if your software supports it, you're unlikely to find a commercial service, so you'll be rolling your own VMs in the cloud, and likely paying out a small fortune for licenses as well.
UNIX? They're not even circumcised! Savages!
(Score: 2) by kaszz on Wednesday May 31 2017, @05:17PM
The files are not sent via the internet. They are delivered physically to the rendering farm; only the edit data is perhaps sent via the internet. As for software, I'd expect professional software to at least be able to deliver the edit file. But there are open-source solutions both for the edit console and the server farm, I think. The last part should at least not be too hard to get rolling.
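To make it concrete: the thing that actually crosses the wire is tiny compared to the footage. A minimal sketch of that hand-off, where the edit-list structure and job format are invented purely for illustration (real pipelines use interchange formats like EDL/AAF):

```python
# Minimal sketch: turn an edit decision list (clip, in, out in seconds) into
# per-cut render jobs for a farm that already holds the full-res media.
# The EDL structure and job format here are invented for illustration only.
import json

edl = [
    {"clip": "A001_C003.mov", "in": 12.0, "out": 18.5},
    {"clip": "A002_C011.mov", "in": 3.2, "out": 9.0},
    {"clip": "B007_C001.mov", "in": 0.0, "out": 4.75},
]

def to_render_jobs(edl):
    """One render job per cut; only this small description leaves the laptop."""
    return [
        {"job_id": i, "source": e["clip"], "start": e["in"], "end": e["out"]}
        for i, e in enumerate(edl)
    ]

payload = json.dumps(to_render_jobs(edl), indent=2)
print(f"edit data to send: {len(payload)} bytes (vs. tens of GB of raw footage)")
print(payload)
```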
(Score: 0) by Anonymous Coward on Tuesday May 30 2017, @03:59PM
Try going to mainstream sites with a "modern" browser and no protection against ads, JavaScript, cross-site requests, etc. Even sites that only need to show text (eg reddit) can still slow a computer to a crawl, especially if you leave enough tabs open.
I made that example up, but then searched for it and found that, sure enough, they have found a way:
https://www.reddit.com/r/chrome/comments/456175/helpchrome_high_cpu_usage/ [reddit.com]
https://www.reddit.com/r/chrome/comments/3otkff/bug_reddits_sidebar_causes_high_cpu_usage_and/ [reddit.com]
(Score: 3, Insightful) by LoRdTAW on Tuesday May 30 2017, @04:21PM
My older Core i7 from 2011 is still going strong. I see no reason to upgrade at all. The GPU is also fine, an older AMD something. I don't play many demanding 3D games anymore, so I keep forgetting my own PC's specs.
As you said, all those cores and nothing to do.
(Score: 3, Informative) by bob_super on Tuesday May 30 2017, @04:56PM
>Honestly, there's not much point unless the chip is going into a server, or you have some really special applications.
> If you have a four-core processor, it already spends most of its time bored.
That is 100% true.
However, if you pay engineers to twiddle their thumbs during hour-long compiles (just joking, they're obviously writing documentation...), every minute saved is money.
If that CPU spends 90% of its time idling, but saves a mere 10 engineering minutes a day, it will not only pay for itself mathematically, but make your geek happy and therefore more productive even when not directly compiling.
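Back-of-the-envelope version, where the loaded engineering cost, working days, and minutes saved are assumptions picked purely for illustration:

```python
# Back-of-the-envelope payback for a faster compile box. The salary/overhead,
# working days, and minutes-saved figures are assumptions for illustration.
chip_cost = 1999            # i9-7980XE list price, from the table above
loaded_cost_per_hour = 100  # assumed fully-loaded engineering cost, $/hour
minutes_saved_per_day = 10
working_days_per_year = 230

saved_per_year = loaded_cost_per_hour * (minutes_saved_per_day / 60) * working_days_per_year
print(f"saved per year: ${saved_per_year:,.0f}")
print(f"payback: {chip_cost / saved_per_year * 12:.1f} months")
```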
(Score: 2) by fyngyrz on Tuesday May 30 2017, @08:52PM
I have (and write) exactly those applications: software-defined radio and intensive realtime image processing. The former runs very nicely with many, many threads doing completely different things, none of which except the main thread are loaded heavily (and that only because OS graphics support tends to have to be done from the main loop, which I really wish they (everyone) would get past). The latter goes faster the more slices you can chop an image into, right up until you run out of memory bandwidth relative to the instruction cycles during which the bus is available to other cores. Cache is basically useless here because it's never, ever large enough. And you can tune how far to slice things up based on dynamic testing of the machine you're running on. More cores than I need means I can dial in exactly the amount I do need - and my current 12/24 isn't more than I need.
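For the curious, the slicing approach looks roughly like this. My real code is C with POSIX threads (see below); this Python/numpy version with a placeholder box blur and an arbitrary slice count is only meant to show the shape of it:

```python
# Rough sketch of splitting an image into horizontal slices and processing
# them in parallel. The per-slice work (a crude box blur) is a placeholder
# for real DSP, and the slice count would be tuned per machine.
from concurrent.futures import ProcessPoolExecutor
import numpy as np

def process_slice(slice_data):
    # Placeholder work: a 3-tap box blur along rows.
    return (slice_data
            + np.roll(slice_data, 1, axis=1)
            + np.roll(slice_data, -1, axis=1)) / 3

def process_image(image, n_slices):
    slices = np.array_split(image, n_slices, axis=0)
    with ProcessPoolExecutor(max_workers=n_slices) as pool:
        return np.vstack(list(pool.map(process_slice, slices)))

if __name__ == "__main__":
    image = np.random.rand(2160, 3840).astype(np.float32)
    out = process_image(image, n_slices=8)
    print(out.shape)
```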
So... if Apple comes out with a dual-CPU i9-7980XE, providing 36 cores and 72 threads, in a Mac Pro that actually has significant memory expandability, slots for multiple graphics cards, and proper connectivity, I will be on that like white on rice. They just might do that, too. That's just a 2013 Mac Pro with new CPUs and a bigger memory bus. And they've admitted the trash can isn't working out.
If they don't, still, I'm sure someone will, and I'll attempt to Hackintosh my way along. If that can't be done, then I may abandon OSX/MacOS altogether for an OS that can give me what I want. My code's in C, and the important stuff isn't tied to any OS. And I use POSIX threading, so that's generally agnostic enough.
(Score: 2) by driverless on Wednesday May 31 2017, @02:56AM
You could use them to calculate whether you need to buy a 7-blade or 9-blade razor, or a 5,000W or 7,000W stick vacuum.