posted by martyb on Tuesday May 30 2017, @02:35PM
from the moah-fasteh dept.

Recently, Intel was rumored to be releasing 10 and 12 core "Core i9" CPUs to compete with AMD's 10-16 core "Threadripper" CPUs. Now, Intel has confirmed these as well as 14, 16, and 18 core Skylake-X CPUs. Every CPU with 6 or more cores appears to support quad-channel DDR4:

Intel Core   Cores/Threads   Price    $/core
i9-7980XE    18/36           $1,999   $111
i9-7960X     16/32           $1,699   $106
i9-7940X     14/28           $1,399   $100
i9-7920X     12/24           $1,199   $100
i9-7900X     10/20           $999     $100
i7-7820X     8/16            $599     $75
i7-7800X     6/12            $389     $65
i7-7740X     4/8             $339     $85
i7-7640X     4/4             $242     $61 (fewer threads)
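For reference, the $/core column is just the list price divided by the core count, rounded to the nearest dollar. A throwaway C sketch that recomputes it from the figures above (nothing here beyond the numbers already in the table):

    #include <stdio.h>

    /* Recompute the $/core column from the core counts and list prices above. */
    struct cpu { const char *name; int cores; int price; };

    int main(void) {
        const struct cpu parts[] = {
            { "i9-7980XE", 18, 1999 }, { "i9-7960X", 16, 1699 },
            { "i9-7940X",  14, 1399 }, { "i9-7920X", 12, 1199 },
            { "i9-7900X",  10,  999 }, { "i7-7820X",  8,  599 },
            { "i7-7800X",   6,  389 }, { "i7-7740X",  4,  339 },
            { "i7-7640X",   4,  242 },
        };
        for (size_t i = 0; i < sizeof parts / sizeof parts[0]; i++)
            printf("%-10s %2d cores  $%4d  ~$%.1f/core\n",
                   parts[i].name, parts[i].cores, parts[i].price,
                   (double)parts[i].price / parts[i].cores);
        return 0;
    }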

Last year at Computex, the flagship Broadwell-E enthusiast chip was launched: the 10-core i7-6950X at $1,723. Today at Computex, the 10-core i9-7900X costs $999, and the 16-core i9-7960X costs $1,699. Clearly, AMD's Ryzen CPUs have forced Intel to become competitive.

Although the pricing of AMD's 10-16 core Threadripper CPUs is not known yet, the 8-core Ryzen R7 launched at $500 (available now for about $460). The Intel i7-7820X has 8 cores for $599, and will likely have better single-threaded performance than the AMD equivalent. So while Intel's CPUs are still more expensive than AMD's, they may have similar price/performance.

For what it's worth, Intel also announced quad-core Kaby Lake-X processors.

Welcome to the post-quad-core era. Will you be getting any of these chips?


Original Submission

 
  • (Score: 4, Interesting) by bradley13 on Tuesday May 30 2017, @02:42PM (16 children)

    by bradley13 (3053) on Tuesday May 30 2017, @02:42PM (#517655) Homepage Journal

    Will you be getting any of these chips?

    Honestly, there's not much point unless the chip is going into a server, or you have some really special applications. If you have a four-core processor, it already spends most of its time bored.

    That said, it's pretty cool that Moore's Law lives on. If you count the total compute capacity of one of these chips, it's pretty astounding. That article a while back, grousing about how we only have "incremental" improvements in technology? Sometimes, quantity has a quality all its own. Just consider all of the changes, both in technology and in society, that are directly attributable to increased computing power.

    Yeah, also Facebook, but I guess it can't all be good stuff...

    --
    Everyone is somebody else's weirdo.
    • (Score: 3, Informative) by zocalo on Tuesday May 30 2017, @03:27PM (10 children)

      by zocalo (302) on Tuesday May 30 2017, @03:27PM (#517686)
      I wouldn't class video processing as a "really special application", yet it's a fairly popular use case that can - with the right software - absolutely benefit mainstream PC users who have a larger number of CPU cores and a good GPU (or two), especially given the rise in popularity of 4K footage from action cams and drones. Along with GPU, storage and memory upgrades, I've gone from dual-core to quad-core to octa-core, and I've still not managed to hit a point where I can't max out every single core of the CPU during a high-res rendering process - even with GPU-based rendering assistance. Nothing stymies the creative process quicker than having to sit and wait, so while I'm not going to be upgrading immediately, I'm definitely looking forward to seeing what kind of higher preview resolutions and how close to real-time final output rendering I can achieve with one of these chips when I do.
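      A minimal sketch, in C with POSIX threads, of why that kind of render job scales with core count: a fixed pool of workers pulls frame numbers off a shared counter until the clip is done. render_frame() here is a stand-in, not any real renderer's API.

      #include <pthread.h>
      #include <stdio.h>

      #define TOTAL_FRAMES 600   /* e.g. 10 seconds of 60 fps footage */
      #define NUM_WORKERS  8     /* typically one per physical core   */

      static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
      static int next_frame = 0;

      /* Stand-in for the real per-frame work (decode, filter, encode). */
      static void render_frame(int frame) { (void)frame; }

      static void *worker(void *arg) {
          (void)arg;
          for (;;) {
              pthread_mutex_lock(&lock);
              int frame = next_frame < TOTAL_FRAMES ? next_frame++ : -1;
              pthread_mutex_unlock(&lock);
              if (frame < 0) break;          /* no frames left to claim */
              render_frame(frame);
          }
          return NULL;
      }

      int main(void) {
          pthread_t threads[NUM_WORKERS];
          for (int i = 0; i < NUM_WORKERS; i++)
              pthread_create(&threads[i], NULL, worker, NULL);
          for (int i = 0; i < NUM_WORKERS; i++)
              pthread_join(threads[i], NULL);
          printf("rendered %d frames on %d workers\n", TOTAL_FRAMES, NUM_WORKERS);
          return 0;
      }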
      --
      UNIX? They're not even circumcised! Savages!
      • (Score: 2) by ikanreed on Tuesday May 30 2017, @04:18PM (6 children)

        by ikanreed (3164) Subscriber Badge on Tuesday May 30 2017, @04:18PM (#517722) Journal

        And of course, there's video games. You know, one of the primary reasons people get fancier, more powerful, and substantially more expensive computers. If you can't think of how the graphics pipeline, engine pipeline, resource-loading pipeline, and AI state updates might all be on different threads/cores, you're not very creative.
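        A toy sketch of that split, one POSIX thread per subsystem. The subsystem functions are placeholders, not any real engine's API; a real engine would run them as long-lived loops exchanging data through queues rather than finishing after one pass.

        #include <pthread.h>
        #include <stdio.h>

        /* Placeholder subsystem entry points. */
        static void *render_loop(void *a)  { puts("render: drawing frames");       return a; }
        static void *physics_loop(void *a) { puts("physics: stepping simulation"); return a; }
        static void *loader_loop(void *a)  { puts("loader: streaming assets");     return a; }
        static void *ai_loop(void *a)      { puts("ai: updating NPC state");       return a; }

        int main(void) {
            void *(*subsystems[])(void *) = { render_loop, physics_loop, loader_loop, ai_loop };
            pthread_t tid[4];
            for (int i = 0; i < 4; i++)
                pthread_create(&tid[i], NULL, subsystems[i], NULL);
            for (int i = 0; i < 4; i++)
                pthread_join(tid[i], NULL);
            return 0;
        }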

        • (Score: 2) by EvilSS on Tuesday May 30 2017, @05:10PM

          by EvilSS (1456) Subscriber Badge on Tuesday May 30 2017, @05:10PM (#517764)
          These won't do much for games over their lower-core siblings though. Most games won't use much over 4 cores. However they would be useful to people who play PC games and stream to youtube or twitch. Live encoding on top of gaming can bottleneck most desktop CPUs. Although the AMD CPUs might be at a better price point for that use.
        • (Score: 2) by zocalo on Tuesday May 30 2017, @05:23PM (4 children)

          by zocalo (302) on Tuesday May 30 2017, @05:23PM (#517774)
          Are there any video games that can actually make use of this many CPU cores though (as opposed to GPU pipelines, in which case the sky is probably still the limit)? Sure, it's becoming the norm to split a complex game engine up into multiple parallel executing threads to take more advantage of modern multi-core CPUs, but there are only so many things that you can do that with before managing all the interdependent parallelism becomes more problematic than it's worth. Unlike with video processing, it's not like you're chopping up a set workload into chunks, setting the CPU to work, and don't really care when each task finishes before reassembling the processed data; almost everything is going to need to maintain at least some level of synch.

          Even assuming an MMO, where you're going to have a few more options that could be readily farmed off to a dedicated thread/core on top of the four examples of execution threads you gave (team comms, for instance), and maybe making some of the main threads (graphics and engine, perhaps?) multi-threaded in their own right, you might get up towards 8-10 threads, but 18? Even allowing for a few threads left over for background OS tasks, I'm not sure we're going to be seeing many games that can take full advantage of the 18 core monster - let alone its 36 threads - for quite some time, if at all.
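          For a sense of where that synchronisation cost comes in, here is a bare-bones POSIX-barrier sketch; it is not how any particular engine does it, it just shows that no subsystem thread can start frame N+1 until every thread has finished frame N.

          #include <pthread.h>
          #include <stdio.h>

          #define SUBSYSTEMS 4
          #define FRAMES     3

          static pthread_barrier_t frame_barrier;

          static void *subsystem(void *arg) {
              long id = (long)arg;
              for (int frame = 0; frame < FRAMES; frame++) {
                  /* ...do this subsystem's share of the frame's work... */
                  printf("subsystem %ld finished frame %d\n", id, frame);
                  /* Everyone waits here before the next frame begins. */
                  pthread_barrier_wait(&frame_barrier);
              }
              return NULL;
          }

          int main(void) {
              pthread_t tid[SUBSYSTEMS];
              pthread_barrier_init(&frame_barrier, NULL, SUBSYSTEMS);
              for (long i = 0; i < SUBSYSTEMS; i++)
                  pthread_create(&tid[i], NULL, subsystem, (void *)i);
              for (int i = 0; i < SUBSYSTEMS; i++)
                  pthread_join(tid[i], NULL);
              pthread_barrier_destroy(&frame_barrier);
              return 0;
          }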
          --
          UNIX? They're not even circumcised! Savages!
          • (Score: 2) by ikanreed on Tuesday May 30 2017, @05:24PM

            by ikanreed (3164) Subscriber Badge on Tuesday May 30 2017, @05:24PM (#517775) Journal

            Basically only AAA games that explicitly have PC as a primary target; consoles tend to cap out at 4 cores.

            Which is to say, not a lot.

          • (Score: 2) by tibman on Tuesday May 30 2017, @06:15PM

            by tibman (134) Subscriber Badge on Tuesday May 30 2017, @06:15PM (#517804)

            Like you pointed out, most people have other programs running at the same time. It's often overlooked in PC gaming benchmarks. VOIP being the big one. I often have a browser open too. So while most games only make good use of 3-4 threads that doesn't make an 8+ thread CPU useless for gaming. Zero random lurches or stutters from other processes is nice. Six core is pretty much the next step for gamers. Eight core is trying to future proof but probably not needed (yet).

            --
            SN won't survive on lurkers alone. Write comments.
          • (Score: 0) by Anonymous Coward on Tuesday May 30 2017, @07:02PM (1 child)

            by Anonymous Coward on Tuesday May 30 2017, @07:02PM (#517822)

            Why MMO? Most of the critical game logic, aside from display and control, is run on the server.

            From my (admittedly genre-limited) experience, strategy games can make the most use of additional CPU resources. A modern strategy game AI can use as many cores as you can throw at it; here, the CPU (or sometimes the memory) is the bottleneck. Not to mention thousands or hundreds of thousands of individual units, each one requiring some low-level control routines.

            In some strategy games I've played (Star Ruler 2...), late game on larger maps can become almost completely unplayable because the CPU just can't keep up. On the other hand, I've never had graphical stutter, even on combats with thousands of units shooting lasers and exploding all over the place (like on this screenshot [gog.com]).

            TL;DR: AlphaGo is a computer-controlled AI for a strategy game. Think about how many cores it can use.
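            A rough sketch of why that kind of unit-heavy workload scales with cores: each worker thread updates its own contiguous slice of the unit array, and within a single game tick the slices are independent. unit_t and update_unit() are made-up placeholders, not from any real game.

            #include <pthread.h>
            #include <stdio.h>

            #define UNIT_COUNT 100000
            #define WORKERS    8

            typedef struct { float x, y, hp; } unit_t;   /* minimal made-up unit */

            static unit_t units[UNIT_COUNT];

            /* Stand-in for per-unit pathfinding/targeting/state-machine work. */
            static void update_unit(unit_t *u) { u->x += 0.1f; u->y += 0.1f; }

            typedef struct { int first, last; } slice_t;

            static void *update_slice(void *arg) {
                slice_t *s = arg;
                for (int i = s->first; i < s->last; i++)
                    update_unit(&units[i]);
                return NULL;
            }

            int main(void) {
                pthread_t tid[WORKERS];
                slice_t slices[WORKERS];
                int per = UNIT_COUNT / WORKERS;
                for (int i = 0; i < WORKERS; i++) {
                    slices[i].first = i * per;
                    slices[i].last  = (i == WORKERS - 1) ? UNIT_COUNT : (i + 1) * per;
                    pthread_create(&tid[i], NULL, update_slice, &slices[i]);
                }
                for (int i = 0; i < WORKERS; i++)
                    pthread_join(tid[i], NULL);
                printf("one game tick: %d units updated on %d threads\n", UNIT_COUNT, WORKERS);
                return 0;
            }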

            • (Score: 2) by takyon on Tuesday May 30 2017, @07:19PM

              by takyon (881) <reversethis-{gro ... s} {ta} {noykat}> on Tuesday May 30 2017, @07:19PM (#517837) Journal

              Agreed on the MMO. Single player first-person games can have "complex" AI to the point where you can't have dozens or hundreds of NPCs on a single map without causing slowdowns (compare that amount to your strategy games... like Cossacks: Back to War), and tend to break maps up with loading zones (for example, cities in Oblivion or Skyrim). Having more cores and RAM allows more AI and stuff to be loaded, which is beneficial for single-map stealth games with no loading zones that have all of the AI loaded and patrolling at the beginning.

              --
              [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
      • (Score: 2) by kaszz on Wednesday May 31 2017, @03:35AM (2 children)

        by kaszz (4211) on Wednesday May 31 2017, @03:35AM (#518076) Journal

        I think you are doing the video processing wrong. When they did some of the later Star Wars movies, the footage was partly edited on a simple laptop using a low-quality version. When the edit was completed, the edit file was sent to a server cluster that rendered the real thing. No creative wait there that mattered.

        • (Score: 2) by zocalo on Wednesday May 31 2017, @11:07AM (1 child)

          by zocalo (302) on Wednesday May 31 2017, @11:07AM (#518206)
          In a professional studio production environment, absolutely (I've done this myself), but the point of the discussion - and my personal example - was enthusiast/semi-pro users with a single high-end PC doing CPU-intensive rendering of their 4K action cam/drone footage, and for that you are drawing on the raw full-res video. I'm absolutely editing at low-res (or not so low, these days), but such users are not likely to have access to a render farm - and even if they did, it would just be an array of servers equipped with CPUs like those above, so it would still be a valid "mainstream" use case. There are still a few editing situations where you'll want to see the full-res output though - selective corrections on part of the frame, for example - so you can make sure you've not got any visual glitches at the boundaries of the edit, and that's where having the extra CPU capacity during editing can really be useful.

          I suppose if you were a really keen self-employed videographer then it might be viable to offload the final rendering to the cloud, but there are a number of problems with that. Firstly, I'm not actually sure how much of a time saving that's going to give you as you're introducing another potential bottleneck - your Internet link; my fibre internet connection is no slouch so it might work out, but my raw video footage is still typically tens of GB - and sometimes exceeds 100GB if I'm drawing on a lot of B-roll clips - so if you don't have a quick connection that may be an issue. The likely show-stopper though is software support; other than CGI and 3DFX, *very* few video production tools actually support render farms in the first place, let alone cloud-based ones (including Adobe Creative Cloud's Premiere Pro, somewhat ironically), so you're either going to be farming out the work manually or using higher-end tools like the AVID setup used by Disney for their Star Wars movies. Even if your software supports it, you're unlikely to find a commercial service, so you'll be rolling your own VMs in the cloud, and likely paying out a small fortune for licenses as well.
          --
          UNIX? They're not even circumcised! Savages!
          • (Score: 2) by kaszz on Wednesday May 31 2017, @05:17PM

            by kaszz (4211) on Wednesday May 31 2017, @05:17PM (#518385) Journal

            The files are not sent via the internet. They are delivered physically to the rendering farm; only the edit data is perhaps sent via the internet. As for software, I'd expect professional software to at least be able to deliver the edit file. But there are open-source solutions both for the edit console and the server farm, I think. The last part should at least not be too hard to get rolling.

    • (Score: 0) by Anonymous Coward on Tuesday May 30 2017, @03:59PM

      by Anonymous Coward on Tuesday May 30 2017, @03:59PM (#517706)

      Try going to mainstream sites with a "modern" browser and no ad-block, JavaScript, cross-site request, etc. protection. Even sites that only need to show text (e.g. reddit) can still slow a computer to a crawl, especially if you leave enough tabs open.

      I made that example up, but just searched it and found that sure enough they have found a way:
      https://www.reddit.com/r/chrome/comments/456175/helpchrome_high_cpu_usage/ [reddit.com]
      https://www.reddit.com/r/chrome/comments/3otkff/bug_reddits_sidebar_causes_high_cpu_usage_and/ [reddit.com]

    • (Score: 3, Insightful) by LoRdTAW on Tuesday May 30 2017, @04:21PM

      by LoRdTAW (3755) on Tuesday May 30 2017, @04:21PM (#517724) Journal

      My older Core i7 from 2011 is still going strong. I see no reason to upgrade at all. The GPU is also fine, an older AMD something. I don't play many demanding 3D games anymore, so I keep forgetting my own PC's specs.

      As you said, all those cores and nothing to do.

    • (Score: 3, Informative) by bob_super on Tuesday May 30 2017, @04:56PM

      by bob_super (1357) on Tuesday May 30 2017, @04:56PM (#517750)

      >Honestly, there's not much point unless the chip is going into a server, or you have some really special applications.
      > If you have a four-core processor, it already spends most of its time bored.

      That is 100% true.
      However, if you pay engineers to twiddle their thumbs during hour-long compiles (just joking, they're obviously writing documentation...), every minute saved is money.
      If that CPU spends 90% of its time idling, but saves a mere 10 engineering minutes a day, it will not only pay for itself mathematically, but make your geek happy and therefore more productive even when not directly compiling.
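      Back-of-the-envelope version of that argument; the figures below are placeholders rather than anything from the comment, so plug in your own:

      #include <stdio.h>

      int main(void) {
          /* Placeholder numbers -- substitute your own. */
          double cpu_cost      = 1999.0;  /* the 18-core flagship      */
          double engineer_rate = 75.0;    /* loaded cost per hour, USD */
          double minutes_saved = 10.0;    /* per engineer, per day     */

          double saved_per_day = engineer_rate * (minutes_saved / 60.0);
          printf("saves ~$%.2f/day, pays for itself in ~%.0f working days\n",
                 saved_per_day, cpu_cost / saved_per_day);
          return 0;
      }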

    • (Score: 2) by fyngyrz on Tuesday May 30 2017, @08:52PM

      by fyngyrz (6567) on Tuesday May 30 2017, @08:52PM (#517879) Journal

      Honestly, there's not much point [...] or you have some really special applications.

      I have (and write) exactly those applications. Software defined radio; intensive realtime image processing. The former runs very nicely with many, many threads doing completely different things, none of which except the main thread are loaded heavily (and that only because OS graphics support tends to have to be done from the main loop, which I really wish they (everyone) would get past); the latter goes faster the more slices you can chop an image into right up until you run out of memory bandwidth vs. internal instruction cycle time where the bus is available to other cores. Cache is basically useless here because it's never, ever large enough. And you can tune how far to slice things up based on dynamic testing of the machine you're running on. More than I need means I can get the right amount that I need. And my current 12/24 isn't more than I need.
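      A sketch of the slicing pattern described above: split one frame into horizontal bands, one POSIX thread each, with the band count taken from the online core count at startup. The per-pixel operation is a dummy, and as the parent says, real code would tune the split by benchmarking the machine rather than trusting the core count.

      #include <pthread.h>
      #include <stdio.h>
      #include <stdlib.h>
      #include <unistd.h>

      #define WIDTH  3840
      #define HEIGHT 2160

      typedef struct { unsigned char *pixels; int row_start, row_end; } band_t;

      /* Dummy per-pixel operation standing in for the real filter kernel. */
      static void *process_band(void *arg) {
          band_t *b = arg;
          for (int y = b->row_start; y < b->row_end; y++)
              for (int x = 0; x < WIDTH; x++)
                  b->pixels[(size_t)y * WIDTH + x] ^= 0xFF;   /* invert */
          return NULL;
      }

      int main(void) {
          /* Start from the online core count; a real tool would back off once
           * memory bandwidth, not core count, becomes the limit. */
          long bands = sysconf(_SC_NPROCESSORS_ONLN);
          if (bands < 1) bands = 1;

          unsigned char *image = calloc((size_t)WIDTH * HEIGHT, 1);
          pthread_t *tid   = malloc(sizeof(pthread_t) * bands);
          band_t    *slice = malloc(sizeof(band_t) * bands);
          int rows_per = HEIGHT / (int)bands;

          for (long i = 0; i < bands; i++) {
              slice[i].pixels    = image;
              slice[i].row_start = (int)i * rows_per;
              slice[i].row_end   = (i == bands - 1) ? HEIGHT : (int)(i + 1) * rows_per;
              pthread_create(&tid[i], NULL, process_band, &slice[i]);
          }
          for (long i = 0; i < bands; i++)
              pthread_join(tid[i], NULL);

          printf("processed %dx%d frame in %ld bands\n", WIDTH, HEIGHT, bands);
          free(image); free(tid); free(slice);
          return 0;
      }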

      So... if Apple comes out with a dual-CPU i9-7980XE, providing 36 cores and 72 threads, in a Mac Pro that actually has significant memory expandability, slots for multiple graphics cards, and proper connectivity, I will be on that like white on rice. They just might do that, too. That's just a 2013 Mac Pro with new CPUs and a bigger memory bus. And they've admitted the trash can isn't working out.

      If they don't, still, I'm sure someone will, and I'll attempt to Hackintosh my way along. If that can't be done, then I may abandon OSX/MacOS altogether for an OS that can give me what I want. My code's in C, and the important stuff isn't tied to any OS. And I use POSIX threading, so that's generally agnostic enough.

    • (Score: 2) by driverless on Wednesday May 31 2017, @02:56AM

      by driverless (4770) on Wednesday May 31 2017, @02:56AM (#518055)

      Honestly, there's not much point unless the chip is going into a server, or you have some really special applications.

      You could use them to calculate whether you need to buy a 7-blade or 9-blade razor, or a 5,000W or 7,000W stick vacuum.

  • (Score: 4, Funny) by Anonymous Coward on Tuesday May 30 2017, @02:44PM

    by Anonymous Coward on Tuesday May 30 2017, @02:44PM (#517657)

    This announcement comes too late for Memorial Day, but these new chips should hopefully be available for your 4th of July grilling needs.

  • (Score: 3, Interesting) by VLM on Tuesday May 30 2017, @03:06PM

    by VLM (445) Subscriber Badge on Tuesday May 30 2017, @03:06PM (#517676)

    May as well thank vmware Inc, because as long as they continue to license on per CPU chip basis we're gonna get ever more cores per chip.

    It's not really all that unfair; fundamentally computer power is heat, and two i7-7740X will cost twice the licensing cost of one i7-7820X but will generate about twice the heat or twice the long-term processing thru-put or however you want to phrase it. Obviously for identical mfgr processes, each flipped bit makes a little heat, so 200 watts of flipped bits is twice the productivity of 100 watts of flipped bits. So I'm curious what the workload is for 18, 20, 32, 64 cores that are not very busy at all but somehow are all in use even at some low thermally limited level.

    I've noticed a bifurcation where years ago I'd get like a gig for a virtual image and feel happy about it and very financially productive with it, and nothing has really changed. However, in support software, the latest vCenter appliance with vSphere and all that junk somehow takes about 14 gigs of ram just to boot up, which seems a bit extreme. I was playing with virtualized networking and that's also extremely memory hungry: a couple of distributed switches and some other junk and suddenly I'm using like 10 gigs of ram just for virtualized network appliances, which seems pretty messed up. So anyway my point is something that makes, oh, say 128 gigs of ram cheap and convenient and universal is probably more exciting for me than 18 cores, 17 of which will be thermally limited such that I only get one core's worth of thruput anyway.

  • (Score: 0) by Anonymous Coward on Tuesday May 30 2017, @04:03PM (2 children)

    by Anonymous Coward on Tuesday May 30 2017, @04:03PM (#517708)

    core i9. now with 2 more digits!

    • (Score: 3, Funny) by Booga1 on Tuesday May 30 2017, @04:15PM (1 child)

      by Booga1 (6333) on Tuesday May 30 2017, @04:15PM (#517719)

      Just wait until it goes to 11!!!

      • (Score: 0) by Anonymous Coward on Wednesday May 31 2017, @01:56PM

        by Anonymous Coward on Wednesday May 31 2017, @01:56PM (#518279)

        And 11s!!! (WTF is the S for I have no idea)

  • (Score: 5, Informative) by BananaPhone on Tuesday May 30 2017, @04:40PM (5 children)

    by BananaPhone (2488) on Tuesday May 30 2017, @04:40PM (#517739)

    Intel thinks Ryzen is a threat and now all these goodies come out of Intel.

    Maybe we should buy Ryzen CPUs.

    Imagine the stuff that would come out from BOTH.

    • (Score: 2) by tibman on Tuesday May 30 2017, @04:54PM

      by tibman (134) Subscriber Badge on Tuesday May 30 2017, @04:54PM (#517747)

      Too late, already have a Ryzen 7 1800x at home : )

      If only I could talk work into getting more than a 4-core CPU : / Half the engineers are still on Intel two-core laptops!

      --
      SN won't survive on lurkers alone. Write comments.
    • (Score: 4, Informative) by Marand on Tuesday May 30 2017, @11:30PM (2 children)

      by Marand (1081) on Tuesday May 30 2017, @11:30PM (#517975) Journal

      Done. I built a Ryzen 7 1700 system late March and, aside from having to watch for BIOS updates for better memory compatibility, it's been great. So far I have no regrets, even with the upcoming threadripper chips diminishing some of the awesomeness of the R7 parts.

      Sure, games don't use all the cores, but I can run a game, minimise it, and go do other things without ever noticing that some game is still munching CPU on 2-4 threads. When I want to, I go back to where I left off; no need to reload. Same with productivity stuff: who cares if it's running and using some CPU cycles? It's not going to affect any game I decide to launch later.

      I don't even see the CPU temperature increase considerably when I do this; the highest I've seen it go is 50C with the Wraith Spire cooler that came with it, and it's usually in the low 40s under load. (Low load tends to be around 32-34C.)

      • (Score: 2) by takyon on Wednesday May 31 2017, @12:23AM (1 child)

        by takyon (881) <reversethis-{gro ... s} {ta} {noykat}> on Wednesday May 31 2017, @12:23AM (#517997) Journal

        even with the upcoming threadripper chips diminishing some of the awesomeness of the R7 parts

        That depends on the $/core and the customer's need for multithreaded performance (like fyngyrz above).

        i9-7980XE is $111/core.
        i7-7820X is $75/core.
        i7-7800X is $65/core.

        Now we have Ryzen R7 1800X with 8 cores at $500 ($460 retail currently). There will supposedly be two 16-core Threadripper models. I doubt the 16-cores will debut at $1,000, so the awesomeness of R7 is hardly diminished...

        --
        [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
        • (Score: 2) by Marand on Wednesday May 31 2017, @04:42AM

          by Marand (1081) on Wednesday May 31 2017, @04:42AM (#518103) Journal

          All I meant is that the initial "Woo! 8c/16t? This is sweet!" turned into "Whoa, threadripper looks insane! ...but my R7 is still pretty sweet." :P

          None of that makes me wish I'd waited, though. I specifically went for the 1700 with its 65w TDP (instead of the 1700X or 1800X), so I'm not particularly interested in all these HEDT chips with TDPs of 140w or higher, aside from the "wow, that's crazy" factor. :)

    • (Score: 1) by BenFenner on Wednesday May 31 2017, @01:48PM

      by BenFenner (4171) on Wednesday May 31 2017, @01:48PM (#518275)

      I've built with Intel CPUs since 1994. Maybe 300-400 machines now between personal, family, and clients.
      However, the latest machine I built had the new i7, where Intel has refused to provide Win7 drivers for the GPU, making Windows 7 almost impossible to configure properly.
      All new machines from here on out will be AMD CPUs as long as they maintain some semblance of backward compatibility with operating systems.

  • (Score: 0) by Anonymous Coward on Wednesday May 31 2017, @04:47AM

    by Anonymous Coward on Wednesday May 31 2017, @04:47AM (#518107)

    Will you be getting any of these chips?

    No.

    I don't need much more than a Core 2 Duo provides. I say much more because I find the A9 in my iPhone SE handles most of my banking and financial transactions quite well.

    The Core 2 Duo in my Thinkpad gets about 90% of my computing needs done, and I get a nice 4:3 IPS display to go with it without pulse width modulation.

    My desktop needs are satisfied by a first-generation CompuLab Mintbox Mini.

  • (Score: 3, Interesting) by opinionated_science on Wednesday May 31 2017, @11:36AM (1 child)

    by opinionated_science (4031) on Wednesday May 31 2017, @11:36AM (#518214)

    AMD, can we have the AVX-512 instruction set on Naples? Intel doesn't like us having it on the desktop...
