
posted by martyb on Tuesday May 30 2017, @02:35PM   Printer-friendly
from the moah-fasteh dept.

Recently, Intel was rumored to be releasing 10 and 12 core "Core i9" CPUs to compete with AMD's 10-16 core "Threadripper" CPUs. Now, Intel has confirmed these as well as 14, 16, and 18 core Skylake-X CPUs. Every CPU with 6 or more cores appears to support quad-channel DDR4:

Intel Core    Cores/Threads    Price     $/core
i9-7980XE     18/36            $1,999    $111
i9-7960X      16/32            $1,699    $106
i9-7940X      14/28            $1,399    $100
i9-7920X      12/24            $1,199    $100
i9-7900X      10/20            $999      $100
i7-7820X      8/16             $599      $75
i7-7800X      6/12             $389      $65
i7-7740X      4/8              $339      $85
i7-7640X      4/4              $242      $61 (fewer threads)
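
For anyone checking the table, the $/core column is simply the list price divided by the core count. A quick sketch in Python using only the prices and core counts quoted above (the table rounds to whole dollars):

```python
# Sanity-check the $/core column: list price divided by core count.
# Prices and core counts are the ones quoted in the story above.
lineup = {
    "i9-7980XE": (18, 1999),
    "i9-7960X":  (16, 1699),
    "i9-7940X":  (14, 1399),
    "i9-7920X":  (12, 1199),
    "i9-7900X":  (10, 999),
    "i7-7820X":  (8, 599),
    "i7-7800X":  (6, 389),
    "i7-7740X":  (4, 339),
    "i7-7640X":  (4, 242),
}

for model, (cores, price) in lineup.items():
    print(f"{model}: ${price / cores:.2f} per core")
```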

Last year at Computex, the flagship Broadwell-E enthusiast chip was launched: the 10-core i7-6950X at $1,723. Today at Computex, the 10-core i9-7900X costs $999, and the 16-core i9-7960X costs $1,699. Clearly, AMD's Ryzen CPUs have forced Intel to become competitive.

Although the pricing of AMD's 10-16 core Threadripper CPUs is not known yet, the 8-core Ryzen R7 launched at $500 (available now for about $460). The Intel i7-7820X has 8 cores for $599, and will likely have better single-threaded performance than the AMD equivalent. So while Intel's CPUs are still more expensive than AMD's, they may have similar price/performance.

For what it's worth, Intel also announced quad-core Kaby Lake-X processors.

Welcome to the post-quad-core era. Will you be getting any of these chips?


Original Submission

Related Stories

Intel and AMD News From Computex 2016 22 comments

A lot of CPU news is coming out of Computex 2016.

Intel has launched its new Broadwell-E "Extreme Edition" CPUs for "enthusiasts". The top-of-the-line model, the i7-6950X, now includes 10 cores instead of 8, but the price has increased massively to around $1,723. Compare this to a ~$999 launch price for the 8-core i7-5960X or 6-core i7-4960X flagships from previous generations.

Intel has also launched some new Skylake-based Xeons with "Iris Pro" graphics.

AMD revealed more details about the Radeon RX 480, a 14nm "Polaris" GPU that will be priced at $199 and released on June 29th. AMD intends to compete for the budget/mainstream gamer segment, coming in far below the $379 launch price of a GTX 1070 while delivering around 70-75% of its performance. It also claims that the RX 480 will perform well enough to allow more gamers to use premium virtual reality headsets like the Oculus Rift or HTC Vive.

While 14nm AMD "Zen" desktop chips should be coming later this year, laptop/2-in-1/tablet users will have to settle for the 7th generation Bristol Ridge and Stoney Ridge APUs. They are still 28nm "Excavator" based chips with "modules" instead of cores.


Original Submission

CPU Rumor Mill: Intel Core i9, AMD Ryzen 9, and AMD "Starship" 9 comments

AMD is rumored to be releasing a line of Ryzen 9 "Threadripper" enthusiast CPUs that include 10, 12, 14, or 16 cores. This is in contrast to the Ryzen lines of AMD CPUs that topped out at the 8-core Ryzen 7 1800X with a base clock of 3.6 GHz.

Meanwhile, Intel is supposedly planning to release 6, 8, 10, and 12 core Skylake-X processors under an "Intel Core i9" designation. Two Kaby Lake-X parts, a quad-core and another quad-core with hyper-threading disabled, are also mentioned.

Finally, AMD's 32-core "Naples" server chips could be succeeded in late 2018 or 2019 by a 48-core 7nm part nicknamed "Starship". GlobalFoundries plans to skip the 10nm node, and where GF goes, AMD follows. Of course, according to Intel, what really matters are transistors per square millimeter.

All of the processors mentioned could be officially announced at Computex 2017, running from May 30 to June 3. Expect the high end desktop (HEDT) CPUs to be in excess of $500 and as high as $1,500. Intel may also announce Coffee Lake CPUs later this year including a "mainstream" priced 6-core chip.


Original Submission

Intel's Skylake-SP vs AMD's Epyc 15 comments

AnandTech compared Intel's Skylake-SP chips to AMD's Epyc chips:

We can continue to talk about Intel's excellent mesh topology and AMD's strong new Zen architecture, but at the end of the day, the "how" will not matter to infrastructure professionals. Depending on your situation, performance, performance-per-watt, and/or performance-per-dollar are what matters.

The current Intel pricing draws the first line. If performance-per-dollar matters to you, AMD's EPYC pricing is very competitive for a wide range of software applications. With the exception of database software and vectorizable HPC code, AMD's EPYC 7601 ($4200) offers slightly less or slightly better performance than Intel's Xeon 8176 ($8000+). However the real competitor is probably the Xeon 8160, which has 4 (-14%) fewer cores and slightly lower turbo clocks (-100 or -200 MHz). We expect that this CPU will likely offer 15% lower performance, and yet it still costs about $500 more ($4700) than the best EPYC. Of course, everything will depend on the final server system price, but it looks like AMD's new EPYC will put some serious performance-per-dollar pressure on the Intel line.

The Intel chip is indeed able to scale up in 8 sockets systems, but frankly that market is shrinking fast, and dual socket buyers could not care less.

Meanwhile, although we have yet to test it, AMD's single socket offering looks even more attractive. We estimate that a single EPYC 7551P would indeed outperform many of the dual Silver Xeon solutions. Overall the single-socket EPYC gives you about 8 cores more at similar clockspeeds than the 2P Intel, and AMD doesn't require explicit cross socket communication - the server board gets simpler and thus cheaper. For price conscious server buyers, this is an excellent option.

However, if your software is expensive, everything changes. In that case, you care less about the heavy price tags of the Platinum Xeons. For those scenarios, Intel's Skylake-EP Xeons deliver the highest single threaded performance (courtesy of the 3.8 GHz turbo clock), high throughput without much (hardware) tuning, and server managers get the reassurance of Intel's reliable track record. And if you use expensive HPC software, you will probably get the benefits of Intel's beefy AVX 2.0 and/or AVX-512 implementations.

AMD's flagship Epyc CPU has 32 cores, while the largest Skylake-SP Xeon CPU has 28 cores.

Quoted text is from page 23, "Closing Thoughts".

[Ed. note: Article is multiple pages with no single page version in sight.]
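
To put rough numbers on the quoted performance-per-dollar argument, here is a small sketch using only the figures AnandTech cites: the list prices, their premise that the EPYC 7601 and Xeon 8176 perform roughly on par, and their estimate of ~15% lower performance for the Xeon 8160. The normalization to the EPYC 7601 is just for illustration.

```python
# Relative performance-per-dollar, normalized to the EPYC 7601.
# Prices and the ~15% estimate come from the quoted AnandTech conclusion.
chips = {
    "EPYC 7601": {"price": 4200, "relative_perf": 1.00},
    "Xeon 8176": {"price": 8000, "relative_perf": 1.00},  # "slightly less or slightly better"
    "Xeon 8160": {"price": 4700, "relative_perf": 0.85},  # "likely offer 15% lower performance"
}

baseline = chips["EPYC 7601"]["relative_perf"] / chips["EPYC 7601"]["price"]
for name, c in chips.items():
    ratio = (c["relative_perf"] / c["price"]) / baseline
    print(f"{name}: {ratio:.2f}x the perf/$ of the EPYC 7601")
```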

Previously: Google Gets its Hands on Skylake-Based Intel Xeons
Intel Announces 4 to 18-Core Skylake-X CPUs
AMD Epyc 7000-Series Launched With Up to 32 Cores
Intel's Skylake and Kaby Lake CPUs Have Nasty Microcode Bug
AVX-512: A "Hidden Gem"?


Original Submission

AMD 16/12-Core Threadripper Details Confirmed 12 comments

AMD's Threadripper 1950X (TR 1950X?) will have 16 cores for $1,000, and the Threadripper 1920X will have 12 cores for $800. They will be available in early August:

Last night out of the blue, we received an email from AMD, sharing some of the specifications for the forthcoming Ryzen Threadripper CPUs to be announced today. Up until this point, we knew a few things – Threadripper would consist of two Zeppelin dies featuring AMD's latest Zen core and microarchitecture, and would essentially double up on the HEDT Ryzen launch. Double dies means double pretty much everything: Threadripper would support up to 16 cores, up to 32 MB of L3 cache, quad-channel memory support, and would require a new socket/motherboard platform called X399, sporting a massive socket with 4094-pins (and also marking an LGA socket for AMD). By virtue of being sixteen cores, AMD is seemingly carving a new consumer category above HEDT/High-End Desktop, which we've coined the 'Super High-End Desktop', or SHED for short.

[...] From what we do know, 16 Zen cores at $999 is about the ballpark price we were expecting. With the clock speeds of 3.4 GHz base and 4 GHz Turbo, this is essentially two Ryzen 7 1800X dies at $499 each stuck together, creating the $999 price (obviously it's more complicated than this). Given the frequencies and the performance of these dies, the TDP is likely in the 180W range, seeing as how the Ryzen 7 1800X was a 95W CPU with slightly higher frequencies. The 1950X runs at 4.0 GHz turbo and also has access to AMD's XFR – which will boost the processor when temperature and power allows – in jumps of +25 MHz: AMD would not comment on the maximum frequency boost of XFR, though given our experiences of the Ryzen silicon and previous Ryzen processor specifications, this is likely to be +100 MHz. We were not told if the CPUs would come with a bundled CPU cooler, although if our 180W prediction is in the right area, then substantial cooling would be needed. We expect AMD to use the same Indium-Tin solder as the Ryzen CPUs, although we were unable to get confirmation of this at this time.

[...] Comparing the two, and what we know, AMD is going to battle on many fronts. Coming in at $999 is going to be aggressive, along with an all-core turbo at 3.4 GHz or above: Intel's chip at $1999 will likely turbo below this. Both chips will have quad-channel DRAM, supporting DDR4-2666 in 1 DIMM per channel mode (and DDR4-2400 in 2 DPC), but there are some tradeoffs. Intel Core parts do not support ECC, and AMD Threadripper parts are expected to (awaiting confirmation). Intel has the better microarchitecture in terms of pure IPC, though it will be interesting to see the real-world difference if AMD is clocked higher. AMD Threadripper processors will have access to 60 lanes of PCIe for accelerators, such as GPUs, RAID cards and other functions, with another 4 reserved by the chipset: Intel will likely be limited to 44 for accelerators but have a much better chipset in the X299 for IO support and capabilities. We suspect AMD to run a 180W TDP, and Intel at 165W, giving a slight advantage to Intel perhaps (depending on workload), and Intel will also offer AVX512 support for its CPU whereas AMD has smaller FMA and AVX engines by comparison. The die-to-die latency of AMD's MCM will also be an interesting element to the story, depending exactly where AMD is aiming this product.

There are also some details for Ryzen 3 quad-cores, but no confirmed pricing yet.

Meanwhile, Intel's marketing department has badmouthed AMD, calling 32-core Naples server chips "4 glued-together desktop die". That could have something to do with AMD's chips matching Intel's performance on certain workloads at around half the price.

Also at CNET, The Verge, and Ars Technica.

Previously: CPU Rumor Mill: Intel Core i9, AMD Ryzen 9, and AMD "Starship"
Intel Announces 4 to 18-Core Skylake-X CPUs
Intel Core i9-7900X Reviewed: Hotter and More Expensive than AMD Ryzen 1800X for Small Gains
AMD Epyc 7000-Series Launched With Up to 32 Cores


Original Submission

Intel Teases 28 Core Chip, AMD Announces Threadripper 2 With Up to 32 Cores 40 comments

AMD released Threadripper CPUs in 2017, built on the same 14nm Zen architecture as Ryzen, but with up to 16 cores and 32 threads. Threadripper was widely believed to have pushed Intel to respond with the release of enthusiast-class Skylake-X chips with up to 18 cores. AMD also released Epyc-branded server chips with up to 32 cores.

This week at Computex 2018, Intel showed off a 28-core CPU intended for enthusiasts and high end desktop users. While the part was overclocked to 5 GHz, it required a one-horsepower water chiller to do so. The demonstration seemed to be timed to steal the thunder from AMD's own news.

Now, AMD has announced two Threadripper 2 CPUs: one with 24 cores, and another with 32 cores. They use the "12nm LP" GlobalFoundries process instead of "14nm", which could improve performance, but are currently clocked lower than previous Threadripper parts. The TDP has been pushed up to 250 W from the 180 W TDP of Threadripper 1950X. Although these new chips match the core counts of top Epyc CPUs, there are some differences:

At the AMD press event at Computex, it was revealed that these new processors would have up to 32 cores in total, mirroring the 32-core versions of EPYC. On EPYC, those processors have four active dies, with eight active cores on each die (four for each CCX). On EPYC however, there are eight memory channels, and AMD's X399 platform only has support for four channels. For the first generation this meant that each of the two active die would have two memory channels attached – in the second generation Threadripper this is still the case: the two now 'active' parts of the chip do not have direct memory access.

This also means that the number of PCIe lanes remains at 64 for Threadripper 2, rather than the 128 of Epyc.

Threadripper 1 had a "game mode" that disabled one of the two active dies, so it will be interesting to see if users of the new chips will be forced to disable even more cores in some scenarios.


Original Submission

This discussion has been archived. No new comments can be posted.
  • (Score: 4, Interesting) by bradley13 on Tuesday May 30 2017, @02:42PM (16 children)

    by bradley13 (3053) on Tuesday May 30 2017, @02:42PM (#517655) Homepage Journal

    Will you be getting any of these chips?

    Honestly, there's not much point unless the chip is going into a server, or you have some really special applications. If you have a four-core processor, it already spends most of its time bored.

    That said, it's pretty cool that Moore's Law lives on. If you count up the total compute capacity of one of these chips, it's pretty astounding. That article a while back, grousing about how we only have "incremental" improvements in technology? Sometimes, quantity has a quality all its own. Just consider all of the changes, both in technology and in society, that are directly attributable to increased computing power.

    Yeah, also Facebook, but I guess it can't all be good stuff...

    --
    Everyone is somebody else's weirdo.
    • (Score: 3, Informative) by zocalo on Tuesday May 30 2017, @03:27PM (10 children)

      by zocalo (302) on Tuesday May 30 2017, @03:27PM (#517686)
      I wouldn't class video processing as a "really special application", yet it's a fairly popular usage case that can - with the right software - absolutely benefit mainstream PC users that have a larger number of CPU cores and a good GPU (or two), especially given the rise in popularity of 4K footage from action cams and drones. Along with GPU, storage and memory upgrades, I've gone from dual-core to quad-core to octa-core, and I've still not managed to hit a point where I can't max out every single core of the CPU during a high-res rendering process - even with GPU based rendering assistance. Nothing stymies the creative process quicker than having to sit and wait so, while I'm not going to be upgrading immediately, I'm definitely looking forward to seeing what kind of capacity for higher preview resolutions and how close to real-time final output rendering I can achieve with one of these chips when I do.
      --
      UNIX? They're not even circumcised! Savages!
      • (Score: 2) by ikanreed on Tuesday May 30 2017, @04:18PM (6 children)

        by ikanreed (3164) Subscriber Badge on Tuesday May 30 2017, @04:18PM (#517722) Journal

        And of course, there's video games. You know, one of the primary reasons people get fancier, more powerful, and substantially more expensive computers. If you can't think of how graphics pipeline, engine pipeline, resource loading pipeline, and AI state updates might all be on different threads/cores, you're not very creative.
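
        A toy sketch of the kind of split described above: independent subsystems each ticking on their own thread. The subsystem functions are hypothetical placeholders, not code from any real engine; a real engine would synchronize on frame boundaries and share state through queues or double buffers.

```python
# Minimal sketch: run independent game subsystems on separate threads.
# (Illustrative only; CPython's GIL means a real engine would use native threads.)
import threading
import time

def render_frames(stop):   # graphics pipeline
    while not stop.is_set():
        time.sleep(0.016)  # pretend to draw a frame (~60 fps)

def stream_assets(stop):   # resource loading pipeline
    while not stop.is_set():
        time.sleep(0.1)    # pretend to load textures/models in the background

def update_ai(stop):       # AI state updates
    while not stop.is_set():
        time.sleep(0.05)   # pretend to tick NPC behaviour

stop = threading.Event()
threads = [threading.Thread(target=f, args=(stop,))
           for f in (render_frames, stream_assets, update_ai)]
for t in threads:
    t.start()
time.sleep(1)              # let the "game" run briefly
stop.set()
for t in threads:
    t.join()
```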

        • (Score: 2) by EvilSS on Tuesday May 30 2017, @05:10PM

          by EvilSS (1456) Subscriber Badge on Tuesday May 30 2017, @05:10PM (#517764)
          These won't do much for games over their lower-core siblings though. Most games won't use much over 4 cores. However they would be useful to people who play PC games and stream to youtube or twitch. Live encoding on top of gaming can bottleneck most desktop CPUs. Although the AMD CPUs might be at a better price point for that use.
        • (Score: 2) by zocalo on Tuesday May 30 2017, @05:23PM (4 children)

          by zocalo (302) on Tuesday May 30 2017, @05:23PM (#517774)
          Are there any video games that can actually make use of this many CPU cores though (as opposed to GPU pipelines, in which case the sky is probably still the limit)? Sure, it's becoming the norm to split a complex game engine up into multiple parallel executing threads to take more advantage of modern multi-core CPUs, but there are only so many things that you can do that with before managing all the interdependent parallelism becomes more problematic than it's worth. Unlike with video processing, it's not like you're chopping up a set workload into chunks, setting the CPU to work, and not really caring when each task finishes before reassembling the processed data; almost everything is going to need to maintain at least some level of synch.

          Even assuming an MMO, where you're going to have a few more options that could be readily farmed off to a dedicated thread/core on top of the four examples of execution threads you gave (team comms, for instance), and maybe making some of the main threads (graphics and engine, perhaps?) multi-threaded in their own right, you might get up towards 8-10 threads, but 18? Even allowing for a few threads left over for background OS tasks, I'm not sure we're going to be seeing many games that can take full advantage of the 18 core monster - let alone its 36 threads - for quite some time, if at all.
          --
          UNIX? They're not even circumcised! Savages!
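
          The diminishing-returns argument above is essentially Amdahl's law: speedup = 1 / ((1 - p) + p / n), where p is the fraction of the work that parallelizes and n is the core count. A quick sketch with an assumed, purely illustrative parallel fraction of 90%:

```python
# Amdahl's law: how speedup flattens out as cores are added.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

p = 0.90  # assumed parallel fraction; illustrative, not measured
for n in (4, 8, 18, 36):
    print(f"{n:2d} cores: {amdahl_speedup(p, n):.2f}x speedup")
```
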
          • (Score: 2) by ikanreed on Tuesday May 30 2017, @05:24PM

            by ikanreed (3164) Subscriber Badge on Tuesday May 30 2017, @05:24PM (#517775) Journal

            Basically only AAA games that explicitly have PC as a primary target, consoles tend to cap out at 4 cores.

            Which is to say, not a lot.

          • (Score: 2) by tibman on Tuesday May 30 2017, @06:15PM

            by tibman (134) Subscriber Badge on Tuesday May 30 2017, @06:15PM (#517804)

            Like you pointed out, most people have other programs running at the same time. It's often overlooked in PC gaming benchmarks. VOIP being the big one. I often have a browser open too. So while most games only make good use of 3-4 threads that doesn't make an 8+ thread CPU useless for gaming. Zero random lurches or stutters from other processes is nice. Six core is pretty much the next step for gamers. Eight core is trying to future proof but probably not needed (yet).

            --
            SN won't survive on lurkers alone. Write comments.
          • (Score: 0) by Anonymous Coward on Tuesday May 30 2017, @07:02PM (1 child)

            by Anonymous Coward on Tuesday May 30 2017, @07:02PM (#517822)

            Why MMO? Most of the critical game logic, aside from display and control, is run on the server.

            From my (admittedly genre-limited) experience, strategy games can make the most use of additional CPU resources. A modern strategy game AI can use as many cores as you can throw at it; here, the CPU (or sometimes the memory) is the bottleneck. Not to mention thousands or hundreds of thousands of individual units, each one requiring some low-level control routines.

            In some strategy games I've played (Star Ruler 2...), late game on larger maps can become almost completely unplayable because the CPU just can't keep up. On the other hand, I've never had graphical stutter, even on combats with thousands of units shooting lasers and exploding all over the place (like on this screenshot [gog.com]).

            TL;DR: AlphaGo is a computer controlled AI for a strategy game. Think about how many cores it can use.

            • (Score: 2) by takyon on Tuesday May 30 2017, @07:19PM

              by takyon (881) <takyonNO@SPAMsoylentnews.org> on Tuesday May 30 2017, @07:19PM (#517837) Journal

              Agreed on the MMO. Single player first-person games can have "complex" AI to the point where you can't have dozens or hundreds of NPCs on a single map without causing slowdowns (compare that amount to your strategy games... like Cossacks: Back to War), and tend to break maps up with loading zones (for example, cities in Oblivion or Skyrim). Having more cores and RAM allows more AI and stuff to be loaded, which is beneficial for single-map stealth games with no loading zones that have all of the AI loaded and patrolling at the beginning.

              --
              [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
      • (Score: 2) by kaszz on Wednesday May 31 2017, @03:35AM (2 children)

        by kaszz (4211) on Wednesday May 31 2017, @03:35AM (#518076) Journal

        I think you are doing the video processing wrong. When they did some of the later Star Wars movies, it was partly edited on a simple laptop using a low-quality version. When the edit was completed, the edit file was sent to a server cluster that rendered the real thing. No creative wait there that mattered.

        • (Score: 2) by zocalo on Wednesday May 31 2017, @11:07AM (1 child)

          by zocalo (302) on Wednesday May 31 2017, @11:07AM (#518206)
          In a professional studio production environment, absolutely (I've done this myself), but the point of the discussion - and my personal example - was for enthusiast/semi-pro users with a single high-end PC doing that CPU intensive rendering of their 4K action cam/drone footage, and for that you are drawing on the raw full-res video. I'm absolutely editing at low-res (or not so low, these days), but such users are not likely to have access to a render farm; even if they did, it would just be an array of servers equipped with CPUs like those above, so it would still be a valid "mainstream" usage case. There are still a few editing situations where you'll want to see the full-res output though - selective corrections on part of the frame for example - so you can make sure you've not got any visual glitches at the boundaries of the edit, and that's where having the extra CPU capacity during editing can really be useful.

          I suppose if you were a really keen self employed videographer then it might be viable to offload the final rendering to the cloud, but there are a number of problems with that. Firstly, I'm not actually sure how much of a timesaving that's going to give you as you're introducing another potential bottleneck - your Internet link; my fibre internet connection is no slouch so it might work out, but my raw video footage is still typically tens of GB - and sometimes exceeds 100GB if I'm drawing on a lot of B-roll clips - so if you don't have a quick connection that may be an issue. The likely show-stopper though is software support; other than CGI and 3DFX *very* few video production tools actually support render farms in the first place, let alone cloud-based ones (including Adobe Creative Cloud's Premiere Pro, somewhat ironically), so you're either going to be farming out the work manually or using higher end tools like the AVID setup used by Disney for their Star Wars movies. Even if your software supports it, you're unlikely to find a commercial service, so you'll be rolling your own VMs in the cloud, and likely paying out a small fortune for licenses as well.
          --
          UNIX? They're not even circumcised! Savages!
          • (Score: 2) by kaszz on Wednesday May 31 2017, @05:17PM

            by kaszz (4211) on Wednesday May 31 2017, @05:17PM (#518385) Journal

            The files are not sent via the internet. They are delivered physically to the rendering farm; only the edit data is perhaps sent via the internet. As for software, I'd expect professional software to at least be able to deliver the edit file. But there are open source solutions both for the edit console and the server farm, I think. The last part should at least not be too hard to get rolling.

    • (Score: 0) by Anonymous Coward on Tuesday May 30 2017, @03:59PM

      by Anonymous Coward on Tuesday May 30 2017, @03:59PM (#517706)

      Try going to mainstream sites with a "modern" browser and no ad block, javascript, cross-site request, etc protection. Even sites that only need to show text (eg reddit) can still slow a computer to a crawl, especially if you leave open enough tabs.

      I made that example up, but just searched it and found that sure enough they have found a way:
      https://www.reddit.com/r/chrome/comments/456175/helpchrome_high_cpu_usage/ [reddit.com]
      https://www.reddit.com/r/chrome/comments/3otkff/bug_reddits_sidebar_causes_high_cpu_usage_and/ [reddit.com]

    • (Score: 3, Insightful) by LoRdTAW on Tuesday May 30 2017, @04:21PM

      by LoRdTAW (3755) on Tuesday May 30 2017, @04:21PM (#517724) Journal

      My older Core i7 from 2011 is still going strong. I see no reason to upgrade at all. GPU also is fine, an older AMD something. I don't play many demanding 3D games anymore so I keep forgetting my own PC's specs.

      As you said, all those cores and nothing to do.

    • (Score: 3, Informative) by bob_super on Tuesday May 30 2017, @04:56PM

      by bob_super (1357) on Tuesday May 30 2017, @04:56PM (#517750)

      >Honestly, there's not much point unless the chip is going into a server, or you have some really special applications.
      > If you have a four-core processor, it already spends most of its time bored.

      That is 100% true.
      However, if you pay engineers to twiddle their thumbs during hour-long compiles (just joking, they're obviously writing documentation...), every minute saved is money.
      If that CPU spends 90% of its time idling, but saves a mere 10 engineering minutes a day, it will not only pay for itself mathematically, but make your geek happy and therefore more productive even when not directly compiling.
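
      Putting rough numbers on that payback argument: the CPU price below is the i9-7980XE from the table above, while the minutes saved and the engineer's hourly rate are illustrative assumptions, not figures from the article.

```python
# Back-of-the-envelope payback period for a faster compile box.
cpu_price = 1999               # USD, i9-7980XE list price from the story
minutes_saved_per_day = 10     # assumed
engineer_cost_per_hour = 100   # assumed fully-loaded rate, USD
work_days_per_month = 21

saved_per_day = minutes_saved_per_day / 60 * engineer_cost_per_hour
payback_days = cpu_price / saved_per_day
print(f"Saves about ${saved_per_day:.2f}/day; pays for itself in "
      f"~{payback_days:.0f} work days (~{payback_days / work_days_per_month:.1f} months).")
```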

    • (Score: 2) by fyngyrz on Tuesday May 30 2017, @08:52PM

      by fyngyrz (6567) on Tuesday May 30 2017, @08:52PM (#517879) Journal

      Honestly, there's not much point [...] or you have some really special applications.

      I have (and write) exactly those applications. Software defined radio; intensive realtime image processing. The former runs very nicely with many, many threads doing completely different things, none of which except the main thread are loaded heavily (and that only because OS graphics support tends to have to be done from the main loop, which I really wish they (everyone) would get past); the latter goes faster the more slices you can chop an image into right up until you run out of memory bandwidth vs. internal instruction cycle time where the bus is available to other cores. Cache is basically useless here because it's never, ever large enough. And you can tune how far to slice things up based on dynamic testing of the machine you're running on. More than I need means I can get the right amount that I need. And my current 12/24 isn't more than I need.

      So... if Apple comes out with a dual-CPU i9-7980XE, so providing 36 cores and 72 threads, in a Mac Pro that actually has significant memory expandability, slots for multiple graphics cards, and proper connectivity, I will be on that like white on rice. They just might do that, too. That's just a 2013 Mac Pro with new CPUs and a bigger memory bus. And they've admitted the trash can isn't working out.

      If they don't, still, I'm sure someone will, and I'll attempt to Hackintosh my way along. If that can't be done, then I may abandon OSX/MacOS altogether for an OS that can give me what I want. My code's in c, and the important stuff isn't tied to any OS. And I use POSIX threading, so that's generally agnostic enough.
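
      For the image-slicing case described above, a minimal multiprocessing sketch (assuming NumPy is available): split a frame into horizontal strips and hand each to a worker process. The placeholder kernel and the strip count are illustrative; as the parent comment notes, real code would tune the split to the machine and watch for the memory-bandwidth ceiling.

```python
# Minimal sketch: process horizontal slices of an image in parallel workers.
from multiprocessing import Pool
import numpy as np

def process_slice(strip):
    # placeholder per-pixel work standing in for a real filter kernel
    return np.sqrt(strip.astype(np.float64))

def process_image(image, n_slices=12):
    strips = np.array_split(image, n_slices, axis=0)
    with Pool(processes=n_slices) as pool:
        return np.vstack(pool.map(process_slice, strips))

if __name__ == "__main__":
    frame = np.random.randint(0, 256, size=(4320, 7680), dtype=np.uint16)  # fake 8K frame
    print(process_image(frame).shape)
```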

    • (Score: 2) by driverless on Wednesday May 31 2017, @02:56AM

      by driverless (4770) on Wednesday May 31 2017, @02:56AM (#518055)

      Honestly, there's not much point unless the chip is going into a server, or you have some really special applications.

      You could use them to calculate whether you need to buy a 7-blade or 9-blade razor, or a 5,000W or 7,000W stick vacuum.

  • (Score: 4, Funny) by Anonymous Coward on Tuesday May 30 2017, @02:44PM

    by Anonymous Coward on Tuesday May 30 2017, @02:44PM (#517657)

    This announcement comes too late for Memorial Day, but these new chips should hopefully be available for your 4th of July grilling needs.

  • (Score: 3, Interesting) by VLM on Tuesday May 30 2017, @03:06PM

    by VLM (445) on Tuesday May 30 2017, @03:06PM (#517676)

    May as well thank vmware Inc, because as long as they continue to license on per CPU chip basis we're gonna get ever more cores per chip.

    It's not really all that unfair; fundamentally computer power is heat, and two i7-7740X chips will cost twice as much in licensing as one i7-7820X but will generate about twice the heat or twice the long term processing thru-put or however you want to phrase it. Obviously for identical mfgr processes, each flipped bit makes a little heat so 200 watts of flipped bits is twice the productivity of 100 watts of flipped bits. So I'm curious what the workload is for 18, 20, 32, 64 cores that are not very busy at all but somehow are all in use even at some low thermally limited level.

    I've noticed a bifurcation where years ago I'd get like a gig for a virtual image and feel happy about it and very financially productive with it, and nothing has really changed. However in support software the latest vcenter appliance with vsphere and all that junk somehow takes about 14 gigs of ram just to boot up, which seems a bit extreme. I was playing with virtualized networking and that's also extremely memory hungry; a couple distributed switches and some other junk and suddenly I'm using like 10 gigs of ram just for virtualized network appliances, which seems pretty messed up. So anyway my point is something that makes, oh, say 128 gigs of ram cheap and convenient and universal is probably more exciting for me than 18 cores, 17 of which will be thermally limited such that I only get 1 core's worth of thruput anyway.

  • (Score: 0) by Anonymous Coward on Tuesday May 30 2017, @04:03PM (2 children)

    by Anonymous Coward on Tuesday May 30 2017, @04:03PM (#517708)

    core i9. now with 2 more digits!

    • (Score: 3, Funny) by Booga1 on Tuesday May 30 2017, @04:15PM (1 child)

      by Booga1 (6333) on Tuesday May 30 2017, @04:15PM (#517719)

      Just wait until it goes to 11!!!

      • (Score: 0) by Anonymous Coward on Wednesday May 31 2017, @01:56PM

        by Anonymous Coward on Wednesday May 31 2017, @01:56PM (#518279)

        And 11s!!! (WTF is the S for I have no idea)

  • (Score: 5, Informative) by BananaPhone on Tuesday May 30 2017, @04:40PM (5 children)

    by BananaPhone (2488) on Tuesday May 30 2017, @04:40PM (#517739)

    Intel thinks Ryzen is a threat and now all these goodies come out of Intel.

    Maybe we should buy Ryzen CPUs.

    Imagine the stuff that would come out from BOTH.

    • (Score: 2) by tibman on Tuesday May 30 2017, @04:54PM

      by tibman (134) Subscriber Badge on Tuesday May 30 2017, @04:54PM (#517747)

      Too late, already have a Ryzen 7 1800x at home : )

      If only I could talk work into getting more than a 4-core CPU : / Half the engineers are still on Intel two-core laptops!

      --
      SN won't survive on lurkers alone. Write comments.
    • (Score: 4, Informative) by Marand on Tuesday May 30 2017, @11:30PM (2 children)

      by Marand (1081) on Tuesday May 30 2017, @11:30PM (#517975) Journal

      Done. I built a Ryzen 7 1700 system late March and, aside from having to watch for BIOS updates for better memory compatibility, it's been great. So far I have no regrets, even with the upcoming threadripper chips diminishing some of the awesomeness of the R7 parts.

      Sure, games don't use all the cores, but I can run a game, minimise it, and go do other things without ever noticing that some game is still munching CPU on 2-4 threads. When I want to, I go back to where I left off, no need to reload. Same with productivity stuff: who cares if it's running and using some CPU cycles? It's not going to affect any game I decide to launch later.

      I don't even see the CPU temperature increase considerably when I do this, the highest I've seen it go is 50C with the Wraith Spire cooler that came with it, and it's usually in the low 40s under load. (Low load tends to be around 32-34C)

      • (Score: 2) by takyon on Wednesday May 31 2017, @12:23AM (1 child)

        by takyon (881) <takyonNO@SPAMsoylentnews.org> on Wednesday May 31 2017, @12:23AM (#517997) Journal

        even with the upcoming threadripper chips diminishing some of the awesomeness of the R7 parts

        That depends on the $/core and the customer's need for multithreaded performance (like fyngyrz above).

        i9-7980XE is $111/core.
        i7-7820X is $75/core.
        i7-7800X is $65/core.

        Now we have Ryzen R7 1800X with 8 cores at $500 ($460 retail currently). There will supposedly be two 16-core Threadripper models. I doubt the 16-cores will debut at $1,000, so the awesomeness of R7 is hardly diminished...

        --
        [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
        • (Score: 2) by Marand on Wednesday May 31 2017, @04:42AM

          by Marand (1081) on Wednesday May 31 2017, @04:42AM (#518103) Journal

          All I meant is that the initial "Woo! 8c/16t? This is sweet!" turned into "Whoa, threadripper looks insane! ...but my R7 is still pretty sweet." :P

          None of that makes me wish I'd waited, though. I specifically went for the 1700 with its 65w TDP (instead of the 1700X or 1800X), so I'm not particularly interested in all these HEDT chips with TDPs of 140w or higher, aside from the "wow, that's crazy" factor. :)

    • (Score: 1) by BenFenner on Wednesday May 31 2017, @01:48PM

      by BenFenner (4171) on Wednesday May 31 2017, @01:48PM (#518275)

      I've built with Intel CPUs since 1994. Maybe 300-400 machines now between personal, family, and clients.
      However, the latest machine I did had the new i7, for which Intel has refused to provide Win7 drivers for the GPU, making Windows 7 almost impossible to configure properly.
      All new machines from here on out will be AMD CPUs as long as they maintain some semblance of backward compatibility with operating systems.

  • (Score: 0) by Anonymous Coward on Wednesday May 31 2017, @04:47AM

    by Anonymous Coward on Wednesday May 31 2017, @04:47AM (#518107)

    Will you be getting any of these chips?

    No.

    I don't need much more than a Core 2 Duo provides. I say much more because I find the A9 in my iPhone SE handles most of my banking and financial transactions quite well.

    The Core 2 Duo in my Thinkpad gets about 90% of my computing needs done, and I get a nice 4:3 IPS display to go with it without pulse width modulation.

    My desktop needs are satisfied by a first generation (((CompuLab))) Mintbox Mini.

  • (Score: 3, Interesting) by opinionated_science on Wednesday May 31 2017, @11:36AM (1 child)

    by opinionated_science (4031) on Wednesday May 31 2017, @11:36AM (#518214)

    AMD, can we have an AVX-512 instruction set on Naples? Intel doesn't like us having them on the desktop...
