
posted by janrinok on Thursday June 07 2018, @06:53PM   Printer-friendly
from the begun-the-core-wars-have dept.

AMD released Threadripper CPUs in 2017, built on the same 14nm Zen architecture as Ryzen, but with up to 16 cores and 32 threads. Threadripper was widely believed to have pushed Intel to respond with the release of enthusiast-class Skylake-X chips with up to 18 cores. AMD also released Epyc-branded server chips with up to 32 cores.

This week at Computex 2018, Intel showed off a 28-core CPU intended for enthusiasts and high-end desktop users. While the part was overclocked to 5 GHz, it required a one-horsepower water chiller to get there. The demonstration seemed timed to steal the thunder from AMD's own news.

Now, AMD has announced two Threadripper 2 CPUs: one with 24 cores, and another with 32 cores. They use the "12nm LP" GlobalFoundries process instead of "14nm", which could improve performance, but are currently clocked lower than previous Threadripper parts. The TDP has been pushed up to 250 W from the 180 W TDP of Threadripper 1950X. Although these new chips match the core counts of top Epyc CPUs, there are some differences:

At the AMD press event at Computex, it was revealed that these new processors would have up to 32 cores in total, mirroring the 32-core versions of EPYC. On EPYC, those processors have four active dies, with eight active cores on each die (four per CCX). EPYC, however, has eight memory channels, while AMD's X399 platform only supports four. For the first generation this meant that each of the two active dies had two memory channels attached. In second-generation Threadripper this is still the case: the two newly active dies do not have direct memory access.

This also means that the number of PCIe lanes remains at 64 for Threadripper 2, rather than the 128 of Epyc.
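The asymmetric layout described above can be sketched as a toy model. The die, core, and channel counts come from the article; the `Die` structure and helper names are purely illustrative:

```python
from dataclasses import dataclass

@dataclass
class Die:
    cores: int
    memory_channels: int  # channels wired directly to this die

# 32-core Threadripper 2: four 8-core dies, but only two have memory
# channels attached, matching the X399 platform's four-channel limit.
threadripper2 = [Die(8, 2), Die(8, 2), Die(8, 0), Die(8, 0)]

# EPYC wires two channels to every die, for eight in total.
epyc = [Die(8, 2), Die(8, 2), Die(8, 2), Die(8, 2)]

def totals(package):
    return (sum(d.cores for d in package),
            sum(d.memory_channels for d in package))

print(totals(threadripper2))  # (32, 4)
print(totals(epyc))           # (32, 8)
```

The takeaway: core counts match, but half of the Threadripper 2 dies must reach memory over the Infinity Fabric via a neighboring die.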

Threadripper 1 had a "game mode" that disabled one of the two active dies, so it will be interesting to see if users of the new chips will be forced to disable even more cores in some scenarios.

Original Submission

Related Stories

Intel Announces 4 to 18-Core Skylake-X CPUs 31 comments

Recently, Intel was rumored to be releasing 10 and 12 core "Core i9" CPUs to compete with AMD's 10-16 core "Threadripper" CPUs. Now, Intel has confirmed these as well as 14, 16, and 18 core Skylake-X CPUs. Every CPU with 6 or more cores appears to support quad-channel DDR4:

Intel Core    Cores/Threads    Price    $/core
i7-7640X      4/4              $242     $61 (fewer threads)

Last year at Computex, the flagship Broadwell-E enthusiast chip was launched: the 10-core i7-6950X at $1,723. Today at Computex, the 10-core i9-7900X costs $999, and the 16-core i9-7960X costs $1,699. Clearly, AMD's Ryzen CPUs have forced Intel to become competitive.

Although the pricing of AMD's 10-16 core Threadripper CPUs is not known yet, the 8-core Ryzen R7 launched at $500 (available now for about $460). The Intel i7-7820X has 8 cores for $599, and will likely have better single-threaded performance than the AMD equivalent. So while Intel's CPUs are still more expensive than AMD's, they may have similar price/performance.

For what it's worth, Intel also announced quad-core Kaby Lake-X processors.

Welcome to the post-quad-core era. Will you be getting any of these chips?

Original Submission

AMD Epyc 7000-Series Launched With Up to 32 Cores 19 comments

AMD has launched its Ryzen-based take on x86 server processors to compete with Intel's Xeon CPUs. All of the Epyc 7000-series CPUs support 128 PCIe 3.0 lanes and 8 channels (2 DIMMs per channel) of DDR4-2666 DRAM:

A few weeks ago AMD announced the naming of the new line of enterprise-class processors, called EPYC, and today marks the official launch with configurations up to 32 cores and 64 threads per processor. We also got an insight into several features of the design, including the AMD Infinity Fabric.

Today's announcement of the AMD EPYC product line sees the launch of the top four CPUs, focused primarily at dual socket systems. The full EPYC stack will contain twelve processors, with three for single socket environments, with the rest of the stack being made available at the end of July. It is worth taking a few minutes to look at how these processors look under the hood.

On the package are four silicon dies, each one containing the same 8-core silicon we saw in the AMD Ryzen processors. Each silicon die has two core complexes, each of four cores, and supports two memory channels, giving a total maximum of 32 cores and 8 memory channels on an EPYC processor. The dies are connected by AMD's newest interconnect, the Infinity Fabric, which plays a key role not only in die-to-die communication but also processor-to-processor communication and within AMD's new Vega graphics. AMD designed the Infinity Fabric to be modular and scalable in order to support large GPUs and CPUs in the roadmap going forward, and states that within a single package the fabric is overprovisioned to minimize any issues with non-NUMA aware software (more on this later).

With a total of 8 memory channels, and support for 2 DIMMs per channel, AMD is quoting a 2TB per socket maximum memory support, scaling up to 4TB per system in a dual processor system. Each CPU will support 128 PCIe 3.0 lanes, suitable for six GPUs with full bandwidth support (plus IO) or up to 32 NVMe drives for storage. All the PCIe lanes can be used for IO devices, such as SATA drives or network ports, or as Infinity Fabric connections to other devices. There are also 4 IO hubs per processor for additional storage support.

AMD's slides at Ars Technica.

Original Submission

AMD 16/12-Core Threadripper Details Confirmed 12 comments

AMD's Threadripper 1950X (TR 1950X?) will have 16 cores for $1,000, and the Threadripper 1920X will have 12 cores for $800. They will be available in early August:

Last night out of the blue, we received an email from AMD, sharing some of the specifications for the forthcoming Ryzen Threadripper CPUs to be announced today. Up until this point, we knew a few things – Threadripper would consist of two Zeppelin dies featuring AMD's latest Zen core and microarchitecture, and would essentially double up on the HEDT Ryzen launch. Double dies means double pretty much everything: Threadripper would support up to 16 cores, up to 32 MB of L3 cache, and quad-channel memory, and would require a new socket/motherboard platform called X399, sporting a massive socket with 4094 pins (also marking a move to an LGA socket for AMD). By virtue of being sixteen cores, AMD is seemingly carving a new consumer category above HEDT/High-End Desktop, which we've coined the 'Super High-End Desktop', or SHED for short.

[...] From what we do know, 16 Zen cores at $999 is about the ballpark price we were expecting. With the clock speeds of 3.4 GHz base and 4 GHz Turbo, this is essentially two Ryzen 7 1800X dies at $499 each stuck together, creating the $999 price (obviously it's more complicated than this). Given the frequencies and the performance of these dies, the TDP is likely in the 180W range, seeing as the Ryzen 7 1800X was a 95W CPU with slightly higher frequencies. The 1950X runs at 4.0 GHz turbo and also has access to AMD's XFR – which will boost the processor when temperature and power allow – in jumps of +25 MHz: AMD would not comment on the maximum frequency boost of XFR, though given our experiences of the Ryzen silicon and previous Ryzen processor specifications, this is likely to be +100 MHz. We were not told if the CPUs would come with a bundled CPU cooler, although if our 180W prediction is in the right area, then substantial cooling would be needed. We expect AMD to use the same Indium-Tin solder as the Ryzen CPUs, although we were unable to get confirmation of this at this time.

[...] Comparing the two, and what we know, AMD is going to battle on many fronts. Coming in at $999 is going to be aggressive, along with an all-core turbo at 3.4 GHz or above: Intel's chip at $1999 will likely turbo below this. Both chips will have quad-channel DRAM, supporting DDR4-2666 in 1 DIMM per channel mode (and DDR4-2400 in 2 DPC), but there are some tradeoffs. Intel Core parts do not support ECC, and AMD Threadripper parts are expected to (awaiting confirmation). Intel has the better microarchitecture in terms of pure IPC, though it will be interesting to see the real-world difference if AMD is clocked higher. AMD Threadripper processors will have access to 60 lanes of PCIe for accelerators, such as GPUs, RAID cards and other functions, with another 4 reserved by the chipset: Intel will likely be limited to 44 for accelerators but have a much better chipset in the X299 for IO support and capabilities. We suspect AMD to run a 180W TDP, and Intel at 165W, giving a slight advantage to Intel perhaps (depending on workload), and Intel will also offer AVX512 support for its CPU whereas AMD has smaller FMA and AVX engines by comparison. The die-to-die latency of AMD's MCM will also be an interesting element to the story, depending exactly where AMD is aiming this product.

There are also some details for the Ryzen 3 quad-cores, but no confirmed pricing yet.

Meanwhile, Intel's marketing department has badmouthed AMD, calling 32-core Naples server chips "4 glued-together desktop die". That could have something to do with AMD's chips matching Intel's performance on certain workloads at around half the price.

Also at CNET, The Verge, and Ars Technica.

Previously: CPU Rumor Mill: Intel Core i9, AMD Ryzen 9, and AMD "Starship"
Intel Announces 4 to 18-Core Skylake-X CPUs
Intel Core i9-7900X Reviewed: Hotter and More Expensive than AMD Ryzen 1800X for Small Gains
AMD Epyc 7000-Series Launched With Up to 32 Cores

Original Submission

AMD Expected to Release Ryzen CPUs on a 12nm Process in Q1 2018 10 comments

AMD's strong Ryzen sales may have convinced the company to release a new version on a slightly improved process in Spring 2018:

AMD has informed its partners that it plans to launch in February 2018 an upgrade version of its Ryzen series processors built using a 12nm low-power (12LP) process at Globalfoundries, according to sources at motherboard makers.

The company will initially release the CPUs codenamed Pinnacle 7, followed by mid-range Pinnacle 5 and entry-level Pinnacle 3 processors in March 2018, the sources disclosed. AMD is also expected to see its share of the desktop CPU market return to 30% in the first half of 2018.

AMD will launch the low-power version of Pinnacle processors in April 2018 and the enterprise version Pinnacle Pro in May 2018.

The new "Pinnacle Ridge" chips appear to be part of a Zen 1 refresh rather than "Zen 2", which is expected to ship in 2019 on a 7nm process. The 12nm Leading-Performance (12LP) process was described by GlobalFoundries as providing 15% greater circuit density and a 10% performance increase compared to its 14nm FinFET process.

AMD has yet to release 14nm "Raven Ridge" CPUs for laptops.

Also at Wccftech. HPCwire article about the 12LP process.

Previously: AMD Ryzen Launch News
AMD's Ryzen Could be Forcing Intel to Release "Coffee Lake" CPUs Sooner
AMD Ryzen 3 Reviewed

Original Submission

AMD Ratcheting Up the Pressure on Intel 24 comments

Intel expects to lose some server/data center market share to AMD's Epyc line of chips:

The pitched battle between Intel and AMD has spread to the data center, and while Intel has been forthcoming that it expects to lose some market share in the coming months to AMD, Brian Krzanich's recent comments to Instinet analyst Romit Shah give us some insight into the surprising scope of AMD's threat. Shah recently sat down with Intel CEO Brian Krzanich and Barron's reported on his findings:

Shah relates that Krzanich "was very matter-of-fact in saying that Intel would lose server share to AMD in the second half of the year," which is not news, but he thought it significant that "Mr. Krzanich did not draw a firm line in the sand as it relates to AMD's potential gains in servers; he only indicated that it was Intel's job to not let AMD capture 15-20% market share." (emphasis added).

Furthermore, Intel's problems with the "10nm" node could allow AMD to pick up market share with "7nm" (although it may be similar in performance to Intel's "10nm"):

Nomura Instinet is less bullish on further stock gains for Intel after talking to the chipmaker's CEO, Brian Krzanich. [...] The analyst said Intel's problems in moving to its next-generation chip manufacturing technology may be a factor in its potential market share losses. The chipmaker revealed on its April 26 earnings conference call that it delayed volume production under its 10-nanometer chip manufacturing process to next year. Conversely, AMD said on its call that it plans to start next-generation 7-nanometer chip production in late 2018.

[...] "We see Mr. Krzanich's posture here reflecting the company's inability thus far to sufficiently yield 10nm for volume production while AMD's partner TSMC is currently making good progress on 7nm; thus, setting Intel up for stiff competition again in 2019," the analyst said.

Several post-mortem articles have since dissected Intel's misleading 28-core CPU demo and its aftermath.

Rather than 28 cores, Intel may introduce 20 and 22 core CPUs to compete with AMD's Threadripper 2, along with 8-core Coffee Lake refresh CPUs to compete with Ryzen.

Original Submission

AMD Threadripper 2 Available Starting on August 13 24 comments

AMD's Threadripper 2 TR 2990WX will be available for retail on August 13. The CPU has 32 cores and the suggested retail price is $1,799, compared to $1,999 for Intel's 18-core i9-7980XE. A 24-core TR 2970WX will be available in October for $1,299.

The 16-core TR 2950X ($899, August 31) and 12-core TR 2920X ($649, October) replace their counterparts from the last generation of Threadripper CPUs, but have slightly improved "12nm" Zen+ cores like the other Threadripper 2 CPUs. The 16 and 12-core chips use 2 dies while the 24 and 32-core versions use 4 dies.

A benchmark leak shows the 32-core TR 2990WX outperforming Intel's 18-core i9-7980XE by 53% in the multithreaded Cinebench R15 (an early result that may not represent final performance, and may be overly favorable to AMD).

Also at Tom's Hardware and Engadget.

Related: First Two AMD Threadripper Chips Out on Aug. 10, New 8-Core Version on Aug. 31
Intel Teases 28 Core Chip, AMD Announces Threadripper 2 With Up to 32 Cores
AMD Ratcheting Up the Pressure on Intel

Original Submission

Intel Announces 9th Generation Desktop Processors, Including a Mainstream 8-Core CPU 33 comments

Intel Announces 9th Gen Core CPUs: Core i9-9900K (8-Core), i7-9700K, & i5-9600K

Among many of Intel's announcements today, a key one for a lot of users will be the launch of Intel's 9th Generation Core desktop processors, offering up to 8 cores on Intel's mainstream consumer platform. These processors are drop-in compatible with current Coffee Lake and Z370 platforms, but are accompanied by a new Z390 chipset and associated motherboards as well. The highlights from this launch are the 8-core Core i9 parts, which include a 5.0 GHz turbo Core i9-9900K, rated at a 95W TDP.

[...] Leading from the top of the stack is the Core i9-9900K, Intel's new flagship mainstream processor. This part is eight full cores with hyperthreading, with a base frequency of 3.6 GHz at 95W TDP, and a turbo up to 5.0 GHz on two cores. Memory support is up to dual channel DDR4-2666. The Core i9-9900K builds upon the Core i7-8086K from the 8th Generation product line by adding two more cores, and increasing that 5.0 GHz turbo from one core to two cores. The all-core turbo is 4.7 GHz, so it will be interesting to see what the power consumption is when the processor is fully loaded. The Core i9 family will have the full 2MB of L3 cache per core.

[...] Also featuring 8 cores is the Core i7-9700K, but without the hyperthreading. This part will have a base frequency of 3.6 GHz as well for a given 95W TDP, but can turbo up to 4.9 GHz only on a single core. The i7-9700K is meant to be the direct upgrade over the Core i7-8700K, and although both chips have the same underlying Coffee Lake microarchitecture, the 9700K has two more cores and slightly better turbo performance, but less L3 cache per core at only 1.5MB per core.

Intel also announced refreshed 8 to 18 core high-end desktop CPUs, and a new 28-core Xeon aimed at extreme workstation users.

Intel Teases 28 Core Chip, AMD Announces Threadripper 2 With Up to 32 Cores
AMD Threadripper 2 Available Starting on August 13

Original Submission

Intel Announces 48-core Xeons Using Multiple Dies, Ahead of AMD Announcement 23 comments

Intel announces Cascade Lake Xeons: 48 cores and 12-channel memory per socket

Intel has announced the next family of Xeon processors that it plans to ship in the first half of next year. The new parts represent a substantial upgrade over current Xeon chips, with up to 48 cores and 12 DDR4 memory channels per socket, supporting up to two sockets.

These processors will likely be the top-end Cascade Lake processors; Intel is labelling them "Cascade Lake Advanced Performance," with a higher level of performance than the Xeon Scalable Processors (SP) below them. The current Xeon SP chips use a monolithic die, with up to 28 cores and 56 threads. Cascade Lake AP will instead be a multi-chip processor with multiple dies contained within a single package. AMD is using a similar approach for its comparable products; the Epyc processors use four dies in each package, with each die having 8 cores.

The switch to a multi-chip design is likely driven by necessity: as the dies become bigger and bigger it becomes more and more likely that they'll contain a defect. Using several smaller dies helps avoid these defects. Because Intel's 10nm manufacturing process isn't yet good enough for mass market production, the new Xeons will continue to use a version of the company's 14nm process. Intel hasn't yet revealed what the topology within each package will be, so the exact distribution of those cores and memory channels between chips is as yet unknown. The enormous number of memory channels will demand an enormous socket, currently believed to be a 5903 pin connector.
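The yield argument above can be made concrete with a standard Poisson defect model. The defect density and die areas below are illustrative assumptions, not figures from the article:

```python
import math

def poisson_yield(area_mm2: float, defects_per_mm2: float) -> float:
    """Fraction of dies that come out defect-free under a Poisson model."""
    return math.exp(-area_mm2 * defects_per_mm2)

D = 0.001  # assumed defect density (defects per mm^2) -- illustrative only

big = poisson_yield(700, D)    # one hypothetical 700 mm^2 monolithic die
small = poisson_yield(175, D)  # one of four hypothetical 175 mm^2 chiplets

# Needing four good chiplets *in a row* would equal the monolithic yield,
# but chiplets are tested before packaging: a bad small die wastes only
# 175 mm^2, and packages are then assembled from known-good dies.
print(f"monolithic dies that work: {big:.1%}")    # 49.7%
print(f"small dies that work:      {small:.1%}")  # 83.9%
```

So under these assumed numbers, roughly half of the large dies would be discarded versus about a sixth of the small ones, which is the economic pressure pushing both vendors toward multi-chip packages.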

Intel also announced tinier 4-6 core E-2100 Xeons with ECC memory support.

Meanwhile, AMD is holding a New Horizon event on Nov. 6, where it is expected to announce 64-core Epyc processors.

Related: AMD Epyc 7000-Series Launched With Up to 32 Cores
AVX-512: A "Hidden Gem"?
Intel's Skylake-SP vs AMD's Epyc
Intel Teases 28 Core Chip, AMD Announces Threadripper 2 With Up to 32 Cores
TSMC Will Make AMD's "7nm" Epyc Server CPUs
Intel Announces 9th Generation Desktop Processors, Including a Mainstream 8-Core CPU

Original Submission

This discussion has been archived. No new comments can be posted.
  • (Score: 0) by Anonymous Coward on Thursday June 07 2018, @07:00PM

    by Anonymous Coward on Thursday June 07 2018, @07:00PM (#690016)

    Not me.

  • (Score: 2) by VLM on Thursday June 07 2018, @07:08PM (22 children)

    by VLM (445) on Thursday June 07 2018, @07:08PM (#690023)

    Serious question, is this really useful for gamers today in 2018?

    I only use a couple cores for minecraft and my minecraft server even with extensive mods, so I don't know about twitchy FPS sequels maybe those use all the cores, I donno. 28 cores seems a little far fetched. I think my server has 4 allocated but its usually not using all 4.

    Ironically for a product marketed to gamers, I have a real world use for multicore in my vmware cluster in the basement. In terms of system design where only the weakest link matters, I would think my memory BW or SSD would saturate long before all 28 cores are in use, but who knows, maybe not.

    • (Score: 0) by Anonymous Coward on Thursday June 07 2018, @07:16PM (2 children)

      by Anonymous Coward on Thursday June 07 2018, @07:16PM (#690027)

      But why?

      • (Score: 2) by VLM on Friday June 08 2018, @03:24PM

        by VLM (445) on Friday June 08 2018, @03:24PM (#690352)

        You mistyped "why not?"

        What can I say, you get addicted to the virtualization lifestyle at work, next thing you know you got a cluster in the basement. All legal if for non-commercial use and the ESX-experience VMUG deal. I spend a lot more on electricity and hardware than I do on VMUG membership, thats for sure, LOL.

      • (Score: 0) by Anonymous Coward on Sunday June 10 2018, @04:59AM

        by Anonymous Coward on Sunday June 10 2018, @04:59AM (#691037)

I have somewhat seriously discussed placing a server in the attic and running wiring out around the house to allow thin clients or even just monitor/keyboard/mouse stations to be placed in any room, so that all people in the house have access to their own computing environment from anywhere in the house. Seems like the perfect use for a personal vmware cluster. A basement is fine, too.

    • (Score: 2) by EvilSS on Thursday June 07 2018, @07:50PM

      by EvilSS (1456) Subscriber Badge on Thursday June 07 2018, @07:50PM (#690042)
      No. Maybe if you are also live-streaming at the same time but even then it's overkill. These are for workstation loads. Video editing, running a bunch of VMs, sciencey mumbo jumbo, crap like that. For games you'd be better off with fewer, faster cores.
    • (Score: 2) by JoeMerchant on Thursday June 07 2018, @07:59PM (6 children)

      by JoeMerchant (3937) on Thursday June 07 2018, @07:59PM (#690045)

      Game authors _could_ use massively multi-threaded processing for many things (think: processing NPC logic and decisions), but game authors are driven by the mass market: what does their buying audience have access to, so, no, I doubt many game authors are going to go massively multi-threaded anytime soon. Lots of "gaming systems" are still just dual core.

      • (Score: 5, Insightful) by takyon on Thursday June 07 2018, @08:31PM (2 children)

        by takyon (881) <{takyon} {at} {}> on Thursday June 07 2018, @08:31PM (#690056) Journal

        If they develop their games on workstations, and test them on both the workstations and dual or quad-core PCs, they should be able to design engines/games that use even as many as 64 threads. Games like Skyrim ran pretty well on low-end systems as well as high-end systems. That's the way it should work: set your minimum requirements, as low as possible if you want to ensure people can at least run it, but scale to use everything available.

        While Steam stats show many 2-4 core users, we now have 6-8 core "mainstream" chips from both AMD and Intel. The latest consoles also allow game developers to access about 6-7 of the 8 cores. The writing is on the wall, and people should design their stuff to work well with 64 or more threads even if customers don't have that yet. Even if the utilization is just running some basic parallel RadiantAI type stuff on every thread, but doing most of the work on 4 cores, so be it. The bleeding edge users should be able to enjoy a game mode that allows for thousands of NPCs.
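The "set a low floor, scale to whatever is available" approach this comment describes could look something like the sketch below. The task split and function names are hypothetical; in CPython the GIL limits true parallelism for CPU-bound work, so a real engine would use native threads, but the scaling pattern is the same:

```python
import os
from concurrent.futures import ThreadPoolExecutor

def update_npc(npc_id: int) -> int:
    # Placeholder for per-NPC AI work (pathing, decisions, etc.).
    return npc_id * npc_id

def run_ai_tick(npc_count: int) -> list:
    # Scale the worker pool to the hardware, with a floor of 2 so the
    # game still runs on low-end dual-core machines.
    workers = max(2, os.cpu_count() or 2)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # map preserves order, so results line up with NPC ids.
        return list(pool.map(update_npc, range(npc_count)))

results = run_ai_tick(1000)
```

On a dual-core box this runs with 2 workers; on a 32-core Threadripper it runs with 32, without any change to the game logic.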

        [SIG] 10/28/2017: Soylent Upgrade v14 []
        • (Score: 2) by JoeMerchant on Thursday June 07 2018, @09:00PM

          by JoeMerchant (3937) on Thursday June 07 2018, @09:00PM (#690067)

          people should design their stuff to work well with 64 or more threads even if customers don't have that yet

          Who do you think runs EA, Activision and UbiSoft? Where's their motivation?

          I agree with you, but I don't think that they do.

        • (Score: 2) by bob_super on Thursday June 07 2018, @09:05PM

          by bob_super (1357) on Thursday June 07 2018, @09:05PM (#690069)

          The current question tends to be : Will it run on Xbox/PS4 at reasonable settings, and how much does the extra PC eye candy cost ?

      • (Score: 0) by Anonymous Coward on Friday June 08 2018, @07:41AM (2 children)

        by Anonymous Coward on Friday June 08 2018, @07:41AM (#690237)

        massively multi-threaded processing for many things (think: processing NPC logic and decisions),

Many games have a multiplayer option. So a popular way to reduce bandwidth is to make the NPC logic and decisions deterministic based on committed player input – which doesn't take much bandwidth. The same actions by the same players at the same time will have the NPCs doing the exact same things. If the NPCs were not deterministic, the various client PCs would have to send each other zillions of detailed updates for the various independent NPCs, instead of mainly sending the player actions to each other.

Making it massively multithreaded AND deterministic is possible, but not so simple if you actually want it to be faster... and even less simple if you want it to seem even more intelligent...

        You could have the game work differently depending on whether it's multiplayer or single player but that adds to the complexity.
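The deterministic-lockstep idea in this comment can be sketched as follows: NPC state is advanced by a pure function of the current state, the committed player inputs, and the tick number, so independent machines stay in sync without exchanging NPC updates. The movement rule and all names here are made up for illustration:

```python
def advance_npcs(positions, inputs, tick):
    # NPC movement is a pure function of (state, committed inputs, tick),
    # so every client computes the identical next state with no NPC traffic.
    out = []
    for i, pos in enumerate(positions):
        target = inputs[(i + tick) % len(inputs)]
        out.append(pos + (1 if target > pos else -1 if target < pos else 0))
    return out

def simulate(seed_positions, committed_inputs):
    state = list(seed_positions)
    for tick, inputs in enumerate(committed_inputs):
        state = advance_npcs(state, inputs, tick)
    return state

# Two independent "clients" fed the same committed inputs stay in lockstep.
inputs = [[3, 7], [4, 6], [5, 5]]
client_a = simulate([0, 10, 20], inputs)
client_b = simulate([0, 10, 20], inputs)
assert client_a == client_b
```

The multithreading difficulty the comment mentions shows up here too: parallelizing `advance_npcs` is only safe if the per-NPC work never depends on iteration order or shared mutable state.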

        • (Score: 0) by Anonymous Coward on Friday June 08 2018, @08:53AM (1 child)

          by Anonymous Coward on Friday June 08 2018, @08:53AM (#690249)
          It's still possible if you only had a few intelligent NPCs that sent their actions similar to the players.
          • (Score: 2) by Freeman on Friday June 08 2018, @05:41PM

            by Freeman (732) on Friday June 08 2018, @05:41PM (#690416) Journal

            Intelligent NPC, now there's an Oxymoron, if I ever saw one.

            Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
    • (Score: 1) by nitehawk214 on Thursday June 07 2018, @08:16PM (3 children)

      by nitehawk214 (1304) on Thursday June 07 2018, @08:16PM (#690050)

      The AMD one is certainly not marketed to gamers. The "game mode" on Threadripper cripples the processor which is somehow supposed to improve game performance. If all you are doing is playing games, you don't want Threadripper at all.

      I am considering Threadripper for my next video / photography processing workstation. Though I don't know if any software can really make use of this many cores yet.

      "Don't you ever miss the days when you used to be nostalgic?" -Loiosh
      • (Score: 2) by takyon on Thursday June 07 2018, @08:40PM (1 child)

        by takyon (881) <{takyon} {at} {}> on Thursday June 07 2018, @08:40PM (#690062) Journal

        Only certain games, like DiRT Rally, require the Threadripper "game mode". In my quick search I didn't see any lists that show which games require game mode, don't require game mode, or can use all 16 (soon 32) cores. But maybe those lists are floating around somewhere.

        Game developers know that Intel and AMD are putting out non-Xeon/Epyc CPUs with a lot more cores. The 16-core TR 1950X, 18-core Core i9-7980XE. Now shooting up to 28 or 32 cores, and maybe continuing to 48 or 64 in the near future. So newer game engines should be able to handle running on these, or even use all of the cores in some cases.

        • (Score: 1) by nitehawk214 on Saturday June 09 2018, @02:20AM

          by nitehawk214 (1304) on Saturday June 09 2018, @02:20AM (#690641)

Well, they are going to have to make better use of multi-core and GPU-based systems and faster memory busses. The GHz wars have ended, and Moore's law has expanded into multi-core processors. The coming generations of chips are going to add performance with more cores, not more GHz.

      • (Score: 0) by Anonymous Coward on Friday June 08 2018, @06:22PM

        by Anonymous Coward on Friday June 08 2018, @06:22PM (#690437)

i suspect kdenlive could.

    • (Score: 2) by Apparition on Thursday June 07 2018, @09:18PM

      by Apparition (6835) on Thursday June 07 2018, @09:18PM (#690073) Journal

      It's good for playing a video game while live streaming and having Google Chrome up with a dozen tabs. Most video games don't utilize more than four cores, but over the past few years more games have come out which utilize six or even eight, such as Overwatch. I don't know of any video game that utilizes more than eight cores, but I'm sure that's only a matter of time.

    • (Score: 3, Insightful) by bobthecimmerian on Thursday June 07 2018, @10:32PM (3 children)

      by bobthecimmerian (6834) on Thursday June 07 2018, @10:32PM (#690097)

      I wouldn't get it for gaming, I would guess that a $300-$500 CPU and, say, $1500 invested in the right GPU would outdo a $1000+ 32 core machine and $500 less spent on GPU.

I rip all of my Blu Rays and DVDs to disk and then reencode them to H.265 (since that's the most bandwidth/disk space -efficient codec my streaming media devices can handle). So I could find a use for one of these machines for a few months. But as it is, right now I have an AMD FX-8320. It's a joke against the cutting edge, really, 4 cores, 8 threads, and the dedicated GPU is even older, from 2010. But I have an SSD and 32GB of RAM. I have my H.265 encoding running continuously in the background on 4 threads. The other 4 threads run Firefox, Chrome, Minecraft, a web server, and 3 VMs. I lock the screen and walk away, and my kids sit down to use Chrome and Minecraft. It never slows down.

      Now granted, if I was into more modern games it wouldn't work.

      • (Score: 2) by Spamalope on Thursday June 07 2018, @11:41PM (2 children)

        by Spamalope (5233) on Thursday June 07 2018, @11:41PM (#690119) Homepage

        What software are you using to re-encode/convert to a single video file?
        I haven't done it in a very long time and the landscape has completely changed.

        • (Score: 2, Insightful) by Anonymous Coward on Friday June 08 2018, @04:17AM

          by Anonymous Coward on Friday June 08 2018, @04:17AM (#690194)

          Not the GP, but MakeMKV followed by HandBrake seems to be the choice lately.

        • (Score: 2) by bobthecimmerian on Friday June 08 2018, @11:35AM

          by bobthecimmerian (6834) on Friday June 08 2018, @11:35AM (#690273)

          I am on Linux, but everything I'm doing can be done on Windows. I use MakeMKV to rip films as-is and then ffmpeg to convert to H.265 video and AAC audio. AAC is the audio codec on DVDs and it's lower quality than the audio codecs on Blu Rays, but I can't hear any differences with my mediocre sound system. I've had problems with streaming video to PCs and Android television boxes using the Blu Ray audio codecs, that's why I do the conversion. I use .mkv files instead of .mp4 or similar because Blu Ray subtitles can't be stored in .mp4 files without some kind of conversion process.
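The MakeMKV-then-ffmpeg pipeline described in this comment boils down to a command along these lines. This is a sketch: the CRF value is an illustrative choice, the filenames are placeholders, and `libx265` must be compiled into your ffmpeg build:

```python
import subprocess

def h265_cmd(src: str, dst: str, crf: int = 20) -> list:
    # Build an ffmpeg invocation: H.265 video (libx265), AAC audio,
    # subtitle streams copied as-is (.mkv output keeps Blu-ray subtitles).
    return ["ffmpeg", "-i", src,
            "-map", "0",                          # keep every stream
            "-c:v", "libx265", "-crf", str(crf),  # quality-targeted encode
            "-c:a", "aac",
            "-c:s", "copy",
            dst]

cmd = h265_cmd("movie.mkv", "movie-h265.mkv")
# subprocess.run(cmd, check=True)  # uncomment to actually transcode
```

Lower CRF means higher quality and bigger files; `-map 0` is what preserves the subtitle tracks the comment mentions losing with .mp4 containers.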

    • (Score: 2) by tibman on Thursday June 07 2018, @10:57PM

      by tibman (134) Subscriber Badge on Thursday June 07 2018, @10:57PM (#690111)

      This processor? doubtful. Currently using the Ryzen 8 core version and it's excessive. I play a coop game called Vermintide 2 that starts a local server that other people connect to. When i host we get wrecked so bad. There is enough spare CPU for the AI director to give you a really bad (or Fun?) day.

      The nice part is being able to leave all your normal programs open when you play games. You could permanently give 4 threads to your sooper seekret VM and you wouldn't notice any game slowdown at all. Most people seem to use these high core-count CPUs for streaming their play sessions to the world.

      SN won't survive on lurkers alone. Write comments.
    • (Score: 2) by Freeman on Friday June 08 2018, @05:36PM

      by Freeman (732) on Friday June 08 2018, @05:36PM (#690415) Journal

      I'm not sure what the real bottleneck on my current machine is, but I'm thinking it's got a whole lot more to do with developers who can't write a stable piece of software than with anything legitimately maxing out my CPU, RAM, or GPU. Could be that my 8GB RX 480 is having issues, but I doubt it. I also have way more RAM than I'll need for the foreseeable future. VR does demand more CPU, RAM, and GPU power by the nature of the beast, but nothing is really taking advantage of my 8-core CPU.

      An SSD makes loading anything feel snappier. Comparing an HDD to an SSD is like comparing VHS to DVD: there's a vast improvement in performance. If you want to make a serious impact on how fast your PC feels, that's the single best upgrade.

      Faster RAM is notable, but going from 2400 to 3200 MHz probably isn't very noticeable; you just won't see that much of a performance gain. Going from 4GB to 8GB of RAM, on the other hand, can make a much bigger impact if you don't have enough to keep the 5,000 tabs open in your browser of choice.

      A GPU upgrade from a 128-bit to a 256-bit memory bus, or 256-bit to 512-bit, is typically a major improvement over the previous card. Other factors matter, like how much RAM is on the card and how fast it is, but the higher-bandwidth cards usually trend upward on those as well.

      Also, regarding multiplayer performance: sometimes you just need a faster internet connection, though latency is the key issue. Other times it's down to software developers who can't code.

      A minor jump in CPU performance? Bottom of the list of "maybe this will help my computer run faster".

      Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
  • (Score: 3, Informative) by bob_super on Thursday June 07 2018, @07:26PM (2 children)

    by bob_super (1357) on Thursday June 07 2018, @07:26PM (#690033)

    We tried a first-generation Threadripper for our compiles, but it turned out to be no faster than our Xeon, despite much more memory bandwidth. In the end we had to go with Intel to actually get faster builds.
    Now that Zen+ claims double-digit percentage improvements in cache latency over Zen, hopefully it can offset the deficit against Intel in single-thread performance (sadly still a significant part of the compile).

    • (Score: 2) by takyon on Thursday June 07 2018, @07:31PM (1 child)

      by takyon (881) on Thursday June 07 2018, @07:31PM (#690035) Journal

      Should be interesting to see what they do with the pricing. Maybe 50% more than the Threadripper 1950X ($1000) for the 32 core variant, and then when a 7nm Threadripper 3 comes out, drop it back down to $1000 for 32 cores.

      [SIG] 10/28/2017: Soylent Upgrade v14
      • (Score: 2) by bob_super on Thursday June 07 2018, @10:03PM

        by bob_super (1357) on Thursday June 07 2018, @10:03PM (#690083)

        We can't fully utilize 32 cores right now, so I'm all for the flagship going up 50% in price (for twice as many cores) if the 12- and 16-core chips, now "mid-range", drop by 25% in the process.

  • (Score: 3, Interesting) by JoeMerchant on Thursday June 07 2018, @07:40PM (3 children)

    by JoeMerchant (3937) on Thursday June 07 2018, @07:40PM (#690039)

    Intel was teasing 80+ core CPUs in 2006... I guess they're finally starting to deliver something approaching that old tease.

    🌻🌻
  • (Score: 0, Offtopic) by MikeVDS on Thursday June 07 2018, @07:57PM (1 child)

    by MikeVDS (1142) on Thursday June 07 2018, @07:57PM (#690044)

    Just a test post

    • (Score: 0) by Anonymous Coward on Friday June 08 2018, @06:28PM

      by Anonymous Coward on Friday June 08 2018, @06:28PM (#690440)

      what a nerd...

  • (Score: 5, Interesting) by DannyB on Thursday June 07 2018, @08:26PM (2 children)

    by DannyB (5839) Subscriber Badge on Thursday June 07 2018, @08:26PM (#690055) Journal

    It's taken longer than I had hoped, but there are now plenty of languages with the right features, and various frameworks, that make it much easier to take advantage of any number of cores to handle "embarrassingly parallel" problems.

    And there are plenty of embarrassingly parallel problems. Some problems can be transformed into parallel problems. Just look for long iterations over items where the processing of each item is independent from other items.

    You can also re-think algorithms.


    I was plotting millions, then tens of millions of data points. It was slow. I was doing the obvious but naive operation of drawing a dot for each data point. This meant a long loop, and invoking a graphics subsystem operation for every point. Even though the plot is being drawn off screen.

    Then I observed a phenomenon. This is like having a square-tiled wall (the pixels) and throwing paint-filled balloons at the wall (each plot point). After many plot points, older points are obscured by newer ones.

    So let's re-think. Imagine each plot point is now a dart with an infinitely small tip. The square tiles on the wall are now "pixel buckets". Each pixel bucket accumulates the average (of the original data, not the color) of the data points (dart tips) that hit the wall in that tile. Now we're throwing darts (data points) at the wall instead of paint-filled balloons. (Each pixel bucket holds a counter and an accumulated sum, and thus an average.)

    At the end, compute the color (along a gradient) for the accumulated average in each pixel. Now the number of graphical operations is to set one pixel for every pixel bucket. The number of graphical operations is tied to the number of pixels, and unrelated to the number of input data points. The entire result is:
    1. faster
    2. draws a much more finely detailed view (and don't say it is because the original dot size plotted was too big)
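    A minimal sketch of the pixel-bucket idea in Python (the grid size, coordinate convention, and names like `plot_buckets` are illustrative, not from my original Java code):

```python
# Instead of drawing every data point, accumulate a running sum and count
# per pixel, then color each pixel exactly once at the end. The number of
# graphics operations depends on the pixel count, not the data-point count.

def plot_buckets(points, width=640, height=480):
    """points: iterable of (x, y, value) with x, y in [0, 1)."""
    sums = [[0.0] * width for _ in range(height)]
    counts = [[0] * width for _ in range(height)]
    for x, y, value in points:
        px = int(x * width)          # which tile the "dart" lands in
        py = int(y * height)
        sums[py][px] += value
        counts[py][px] += 1
    # One pass over the buckets: average -> would map to a gradient color.
    return [[sums[r][c] / counts[r][c] if counts[r][c] else None
             for c in range(width)] for r in range(height)]
```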

    Now I can (and did) take this further and make it parallel. Divide the original data points into groups of "work units". When a CPU core is free, it consumes the next work unit in the queue: it creates its own 2D array of pixel buckets, iterates over the subset of data points in that work unit, and averages each point into whichever pixel bucket it lands in.

    At the end, pairs of these arrays of pixel values are smashed together. (Simply add the counters and sums together in corresponding pixel buckets.) Then on the final array, once again, determine colors and plot.

    The result is identical, but now much faster. Not a full n-cores speedup over the original, but close; there is overhead. But using 8 cores is well worth it.
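    The merge step can be sketched like this: partial (sum, count) grids combine by plain bucket-wise addition, which is why work units can be processed in any order. The merge is shown sequentially here; in the parallel version each chunk would go to a worker (e.g. via a process pool). All names and data are illustrative:

```python
# Each work unit buckets its own slice of the data independently; partial
# results then "smash together" by adding sums and counts per bucket.

def bucket(points, width, height):
    sums = [[0.0] * width for _ in range(height)]
    counts = [[0] * width for _ in range(height)]
    for x, y, v in points:
        px, py = int(x * width), int(y * height)
        sums[py][px] += v
        counts[py][px] += 1
    return sums, counts

def merge(a, b):
    """Add two partial results bucket-wise: sums with sums, counts with counts."""
    (sa, ca), (sb, cb) = a, b
    h, w = len(sa), len(sa[0])
    sums = [[sa[r][c] + sb[r][c] for c in range(w)] for r in range(h)]
    counts = [[ca[r][c] + cb[r][c] for c in range(w)] for r in range(h)]
    return sums, counts

# Sequential stand-in for the parallel version: with a process pool you
# would hand each chunk to a worker and merge the results as they arrive.
chunks = [[(0.1, 0.1, 2.0)], [(0.1, 0.1, 4.0)]]
partials = [bucket(c, 4, 4) for c in chunks]
total = partials[0]
for p in partials[1:]:
    total = merge(total, p)
```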

    My point: if you think about it, you can find opportunities to use multiple cores. Just put your mind to it. Remember there is overhead, so each work unit must be worth far more than the cost of organizing and processing it under this model.

    When trying to solve a problem don't ask who suffers from the problem, ask who profits from the problem.
    • (Score: 0) by Anonymous Coward on Friday June 08 2018, @03:07PM (1 child)

      by Anonymous Coward on Friday June 08 2018, @03:07PM (#690346)

      Why not use OpenCL to run it on a GPU?

      • (Score: 2) by DannyB on Monday June 11 2018, @02:27PM

        by DannyB (5839) Subscriber Badge on Monday June 11 2018, @02:27PM (#691395) Journal

        So many things to do, so little time. I'm sure you know the story.

        The project is written in Java. There are two projects (that I know of) that add OpenCL support to Java. I have looked into them; it's a higher bar to jump over. I might try it with a small project first. It's a matter of time and energy, but I'm interested in trying it.

        I have to write a C kernel (there are examples) and have that code available as a string (e.g., baked into the code, retrieved from a configuration file, a database, etc.). I have to think about the problem very differently to organize it for OpenCL; it is a very different programming model than conventional CPUs. Basically, OpenCL is parallelism at a far finer grain than the "work units" I described. The work done by my work units, and thus the code, can be arbitrarily complex, as long as the work units are all independent of one another. The very same code that does the work runs on a single CPU core, or on multiple cores if you have them. With OpenCL, I would have two sets of code to maintain: the OpenCL version, and at least a single-core version for when OpenCL is not available on a given runtime. (Remember, my Java program, the binary, runs on any machine, even ones not invented yet.)

        Thus, there is a philosophical issue. What I would rather see is More Cores Please: conventional cores, conventional-architecture programming. It seems that if you had several hundred cores that were more general-purpose rather than specialized for graphics, this would STILL benefit graphics, but in a much more general way.

        Let me give an example of a problem that would require serious thinking for OpenCL: a Mandelbrot set explorer. My current Mandelbrot explorer (in Java) uses arbitrary precision, so it does not "peter out" once you dive deep enough to exhaust the precision of a double (e.g., a 64-bit float); arbitrary-precision math lets you dive deeper and deeper. A Mandelbrot explorer is another embarrassingly parallel problem. Work units could even be distributed to other computers on a network; you just need to launch a JAR file on each node (and those nodes don't even have to be the same CPU architecture or OS). In a single OpenCL kernel, I would need to iterate X number of times on a pixel, using arbitrary-precision math, within the bounds of how kernels work: multiple parameters, each a buffer (an array), with different concurrent kernel instances operating on different elements of those buffers.
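        As a rough illustration of the arbitrary-precision iteration, here is a sketch using Python's decimal module as a stand-in for whatever big-number arithmetic the Java explorer uses; the function name, precision, and iteration cap are all illustrative. Each point's escape count is an independent work unit, which is what makes the problem embarrassingly parallel.

```python
# Escape-time iteration z -> z^2 + c at arbitrary precision, so zooms are
# not limited by the 15-16 significant digits of a 64-bit double.
from decimal import Decimal, getcontext

def mandel_iters(cr, ci, max_iters=100, digits=50):
    """Iterations before |z| exceeds 2 (max_iters if it never escapes)."""
    getcontext().prec = digits       # working precision in decimal digits
    zr = zi = Decimal(0)
    cr, ci = Decimal(cr), Decimal(ci)
    for n in range(max_iters):
        zr, zi = zr * zr - zi * zi + cr, 2 * zr * zi + ci
        if zr * zr + zi * zi > 4:    # |z|^2 > 4 means the point escapes
            return n
    return max_iters

# Points inside the set never escape; points far outside escape immediately.
```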

        It seems that with all the silicon we have now, maybe it's time to start building larger numbers of general-purpose cores. IMO, that would bring benefits to far more everyday applications, far more rapidly, than OpenCL.

        When trying to solve a problem don't ask who suffers from the problem, ask who profits from the problem.
  • (Score: 0) by Anonymous Coward on Thursday June 07 2018, @09:10PM

    by Anonymous Coward on Thursday June 07 2018, @09:10PM (#690071)

    How many dies fit on a 30cm wafer? What's the yield rate for 15-core dies?

  • (Score: 1, Insightful) by Anonymous Coward on Thursday June 07 2018, @10:17PM (2 children)

    by Anonymous Coward on Thursday June 07 2018, @10:17PM (#690091)

    I'd rather have a single core that was 28 times faster.