from the natural-beauty-in-clouds-and-skylakes dept.
https://www.hpcwire.com/2017/02/27/google-gets-first-dibs-new-skylake-chips/
As part of an ongoing effort to differentiate its public cloud services, Google made good this week on its intention to bring custom Xeon Skylake chips from Intel Corp. to its Google Compute Engine. The cloud provider is the first to offer the next-gen Xeons, and is getting access ahead of traditional server-makers like Dell and HPE.
Google announced plans to incorporate the next-generation Intel server chips into its public cloud last November. On Friday (Feb. 24), Urs Hölzle, Google's senior vice president for cloud infrastructure, said the Skylake upgrade would deliver a significant performance boost for demanding applications and workloads ranging from genomic research to machine learning.
The cloud vendor noted that Skylake includes Intel Advanced Vector Extensions 512 (AVX-512), which target workloads such as data analytics, engineering simulations, and scientific modeling. Compared to previous generations, the Skylake extensions are touted as doubling floating-point performance "for the heaviest calculations," Hölzle noted in a blog post.
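As a concrete illustration (not from the article or the blog post), here is a minimal C sketch of the AVX-512F intrinsics behind that claim; a 512-bit register holds 16 single-precision floats, twice as many as AVX2's 256-bit registers, which is part of where the touted doubling comes from:

```c
/* Minimal AVX-512F sketch: one 512-bit register holds 16 floats,
 * so each vector add below processes 16 elements at once
 * (versus 8 with 256-bit AVX2).
 * Build (assuming GCC on an AVX-512 machine): gcc -mavx512f avx512_add.c */
#include <immintrin.h>
#include <stdio.h>

int main(void) {
    float a[16], b[16], c[16];
    for (int i = 0; i < 16; i++) { a[i] = (float)i; b[i] = 2.0f * i; }

    __m512 va = _mm512_loadu_ps(a);    /* load 16 floats */
    __m512 vb = _mm512_loadu_ps(b);
    __m512 vc = _mm512_add_ps(va, vb); /* 16 additions in one instruction */
    _mm512_storeu_ps(c, vc);

    printf("c[15] = %.1f\n", c[15]);   /* expect 45.0 */
    return 0;
}
```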
AnandTech compared Intel's Skylake-SP chips to AMD's Epyc chips:
We can continue to talk about Intel's excellent mesh topology and AMD's strong new Zen architecture, but at the end of the day, the "how" will not matter to infrastructure professionals. Depending on your situation, performance, performance-per-watt, and/or performance-per-dollar are what matters.
The current Intel pricing draws the first line. If performance-per-dollar matters to you, AMD's EPYC pricing is very competitive for a wide range of software applications. With the exception of database software and vectorizable HPC code, AMD's EPYC 7601 ($4200) offers slightly less or slightly better performance than Intel's Xeon 8176 ($8000+). However, the real competitor is probably the Xeon 8160, which has four fewer cores (-14%) and slightly lower turbo clocks (-100 to -200 MHz). We expect that this CPU will offer about 15% lower performance, and yet it still costs about $500 more ($4700) than the best EPYC. Of course, everything will depend on the final server system price, but it looks like AMD's new EPYC will put some serious performance-per-dollar pressure on the Intel line.
The Intel chip is indeed able to scale up to 8-socket systems, but frankly that market is shrinking fast, and dual-socket buyers could not care less.
Meanwhile, although we have yet to test it, AMD's single-socket offering looks even more attractive. We estimate that a single EPYC 7551P would indeed outperform many of the dual Silver Xeon solutions. Overall, the single-socket EPYC gives you about 8 more cores at similar clock speeds than the 2P Intel setup, and AMD doesn't require explicit cross-socket communication, so the server board gets simpler and thus cheaper. For price-conscious server buyers, this is an excellent option.
However, if your software is expensive, everything changes. In that case, you care less about the heavy price tags of the Platinum Xeons. For those scenarios, Intel's Skylake-SP Xeons deliver the highest single-threaded performance (courtesy of the 3.8 GHz turbo clock), high throughput without much (hardware) tuning, and server managers get the reassurance of Intel's reliable track record. And if you use expensive HPC software, you will probably get the benefits of Intel's beefy AVX2 and/or AVX-512 implementations.
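Taking only the figures in the quoted passage (list prices, plus the review's ~15% estimate for the Xeon 8160), here is a quick back-of-the-envelope sketch in C of the performance-per-dollar arithmetic; the normalized performance values are assumptions read off the quote, not independent benchmarks:

```c
/* Back-of-the-envelope performance-per-dollar check using only the
 * figures AnandTech quotes above. Performance is normalized so the
 * EPYC 7601 = 1.00; the Xeon 8176 is treated as rough parity and the
 * Xeon 8160 as ~15% slower, per the review's estimates. */
#include <stdio.h>

int main(void) {
    struct { const char *name; double perf; double price; } cpu[] = {
        { "EPYC 7601", 1.00, 4200.0 },
        { "Xeon 8176", 1.00, 8000.0 },  /* "slightly less or slightly better" */
        { "Xeon 8160", 0.85, 4700.0 },  /* ~15% lower performance estimate  */
    };
    for (int i = 0; i < 3; i++)
        printf("%-10s  perf per $1000: %.3f\n",
               cpu[i].name, cpu[i].perf / (cpu[i].price / 1000.0));
    return 0;   /* EPYC ~0.238, 8176 ~0.125, 8160 ~0.181 */
}
```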
AMD's flagship Epyc CPU has 32 cores, while the largest Skylake-SP Xeon CPU has 28 cores.
Quoted text is from page 23, "Closing Thoughts".
[Ed. note: Article is multiple pages with no single page version in sight.]
Previously: Google Gets its Hands on Skylake-Based Intel Xeons
Intel Announces 4 to 18-Core Skylake-X CPUs
AMD Epyc 7000-Series Launched With Up to 32 Cores
Intel's Skylake and Kaby Lake CPUs Have Nasty Microcode Bug
AVX-512: A "Hidden Gem"?
(Score: 3, Interesting) by Unixnut on Tuesday February 28 2017, @05:23PM
"As part of an ongoing effort to differentiate its public cloud services, Google made good this week on its intention to bring custom Xeon Skylake chips from Intel Corp. to its Google Compute Engine. The cloud provider is the first to offer the next-gen Xeons, and is getting access ahead of traditional server-makers like Dell and HPE."
It isn't uncommon. The financial sector (especially high-frequency trading shops and hedge funds) will usually pay over the odds for pre-release hardware. The "big consumer PC" companies get it last, when the bugs are ironed out and prices have dropped to something not mind-bendingly expensive.
Once, when I bought an older-gen Intel CPU from a financial company that was liquidating old stock, it didn't have a model number/name or anything else burnt in. It showed up on the boot screen as "Unknown Intel" "Unknown Model 4.2GHz". Most likely the company had paid a high price to Intel back then to get a CPU so new it didn't even have an official model number baked into its silicon yet.
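For the curious, the name the firmware prints at boot comes from the CPUID brand string stored in the chip (leaves 0x80000002 through 0x80000004); on a pre-release part those leaves can plausibly hold a placeholder, hence the "Unknown" label. A minimal sketch in C for reading it yourself, assuming GCC or Clang on x86:

```c
/* Read the 48-byte CPUID brand string that firmware displays at boot.
 * Uses the GCC/Clang helper __get_cpuid() from <cpuid.h>. */
#include <cpuid.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    unsigned int regs[12]; /* 3 leaves x 4 registers = 48 bytes */
    char brand[49];

    for (unsigned int leaf = 0; leaf < 3; leaf++) {
        if (!__get_cpuid(0x80000002 + leaf,
                         &regs[leaf * 4 + 0], &regs[leaf * 4 + 1],
                         &regs[leaf * 4 + 2], &regs[leaf * 4 + 3])) {
            fprintf(stderr, "brand string leaves not supported\n");
            return 1;
        }
    }
    memcpy(brand, regs, 48);
    brand[48] = '\0';
    printf("CPU brand string: %s\n", brand);
    return 0;
}
```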
Likewise, I had the privilege of using 10-core Intel CPUs that weren't yet released to the public (I think it was back in 2014) in an HPC setup for a finance company. I had to sign an NDA saying that, until the public release, I could not talk about the CPU at all, nor disclose any performance specs/metrics.
(Score: 0) by Anonymous Coward on Tuesday February 28 2017, @05:52PM (3 children)
To be honest, I haven't experienced much difference in CPU speed for about 10 years. I had a little Core 2 Duo on my desktop that I did some number crunching on; then the guy in the next office blew $10k on a 32-core, 64 GB RAM machine (this was when that was a lot of RAM). I begged him to give me a login to run some stuff, and he did... but it was pretty disappointing. More or less the same as my beater machine.
I'm kind of intrigued by the "double performance" claim for intensive tasks, but I won't hold my breath. In any case, GPUs "literally" (not literally) blow CPUs out of the water for most tasks. Instant 10X faster... and I'm saving up my pennies for a Pascal card, which should be another 2X. For my uses (mainly research / testing) I used to think chasing speed was unimportant, since you can always run something overnight... but it is immensely productive to be able to work interactively rather than planning / executing / analyzing experiments separately.
(Score: 0) by Anonymous Coward on Tuesday February 28 2017, @06:07PM (2 children)
Your old machine already had enough juice to play your porn collection.
(Score: 2) by DannyB on Tuesday February 28 2017, @09:41PM (1 child)
It is safer to use a VM rather than the bare thing.
Especially if the VM can easily be reset back to its initial hard drive state for each session.
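As a concrete (hypothetical) sketch of that reset-per-session idea, here is what the revert looks like with the libvirt C API; the guest name "sandbox" and snapshot name "clean" are assumptions, not anything from the comment:

```c
/* Revert a QEMU/KVM guest to a pristine snapshot before each session.
 * Assumes a guest named "sandbox" with a snapshot named "clean"
 * (both hypothetical). Build: gcc revert.c -lvirt */
#include <libvirt/libvirt.h>
#include <stdio.h>

int main(void) {
    virConnectPtr conn = virConnectOpen("qemu:///system");
    if (!conn) { fprintf(stderr, "cannot reach hypervisor\n"); return 1; }

    virDomainPtr dom = virDomainLookupByName(conn, "sandbox");
    if (!dom) { fprintf(stderr, "no such VM\n"); virConnectClose(conn); return 1; }

    /* Look up the pristine snapshot and roll the disk state back to it. */
    virDomainSnapshotPtr snap = virDomainSnapshotLookupByName(dom, "clean", 0);
    if (snap && virDomainRevertToSnapshot(snap, 0) == 0)
        printf("VM reverted to its clean state\n");
    else
        fprintf(stderr, "revert failed\n");

    if (snap) virDomainSnapshotFree(snap);
    virDomainFree(dom);
    virConnectClose(conn);
    return 0;
}
```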
(Score: 0) by Anonymous Coward on Wednesday March 01 2017, @12:30AM
When it comes to video or audio, I can really detect a difference when it's not touching the bare metal. I don't like VMs either.
(Score: 0) by Anonymous Coward on Tuesday February 28 2017, @05:57PM (1 child)
I am not sure why I should care to read about them reselling indirect access to a CPU.
My stuff won't go faster if they put more seats on the plane, despite whatever efficiency is gained. Or is that the wrong metaphor? Anyway, it benefits them in how many customers they can put on the same box.
Any speed increase noted per customer is just because they haven't finished cramming more stuff onto the machine.
(Score: 2) by tibman on Wednesday March 01 2017, @02:22PM
I think it's more of a PR thing for Intel.