
posted by CoolHand on Tuesday February 28 2017, @02:43PM
from the natural-beauty-in-clouds-and-skylakes dept.

https://www.hpcwire.com/2017/02/27/google-gets-first-dibs-new-skylake-chips/

As part of an ongoing effort to differentiate its public cloud services, Google made good this week on its intention to bring custom Xeon Skylake chips from Intel Corp. to its Google Compute Engine. The cloud provider is the first to offer the next-gen Xeons, and is getting access ahead of traditional server-makers like Dell and HPE.

Google announced plans to incorporate the next-generation Intel server chips into its public cloud last November. On Friday (Feb. 24), Urs Hölzle, Google's senior vice president for cloud infrastructure, said the Skylake upgrade would deliver a significant performance boost for demanding applications and workloads ranging from genomic research to machine learning.

The cloud vendor noted that Skylake includes Intel Advanced Vector Extensions (AVX-512), which target workloads such as data analytics, engineering simulations and scientific modeling. Compared with previous generations, the extensions are touted as doubling floating-point performance "for the heaviest calculations," Hölzle wrote in a blog post.
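
The mechanics behind that claim are straightforward: AVX-512 widens the vector registers to 512 bits, so a single fused multiply-add instruction touches 16 single-precision floats where 256-bit AVX2 touches 8. Below is a minimal sketch of such a kernel; it is illustrative only (the saxpy example and function name are not from the article), assuming a C compiler with AVX-512F support.

    /* Sketch: y[i] = a*x[i] + y[i] with AVX-512 fused multiply-add.
     * Illustrative, not Google's code. Build: gcc -O2 -mavx512f saxpy.c */
    #include <immintrin.h>
    #include <stdio.h>

    /* n must be a multiple of 16 for this simplified loop */
    static void saxpy_avx512(float a, const float *x, float *y, int n)
    {
        __m512 va = _mm512_set1_ps(a);          /* broadcast a to all 16 lanes */
        for (int i = 0; i < n; i += 16) {
            __m512 vx = _mm512_loadu_ps(x + i); /* load 16 floats of x */
            __m512 vy = _mm512_loadu_ps(y + i); /* load 16 floats of y */
            /* one instruction: vy = va*vx + vy across all 16 lanes */
            _mm512_storeu_ps(y + i, _mm512_fmadd_ps(va, vx, vy));
        }
    }

    int main(void)
    {
        float x[16], y[16];
        for (int i = 0; i < 16; i++) { x[i] = (float)i; y[i] = 1.0f; }
        saxpy_avx512(2.0f, x, y, 16);
        printf("y[15] = %.1f\n", y[15]);        /* expect 31.0 */
        return 0;
    }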


Original Submission

 
  • (Score: 3, Interesting) by Unixnut on Tuesday February 28 2017, @05:23PM

    by Unixnut (5779) on Tuesday February 28 2017, @05:23PM (#472886)

    "As part of an ongoing effort to differentiate its public cloud services, Google made good this week on its intention to bring custom Xeon Skylake chips from Intel Corp. to its Google Compute Engine. The cloud provider is the first to offer the next-gen Xeons, and is getting access ahead of traditional server-makers like Dell and HPE."

    It isn't uncommon. The financial sector (especially high-frequency trading shops and hedge funds) will usually pay over the odds for pre-release hardware. The big consumer PC companies get it last, once the bugs are ironed out and prices have dropped to something not mind-bendingly expensive.

    I once bought an older-gen Intel CPU from a financial company that was liquidating old stock, and it didn't have a model number/name or anything else burnt in. It showed up on the boot screen as "Unknown Intel" "Unknown Model 4.2GHz". Most likely the company had paid Intel a premium to get a CPU so new it didn't even have an official model number baked into its silicon yet.
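
    (For the curious: the name a boot screen prints comes from a 48-byte brand string programmed into the chip and read back via CPUID leaves 0x80000002-0x80000004; on engineering samples that string, or the BIOS's model lookup, may not resolve to a real name, hence "Unknown". A minimal sketch of reading it, assuming x86 and GCC/Clang's <cpuid.h>:)

        /* Sketch: read the CPUID processor brand string (x86, GCC/Clang). */
        #include <cpuid.h>
        #include <stdio.h>
        #include <string.h>

        int main(void)
        {
            unsigned int regs[12];       /* 3 leaves x EAX..EDX = 48 bytes */
            char brand[49] = {0};

            for (unsigned int i = 0; i < 3; i++) {
                /* leaves 0x80000002..4 each return 16 ASCII bytes */
                if (!__get_cpuid(0x80000002 + i, &regs[4*i], &regs[4*i+1],
                                 &regs[4*i+2], &regs[4*i+3]))
                    return 1;            /* extended leaves unsupported */
            }
            memcpy(brand, regs, 48);
            printf("%s\n", brand[0] ? brand : "(no brand string programmed)");
            return 0;
        }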

    Likewise, I had the privilege of using 10-core Intel CPUs that weren't yet released to the public (I think it was back in 2014) in an HPC setup for a finance company. I had to sign an NDA saying that, until the chips were released to the public, I could not talk about the CPU at all, nor disclose any performance specs/metrics.

  • (Score: 0) by Anonymous Coward on Tuesday February 28 2017, @05:52PM (3 children)

    by Anonymous Coward on Tuesday February 28 2017, @05:52PM (#472906)

    To be honest, I haven't noticed much difference in CPU speed for about 10 years. I had a little Core 2 Duo on my desktop that I did some number crunching on; then the guy in the next office blew $10k on a 32-core machine with 64 GB of RAM (back when that was a lot of RAM). I begged him to give me a login to run some stuff and he did... but it was pretty disappointing. More or less the same as my beater machine.

    I'm kind of intrigued by the "double performance" claim for intensive tasks, but I won't hold my breath. In any case, GPUs "literally" (not literally) blow CPUs out of the water for most tasks: an instant 10x speedup... and I'm saving up my pennies for a Pascal card, which should be another 2x. For my uses (mainly research/testing) I used to think chasing speed was unimportant, since you can always run something overnight... but it is immensely productive to be able to work interactively rather than planning, executing and analyzing experiments separately.

    • (Score: 0) by Anonymous Coward on Tuesday February 28 2017, @06:07PM (2 children)

      by Anonymous Coward on Tuesday February 28 2017, @06:07PM (#472924)

      Your old machine already had enough juice to play your porn collection.

      • (Score: 2) by DannyB on Tuesday February 28 2017, @09:41PM (1 child)

        by DannyB (5839) Subscriber Badge on Tuesday February 28 2017, @09:41PM (#473051) Journal

        It is safer to use a VM rather than the bare thing.

        Especially if the VM can easily be reset back to its initial hard drive state for each session.

        --
        People today are educated enough to repeat what they are taught but not to question what they are taught.
        • (Score: 0) by Anonymous Coward on Wednesday March 01 2017, @12:30AM

          by Anonymous Coward on Wednesday March 01 2017, @12:30AM (#473144)

          When it comes to video or audio, I can really detect a difference when it's not touching the bare metal. I don't like VMs either.

  • (Score: 0) by Anonymous Coward on Tuesday February 28 2017, @05:57PM (1 child)

    by Anonymous Coward on Tuesday February 28 2017, @05:57PM (#472911)

    I'm not sure why I should care to read about them reselling indirect access to a CPU.

    My stuff won't go faster if they put more seats on the plane, whatever efficiency is gained. Or is that the wrong metaphor? Anyway, it benefits them: they can put more customers on the same box.

    Any per-customer speed increase is just because they haven't finished cramming more stuff onto the machine.

    • (Score: 2) by tibman on Wednesday March 01 2017, @02:22PM

      by tibman (134) Subscriber Badge on Wednesday March 01 2017, @02:22PM (#473305)

      I think it's more of a PR thing for Intel.

      --
      SN won't survive on lurkers alone. Write comments.