
posted by janrinok on Friday July 17 2015, @10:00AM
from the hickory-dickory-dock dept.

Intel's "Tick-Tock" strategy of micro-architectural changes followed by die shrinks has officially stalled. Although Haswell and Broadwell chips have experienced delays, and Broadwell desktop chips have been overshadowed by Skylake, delays in introducing 10nm process node chips have resulted in Intel's famously optimistic roadmap missing its targets by about a whole year. 10nm Cannonlake chips were set to begin volume production in late 2016, but are now scheduled for the second half of 2017. In its place, a third generation of 14nm chips named "Kaby Lake" will be launched. It is unclear what improvements Kaby Lake will bring over Skylake.

Intel will not be relying on the long-delayed extreme ultraviolet (EUV) lithography to make 10nm chips. The company's revenues for the last quarter were better than expected, despite the decline of the PC market. Intel's CEO revealed the stopgap 14nm generation at the Q2 2015 earnings call:

"The lithography is continuing to get more difficult as you try and scale and the number of multi-pattern steps you have to do is increasing," [Intel CEO Brian Krzanich] said, adding, "This is the longest period of time without a lithography node change."

[...] But Krzanich seemed confident that letting up on the gas, at least for now, is the right move – with the understanding that Intel will aim to get back onto its customary two-year cycle as soon as possible. "Our customers said, 'Look, we really want you to be predictable. That's as important as getting to that leading edge'," Krzanich said during Wednesday's earnings call. "We chose to actually just go ahead and insert – since nothing else had changed – insert this third wave [with Kaby Lake]. When we go from 10-nanometer to 7-nanometer, it will be another set of parameters that we'll reevaluate this."

Intel Roadmap
Year   Old   New
2014   14nm Broadwell   14nm Broadwell
2015   14nm Skylake   14nm Skylake
2016   10nm Cannonlake   14nm Kaby Lake
2017   10nm "Tock"   10nm Cannonlake
2018   N/A   10nm "Tock"


Original Submission

 
  • (Score: 2) by Zinho on Friday July 17 2015, @09:52PM

    by Zinho (759) on Friday July 17 2015, @09:52PM (#210622)

    Want to quote some citations on x86 being more power hungry than ARM?

    Sure:

    “The X-Gene-equipped units [at PayPal] cost approximately one-half the price of traditional data center infrastructure hardware and incurred only one-seventh of the annual running cost,” [source] [datacenterknowledge.com]

    There's even a Windows Server build in the works for ARM, [zdnet.com] which takes away one of the last reasons to stay with Intel in the face of reduced power requirements on ARM. Again, from that last article I linked:

    The bottom line is that a 64-bit ARM-powered microserver has a thermal design power (TDP) of between 10 and 45 watts. A conventional x86 server runs at more than 90 watts. The lower the power consumption, the lower not just the direct server utility bills, but the overall data center running costs.

    Let me put it in concrete numbers.

    A 64-bit ARM server will use no more than half the power of its x86 counterpart. ZDNet estimated that the kilowatt hour cost for commercial use per year per server in 2013 was $731.94. Multiply that by the number of servers in a data center and then divide that number by two.

    Since power consumption is often a data center's single greatest cost, that is a tremendous saving.
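
    If you want to check the arithmetic yourself, here is a rough sketch of it; the $0.10/kWh commercial rate is my own assumption, not a figure from either article:

    # Back-of-the-envelope annual electricity cost for a server running 24/7.
    # ASSUMPTION: $0.10/kWh commercial rate -- substitute your own.
    HOURS_PER_YEAR = 24 * 365  # 8760
    RATE_USD_PER_KWH = 0.10

    def annual_power_cost(watts, rate=RATE_USD_PER_KWH):
        """Yearly electricity cost of a device drawing `watts` continuously."""
        return watts / 1000.0 * HOURS_PER_YEAR * rate

    x86 = annual_power_cost(90)  # conventional x86 server, ~90 W
    arm = annual_power_cost(45)  # 64-bit ARM microserver, top of the 10-45 W range
    print(f"x86 @ 90 W: ${x86:.2f}/yr   ARM @ 45 W: ${arm:.2f}/yr")
    # Halving the per-server draw halves the power portion of the bill;
    # multiply by the server count to scale to a whole data center.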

    --
    "Space Exploration is not endless circles in low earth orbit." -Buzz Aldrin
  • (Score: 2) by dusty monkey on Saturday July 18 2015, @09:56AM

    by dusty monkey (5492) on Saturday July 18 2015, @09:56AM (#210733)

    The least you could do is check the numbers before stringing together what is clearly shit, or rather, parroting complete shit that someone else strung together. To arrive at the $713.94 figure given in the article you cited, the author cites another article that uses a draw of 850 watts per server to arrive at that cost, but that's an order of magnitude larger than the 90 watts the author of the article you cited had just mentioned for conventional x86 servers.

    The author of the article that arrives at $713.94 (Teena Hammond) is dead on, for an atypical server that draws 750 watts, such as you would find in a supercomputing cluster. But why then is the author of the article you cited (Steven J. Vaughan-Nichols) misrepresenting the $713.94 figure? It's because he is a hack author who doesn't know shit about anything, and for some reason you didn't bother to care about the veracity of it. Gullible people (such as yourself) seem to have swallowed it without any critical thought at all.
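
    For what it's worth, the figure is easy to sanity-check; assuming a ballpark $0.10/kWh commercial rate (neither article states one), the math only works out for a high-draw machine:

    # What continuous draw does a ~$714/year electricity bill imply?
    # ASSUMPTION: $0.10/kWh commercial rate.
    HOURS_PER_YEAR = 24 * 365
    RATE_USD_PER_KWH = 0.10

    def implied_watts(annual_cost_usd, rate=RATE_USD_PER_KWH):
        """Continuous draw (watts) that costs `annual_cost_usd` per year at `rate`."""
        return annual_cost_usd / rate / HOURS_PER_YEAR * 1000.0

    print(round(implied_watts(713.94)))  # ~815 W -- a 750-850 W class box, not a 90 W server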

    I can predict how and why you pushed this shit on us. You had a preconceived notion and posted the first thing you could find that seemed to support it, and you did it as fast as you could, which is why you didn't put any critical thought into it. Your actions are highly similar to those of the goddamned author of the misinformative, highly inaccurate, and misleading article that you cited.

    --
    - when you vote for the lesser of two evils, you are still voting for evil - stop supporting evil -
    • (Score: 2) by Zinho on Sunday July 19 2015, @05:09PM

      by Zinho (759) on Sunday July 19 2015, @05:09PM (#211124)

      [lots of rant]

      Wow, you really feel strongly about this. I didn't mean my careless research to be personally offensive to you.

      Here's my take on this.
      * My first web search found not only support for the idea that data centers are switching to low-power chips, but also a specific example (PayPal, although they're not alone [zdnet.com])
      * In my experience, a large fraction of a server's power use goes to the processor (disk and cooling being the next two biggest costs, not necessarily in that order)
      * High-efficiency chips, if they can meet the load, cost less to run (45W < 90W, kWh = $)

      The fact that companies are creating the product and marketing it means that it's at least not insane; the fact that PayPal is willing to admit to purchasing it means that they did the math and it works for them. I know it's more complicated than that; at full load the right comparison is computations per Watt, and you'll need more of the lower-power chips to do the same work. It's not a slam dunk in either direction.

      As much as it may offend you, microservers are a real thing. Especially for parallelizable applications, large numbers of low-power processors can be used to do real-life work [datacenterdynamics.com] in a cost-effective way. It's not for everyone, nor for every application, but there are times when it makes sense.

      --
      "Space Exploration is not endless circles in low earth orbit." -Buzz Aldrin
      • (Score: 2) by dusty monkey on Tuesday July 21 2015, @12:46AM

        by dusty monkey (5492) on Tuesday July 21 2015, @12:46AM (#211688)

        I didn't mean my careless research to be personally offensive to you.

        It's not, but it appears that being critical of what you did is offensive to you. Can't just accept that you made a mistake, that you have faults that need working on, I guess.

        As much as it may offend you, microservers are a real thing.

        No shit, Sherlock. Apparently any criticism of you means the person being critical must be completely opposite on every possible thing. Here is an idea... next time, just don't be so damned worse than worthless. You do realize that your post was worse than worthless, right? It was misinformation. You did harm.

        --
        - when you vote for the lesser of two evils, you are still voting for evil - stop supporting evil -
        • (Score: 2) by Zinho on Tuesday July 21 2015, @06:51AM

          by Zinho (759) on Tuesday July 21 2015, @06:51AM (#211787)

          . . . it appears that being critical of what you did is offensive to you. Can't just accept that you made a mistake, that you have faults that need working on, I guess.

          Actually, I'm more confused than offended at the intensity of your response to my quoting the $713.94 figure for annual processing cost. You spent an entire post decrying my lack of research on this point, when I thought that was perhaps the least consequential point in the article I linked. I'll gladly admit I was wrong about that number, because it doesn't matter to me. You're right, of course; 90 watts running full tilt for a year will only cost ~$80 depending on your price per kWh (according to my back-of-the-envelope estimate, not intended to be either specific or accurate; YMMV).
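
          In case anyone wants to see that envelope, it is just this (the per-kWh rates are assumed round numbers, which is exactly why I won't defend the precise dollar figure):

          # 90 W running 24/7 for a year, at a few assumed electricity rates.
          HOURS_PER_YEAR = 24 * 365
          for rate in (0.07, 0.10, 0.12):  # assumed $/kWh -- pick your own
              cost = 90 / 1000.0 * HOURS_PER_YEAR * rate
              print(f"${rate:.2f}/kWh -> ${cost:.2f}/yr")
          # Roughly $55-$95 per year, i.e. the "~$80 depending on your price per kWh" above.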

          Here is an idea... next time, just don't be so damned worse than worthless. You do realize that your post was worse than worthless, right? It was misinformation. You did harm.

          Again, confusion on my part. Harm? Is someone going to lose money or need hospitalization because of what I wrote? Psychic trauma caused by a throwaway value that the quoted author couldn't even be bothered to divide by two to make his point? And was there nothing else of value in my original or follow-up post that could balance that potential damage?

          I think I'm having a Poe's law moment. Either you're sincere and have a wildly different perspective than mine on the value of precise data center operating cost estimates, or you're trolling me; I can't tell the difference.

          --
          "Space Exploration is not endless circles in low earth orbit." -Buzz Aldrin