Intel's "Tick-Tock" strategy of micro-architectural changes followed by die shrinks has officially stalled. Although Haswell and Broadwell chips have experienced delays, and Broadwell desktop chips have been overshadowed by Skylake, delays in introducing 10nm process node chips have resulted in Intel's famously optimistic roadmap missing its targets by about a whole year. 10nm Cannonlake chips were set to begin volume production in late 2016, but are now scheduled for the second half of 2017. In its place, a third generation of 14nm chips named "Kaby Lake" will be launched. It is unclear what improvements Kaby Lake will bring over Skylake.
Intel will not be relying on the long-delayed extreme ultraviolet (EUV) lithography to make 10nm chips. The company's revenues for the last quarter were better than expected, despite the decline of the PC market. Intel's CEO revealed the stopgap 14nm generation at the Q2 2015 earnings call:
"The lithography is continuing to get more difficult as you try and scale and the number of multi-pattern steps you have to do is increasing," [Intel CEO Brian Krzanich] said, adding, "This is the longest period of time without a lithography node change."
[...] But Krzanich seemed confident that letting up on the gas, at least for now, is the right move – with the understanding that Intel will aim to get back onto its customary two-year cycle as soon as possible. "Our customers said, 'Look, we really want you to be predictable. That's as important as getting to that leading edge'," Krzanich said during Wednesday's earnings call. "We chose to actually just go ahead and insert – since nothing else had changed – insert this third wave [with Kaby Lake]. When we go from 10-nanometer to 7-nanometer, it will be another set of parameters that we'll reevaluate this."
Intel Roadmap

Year | Old Roadmap | New Roadmap
---|---|---
2014 | 14nm Broadwell | 14nm Broadwell
2015 | 14nm Skylake | 14nm Skylake
2016 | 10nm Cannonlake | 14nm Kaby Lake
2017 | 10nm "Tock" | 10nm Cannonlake
2018 | N/A | 10nm "Tock"
Related Stories
Moore's Law is named for Gordon Moore, co-founder of Intel Corporation, who in a 1965 paper famously observed that component densities on integrated circuits double every twelve months. He amended his observation in 1975 to a doubling every 24 months. Since then, the chip industry has largely borne out Moore's observation/prediction. However, there are still those who claim that Moore's Law is dying, just as many have done before.
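To see how different the two cadences are, here is a minimal sketch of the doubling curve (the function and the starting density of 10 million transistors per square millimetre are illustrative assumptions, not figures from Moore's papers):

```python
# Minimal sketch of Moore's observation: density doubles once every
# `doubling_period_years`. Starting density is an illustrative assumption.
def projected_density(base_density: float, years_elapsed: float,
                      doubling_period_years: float = 2.0) -> float:
    return base_density * 2 ** (years_elapsed / doubling_period_years)

# Ten years out: 32x on the revised 24-month cadence...
print(projected_density(10e6, 10))       # 320000000.0 (32x)
# ...but 1024x on the original 12-month cadence from the 1965 paper.
print(projected_density(10e6, 10, 1.0))  # 10240000000.0 (1024x)
```

The gap between 32x and 1024x over a single decade is why Moore himself relaxed the cadence in 1975.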
However, Peter Bright over at Ars Technica notes a change in focus for the chip industry away from chasing Moore's Law. From the article:
Gordon Moore's observation was not driven by any particular scientific or engineering necessity. It was a reflection on just how things happened to turn out. The silicon chip industry took note and started using it not merely as a descriptive, predictive observation, but as a prescriptive, positive law: a target that the entire industry should hit.
Apparently, the industry isn't going to keep trying to hit that particular target moving forward, as we've seen with the recent delay of Intel's 10nm Cannonlake chips. This is for several reasons:
In the 2000s, it was clear that this geometric scaling was at an end, but various technical measures were devised to keep pace with the Moore's law curves. At 90nm, strained silicon was introduced; at 45nm, new materials to increase the capacitance of each transistor layered on the silicon were introduced. At 22nm, tri-gate transistors maintained the scaling.
But even these new techniques were up against a wall. The photolithography process used to transfer the chip patterns to the silicon wafer has been under considerable pressure: currently, light with a 193 nanometre wavelength is used to create chips with features just 14 nanometres across. The oversized light wavelength is not insurmountable but adds extra complexity and cost to the manufacturing process. It has long been hoped that extreme UV (EUV), with a 13.5nm wavelength, will ease this constraint, but production-ready EUV technology has proven difficult to engineer.
Even with EUV, it's unclear just how much further scaling is even possible; at 2nm, transistors would be just 10 atoms wide, and it's unlikely that they'd operate reliably at such a small scale. Even if these problems were resolved, the specter of power usage and dissipation looms large: as the transistors are packed ever tighter, dissipating the energy that they use becomes ever harder.
The new techniques, such as strained silicon and tri-gate transistors, took more than a decade to put into production. EUV has been talked about for longer still. There's also a significant cost factor. There's a kind of undesired counterpart to Moore's law, Rock's law, which observes that the cost of a chip fabrication plant doubles every 4 years. Technology may provide ways to further increase the number of transistors packed into a chip, but the manufacturing facilities to build these chips may be prohibitively expensive—a situation compounded by the growing use of smaller, cheaper processors.
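Rock's law compounds quickly. Here is a back-of-the-envelope sketch of that cost curve (the $3 billion starting fab cost is an illustrative assumption; only the 4-year doubling period comes from the excerpt above):

```python
# Sketch of Rock's law: fab cost doubles every 4 years. The $3B starting
# cost is an illustrative assumption; the 4-year period is from the text.
def fab_cost(base_cost_usd: float, years_elapsed: float) -> float:
    return base_cost_usd * 2 ** (years_elapsed / 4.0)

for years in (0, 4, 8, 12, 16):
    print(f"year {years:2d}: ${fab_cost(3e9, years) / 1e9:.0f}B")
# year  0: $3B, year  4: $6B, ... year 16: $48B. Transistor counts may
# keep rising, but the plants that make them get exponentially pricier.
```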
The article goes on to discuss how the industry will focus moving forward:
[More]
Intel may finally be abandoning its "Tick-Tock" strategy:
As reported at The Motley Fool, Intel's latest 10-K / annual report filing would seem to suggest that the 'Tick-Tock' strategy of introducing a new lithographic process node in one product cycle (a 'tick') and then an upgraded microarchitecture the next product cycle (a 'tock') is going to fall by the wayside for the next two lithographic nodes at a minimum, to be replaced with a three element cycle known as 'Process-Architecture-Optimization'.
Intel's Tick-Tock strategy has been the bedrock of their microprocessor dominance for the last decade. Throughout that tenure, every other year Intel would upgrade their fabrication plants to produce processors with a smaller feature size, improving die area and power consumption alongside slight optimizations of the microarchitecture, and in the years between upgrades would launch a new set of processors based on a wholly new (sometimes paradigm-shifting) microarchitecture for large performance upgrades. However, due to the difficulty of implementing a 'tick' at ever-decreasing process node sizes and the complexity therein, as reported previously with 14nm and the introduction of Kaby Lake, Intel's latest filing would suggest that 10nm will follow a similar pattern as 14nm, introducing a third stage to the cadence.
Year | Process | Name | Type
---|---|---|---
2016 | 14nm | Kaby Lake | Optimization |
2017 | 10nm | Cannonlake | Process |
2018 | 10nm | Ice Lake | Architecture |
2019 | 10nm | Tiger Lake | Optimization |
2020 | 7nm | ??? | Process |
This suggests that 10nm "Cannonlake" chips will be released in 2017, followed by a new 10nm architecture in 2018 (tentatively named "Ice Lake"), an optimization in 2019 (tentatively named "Tiger Lake"), and 7nm chips in 2020. This year's "optimization" will come in the form of "Kaby Lake", which could end up delivering underwhelming improvements, such as slightly higher clock speeds due to higher yields of the previously-named "Skylake" chips. To be fair, Kaby Lake will supposedly add the following features alongside any CPU performance tweaks:
Kaby Lake will add native USB 3.1 support, whereas Skylake motherboards require a third-party add-on chip in order to provide USB 3.1 ports. It will also feature a new graphics architecture to improve performance in 3D graphics and 4K video playback. Kaby Lake will add native HDCP 2.2 support. Kaby Lake will add full fixed function HEVC Main10/10-bit and VP9 10-bit hardware decoding.
Previously: Intel's "Tick-Tock" Strategy Stalls, 10nm Chips Delayed
Intel Tock-Ticks Chipsets Back to 22nm
We've confirmed through multiple sources that Intel is fabbing its new H310C chipset on its 22nm process. That means the chip-making giant has taken a step back to an older process for the H310C chipset as it struggles with its ongoing shortage of 14nm processors. Contrary to reports in recent weeks, our sources confirmed that Intel, not TSMC, manufactures these chips, though that could change in the future.
The shift in Intel's strategy comes as the company struggles with the fallout from its chronically delayed 10nm process. Now the company is dealing with an increasingly loud chorus of reports that Intel's 14nm shortage is now impacting its server, desktop and mobile chips.
[...] Intel typically produces chipsets on a larger node than its current-gen processors, but the delayed 10nm production has found both chipsets and chips on the same 14nm node, creating a manufacturing bottleneck as the company experiences record demand for 14nm processors.
Related: Intel's "Tick-Tock" Strategy Stalls, 10nm Chips Delayed
Intel's First 8th Generation Processors Are Just Updated 7th Generation Chips
Intel Delays Mass Production Of 10 nm CPUs To 2019
Intel Issues Update on 14nm Shortage, Invests $1B Into Fab Sites (Update)
Intel's CFO and interim CEO Bob Swan penned an open letter to its customers and partners today outlining the steps it is taking to address a persistent and worsening shortage of 14nm processors.
[...] The shortage impacts nearly every aspect of Intel's business, from desktops to laptops, servers and even chipsets, so Intel is making the sound business decision to prioritize high-margin products. The firm has also expanded its testing capacity by diverting some work to a facility in Vietnam.
[...] Intel's statement also assures us that processors built on its 10nm fabrication will arrive in volume in 2019. Intel had previously stated that 10nm processors would be available in 2019, but hadn't made the distinction that they would arrive in volume. That's a positive sign, as the oft-delayed 10nm production is surely a contributing factor to the shortage. Intel also cites the booming desktop PC market, which has outstripped the company's original estimates earlier this year, as a catalyst.
In either case, Intel concedes that "supply is undoubtedly tight, particularly in the entry-level of the PC market" but doesn't provide a firm timeline for when the processors will become fully available. Intel's letter also touts its $1 billion investment in 14nm fabs this year, but half of that capital expenditure was scheduled prior to its first public acknowledgement of the shortage. Given Intel's foresight into the production challenges, the prior $500 million investment was likely in response to the increases in demand and looming production shortfall.
Previously: Intel Migrates New Chipsets to "22nm" Node From "14nm"
Related: Intel's "Tick-Tock" Strategy Stalls, 10nm Chips Delayed
Intel's First 8th Generation Processors Are Just Updated 7th Generation Chips
Intel Delays Mass Production Of 10 nm CPUs To 2019
Report: Intel is cancelling its 10nm process. Intel: No, we're not
Earlier today, it was reported that Intel is cancelling its troublesome 10nm manufacturing process. In an unusual response, the company has tweeted an official denial of the claims.
[...] The company's most recent estimate is that 10nm will go into volume production in the second half of 2019. The report from SemiAccurate cites internal sources saying that this isn't going to happen: while there may be a few 10nm chips, for the most part Intel is going to skip to its 7nm process.
Typically, Intel doesn't respond to rumors, but this one appears to be an exception. The company is tweeting that it's making "good progress" on 10nm and that yields are improving consistent with the guidance the company provided on its last earnings report. Intel's next earnings report is on Thursday, and we're likely to hear more about 10nm's progress then.
Also at Tom's Hardware and The Verge.
Related: Intel's "Tick-Tock" Strategy Stalls, 10nm Chips Delayed (it has been over 3 years since this article was posted)
Moore's Law: Not Dead? Intel Says its 10nm Chips Will Beat Samsung's
Intel's First 8th Generation Processors Are Just Updated 7th Generation Chips
Intel Releases Open Letter in Attempt to Address Shortage of "14nm" Processors and "10nm" Delays
Intel's Senior Vice President Jim Keller (who previously helped to design AMD's K8 and Zen microarchitectures) gave a talk at the Silicon 100 Summit that promised continued pursuit of transistor scaling gains, including a roughly 50x increase in gate density:
Intel's New Chip Wizard Has a Plan to Bring Back the Magic (archive)
In 2016, a biennial report that had long served as an industry-wide pledge to sustain Moore's law gave up and switched to other ways of defining progress. Analysts and media—even some semiconductor CEOs—have written Moore's law's obituary in countless ways. Keller doesn't agree. "The working title for this talk was 'Moore's law is not dead but if you think so you're stupid,'" he said Sunday. He asserted that Intel can keep it going and supply tech companies ever more computing power. His argument rests in part on redefining Moore's law.
[...] Keller also said that Intel would need to try other tactics, such as building vertically, layering transistors or chips on top of each other. He claimed this approach will keep power consumption down by shortening the distance between different parts of a chip. Keller said that, using nanowires and stacking, his team had mapped a path to packing transistors 50 times more densely than possible with Intel's 10 nanometer generation of technology. "That's basically already working," he said.
The ~50x gate density claim combines ~3x density from additional pitch scaling (from "10nm"), ~2x from nanowires, another ~2x from stacked nanowires, ~2x from wafer-to-wafer stacking, and ~2x from die-to-wafer stacking.
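Multiplying the reported factors together shows where "roughly 50x" comes from:

```python
# The individual scaling factors cited for the ~50x gate density claim.
from math import prod

factors = {
    "pitch scaling beyond 10nm": 3.0,
    "nanowires": 2.0,
    "stacked nanowires": 2.0,
    "wafer-to-wafer stacking": 2.0,
    "die-to-wafer stacking": 2.0,
}
print(prod(factors.values()))  # 48.0, i.e. "roughly 50x"
```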
Related: Intel's "Tick-Tock" Strategy Stalls, 10nm Chips Delayed
Intel's "Tick-Tock" is Now More Like "Process-Architecture-Optimization"
Moore's Law: Not Dead? Intel Says its 10nm Chips Will Beat Samsung's
Another Step Toward the End of Moore's Law
Intel says it was too aggressive pursuing 10nm, will have 7nm chips in 2021
[Intel's CEO Bob] Swan made a public appearance at Fortune's Brainstorm Tech conference in Aspen, Colorado, on Tuesday and explained to the audience in attendance that Intel essentially set the bar too high for itself in pursuing 10nm. More specifically, he pointed to Intel's overly "aggressive goal" of going after a 2.7x transistor density improvement over 14nm.
[...] Needless to say, the 10nm delays have caused Intel to fall well behind that transistor density doubling. Many have proclaimed Moore's Law as dead, but as far as Swan is concerned, Moore's Law is not dead. It apparently just needed to undergo an unexpected surgery.
"The challenges of being late on this latest [10nm] node of Moore's Law was somewhat a function of what we've been able to do in the past, which in essence was define the odds on scaling the infrastructure," Swan explains. Bumping up to a 2.7x scaling factor proved to be "very complicated," more so than Intel anticipated. He also says that Intel erred when it "prioritized performance at a time when predictability was really important."
"The short story is we learned from it, we'll get our 10nm node out this year. Our 7nm node will be out in two years and it will be a 2.0X scaling so back to the historical Moore's Law curve," Swan added.
Also at Fortune and Tom's Hardware.
Related:
Intel's "Tick-Tock" Strategy Stalls, 10nm Chips Delayed
Intel's "Tick-Tock" is Now More Like "Process-Architecture-Optimization"
Moore's Law: Not Dead? Intel Says its 10nm Chips Will Beat Samsung's
Intel Releases Open Letter in Attempt to Address Shortage of "14nm" Processors and "10nm" Delays
Intel Says "7nm" Node Using Extreme Ultraviolet Lithography is on Track
Intel Promises "10nm" Chips by the End of 2019, and More
Intel Launches Coffee Lake Refresh, Roadmap Leaks Showing No "10nm" Desktop Parts Until 2022
Intel Shares "10nm" Ice Lake Processor Details
HP Boss: Intel Shortages are Steering Our Suited Customers to Buy AMD
Intel's Jim Keller Promises That "Moore's Law" is Not Dead, Outlines 50x Improvement Plan
(Score: 0) by Anonymous Coward on Friday July 17 2015, @10:08AM
This delay may give them more time to catch up in the die shrinking game. This is good news for competition.
(Score: 0) by Anonymous Coward on Friday July 17 2015, @10:19AM
AMD's projected new chips won't be out for another year or two anyway, meaning they will be out just in time to be behind the curve yet again. And last I checked, none of them were expected to be on a new process (i.e. still 28nm... 3 years later?)
(Score: 1, Insightful) by Anonymous Coward on Friday July 17 2015, @10:54AM
Which is probably a big factor in why Intel chose to delay. Why push yourself if no one else is pushing either?
(Score: 4, Interesting) by takyon on Friday July 17 2015, @11:45AM
Zen is scheduled to come out next year, deliver a massive 40% IPC improvement [anandtech.com] (feasible because of Bulldozer's bad architecture), and will apparently be skipping 20nm and going directly to 14nm [wccftech.com]. Some of its APUs may come with high bandwidth memory 2.0 next year.
On GPUs, AMD is much more competitive with NVIDIA than it is with Intel on CPUs. Both AMD and NVIDIA will use high bandwidth memory 2.0 next year, and will probably boost HBM DRAM past 4 GB on most cards. Both companies may skip to 14nm. There have been reports that AMD will quickly release something on 20nm during 2015-2016.
[SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
(Score: 3, Interesting) by dusty monkey on Friday July 17 2015, @02:02PM
The process sizes of all foundries have been lies for years now. For instance, the smallest gate lengths in Intel's 22nm tri-gate process (Ivy Bridge) were actually 25nm, and that's on the entire chip. Not a single gate length less than 25nm in its "22nm" chips.
How is it 22nm then? It isn't. Pure and simple. It's all marketing babble now.
AMD is still stuck using GloFo's finfets, and even when GloFo starts pushing out their so-called 14nm finfets in 2016, based on some cross-licensed Samsung technology, the smallest gate length on the chips will be 20nm. I am guessing that Intel is still stuck near ~20nm gate lengths also.
- when you vote for the lesser of two evils, you are still voting for evil - stop supporting evil -
(Score: 2) by takyon on Friday July 17 2015, @07:31PM
It may be misleading and each manufacturer may have their own definition of 32/28/22/20/16/14/10/7 nm, but it doesn't matter much.
What matters is that a.) it is harder to make components smaller, b.) they are making components smaller, and c.) there are benefits (cost, performance, power consumption, size) to making it smaller.
When IBM announced the 7nm demo, they emphasized that the FinFETs were stacked at a pitch of less than 30nm, compared to 42nm for Broadwell.
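Pitch numbers translate to density roughly as the inverse square, assuming both layout dimensions shrink in proportion (a simplification that ignores gate pitch, metal pitch, and cell height):

```python
# Rough density gain from fin pitch alone; treat as a sketch, since real
# density also depends on gate pitch, metal pitch, and cell height.
broadwell_fin_pitch_nm = 42
ibm_7nm_fin_pitch_nm = 30
print((broadwell_fin_pitch_nm / ibm_7nm_fin_pitch_nm) ** 2)  # ~1.96, about 2x
```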
[SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
(Score: 3, Interesting) by Gravis on Friday July 17 2015, @10:47AM
despite going to increasingly smaller lithography, intel is sticking to their x86 game plan, which consistently results in high power consumption. there is a clear reason that all but a few smartphones use ARM: lower power consumption. intel even tried to scale down x86 to work on a smartphone, and they had to add a 3000mAh battery just to get competitive battery life. AMD seems to be seeing the writing on the wall, because they have invested in making an ARMv8 chip for servers.
frankly i've found intel's scheme of constant change to be bothersome because it means that unless you upgrade inside of two years, you have to buy a new motherboard if you want to upgrade your processor.
(Score: 2, Informative) by Anonymous Coward on Friday July 17 2015, @11:14AM
Intel was in the motherboard business for a long time, so it was in their best interests to force you towards an upgrade. Whereas AMD provided you a CPU upgrade path, to give new life to your older machine.
(Score: 2, Insightful) by Anonymous Coward on Friday July 17 2015, @12:10PM
Whereas AMD provided you a CPU upgrade path, to give new life to your older machine.
Yup.
I just recently upgraded my old Athlon 64 X2 to a 1055t. Going secondary market netted me a deal that just can't be beat with older Intel hardware.
It's not the latest and greatest, but the fact that you can hold off on upgrades with AMD processors is amazing. With AMD, your motherboard could theoretically last 7-10 years with minor component upgrades every now and then.
(Score: 3, Insightful) by theluggage on Friday July 17 2015, @12:49PM
unless you upgrade inside of two years, you have to buy a new motherboard if you want to upgrade your processor.
But how important, really, is being able to put a new processor in a >3 year old motherboard?
Back in the good old/bad old days (particularly when on-chip caches were taking off), a new processor might have offered you a 50% increase in clock speed and a dramatic performance boost. These days, improvements in raw performance are much more incremental, and many of the benefits of a new processor will be support for new versions of DDR RAM, SATA, USB, PCIe, Thunderbolt, NewWonderPort(tm), and (if you use integrated graphics) DisplayPort, HDMI, etc. which are useless without motherboard/chipset support.
The other main area of progress is power consumption & thermal performance which, again, are of limited value without a case/cooling rethink, and are more applicable to laptops etc. which aren't usually CPU-upgrade-friendly.
(Score: 3, Informative) by takyon on Friday July 17 2015, @02:38PM
There is one thing Intel could do to raise performance: boost the core counts for its mainstream chips.
AMD promises to improve IPC of Zen by 40%, compared to 5-10% for the latest Intel launches. They are also switching from clustered multithreading (Bulldozer) to simultaneous multithreading (Zen). So each Zen core will be a "real" core now rather than half of a "module". There are even rumors of a 16-core, 32 thread mainstream Zen chip [digitaltrends.com].
An 8-core Zen chip could suddenly become a real challenge to Intel's i7 quad-core processors next year. Intel's enthusiast processors have come with 6 cores for several generations, and Haswell-E includes an 8-core variant. Mainstream desktop/mobile chips have been stuck at 4 cores. That might change now that games, web browsers, and utilities are increasingly multithreaded.
[SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
(Score: 2) by tibman on Friday July 17 2015, @02:48PM
Might as well solder the CPU onto the motherboard with that attitude : P Not all computers are built with high-end parts. As high-end parts become middle and low you can continue to upgrade your machine for cheap.
SN won't survive on lurkers alone. Write comments.
(Score: 2) by theluggage on Friday July 17 2015, @08:11PM
Not all computers are built with high-end parts. As high-end parts become middle and low you can continue to upgrade your machine for cheap.
But that's what eBay is for. If you've got a dual 'Sandy Bridge' i3, then the obvious upgrade would be a second-hand or surplus 'Sandy Bridge' i7 quad, which would be a significant upgrade, not a new-model i3 dual which (even if it was made to work on your motherboard) would offer relatively modest improvements.
CPU upgrades were a thing back when the per-core performance of CPUs was increasing far more rapidly than it is today. These days, the CPU is fairly low down the upgrade list.
(Score: 2) by Zinho on Friday July 17 2015, @09:20PM
If you've got a dual 'Sandy Bridge' i3, then the obvious upgrade would be a second-hand or surplus 'Sandy Bridge' i7 quad, which would be a significant upgrade, not a new-model i3 dual which (even if it was made to work on your motherboard) would offer relatively modest improvements.
See, that's the difference in philosophy that Gravis and Tibman were trying to point out. If you bought an AM3+ motherboard in 2011 you could use it with any processor from the Phenom II / Athlon II / Sempron / Opteron / FX chip lines based on your available budget, and today's latest chip designs still work on it. The obvious upgrade for my Athlon II chip is a modern FX chip, no motherboard replacement required. In contrast, Intel requires a new motherboard to go from core i3 to core i7?
Tibman's right, if every chip architecture change requires a new motherboard then there's no difference to the consumer between a chip you can't swap out (soldered to the board) and one you won't swap out (upgrade within the same processor family isn't worth the money, as you said).
"Space Exploration is not endless circles in low earth orbit." -Buzz Aldrin
(Score: 2) by mhajicek on Saturday July 18 2015, @03:50AM
Incremental indeed. My CADCAM box is three years old, and buying new with the same budget would only get me about another 20% performance.
The spacelike surfaces of time foliations can have a cusp at the surface of discontinuity. - P. Hajicek
(Score: 4, Insightful) by iwoloschin on Friday July 17 2015, @01:51PM
Want to quote some citations on x86 being more power hungry than ARM? I've heard this argument over and over, but it doesn't hold up.
For low power devices, cell phones and such, yes, it does appear that ARM is more power efficient than x86. But we're not talking about performance at all, just the raw power used by a CPU running the cellphone to do normal cellphone things.
On the flip side, for high end servers, where are all the ARM chips? Surely if ARM was more power efficient we'd see high end servers using ARM to save power, if only for the massive cost savings (power costs money, power makes heat, cooling costs even more money, etc). But where are all of the ARM servers? I haven't seen any big plays here, which makes me suspect that while ARM may use less power overall, x86 (or Intel's superior fab?) provides a much higher performance per watt number than anything ARM has come up with.
Maybe (hopefully!) at some point ARM and x86 will cross paths, but for now, it seems that for absolute minimal power consumption, you use ARM, but for any sort of serious amount of work, while plugged into wall power, you should be picking x86. They both have their uses, but it's impossible to claim one as being better than the other without constraining your use case.
(Score: 2) by Zinho on Friday July 17 2015, @09:52PM
Want to quote some citations on x86 being more power hungry than ARM?
Sure:
“The X-Gene-equipped units [at PayPal] cost approximately one-half the price of traditional data center infrastructure hardware and incurred only one-seventh of the annual running cost,” [source] [datacenterknowledge.com]
There's even a Windows Server build in the works for ARM, [zdnet.com] which takes away one of the last reasons to stay with Intel in the face of reduced power requirements on ARM. Again, from that last article I linked:
The bottom line is that a 64-bit ARM-powered microserver has a thermal design power (TDP) of between 10 and 45 watts. A conventional x86 server runs at more than 90 watts. The lower the power consumption, the lower not just the direct server utility bills, but also the overall data center running costs.
Let me put it in concrete numbers.
A 64-bit ARM server will use no more than half the power of its x86 counterpart. ZDNet estimated that the kilowatt-hour cost for commercial use per year per server in 2013 was $731.94. Multiply that by the number of servers in a data center and then divide that number by two.
Since power consumption is often a data center's single greatest cost, that is a tremendous saving.
"Space Exploration is not endless circles in low earth orbit." -Buzz Aldrin
(Score: 2) by dusty monkey on Saturday July 18 2015, @09:56AM
The least that you could do was check the numbers before stringing together what is clearly shit, or rather, parroting complete shit that someone else strung together. To arrive at the $731.94 figure given in the article you cited, the author cites another article that uses a draw of 850 watts per server, but that's an order of magnitude larger than the 90 watts the author of the article you cited had just mentioned for conventional x86 servers.
The author of the article that arrives at $731.94 (Teena Hammond) is dead on, for an atypical server that draws 850 watts, such as you would find in a supercomputing cluster. But why then is the author of the article you cited (Steven J. Vaughan-Nichols) misrepresenting the $731.94 figure? It's because he is a hack author that doesn't know shit about anything, and for some reason you didn't bother to care about the veracity of it. Gullible people (such as yourself) seem to have swallowed it without any critical thought at all.
I can predict how and why you pushed this shit on us. You had a preconceived notion and posted the first thing that you could find that seemed to support it, and you did it as fast as you could, which is why you didn't put any critical thought into it. Your actions are highly similar to those of the goddamned author of the misinformative, highly inaccurate, and misleading article that you cited.
- when you vote for the lesser of two evils, you are still voting for evil - stop supporting evil -
(Score: 2) by Zinho on Sunday July 19 2015, @05:09PM
[lots of rant]
Wow, you really feel strongly about this. I didn't mean my careless research to be personally offensive to you.
Here's my take on this.
* my first web search found not only support for the idea that data centers were switching to low power chips, but also a specific example (PayPal, although they're not alone [zdnet.com])
* It's my experience that a strong fraction of a server's power use is in the processor (disk and cooling being the next two biggest costs, not necessarily in order)
* High-efficiency chips, provided they can serve the load, cost less to run (45W < 90W, kWh = $)
The fact that companies are creating the product and marketing it means that it's at least not insane; the fact that PayPal is willing to admit to purchasing it means that they did the math and it works for them. I know it's more complicated than that; at full load the right comparison is computations per Watt, and you'll need more of the lower-power chips to do the same work. It's not a slam dunk in either direction.
As much as it may offend you, microservers are a real thing. Especially for parallelizable applications, large numbers of low-power processors can be used to do real-life work [datacenterdynamics.com] in a cost-effective way. It's not for everyone, nor for every application, but there are times when it makes sense.
"Space Exploration is not endless circles in low earth orbit." -Buzz Aldrin
(Score: 2) by dusty monkey on Tuesday July 21 2015, @12:46AM
I didn't mean my careless research to be personally offensive to you.
It's not, but it appears that being critical of what you did is offensive to you. Can't just accept that you made a mistake, that you have faults that need working on, I guess.
As much as it may offend you, microservers are a real thing.
No shit, Sherlock. Apparently any criticism of you means the person being critical must be completely opposite on every possible thing. Here is an idea... next time, just don't be so damned worse than worthless. You do realize that your post was worse than worthless, right? It was misinformation. You did harm.
- when you vote for the lesser of two evils, you are still voting for evil - stop supporting evil -
(Score: 2) by Zinho on Tuesday July 21 2015, @06:51AM
. . . it appears that being critical of what you did is offensive to you. Can't just accept that you made a mistake, that you have faults that need working on, I guess.
Actually, I'm more confused than offended at the intensity of your response to my quoting the $731.94 figure for annual processing cost. You spent an entire post decrying my lack of research on this point, when I thought that was perhaps the least consequential point in the article I linked. I'll gladly admit I was wrong about that number, because it doesn't matter to me. You're right, of course; 90 watts running full tilt for a year will only cost ~$80 depending on your price per kWh (according to my back-of-the-envelope estimate, not intended to be either specific or accurate; YMMV).
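For anyone who wants to check the arithmetic, here's that back-of-the-envelope as a quick sketch (the $0.10/kWh rate is an assumption; YMMV):

```python
# Back-of-the-envelope annual server power cost. The $0.10/kWh rate is
# an assumption; adjust for your utility.
HOURS_PER_YEAR = 24 * 365  # 8760

def annual_cost_usd(watts: float, rate_usd_per_kwh: float = 0.10) -> float:
    return watts / 1000 * HOURS_PER_YEAR * rate_usd_per_kwh

print(annual_cost_usd(90))   # ~78.84, the ~$80 figure above
print(annual_cost_usd(850))  # ~744.60, near the article's $731.94
                             # (which implies a rate closer to $0.098/kWh)
```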
Here is an idea... next time, just dont be so damned worse than worthless. You do realize that your post was worse than worthless, right? It was misinformation. You did harm.
Again, confusion on my part. Harm? Is someone going to lose money or need hospitalization because of what I wrote? Psychic trauma caused by a throwaway value that the quoted author couldn't even be bothered to divide by two to make his point? And there was nothing else in my original or follow-up post of value that could balance that potential damage?
I think I'm having a Poe's law moment. Either you're sincere and have a wildly different perspective than mine on the value of precise data center operating cost estimates, or you're trolling me; I can't tell the difference.
"Space Exploration is not endless circles in low earth orbit." -Buzz Aldrin
(Score: 2, Informative) by Anonymous Coward on Friday July 17 2015, @01:58PM
plan which consistently results in high power consumption
If you ignore that they have gone from 125W to ~5W for the same or better performance. Atom chips are on par with the performance of a pentium 4 from 10 years ago. (the charts do not lie)
I have an Intel NUC. To get that performance 10 years ago would have required a liquid cooled rig and lots of fans and a decent power supply. Now it is a power brick and uses around 20W total when running full out (3-5 idle) instead of 500W because of all the extra stuff and the CPU.
A chip from this year is consistently better than the one from 2 years ago. And uses less power to do it.
I had hoped AMD would up their game. The only thing keeping them alive at this point is MS and Sony game consoles. It is a bloodbath on their financials. I even own some stock hoping they would do better. But alas, they have not.
They do, however, have an interesting yield problem. You can tell because of the number of different chips they sell. They have it fairly fine-grained though. https://en.wikipedia.org/wiki/Haswell_%28microarchitecture%29#List_of_Haswell_processors. [wikipedia.org] The highest in that list is 145W, which is high, but that is for an 18-core part. Not sure where you get that their power requirements are going up. That is decidedly downward. Even 2-4 years ago that would have been a 300-800W beast of a box.
(Score: 2) by takyon on Friday July 17 2015, @02:22PM
Intel Core M seems to be a great chip with high performance and low power consumption, although it is very expensive. They have consistently improved power consumption and idle power on mobile (laptop) chips to the point where Atom and Core M might converge soon.
[SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
(Score: 2) by Katastic on Friday July 17 2015, @03:52PM
Intel doesn't really use x86 internally anymore. x86 is just the instruction format that gets decoded into micro-operations and processed. The same micro-operations exist on all processors now. And the unit that transforms x86 into micro-ops is tiny compared to the out-of-order scheduler and branch prediction units.
Power consumption is not something as simple as instruction set.
(Score: 2) by darkfeline on Sunday July 19 2015, @02:53AM
Why does x86 still exist? Based on my meager experience with really low-level work, it would be better to use RISC or similar to make a really simple but blazing fast general processor and move all of those convoluted instructions and optimization tricks to the compiler level. We're not working in assembly any more, so we can let compilers handle any optimizations needed by the processor.
Join the SDF Public Access UNIX System today!
(Score: 3, Insightful) by bzipitidoo on Friday July 17 2015, @12:58PM
There are plenty of concessions that "tick tock" was aggressive and optimistic. Would it be so bad to make a more realistic schedule that takes into account a few basic facts of nature, in particular the Law of Diminishing Returns? Moore's Law is wrong; the doubling of speed and capacity has to end, it's just a question of when, and engineers all know that. But does management, really? If asked, they'll say they understand, but their crazy aggressive scheduling belies that. I wonder how many engineers they verbally flogged, how much unpaid overtime they hinted had better be worked, to try to stay on this schedule? Intel is known to be a vicious sweatshop. The excuse is that it's not their fault, it's the market economy and those unforgiving quarterly stock valuations that make them do it.
(Score: 2) by theluggage on Friday July 17 2015, @01:17PM
Moore's Law is wrong, the doubling of speed and capacity has to end it's just a question of when, engineers all know that. But does management, really?
Engineers also know that "unconstrained exponential growth" means "look out, she's gonna blow!" However, the whole of modern hypercapitalism is rooted in the delusion that you can have unlimited exponential growth in a closed system. So, it's not surprising that management have adopted a corruption of Moore's law as a business model.
(NB: Moore's law is about transistor density, not specifically speed)
(Score: 3, Informative) by takyon on Friday July 17 2015, @01:59PM
There's nothing wrong with this move, and complaints (around the web, not just here) that AMD isn't offering enough competition for Intel miss the mark.
The problem is EUV from ASML [wikipedia.org]. Multiple patterning with 193 nm lithography is getting too expensive. EUV rollout has been too slow. The last Intel process node launches have been spaced closer to 2.5 years than 2 years. Going to 3 is not that bad, and it gives the company a chance to innovate on 14nm rather than reap the benefits of a planned shrink.
We are already on year 2 of this 3 year "Tick-Tock" cycle, so it is not a long wait to 10nm. 10nm apparently doesn't depend on EUV since Intel has said it doesn't plan to use EUV for 10nm and is already building fabs for 10nm.
If you were waiting for Skylake before upgrading, it's worth wondering whether Kaby Lake might be a bit better. The desktop launches are already much slower than mobile/tablet/laptop CPU launches and Skylake could kill Broadwell desktop chips (in the sense that nobody bothers getting Broadwell). Kaby Lake could bring improvements or be more like Devil's [digitaltrends.com] Canyon [anandtech.com], a 100-200 MHz clock boost and not much more.
Another consideration: Skylake supports DDR3 and DDR4, 10nm Cannonlake supports just DDR4. What memory will Kaby Lake support?
[SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
(Score: 0) by Anonymous Coward on Friday July 17 2015, @03:38PM
I would not wait around on that chip. If I had a choice between the two, obviously pick the latter. But wait on it? Not so much. I think you are right about Broadwell at this point. Everyone is waiting on Skylake. We should see a decent clearout of Broadwell inventory in the next few months.
(Score: 3, Interesting) by bob_super on Friday July 17 2015, @03:17PM
Intel suffers massive 14nm delays, buys Altera (which couldn't figure out 20nm), ends tick-tock...
Either someone's resting on their laurels, or some heads should start rolling, because IBM and TSMC are pretty positive about their 7nm roadmap.
(Score: 2) by takyon on Friday July 17 2015, @03:55PM
IBM hasn't shown off anything that Intel couldn't do itself in a few years. Intel will use EUV and SiGe eventually.
By using slow and incomplete EUV to create their 7nm demo chip, IBM is tying the fate of 7nm to ASML's EUV lithography, which has been delayed for decades and won't be ready for another 3-5 years.
To continue process node shrinks, you either need EUV or... self-assembly [uchicago.edu]?
[SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
(Score: 2) by bob_super on Friday July 17 2015, @04:05PM
TSMC has announced that they plan to output 7nm FPGAs in 2017.
Even if they slip by one year, the schedule in TFS still has Intel doing 10nm Tock in 2018.
(Score: 2) by takyon on Friday July 17 2015, @04:14PM
I doubt they can transition from 16nm to 7nm in 2 years. I'll believe it when I see it.
[SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
(Score: 2) by tangomargarine on Friday July 17 2015, @04:00PM
What's the deal with all these -lake suffixes? At first glance none of them even make any sense, which seems like a weird thing to base a mnemonic on.
"Is that really true?" "I just spent the last hour telling you to think for yourself! Didn't you hear anything I said?"
(Score: -1, Troll) by Anonymous Coward on Friday July 17 2015, @07:14PM
Backdoors in both Intel (vPro) and AMD stalled too?
Make sure you don't have thoughts women wouldn't like, such as supporting marrying girls rather than women (as used to be legal in the USA before feminism and is legal in all old religions)
Only the Unitarian progressivism is allowed in the USA
(Score: 0) by Anonymous Coward on Friday July 17 2015, @07:28PM
yeah I too miss the dark ages... (I'm kidding you moron)
(Score: -1, Troll) by Anonymous Coward on Friday July 17 2015, @09:00PM
Fuck you you faggot.
May the south raise again