Intel may finally be abandoning its "Tick-Tock" strategy:
As reported at The Motley Fool, Intel's latest 10-K / annual report filing suggests that the 'Tick-Tock' strategy of introducing a new lithographic process node in one product cycle (a 'tick') and an upgraded microarchitecture in the next (a 'tock') is going to fall by the wayside for at least the next two lithographic nodes, to be replaced with a three-element cycle known as 'Process-Architecture-Optimization'.
Intel's Tick-Tock strategy has been the bedrock of its microprocessor dominance over the last decade. Every other year, Intel would upgrade its fabrication plants to produce processors with a smaller feature size, improving die area and power consumption and slightly optimizing the microarchitecture; in the intervening years it would launch a new set of processors based on a wholly new (sometimes paradigm-shifting) microarchitecture for large performance gains. However, given the difficulty of implementing a 'tick' at ever-decreasing process node sizes and the complexity that entails, as reported previously with 14nm and the introduction of Kaby Lake, Intel's latest filing suggests that 10nm will follow a similar pattern to 14nm by introducing a third stage to the cadence.
Year | Process | Name | Type |
---|---|---|---|
2016 | 14nm | Kaby Lake | Optimization |
2017 | 10nm | Cannonlake | Process |
2018 | 10nm | Ice Lake | Architecture |
2019 | 10nm | Tiger Lake | Optimization |
2020 | 7nm | ??? | Process |
This suggests that 10nm "Cannonlake" chips will be released in 2017, followed by a new 10nm architecture in 2018 (tentatively named "Ice Lake"), an optimization in 2019 (tentatively named "Tiger Lake"), and 7nm chips in 2020. This year's "optimization" will come in the form of "Kaby Lake", which could end up delivering underwhelming improvements such as slightly higher clock speeds, thanks to higher yields of the previously named "Skylake" chips. To be fair, Kaby Lake will supposedly add the following features alongside any CPU performance tweaks:
- Native USB 3.1 support, whereas Skylake motherboards require a third-party add-on chip in order to provide USB 3.1 ports.
- A new graphics architecture to improve performance in 3D graphics and 4K video playback.
- Native HDCP 2.2 support.
- Full fixed-function HEVC Main10/10-bit and VP9 10-bit hardware decoding.
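As a rough illustration of what that last fixed-function decode point means in practice, here is a minimal sketch (not from the article) that asks a Linux VA-API driver whether it exposes an HEVC Main10 decode profile. It assumes libva and libva-drm are installed (link with -lva -lva-drm) and that the GPU's render node is /dev/dri/renderD128:

```c
/* Sketch: probe the VA-API driver for the HEVC Main10 decode profile.
 * Assumptions: libva + libva-drm installed, GPU render node at
 * /dev/dri/renderD128; build with: gcc probe.c -lva -lva-drm */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <va/va.h>
#include <va/va_drm.h>

int main(void)
{
    int fd = open("/dev/dri/renderD128", O_RDWR);
    if (fd < 0) { perror("open render node"); return 1; }

    VADisplay dpy = vaGetDisplayDRM(fd);
    int major, minor;
    if (vaInitialize(dpy, &major, &minor) != VA_STATUS_SUCCESS) {
        fprintf(stderr, "vaInitialize failed\n");
        close(fd);
        return 1;
    }

    /* Ask the driver which codec profiles it exposes and look for Main10. */
    int num = vaMaxNumProfiles(dpy);
    VAProfile *profiles = malloc(num * sizeof(*profiles));
    vaQueryConfigProfiles(dpy, profiles, &num);

    int found = 0;
    for (int i = 0; i < num; i++)
        if (profiles[i] == VAProfileHEVCMain10)
            found = 1;

    printf("HEVC Main10 decode profile: %s\n", found ? "exposed" : "not exposed");

    free(profiles);
    vaTerminate(dpy);
    close(fd);
    return 0;
}
```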
Previously: Intel's "Tick-Tock" Strategy Stalls, 10nm Chips Delayed
(Score: 2) by Alfred on Wednesday March 23 2016, @01:43PM
We're not gaining GHz anymore; the 10% gain each generation from optimizations doesn't matter except to a few; the x86 instruction set is crap and holds the architecture back; and there are baked-in DRM and privacy concerns.
I have no reason to upgrade my years-old i7. It is only fully utilized for video transcoding. I don't need to overclock. If everyone understood that a more powerful machine doesn't compensate for a lack of talent or a lack of frags, then new CPU sales would not be enticing. Even if a new CPU saved 10 minutes of compile time, chances are the guy would waste the 10 minutes gained on Facebook anyway.
My only reason to buy a new CPU is to add a whole new machine.
(Score: 2) by SDRefugee on Wednesday March 23 2016, @02:27PM
Hell, my primary machine has a Xeon 5500, quad-core, running Linux, and I can't see any reason to upgrade as long as this system still runs to my satisfaction... This constant upgrading pressure is weird and isn't happening in *my* world...
America should be proud of Edward Snowden, the hero, whether they know it or not..
(Score: 0) by Anonymous Coward on Wednesday March 23 2016, @03:16PM
I would upgrade just for the power usage alone. That is probably a monster of a box. Something more recent will probably beat it in all respects and use less power to do it.
(Score: 0) by Anonymous Coward on Wednesday March 23 2016, @03:05PM
Off the top of my head...
Modern CPUs have:
-- extensions for AES acceleration which are used everywhere (see the sketch after this list)
-- much better memory plumbing and DDR4 support
-- integrated graphics that are actually decent
-- much lower TDPs and run A LOT cooler
-- higher clock speeds in some cases, not all.
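For the AES point above, a minimal sketch (mine, not from the comment) of checking whether the CPU advertises AES-NI. CPUID leaf 1 reports AES-NI in ECX bit 25; this assumes a GCC or Clang toolchain on x86:

```c
/* Sketch: detect AES-NI support via CPUID (leaf 1, ECX bit 25).
 * Assumes GCC/Clang on x86, which provide <cpuid.h>. */
#include <cpuid.h>
#include <stdio.h>

#define AESNI_BIT (1u << 25)   /* CPUID.01H:ECX[25] per the Intel SDM */

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx)) {
        fprintf(stderr, "CPUID leaf 1 not available\n");
        return 1;
    }

    printf("AES-NI: %s\n", (ecx & AESNI_BIT) ? "supported" : "not supported");
    return 0;
}
```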
I recently [6 months ago] picked up a Xeon X5650 to replace my ancient Nehalem i7-920. I love it. Went from 130W TDP to 95W and picked up two more cores, AES instructions, VT-d, etc. It was $80 too.
My next build will be in 6 months or so, once all the Xeon stuff is out and the prices settle a bit.
I love the thought of building a NUC-ish size server with i7-ish power and a 40-60W TDP that will run cool and quiet on solid-state storage. It also means I can put it on a 500-600W UPS and it will run for quite a while when the mains go down.
(Score: 2) by Alfred on Wednesday March 23 2016, @04:38PM
Though, yes, those are all things that new CPUs have over old ones, none of them is a sufficient reason for me to upgrade. Even as a group the pull is not there. The benefit per unit cost is not good enough, and when two of those also require the added cost of a new mobo it gets worse. Saving 35W of TDP won't pay for itself (rough numbers in the sketch below), especially when I already run at 3% load 95% of the time.
I do like building, and I have been through the phase of wanting the latest or most powerful. I grew out of that and know that no one will see my rig, and anyone who does won't care. It's just a tool; I don't need a Porsche to drive to work.
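For what it's worth, a back-of-the-envelope sketch of the payback argument above; the electricity price, 24/7 runtime, and constant peak draw are illustrative assumptions, not figures from the thread:

```c
/* Sketch: how much does saving 35W of TDP actually buy per year?
 * All inputs below are assumed/illustrative numbers. */
#include <stdio.h>

int main(void)
{
    double watts_saved    = 35.0;        /* claimed TDP difference */
    double hours_per_year = 24.0 * 365;  /* worst case: always on, always at peak */
    double price_per_kwh  = 0.12;        /* assumed electricity price, USD */

    double kwh_saved     = watts_saved * hours_per_year / 1000.0;
    double dollars_saved = kwh_saved * price_per_kwh;

    printf("~%.0f kWh/year, ~$%.0f/year saved at constant peak draw\n",
           kwh_saved, dollars_saved);
    /* Roughly 307 kWh and ~$37 a year even in this worst case, so a CPU +
     * motherboard upgrade costing a few hundred dollars takes years to pay
     * back -- and far longer for a box that idles at a few percent load. */
    return 0;
}
```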
(Score: 0) by Anonymous Coward on Wednesday March 23 2016, @09:42PM
Awww :( I'll look at your "rig" if it'll make you feel better.
(Score: 2) by bitstream on Wednesday March 23 2016, @03:09PM
My thoughts too. They have to start improving the architecture itself, not just pumping the gazillionhertz, adding instructions for the latest fad (VP9, USB, wtf?), or making superficial improvements. Computations done with fewer gates in fewer cycles translate into less heat and the ability to push the clock closer to the real limit, because the clock domain(s) can be kept smaller and the gates can be better kept from interfering with each other.
The frequency roof will probably be good for the chip design industry. They will have to pay attention to what the chips do instead of relying on physics wizardry. On the other side, people writing software may have to be smarter about algorithms and not rely on yet another faster CPU to bail them out of sloppy coding practices.
Completely optical processing is likely the path to a serious performance boost, in the range of 1000x, provided a semitransparent gate can be achieved at small geometries and high temperatures.
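As a toy illustration of the algorithms-over-clock-speed point above (the example data and function names are made up for this sketch): checking an array for duplicates by comparing every pair is O(n^2), while sorting a copy first brings it down to O(n log n), a win no amount of extra GHz matches at scale.

```c
/* Toy sketch: the same question answered two ways.
 * Example values are arbitrary; only the asymptotic difference matters. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* O(n^2): compare every pair -- the "just buy a faster CPU" approach */
static int has_dup_naive(const int *a, size_t n)
{
    for (size_t i = 0; i < n; i++)
        for (size_t j = i + 1; j < n; j++)
            if (a[i] == a[j])
                return 1;
    return 0;
}

static int cmp_int(const void *x, const void *y)
{
    int a = *(const int *)x, b = *(const int *)y;
    return (a > b) - (a < b);
}

/* O(n log n): sort a copy, then scan adjacent elements once */
static int has_dup_sorted(const int *a, size_t n)
{
    int *copy = malloc(n * sizeof(*copy));
    memcpy(copy, a, n * sizeof(*copy));
    qsort(copy, n, sizeof(*copy), cmp_int);

    int dup = 0;
    for (size_t i = 1; i < n; i++)
        if (copy[i] == copy[i - 1]) { dup = 1; break; }

    free(copy);
    return dup;
}

int main(void)
{
    int data[] = { 7, 3, 9, 3, 1 };
    size_t n = sizeof(data) / sizeof(data[0]);
    printf("naive: %d, sorted: %d\n",
           has_dup_naive(data, n), has_dup_sorted(data, n));
    return 0;
}
```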