Intel is talking about improvements it has made to transistor scaling for the 10nm process node, and claims that its version of 10nm will increase transistor density by 2.7x rather than doubling it.
On the face of it, three years between process shrinks, rather than the traditional two, would appear to end Moore's Law. But Intel claims that's not so. The company says that the 14nm and 10nm shrinks in particular more than doubled transistor density. At 10nm, for example, it names a couple of techniques that enable this "hyperscaling." Each logic cell (an arrangement of transistors implementing a specific logic function, such as a NAND gate or a flip-flop) is surrounded by dummy gates: spacers that isolate one cell from its neighbor. Traditionally, two dummy gates have been used at the boundary of each cell; at 10nm, Intel is reducing this to a single dummy gate, shrinking the space each cell occupies and allowing cells to be packed more tightly.
Each gate has a number of contacts used to join it to the chip's metal layers. Traditionally, these contacts were offset from the gate. At 10nm, Intel is stacking the contacts directly on top of the gates, a technique it calls "contact over active gate." Again, this reduces the space each gate takes, increasing transistor density.
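To make the arithmetic concrete, here is a rough back-of-the-envelope sketch (in Python) of how those two tweaks compound into an area saving. All the pitch, track, and gate counts below are illustrative placeholders, not Intel's published 10nm dimensions.

    # Back-of-the-envelope sketch of the two "hyperscaling" tweaks described above.
    # All dimensions are illustrative placeholders, not Intel's published figures.
    POLY_PITCH_NM = 54    # assumed gate-to-gate (contacted poly) pitch
    METAL_PITCH_NM = 36   # assumed metal track pitch (sets cell height)

    def cell_area_nm2(active_gates: int, dummy_gates: int, tracks: int) -> float:
        """Area of one standard cell: gates across times metal tracks tall."""
        width = (active_gates + dummy_gates) * POLY_PITCH_NM
        height = tracks * METAL_PITCH_NM
        return width * height

    # Older style: two dummy gates at the cell boundary, and a taller cell because
    # the gate contact sits beside the gate rather than on top of it.
    old = cell_area_nm2(active_gates=2, dummy_gates=2, tracks=9)

    # 10nm style: a single dummy gate, plus contact-over-active-gate freeing up
    # roughly one track of cell height.
    new = cell_area_nm2(active_gates=2, dummy_gates=1, tracks=8)

    print(f"area saved by these two tweaks alone: {100 * (1 - new / old):.0f}%")

Under these made-up numbers, the two changes alone shave roughly a third off the cell area, which is how "hyperscaling" squeezes out more density than the lithography shrink by itself would give.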
Intel proposes a new metric for measuring transistor density:
Intel wants to describe processes in terms of millions of logic transistors per square millimeter, calculated using a 3:2 mix of NAND cells and scan flip-flop cells. Using this metric, the company's 22nm process managed 15.3 megatransistors per square millimeter (MTr/mm2). The current 14nm process is 37.5MTr/mm2, and at 10nm, the company will hit 100.8MTr/mm2. Competing 14nm/16nm processes only offer around 28MTr/mm2, and Intel estimates that competing 10nm processes will come in at around 50MTr/mm2.
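For readers who want the weighting spelled out, here is a minimal sketch of the proposed metric: the 3:2 mix corresponds to 0.6/0.4 weights, and the per-cell densities fed in at the end are hypothetical values chosen only so the blend lands on Intel's quoted 10nm figure.

    # Minimal sketch of the proposed density metric: a 3:2 (0.6/0.4) weighted mix
    # of a small NAND2 cell and a scan flip-flop cell, each expressed as
    # transistors per unit area. Inputs are hypothetical, not published cell data.
    def weighted_density(nand2_mtr_mm2: float, scan_ff_mtr_mm2: float) -> float:
        """Logic transistor density in MTr/mm^2 under the 3:2 NAND2:scan-FF mix."""
        return 0.6 * nand2_mtr_mm2 + 0.4 * scan_ff_mtr_mm2

    # Hypothetical per-cell densities picked so the result matches the quoted
    # 100.8 MTr/mm^2 for Intel's 10nm process.
    print(weighted_density(nand2_mtr_mm2=115.0, scan_ff_mtr_mm2=79.5))  # 100.8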
See also: the International Roadmap for Devices and Systems.
A number of stories here have covered the move to 10nm chips: Samsung's Exynos and TSMC's MediaTek Helio X30, for example. A recurring question in the discussions is whether 10nm from Samsung is equivalent to 10nm from TSMC or Intel.
Intel's Mark Bohr discussed the difficulty of comparing process nodes during Manufacturing Day, specifically proposing to move the industry to transistor density as a comparative metric. Surprisingly enough, Intel claims its 10nm process is roughly twice as dense as the competition's. Intel is not the only one frustrated by comparing process nodes, as this recent article tries to compare current "14nm" nodes between the major vendors.
Further confusing the discussion are two new 22nm processes: GlobalFoundries' 22nm FD-SOI and Intel's just-announced 22FFL, both targeting energy-efficient devices. GF's process is already in high-volume manufacturing while Intel's has only just been announced, but it further cements Intel's push into foundry work.
These topics are largely covered by EETimes' summary of Intel's recent announcements.
(Score: 2) by RamiK on Thursday March 30 2017, @09:11PM (3 children)
If you get 100 chips off the wafer, 5 would be 100MTr/mm^2 and sold as top-end Xeons. Another 5 would have a few circuits fused off, ending up at 90MTr/mm^2 and sold as high mid-range... all the way down to i3 with, oh, say, 30-40MTr/mm^2? You can tell by how well GlobalFoundries compares against Intel with AMD's Ryzen at 14nm.
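A toy sketch of the binning idea being described here, with thresholds and tier names made up purely for illustration:

    # Toy sketch of the binning idea above: dies with more of their logic intact
    # are sold as higher-end parts. Thresholds and tier names are made up.
    def bin_die(working_fraction: float) -> str:
        """Map the fraction of usable logic on a die to a hypothetical product tier."""
        if working_fraction >= 0.95:
            return "top-end Xeon"
        if working_fraction >= 0.85:
            return "high mid-range"
        if working_fraction >= 0.35:
            return "i3-class"
        return "scrap"

    print([bin_die(f) for f in (1.0, 0.90, 0.40, 0.20)])
    # ['top-end Xeon', 'high mid-range', 'i3-class', 'scrap']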
Another way of looking at it is that desktop users buying i7s are being overcharged by a wide margin, since they're subsidizing the Xeons, on which Intel is forced to reduce prices or face competition from IBM's POWER. Well, that bit of speculation will need to wait for Ryzen 5 and 3 and how well they compare.
compiling...
(Score: 2) by kaszz on Thursday March 30 2017, @11:37PM (1 child)
Yeah that makes sense.
And also infuriates the manufacturer when people find out how to re-enable some functions ;-)
(Score: 2) by TheRaven on Friday March 31 2017, @09:59AM
And also infuriates the manufacturer when people find out how to re-enable some functions ;-)
For two reasons. The 'evil' one is when they've disabled perfectly working features because the yields were higher than expected and there's more of a market for the low-end features (this was really common about 15-20 years ago). The more common reason is that these features are actually broken. They may work fine most of the time, but when the chip gets a bit warmer (but still well within its thermal design specification), they'll get unacceptable error rates. People who enable the features get crashy systems and blame the manufacturer.
sudo mod me up
(Score: 0) by Anonymous Coward on Friday March 31 2017, @02:22PM
Can I put a Xeon in my desktop then? Or will the increased performance melt my consumer-grade motherboard?