posted by Fnord666 on Wednesday July 03 2019, @07:52PM   Printer-friendly
from the moore-of-a-guideline dept.

Intel's Senior Vice President Jim Keller (who previously helped to design AMD's K8 and Zen microarchitectures) gave a talk at the Silicon 100 Summit that promised continued pursuit of transistor scaling gains, including a roughly 50x increase in gate density:

Intel's New Chip Wizard Has a Plan to Bring Back the Magic (archive)

In 2016, a biennial report that had long served as an industry-wide pledge to sustain Moore's law gave up and switched to other ways of defining progress. Analysts and media—even some semiconductor CEOs—have written Moore's law's obituary in countless ways. Keller doesn't agree. "The working title for this talk was 'Moore's law is not dead but if you think so you're stupid,'" he said Sunday. He asserted that Intel can keep it going and supply tech companies ever more computing power. His argument rests in part on redefining Moore's law.

[...] Keller also said that Intel would need to try other tactics, such as building vertically, layering transistors or chips on top of each other. He claimed this approach will keep power consumption down by shortening the distance between different parts of a chip. Keller said that using nanowires and stacking, his team had mapped a path to packing transistors 50 times more densely than possible with Intel's 10 nanometer generation of technology. "That's basically already working," he said.

The ~50x gate density claim multiplies together roughly 3x from additional pitch scaling beyond "10nm", 2x from nanowires, another 2x from stacked nanowires, 2x from wafer-to-wafer stacking, and 2x from die-to-wafer stacking.
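For anyone checking the arithmetic, those factors multiply out to about 48x, which is where the roughly 50x headline figure comes from. A minimal sketch using the factors exactly as quoted above (the labels are just shorthand for the techniques named in the summary):

```python
# Rough arithmetic behind the ~50x gate-density figure, using the
# factors quoted in the summary above (all values approximate).
factors = {
    "pitch scaling beyond 10nm": 3,
    "nanowires": 2,
    "stacked nanowires": 2,
    "wafer-to-wafer stacking": 2,
    "die-to-wafer stacking": 2,
}

total = 1
for name, gain in factors.items():
    total *= gain
    print(f"{name:28s} x{gain}  (cumulative ~{total}x)")

print(f"Combined: ~{total}x, i.e. roughly the quoted ~50x")
```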

Related: Intel's "Tick-Tock" Strategy Stalls, 10nm Chips Delayed
Intel's "Tick-Tock" is Now More Like "Process-Architecture-Optimization"
Moore's Law: Not Dead? Intel Says its 10nm Chips Will Beat Samsung's
Another Step Toward the End of Moore's Law


Original Submission

 
  • (Score: 0) by Anonymous Coward on Thursday July 04 2019, @06:16AM (#863045) (1 child)

    How many years has it taken to double CPU single-thread performance? It still seems like we're plateauing on single-threaded performance, and the remaining gains are coming from parallelism.

    Parallel performance is a far easier problem to solve: make each core cheaper, use less power, and generate less heat, then add as many as you can afford. If your computation problems are parallel enough to run on multiple cores, they're often parallel enough to run on multiple computers.

    Layering storage circuitry makes sense (e.g. in SSDs, where only a tiny percentage of the circuitry is active and generating heat at any one time). But layering CPU circuitry will increase the maximum heat density far more. The shortened distances will only help a little in reducing the heat generated, while making it harder to pump the heat away. Will it be cheaper once you add everything needed to deal with that?
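A back-of-envelope sketch of the heat concern raised in the comment above. The numbers are invented purely for illustration (a nominal 100 W/cm² logic die and a 20% power saving from shorter interconnects are assumptions, not measurements); the point is only that footprint power density still grows roughly linearly with the number of stacked active layers:

```python
# Back-of-envelope illustration of the heat-density concern above.
# Both numbers are invented for illustration, not measurements.

def stacked_power_density(base_w_per_cm2, layers, interconnect_saving):
    """Power per cm^2 of footprint when `layers` active logic layers
    are stacked, assuming shorter wires save `interconnect_saving`
    (a fraction between 0 and 1) of each layer's power."""
    per_layer = base_w_per_cm2 * (1 - interconnect_saving)
    return per_layer * layers

base = 100.0  # hypothetical W/cm^2 for a single hot logic die
for n in (1, 2, 4):
    print(f"{n} layer(s): ~{stacked_power_density(base, n, 0.2):.0f} W/cm^2 of footprint")
# -> ~80, ~160, ~320 W/cm^2: footprint power density still climbs
#    roughly linearly with layer count, even with a 20% wiring saving.
```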
  • (Score: 3, Interesting) by takyon (881) <reversethis-{gro ... s} {ta} {noykat}> on Thursday July 04 2019, @08:02AM (#863068) Journal

    Shortening the distance between RAM and CPU [darpa.mil] could reduce some heat and speed up operations massively. Replacing the RAM with universal memory [soylentnews.org] would also be a big help. If it can take the heat, stack that onto or inside the CPU.

    Neuromorphic computing will be well suited to scaling vertically since it has inherently low power consumption and works kind of like a brain with not much active at once. But for classical CPUs, it's unclear how heat will be dealt with, although you can see that TSMC is also pursuing Wafer-on-Wafer (WoW). That's at least two companies that think that at least a doubling of density is possible with crude stacking. Beyond that, a new type of transistor [soylentnews.org], material, or cooling method could help with the heat problem.

    Intel supposedly has an actual IPC increase coming: 18% with Ice Lake [wccftech.com]. Setting aside whether this is valid, wiped out by security mitigations, or paired with lower clock speeds on an immature node, we should be grateful for any gains we get. In other industries, making something 5% better or more efficient would have an impact of billions of dollars. Obviously, a smaller increase won't make you run out and buy a new chip the way doubled performance in a single year would, but incremental gains add up (see the sketch below).

    I'm optimistic that we'll see more software exploiting parallelism, now that we are entering an era of ubiquitous 8-cores (counting game consoles and smartphones) and mainstream 16-cores [soylentnews.org].

    --
    [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
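
A tiny illustration of the "incremental gains add up" point in the comment above (the sketch referenced there): per-generation improvements compound multiplicatively. The 18% figure is the Ice Lake IPC claim linked in the comment; the 5% rate and the five-generation horizon are arbitrary assumptions chosen for comparison:

```python
# How per-generation IPC gains compound. 18% is the Ice Lake claim
# linked above; 5% and the five-generation horizon are arbitrary
# assumptions chosen for comparison.
def compounded(gain_per_gen, generations):
    return (1 + gain_per_gen) ** generations

for rate in (0.05, 0.18):
    print(f"{rate:.0%} per generation over 5 generations: ~{compounded(rate, 5):.2f}x")
# -> ~1.28x at 5%, ~2.29x at 18%: even a single "modest" 18% step,
#    repeated, more than doubles performance in five generations.
```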