posted by janrinok on Tuesday December 17, @09:13PM
from the just-a-little-problem dept.

Arthur T Knackerbracket has processed the following story:

The Intel 4004, the first commercial microprocessor, was released in 1971. With 2,300 transistors packed into 12 mm², it heralded a revolution in computing. A little over 50 years later, Apple’s M2 Ultra contains 134 billion transistors.

The scale of progress is difficult to comprehend, but the evolution of semiconductors, driven for decades by Moore’s Law, has paved a path from the emergence of personal computing and the internet to today’s AI revolution.

But this pace of innovation is not guaranteed, and the next frontier of technological advances—from the future of AI to new computing paradigms—will only happen if we think differently.

The modern microchip stretches the limits of both physics and credulity. Such is the atomic precision that a few atoms can decide the function of an entire chip. This marvel of engineering is the result of over 50 years of exponential scaling that has produced ever faster, smaller transistors.

But we are reaching the physical limits of how small we can go, costs are increasing exponentially with complexity, and achieving efficient power consumption is becoming increasingly difficult. In parallel, AI is demanding ever-more computing power. Data from Epoch AI indicates the amount of computing needed to develop AI is quickly outstripping Moore’s Law, doubling every six months in the “deep learning era” since 2010.
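
To put those two growth rates side by side, here is a rough back-of-the-envelope comparison (a sketch in Python; the doubling periods are the figures cited above, while the ten-year horizon is an arbitrary choice for illustration):

    def growth_factor(years, doubling_months):
        # Multiplier after `years` if capability doubles every `doubling_months`.
        return 2 ** (years * 12 / doubling_months)

    years = 10
    moore = growth_factor(years, 24)  # transistor counts doubling roughly every two years
    ai = growth_factor(years, 6)      # training compute doubling every six months (the Epoch AI figure above)

    print(f"Moore's Law over {years} years:       ~{moore:.0f}x")     # ~32x
    print(f"AI compute demand over {years} years: ~{ai:,.0f}x")       # ~1,048,576x
    print(f"Gap between the two:                  ~{ai / moore:,.0f}x")  # ~32,768x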

These interlinked trends present challenges not just for the industry, but society as a whole. Without new semiconductor innovation, today’s AI models and research will be starved of computational resources and struggle to scale and evolve. Key sectors like AI, autonomous vehicles, and advanced robotics will hit bottlenecks, and energy use from high-performance computing and AI will continue to soar.

At this inflection point, a complex, global ecosystem—from foundries and designers to highly specialized equipment manufacturers and materials solutions providers like Merck—is working together more closely than ever before to find the answers. All have a role to play, and the role of materials extends far, far beyond the silicon that makes up the wafer.

Instead, materials intelligence is present in almost every stage of the chip production process—whether in the chemical reactions that carve circuits at molecular scale (etching) or in adding incredibly thin layers to a wafer with atomic precision (deposition): a human hair is 25,000 times thicker than the layers in leading-edge nodes.
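
For a sense of that scale, take a typical human hair diameter of roughly 75 micrometres (an assumed figure, not one given in the article); the quoted ratio then implies films only a few nanometres thick:

    hair_diameter_nm = 75_000   # ~75 µm, a typical hair diameter (assumption)
    ratio = 25_000              # factor quoted above
    layer_nm = hair_diameter_nm / ratio
    print(f"Implied layer thickness: ~{layer_nm:.0f} nm")  # ~3 nm, on the order of ten atoms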

Yes, materials provide a chip’s physical foundation and the substance of more powerful and compact components. But they are also integral to the advanced fabrication methods and novel chip designs that underpin the industry’s rapid progress in recent decades.

For this reason, materials science is taking on a heightened importance as we grapple with the limits of miniaturization. Advanced materials are needed more than ever for the industry to unlock the new designs and technologies capable of increasing chip efficiency, speed, and power. We are seeing novel chip architectures that embrace the third dimension and stack layers to optimize surface area usage while lowering energy consumption. The industry is harnessing advanced packaging techniques, where separate “chiplets” are fused with varying functions into a more efficient, powerful single chip. This is called heterogeneous integration.

Materials are also allowing the industry to look beyond traditional compositions. Photonic chips, for example, harness light rather than electricity to transmit data. In all cases, our partners rely on us to discover materials never previously used in chips and guide their use at the atomic level. This, in turn, is fostering the necessary conditions for AI to flourish in the immediate future.

The next big leap will involve thinking differently. The future of technological progress will be defined by our ability to look beyond traditional computing.

Answers to mounting concerns over energy efficiency, costs, and scalability will be found in ambitious new approaches inspired by biological processes or grounded in the principles of quantum mechanics.

While still in its infancy, quantum computing promises processing power and efficiencies well beyond the capabilities of classical computers. Even though practical, scalable quantum systems remain a long way off, their development depends on the discovery and application of state-of-the-art materials.

Similarly, emerging paradigms like neuromorphic computing, modelled on the human brain with architectures mimicking our own neural networks, could provide the firepower and energy-efficiency to unlock the next phase of AI development. Composed of a deeply complex web of artificial synapses and neurons, these chips would avoid traditional scalability roadblocks and the limitations of today’s Von Neumann computers that separate memory and processing.
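
As a toy illustration of the spiking-neuron models that neuromorphic hardware implements in silicon, here is a minimal leaky integrate-and-fire neuron in Python; the parameters and units are illustrative assumptions, not drawn from any particular chip:

    def lif_spikes(input_current, dt=1e-3, tau=20e-3, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
        # Leaky integrate-and-fire: the membrane potential leaks toward rest,
        # integrates the incoming current, and emits a spike when it crosses threshold.
        v, spikes = v_rest, []
        for step, current in enumerate(input_current):
            v += dt / tau * (v_rest - v + current)
            if v >= v_thresh:
                spikes.append(step)
                v = v_reset
        return spikes

    # A constant drive produces a regular spike train (indices of the spiking steps).
    print(lif_spikes([1.5] * 200))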

Our biology consists of super complex, intertwined systems that have evolved by natural selection, but it can be inefficient; the human brain is capable of extraordinary feats of computational power, but it also requires sleep and careful upkeep. The most exciting step will be using advanced compute—AI and quantum—to finally understand and design systems inspired by biology. This combination will drive the power and ubiquity of next-generation computing and associated advances to human well-being.

Until then, the insatiable demand for more computing power to drive AI’s development poses difficult questions for an industry grappling with the fading of Moore’s Law and the constraints of physics. The race is on to produce more powerful, more efficient, and faster chips to progress AI’s transformative potential in every area of our lives.

Materials are playing a hidden, but increasingly crucial role in keeping pace, producing next-generation semiconductors and enabling the new computing paradigms that will deliver tomorrow’s technology.

But materials science’s most important role is yet to come. Its true potential will be to take us—and AI—beyond silicon into new frontiers and the realms of science fiction by harnessing the building blocks of biology.


Original Submission

  • (Score: 2, Funny) by Anonymous Coward on Tuesday December 17, @10:11PM

    by Anonymous Coward on Tuesday December 17, @10:11PM (#1385737)

    In tfs it says,

    > In parallel, AI is demanding ever-more computing power.
    In parallel, the venture capital and tech giants behind AI are demanding ever-more computing power.

    ftfy

    (Who profits from AI? Certainly not me, I've been trying to turn off Gemini in my Google searches.)

  • (Score: 4, Interesting) by gnuman on Tuesday December 17, @10:21PM (8 children)

    by gnuman (5013) on Tuesday December 17, @10:21PM (#1385739)

    Most of the chips we have are basically 2D chips. The FinFET is like a tiny step away from 2D and into 3D... the multi-layer NAND chips are at a few hundred layers now... CPUs, I haven't heard anything there.

    2D chips will always limit the number of transistors you can put in a given area. A billion transistors in 2D would be what in 3D? 1e9**(3/2) ≈ 3e13 ... so...

    If we are limited in processing, it's because of 2D. If we have 3D and sufficient bandwidth to feed that chip, you don't need much clock speed to have massive throughput. Even a 1kHz chip would be able to process significantly more data... heck, the chip could have its own terabytes of working memory and not dent the transistor count dedicated to other things... the chip would be the computer and the rest is just IO and wiring to the chip.

    This is essentially what the AI is "competing" with when it comes to meat-processors. We have like 1 liter of brain and the AI has lower working volume than that human hair. Trying to compensate for that with power and fancy materials probably will not work.
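
    A quick sketch of the 2D-to-3D estimate above, assuming a billion-transistor planar chip and the same device density extended into the third dimension:

        n_2d = 1e9           # transistors in a planar layout
        side = n_2d ** 0.5   # ~31,623 devices along one edge at that density
        n_3d = side ** 3     # the same density filling a cube
        print(f"~{n_3d:.1e} transistors")  # ~3.2e13, i.e. n_2d ** 1.5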

    • (Score: 2, Insightful) by Anonymous Coward on Wednesday December 18, @01:15AM (2 children)

      by Anonymous Coward on Wednesday December 18, @01:15AM (#1385751)

      We have like 1 liter of brain and the AI has lower working volume than that human hair. Trying to compensate for that with power and fancy materials probably will not work.

      Meanwhile a crow has a small brain and is still smarter than ChatGPT in some ways.

      https://www.youtube.com/watch?v=cbSu2PXOTOc [youtube.com]

      As for bees and other arthropods, I suspect they're still smarter in some ways: https://www.nationalgeographic.com/animals/article/bees-can-play-study-shows-bumblebees-insect-intelligence [nationalgeographic.com]
      https://knowablemagazine.org/content/article/mind/2021/are-spiders-intelligent [knowablemagazine.org]

      Personally I suspect single celled creatures[1] already solved many problems of thinking, and brains weren't initially to solve the problems of thinking but the problems of controlling a multicellular organism (redundancy, interfacing, etc)

      [1] Some testate amoeba build their own shells out of external materials, and don't reproduce if there isn't enough material around to form a second shell.

      • (Score: 3, Informative) by gnuman on Wednesday December 18, @09:20PM (1 child)

        by gnuman (5013) on Wednesday December 18, @09:20PM (#1385814)

        Meanwhile a crow has a small brain and is still smarter than ChatGPT in some ways.

        Despite its smaller size, a raven's brain contains only an order of magnitude fewer neurons than a human's.

        https://en.wikipedia.org/wiki/List_of_animals_by_number_of_neurons [wikipedia.org]

        A raven and a horse are comparable in number of neurons. It's not the size, it's how much you pack into it. Ravens have size constraints, but that hasn't limited their complexity or number of neurons. I think it's mostly an energy minimization function when it comes to brain evolution. A larger brain uses more energy, so it has to pull in more resources to survive and be a positive evolutionary trait. If you just have a big brain and don't use it, it decays. Also, you are more likely to die if you can't outcompete your "less endowed" neighbours. It's a feature, not a bug in the system. "Potential" of a system is meaningless if it can't execute.

        Now, ChatGPT is a machine, a tool. So it doesn't have these constraints that biological life has had until very recent times.

        • (Score: 0) by Anonymous Coward on Thursday January 02, @12:49PM

          by Anonymous Coward on Thursday January 02, @12:49PM (#1387163)
          Well, if intelligence is closely related to the number of neurons, then perhaps humans should learn more about how to make organizations significantly smarter as their number of members increases. 😉
    • (Score: 3, Interesting) by Freeman on Wednesday December 18, @03:20PM (1 child)

      by Freeman (732) on Wednesday December 18, @03:20PM (#1385779) Journal

      AMD started the trend towards 3D CPUs with 3D V-cache. https://www.digitaltrends.com/computing/what-is-amd-3d-v-cache/ [digitaltrends.com]

      The AMD branded X3D chips are a pretty big success and I'm sure we're barely seeing the start of that kind of thing. SOCs were just the tip of the iceberg as that is literally just cramming as many features as you can onto one chip. Here's hoping that desktop computers don't all go that way.

      --
      Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
      • (Score: 2, Interesting) by Chromium_One on Thursday December 19, @09:34PM

        by Chromium_One (4574) on Thursday December 19, @09:34PM (#1385881)

        What's in the CPU now used to be half your mainboard in support chips. Memory, I/O, cache, and so on. Strongly suspect that the trend over time is gonna continue, to include not just upping cache, but actually putting more and more shared RAM for general use and integrated video. No reason NOT to have external RAM slots still available for those who need more, except in budget/embedded systems. Or from certain suppliers who want you to buy a new machine rather than upgrading one you already own. (Apple not being the only culprit, but).

        --
        When you live in a sick society, everything you do is wrong.
    • (Score: 5, Insightful) by Ox0000 on Wednesday December 18, @06:20PM (2 children)

      by Ox0000 (5111) on Wednesday December 18, @06:20PM (#1385792)

      "Building" a 3D CPU isn't that hard. Building a 3D CPU that will work, won't overheat, and will actually perform as expected despite quantum effects, that's much harder.

      Think about how much quantum effects already affect current 2D CPUs. Now, instead of only having to "care" about what's to the N/E/S/W of you, you also have to care about what's above you, which makes things much harder.
      Then there's the routing of the interconnections that we've not even talked about yet, nor how you'll dissipate the heat coming from the inside of your chip.

      I'm sure everyone would love to move to 3D layouts, but getting there in a way that they actually work, that is a very hard problem.

      • (Score: 2) by kolie on Wednesday December 18, @11:58PM

        by kolie (2622) Subscriber Badge on Wednesday December 18, @11:58PM (#1385825) Journal

        Cooling is the largest obstacle faced. Spot on here.

      • (Score: 3, Funny) by jelizondo on Thursday December 19, @12:20AM

        by jelizondo (653) Subscriber Badge on Thursday December 19, @12:20AM (#1385826) Journal

        "care" about what's to the N/E/S/W of you, you also have to care about what's above you

        And below you, you insensitive clod!

        Some of us live in the basement...
