
posted by cmn32480 on Monday July 25 2016, @06:57PM   Printer-friendly
from the all-good-things-must-come-to-an-end dept.

Submitted via IRC for TheMightyBuzzard

After more than 50 years of miniaturization, the transistor could stop shrinking in just five years. That is the prediction of the 2015 International Technology Roadmap for Semiconductors, which was officially released earlier this month.

After 2021, the report forecasts, it will no longer be economically desirable for companies to continue to shrink the dimensions of transistors in microprocessors. Instead, chip manufacturers will turn to other means of boosting density, namely turning the transistor from a horizontal to a vertical geometry and building multiple layers of circuitry, one on top of another.

For some, this change will likely be interpreted as another death knell for Moore's Law, the repeated doubling of transistor densities that has given us the extraordinarily capable computers we have today. Compounding the drama is the fact that this is the last ITRS roadmap, the end to a more-than-20-year-old coordinated planning effort that began in the United States and was then expanded to include the rest of the world.

[...]

This final ITRS report is titled ITRS 2.0. The name reflects the idea that improvements in computing are no longer driven from the bottom-up, by tinier switches and denser or faster memories. Instead, it takes a more top-down approach, focusing on the applications that now drive chip design, such as data centers, the Internet of Things, and mobile gadgets.

Source: http://spectrum.ieee.org/tech-talk/computing/hardware/transistors-will-stop-shrinking-in-2021-moores-law-roadmap-predicts


Original Submission

  • (Score: 4, Insightful) by jmorris on Tuesday July 26 2016, @12:01AM

    by jmorris (4844) on Tuesday July 26 2016, @12:01AM (#380105)

    The problem is bigger than just hitting a bottom limit on shrinking. They have also hit an upper limit on clock speed before you get into space-heater territory; an upper limit on how many transistors you can put into a single CPU core before you pass so far beyond the point of diminishing returns that it becomes pointless; and, the big one, diminishing returns on cache to make up for the horrible mismatch in speed between a modern CPU and RAM. (HBM only mitigates that last problem; it doesn't eliminate it.) There are only two ways to gain performance now, and both depend on changes in how we create software:

    1. Widespread and pervasive multi-threading, to make it practical to ship consumer chips with dozens of cores for computation instead of just in the GPU. There is a reason current desktop CPUs have a feature to power down the extra cores so one can run faster.

    2. Abandon the 'who cares' attitude to inefficient programming practices and start tweaking the crap out of performance-critical code: hand-tuned assembly, clock counting, carefully accounting for cache misses, pipeline stalls, the works. The shell can still be slapped together in a scripting language, but only one that makes it easy to link in libraries written in real languages to do the heavy lifting.
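    A minimal sketch of the kind of cache-aware tuning point 2 describes (my own illustrative example, not from the comment): the same matrix sum written with two different memory access patterns. Both loops are correct; on large inputs the unit-stride version runs much faster purely because of cache behavior.

    ```cpp
    #include <cstddef>
    #include <vector>

    // Sums a row-major matrix two ways. The row-order walk touches memory
    // sequentially, so every cache line fetched is fully used; the
    // column-order walk strides by `cols` elements and misses far more often
    // on large matrices. Same result, different memory access pattern.
    long long sum_row_order(const std::vector<long long>& m,
                            std::size_t rows, std::size_t cols) {
        long long s = 0;
        for (std::size_t r = 0; r < rows; ++r)
            for (std::size_t c = 0; c < cols; ++c)
                s += m[r * cols + c];  // unit stride: cache friendly
        return s;
    }

    long long sum_col_order(const std::vector<long long>& m,
                            std::size_t rows, std::size_t cols) {
        long long s = 0;
        for (std::size_t c = 0; c < cols; ++c)
            for (std::size_t r = 0; r < rows; ++r)
                s += m[r * cols + c];  // stride of `cols`: cache hostile
        return s;
    }
    ```

    The function names are hypothetical; the point is that this level of tuning requires knowing how the data is laid out in memory, which is exactly the discipline the comment says most code now skips.
    
    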

    Either way it means a revolution in how we hire and train programmers. In reality we probably have to do a little of both.
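    Point 1 can be sketched with standard C++ threads (an illustrative example of mine, assuming a simple reduction workload): partition the work so each core reduces its own slice with no shared mutable state, which is the shape of code that scales to dozens of cores.

    ```cpp
    #include <algorithm>
    #include <cstddef>
    #include <numeric>
    #include <thread>
    #include <vector>

    // Splits a summation across worker threads. Each thread accumulates its
    // slice into a private slot, so there is no locking and no contention;
    // the partial results are combined after all workers join.
    long long parallel_sum(const std::vector<long long>& data,
                           unsigned nthreads) {
        std::vector<long long> partial(nthreads, 0);
        std::vector<std::thread> workers;
        std::size_t chunk = (data.size() + nthreads - 1) / nthreads;
        for (unsigned t = 0; t < nthreads; ++t) {
            workers.emplace_back([&, t] {
                std::size_t begin = std::min(data.size(), t * chunk);
                std::size_t end   = std::min(data.size(), begin + chunk);
                partial[t] = std::accumulate(data.begin() + begin,
                                             data.begin() + end, 0LL);
            });
        }
        for (auto& w : workers) w.join();
        return std::accumulate(partial.begin(), partial.end(), 0LL);
    }
    ```

    The design choice worth noting is the per-thread `partial` slot: shared counters with locks would serialize the very cores the comment wants to exploit.
    
    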

  • (Score: 3, Insightful) by migz on Tuesday July 26 2016, @06:01AM

    by migz (1807) on Tuesday July 26 2016, @06:01AM (#380205)

    The problem with tweaking the crap out of performance-critical code is that today's CPUs are so complex (and our algorithms too) that humans (even the shit-hot ones) struggle to produce better code than the compiler. Try doing a better job than HotSpot on a 64-bit processor. HotSpot has more metrics than you can shake a stick at, and its optimization algorithms would take you a lifetime to replicate.

    Programmers are not trained; they train themselves. Otherwise you hired a code monkey. You don't need a revolution: programmers can tell the difference, so start using them to screen applicants. Few code monkeys ever progress to being programmers. Stop paying them the same as devs, and pay us more.

    • (Score: 3, Interesting) by jmorris on Tuesday July 26 2016, @12:00PM

      by jmorris (4844) on Tuesday July 26 2016, @12:00PM (#380257)

      You buy the myth that Java has ever approached, or ever can approach, native speed, yet you look down on 'code monkeys.' Uh huh. Yeah, let's listen to your advice.

      And to get to the root of your argument: the overly complex, not-understandable-by-humans CPU is the root of the power consumption problem. It is the end product of throwing another billion transistors at squeezing another couple of percent out of a single core after you hit the thermal wall and can't crank the clock again without water cooling. It is going to go away because it is another dead end, just like the P4. The future is performance per watt and performance per square millimeter of die, so more cores can be stuffed in before it melts.