posted by takyon on Thursday February 11 2016, @06:22PM   Printer-friendly
from the where-is-mccoy-when-you-need-him dept.

Moore's Law is named for Gordon Moore, co-founder of Intel Corporation, who in a 1965 paper famously observed that component densities on integrated circuits double roughly every twelve months; he amended that figure in 1975 to a doubling every 24 months. Since then, the chip industry has largely borne out his observation. Still, there are those who claim that Moore's Law is dying, just as many have claimed before.
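For a sense of what that doubling rate implies, here is a minimal back-of-the-envelope sketch in C. It assumes the commonly cited 2,300-transistor Intel 4004 of 1971 as a baseline and a clean 24-month doubling; the numbers are illustrative only, not figures from Moore's paper or any product roadmap.

    /* Toy projection of transistor counts under a strict 24-month doubling.
     * Baseline (2,300 transistors, 1971) is the commonly cited Intel 4004;
     * everything else is arithmetic, not data. */
    #include <stdio.h>
    #include <math.h>

    int main(void) {
        const double start_year = 1971.0;
        const double start_transistors = 2300.0;
        const double doubling_period = 2.0;   /* years, per the 1975 revision */

        for (int year = 1975; year <= 2015; year += 10) {
            double doublings = (year - start_year) / doubling_period;
            double projected = start_transistors * pow(2.0, doublings);
            printf("%d: ~%.2e transistors\n", year, projected);
        }
        return 0;
    }

By 2015 the projection lands near ten billion transistors, which is roughly where the largest shipping chips actually sit, and is why the "law" held up as long as it did.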

This time, however, Peter Bright over at Ars Technica reports a change in focus for the chip industry away from chasing Moore's Law. From the article:

Gordon Moore's observation was not driven by any particular scientific or engineering necessity. It was a reflection on just how things happened to turn out. The silicon chip industry took note and started using it not merely as a descriptive, predictive observation, but as a prescriptive, positive law: a target that the entire industry should hit.

Apparently, the industry isn't going to keep trying to hit that particular target moving forward, as we've seen with the recent delay of Intel's 10nm Cannonlake chips. This is for several reasons:

In the 2000s, it was clear that this geometric scaling was at an end, but various technical measures were devised to keep pace of the Moore's law curves. At 90nm, strained silicon was introduced; at 45nm, new materials to increase the capacitance of each transistor layered on the silicon were introduced. At 22nm, tri-gate transistors maintained the scaling.

But even these new techniques were up against a wall. The photolithography process used to transfer the chip patterns to the silicon wafer has been under considerable pressure: currently, light with a 193 nanometre wavelength is used to create chips with features just 14 nanometres. The oversized light wavelength is not insurmountable but adds extra complexity and cost to the manufacturing process. It has long been hoped that extreme UV (EUV), with a 13.5nm wavelength, will ease this constraint, but production-ready EUV technology has proven difficult to engineer.

Even with EUV, it's unclear just how much further scaling is even possible; at 2nm, transistors would be just 10 atoms wide, and it's unlikely that they'd operate reliably at such a small scale. Even if these problems were resolved, the specter of power usage and dissipation looms large: as the transistors are packed ever tighter, dissipating the energy that they use becomes ever harder.

The new techniques, such as strained silicon and tri-gate transistors, took more than a decade to put in production. EUV has been talked about for longer still. There's also a significant cost factor. There's a kind of undesired counterpart to Moore's law, Rock's law, which observes that the cost of a chip fabrication plant doubles every 4 years. Technology may provide ways to further increase the number of transistors packed into a chip, but the manufacturing facilities to build these chips may be prohibitively expensive—a situation compounded by the growing use of smaller, cheaper processors.

The article goes on to discuss how the industry will focus moving forward:


These difficulties mean that the Moore's law-driven roadmap is now at an end. ITRS decided in 2014 that its next roadmap would no longer be beholden to Moore's "law," and Nature writes that the next ITRS roadmap, published next month, will instead take a different approach.

Rather than focus on the technology used in the chips, the new roadmap will take an approach it describes as "More than Moore." The growth of smartphones and Internet of Things, for example, means that a diverse array of sensors and low power processors are now of great importance to chip companies. The highly integrated chips used in these devices mean that it's desirable to build processors that aren't just logic and cache, but which also include RAM, power regulation, analog components for GPS, cellular, and Wi-Fi radios, or even microelectromechanical components such as gyroscopes and accelerometers.

So what say you, Soylentils? Is Moore's Law really dead, or is this just another round of hyperbole?

Related Stories

Intel's "Tick-Tock" Strategy Stalls, 10nm Chips Delayed 37 comments

Intel's "Tick-Tock" strategy of micro-architectural changes followed by die shrinks has officially stalled. Although Haswell and Broadwell chips have experienced delays, and Broadwell desktop chips have been overshadowed by Skylake, delays in introducing 10nm process node chips have resulted in Intel's famously optimistic roadmap missing its targets by about a whole year. 10nm Cannonlake chips were set to begin volume production in late 2016, but are now scheduled for the second half of 2017. In its place, a third generation of 14nm chips named "Kaby Lake" will be launched. It is unclear what improvements Kaby Lake will bring over Skylake.

Intel will not be relying on the long-delayed extreme ultraviolet (EUV) lithography to make 10nm chips. The company's revenues for the last quarter were better than expected, despite the decline of the PC market. Intel's CEO revealed the stopgap 14nm generation at the Q2 2015 earnings call:

"The lithography is continuing to get more difficult as you try and scale and the number of multi-pattern steps you have to do is increasing," [Intel CEO Brian Krzanich] said, adding, "This is the longest period of time without a lithography node change."

[...] But Krzanich seemed confident that letting up on the gas, at least for now, is the right move – with the understanding that Intel will aim to get back onto its customary two-year cycle as soon as possible. "Our customers said, 'Look, we really want you to be predictable. That's as important as getting to that leading edge'," Krzanich said during Wednesday's earnings call. "We chose to actually just go ahead and insert – since nothing else had changed – insert this third wave [with Kaby Lake]. When we go from 10-nanometer to 7-nanometer, it will be another set of parameters that we'll reevaluate this."

Intel Roadmap

Year   Old roadmap       New roadmap
2014   14nm Broadwell    14nm Broadwell
2015   14nm Skylake      14nm Skylake
2016   10nm Cannonlake   14nm Kaby Lake
2017   10nm "Tock"       10nm Cannonlake
2018   N/A               10nm "Tock"


Original Submission

This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 3, Interesting) by fnj on Thursday February 11 2016, @06:39PM

    by fnj (1654) on Thursday February 11 2016, @06:39PM (#302866)

    This may be technically naive, but why couldn't they keep up or even greatly accelerate the increase in density by going vertical?

    • (Score: 1) by arcz on Thursday February 11 2016, @06:43PM

      by arcz (4501) on Thursday February 11 2016, @06:43PM (#302872) Journal

      Possible, but cooling would be very challenging. I suspect that would only work with very low clock frequencies.

    • (Score: 4, Interesting) by takyon on Thursday February 11 2016, @06:46PM

      by takyon (881) <takyonNO@SPAMsoylentnews.org> on Thursday February 11 2016, @06:46PM (#302875) Journal

      Heat dissipation. It is possible to go vertical with NAND [soylentnews.org] and DRAM [soylentnews.org] because they use less power among other factors.

      Some 3D CPU chips have been made in the lab, but it is still a holy grail for perpetuating Moore's law. A "mere" 100-layer CPU would keep us busy for years, and it could be possible to scale up to 10,000s or more layers.

      --
      [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
      • (Score: 3, Interesting) by bob_super on Thursday February 11 2016, @07:02PM

        by bob_super (1357) on Thursday February 11 2016, @07:02PM (#302890)

        IBM was working on getting cooling channels "horizontally" through the die so that you can scale by going 3D with TSVs. Not sure how many years it will take to make it reliable and affordable, but it sounds like a good approach to me.

        • (Score: 2) by takyon on Thursday February 11 2016, @07:08PM

          by takyon (881) <takyonNO@SPAMsoylentnews.org> on Thursday February 11 2016, @07:08PM (#302894) Journal

          it sounds like a good approach to me

          It's a long way from "good approach" to getting out of the lab and getting sold. That's far different than the CPUs that are sold now, and much harder to make work than 3D memory and storage.

          Even the minor improvements that Intel, AMD and others need are failing hard. 450mm wafer yield isn't good enough, and EUV doesn't work fast enough. There are already heat problems with these 2D planar chips. Intel ditched its fully integrated voltage regulator (FIVR) at least for a couple of years due to heat problems. TSV with some cooling channels isn't going to cut it alone, or it would be on the market next year.

          --
          [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
        • (Score: 2) by sjames on Thursday February 11 2016, @11:52PM

          by sjames (2882) on Thursday February 11 2016, @11:52PM (#303016) Journal

          It MIGHT work, but there's a long way to go. Keep in mind that at the necessary scale, the channels will look more like an ultra fine filter membrane than a channel from the perspective of the coolant molecules.

      • (Score: 2) by JoeMerchant on Thursday February 11 2016, @10:51PM

        by JoeMerchant (3937) on Thursday February 11 2016, @10:51PM (#302996)

        I think the main problem with an active 3D CPU is that the cooling isn't worth the cost/complexity. What's the application? Does your average cubicle drone require 1000 cores in order to process their 10x200 Excel spreadsheet with pie charts?

        Developing a reliable, mass production 3D fab would be quite a bit more expensive than the current 14nm process, involve serious technical/financial risk, and address very limited markets.

        --
        🌻🌻 [google.com]
        • (Score: 2) by legont on Friday February 12 2016, @01:02AM

          by legont (4179) on Friday February 12 2016, @01:02AM (#303045)

          Actually, yes, I want many cores, hopefully in the millions. I want to run neural networks on my devices - tired of their stupidity, you know. For example, I'd appreciate my toilet analysing my pee and poo and dispensing a precise amount of expensive water while suggesting diet adjustments. Cooking it while I am still reading slashdot in there would help as well.

          --
          "Wealth is the relentless enemy of understanding" - John Kenneth Galbraith.
          • (Score: 2) by JoeMerchant on Friday February 12 2016, @02:39AM

            by JoeMerchant (3937) on Friday February 12 2016, @02:39AM (#303058)

            Well, when Skynet emerges on one of the many thousand super-computers today, if it is benign and we want lots of copies of it, it can help design affordable mass production versions of itself. Until then, just building massively parallel cluster machines isn't doing much to bring the "significant advance in AI that's just 5 years away" any closer than it was in 1985.

            --
            🌻🌻 [google.com]
            • (Score: 2) by legont on Friday February 12 2016, @08:25AM

              by legont (4179) on Friday February 12 2016, @08:25AM (#303130)

              I think millions of cores are still too few for a true AI, but regardless... there are many applications that would benefit greatly from a massive multicore environment, especially if we replace bloated kernels with something like QNX. For starters, I want a process - any process - to have its own core plus some in reserve. I mean, no matter how I upgrade the hardware, firefox is slowing me down because some page in a tab wants to do something when it feels like it. The easiest solution is to let it have its own core. And don't tell me please to close the tabs. All the tabs I am interested in should be opened all the time because it just takes too long to open them again because of the server issues. OK, so we are talking about hundreds of cores already.

              Next - a little AI. Not true one, but a little. Nice talking assistant, local image recognition from web pages on the fly; things like that. I don't want to even notice my laptop doing it - no performance degradation. All such algorithms love multicore. Is it too much to ask? I'd be glad to pay $100 extra for such a thing.

              Finally, I like playing with numbers - simulations, Monte Carlo, - things like that. It adds up rather fast.

              --
              "Wealth is the relentless enemy of understanding" - John Kenneth Galbraith.
              • (Score: 2) by JoeMerchant on Friday February 12 2016, @02:06PM

                by JoeMerchant (3937) on Friday February 12 2016, @02:06PM (#303190)

                So, at $100 extra, you've just bought the incremental upgrade from an i3 to an i5...

                Massively multicore, achieved by stacking layers of silicon, would put your laptop out of the price range of laptops, at $10K+, for at least the first 10 years of development and refinement of the tech - which is why it hasn't happened.

                If battlefield soldiers have a real need for this kind of multicore tech, it might get developed, and economies of scale might eventually reduce it to consumer pricing, but nobody needs 100 tabs open on the battlefield; that's a want, and a distraction for people with important things to pay attention to. The people in that scenario who might need image recognition running on thousands of video feeds simultaneously are still located in "central command" where they can use traditional big iron to do the processing; no need to trade three aircraft carriers for the ability to run that processing in the field. And even if you had it for free in money terms, it would still need to be vehicle mounted to supply the necessary power - nobody's got capacity to carry that much battery in their rucksack.

                --
                🌻🌻 [google.com]
                • (Score: 2) by legont on Friday February 12 2016, @06:08PM

                  by legont (4179) on Friday February 12 2016, @06:08PM (#303331)

                  I agree with you, but even for a soldier, image recognition is a bottleneck, I think. Rifles that would know if the target is friendly or not just by looking at an infrared image, for example? Perhaps boots could analyse DNA in real time? I trust the software could be written rather fast if the hardware were to become available.

                  My point was that it is unlikely that Moore's law is dead. Not from the demand side anyway. It may have to take a break for a cycle or two though.

                  --
                  "Wealth is the relentless enemy of understanding" - John Kenneth Galbraith.
                  • (Score: 2) by JoeMerchant on Friday February 12 2016, @10:37PM

                    by JoeMerchant (3937) on Friday February 12 2016, @10:37PM (#303475)

                    Boots recognizing DNA - that makes a sick image of a kick in the face...

                    Image recognition is progressing, but even with unlimited compute power behind it, it's not reliable enough to make life/death decisions - maybe as an adviser, but it would be too easy to slip on a captured uniform to get "friendly" marking from the AI... All in all, dropping 5 lbs out of the kit and having your infantry that much less fatigued is probably better for friendly fire casualty rates than having a "super recognizer scope" on the rifle.

                    Still, if it were free, I'd use it - as evidenced by the $70 Raspberry Pis stacking up around my desk...

                    --
                    🌻🌻 [google.com]
    • (Score: 1, Interesting) by Anonymous Coward on Thursday February 11 2016, @07:11PM

      by Anonymous Coward on Thursday February 11 2016, @07:11PM (#302895)

      Years ago, I read an article about stacked or combination parallel/series processors, where one or more processors sat on top of multiprocessors. One of the examples was a single core stacked on top of a 64-core part, or something to that effect. I believe the term was "matchbook supercomputer" or similar. It was supposed to rival Cray supercomputers on a board the size of a matchbook.

      • (Score: 0) by Anonymous Coward on Thursday February 11 2016, @09:04PM

        by Anonymous Coward on Thursday February 11 2016, @09:04PM (#302956)

        That sounds similar to a couple things modern processors do:

        1. There's ARM's big.LITTLE [wikipedia.org], which uses larger and smaller cores for power scaling. For a processor-intensive task, it's sometimes more power efficient to use a more power-hungry core for a shorter period of time; the idea is to include the high-performance and high-efficiency cores in the same package so the OS can choose (a rough sketch of that trade-off follows below). Pretty much all modern smartphones use this model.
        2. Also, most processors today have more complicated general-purpose cores in addition to many simpler GPU cores, which are good at parallelizing certain types of computations [wikipedia.org].
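        A minimal sketch of that "race to idle" arithmetic, in C. Every figure below is invented purely for illustration (no real big.LITTLE core draws exactly these numbers), and real schedulers weigh far more than active power, but it shows why the faster, hungrier core can still win on energy:

            /* Compare energy = power x time for a fast "big" core and a slow
             * "LITTLE" core finishing the same fixed amount of work.
             * All figures are made-up illustrative values. */
            #include <stdio.h>

            int main(void) {
                double work = 1.0e9;          /* abstract operations to finish */

                double big_power    = 2.0;    /* watts while active (assumed)  */
                double big_rate     = 2.0e9;  /* ops per second (assumed)      */
                double little_power = 0.4;    /* watts while active (assumed)  */
                double little_rate  = 0.25e9; /* ops per second (assumed)      */

                double big_energy    = big_power    * (work / big_rate);    /* 1.0 J */
                double little_energy = little_power * (work / little_rate); /* 1.6 J */

                printf("big core:    %.2f J\n", big_energy);
                printf("LITTLE core: %.2f J\n", little_energy);
                return 0;
            }

        With these made-up numbers the big core finishes in half a second and still uses less energy; flip the ratios and the LITTLE core wins, which is exactly why the OS gets to choose.
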
        • (Score: 0) by Anonymous Coward on Thursday February 11 2016, @09:12PM

          by Anonymous Coward on Thursday February 11 2016, @09:12PM (#302960)

          No, the article implied that the "stacked" processor was basically a controller for the rest of the cores. It would distribute the load to the rest of the cores, in parallel.

    • (Score: 3, Interesting) by HiThere on Thursday February 11 2016, @08:03PM

      by HiThere (866) Subscriber Badge on Thursday February 11 2016, @08:03PM (#302926) Journal

      This has already been done to a limited extent, but as others have remarked, heat dissipation becomes a real problem. A couple of decades ago a lab built a genuine 3-D chip, but it needed built-in silver bus-bars to conduct out the heat, and was thus expensive, impractical, and still limited.

      OTOH, if you're building a computer immersed in liquid helium, then this isn't a barrier. So quantum computers, if they take off, may head in this direction. (OTOH, I don't know that liquid nitrogen would work as well as liquid helium. Liquid helium conducts heat fantastically well.)

      The other possibility is to develop an entire new set of materials designed to work at high temperatures. That could be exceptionally expensive, but it might pay for itself quite well over the centuries...probably not, however, over only a few decades.

      --
      Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
    • (Score: 2) by forkazoo on Thursday February 11 2016, @11:34PM

      by forkazoo (2561) on Thursday February 11 2016, @11:34PM (#303013)

      Not naive at all. It's just a really hard problem, exacerbated by decades of experience not doing it that way. Flash memory for storage and HBM for graphics cards are leading the way to 3D manufacturing, but it's currently all done with "die stacking." Which is to say, you make a planar 2D chip, and stack it on top of a bunch of similar ones and make a little tower of a certain number of distinct layers. Our planar manufacturing technology is really sophisticated and refined, so this works really well. There are probably some cases where a "true" 3D manufacturing process would be able to make some much more interesting designs with transistors oriented in arbitrary directions and the like. But building a solid block of "computite" with useful density the same along every axis is going to take a few major revolutions in technology, and quite a lot of refinement and productisation, and rethinking of how computation equipment is planned and designed.

      And even if you do that, if it works like current chips, you have a massive amount of heat to deal with, and you probably just melt the inside of the thing. So you probably need some sort of microfluidics doing clever things to manage heat with channels integrated into the circuitry, and designs that trade clock speed for volume to be fewer watts per unit of performance, etc. So everybody agrees on "vertical" but nobody knows exactly what that looks like 10, 15, 20 years from now.

    • (Score: 2) by TheRaven on Friday February 12 2016, @10:28AM

      by TheRaven (270) on Friday February 12 2016, @10:28AM (#303143) Journal

      The key phrase is Dennard scaling [wikipedia.org]. This meant that every time you decreased the feature size, you got an increase in per-transistor power efficiency. It also meant that if you took a CPU design from a few generations ago and fabbed it with the current technology, you'd end up with something that could run at lower power than the original (or at higher clock frequencies). Note the past tense: around 2007, this stopped happening, and new processes let you squeeze more transistors onto the chip but increased the power budget proportionally. This is where the idea of dark silicon comes from: you can keep putting more transistors on the chip, but you can't afford to power any more of them at any given time.

      This means that there's an industry trend towards sticking more obscure features on the chip. For example, a load of TI SoCs come with face recognition hardware as a fixed-function logic block. This is much more power efficient than doing the same thing in software on an ARM core, but is turned off 99% of the time when you're not taking photos. There are lots of similar blocks on a typical SoC and even x86 chips have pipelines that are rarely used.

      This means that Moore's law, even if you could continue it, is not actually very useful. Doubling the number of transistors that CPU designers have to play with, but limiting them to only using a fraction of them at a time, is increasingly going to hit diminishing returns.
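      A back-of-the-envelope sketch of that Dennard-scaling arithmetic, in C. The scale factors are the idealized textbook ones (dimensions, voltage, and capacitance all shrink by k, frequency rises by k), not measured process data:

          /* Dynamic power per transistor ~ C * V^2 * f; power density is that
           * times transistor density.  Under classic Dennard scaling the terms
           * cancel and density stays flat; once voltage stops scaling (~2007),
           * it no longer does.  Idealized factors, not measurements. */
          #include <stdio.h>

          static double power_density(double c, double v, double f, double d) {
              return c * v * v * f * d;   /* arbitrary units, baseline = 1.0 */
          }

          int main(void) {
              double k = 1.4;   /* one process generation: linear dimensions / k */

              /* Classic Dennard generation: C/k, V/k, f*k, density*k^2 */
              double dennard = power_density(1.0 / k, 1.0 / k, k, k * k);

              /* Post-2007 generation: C/k, V flat, f flat, density*k^2 */
              double post = power_density(1.0 / k, 1.0, 1.0, k * k);

              printf("baseline:         1.00\n");
              printf("Dennard gen:      %.2f\n", dennard);  /* ~1.00: flat     */
              printf("post-Dennard gen: %.2f\n", post);     /* >1.00: runs hot */
              return 0;
          }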

      --
      sudo mod me up
  • (Score: 2) by meustrus on Thursday February 11 2016, @06:40PM

    by meustrus (4961) on Thursday February 11 2016, @06:40PM (#302868)

    Don't discount the ingenuity of computer engineers with decades of experience competing with each other to sell to a ravenous market that is still growing. Nobody will know it's dead until it's dead.

    --
    If there isn't at least one reference or primary source, it's not +1 Informative. Maybe the underused +1 Interesting?
    • (Score: 3, Insightful) by takyon on Thursday February 11 2016, @06:53PM

      by takyon (881) <takyonNO@SPAMsoylentnews.org> on Thursday February 11 2016, @06:53PM (#302881) Journal

      I have seen discussion of chips that could take advantage of quantum tunneling, making it an asset rather than a current-leakage liability. Such a development could make stacked chips easier. Throw in an increase to THz clock rates, and we could see a 1,000,000x improvement in performance for plain classical computing. Neuromorphic, optical, quantum, and other approaches could also emerge.

      It is a good thing that the economics of Moore's law are failing. It will force innovation that doesn't rely simply on scaling down, like the idea above, or SiGe instead of Si, carbon nanotubes, whatever.

      --
      [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
      • (Score: 2) by meustrus on Thursday February 11 2016, @08:04PM

        by meustrus (4961) on Thursday February 11 2016, @08:04PM (#302927)

        But Moore's law has already "failed" numerous times. The article has many examples of barriers once thought insurmountable to die shrinkage. We did not simply push through those barriers; we worked around them. Maybe one of these days the barrier will lead us to need a different metric than die size. That day, however, is not today.

        --
        If there isn't at least one reference or primary source, it's not +1 Informative. Maybe the underused +1 Interesting?
      • (Score: 2) by TheLink on Friday February 12 2016, @04:57PM

        by TheLink (332) on Friday February 12 2016, @04:57PM (#303301) Journal

        Could you have chips that take advantage of quantumness? e.g. you do multiple speculative calculations at once and collapse on the best result when the final input is decided.

        Analogous to finding the best path for transferring solar energy in photosynthesis: http://www.scientificamerican.com/article/shining-a-light-on-plants-quantum-secret/ [scientificamerican.com]

        And if plants can transfer energy like that, can we transfer information in similar ways?

  • (Score: 4, Interesting) by arcz on Thursday February 11 2016, @06:41PM

    by arcz (4501) on Thursday February 11 2016, @06:41PM (#302869) Journal

    At a certain point, it has to die. We can't start splitting atoms; we'd be building nukes instead of microprocessors!
    But, there is plenty more scaling ahead. We will see transitions from Si to SiGe materials. 4 Gigahertz? Pffft. How about 4 Terahertz!
    Plenty more scaling of speed available, just not through gate density.

    • (Score: 4, Interesting) by gman003 on Thursday February 11 2016, @07:03PM

      by gman003 (4155) on Thursday February 11 2016, @07:03PM (#302891)

      Also worth noting is that the original formulation was "number of transistors in a cost-effective processor", not "number of transistors per unit area". While we've mostly been making gains by just shrinking the transistors, when we started slowing on that, we started making headway on making larger dies cost-effectively (most notably on current-gen 28nm GPU dies, which are far larger than previous-gen or pre-previous-gen 28nm GPU dies, but had limited cost increases).

      Die size increases won't automatically buy us higher clocks or lower power, the way feature shrinks did, but it will let us increase performance for quite some time still. Eventually we might have limits of physical size from the packaging, but imagine an SoC the size of a current Nano-ITX motherboard. That would be quite powerful even at 20nm.

    • (Score: 2) by HiThere on Thursday February 11 2016, @08:07PM

      by HiThere (866) Subscriber Badge on Thursday February 11 2016, @08:07PM (#302930) Journal

      I think with tetrahertz cycles you need to be using waveguides rather than wires, so it might make more sense to just jump directly to optical computing. Which is already being worked on in a sporadic sort of way.

      --
      Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
      • (Score: 3, Interesting) by Foobar Bazbot on Thursday February 11 2016, @11:06PM

        by Foobar Bazbot (37) on Thursday February 11 2016, @11:06PM (#303000) Journal

        Frequency of 1 tetrahertz means a wavelength of 75 Mm (75,000 km) in vacuum, or maybe 25 Mm in copper. Normally you need proper waveguides and transmission lines when the wavelength is not orders of magnitude bigger than the system you're building, so I'm wondering just how big this CPU you're building is...
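        For anyone checking the arithmetic, wavelength is just c divided by frequency; a throwaway C sketch (vacuum values only):

            /* Wavelength = c / f.  "Tetrahertz" read literally as 4 Hz gives
             * roughly 75,000 km; an actual 1 THz signal gives ~0.3 mm. */
            #include <stdio.h>

            int main(void) {
                const double c = 299792458.0;               /* m/s in vacuum */
                printf("4 Hz  -> %.0f km\n", c / 4.0 / 1000.0);     /* ~74948 km */
                printf("1 THz -> %.3f mm\n", c / 1.0e12 * 1000.0);  /* ~0.300 mm */
                return 0;
            }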

        • (Score: 2) by deimtee on Friday February 12 2016, @11:35AM

          by deimtee (3272) on Friday February 12 2016, @11:35AM (#303156) Journal

          I think you are picking on HiThere for a typo. He obviously meant terahertz.
          Also, a tetrahertz is one cycle per second, so has a wavelength of 299 792 458 metres (in vacuum).

          --
          If you cough while drinking cheap red wine it really cleans out your sinuses.
          • (Score: 2) by Foobar Bazbot on Friday February 12 2016, @01:47PM

            by Foobar Bazbot (37) on Friday February 12 2016, @01:47PM (#303177) Journal

            I think you are picking on HiThere for a typo. He obviously meant terahertz.

            Hmmm, possibly. ;)

            Also, a tetrahertz is one cycle per second,

            No, 1 Hz = 1 cycle/s
            1 Tetrahertz = 4 Hz = 4 cycle/s

            • (Score: 2) by deimtee on Sunday February 14 2016, @02:29AM

              by deimtee (3272) on Sunday February 14 2016, @02:29AM (#303924) Journal

              No, 1 Hz = 1 cycle/s
              1 Tetrahertz = 4 Hz = 4 cycle/s

              Damn. That's what I get for reading the short blurbs on "search google for tetrahertz" without really thinking about it. I just figured it was some obscure medical usage.

              Tetrahertz - definition of Tetrahertz by The Free Dictionary
              www.thefreedictionary.com/Tetrahertz - Cached - Similar
              A unit of frequency equal to one cycle per second. See Table at measurement. [ After Heinrich Rudolf Hertz.] American Heritage® Dictionary of the English ...

              Tetrahertz | definition of Tetrahertz by Medical dictionary
              medical-dictionary.thefreedictionary.com/Tetrahertz - Cached
              the SI unit of frequency, equal to one cycle per second. Miller-Keane Encyclopedia and Dictionary of Medicine, Nursing, and Allied Health, Seventh Edition.

              If you actually follow the links they redirect to Hertz. Bastards.

              --
              If you cough while drinking cheap red wine it really cleans out your sinuses.
          • (Score: 2) by HiThere on Friday February 12 2016, @07:48PM

            by HiThere (866) Subscriber Badge on Friday February 12 2016, @07:48PM (#303371) Journal

            Well.....you're right that conceptually I meant terahertz. I.e., I meant the wavelengths just below infrared, but you're also wrong, because I didn't even know that terahertz was a word, and thought the word was tetrahertz. Blame a mild dyslexia combined with not figuring out what the prefixes meant.

            So I should have been ... commented on for the mistake. It wasn't just a typo.

            OTOH, it would have been nice if Foobar had commented on the confusion...if he noticed it. Thank you for doing so.

            --
            Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
    • (Score: 0) by Anonymous Coward on Friday February 12 2016, @12:33PM

      by Anonymous Coward on Friday February 12 2016, @12:33PM (#303164)

      it's just that Moore's law is about component density, not speed. Really, read Moore's article, it's not that hard ...

  • (Score: 5, Interesting) by mr_mischief on Thursday February 11 2016, @06:46PM

    by mr_mischief (4884) on Thursday February 11 2016, @06:46PM (#302876)

    The eventual shift will not just be a new semiconductor foundation. It will be to on-chip photonics. You don't have to dissipate heat you don't create via electrical resistance.

    • (Score: 2) by takyon on Thursday February 11 2016, @06:56PM

      by takyon (881) <takyonNO@SPAMsoylentnews.org> on Thursday February 11 2016, @06:56PM (#302883) Journal
      --
      [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
      • (Score: 2) by legont on Friday February 12 2016, @01:50AM

        by legont (4179) on Friday February 12 2016, @01:50AM (#303051)

        Both appear to be specialised analog computers. They are usually fast but good for a specific set of tasks only. There were many of them in the early days.

        --
        "Wealth is the relentless enemy of understanding" - John Kenneth Galbraith.
    • (Score: 2) by bob_super on Thursday February 11 2016, @08:57PM

      by bob_super (1357) on Thursday February 11 2016, @08:57PM (#302954)

      Sure, but no.
      Like the waveguides also suggested above (which are actually already there and called wires), exotic connectivity solutions crash when faced with the realities of billions of connections of varied lengths flying past each other to reach billions of transistors.
      Replacing a few hundred thousand long-reach wires with photonics to reduce power is a drop in the bucket.

      • (Score: 2) by mr_mischief on Thursday February 11 2016, @09:08PM

        by mr_mischief (4884) on Thursday February 11 2016, @09:08PM (#302958)

        I'm not just talking about between transistors. The transistors are where the work is done. There are projects ongoing to develop optical switches, too.

  • (Score: 0) by Anonymous Coward on Thursday February 11 2016, @06:54PM

    by Anonymous Coward on Thursday February 11 2016, @06:54PM (#302882)

    What about the rest of the computer?

    4 years ago I got a HP ###1 desktop from Newegg. AMD A10, 10 GB RAM, 1 TB HD, low end AMD onboard graphics - $549

    Late last year when I went to replace it, I saw the HP ###2 - AMD A10, 8 GB RAM, 2 TB HD, same graphics chip - $649

    Every PC I've ever bought was at least twice as powerful as the one before it. But now 3 years of progress is giving me a slightly worse machine for more money.

    • (Score: 2) by takyon on Thursday February 11 2016, @07:03PM

      by takyon (881) <takyonNO@SPAMsoylentnews.org> on Thursday February 11 2016, @07:03PM (#302892) Journal

      You probably just didn't look hard enough for a good deal. Prices for pre-built desktops can be highly variable.

      Searching for AMD desktop [slickdeals.net]...

      Similar spec'd refurbished machine [slickdeals.net] for half the price. But I'll assume refurb doesn't count.

      Here we go [slickdeals.net]. A10-7800, 12 GB of RAM, 2 TB HDD, 802.11ac wireless, $400, at multiple stores.

      --
      [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
    • (Score: 0) by Anonymous Coward on Thursday February 11 2016, @07:20PM

      by Anonymous Coward on Thursday February 11 2016, @07:20PM (#302905)

      HP is the problem. I picked up a Shuttle PC with... i7, 16GB RAM, 4TB hard drive, added a GeForce GTX 750 Ti, all under $550. The Shuttles use a liquid cooling system; you couldn't fry them if you tried.

  • (Score: 0) by Anonymous Coward on Thursday February 11 2016, @07:04PM

    by Anonymous Coward on Thursday February 11 2016, @07:04PM (#302893)

    The shrinking size of the computer will just change to a growing size of the computer. The tech is cheap. I already have a Beowulf cluster that includes my workstations, phone, tablets, refrigerator, and toater. If I could crack the firmware, I'd add distributed processing to my light bulbs and set-top box. As density reaches its max and upgrading becomes obsolete, we'll instead simply add more units to the system; I look forward to houses built with permanent compute centers in the walls.

    The "Internet of things" is a clueless way of saying: Decentralized computing target.

    I'm glad we finally standardized on how big a byte is. Now C's moronic assumption that int can be any size is dead. Everyone can then use the fixed size data type. Big endian appears to be dead. Good riddance. We never needed two endians. The more standardization of hardware the better. Sadly, x86 has too much cruft to be the platform of the future. It's all microcode anyway, might as well reduce the microcode needed as a way to get better performance. An opcode standard is already being created for the Web. Holy shit, it's stupid, but it's in the right direction.

    You're fooling yourself if you think Moore's law is dead. It's not yet. He didn't say the boards had to be some fixed size, just that there would be more chips packed on them. And when we start having compute systems epoxied onto the boards in our walls for our smart homes, well, then "board" isn't limited to 2x4's, but sheets of particle board... maybe even endlessly 3D printed...

    • (Score: 0) by Anonymous Coward on Thursday February 11 2016, @07:18PM

      by Anonymous Coward on Thursday February 11 2016, @07:18PM (#302904)

      * toaster.

    • (Score: 0) by Anonymous Coward on Thursday February 11 2016, @08:00PM

      by Anonymous Coward on Thursday February 11 2016, @08:00PM (#302924)

      "int" is defined as the native word size of the processor you have compiled your code for. There is a good reason to be able to do that, and that reason is efficiency.

      Otherwise, you end up forcing your compiler to generate chunking routines to pack/unpack the defined (whatever that may be) "int" word size to the native integer word size on your CPU when it runs. Not efficient. Or, consider if you had defined "int" to be 32 bits long before 64 bit CPUs became common. Then you wouldn't be able to access the entire word length of your CPU's registers.

      The proper way to fix this "int" word size issue was done in the C99 language header file:

                http://pubs.opengroup.org/onlinepubs/009695399/basedefs/stdint.h.html [opengroup.org]

      That header defines "int" to be the default integer size of the target CPU architecture for which you are compiling. If you wish to specify an explicit integer size (for stricter data representation), you have that option by specifying int8_t, int16_t, int32_t, etc. Specifying signed versus unsigned int is done very compactly by using the types defined in stdint.h, which is another benefit: int8_t (signed) vs. uint8_t (unsigned), etc.
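      As a minimal, generic illustration of that approach (nothing here is specific to any particular project; the exact-width types and the PRI* format macros are standard C99):

          /* Plain int for "whatever the native word size is"; exact-width
           * types from <stdint.h> when the representation matters. */
          #include <stdint.h>
          #include <inttypes.h>
          #include <stdio.h>

          int main(void) {
              int      native = -1;         /* native word size of the target */
              int8_t   s8  = -128;          /* exactly 8 bits, signed         */
              uint8_t  u8  = 255;           /* exactly 8 bits, unsigned       */
              uint32_t u32 = 0xDEADBEEF;    /* exactly 32 bits, unsigned      */

              printf("int is %zu bytes on this target\n", sizeof native);
              printf("s8=%" PRId8 " u8=%" PRIu8 " u32=0x%" PRIX32 "\n", s8, u8, u32);
              return 0;
          }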

      • (Score: 1, Insightful) by Anonymous Coward on Thursday February 11 2016, @08:07PM

        by Anonymous Coward on Thursday February 11 2016, @08:07PM (#302929)

        Dumb Slashcode ate some of my comment text; it ate the name of the C99 header file: stdint.h

        Here's a question to the Soylent site maintainers: Why does Slashcode treat text inside angle brackets AS HTML IF I'M POSTING AS "Plain Old Text"?

        I see the "Extrans" post option, but that doesn't seem to make much sense if you have a "Plain Old Text" option. Maybe "Plain Old Text" should be renamed because it is not descriptive of what it really does.

        • (Score: 2) by takyon on Thursday February 11 2016, @08:41PM

          by takyon (881) <takyonNO@SPAMsoylentnews.org> on Thursday February 11 2016, @08:41PM (#302945) Journal

          Yeah the name is probably misleading. I post exclusively in Plain Old Text and use HTML tags all the time. I will ask for some feedback on this.

          --
          [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
      • (Score: 0) by Anonymous Coward on Thursday February 11 2016, @09:16PM

        by Anonymous Coward on Thursday February 11 2016, @09:16PM (#302964)

        Yes, but C provides NO FIXED TYPES, and that's dumb. Furthermore C doesn't even specify whether char is signed or unsigned -- it's implementation defined. Also dumb. Instead we can now say: A byte is 8 bits. A char is an unsigned byte, etc. If you want to use an unfixed size variable then we can support those with header files. My point is that C is a product of its environment, and that the language / hardware barrier does not exist, otherwise C would not implement stacks as growing towards zero. If the stack grew away from zero, new memory could be paged in when it reached the maximum without having to change all the pointer values. "Stack overflow" is created by C's reliance on enter / call semantics. Furthermore we should not have one stack for parameters and return pointers except that hardware dictated this, but that's foolish. There's no reason not to isolate the hardware

        stdint.h is not available on Microsoft's compilers... It came along much later. So, regardless of standardization, it's not standard in practice, but mostly because C is dumb -- literally ignorant about the current day. You can't blame them, no one could see the future, but now we all know that bytes are 8 bits. If interoperability and future proof code is your goal then having a fixed size byte, int, long int, etc. are very important. Otherwise, Java and all other VMs wouldn't standardize on fixed size data for their VMs, adding new types as needed later.

        It's time to go back to the drawing board whether you think so or not. The bad decisions of the past are hampering the future. Little vs big endian is stupid. There's not a good enough practical reason to opt for having both types of endians, big and little, in our hardware. When it comes down to it, the landscape is different. Network byte order chose the wrong fucking endian. Less than 1% of devices on the Internet use big endian. Think of all the wasted electricity alone on calls to convert endians in every packet of data on the internet.

        • (Score: 0) by Anonymous Coward on Thursday February 11 2016, @11:19PM

          by Anonymous Coward on Thursday February 11 2016, @11:19PM (#303007)

          Of all the problems facing us in computer systems, the ones you get worked up over are the most trivial and simple to deal with.

          The C language type word size and signed/unsigned issue has been solved with stdint.h. It's not my problem that Microsoft is too lame to implement already old standards like C99. I don't use their crappy compiler so it doesn't affect me or anyone else who uses a better compiler than MS's.

          As for big endian vs. little endian and it being dumb for network protocols to frequently be specified in big endian (so-called "network order"): when those protocols were created, most of your high end CPUs were big endian. With time, Intel's formerly toy processors gained capability and speed, and these cheap processors became dominant, dethroning the reigning big endian RISC processors like SPARC and others. It was an accident of history that little endian CPUs are now the most common on the Internet, because for a long time it was the opposite! You can't blame the Internet protocol authors for not foreseeing how that would turn out. And it doesn't matter anyway. Switching an int from big endian to little and vice versa on Intel x86 is done using a dedicated CPU instruction for this very task. It's as much of a non-issue as can be. If you are doing high performance numeric calculations with a lot of ints or floats, you are free to specify whichever endianness you wish in your code and avoid any conversion altogether.
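          A small sketch of that point in C: on POSIX systems the conversion is one standard call each way (htonl/ntohl), whichever endianness the host happens to use, so the cost is negligible in practice:

              /* Host <-> network (big-endian) conversion via htonl/ntohl.
               * These are POSIX calls from <arpa/inet.h>; on a big-endian
               * host they compile down to no-ops. */
              #include <stdio.h>
              #include <stdint.h>
              #include <arpa/inet.h>

              int main(void) {
                  uint32_t host = 0x11223344;
                  uint32_t wire = htonl(host);   /* host order -> network order */
                  uint32_t back = ntohl(wire);   /* and back again              */

                  printf("host 0x%08X -> wire 0x%08X -> host 0x%08X\n",
                         (unsigned)host, (unsigned)wire, (unsigned)back);
                  return 0;
              }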

          Another non-issue.

    • (Score: 2) by rondon on Thursday February 11 2016, @08:44PM

      by rondon (5167) on Thursday February 11 2016, @08:44PM (#302946)

      I want to mod this funny, but I'm not sure if AC is serious...

      I think I've been Poe'd

  • (Score: 0) by Anonymous Coward on Thursday February 11 2016, @07:14PM

    by Anonymous Coward on Thursday February 11 2016, @07:14PM (#302900)

    I believe it's mostly market-related forces that have slowed performance progress rather than inherent technology limitations.

    1. Desktop sales are stagnant, thanks to mobile device alternatives, which put more emphasis on power consumption than speed. A smaller market for performance-oriented users and software means less R&D devoted to speed.

    2. Multi-core is often a cheaper route to computation-intensive software than a faster single core. Thus, there is less pressure to have a single chip carry the entire load.

    3. Other components have become the bottleneck, such as storage and network. Putting most of the R&D on the non-bottlenecks may not be economical.

    • (Score: 2) by takyon on Thursday February 11 2016, @08:38PM

      by takyon (881) <takyonNO@SPAMsoylentnews.org> on Thursday February 11 2016, @08:38PM (#302943) Journal

      1. I don't think there's much indication that the ARM makers like GlobalFoundries want to be slower at shrinking process sizes than Intel. 14nm LPP (Low-Power Plus) chips are available [anandtech.com].

      2. Moore's law talks about transistors per area. Many things follow from that, such as lower power consumption, but better single-threaded performance is not necessarily one of them anymore. Single-threaded performance was already hurt when chips failed to scale to 10 GHz and beyond in the early 2000s, contrary to expectations. Core count increase is still a way to increase performance, and it still "counts" as a metric to measure progress. The big supercomputers handle millions of cores just fine.

      3. There are possible solutions for every bottleneck. SSDs have made a big difference, 3D XPoint and other post-NAND technologies may also have an impact (see HP's "The Machine" concept, which is now slated to use DRAM rather than memristors/post-NAND in early versions). Interconnects are improving, and on-chip optical components may improve the movement of data. Improvement in the CPU and storage may also help the interconnect/networking problem: 3D stacked memory/storage for instance, including memory placed on the CPU. This can be seen with Intel including High Bandwidth Memory in Xeon Phi processors and AMD likely doing the same in a year or two. The shorter distance between CPU and memory helps. In a speculative future of 3D/stacked CPUs, the massively increased core count and shorter distance between the cores could also help alleviate the interconnect bottleneck.

      --
      [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
  • (Score: 2, Insightful) by Anonymous Coward on Thursday February 11 2016, @07:57PM

    by Anonymous Coward on Thursday February 11 2016, @07:57PM (#302922)

    Moore himself modified the rate a decade after his original statement, from doubling every year to doubling every two years. With the latest observations, perhaps we can say that the rate itself shrinks over time. Rather than kill it, Moore's Law just needs an amendment to reflect trends that he himself hinted at even in 1975 when he "manually" reduced the rate. We can build this time shrinkage into the revised formula.
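    One toy way to write that amendment down, sketched in C: keep the doubling, but let the doubling period itself stretch a little each generation. The 5% stretch per doubling below is an arbitrary assumption chosen just to show the shape of such a formula, not a fitted value:

        /* "Amended Moore's law" toy model: transistor count still doubles,
         * but each doubling takes longer than the last.  Starting point and
         * stretch factor are assumptions for illustration only. */
        #include <stdio.h>

        int main(void) {
            double transistors = 2300.0;   /* assumed 1971-era baseline    */
            double year        = 1971.0;
            double period      = 2.0;      /* years per doubling, at first */
            const double stretch = 1.05;   /* each doubling takes 5% longer */

            while (year < 2030.0) {
                year        += period;
                transistors *= 2.0;
                period      *= stretch;
                printf("%.0f: ~%.2e transistors, next doubling in %.2f years\n",
                       year, transistors, period);
            }
            return 0;
        }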

  • (Score: 4, Funny) by snufu on Thursday February 11 2016, @09:14PM

    by snufu (5855) on Thursday February 11 2016, @09:14PM (#302962)

    They didn't tell us it would be horizontal.

  • (Score: 3, Informative) by Gravis on Thursday February 11 2016, @09:49PM

    by Gravis (4596) on Thursday February 11 2016, @09:49PM (#302974)

    Moore's law is not about computational speed, transistor size, price or any other bullshit like that.

    Moore's law is the observation that the number of transistors in a dense integrated circuit doubles approximately every two years.

    so now, now that we know it's the number of transistors and nothing more, how does it fare? well... in 2015 Oracle released a chip with 10 billion transistors. [wikipedia.org]

    the law isn't a law really, just a phenomenon.

  • (Score: 0) by Anonymous Coward on Friday February 12 2016, @03:34AM

    by Anonymous Coward on Friday February 12 2016, @03:34AM (#303069)

    This was in the back of a college textbook of mine, from a Smithsonian article: "What Intel giveth, Microsoft taketh away."

    Sure, it's less true now than it was in the early 2000's when the book was published, especially with Microsoft focusing on lower-spec systems with Windows 8 and Windows 10. But I guess it could be "JavaScript taketh away" with the way that modern web design is going (not to mention modern webapp design).

    • (Score: 1, Insightful) by Anonymous Coward on Friday February 12 2016, @03:49PM

      by Anonymous Coward on Friday February 12 2016, @03:49PM (#303259)

      This guy has an interesting take on it.

      https://www.youtube.com/watch?v=IuLxX07isNg [youtube.com]

      Remember, Moore's law has a dollar component as well. We can already make chips with a crazy-high number of gates, even more than what we normally sell; the problem is making it worth the cost to do so.

      Now MS however stumbled badly on longhorn and into vista. What should have been easy became burdensome. Computers that a year earlier could run XP handily and then some could barely run Vista. All because they tried to tie SQL and search into the OS and failed at it. Win 7 undid much of that (really vista sp2). Then win8 they bungled badly and people did not want it. So computers did not 'need' to get much better.

      Then something odd happened. People realized 'hey I only surf a bit and play a cheeseball game or two'. Something their tablet/phone can do and then some. MS was not in that game at all with good simple ideas. Something Apple was good at.

      At this point MS's only way back in will be the surface pro. Everyone I talk to is 'oh yeah that thing is great but a bit pricey'. MS is doing what Apple will not do with the iPad. They put their real OS on there and left it fairly open. They fixed up the disaster of win8 and dialed it back to win7. If they can fix the tracking disaster they are making they may just pull off win10.