posted by Fnord666 on Sunday January 28 2018, @11:28AM   Printer-friendly
from the RIP dept.

Submitted via IRC for AndyTheAbsurd

Hammered by the finance of physics and the weaponisation of optimisation, Moore's Law has hit the wall, bounced off - and reversed direction. We're driving backwards now: all things IT will become slower, harder and more expensive.

That doesn't mean there won't be some rare wins - GPUs and other dedicated hardware have a bit more life left in them. But for the mainstay of IT, general-purpose computing, last month may be as good as it ever gets.

Going forward, the game changes from "cheaper and faster" to "sleeker and wiser". Software optimisations - despite their Spectre-like risks - will take the lead over the next decades, as Moore's Law fades into a dimly remembered age when the cornucopia of process engineering gave us everything we ever wanted.

From here on in, we're going to have to work for it.

It's well past time to move on from improving performance by increasing clock speeds and transistor counts; the gains now have to come from writing better parallel processing code wherever possible.

Source: https://www.theregister.co.uk/2018/01/24/death_notice_for_moores_law/


Original Submission

 
  • (Score: 3, Informative) by bradley13 on Sunday January 28 2018, @04:17PM (2 children)

    by bradley13 (3053) on Sunday January 28 2018, @04:17PM (#629484) Homepage Journal

    Sure, Moore's law was about transistor count, but this has always had a direct relationship with performance. Not only in transistor speed (they switch faster when they are smaller), but also in the available complexity: some of the individual commands available in processors would have represented massive software functions just a few years ago.

    Strangely, processor performance has outpaced memory performance, hence caching. To push single-thread speed for simple commands, first came pipelines, then speculative execution. What we're seeing in Meltdown and Spectre are unexpected interactions between these various optimizations, so they'll have to be dialed back.
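
    As a rough illustration of that memory-wall point (not part of the original comment), the C++ sketch below sums the same array twice: once sequentially, where cache lines and the prefetcher help, and once with a large stride, where most accesses miss the cache. The array size and stride are invented for the example.

    // Illustration of the CPU/memory gap the comment describes: the same
    // amount of arithmetic is much slower when every access misses the cache.
    // The size and stride below are arbitrary assumptions for this sketch.
    #include <chrono>
    #include <cstddef>
    #include <iostream>
    #include <vector>

    int main() {
        const std::size_t n = 1 << 26;            // ~64M ints (~256 MiB)
        std::vector<int> data(n, 1);

        auto time_sum = [&](std::size_t stride) {
            auto start = std::chrono::steady_clock::now();
            long long sum = 0;
            // Interleaved passes: every element is touched exactly once,
            // but the access pattern is cache-hostile when stride is large.
            for (std::size_t offset = 0; offset < stride; ++offset)
                for (std::size_t i = offset; i < n; i += stride)
                    sum += data[i];
            auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(
                          std::chrono::steady_clock::now() - start).count();
            std::cout << "stride " << stride << ": sum=" << sum
                      << " in " << ms << " ms\n";
        };

        time_sum(1);     // sequential: caching and prefetching pay off
        time_sum(4096);  // strided: same work, mostly cache misses
        return 0;
    }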

    However, still hugely underexploited: parallelism. Watch your processor usage when you are intensively using some application or other. If you have a 4-core, hyperthreaded processor, most likely you will see one or two of your eight virtual cores really being used. This is even with the zillions of background tasks running on every machine nowadays.
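
    To make that concrete, here is a minimal, hypothetical C++ sketch (not part of the original comment) of spreading an embarrassingly parallel sum across all available hardware threads; the workload and sizes are made up for illustration.

    // Sketch of the explicit parallelism the comment says goes unused:
    // each thread sums its own contiguous chunk into its own slot, so no
    // locking is needed; the partial sums are combined at the end.
    #include <algorithm>
    #include <cstdint>
    #include <iostream>
    #include <numeric>
    #include <thread>
    #include <vector>

    int main() {
        const std::size_t n = 100'000'000;        // made-up workload size
        std::vector<std::uint32_t> data(n);
        std::iota(data.begin(), data.end(), 0u);

        unsigned workers = std::max(1u, std::thread::hardware_concurrency());
        std::vector<std::uint64_t> partial(workers, 0);
        std::vector<std::thread> pool;

        for (unsigned w = 0; w < workers; ++w) {
            pool.emplace_back([&, w] {
                std::size_t begin = n * w / workers;
                std::size_t end   = n * (w + 1) / workers;
                std::uint64_t local = 0;
                for (std::size_t i = begin; i < end; ++i)
                    local += data[i];
                partial[w] = local;   // each thread writes only its own slot
            });
        }
        for (auto& t : pool) t.join();

        std::uint64_t total = std::accumulate(partial.begin(), partial.end(),
                                              std::uint64_t{0});
        std::cout << "used " << workers << " threads, sum = " << total << "\n";
        return 0;
    }

    A loop of this kind is what would actually light up all eight virtual cores in the task monitor, rather than the one or two you typically see.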

    At a guess, if you took out all the bleeding-edge single thread optimization stuff, like branch prediction, and you also put the massively complex commands into shared units, you might reduce processor performance by a factor of 2-3, but you could put at least 5 times as many processors onto the chip. The problem is: they would sit there, unused.

    It goes along with the bloatware problem. There's too much software that needs to be written, and there are too few really good programmers out there. So you get crap built on frameworks glued to other frameworks and delivered to the clueless customer. Dunno what we do about it, but this is a fundamental problem.

    --
    Everyone is somebody else's weirdo.
  • (Score: 2, Interesting) by AlphaSnail on Sunday January 28 2018, @07:36PM (1 child)

    by AlphaSnail (5814) on Sunday January 28 2018, @07:36PM (#629549)

    I have this feeling, which might be wrong, but I suspect this AI phenomenon is at some point going to be turned toward translating code into more optimized and parallel functionality, like some kind of super compiler. I see no reason we couldn't hit a point where software isn't typed out like it is now, but is instead written by some kind of Siri-like assistant that converses with you until the program runs as you describe it, rather than you working out every line of code yourself. At that point, how good a coder is will be a concept like how good a switchboard operator is: irrelevant, having been replaced by a superior technique. After that, people might code for fun, but it won't be used by any businesses or for-profit entities. The reason it might be a while until that happens is that it would be coders putting themselves out of a job, and I think they'll hold off on that as long as possible. But I think it is inevitable, since whoever does it first will have such an advantage once the cat's out of the bag that they will dominate the software industry in every respect, on cost and quality.

    • (Score: 2) by TheRaven on Monday January 29 2018, @11:32AM

      by TheRaven (270) on Monday January 29 2018, @11:32AM (#629779) Journal
      AI is not magic. AI is a good way of getting a half-arsed solution to a problem that you don't understand, as long as that problem doesn't exist in a solution space that has too many local minima. Pretty much none of that applies to software development, and especially not to optimisation. We already have a bunch of optimisation techniques that we're not using because they're too computationally expensive to be worth it (i.e. the compiler will spend hours or days per compilation unit to get you an extra 10-50% performance). AI lets us do the same thing, only slower.
      --
      sudo mod me up