posted by Fnord666 on Sunday January 28 2018, @11:28AM   Printer-friendly
from the RIP dept.

Submitted via IRC for AndyTheAbsurd

Hammered by the finance of physics and the weaponisation of optimisation, Moore's Law has hit the wall, bounced off - and reversed direction. We're driving backwards now: all things IT will become slower, harder and more expensive.

That doesn't mean there won't be some rare wins - GPUs and other dedicated hardware have a bit more life left in them. But for the mainstay of IT, general purpose computing, last month may be as good as it ever gets.

Going forward, the game changes from "cheaper and faster" to "sleeker and wiser". Software optimisations - despite their Spectre-like risks - will take the lead over the next decades, as Moore's Law fades into a dimly remembered age when the cornucopia of process engineering gave us everything we ever wanted.

From here on in, we're going to have to work for it.

It's well past time to move on from improving performance by increasing clock speeds and transistor counts; the gains now have to come from writing better parallel processing code wherever possible.

Source: https://www.theregister.co.uk/2018/01/24/death_notice_for_moores_law/


Original Submission

 
  • (Score: 5, Informative) by takyon on Sunday January 28 2018, @02:32PM (14 children)

    by takyon (881) <takyonNO@SPAMsoylentnews.org> on Sunday January 28 2018, @02:32PM (#629456) Journal

    I call it silly, because AWS, Azure and similar platforms (even Folding At Home, SETI signal searching, Electric Sheep and other crowdsourced CPU cycles) effectively lease or beg/borrow massively parallel hardware orders of magnitude larger than anything on that list.

    Folding@home [wikipedia.org] is the biggest distributed computing project, and it's only at around 135 petaflops, comparable to Sunway TaihuLight at ~93 petaflops (125 peak).

    Add up the top 10 systems from the latest TOP500 list, and you get over 250 petaflops. That will be eclipsed by new systems coming in the next year or two that could be around 180-300 petaflops.

    Last time I checked, AWS doesn't publish a size estimate for their cloud. But it's helped by the fact that most customers don't need an intense simulation 24/7 (otherwise they could just buy their own hardware).

    Silicon processes may stop their shrink around 10nm

    7 nm [anandtech.com] is in the bag. 3-5 nm [arstechnica.com] are likely:

    GAAFETs are the next evolution of tri-gate finFETs: finFETs, which are currently used for most 22nm-and-below chip designs, will probably run out of steam at around 7nm; GAAFETs may go all the way down to 3nm, especially when combined with EUV. No one really knows what comes after 3nm.

    Maybe 0.5-2 nm can be reached with a different transistor design as long as EUV works.

    Although "X nm" is somewhat meaningless, so other metrics like transistors per mm² should be used. If we're at 100 million transistors/mm² now (Intel 10nm), maybe we can get to 500 million transistors/mm². The shrinking has not stopped yet.

    --
    [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
  • (Score: 2) by JoeMerchant on Sunday January 28 2018, @02:52PM (8 children)

    by JoeMerchant (3937) on Sunday January 28 2018, @02:52PM (#629458)

    Folding@home is the biggest distributed computing project, and it's only at around 135 petaflops

    Fair point, it has the potential to be bigger than any supercomputer on the list, but it's hard to get there with begging. I know I have only run FAH for short periods because it hits my CPUs so hard that the fans spin up, and I don't need them to pack in with dust any faster than they already are.

    --
    🌻🌻 [google.com]
    • (Score: 4, Informative) by requerdanos on Sunday January 28 2018, @05:13PM (2 children)

      by requerdanos (5997) Subscriber Badge on Sunday January 28 2018, @05:13PM (#629506) Journal

      There's a compromise, provided by the cpulimit command.

      sudo apt-get install cpulimit && man cpulimit

      Works like this:
      $ cpulimit -e nameofprogram -l 25
      Limits program "nameofprogram" to 25% CPU usage (-l takes the number, without a % sign).

      For single-core single-thread CPUs, the math is pretty easy; 100% means the whole CPU and 50% means half of it, etc.

      For multi-core CPUs, 100 * number of cores % is all of it, and ( (100 * number of cores) /2 ) % is half, etc.
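That percentage arithmetic can be sketched in a few lines (Python here, purely illustrative; the result is the number you would pass to cpulimit's -l flag):

```python
import os

def cpulimit_percent(fraction, cores=None):
    """Value to pass to `cpulimit -l` to cap a process at the given
    fraction of total CPU capacity (one core = 100)."""
    if cores is None:
        cores = os.cpu_count()  # autodetect on this machine
    return int(100 * cores * fraction)

# Half of a 4-core CPU:
print(cpulimit_percent(0.5, cores=4))  # 200
```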

      • (Score: 0) by Anonymous Coward on Monday January 29 2018, @01:56PM (1 child)

        by Anonymous Coward on Monday January 29 2018, @01:56PM (#629800)

        I wonder how that performs against SIGSTOP/SIGCONT toggled by a temperature threshold, an approach I sometimes use myself. At least my variation is looking at the relevant variable. Of course, on the other hand, you might get wildly fluctuating temperatures if you set the cut-off and start-again limits widely apart. And then there is the fact that some things will crash on you with STOP/CONT - empirical observation.

        I personally think that distributed computing (from a home user's perspective) stopped making sense after CPUs learned to slow down and sleep instead of "furiously doing nothing at 100% blast". But hey, if you want to help cure cancer or map the skies or design a new stealth bomber on your dime, be my guest.
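A minimal sketch of that SIGSTOP/SIGCONT-with-hysteresis approach, assuming a Linux sysfs thermal zone (the path, thresholds, and function names here are made-up examples, not the commenter's actual script):

```python
import os
import signal
import time

# Sketch only: path and thresholds are assumptions, and thermal_zone
# numbering varies by machine (values are millidegrees Celsius on Linux).
TEMP_PATH = "/sys/class/thermal/thermal_zone0/temp"
STOP_AT = 85_000    # SIGSTOP the worker above 85 C
RESUME_AT = 70_000  # SIGCONT it once back below 70 C

def next_action(stopped, temp, stop_at=STOP_AT, resume_at=RESUME_AT):
    """Hysteresis: keeping the two limits apart avoids rapid toggling,
    at the cost of wider temperature swings (as noted above)."""
    if not stopped and temp >= stop_at:
        return "stop"
    if stopped and temp <= resume_at:
        return "cont"
    return None  # stay in the current state

def throttle(pid):
    """Pause/resume process `pid` with SIGSTOP/SIGCONT based on temperature."""
    stopped = False
    while True:
        with open(TEMP_PATH) as f:
            temp = int(f.read().strip())
        action = next_action(stopped, temp)
        if action == "stop":
            os.kill(pid, signal.SIGSTOP)
            stopped = True
        elif action == "cont":
            os.kill(pid, signal.SIGCONT)
            stopped = False
        time.sleep(5)
```

As the comment notes, some programs don't survive being stopped and resumed, so this is best reserved for workloads known to tolerate it.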

        • (Score: 2) by requerdanos on Monday January 29 2018, @04:30PM

          by requerdanos (5997) Subscriber Badge on Monday January 29 2018, @04:30PM (#629859) Journal

          my variation is looking at the relevant variable [temperature threshold].

          I have a separate daemon watching temperature and scaling CPU frequency and/or governor to moderate it (though usually that only comes into play if there is a cooling problem; I have one host that would simply cook itself without it). I have cpulimit jobs in cron that allow ~full blast while I am usually in bed asleep, and limit it to a fair minority share during the workday (with conky on the desktop telling me the status). I have admittedly probably spent too much time on this, but it's a hobby. Although when people ask if I have hobbies and I say "server administration" I always get odd stares until I say "and playing guitar".

    • (Score: 2) by RS3 on Tuesday January 30 2018, @01:23AM (4 children)

      by RS3 (6367) on Tuesday January 30 2018, @01:23AM (#630132)

      I used to run FAH on a desktop that stayed on 24/7. For some reason it quit working and I had little patience to figure it out and fix it. But I thought it had a control interface that let you set how much CPU it used. I let it run full-on and the fan was noticeable but barely.

      • (Score: 2) by JoeMerchant on Tuesday January 30 2018, @05:11AM (3 children)

        by JoeMerchant (3937) on Tuesday January 30 2018, @05:11AM (#630195)

        I used the CPU controls internal to FAH and that limited it to 1 thread, but that's still enough to get the notebook fans cranking... I suppose I could go for external controls to limit its CPU use further, but... that's a lot of effort.

        --
        🌻🌻 [google.com]
        • (Score: 2) by RS3 on Tuesday January 30 2018, @02:40PM (2 children)

          by RS3 (6367) on Tuesday January 30 2018, @02:40PM (#630366)

          I've noticed over the years that laptops/notebooks never have enough CPU cooling for long-term full-speed CPU stuff. Audio and video rendering comes to mind.

          It is a bit of effort, especially considering what FAH and other piggyback modules do and ask of you- you'd think they would make it the least intrusive it could be.

          Windows Task Manager allows you to right-click on a process and set priority and affinity, kind of like *nix "nice", but that won't stop a process from hogging most of the available CPU.

          I've seen, and used to use, CPU cooling software for laptops. I think it just took up much process time, but put the CPU to sleep for most of the timeslice. I forget the name but it's easy to search for them. Then if you assign the FAH process to have lowest priority, normal foreground / human user processes should get most of the CPU when needed.
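The *nix side of that can be sketched as below (an illustrative snippet, not any particular tool; os.nice is Unix-only). Note it only changes who wins contention for the CPU - a demoted process still soaks up all idle cycles and spins the fans, which is the limitation described above:

```python
import os

def demote_self():
    """Drop this process to the lowest scheduling priority - roughly
    what 'nice -n 19' does, or setting "Low" priority in Task Manager.
    The process still uses all otherwise-idle CPU; it merely yields
    whenever anything else wants to run."""
    current = os.nice(0)   # os.nice(0) reads the current niceness
    os.nice(19 - current)  # raise it to the maximum niceness, 19
    return os.nice(0)
```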

          • (Score: 2) by JoeMerchant on Tuesday January 30 2018, @03:21PM (1 child)

            by JoeMerchant (3937) on Tuesday January 30 2018, @03:21PM (#630394)

            FAH on a notebook is kind of a misplaced idea in the first place, and the notebook lives in the bedroom so fan noise is not an option - but, it's the most powerful system I've got and I thought I'd at least throw one more work unit to FAH after their recent little victory press release, so I tried it again.

            I've got other systems in other rooms (with slower CPUs), but they too crank up their fans in response to heavy CPU loading and I really hate having to disassemble things just to clean the fans - $30/year for the electricity I can handle as a charity donation, but catching an overheating problem and doing a 2 hour repair job when it crops up isn't on my list of things I like to do in life.

            What I wonder is why FAH hasn't found a charity sponsor who bids up Amazon EC2 spot instances for them? At $0.0035/hr (~$30.66/year), that's pretty close to the price at which I can buy electricity for my CPUs.

            --
            🌻🌻 [google.com]
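The annualisation behind that figure works out as below (the spot price is the comment's own hypothetical number):

```python
hourly = 0.0035             # assumed EC2 spot price, $/hr (from the comment)
yearly = hourly * 24 * 365  # 8760 hours in a non-leap year
print(f"${yearly:.2f}/year")  # $30.66/year
```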
            • (Score: 2) by RS3 on Tuesday January 30 2018, @03:52PM

              by RS3 (6367) on Tuesday January 30 2018, @03:52PM (#630407)

              I 100% agree on all points. Being a hardware hacker I have a couple/few compressors, so occasionally I drag a computer outside and blow out the dust. I've always been frustrated with computer "cooling" systems. Let's suck dirty dusty air in everywhere we can, especially into floppy / optical drives. Let's also have very fine-finned heatsinks so we can collect that dust. Real equipment has fans that suck air in through washable / replaceable filters. A long time ago I had modded a couple of my computers- just flipped the fans around and attached filters.

              People with forced-air HVAC will have much more dust in their computers. I highly advocate better filtration in HVAC, plus room air filters.

              I have never, and would never run FAH or SETI or any such thing on a laptop. I don't leave laptops running unattended for very long.

              My computers, and ones I admin, are all somewhat older by today's standards (4+ years) but have decent enough computing and GPU power. FAH won't use my GPUs- dumb!

              Ideally we'll all get power from the sun. Major companies like Amazon are investing in rooftop solar generation. They could / should donate the watts to important projects like FAH.

  • (Score: 2) by JoeMerchant on Sunday January 28 2018, @03:03PM (2 children)

    by JoeMerchant (3937) on Sunday January 28 2018, @03:03PM (#629462)

    it's helped by the fact that most customers don't need an intense simulation 24/7 (otherwise they could just buy their own hardware).

    True enough - last time I did a cost-estimate for using AWS, they were coming in around 10x my cost of electricity; they probably get their electricity for half of my price. The thing for me is: I can dedicate $1000 worth of hardware to run in-house for 10 days (~$1.50 in electricity) to get a particular result, or I could spin up leased AWS instances and get the same compute result for about $15. If I'm only doing 1 of these, then, sure, run it at home. If I'm trying to do 10... do I really want to wait >3 months for the results, when I could have them from AWS in 10 minutes for $150? Of course, this is all theory - I haven't really looked into EC2's ability to spin up 30,000 instances on-demand yet - but I do bet they could handle a few hundred instances, getting those 10 results in less than a day instead of waiting 3 months.

    --
    🌻🌻 [google.com]
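The back-of-envelope comparison in that comment, using its own numbers (all of which are the commenter's assumptions):

```python
# The comment's own figures, taken as given.
in_house_days_per_result = 10   # $1000 rig, ~$1.50 of electricity per run
aws_cost_per_result = 15.0      # same result on leased instances
results = 10

serial_days = results * in_house_days_per_result  # run back-to-back at home
aws_total = results * aws_cost_per_result         # dollars to get them all quickly
print(serial_days, aws_total)  # 100 150.0
```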
    • (Score: 3, Interesting) by Anonymous Coward on Sunday January 28 2018, @09:41PM (1 child)

      by Anonymous Coward on Sunday January 28 2018, @09:41PM (#629589)

      If you just need a computation done that's highly parallel, you can get a much better price by using spot instances [amazon.com]. It gets you compute time cheap when no one else wants it with the catch that your computation may be canceled at any time (... which is fine if you're able to save progress regularly). The people I know who train machine learning models do it pretty much only with AWS spot instances.

      • (Score: 2) by JoeMerchant on Sunday January 28 2018, @09:59PM

        by JoeMerchant (3937) on Sunday January 28 2018, @09:59PM (#629601)

        Thanks for this (EC2 spot instances) - that may very well be the way I go when I get enough time to pursue my hobbies.

        --
        🌻🌻 [google.com]
  • (Score: 2) by hendrikboom on Sunday January 28 2018, @03:36PM

    by hendrikboom (1125) Subscriber Badge on Sunday January 28 2018, @03:36PM (#629474) Homepage Journal

    One thing the computers on the silly list have that folding or seti at home don't have is fast interconnect. That's essential for solving partial differential equations.

  • (Score: 1) by khallow on Sunday January 28 2018, @05:45PM

    by khallow (3766) Subscriber Badge on Sunday January 28 2018, @05:45PM (#629515) Journal
    The computing power thrown at Bitcoin dwarfs any of that, though it's pure integer logic math. According to here [bitcoincharts.com], the Bitcoin network is currently computing almost 8 million terahashes per second. A hash is apparently fixed in computation at about 1350 arithmetic logic unit operations [bitcointalk.org] ("ALU ops") per hash presently - integer operations, not floating point. So the Bitcoin network is cranking out about 10^22 arithmetic operations per second - roughly 10 million petaALU ops.
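The arithmetic, using the comment's figures:

```python
terahashes_per_s = 8e6    # network hash rate cited in the comment
alu_ops_per_hash = 1350   # approximate ALU ops per hash, per the cited link
ops_per_s = terahashes_per_s * 1e12 * alu_ops_per_hash
print(f"{ops_per_s:.2e} ALU ops/s")  # ~1.08e+22, i.e. about 10 million petaALU ops
```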