posted by Fnord666 on Sunday January 28 2018, @11:28AM
from the RIP dept.

Submitted via IRC for AndyTheAbsurd

Hammered by the finance of physics and the weaponisation of optimisation, Moore's Law has hit the wall, bounced off - and reversed direction. We're driving backwards now: all things IT will become slower, harder and more expensive.

That doesn't mean there won't be some rare wins - GPUs and other dedicated hardware have a bit more life left in them. But for the mainstay of IT, general purpose computing, last month may be as good as it ever gets.

Going forward, the game changes from "cheaper and faster" to "sleeker and wiser". Software optimisations - despite their Spectre-like risks - will take the lead over the next decades, as Moore's Law fades into a dimly remembered age when the cornucopia of process engineering gave us everything we ever wanted.

From here on in, we're going to have to work for it.

It's well past time to move away from improving performance by increasing clock speeds and transistor counts; it's time to move on to increasing performance wherever possible by writing better parallel processing code.

Source: https://www.theregister.co.uk/2018/01/24/death_notice_for_moores_law/
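
As a rough illustration of what "better parallel processing code" can mean in practice, here is a minimal sketch in Python; the work function is invented for the example, and multiprocessing.Pool simply spreads it across the available cores.

    # Minimal sketch: moving a CPU-bound loop onto all available cores.
    # The crunch() workload is a stand-in invented for this example.
    from multiprocessing import Pool, cpu_count

    def crunch(n: int) -> int:
        total = 0
        for i in range(1, n):
            total += (i * i) % 97
        return total

    def serial(jobs):
        return [crunch(n) for n in jobs]

    def parallel(jobs):
        # One worker per core; results come back in input order.
        with Pool(processes=cpu_count()) as pool:
            return pool.map(crunch, jobs)

    if __name__ == "__main__":
        jobs = [2_000_000] * 8
        assert serial(jobs) == parallel(jobs)

The point is only structural: the speedup is bounded by how much of the work can actually be split up this way.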


Original Submission

Related Stories

Is Moore’s Law Actually Dead This Time? Nvidia Seems to Think So 23 comments

Chips "going... down in price is a story of the past," CEO says:

When Nvidia rolled out its new RTX 40-series graphics cards earlier this week, many gamers and industry watchers were a bit shocked at the asking prices the company was putting on its latest top-of-the-line hardware. New heights in raw power also came with new heights as far as MSRP, which falls in the $899 to $1,599 range for the 40-series cards.

When asked about those price increases, Nvidia CEO Jensen Huang told the gathered press to, in effect, get used to it. "Moore's law is dead," Huang said during a Q&A, as reported by Digital Trends. "A 12-inch wafer is a lot more expensive today. The idea that the chip is going to go down in price is a story of the past."

[...] Generational price comparisons aside, Huang's blanket assertion that "Moore's law is dead" is a bit shocking for a company whose bread and butter has been releasing graphics cards that roughly double in comparable processing power every year. But the prediction is far from a new one, either for Huang—who said the same thing in 2019 and 2017—or for the wider industry—the International Technology Roadmap for Semiconductors formally announced it would stop chasing the benchmark in its 2016 roadmap for chip development.

[...] As Kevin Kelly laid out in a 2009 piece, though, Moore's law is best understood not as a law of physics but as a law of economics and corporate motivation. Processing power keeps doubling partly because consumers expect it to keep doubling and finding uses for that extra power.

That consumer demand, in turn, pushes companies to find new ways to keep pace with expectations. In the recent past, that market push led to innovations like tri-gate 3D transistors and production process improvements that continually shrink the size of individual transistors, which IBM can now push out at just 2 nm.

  • (Score: 0) by Anonymous Coward on Sunday January 28 2018, @11:53AM (45 children)

    by Anonymous Coward on Sunday January 28 2018, @11:53AM (#629406)

    And yet, we still find ways to shrink transistor size and find revolutionary ways to conduct lithography.

    Most of the major fabs are already beginning to tool for 10nm and below.

    • (Score: 2, Informative) by Anonymous Coward on Sunday January 28 2018, @12:08PM (44 children)

      by Anonymous Coward on Sunday January 28 2018, @12:08PM (#629410)

We have found such ways up till now, but it seems the race is almost over. As far as I know, we are at the very limits of miniaturisation not because of our ability to do it, but because of how small physics allows things to get. The materials we already use just can't get any smaller or faster; otherwise they lose their properties and nothing works.

I am actually happy for this. Enough of the 'add more water to it' mentality in the computing industry. It is time to go back to properly designing software and hardware for efficiency, and to get rid of a thousand layers of libraries upon libraries and of needing to install .NET and the MSVC++ redistributable simply because the GPU driver installer needs them.

      • (Score: 2) by takyon on Sunday January 28 2018, @12:29PM (43 children)

        by takyon (881) <takyonNO@SPAMsoylentnews.org> on Sunday January 28 2018, @12:29PM (#629416) Journal

I am actually happy for this. Enough of the 'add more water to it' mentality in the computing industry. It is time to go back to properly designing software and hardware for efficiency, and to get rid of a thousand layers of libraries upon libraries and of needing to install .NET and the MSVC++ redistributable simply because the GPU driver installer needs them.

        That won't happen for most people. There will be at least some amount of performance increase over the next 10 years, as well as a power consumption reduction (smartphone/netbook users can get a more noticeable increase). That's more free lunch to be wasted, even if it's a few moldy week-old sack lunches.

        The abstraction allows more clueless programmers or script kiddies to do things. What could wipe them out is AI-written code, perhaps even AI-written low-level code.

        If the home user already has more performance than they need (as long as they aren't using a 10 year old computer), then halting performance increases doesn't change much for them. The home user can already get 8+ cores, and is probably more GPU-bound than CPU-bound (GPUs will improve much more than CPUs towards the end).

        The users who actually need to optimize, like supercomputer users or anybody making IoT devices the size of ants, should already be doing so.

        --
        [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
        • (Score: 2, Interesting) by Anonymous Coward on Sunday January 28 2018, @01:38PM (17 children)

          by Anonymous Coward on Sunday January 28 2018, @01:38PM (#629433)

I am using 10 to 25 year old machines with no all-in-one chips. Real hardware, so I can change what I need to. To me, the death of computers started with the soundcard modems: make the CPU do all the work, wasting cycles on work that used to be done elsewhere. It is why the whole machine hasn't gotten faster in the last 10 years - the CPU wastes cycles not doing real work. In the early 2000s I had a K6 computer that outperformed the newest P4 machines with twice the clock. It had a SCSI card and drives, so the CPU wasn't wasting its time talking to and controlling the drives.

You are right, it is time to get back to real software design. But also REAL hardware design.

I am an old school programmer who started with machines that had 12K of memory and 2x 5MB disks. The machine was the size of two refrigerators, and it ran a $1B company. Yes, the new computers can put that to shame, but it would not be replaced by one machine but by hundreds, so in the end it would cost more and be more wasteful of time and money.

          • (Score: 3, Interesting) by RS3 on Sunday January 28 2018, @02:01PM (10 children)

            by RS3 (6367) on Sunday January 28 2018, @02:01PM (#629442)

            A kindred spirit you are. I wish you and others like you (positive contributors) would get and use a login here. I like having a conversation with someone I can somehow remember and chat with again.

            Actually I have at least 1 16-bit ISA soundcard + modem that is _not_ "WinModem"- real sound and modem processors on the ISA card.

            I do have an ISA "WinModem" - surprising thing- must hog up lots of ISA bandwidth. Otherwise they are PCI (DMA).

            I was not a fan of WinModems when they first came out, but at some point CPU power got to the point of them being viable (IMHO).

            • (Score: 3, Interesting) by maxwell demon on Sunday January 28 2018, @02:25PM (2 children)

              by maxwell demon (1608) on Sunday January 28 2018, @02:25PM (#629450) Journal

              All the stuff being done by the CPU has more implications than just speed. If the code runs on the main CPU, any vulnerabilities are ways to compromise the computer. With dedicated hardware, all you would get access to by exploiting bugs in it would be that hardware.

              --
              The Tao of math: The numbers you can count are not the real numbers.
              • (Score: 0) by Anonymous Coward on Sunday January 28 2018, @10:21PM (1 child)

                by Anonymous Coward on Sunday January 28 2018, @10:21PM (#629610)

And all memory, you mean, since the hardware designers never bothered to compartmentalize or put security in. Just look at FireWire. Heck, just look at Meltdown, just letting an unprivileged process read kernel memory - "I'm sure we can fix it up later well enough".
                That few people bothered to exploit dedicated hardware doesn't mean they had any fewer security issues. It usually means it's harder to do and it would affect fewer people. That doesn't make it a solution at all.

                • (Score: 2) by maxwell demon on Monday January 29 2018, @05:18AM

                  by maxwell demon (1608) on Monday January 29 2018, @05:18AM (#629705) Journal

I don't see how hacking a modem hanging off the serial port would in any way allow an attacker to compromise your computer. Unless there's an additional vulnerability in the serial driver that the attacker can exploit, of course. But that's a further line that has to be broken. Remember, security is not an absolute; there is no "secure", there is only "less secure" and "more secure".

                  --
                  The Tao of math: The numbers you can count are not the real numbers.
            • (Score: 1, Interesting) by Anonymous Coward on Sunday January 28 2018, @03:07PM (3 children)

              by Anonymous Coward on Sunday January 28 2018, @03:07PM (#629464)

Actually I do have an account. Signed up in the first few weeks. I just like AC since my history does not affect me. I am always one vote away from dropping off a discussion. It is both a plus and a minus.

              • (Score: 1) by Ethanol-fueled on Sunday January 28 2018, @06:49PM (2 children)

                by Ethanol-fueled (2792) on Sunday January 28 2018, @06:49PM (#629534) Homepage

                This ain't Slashdot buddy. You don't even need your Mulligan because you won't be modded down unless you get racial or insult Larry Wall.

                • (Score: 0) by Anonymous Coward on Sunday January 28 2018, @08:00PM (1 child)

                  by Anonymous Coward on Sunday January 28 2018, @08:00PM (#629558)

It's happened here too. Remember, Slash code holds this site up. My AC was blocked for a while, but I pointed out they only blocked one machine out of the 8 I use all the time. Then I have VMs to give extra access.

But again, I live on the bleeding edge. One vote and I am gone; one vote makes me relevant. This means having to be sure to make meaningful posts that may or may not support the conversation. I started doing this once my account was untouchable and preset to post higher because of my points. I like the work it takes.

            • (Score: 2) by ilsa on Monday January 29 2018, @09:09PM (2 children)

              by ilsa (6082) Subscriber Badge on Monday January 29 2018, @09:09PM (#630026)

And of course don't forget WinPrinters too. Printers that would only work on Windows because the print engine was done in the driver software. The hardware was completely dumb and unusable on its own.

              • (Score: 2) by RS3 on Tuesday January 30 2018, @12:34AM (1 child)

                by RS3 (6367) on Tuesday January 30 2018, @12:34AM (#630120)

Ugh. WinPrinters. I had completely, blissfully forgotten about them. Why'd you remind me? I've never touched one. I remember when they came out I just felt like the world was going in a bad direction. I was already running Linux and who knows what else. I don't remember if you could set up Windows as a print server in those days; regardless, it just seemed like a terrible thing to be bound to an MS product, and who knows if it would work in the next Windows update or version. I'm glad they faded away.

                • (Score: 2) by ilsa on Tuesday January 30 2018, @10:00PM

                  by ilsa (6082) Subscriber Badge on Tuesday January 30 2018, @10:00PM (#630656)

                  IIRC, Windows was capable of acting as a print server for as long as networking was a thing. As early as Windows 3.1 I believe. But that still meant you needed to have a windows machine sitting on your network, as networked printers were still very very rare and expensive.

                  But agreed. I'm glad all those Win-things were just a passing fad.

          • (Score: 2, Interesting) by Anonymous Coward on Sunday January 28 2018, @06:26PM (4 children)

            by Anonymous Coward on Sunday January 28 2018, @06:26PM (#629528)

            I bought an external modem since Linux at the time couldn't handle my winmodem; the best part was that the external modem had a physical off switch (an actual toggle switch!) that gave me the ability to shut down the connection whenever I did something stupid (like send an email that I instantly regretted - I could shut down the connection before the email was fully sent through the modem).

My first hard drive (on a 25MHz 486 laptop) had 100MB capacity. Twitter for iOS is listed in the iTunes store as 189.6MB, almost twice the size of my original hard drive. Twitter for macOS is listed at 5MB. The difference probably comes down to coding style, frameworks, and Swift's bloat. What can be done (by the same company) in 5MB is done in 190MB, because the iOS dev team just doesn't care.

There's plenty of space to optimize in the future just because our current generation is downright sinful in its wastefulness. MS knew this back in the early 2000s, when they limited the size of MS web pages by edict because they saw what absurd bloating was going on even back then.

            • (Score: 0) by Anonymous Coward on Sunday January 28 2018, @10:24PM (3 children)

              by Anonymous Coward on Sunday January 28 2018, @10:24PM (#629612)

              How is a power switch a significant improvement for aborting a transfer over just pulling the phone cable?!
              At least around here, the plug was similar to ethernet, so trivial to disconnect.

              • (Score: 1, Insightful) by Anonymous Coward on Monday January 29 2018, @01:06AM (1 child)

                by Anonymous Coward on Monday January 29 2018, @01:06AM (#629658)

One is designed for repetitive use; the other is designed for rare use. Break the little clip off and the connection breaks much more easily.

                • (Score: 0) by Anonymous Coward on Monday January 29 2018, @10:55AM

                  by Anonymous Coward on Monday January 29 2018, @10:55AM (#629766)

                  Progress eh? Sheesh, this is what the whole bitch-fest boils down to? A little plastic tab on a cable. Bitch moan waaah.

              • (Score: 2) by dry on Monday January 29 2018, @04:24AM

                by dry (223) on Monday January 29 2018, @04:24AM (#629695) Journal

                Here, the ISA modem's phone plug was around the back of the computer whereas when I updated to an external modem, it sat beside the computer with a nice accessible switch. Switching it off also saved power as those modems ran pretty hot.

          • (Score: 2) by kazzie on Monday January 29 2018, @08:27AM

            by kazzie (5309) Subscriber Badge on Monday January 29 2018, @08:27AM (#629744)

            I'll second that. My main home computer is still an X61s Thinkpad, which is now over 10 years old. Running Slackware and XFCE, it's a bit beat up around the edges, but does browsing, light gaming and typing just fine.

            The heavy number crunching is all done on the (far newer) work PC instead.

        • (Score: 5, Interesting) by JoeMerchant on Sunday January 28 2018, @01:53PM (23 children)

          by JoeMerchant (3937) on Sunday January 28 2018, @01:53PM (#629439)

          Already in 2006, parallelization was the "way of the future" as the serious workstations moved to 4, 8 and 16 core processors, and even laptops came with at least 2 cores. It's a very old story by now, but some things parallelize well, and others just don't. All the supercomputers on the top 500 list are massively parallel, and even if they run a single thread quite fast compared to your cell phone, it's their ability to run hundreds or thousands in parallel that gets them on that silly list. I call it silly, because AWS, Azure and similar platforms (even Folding At Home, SETI signal searching, Electric Sheep and other crowdsourced CPU cycles) effectively lease or beg/borrow massively parallel hardware orders of magnitude larger than anything on that list.

          Also in the early 2000s, it was already becoming apparent that energy efficiency was the new metric, as compared to performance at any power price. There was a brief period there where AMD was in front in the race to low power performance, but that didn't last too long. Today, I won't even consider purchase of a "high performance" laptop unless it runs cool (30W), and cellphone processors are pushing that envelope even further.

          Silicon processes may stop their shrink around 10nm, but lower power, higher density, and special purpose configurations for AI, currency mining, and many other compute hungry applications aren't just the way of the future, they're already here... and growing. 4GB of RAM and 4GHz of CPU is truly all that a human being will ever need to work on a spreadsheet, at least any spreadsheet that a team of humans can hope to comprehend and discuss. Any time a compute hungry application comes along that can't be addressed efficiently by simple massive parallelization, if it has value, special purpose hardware will be developed to optimize it.

          Now, khallow: how should that value be decided?

          --
          🌻🌻 [google.com]
          • (Score: 2) by RS3 on Sunday January 28 2018, @02:05PM (3 children)

            by RS3 (6367) on Sunday January 28 2018, @02:05PM (#629446)

I can't speak for most software, but one of the new features of Firefox 58 (or whatever the newest is) is that its rendering is faster because it's using multiple CPU cores more efficiently. 12 years later, right? (2018 - 2006 = 12)

            • (Score: 2) by JoeMerchant on Sunday January 28 2018, @02:25PM

              by JoeMerchant (3937) on Sunday January 28 2018, @02:25PM (#629451)

HTML rendering is one of those tasks that does parallelize, but not trivially - not surprising that it's taking a long time to get the gains. I was going to make a dig - especially when the competition is "Edge" - but Chrome is out there too... which is probably why Firefox and Opera are still trying for new speed gains. There's a huge amount of truth to the statement: if all browsers rendered a certain type of page unacceptably slowly, then that type of page wouldn't show up very much on the web. So it's usually those grey-area problems - the pages where you could notice a speed improvement, but which aren't in "unacceptable" territory yet - that need (and get) the optimizations, and that's going to be a perpetually moving target.

              --
              🌻🌻 [google.com]
            • (Score: 2) by maxwell demon on Sunday January 28 2018, @02:27PM (1 child)

              by maxwell demon (1608) on Sunday January 28 2018, @02:27PM (#629453) Journal

Too bad you have to decide: either keep your extensions and get no performance upgrade, or get a performance upgrade but lose your extensions.

              --
              The Tao of math: The numbers you can count are not the real numbers.
              • (Score: 0) by Anonymous Coward on Monday January 29 2018, @11:02AM

                by Anonymous Coward on Monday January 29 2018, @11:02AM (#629768)

                Particularly when your extensions are intended (uBlock, Flashblock, noScript) to improve performance by disabling the new whizzbang.

          • (Score: 5, Informative) by takyon on Sunday January 28 2018, @02:32PM (14 children)

            by takyon (881) <takyonNO@SPAMsoylentnews.org> on Sunday January 28 2018, @02:32PM (#629456) Journal

            I call it silly, because AWS, Azure and similar platforms (even Folding At Home, SETI signal searching, Electric Sheep and other crowdsourced CPU cycles) effectively lease or beg/borrow massively parallel hardware orders of magnitude larger than anything on that list.

            Folding@home [wikipedia.org] is the biggest distributed computing project, and it's only at around 135 petaflops, comparable to Sunway TaihuLight at ~93 petaflops (125 peak).

            Add up the top 10 systems from the latest TOP500 list, and you get over 250 petaflops. That will be eclipsed by new systems coming in the next year or two that could be around 180-300 petaflops.

            Last time I checked, AWS doesn't publish a size estimate for their cloud. But it's helped by the fact that most customers don't need an intense simulation 24/7 (otherwise they could just buy their own hardware).

            Silicon processes may stop their shrink around 10nm

            7 nm [anandtech.com] is in the bag. 3-5 nm [arstechnica.com] are likely:

            GAAFETs are the next evolution of tri-gate finFETs: finFETs, which are currently used for most 22nm-and-below chip designs, will probably run out of steam at around 7nm; GAAFETs may go all the way down to 3nm, especially when combined with EUV. No one really knows what comes after 3nm.

            Maybe 0.5-2 nm can be reached with a different transistor design as long as EUV works.

            Although "X nm" is somewhat meaningless so other metrics like transistors per mm2 should be used. If we're at 100 million transistors/mm2 now (Intel 10nm), maybe we can get to 500 million transistors/mm2. The shrinking has not stopped yet.

            --
            [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
            • (Score: 2) by JoeMerchant on Sunday January 28 2018, @02:52PM (8 children)

              by JoeMerchant (3937) on Sunday January 28 2018, @02:52PM (#629458)

              Folding@home is the biggest distributed computing project, and it's only at around 135 petaflops

              Fair point, it has the potential to be bigger than any supercomputer on the list, but it's hard to get there with begging. I know I have only run FAH for short periods because it hits my CPUs so hard that the fans spin up, and I don't need them to pack in with dust any faster than they already are.

              --
              🌻🌻 [google.com]
              • (Score: 4, Informative) by requerdanos on Sunday January 28 2018, @05:13PM (2 children)

                by requerdanos (5997) Subscriber Badge on Sunday January 28 2018, @05:13PM (#629506) Journal

                There's a compromise, provided by the cpulimit command.

                sudo apt-get install cpulimit && man cpulimit

                Works like this:
                $ cpulimit -e nameofprogram -l 25%
                Limits program "nameofprogram" to 25% CPU usage.

                For single-core single-thread CPUs, the math is pretty easy; 100% means the whole CPU and 50% means half of it, etc.

                For multi-core CPUs, 100 * number of cores % is all of it, and ( (100 * number of cores) /2 ) % is half, etc.
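
A tiny helper makes the multi-core arithmetic concrete; this is just a sketch, assuming you want to express the limit as a fraction of the whole machine (the function name is invented):

    # Sketch: turn "fraction of the whole machine" into a cpulimit -l value,
    # following the arithmetic above (100 per core).
    import os

    def cpulimit_value(fraction: float) -> int:
        cores = os.cpu_count() or 1
        return int(100 * cores * fraction)

    # On a 4-core box, cpulimit_value(0.25) == 100, which you would pass as:
    #   cpulimit -e nameofprogram -l 100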

                • (Score: 0) by Anonymous Coward on Monday January 29 2018, @01:56PM (1 child)

                  by Anonymous Coward on Monday January 29 2018, @01:56PM (#629800)

I wonder how that performs against SIGSTOP/SIGCONT toggled by a temperature threshold, an approach I sometimes use myself. At least my variation is looking at the relevant variable. Of course, on the other hand, you might get wildly fluctuating temperatures if you set the cut-off and start-again limits widely apart. And then there is the fact that some things will crash on you with STOP/CONT - empirical observation.

                  I personally think that distributed computing (from a home users perspective) stopped making sense after CPUs learned to slow down and sleep instead of "furiously doing nothing at 100% blast". But hey if you want to help cure cancer or map the skies or design a new stealth bomber on your dime be my guest.

                  • (Score: 2) by requerdanos on Monday January 29 2018, @04:30PM

                    by requerdanos (5997) Subscriber Badge on Monday January 29 2018, @04:30PM (#629859) Journal

                    my variation is looking at the relevant variable [temperature threshold].

                    I have a separate daemon watching temperature and scaling CPU frequency and/or governor to moderate it (though usually that only comes into play if there is a cooling problem; I have one host that would simply cook itself without it though). I have cpulimit jobs in cron that indicate ~full blast while I am usually in bed asleep, and limited to a fair minority share during the workday (with conky on the desktop telling me the status). I have admittedly probably spent too much time on this, but it's a hobby. Although when people ask if I have hobbies and I say "server administration" I always get odd stares until I say "and playing guitar"

              • (Score: 2) by RS3 on Tuesday January 30 2018, @01:23AM (4 children)

                by RS3 (6367) on Tuesday January 30 2018, @01:23AM (#630132)

                I used to run FAH on a desktop that stayed on 24/7. For some reason it quit working and I had little patience to figure it out and fix it. But I thought it had a control interface that let you set how much CPU it used. I let it run full-on and the fan was noticeable but barely.

                • (Score: 2) by JoeMerchant on Tuesday January 30 2018, @05:11AM (3 children)

                  by JoeMerchant (3937) on Tuesday January 30 2018, @05:11AM (#630195)

                  I used the CPU controls internal to FAH and that limited it to 1 thread, but that's still enough to get the notebook fans cranking... I suppose I could go for external controls to limit its CPU use further, but... that's a lot of effort.

                  --
                  🌻🌻 [google.com]
                  • (Score: 2) by RS3 on Tuesday January 30 2018, @02:40PM (2 children)

                    by RS3 (6367) on Tuesday January 30 2018, @02:40PM (#630366)

                    I've noticed over the years that laptops/notebooks never have enough CPU cooling for long-term full-speed CPU stuff. Audio and video rendering comes to mind.

                    It is a bit of effort, especially considering what FAH and other piggyback modules do and ask of you- you'd think they would make it the least intrusive it could be.

                    Windows Task Manager allows you to right-click on a process and set priority and affinity, kind of like *nix "nice", but that won't stop a process from hogging most of the available CPU.

                    I've seen, and used to use, CPU cooling software for laptops. I think it just took up much process time, but put the CPU to sleep for most of the timeslice. I forget the name but it's easy to search for them. Then if you assign the FAH process to have lowest priority, normal foreground / human user processes should get most of the CPU when needed.

                    • (Score: 2) by JoeMerchant on Tuesday January 30 2018, @03:21PM (1 child)

                      by JoeMerchant (3937) on Tuesday January 30 2018, @03:21PM (#630394)

                      FAH on a notebook is kind of a misplaced idea in the first place, and the notebook lives in the bedroom so fan noise is not an option - but, it's the most powerful system I've got and I thought I'd at least throw one more work unit to FAH after their recent little victory press release, so I tried it again.

                      I've got other systems in other rooms (with slower CPUs), but they too crank up their fans in response to heavy CPU loading and I really hate having to disassemble things just to clean the fans - $30/year for the electricity I can handle as a charity donation, but catching an overheating problem and doing a 2 hour repair job when it crops up isn't on my list of things I like to do in life.

What I wonder is why FAH hasn't found a charity sponsor who bids up Amazon EC2 spot instances for them? At $0.0035/hr ($30.60/year), that's pretty close to the price at which I can buy electricity for my CPUs.

                      --
                      🌻🌻 [google.com]
                      • (Score: 2) by RS3 on Tuesday January 30 2018, @03:52PM

                        by RS3 (6367) on Tuesday January 30 2018, @03:52PM (#630407)

                        I 100% agree on all points. Being a hardware hacker I have a couple/few compressors, so occasionally I drag a computer outside and blow out the dust. I've always been frustrated with computer "cooling" systems. Let's suck dirty dusty air in everywhere we can, especially into floppy / optical drives. Let's also have very fine-finned heatsinks so we can collect that dust. Real equipment has fans that suck air in through washable / replaceable filters. A long time ago I had modded a couple of my computers- just flipped the fans around and attached filters.

                        People with forced-air HVAC will have much more dust in their computers. I highly advocate better filtration in HVAC, plus room air filters.

                        I have never, and would never run FAH or SETI or any such thing on a laptop. I don't leave laptops running unattended for very long.

                        My computers, and ones I admin, are all somewhat older by today's standards (4+ years) but have decent enough computing and GPU power. FAH won't use my GPUs- dumb!

                        Ideally we'll all get power from the sun. Major companies like Amazon are investing in rooftop solar generation. They could / should donate the watts to important projects like FAH.

            • (Score: 2) by JoeMerchant on Sunday January 28 2018, @03:03PM (2 children)

              by JoeMerchant (3937) on Sunday January 28 2018, @03:03PM (#629462)

              it's helped by the fact that most customers don't need an intense simulation 24/7 (otherwise they could just buy their own hardware).

              True enough - last time I did a cost-estimate for using AWS, they were coming in around 10x my cost of electricity, they probably get their electricity for half of my price... the thing for me is: I can dedicate $1000 worth of hardware to run in-house for 10 days (~$1.50 in electricity) to get a particular result, or... I could spin up leased AWS instances and get the same compute result for about $15. If I'm only doing 1 of these, then, sure, run it at home. If I'm trying to do 10... do I really want to wait for >3 months to get the result, when I could have it from AWS in 10 minutes for $150? Of course, this is all theory, I haven't really looked into EC2's ability to spin up 30,000 instances on-demand yet - but, I do bet they could handle a few hundred instances, getting those 10 results in less than a day, instead of waiting 3 months.

              --
              🌻🌻 [google.com]
              • (Score: 3, Interesting) by Anonymous Coward on Sunday January 28 2018, @09:41PM (1 child)

                by Anonymous Coward on Sunday January 28 2018, @09:41PM (#629589)

                If you just need a computation done that's highly parallel, you can get a much better price by using spot instances [amazon.com]. It gets you compute time cheap when no one else wants it with the catch that your computation may be canceled at any time (... which is fine if you're able to save progress regularly). The people I know who train machine learning models do it pretty much only with AWS spot instances.

                • (Score: 2) by JoeMerchant on Sunday January 28 2018, @09:59PM

                  by JoeMerchant (3937) on Sunday January 28 2018, @09:59PM (#629601)

                  Thanks for this (EC2 spot instances) - that may very well be the way I go when I get enough time to pursue my hobbies.

                  --
                  🌻🌻 [google.com]
            • (Score: 2) by hendrikboom on Sunday January 28 2018, @03:36PM

              by hendrikboom (1125) Subscriber Badge on Sunday January 28 2018, @03:36PM (#629474) Homepage Journal

              One thing the computers on the silly list have that folding or seti at home don't have is fast interconnect. That's essential for solving partial differential equations.

            • (Score: 1) by khallow on Sunday January 28 2018, @05:45PM

              by khallow (3766) Subscriber Badge on Sunday January 28 2018, @05:45PM (#629515) Journal
The computing power thrown at Bitcoin dwarfs any of that, though it's pure integer logic math. According to here [bitcoincharts.com], the Bitcoin network is currently computing almost 8 million terahashes per second. A hash is apparently fixed in computation at about 1350 arithmetic logic unit operations [bitcointalk.org] ("ALU ops") presently - integer, not floating point, operations. So the Bitcoin network is cranking out roughly 8*10^18 hashes/s * 1350 ops/hash, or about 10^22 arithmetic operations per second - 10 million petaALU ops.
          • (Score: 2) by HiThere on Sunday January 28 2018, @06:05PM

            by HiThere (866) Subscriber Badge on Sunday January 28 2018, @06:05PM (#629521) Journal

Sorry, but while there are a few things that don't parallelize well, most things do... you just need very different algorithms. Often we don't know what the algorithms are, but brains are an existence proof that they exist. The non-parallelizable things tend to be those most common to conscious thoughts. My theory is that conscious thought was invented as a side effect of indexing memories for retrieval. It is demonstrably a very small part of what's involved in thinking. I believe (again with little evidence) that most thinking is pure pattern recognition, and that's highly parallelizable... though not really on a GPU. GPUs only handle a highly specialized subset of the process.

            --
            Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
          • (Score: 0) by Anonymous Coward on Monday January 29 2018, @01:18AM

            by Anonymous Coward on Monday January 29 2018, @01:18AM (#629660)

For me it was 1983. We preloaded services, with 1 to n instances of a given service, so that disk I/O could be interwoven. On that machine, 1 disk I/O was equal to about 40,000 assembler instructions. So a name search normally had 3 services shared by 72 workstations/users, and 20 lines to show on a display would work out to 0.75 secs - always, once the system learned how many simultaneous requests it needed to support.

          • (Score: 2) by TheRaven on Monday January 29 2018, @11:24AM (1 child)

            by TheRaven (270) on Monday January 29 2018, @11:24AM (#629776) Journal
If I were designing a new CPU now, to be fast and not have to run any legacy software, it would have a large number of simple in-order cores, multiple register contexts, hardware-managed thread switching (a pool of hardware-managed thread contexts spilled to memory), hardware inter-thread message passing, young-generation garbage collection integrated with the cache, a cache-coherency protocol without an exclusive state (no support for multiple cores having access to the same mutable data), and a stack-based instruction set (denser instructions, and no need to care about ILP because we don't try to do any).

            We could build such a thing today and it would both be immune to Spectre-like attacks and have a factor of 2-10 faster overall throughput than any current CPU with a similar transistor budget and process. It would suck for running C code, but a low-level language with an abstract machine closer to Erlang would be incredibly fast.

            We've invested billions in compiler and architecture R&D to let programmers pretend that they're still using a fast PDP-11. That probably needs to stop soon.
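
The hardware described above is hypothetical, but the programming model it favours - isolated workers with no shared mutable state, communicating only by messages - can be sketched today in plain Python with processes and queues:

    # Sketch of the message-passing style described above: isolated workers,
    # no shared mutable state, communication only through explicit messages.
    # Ordinary OS processes stand in for the hypothetical hardware threads.
    from multiprocessing import Process, Queue

    def worker(inbox: Queue, outbox: Queue) -> None:
        while True:
            msg = inbox.get()
            if msg is None:          # "poison pill": shut down
                break
            outbox.put(msg * msg)    # do some work, reply with a message

    if __name__ == "__main__":
        inbox, outbox = Queue(), Queue()
        procs = [Process(target=worker, args=(inbox, outbox)) for _ in range(4)]
        for p in procs:
            p.start()
        for n in range(8):
            inbox.put(n)
        results = sorted(outbox.get() for _ in range(8))
        for _ in procs:
            inbox.put(None)
        for p in procs:
            p.join()
        print(results)               # [0, 1, 4, 9, 16, 25, 36, 49]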

            --
            sudo mod me up
            • (Score: 2) by JoeMerchant on Monday January 29 2018, @02:08PM

              by JoeMerchant (3937) on Monday January 29 2018, @02:08PM (#629802)

              We've invested billions in compiler and architecture R&D

              I'm pretty sure that number is in the trillions by now... inertia is a remarkably powerful thing, just witness the staying power of DOS/Windows - sure there are alternatives, there always have been, but as far as market dominance goes we're past 30 years now.

              --
              🌻🌻 [google.com]
        • (Score: 2) by frojack on Sunday January 28 2018, @09:11PM

          by frojack (1554) on Sunday January 28 2018, @09:11PM (#629575) Journal

          There will be at least some amount of performance increase over the next 10 years, as well as a power consumption reduction

          I predict performance increases will continue, in spite of this story. We are simply in a developmental pause, as new technologies come on line.

          Did Moore's law apply to steam engines? They got bigger and faster (120mph) and then they disappeared. Poof.
Yet the dog-slow Amtrak Acela Express attains 150 mph daily, and is laughed at by the Japanese, German, and Chinese "bullet" trains.

The path to more power does not rely ONLY on maintaining the same technology. But even if you do, you can use the same technology in totally different ways. Example: field-programmable gate arrays (FPGAs) could evolve so that the first thing that happens when a program is launched is that a generic FPGA is allocated from a pool, and the software sets it up (programs the array), runs its main functions on the (now custom) array, releases the array, and exits.

          --
          No, you are mistaken. I've always had this sig.
  • (Score: 2) by pkrasimirov on Sunday January 28 2018, @12:07PM (7 children)

    by pkrasimirov (3358) Subscriber Badge on Sunday January 28 2018, @12:07PM (#629409)

> it's time to move on to increasing performance wherever possible by writing better parallel processing code.
In startups and small-scale projects, or generally in code running on limited resources, perhaps yes. In modern "professional" software even single-threaded algorithms are yet to be refined. I mean, I have seen a background task take 8+ hours doing 10k+ SQL statements in a for-loop. When rewritten as one SQL statement, as it should be, it completes in around a minute. The developer claimed "great success" [youtube.com] and proceeded to get his annual bonus payment. And I'm not talking about some irrelevant task; it's related to the payslips of thousands of unrelated and unsuspecting people.

    So, yeah. No law can substitute thinking and coding, and that's not new.
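
To make the shape of that fix concrete, here is a toy sketch; the table and column names are invented, and sqlite3 stands in for the real database:

    # Toy contrast between "10k+ SQLs in a for-loop" and one set-based query.
    # Schema and data are made up for the example.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE employee (id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE payslip  (employee_id INTEGER, amount REAL);
    """)
    conn.executemany("INSERT INTO employee VALUES (?, ?)",
                     [(i, "emp%d" % i) for i in range(10000)])
    conn.executemany("INSERT INTO payslip VALUES (?, ?)",
                     [(i, 100.0 + i) for i in range(10000)])

    # Slow pattern: one query per employee, 10,000 round trips.
    totals_slow = {}
    for (emp_id,) in conn.execute("SELECT id FROM employee"):
        row = conn.execute("SELECT SUM(amount) FROM payslip WHERE employee_id = ?",
                           (emp_id,)).fetchone()
        totals_slow[emp_id] = row[0]

    # Set-based pattern: one statement, the database does the looping.
    totals_fast = dict(conn.execute(
        "SELECT employee_id, SUM(amount) FROM payslip GROUP BY employee_id"))

    assert totals_slow == totals_fast

Against a real server the gap is dominated by per-statement round trips and planning, which is why collapsing the loop into one statement can turn hours into minutes.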

    • (Score: 2) by JoeMerchant on Sunday January 28 2018, @02:03PM

      by JoeMerchant (3937) on Sunday January 28 2018, @02:03PM (#629444)

I recently optimized a single-threaded app we use at work, taking its execution time down from ~8 minutes to just under 1 minute... as the data it crunches inevitably grew, it started at around a minute of execution time and grew to ~8... there might be a dozen people who execute this process; all told it might be running 12 times a day on average (the heavy user hitting it 3-4x per day, other users hitting it once every 3-4 days or even less frequently). It took a half-day to run the profiler, identify the big bottleneck, address it, test it, and verify no change in output. Up to that point, A) it wasn't a big enough issue to warrant the attention, and B) optimization on a smaller dataset might not have addressed the problems that became clear when the dataset grew. All told, a single small subroutine grew from 4 lines of code to about 12, adding a buffer that stores a costly result and flushing that buffer when changes in inputs invalidate it: 9x speedup. If we wanted another 9x speedup today, it would be a LOT more costly to achieve - the profiler didn't identify any more low-hanging fruit. Maybe when the database grows some more, some new low-hanging fruit will present itself.
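
The pattern described - cache a costly result, invalidate it when its inputs change - looks roughly like this minimal sketch (the class and the "costly" computation are invented for illustration):

    # Bare-bones memoization with explicit invalidation: the expensive step
    # runs only when the cached result has been flushed by a change in inputs.
    class Report:
        def __init__(self, rows):
            self._rows = list(rows)
            self._summary_cache = None       # invalid until first use

        def add_row(self, row):
            self._rows.append(row)
            self._summary_cache = None       # inputs changed: flush the cache

        def summary(self):
            if self._summary_cache is None:
                # Stand-in for the costly subroutine.
                self._summary_cache = sum(self._rows)
            return self._summary_cache

    r = Report(range(1000))
    assert r.summary() == r.summary()        # second call hits the cache
    r.add_row(5)                             # invalidates the cached value
    assert r.summary() == sum(range(1000)) + 5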

      --
      🌻🌻 [google.com]
    • (Score: 2) by JoeMerchant on Sunday January 28 2018, @03:10PM (3 children)

      by JoeMerchant (3937) on Sunday January 28 2018, @03:10PM (#629466)

Now, for your background task that was taking 8+ hours to execute - if that runs once a month, unattended, maybe it's not such a big deal. Sure, it can be >1000x faster, and good on you for improving it, but if it was meeting the users' needs, was it really broken? If there were other things that required attention - things that were actually losing data or returning incorrect results - maybe the question of optimization is moot when the process already completes "overnight" and never needs to be run, then run again, quickly.

As for me, I'd go for the faster execution just because it's more testable; but if your "great success" developer had a blind spot there while contributing value in other areas (granted, unlikely), things that don't need to be fixed aren't really broken.

      --
      🌻🌻 [google.com]
      • (Score: 4, Funny) by pkrasimirov on Sunday January 28 2018, @04:06PM (2 children)

        by pkrasimirov (3358) Subscriber Badge on Sunday January 28 2018, @04:06PM (#629482)

        > if it was meeting the users' needs, was it really broken?
        You are ready to be a manager...

        > things that don't need to be fixed aren't really broken.
        ... in operations.

        • (Score: 4, Insightful) by JoeMerchant on Sunday January 28 2018, @04:26PM

          by JoeMerchant (3937) on Sunday January 28 2018, @04:26PM (#629487)

          if it was meeting the users' needs, was it really broken?

          You are ready to be a manager...

          (Good) managers do add value. If you can make a fix like that in less than the time it takes the non-optimized code to run once, sure - go for it, even take a couple more days to prove it's right and "big win" - on the other hand, when there's a loss-of-revenue fire on another project, spending 3 days to speed up something that nobody is complaining about? That's 3 days of lost revenue for "no good reason."

          --
          🌻🌻 [google.com]
        • (Score: 2) by Virindi on Tuesday January 30 2018, @03:02AM

          by Virindi (3484) on Tuesday January 30 2018, @03:02AM (#630156)

          > if it was meeting the users' needs, was it really broken?
          You are ready to be a manager...

          The reason people are employed or contracted is so that they can do things which provide value to the employer. If something genuinely is fine and improving it would provide no value, the employer should not pay for it.

The problem comes when the engineer, who is more knowledgeable about the system, sees a problem that the employer does not. This is the common case; often something 'works fine' now but is set up in a way that will cause problems later. But whether this is actually a problem worth fixing depends on both technical and business considerations... it is a cost/benefit calculation. That is the supposed job of management*, and it is not wrong.

          *In a society/environment where people are not acting like little kids who constantly have to be prodded to keep doing work.

    • (Score: 0) by Anonymous Coward on Sunday January 28 2018, @05:00PM (1 child)

      by Anonymous Coward on Sunday January 28 2018, @05:00PM (#629502)

As I teach new programmers: break the job up! Think like a firefighter - bucket brigades. SQL is not an efficient database. Get data architects to design better databases; programmers do not understand them, at least for 5 to 10 years.

Personally I would like all programmers to write code that uses 10K of memory or less. Really learn and show their skills.

  • (Score: 4, Insightful) by takyon on Sunday January 28 2018, @12:14PM (4 children)

    by takyon (881) <takyonNO@SPAMsoylentnews.org> on Sunday January 28 2018, @12:14PM (#629411) Journal

    Moore's Law: Not Dead? Intel Says its 10nm Chips Will Beat Samsung's [soylentnews.org]

    Transistor counts are still increasing. Intel's 10nm has been delayed and will probably be delayed again by the "in-silicon" fixes they have alluded to for Meltdown (Spectre?). And although their transistors per mm2 makes them look better than the competition, it's clear that rather than "10nm" Intel competing against "10nm" Samsung/TSMC/GlobalFoundries, it will be against "7nm" Samsung/TSMC/GlobalFoundries. Intel has slipped enough that the competitors can probably reach similar transistors per mm2.

    Where will those transistors go? More cores and graphics (and less to security risk enabling optimizations). The Ryzen/Threadripper, Intel Core i9, and "mainstream" Intel 6-core launches greatly boosted the amount of cores/threads you can cheaply acquire (Intel was previously selling the 10-core i7-6950X for $1700). From 6 cores to 18 cores, it's all much cheaper. Xeon and Epyc are pushing to 32 cores and beyond.

2017 saw performance/$ massively increase, IF you can take advantage of parallelism, that is. And the performance hit from Meltdown only really affected Intel, AFAIK. AMD users potentially got a massive increase in multi-threading capability as well as a ~50% increase in IPC. So their single-threaded performance didn't decline at all.

    New nodes are still on the roadmap. [soylentnews.org] "7nm" is assured. "5nm" is likely. "3-4nm" is possible. EUV might not be needed although it would help.

    When CMOS scaling does become certifiably dead, it will eventually be replaced by something else. Even if we have to endure a few years in which no significant improvements are made at all (meaning even less than the 3-10% IPC improvements certain Intel generations have made, no increase in core count at the same size, no increase in transistors per mm2, etc). And what we need to see is a new processing element with a huge reduction in power consumption and heat so that it can be stacked. A growing amount of software will be able to take advantage of thousands or millions of cores.

    --
    [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
    • (Score: 0) by Anonymous Coward on Sunday January 28 2018, @02:11PM (1 child)

      by Anonymous Coward on Sunday January 28 2018, @02:11PM (#629448)

      You mean... like the Mill CPU?

      https://www.youtube.com/watch?v=LgLNyMAi-0I [youtube.com]

    • (Score: 2) by TheRaven on Monday January 29 2018, @11:29AM (1 child)

      by TheRaven (270) on Monday January 29 2018, @11:29AM (#629778) Journal

      Transistor counts are still increasing

      This is only kind-of true. The thing that's changed in the last 3-4 years is that new process technologies are not bringing down the cost of transistors. The other part of Moore's law is that the dollar investment is constant: the number of transistors on a chip that you can buy for a fixed amount doubles every 18 months. With the last couple of process advances, the cost per transistor has gone up.

It used to be that for large volume runs, you always wanted the newest generation. It might have higher up-front costs and a higher cost per wafer, but the cost per chip would go down because the amortised cost per transistor was lower. That's no longer the case: even for enormous runs where the fixed costs are negligible when amortised across the entire run, the cost per wafer has gone up by enough that the cost per chip has also gone up. Oh, and those fixed costs have gone up by a lot, so for smaller runs this is even worse.

      --
      sudo mod me up
  • (Score: 4, Insightful) by maxwell demon on Sunday January 28 2018, @12:15PM (1 child)

    by maxwell demon (1608) on Sunday January 28 2018, @12:15PM (#629412) Journal

Does that mean we will reach the end of bloatware, as you can no longer count on hardware advances to compensate for your software sloppiness?

    --
    The Tao of math: The numbers you can count are not the real numbers.
  • (Score: 2, Informative) by Anonymous Coward on Sunday January 28 2018, @12:29PM (10 children)

    by Anonymous Coward on Sunday January 28 2018, @12:29PM (#629415)

    Repeat after me: Moore's law is talking about transistor count, it says absolutely nothing about performance.

    Therefore, it is not impacted in any way whatsoever by performance drops due to Spectre/Meltdown.

    Of course, we may be nearing "the end" of increasing transistor density as well, if only due to laws of physics, but you'll forgive me if I decide to wait for it to actually stop before I start singing requiems. The doom-sayers have been crying about imminent death of Moore's law since, I dunno, May 1965 I guess.

    The "Death notice 2 January 2018" is just ridiculous clickbait written without even a cursory glance at the Wikipedia article about Moore's law [wikipedia.org], let alone understanding any of it. Literally, the first sentence is "Moore's law is the observation that the number of transistors in a dense integrated circuit doubles approximately every two years."

    • (Score: 2) by Runaway1956 on Sunday January 28 2018, @12:45PM (5 children)

      by Runaway1956 (2926) Subscriber Badge on Sunday January 28 2018, @12:45PM (#629424) Journal

Well, actually, Moore's law isn't exactly a law. Moore simply commented on a phenomenon, which is temporary. At some point in time we will reach molecule-, then atom-sized transistors, and only so many will fit where we want to put them. Doubling the number of transistors, and thus performance and efficiency, will begin to take longer, then longer, and eventually they'll just give up on Moore. Incremental improvements in a lifetime will be the norm.

Wonder if there was some similar "law" cited with the advent of reinforced concrete and high-rise buildings? "At the pace that construction is improving, we'll all be living in skyscraping towers in the next century!"

      • (Score: 0) by Anonymous Coward on Sunday January 28 2018, @01:06PM (1 child)

        by Anonymous Coward on Sunday January 28 2018, @01:06PM (#629426)

        > Well, actually, Moore's law isn't exactly a law. Moore simply commented on a phenomenon, which is temporary.

        Well, yeah, I know, I never argued otherwise. It's also in the first sentence from Wikipedia I mentioned ("Moore's law is the observation that...").

I'm saying that the core premise of TFA is complete bullshit. Might as well use Spectre/Meltdown to announce the deaths of Sturgeon's law [wikipedia.org] and Hofstadter's law [wikipedia.org]. It's just as relevant.

        • (Score: 0) by Anonymous Coward on Sunday January 28 2018, @02:00PM

          by Anonymous Coward on Sunday January 28 2018, @02:00PM (#629441)

          Not a death announcement, but rather a textbook confirmation of Hofstadter's law.
          It is going to take a bit longer than expected to get the next speedup.
This is because the planning did not account for having to stop, back up, and adjust for an unaccounted-for use case.
          Kind of the whole point of scheduling complexity being complex.

          Moore's law is a simple equation. It may need another term to account for approaching the limits of the current bags of tricks.
          It seems likely that another bag will be found. Perhaps 3d?

      • (Score: 2) by choose another one on Sunday January 28 2018, @01:50PM (1 child)

        by choose another one (515) Subscriber Badge on Sunday January 28 2018, @01:50PM (#629436)

> Wonder if there was some similar "law" cited with the advent of reinforced concrete and high-rise buildings?

Not sure, but it is not dissimilar - concrete keeps getting better and buildings keep getting taller, but the concrete is not the limiting factor. Wind loading becomes your problem, and when better simulation and design sort of solved that, you run into limits because you cannot fit enough elevators into a building core to move the people from floor to floor in acceptable time and still retain any usable building outside the elevator shafts. The tallest (probably) building under construction is now a cable-stayed monster in Dubai which solves the elevator issue by omitting most of the lower floors; it's basically a smaller skyscraper up in the air on a concrete stick.

        • (Score: 2) by hendrikboom on Sunday January 28 2018, @03:56PM

          by hendrikboom (1125) Subscriber Badge on Sunday January 28 2018, @03:56PM (#629479) Homepage Journal

          Once again, it's an interconnect problem. Now if we just interconnected all those buildings at the 30th storey...

      • (Score: 4, Informative) by c0lo on Sunday January 28 2018, @02:03PM

        by c0lo (156) Subscriber Badge on Sunday January 28 2018, @02:03PM (#629443) Journal

        Well, actually, Moore's law isn't exactly a law.

        Murphy's law isn't exactly a law either. But, boy, does it happen or does it happen.

        --
        https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford
    • (Score: 2) by JoeMerchant on Sunday January 28 2018, @02:07PM

      by JoeMerchant (3937) on Sunday January 28 2018, @02:07PM (#629447)

      Moore's law has been (mis)applied by the tech press to everything from transistor count, to clock frequency, storage density, performance per dollar, performance per watt, and anything else that makes rapid progress.

      --
      🌻🌻 [google.com]
    • (Score: 4, Informative) by FatPhil on Sunday January 28 2018, @02:25PM (1 child)

      by FatPhil (863) <{pc-soylent} {at} {asdf.fi}> on Sunday January 28 2018, @02:25PM (#629452) Homepage
      Pedantically it was formulated as complexity (meaning transistor count) at minimum cost per transistor (which requires taking into account yields).

The fake news in CPUs is the *name* of the processes. Nothing in a 14nm chip is 14nm in size, or in distance from anything else. Nothing in a 10nm chip is 10nm in size, or in distance from anything else. Nothing in a 7nm chip is 7nm in size, or in distance from anything else. Worst of all - nothing in a 7nm chip is half the size of what it was in a 14nm chip (it's no longer a linear relationship being followed; it's more square-rooty). Modern "nm" figures make bogomips look like something from the SI units' definitions.
      --
      Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
      • (Score: 0) by Anonymous Coward on Monday January 29 2018, @02:15PM

        by Anonymous Coward on Monday January 29 2018, @02:15PM (#629804)

So where does the name come from, then? Marketing dept?

    • (Score: 2) by HiThere on Sunday January 28 2018, @06:40PM

      by HiThere (866) Subscriber Badge on Sunday January 28 2018, @06:40PM (#629532) Journal

Sorry, but this is just about the time that the death of Moore's law was predicted to happen. There'll probably be another generation or two of improvement, but noise levels are increasing, so starting to plan for it is reasonable. Sometime between now and, say, 2025; 2027 at the latest. But the last generation will experience lower beneficial gains.

I really think it's time to start planning for a changed architecture, with a strong increase in parallel algorithms, and a shift towards languages that make that easy. FWIW, I'm not alone in this opinion. Most recently designed languages tend to presume that this is going to be important. There may, however, be only a few who think it should go down as far as the chip assembly language, as I do. What I'm in favor of is using something like Erlang's BEAM virtual machine and implementing it as the assembler. And designing with LOTS of CPUs (that's a misnomer in this context) with perhaps one or two of the current chip design for sequential operations... and even that's wrong. It should be a specialized design that has a processor more similar to a 64 bit i386 than to a current processor, but with additional operations to facilitate "actor" i/o from the other processors. Each of the processors should have a small dedicated FAST cache and a much larger non-volatile cache (so it won't dissipate power in just holding memory). I have a suspicion that some of the announcements about "neural style computers" being developed are actually this kind of a system. If it's done right this should reduce the cooling needs significantly, to the point where actual 3-D chips become feasible. Hopefully without the need for internal cooling systems, as those add tremendously to the complexity. Then these chips could themselves be connected via a message passing network for increased power. And there should also be GPU modules for graphics processing, redesigned to communicate with the main system via message passing.

      What I'm proposing is clearly not an optimum design, merely a first cut by someone who isn't an expert in the field... but it seems about right to me. And it leaves room to attach any desired peripherals, as long as they communicate with the system by message passing, which is already fairly standard for most peripherals. (DMA channels, e.g., are few and far between.)
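      To make the message-passing idea concrete, here is a minimal shared-nothing sketch in C++ (purely illustrative; the Mailbox type, the worker count, and the toy workload are inventions for the example, not anything specified above): each worker owns its own mailbox, and all coordination happens through messages rather than shared state.

```cpp
// Minimal shared-nothing "actor" sketch: each worker owns a mailbox
// (a queue guarded by a mutex and condition variable), and workers
// interact only by sending messages into mailboxes.
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

struct Mailbox {
    std::mutex m;
    std::condition_variable cv;
    std::queue<int> q;

    void send(int msg) {
        { std::lock_guard<std::mutex> lk(m); q.push(msg); }
        cv.notify_one();
    }
    int receive() {  // blocks until a message arrives
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [&] { return !q.empty(); });
        int msg = q.front();
        q.pop();
        return msg;
    }
};

int main() {
    const int kWorkers = 4;          // stand-in for "LOTS of CPUs"
    std::vector<Mailbox> inbox(kWorkers);
    Mailbox results;                 // the coordinator's own mailbox

    std::vector<std::thread> workers;
    for (int i = 0; i < kWorkers; ++i) {
        workers.emplace_back([&, i] {
            for (;;) {
                int msg = inbox[i].receive();
                if (msg < 0) break;          // negative message = shut down
                results.send(msg * msg);     // toy "work": square the payload
            }
        });
    }

    for (int i = 0; i < 100; ++i) inbox[i % kWorkers].send(i);  // scatter work
    long long total = 0;
    for (int i = 0; i < 100; ++i) total += results.receive();   // gather results
    for (int i = 0; i < kWorkers; ++i) inbox[i].send(-1);       // stop everyone
    for (auto& t : workers) t.join();

    std::cout << "sum of squares 0..99 = " << total << "\n";
}
```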

      --
      Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
  • (Score: 3, Informative) by bradley13 on Sunday January 28 2018, @04:17PM (2 children)

    by bradley13 (3053) on Sunday January 28 2018, @04:17PM (#629484) Homepage Journal

    Sure, Moore's law was about transistor count, but that has always had a direct relationship with performance. Not only in transistor speed (smaller transistors switch faster), but also in the available complexity: some of the individual instructions in today's processors would have been massive software functions just a few years ago.

    Meanwhile, processor performance has outpaced memory performance, hence caching. To push single-thread speed for simple instructions, first came pipelines, then speculative execution. What we're seeing in Meltdown and Spectre is unexpected interactions between these various optimizations, so they'll have to be scaled back.

    However, still hugely underexploited: parallelism. Watch your processor usage when you are intensively using some application or other. If you have a 4-core, hyperthreaded processor, most likely you will see one or two of your eight virtual cores really being used. This is even with the zillions of background tasks running on every machine nowadays.

    At a guess, if you took out all the bleeding-edge single-thread optimization stuff, like branch prediction, and also moved the massively complex instructions into shared units, you might reduce per-core performance by a factor of 2-3, but you could fit at least 5 times as many cores onto the chip. The problem is that they would sit there, unused.
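    For what it's worth, actually spreading an embarrassingly parallel loop across the cores that are already there takes very little code. A rough sketch in C++ (the loop body is just a placeholder workload, not anything specific):

```cpp
// Rough sketch: split an embarrassingly parallel loop across every
// hardware thread the machine reports, instead of leaving most of the
// virtual cores idle.
#include <algorithm>
#include <cstdint>
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

int main() {
    const std::size_t n = 100'000'000;
    const unsigned threads = std::max(1u, std::thread::hardware_concurrency());

    std::vector<std::uint64_t> partial(threads, 0);
    std::vector<std::thread> pool;
    for (unsigned t = 0; t < threads; ++t) {
        pool.emplace_back([&, t] {
            std::uint64_t local = 0;             // private accumulator per thread
            for (std::size_t i = t; i < n; i += threads)
                local += i % 7;                  // stand-in for real per-item work
            partial[t] = local;                  // one write per thread at the end
        });
    }
    for (auto& th : pool) th.join();

    const std::uint64_t total =
        std::accumulate(partial.begin(), partial.end(), std::uint64_t{0});
    std::cout << "used " << threads << " threads, total = " << total << "\n";
}
```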

    It goes along with the bloatware problem. There's too much software that needs to be written, and too few really good programmers out there. So you get crap built on frameworks glued to other frameworks and delivered to the clueless customer. Dunno what we do about it, but this is a fundamental problem.

    --
    Everyone is somebody else's weirdo.
    • (Score: 2, Interesting) by AlphaSnail on Sunday January 28 2018, @07:36PM (1 child)

      by AlphaSnail (5814) on Sunday January 28 2018, @07:36PM (#629549)

      I have this feeling, which might be wrong, but I suspect that at some point this AI phenomenon is going to be turned toward translating code into more optimized and parallel functionality, like some kind of super compiler. I see no reason we couldn't hit a point where software isn't typed out like it is now but written by some kind of Siri-like assistant that converses with you until the program runs as you describe it, rather than you working out every line of code yourself. At that point, how good a coder is will be a concept like how good a switchboard operator is: irrelevant, having been replaced by a superior technique. After that, people might code for fun, but it won't be used by any businesses or for-profit entities. The reason it might be a while before that happens is that it would mean coders putting themselves out of a job, and I think they'll hold off on that for as long as possible. But I think it's inevitable: whoever does it first will have such an advantage, once the cat's out of the bag, that they will dominate the software industry in every respect, on cost and quality.

      • (Score: 2) by TheRaven on Monday January 29 2018, @11:32AM

        by TheRaven (270) on Monday January 29 2018, @11:32AM (#629779) Journal
        AI is not magic. AI is a good way of getting a half-arsed solution to a problem that you don't understand, as long as that problem doesn't exist in a solution space that has too many local minima. Pretty much none of that applies to software development, and especially not to optimisation. We already have a bunch of optimisation techniques that we're not using because they're too computationally expensive to be worth it (i.e. the compiler will spend hours or days per compilation unit to get you an extra 10-50% performance). AI lets us do the same thing, only slower.
        --
        sudo mod me up
  • (Score: 2) by turgid on Sunday January 28 2018, @04:38PM (2 children)

    by turgid (4318) Subscriber Badge on Sunday January 28 2018, @04:38PM (#629492) Journal

    In my humble experience, it's difficult enough to get "professional software developers" to write working single-threaded code, let alone multi-threaded or any other sort of fancy parallel stuff. We have an industry built on hubris and ego.

    • (Score: 3, Insightful) by TheRaven on Monday January 29 2018, @11:35AM (1 child)

      by TheRaven (270) on Monday January 29 2018, @11:35AM (#629780) Journal
      In Alan Kay's experience, if you teach small children to program in a shared-nothing actor-model language, they can happily write code that effectively uses hundreds of threads. The difficulty is not writing parallel code, it's writing parallel code in a language that's fundamentally designed for serial code. Languages designed for parallelism, such as Erlang, tend to give more reliable code because you need better isolation between components (unless you think cache-line ping-pong is a fun game) and tightly constrained side effects.
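      The "cache-line ping-pong" point is easy to demonstrate: two threads updating counters that happen to share a cache line force that line to bounce between cores on every write, even though the threads never touch each other's data. A small illustrative sketch in C++, assuming 64-byte cache lines (the struct names and iteration count are made up for the example):

```cpp
// Two threads update "independent" counters. Packed into one cache line,
// the line bounces between cores on every write (false sharing); aligned
// to separate 64-byte lines, the threads genuinely don't interact.
#include <atomic>
#include <chrono>
#include <iostream>
#include <thread>

struct Packed {                        // both counters share one cache line
    std::atomic<long> a{0}, b{0};
};

struct Padded {                        // each counter gets its own line
    alignas(64) std::atomic<long> a{0};
    alignas(64) std::atomic<long> b{0};
};

template <typename Counters>
double hammer(Counters& c, long iters) {
    auto start = std::chrono::steady_clock::now();
    std::thread t1([&] { for (long i = 0; i < iters; ++i) c.a.fetch_add(1, std::memory_order_relaxed); });
    std::thread t2([&] { for (long i = 0; i < iters; ++i) c.b.fetch_add(1, std::memory_order_relaxed); });
    t1.join();
    t2.join();
    return std::chrono::duration<double>(std::chrono::steady_clock::now() - start).count();
}

int main() {
    const long iters = 50'000'000;
    Packed p;
    Padded q;
    std::cout << "same cache line: " << hammer(p, iters) << " s\n";
    std::cout << "separate lines:  " << hammer(q, iters) << " s\n";
}
```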
      --
      sudo mod me up
  • (Score: 2) by requerdanos on Sunday January 28 2018, @05:27PM (3 children)

    by requerdanos (5997) Subscriber Badge on Sunday January 28 2018, @05:27PM (#629510) Journal

    for the mainstay of IT, general purpose computing, last month may be as good as it ever gets.

    This is a dead tie for the dumbest, least likely thing I ever heard.

    It's up there with...

    • "There's a world market for about 5 computers"
    • "TV has no future because people will get tired of staring at a plywood box"
    • "When the [1878] Paris Exhibition closes, electric light will close with it and no more will be heard of it"
    • "640K is enough for anyone"
    • "I wonder what happens when I hit myself in the head with this sledgehammer*"

    You read it here first, folks:

    For general purpose computing, last month is absolutely not as good as it ever will get because technologies will advance such that computing gets faster. - requerdanos of soylentnews.org

    --------------------------------
    * You'd probably start writing future technology predictions like these folks did.

  • (Score: 2) by MichaelDavidCrawford on Sunday January 28 2018, @08:25PM

    by MichaelDavidCrawford (2339) Subscriber Badge <mdcrawford@gmail.com> on Sunday January 28 2018, @08:25PM (#629561) Homepage Journal

    All the CS majors at Kuro5hin said that was impossible but I have a physics degree and they don't.

    Surely there is a reason we plug our servers into the wall or build expensive batteries into our mobile devices.

    Here's a simple one:

    There is an x86_64 instruction that turns the cache off for just one cache line. For this to work the coder has to fill the entire cache line with new data.

    If you don't use that instruction and then write some data into the cache line, a line from the level 2 cache will first be read into the L1 cache line, and just that one bit of data will overwrite a little piece of it. If you go on to overwrite the entire line, that L2 cache read was wasted.

    I've tinkered with ways to do this but being homeless made it infeasible to perform actual measurements.

    But it seems to me that it's enough simply to point out that this can be done. For those energy savings to make a real difference, a vast number of codebases would need to be refactored.
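    For reference, the instruction family being described appears to be the non-temporal stores (MOVNTI/MOVNTDQ, exposed through the _mm_stream_* intrinsics): if the code writes an entire 64-byte line with them, nothing has to be read in first. A rough sketch, assuming that is the mechanism meant; the helper function name and buffer size are made up for the example:

```cpp
// Rough sketch: fill whole 64-byte cache lines with non-temporal stores
// (_mm_stream_si128 / MOVNTDQ), which write-combine data out to memory
// without first pulling the old line into the cache hierarchy.
#include <emmintrin.h>   // SSE/SSE2 intrinsics (_mm_stream_si128, _mm_sfence)
#include <cstdint>
#include <cstdlib>

// Hypothetical helper: dst must be 16-byte aligned, bytes a multiple of 64.
void fill_streaming(std::uint8_t* dst, std::size_t bytes, std::uint8_t value) {
    const __m128i v = _mm_set1_epi8(static_cast<char>(value));
    for (std::size_t i = 0; i < bytes; i += 64) {
        // Four 16-byte streaming stores cover one full cache line, so the
        // old contents of that line never need to be read in.
        _mm_stream_si128(reinterpret_cast<__m128i*>(dst + i),      v);
        _mm_stream_si128(reinterpret_cast<__m128i*>(dst + i + 16), v);
        _mm_stream_si128(reinterpret_cast<__m128i*>(dst + i + 32), v);
        _mm_stream_si128(reinterpret_cast<__m128i*>(dst + i + 48), v);
    }
    _mm_sfence();  // order the streamed stores before any later accesses
}

int main() {
    constexpr std::size_t kBytes = 1 << 20;              // 1 MiB test buffer
    void* buf = std::aligned_alloc(64, kBytes);
    if (!buf) return 1;
    fill_streaming(static_cast<std::uint8_t*>(buf), kBytes, 0xAB);
    std::free(buf);
}
```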

    --
    Yes I Have No Bananas. [gofundme.com]
  • (Score: 2) by wonkey_monkey on Sunday January 28 2018, @09:09PM

    by wonkey_monkey (279) on Sunday January 28 2018, @09:09PM (#629573) Homepage

    We're driving backwards now:

    What, we're going to start uninventing chip manufacturing processes? (Moore's Law isn't about computing performance per se).

    it's been time to move on to increasing performance wherever possible by writing better parallel processing code.

    Not everything can be parallelised.

    --
    systemd is Roko's Basilisk