
posted by martyb on Saturday December 30 2017, @06:45PM   Printer-friendly
from the perhaps-providing-prompt-prompts-prompts-perceived-performance-primacy dept.

Have you ever had that nagging sensation that your computer was slower than it used to be? Or that your brand new laptop seemed much more sluggish than an old tower PC you once had? Dan Luu, a computer engineer who has previously worked at Google and Microsoft, had the same sensation, so he did what the rest of us would not: He decided to test a whole slew of computational devices ranging from desktops built in 1977 to computers and tablets built this year. And he learned that that nagging sensation was spot on—over the last 30 years, computers have actually gotten slower in one particular way.

Not computationally speaking, of course. Modern computers are capable of complex calculations that would be impossible for the earliest processors of the personal computing age. The Apple IIe, which ended up being the “fastest” desktop/laptop computer Luu tested, is capable of performing just 0.43 million instructions per second (MIPS) with its MOS 6502 processor. The Intel i7-7700k, found in the most powerful computer Luu tested, is capable of over 27,000 MIPS.

But Luu wasn’t testing how fast a computer processes complex data sets. Luu was interested in testing how the responsiveness of computers to human interaction had changed over the last three decades, and in that case, the Apple IIe is significantly faster than any modern computer.

https://gizmodo.com/the-one-way-your-laptop-is-actually-slower-than-a-30-ye-1821608743


Original Submission

 
  • (Score: 5, Insightful) by Arik on Saturday December 30 2017, @06:49PM (8 children)

    by Arik (4543) on Saturday December 30 2017, @06:49PM (#615916) Journal
    When I tell people this they look at me like I'm crazy, but it's true.

    The sheer unresponsiveness of today's computer systems is maddening, and there's absolutely no excuse for it. The hardware has only gotten faster, but the programming has only gotten sloppier and lazier.
    --
    If laughter is the best medicine, who are the best doctors?
    • (Score: 3, Funny) by legont on Saturday December 30 2017, @06:56PM (7 children)

      by legont (4179) on Saturday December 30 2017, @06:56PM (#615921)

It's a conspiracy - they simply want us to buy a new one.

      --
      "Wealth is the relentless enemy of understanding" - John Kenneth Galbraith.
      • (Score: 2) by Arik on Saturday December 30 2017, @07:46PM (6 children)

        by Arik (4543) on Saturday December 30 2017, @07:46PM (#615938) Journal
        Every time you do that it gets worse.

        The logical man would stop buying them.

        There weren't enough logical men for companies making good PCs to survive though :(

So now we have to use compromised garbage for everything. George Orwell was right. The future is one boot, stomping one face, over and over again, forever.

        --
        If laughter is the best medicine, who are the best doctors?
        • (Score: 0) by Anonymous Coward on Sunday December 31 2017, @04:52AM (5 children)

          by Anonymous Coward on Sunday December 31 2017, @04:52AM (#616075)

          The problem is the programming, so stop using proprietary, user-subjugating software. Then if the problem is still the programming, fix it and contribute.

          • (Score: 3, Informative) by frojack on Sunday December 31 2017, @06:11AM (4 children)

            by frojack (1554) on Sunday December 31 2017, @06:11AM (#616088) Journal

            so stop using proprietary, user-subjugating software

This year marks the death of 32-bit operating systems. Not just from Apple and Microsoft.

Almost every Linux distribution in the world is dropping 32-bit versions. Virtually nobody will release a linux kernel for 4.12.
For no good reason, mind you. A 32-bit OS is usually just a compiler parameter away using any modern source language.
There are literally thousands of millions of these machines in existence. They are literally free at flea markets.

            So don't give me that about user-subjugating crap.

            --
            No, you are mistaken. I've always had this sig.
            • (Score: 2) by maxwell demon on Sunday December 31 2017, @08:14AM

              by maxwell demon (1608) on Sunday December 31 2017, @08:14AM (#616103) Journal

              If it is really just a compiler parameter away, what stops you from taking the source code and compile it with that compiler parameter?

              --
              The Tao of math: The numbers you can count are not the real numbers.
            • (Score: 3, Informative) by digitalaudiorock on Sunday December 31 2017, @05:56PM (2 children)

              by digitalaudiorock (688) on Sunday December 31 2017, @05:56PM (#616168) Journal

Almost every Linux distribution in the world is dropping 32-bit versions. Virtually nobody will release a linux kernel for 4.12.
For no good reason, mind you. A 32-bit OS is usually just a compiler parameter away using any modern source language.

              Interestingly, my MythTV systems which work flawlessly on 1080 OTA content are both still on old x86 hardware, which I've been able to get away with since they're both on Gentoo (currently running a 4.12.12 kernel). Ironically however, I'll eventually get screwed because of the only proprietary software I depend on...and that's the nVidia driver which I need for my frontend's GT430 card for VDPAU etc...and nVidia is dropping 32 bit support. Wonderful.

So that will eventually force me to 64-bit hardware, at least for the frontend, sooner than I probably would have done so. The sad irony is that I've yet to find a remote control USB IR receiver that isn't a laggy piece of total shit compared to my old serial IR receiver...so odds are, I'll lose functionality with the "upgrade". Shit's just moving backwards for sure.

              • (Score: 0) by Anonymous Coward on Monday January 01 2018, @12:12AM (1 child)

                by Anonymous Coward on Monday January 01 2018, @12:12AM (#616276)

                You can get a USB-serial converter so you can still use your old IR receiver. Or you can get a cheap AMD video card. The AMD driver is open source, though that's still no guarantee it will work on 32-bit kernels forever.

                • (Score: 2) by digitalaudiorock on Tuesday January 02 2018, @12:30AM

                  by digitalaudiorock (688) on Tuesday January 02 2018, @12:30AM (#616540) Journal

                  Actually no. The serial IR code in the kernel is expressly designed to work with a real serial device only. That's always been the case. That's why the usual serial UART needs to be disabled. In my case I just left the ordinary serial support out of the kernel freeing the port for the IR.

                  I've tried an MCE USB receiver and the lag was unusable. I just received an IguanaWorks receiver but it turned out to be one that requires a separate wired receiver (with a 1/8" phone jack) which I'm waiting to get. I'm hoping that works better, but I've read many accounts that there's nothing out there as reliable as the old homebrew IR receivers for real serial ports.

  • (Score: 5, Insightful) by bradley13 on Saturday December 30 2017, @07:05PM (39 children)

    by bradley13 (3053) on Saturday December 30 2017, @07:05PM (#615924) Homepage Journal

    Too many layers of crap. Programs built using frameworks that use other frameworks that use libraries that call operating system APIs that use other libraries that...

    There are also far too many background processes, each with dozens of threads. It's impossible to know what they all are doing.

    --
    Everyone is somebody else's weirdo.
    • (Score: 3, Touché) by takyon on Saturday December 30 2017, @07:08PM (2 children)

      by takyon (881) <takyonNO@SPAMsoylentnews.org> on Saturday December 30 2017, @07:08PM (#615927) Journal

Just increase the cores. A core for every thread or two. 8 cores? No! 18 cores? Higher! 72 cores? Keep going!

      --
      [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
      • (Score: 0) by Anonymous Coward on Sunday December 31 2017, @08:35PM (1 child)

        by Anonymous Coward on Sunday December 31 2017, @08:35PM (#616200)

More cores may increase throughput but can still increase latency/delays.

        It's like assigning 72 people to answer 72 different questions that are given on a single sheet of paper. And then somehow getting the answers onto that same sheet of paper.

        Compared to just one person reading those questions and writing the answers down one by one.

        In the one person case you may get the first answer faster than the 72 person case. e.g. lower latency.

        Might even be before a naive scheduler has got around to copying all the questions to 72 different new sheets of paper for the 72 different people.
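The analogy above can be put in rough numbers. A toy back-of-envelope model (all figures made up for illustration: W seconds of work per question, D seconds of dispatch overhead per worker):

```shell
W=1        # seconds of work per question (assumed)
D=1        # seconds to copy a question out to one worker (assumed)
N=72       # questions; also workers in the parallel case

# One person: the first answer appears after W seconds, the last after N*W.
serial_first=$W
serial_total=$((N * W))

# Naive 72-way fan-out: no answer appears until every question has been
# dispatched, so first-answer latency is N*D + W, even though all the
# answers then land at roughly the same time.
parallel_first=$((N * D + W))

echo "serial:   first answer ${serial_first}s, all done ${serial_total}s"
echo "parallel: first answer ${parallel_first}s"
```

With these made-up numbers the fan-out finishes everything sooner but hands back the *first* answer 73x later than the single worker: throughput up, latency up. A smarter scheduler shrinks D, but never to zero, which is the commenter's point.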

    • (Score: 5, Insightful) by RS3 on Saturday December 30 2017, @07:34PM (31 children)

      by RS3 (6367) on Saturday December 30 2017, @07:34PM (#615933)

      You sound like me for the past 20 years. Curmudgeonly I've become.

      Whenever I install a new MS OS I spend at least an hour trimming the fat, shutting off unnecessary services, updaters, drivers, etc. I have some nice utilities which will do some of that too.

      I still run XP on many systems. The ones I've "upgraded" to 7 are noticeably slower. "Wah, XP is so insecure, update to a modern OS" they whine. I don't know the numbers, but as far as I can tell, javascript is the mechanism for most malware's entry, so the newer the OS, the newer the browser, the more web APIs it supports, and the more potential security holes.

      Is this when I'm supposed to tell you kids to get off my lawn?

      • (Score: 5, Informative) by Arik on Saturday December 30 2017, @07:53PM (28 children)

        by Arik (4543) on Saturday December 30 2017, @07:53PM (#615941) Journal
        "Whenever I install a new MS OS I spend at least an hour trimming the fat, shutting off unnecessary services, updaters, drivers, etc. I have some nice utilities which will do some of that too."

Yeah, you're really going to hate Windows 10.

        You need to check out Gentoo. You can trim a lot more with compiler flags.

        USE="-systemd -gnome -gtk" alone trims gigabytes of breakage.
        --
        If laughter is the best medicine, who are the best doctors?
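For readers unfamiliar with Gentoo, those USE flags live in the system-wide build configuration. A sketch of the relevant fragment on a hypothetical Gentoo box (the flag names are from the comment above; how much they actually trim depends on your profile and package set):

```shell
# /etc/portage/make.conf (fragment) -- hypothetical machine.
# Global USE flags: packages are compiled WITHOUT systemd, GNOME,
# and GTK support, so those dependency trees are never pulled in.
USE="-systemd -gnome -gtk"

# After changing USE flags, the system is rebuilt with something like:
#   emerge --ask --update --deep --newuse @world
```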
        • (Score: 4, Insightful) by RS3 on Saturday December 30 2017, @08:15PM (4 children)

          by RS3 (6367) on Saturday December 30 2017, @08:15PM (#615953)

          I haven't done Gentoo yet, but thank you for the info. :)

I'm a 23-year Slackware guy, plus CentOS (only professionally) and Alpine (pretty cool- great xen support/integration) - pretty much any lightweight non-systemd distro, I've used it. A few years ago I delved deeply into some of the really stripped-down Linux distros for an embedded / fast-boot project. There are so many...

          Oh, I've done Win10 too. Not on any of my own machines (yet). What I hate more than anything: as far as I can tell, it's the same OS architecture, a few added things, but they rearrange the UI. With XP I could double-click the network icon in the lower right and get good info, get right into settings, etc. Now it's many more clicks to do the same thing. Ugh. I really hate the new filesystem search. If I click "open file location", it goes there, but I lose my search! I can't open new windows like you could with the XP search.

          I feel like MS is just punishing us- rearranging everything so we have to get "training" to find where everything is moved to. Reminds me of how they would punish Helen Keller- rearrange her bedroom.

          • (Score: 2, Insightful) by RandomFactor on Saturday December 30 2017, @10:13PM (2 children)

            by RandomFactor (3682) Subscriber Badge on Saturday December 30 2017, @10:13PM (#615984) Journal

            "rearranging everything so we have to get "training" to find where everything is moved to"

            I am of the opinion that Linux squandered a big opportunity. It was actually easier and more familiar for people to upgrade from XP/7 to something like Ubuntu/Mint than to Windows 8/10

            That said - If you want Windows 8+ to be less excruciatingly user hostile you can start by installing Classic Shell (or one of the various similar apps)

            --
            В «Правде» нет известий, в «Известиях» нет правды
            • (Score: 3, Interesting) by RS3 on Saturday December 30 2017, @11:01PM

              by RS3 (6367) on Saturday December 30 2017, @11:01PM (#615997)

              I agree. On most of my Linux machines (where I use the GUI) I use fvwm, so I think most XP-style UI users would love it.

Add-on shells / window managers for MS Windows have been around since the early 90s as I recall. I think Norton had one? I have to help / support people who use Win10, so it's in my interest to get used to the UI. Well, I doubt I'll ever get used to it- whenever I do, it will be replaced! I don't mind the concept of typing and having search suggestions come up in the "start menu" or whatever it's being called. "Master control popup"? Holder for unwanted ads? Receptacle of despair?

            • (Score: 2) by number11 on Sunday December 31 2017, @12:33AM

              by number11 (1170) Subscriber Badge on Sunday December 31 2017, @12:33AM (#616024)

              If you want Windows 8+ to be less excruciatingly user hostile you can start by installing Classic Shell (or one of the various similar apps)

Unfortunately, the developer of Classic Shell has given up. It's been open-sourced, and hopefully someone else will take the task on.

          • (Score: 0) by Anonymous Coward on Saturday December 30 2017, @11:44PM

            by Anonymous Coward on Saturday December 30 2017, @11:44PM (#616014)

            The Mighty Buzzard must be off fishing.
            About this time, he usually mentions Calculate Linux, a Gentoo fork with pre-compiled apps by default.

            .
            Whenever I install a new MS OS

            At that point, I ask myself "What's that guy doing that can't be done without Redmond's malware magnet?"

            -- OriginalOwner_ [soylentnews.org]

        • (Score: 4, Informative) by Azuma Hazuki on Saturday December 30 2017, @08:16PM (16 children)

          by Azuma Hazuki (5086) on Saturday December 30 2017, @08:16PM (#615954) Journal

Void is great too. It's Arch Linux for adults, with the runit init system. I have a T440s with the original SSD in it, and it takes about 6 seconds to go from BIOS to my display manager. Runit is even faster than OpenRC, let alone systemd.

          --
          I am "that girl" your mother warned you about...
          • (Score: 2) by RS3 on Saturday December 30 2017, @11:48PM (13 children)

            by RS3 (6367) on Saturday December 30 2017, @11:48PM (#616015)

Thank you for that tip. Rolling release, systemdless, awesome. The only drawback, and I know I'm getting backed into a corner, is that Void needs at least a P4. I still support some live servers running P3 CPUs. Don't laugh- they're fast at everything but php, and low power, and it's not my choice, but I am proud of how well they run. Maybe I'll get the Void sources and compile for P3...

            • (Score: 0) by Anonymous Coward on Sunday December 31 2017, @01:16AM (1 child)

              by Anonymous Coward on Sunday December 31 2017, @01:16AM (#616035)

P3? That's close to 20 years old! What kind of services are they running?
I have a Katmai in storage ... kept it as gold. From 1999, and it still works fine.

              Cheers.

              • (Score: 2) by RS3 on Sunday December 31 2017, @04:00AM

                by RS3 (6367) on Sunday December 31 2017, @04:00AM (#616061)

Dual CPU! 2003 vintage. One machine runs WordPress on LAMP for the Internet, Samba for the LAN, etc.; another runs Windows Server 2003, right now just CIFS file sharing, but it used to run print and IIS (sometimes I turn ftp on).

            • (Score: 2) by Azuma Hazuki on Sunday December 31 2017, @03:01AM (6 children)

              by Azuma Hazuki (5086) on Sunday December 31 2017, @03:01AM (#616050) Journal

              Oh, wow, you still have running PIIIs? Coolness. I built my first machine right when the P4 and Thunderbird CPUs came out (and thankfully I went Socket A rather than 462, eeeesh...). For a while the PIII was outperforming the P4 clock for clock, so if yours still work I can see them being suitable for light duty server stuff :)

              --
              I am "that girl" your mother warned you about...
              • (Score: 2) by RS3 on Sunday December 31 2017, @04:15AM (5 children)

                by RS3 (6367) on Sunday December 31 2017, @04:15AM (#616062)

                Not personally- they're running a tiny hosting company I admin. Dual CPU Dell PE2450. I just published another post saying one runs LAMP, another Win 2003. There are several other machines, not running, that used to run mail host, more Apache stuff, test/development for asp.net, etc. Parts machines!

                I used to run P3s personally and someday I'll build one up again (for NAS). You weren't kidding when you wrote "coolness" - they just don't get warm. Many cases have them with passive heatsinks only. Yes, for simple I/O, static webserving, etc., they're great. php bogs down (WordPress).

                I have a Cyrix CPU I always wanted to use. Few motherboards would support the Cyrix. By the time I got one, I was already into P4s.

                Somehow I fell into socket 462 and had several motherboards, CPUs, etc., and they seemed great for me. I still have one that's a 4th tier backup. It still works, so...

                • (Score: 2) by julian on Sunday December 31 2017, @04:32AM (2 children)

                  by julian (6003) Subscriber Badge on Sunday December 31 2017, @04:32AM (#616065)

                  Many cases have them with passive heatsinks only.

                  Are you counting case fans? You can get an i3 to run without a CPU fan if you have a heatsink and case fan. And it'll run way faster than your ancient Pentium.

                  • (Score: 2) by Azuma Hazuki on Sunday December 31 2017, @04:50AM

                    by Azuma Hazuki (5086) on Sunday December 31 2017, @04:50AM (#616073) Journal

                    You can do that with a T-series maybe, with their 35W TDP limit, but I wouldn't try that with the non-T ones. I specialize in SFF builds but avoid passive cooling like the plague.

                    --
                    I am "that girl" your mother warned you about...
                  • (Score: 2) by RS3 on Monday January 01 2018, @03:29AM

                    by RS3 (6367) on Monday January 01 2018, @03:29AM (#616328)

                    You missed an important point: it's not my choice. If you want to talk the guy into buying new hardware, I will thank you. Or give him the $. If I push his budget he'll move everything to a godaddy and I'll lose the gig. Right now it's a fun challenge for me that pays a little.

                    Some of the P3s I'm remembering have case fans that barely turn, and I've never gotten them to spin up. It's been a few years since I've run any of them. The aforementioned servers seem to have 14 very noisy fans. I might have exaggerated but it's a lot. It's crazy they run full-speed- the servers have plenty of temp sensing, fan speed sensing, and should be able to control fan speed. I've never found any way to do it but I haven't tried very hard either.

                • (Score: 2, Interesting) by toddestan on Sunday December 31 2017, @04:50AM (1 child)

                  by toddestan (4982) on Sunday December 31 2017, @04:50AM (#616074)

I still run a P3 in my router. As you say, they are good CPUs. The early Coppermine CPUs were under 10W at full load. That was far better than many CPUs that came after them during the MHz wars for many years. Of course, there are a lot more options now, but they are still plenty fast enough for many tasks (they'll easily run circles around the Raspberry Pi). The supporting hardware is pre-RoHS and mostly before the capacitor plague, so it'll last a long time, other than the IDE hard drives of course.

I had a Cyrix CPU a long time ago. They had really lousy FPUs - the one I had was a PR200+, which according to Cyrix meant it was as fast as a Pentium 200, but in reality you could forget about multitasking while playing an MP3 file (the AMD K6 was a revelation - you could play your music in the background with no noticeable impact on performance). Integer logic though was pretty fast, so it would probably do okay as a webserver - at least as well as something that old could do. The problem with trying to use one now is you're stuck on Socket 7, which means either an ancient Intel chipset or a sketchy VIA one, spotty USB support, and you probably won't have things like boot from CD, so hope you have some floppies handy. The nice thing about P3-era hardware is that in many ways it acts like newer hardware but slower, whereas if you go much older it's a whole different world.

                  • (Score: 4, Interesting) by Hyperturtle on Sunday December 31 2017, @01:36PM

                    by Hyperturtle (2824) on Sunday December 31 2017, @01:36PM (#616135)

I still have an operational, and functional, backup server. It is a quad-processor 200MHz Pentium Pro Compaq Proliant 5000, circa 1996. The only thing that has gone wrong is that mice got inside and smoked. I had to make some functional sacrifices -- but I don't need more than 512MB of RAM in it anyway... (Although the floppy drive and bootable CD-ROM have been vital).

                    I have two fiber ethernet cards in it, each card also has two 100mb connections. I teamed the fiber and the 100mbs to an old Catalyst 5000 switch from Cisco (with the fiber module in it) and uh if I go on much longer I'll give away the secrets of my dark lair.

It was really fast for the era, faster than data could be placed on the network until some tweaking was done on the network side. (Most modern SATA drives can at least burst that fast, and SSDs can do it continuously if the thermals are good, but many servers of the era were only capable of performing well on 100mb networks - not 2gb teams of fiber connections. I had more bandwidth than most businesses.)

My limits really only came about due to age and OS support limitations. The integrated video was also 640x480 with 16 colors... adding a non-Compaq video card just hosed a lot, and I ended up using RDP most of the time, since that supported 256 colors...

The Pentium Pro server could put data on the network at about 250MB/s if pushing to a few destinations (otherwise the etherchannel teaming has 1:1 traffic and thus limited it to 125MB/s or so). That is fast regardless; but it *was* on the same subnet, because gigabit routing is still hard to do inexpensively and well... it didn't exist back then. (1.2gbps was the backplane on the Catalyst, so to route with the layer 3 module, traffic had to leave the fiber module, go up to the routing module, then out another copper ethernet module to hit the network segment and reach the host... so 600mbps bidirectionally when off the same subnet, but for one-way transfers on the same subnet to a different module, it was easy to hit 125MB/s without even tweaking the frame sizes... but I digress). Between the two servers I could get that 250MB/s due to the teaming, if I used different IP addresses and chose the right method of load balancing (it wasn't MAC-address based, but I forget what I chose... been a while. I had a RAID 0 of 7x4GB disks and could provide continuous data off the SCSI controller, but if there was heavy non-sequential file access then of course it slowed down like you'd expect.)

For the longest time, I used it as a proxy server and ad blocker. I had a program called AtGuard that became Sygate that became Norton Stupid Subscription As a Service or something (and it detected the server OS and refused to install at that point...). Eventually, used desktops with faster CPUs and more compatibility with modern stuff replaced the core functions... but it is still fun to poke at and see how well it performed for what it was. (And as a 5U server, it had plenty of options available for it.)

It is/was faster than many modern workstations in a few ways; network traffic is of course really fast, but modern PCIe-based network cards can now match it. (I doubt the system will ever see 10gb working in it, due to the legacy PCI-X and EISA architecture with ISA and PCI slot support... and the OS and driver support that would be lacking...) There is no USB of any kind, and I don't really want to add new things to it.

Installing Windows 2003 R2 on hardware 10 years newer (2006) has its own set of issues, and I have to use 32-bit versions of everything I've mentioned. I worked out that if everything I have in it had been bought new, it would have been a server that cost over $100k. I got it, and the others I cannibalized for parts to build it, for free when a company I did support for went out of business and was selling them for scrap by weight without concern for the value. I offered to take some of it off their hands and they let me... (I even still got paid! One of the few dot-bomb success stories I have...)

                    One thing that may be of interest -- TFTP traffic used 100% of one CPU -- so, 200mhz pentium pro processor was at 100% utilization copying data via TFTP. The regular backup software was better at managing CPU use and spread it over the CPUs, but a "big" 600MB service pack could also spike utilization when copied. TFTP either was efficiently stealing all CPU resources to prevent a disruption to the UDP traffic, or it stunk and was badly optimized with coil whine. Not sure which...

Also, I had a number of your Cyrix CPUs in workstations I had built (eventually used for gaming). The Cyrix 5x86: the 133MHz ran like a Pentium 120, the 120MHz more like a Pentium 100, and the 100MHz more like a Pentium 75/90ish... mine wasn't paired with a motherboard that worked so great, so some things worked better than others (disable the integrated sound and it was fine). Descent ran fantastic, and I was able to run it at 800x600 via command line options. It was really awesome at the time.

            • (Score: 0) by Anonymous Coward on Sunday December 31 2017, @10:15PM (3 children)

              by Anonymous Coward on Sunday December 31 2017, @10:15PM (#616233)

              Alpine might be up your alley. It was designed for routers and the like, but it runs well as a desktop distro. Arch-descended packaging system, very light, very fast, runs well on any hardware and has some interesting tools.

              • (Score: 2) by RS3 on Monday January 01 2018, @03:16AM (2 children)

                by RS3 (6367) on Monday January 01 2018, @03:16AM (#616324)

                Yes, thanks, I mentioned Alpine in one of my posts here. I've deployed Alpine mostly for NAS and it's awesome. Package management isn't what I would like it to be. Not enough detail / info. For example after running apk update ; apk upgrade, it says there was an error, but doesn't tell me what went wrong. Can't find any useful logs. Haven't put in much effort or time on it either, it's just annoying.

                In my 23+ years hands-on with Linux, I've consistently said that package management is the biggest / most important factor for Linux's widespread adoption. With Linux there are so many options, if you have good package management, you can build anything you want. I was never a huge fan of RedHat .rpm, but when yum and some of the yum gui stuff came out, then with the ability to add repositories, and some great ones at that, I was sold. But then came systemd with RH7, so I'm going to migrate away soon, when RH6 updates cease. Sigh. It may be Gentoo! I'll compile on a fast machine and rsync the servers.

                • (Score: 1, Informative) by Anonymous Coward on Monday January 01 2018, @09:27PM (1 child)

                  by Anonymous Coward on Monday January 01 2018, @09:27PM (#616509)

                  I used to be hardcore into gentoo, but I prefer crux nowadays. It's source based, but the packaging scripts are much simpler than gentoo ebuilds, very similar to arch's source packages. It's very easy to set up and does a lot of stuff right.

                  • (Score: 2) by RS3 on Tuesday January 02 2018, @01:32AM

                    by RS3 (6367) on Tuesday January 02 2018, @01:32AM (#616565)

Crux looks interesting, but it is 64-bit only (well, and ARM).

          • (Score: 2) by coolgopher on Sunday December 31 2017, @12:30AM (1 child)

            by coolgopher (1157) on Sunday December 31 2017, @12:30AM (#616023)

            Wasn't aware Arch uses runit. It's one of my personal favourite init systems and it gets frequently used at $work in our embedded Linux projects.

            I'll keep Arch in mind should Devuan go off rails, thanks for the info!

            • (Score: 0) by Anonymous Coward on Sunday December 31 2017, @12:52AM

              by Anonymous Coward on Sunday December 31 2017, @12:52AM (#616030)

              No, Arch uses systemd. Void.

        • (Score: 3, Interesting) by JoeMerchant on Saturday December 30 2017, @09:16PM

          by JoeMerchant (3937) on Saturday December 30 2017, @09:16PM (#615974)

I "lived" Gentoo from about 2004-2007; it was.... mostly unnecessary recompiles of stuff that works just fine in Debian or whatever they call Cent/RedHat packages.

          Ubuntu, Cent, or your flavor of choice is so much more cooperative about configuration than Windows - Windows seems to take pride in making it harder each generation to find the things you're after, especially the options to control updates.

Now, I've been burned by both Ubuntu 14.04 running out and looking for updates, slowing my CPU, and also a configuration script I wrote for CentOS 7 that quit working when they decided to add a license notification to first boot... but both of those were fixable, once discovered, with an hour or two of research and re-scripting. Windows would have broken my stuff 20 times in the same period, and been harder to get right after breaking.

          --
          🌻🌻 [google.com]
        • (Score: 0) by Anonymous Coward on Saturday December 30 2017, @10:58PM (3 children)

          by Anonymous Coward on Saturday December 30 2017, @10:58PM (#615996)

          Why would he hate windows 10 because of that? Everything they have been doing for the past 15 years works exactly the same.

Win 7 on this laptop: 1:20 to a usable desktop. Win 10: about 20 seconds.

          Look I like linux. Use it all the time. But do not spread FUD. It is silly to do and usually makes you look foolish when you get called out on it.

          You need to check out Gentoo. You can trim a lot more with compiler flags.
That is along the lines of: hey, that nice 4-door saloon car you use to get to work is junk, you need a Ferrari F430. Do not worry about all the tweaky things to get it just right. Oh, and it will go sideways sometimes, but that's ok, you can just google how to fix it. Oh wait, the same as he does now....

          • (Score: 2) by RS3 on Saturday December 30 2017, @11:51PM

            by RS3 (6367) on Saturday December 30 2017, @11:51PM (#616016)

            You need a Ferrari F430.

            An AC finally types something correct, and it didn't take 1,000,000 years.

          • (Score: 2, Funny) by Anonymous Coward on Sunday December 31 2017, @11:25AM (1 child)

            by Anonymous Coward on Sunday December 31 2017, @11:25AM (#616113)

            I wish people would stop misrepresenting Gentoo, it's not like that at all. If we need a car analogy...

            You go shopping for a car.

            The first dealership sells the Windows SUV. There's only one model. It weighs 9000 pounds, gets 8 MPG, goes 0-60 in 24 seconds, and it won't move on Tuesdays unless you let it warm up for three hours. Yet, it doesn't carry people any better than any other car. For some reason, 90% of people buy this one, because that's what their neighbors have.

            The second dealership sells the Mac SUV. It only weighs 6000 pounds, and it gets 18 MPG. It looks and runs nicer than the Windows SUV, but it costs $100,000. But the people that have them won't even ride in anything else.

            The third dealership sells the iOS compact cars. These look great, get great fuel economy and they're even self-driving. Unfortunately, they come with a list of a couple dozen common destinations, and you aren't allowed to go anywhere else.

            The fourth dealership sells Android compact cars. These seem a little like the iOS cars, cost less, and you can even pick where to go. The bad news is they break down after a year and nobody will fix them, and they don't have seat belts.

The fifth dealership sells Linux SUVs. They have compact cars, sedans, vans, trucks, whatever. You have to pass a driving test before you can buy one, but they work great. Trouble is, none of them are ever quite right. Some of them don't have power windows, or stereos, or windshield wipers. The ones that do are starting to seem just like the Windows ones. But at least if you want a small car, you can have a small car, and if you want a big truck, you can have a big truck.

            Then you see, around the corner, the little Gentoo dealership. You go in, they have a dozen models and all the cars are nice. The sports car is fast and handles great, the truck tows 25,000 pounds, the sedan has a comfortable ride and is easy to drive, the van has seating for 10 and the seats fold down just right. They've even got a 9000 pound SUV if somebody wants one for some reason. And they have a special deal - if you ever get tired of the car you bought, you can bring it back and trade it in for a different one, no questions asked. You go up to the salesman and are all ready to buy it. Then he says "One last thing though - when it's time to change the oil, it takes three hours instead of one."

            And you run out of the dealership yelling "NO WAY!"

            • (Score: 0) by Anonymous Coward on Sunday December 31 2017, @11:29AM

              by Anonymous Coward on Sunday December 31 2017, @11:29AM (#616114)

              Being a little more on topic - computers are faster and compile times are shorter. And the package management system is MUCH better than it was ten years ago. If you ever use any other Linux distribution, and you want to do something that isn't exactly what they imagined you'd do, and you have to follow any guide that contains the words "now download the dependencies..." or similar, then the time and hassle of that one task is more than all the time you'll ever spend reconfiguring Gentoo. Perl is still a little annoying, but everything else, really, just works - and it makes a lot of stuff that you probably assume has to be hard just work too.

        • (Score: 2) by linkdude64 on Sunday December 31 2017, @11:45PM

          by linkdude64 (5482) on Sunday December 31 2017, @11:45PM (#616265)

          "You're really going to hate Windows 10"

          Just as an FYI there is such a thing as Win10 LTSB which removes the Facebook, Cortana, MS Apps, etc.

      • (Score: 2, Funny) by Anonymous Coward on Saturday December 30 2017, @08:18PM

        by Anonymous Coward on Saturday December 30 2017, @08:18PM (#615956)

        Is this when I'm supposed to tell you kids to get off my lawn?

        You could, but most of the people here would join you on the rocking chairs. The kids are over on the, fittingly enough, Green Site :)

      • (Score: 2) by All Your Lawn Are Belong To Us on Wednesday January 03 2018, @05:58PM

        by All Your Lawn Are Belong To Us (6553) on Wednesday January 03 2018, @05:58PM (#617256) Journal

        Yes. Yes it is; you have my permission to do so.

        --
        This sig for rent.
    • (Score: 2) by frojack on Sunday December 31 2017, @06:16AM (2 children)

      by frojack (1554) on Sunday December 31 2017, @06:16AM (#616089) Journal

      Exactly.

      Every improvement in processing power in the last 40 years was gobbled up by look and feel.

Oddly, they could get away with it because the processing power available when you simply stopped dicking around with the screen for 10 seconds was sufficient to meet all of our actual computational needs for a month or two.

      --
      No, you are mistaken. I've always had this sig.
      • (Score: 5, Informative) by TheRaven on Sunday December 31 2017, @08:18AM (1 child)

        by TheRaven (270) on Sunday December 31 2017, @08:18AM (#616104) Journal
        You should RTFA, because it actually explains a lot of the delay.

First, keyboard scan rates are actually lower. The Apple IIe keyboard gave you a delay of about 8.6ms, whereas a modern keyboard gives you closer to 18ms (purely from scan speed, ignoring the USB overhead). At the other end, the Apple IIe drew directly into the frame buffer, synchronised with the monitor refresh. This gave you a minimum delay of the refresh of one field of an interlaced monitor, so 1/50th of a second (20ms). If you have a 60Hz TFT and you draw directly to the frame buffer then you can get similar speeds, but if you're double buffering (and you want to, because otherwise you get tearing) then you're going to always be drawing one frame behind and so you're now at 50ms just from keyboard and monitor delays.

        Basically, even if the software is infinitely fast, you're going to see 50-60ms on a modern computer.

        That said, there's no excuse for the ones that were over 120ms.
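The arithmetic above can be sketched as a quick worst-case sum (the numbers are this comment's estimates, not measurements of any specific machine, and the function name is made up for the sketch):

```python
# Worst-case input-to-display latency floor, ignoring all software cost.
# scan_ms: keyboard matrix scan interval; buffered_frames: extra frames
# queued by double buffering. Values below are the comment's estimates.
def worst_case_ms(scan_ms, refresh_hz, buffered_frames):
    frame_ms = 1000.0 / refresh_hz
    # one full scan interval + one frame to reach the screen
    # + any frames sitting in the buffer queue
    return scan_ms + (1 + buffered_frames) * frame_ms

apple_iie = worst_case_ms(8.6, 50, 0)   # direct framebuffer draw
modern = worst_case_ms(18.0, 60, 1)     # double-buffered 60 Hz TFT

print(f"Apple IIe floor: ~{apple_iie:.0f} ms")   # ~29 ms
print(f"Modern floor:    ~{modern:.0f} ms")      # ~51 ms
```

Even with infinitely fast software, the modern worst case lands right around the 50ms figure above.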

        --
        sudo mod me up
        • (Score: 2) by Nuke on Sunday December 31 2017, @02:42PM

          by Nuke (3162) on Sunday December 31 2017, @02:42PM (#616138)

Yes, TFA is about keyboard delay. However, that is only one of the extra delays these days. Look-and-feel bling is another, official spyware another, Javascript crap on the internet another.

    • (Score: 2) by Hyper on Sunday December 31 2017, @09:47AM

      by Hyper (1525) on Sunday December 31 2017, @09:47AM (#616108) Journal

      Is this a detrimental comment against the layers of DRM in modern systems?

  • (Score: 1, Interesting) by Anonymous Coward on Saturday December 30 2017, @07:36PM (1 child)

    by Anonymous Coward on Saturday December 30 2017, @07:36PM (#615934)

    Triple boot Windows 10, Mac whatever is newest, and a Linux flavor on new and old hardware. Then you'll see where the slowness is.

    • (Score: 0) by Anonymous Coward on Saturday December 30 2017, @10:43PM

      by Anonymous Coward on Saturday December 30 2017, @10:43PM (#615994)

      Exactly how does one do that without a custom-built or premium-priced Hackintosh?

  • (Score: 0) by Anonymous Coward on Saturday December 30 2017, @07:38PM (7 children)

    by Anonymous Coward on Saturday December 30 2017, @07:38PM (#615935)

The Apple II was a single-task machine - no event-driven UI, no thread/process contention, no background tasks/processes, etc.

    • (Score: 4, Touché) by Anonymous Coward on Saturday December 30 2017, @07:59PM

      by Anonymous Coward on Saturday December 30 2017, @07:59PM (#615947)

      Your browser mining bitcoins in the background is totally an improvement. Yeah.

    • (Score: 4, Interesting) by Anonymous Coward on Saturday December 30 2017, @08:05PM (2 children)

      by Anonymous Coward on Saturday December 30 2017, @08:05PM (#615949)

Faster still might have been contemporary coin-op video game hardware. I was close to one of the development teams in the late 1980s for a fairly complex system -- it used 5 different processors (separate audio, video, UI, game physics, etc) that all talked to each other in a bank of shared memory. The latency from when you moved the controller to the display updating was no more than the time to draw the next screen (~1/30th second). I seem to remember that there were just a few corner cases where the 3D scene was particularly complex, then there might be one more frame delay.

      • (Score: 0) by Anonymous Coward on Saturday December 30 2017, @11:03PM

        by Anonymous Coward on Saturday December 30 2017, @11:03PM (#616000)

        https://byuu.org/articles/latency/ [byuu.org]

        This is about emulation but it is similar to what you are talking about. Latency.

Latency is what we get with each additional layer we add on. Each of those layers, though, gives us wildly more flexibility. But as I used to tell my boss whenever he had a brainstorm for a new abstraction layer in our software, 'itll cost ya'. He always responded with 'worth it'

      • (Score: 0) by Anonymous Coward on Sunday December 31 2017, @08:05PM

        by Anonymous Coward on Sunday December 31 2017, @08:05PM (#616192)

        it used 5 different processors (separate audio, video, UI, game physics, etc) that all talked to each other in a bank of shared memory.

        And that's why it might still have higher latency than an Apple II. It's like having 5 different people trying to talk to each other using the same blackboard compared to a single person doing stuff. You might have more total work done with multiple people but it often takes longer to get the first result/"answer" unless you are careful about the protocols/system.

    • (Score: 2) by JoeMerchant on Saturday December 30 2017, @09:28PM (2 children)

      by JoeMerchant (3937) on Saturday December 30 2017, @09:28PM (#615978)

      Absolutely, no selection of keyboards, USB layers, audio drivers, etc. to worry about in the Apple ][.

      --
      🌻🌻 [google.com]
      • (Score: 2, Informative) by Anonymous Coward on Saturday December 30 2017, @11:27PM (1 child)

        by Anonymous Coward on Saturday December 30 2017, @11:27PM (#616007)

        Sorry to break your dream, but there were plenty of disk, communication, display, processor cards etc. for Apple IIs.
        Check this out: https://en.wikipedia.org/wiki/Apple_II_peripheral_cards [wikipedia.org]

        • (Score: 0) by Anonymous Coward on Sunday January 07 2018, @03:18AM

          by Anonymous Coward on Sunday January 07 2018, @03:18AM (#618990)

          WOOSH!

  • (Score: 3, Interesting) by gringer on Saturday December 30 2017, @08:18PM

    by gringer (962) on Saturday December 30 2017, @08:18PM (#615957)

    I suspect there's a level of interactivity that people are comfortable with, and they don't want to spend any more money on programmer time to get a better system.

    If we wanted good interactivity, then we'd all be using a system designed to prioritise interactivity, like BeOS [wikipedia.org], or Haiku [wikipedia.org].

    --
    Ask me about Sequencing DNA in front of Linus Torvalds [youtube.com]
  • (Score: 4, Informative) by fustakrakich on Saturday December 30 2017, @08:20PM

    by fustakrakich (6150) on Saturday December 30 2017, @08:20PM (#615960) Journal

    The signal has to go all the way to Utah and back. Plus it appears that the keyboard is given low priority.

    --
    La politica e i criminali sono la stessa cosa..
  • (Score: 1, Interesting) by Anonymous Coward on Saturday December 30 2017, @08:39PM (7 children)

    by Anonymous Coward on Saturday December 30 2017, @08:39PM (#615968)

    The most beautiful desktop environment and most responsive computer I have ever seen were the same. I saw Ian Finder (he deserves credit for this) running Mac OS9 (emulated) on a mac book air (~2014 I think). Hand crafted pixel perfect graphics on a very nice display with incredible responsiveness in a tiny form factor. Even running emulated, it was still faster than on the hardware from the OS9 era. He was doing his homework in CodeWarrior.

    Looking at the article, it seems like a lot of the lag is in the keyboard (he measured from when the key started down until the character was displayed in a terminal). I wonder if the results would be even more extreme if he only measured the OS part of the stack (ignoring display and keyboard latency).

    • (Score: 4, Interesting) by Anonymous Coward on Saturday December 30 2017, @10:39PM (5 children)

      by Anonymous Coward on Saturday December 30 2017, @10:39PM (#615993)

I didn't see what keyboard the tester was using, but I'm guessing it's some flavor of USB or Bluetooth.

Try plugging in a good old PS/2 keyboard if your mobo still supports it and watch the latency go away.

      Real gamers use PS/2 keyboards and mice.

      • (Score: 2, Interesting) by Anonymous Coward on Sunday December 31 2017, @03:00AM (2 children)

        by Anonymous Coward on Sunday December 31 2017, @03:00AM (#616049)

        Some people say USB keyboards can't do more than 6 keys at a time... wrong. That is USB in BIOS compat mode. USB HID supports N-key rollover fine. Look for keyboards that show two interface descriptors, one of them "huge" to fit a full key mask (for example "bInterfaceSubClass 1 Boot Interface Subclass", "wMaxPacketSize 0x0008 1x 8 bytes" and "bInterfaceSubClass 0 No Subclass", "wMaxPacketSize 0x0040 1x 64 bytes" as reported by lsusb).

        Some people say USB keyboards are 1.5Mbps limited... wrong again. You can get 12Mbps ones, the above example comes from one of those.

        Some people say USB keyboards can't update fast enough, because polling is slow... wrong for a third time. They are right USB keyboards work via polling, but the speed can be high, for example 1ms interval ("bInterval 1"). That's 1000Hz refresh, 1000 times a second the keyboard gets asked. The controller in the computer should be able to queue all updates, and not disturb the main CPU and the OS for nothing. So far I never had missed keys.

        Old crappy USB mouse uses 1.5Mbps and 10ms. Probably getting a new mouse matching the keyboard will give the "high" speed values. This example mouse is from first waves of opticals, capable of speaking PS2 with dumb adapter.

PS2 support in main boards is still there, but sometimes with bugs. So it can be tricky, and just hunting for good USB devices may be the solution.

PS: some info is also available with "mount -t debugfs none /sys/kernel/debug/" then "less /sys/kernel/debug/usb/devices" (look for fields like Spd=12, MxPS=64 and Ivl=1ms).

        • (Score: 0) by Anonymous Coward on Sunday December 31 2017, @11:32AM (1 child)

          by Anonymous Coward on Sunday December 31 2017, @11:32AM (#616115)

          Why would low-speed vs. full-speed possibly matter for a keyboard? Can someone type a million characters per second? Latency isn't different by more than a few microseconds.

          Only difference is probably just the version of the protocol implemented by the interface chip, and has no impact on performance.

          • (Score: 0) by Anonymous Coward on Monday January 01 2018, @02:45AM

            by Anonymous Coward on Monday January 01 2018, @02:45AM (#616315)

Yes. If the keyboard is polled every 10ms, a key hit 0.5ms after the last report has to wait 0.5ms at the 1ms rate, but 9.5ms at the 10ms rate (milliseconds, 1/1000 of a second, not microseconds), before it is noticed. A 60Hz screen refreshes every 16ms, for comparison.

So in one case the keyboard can report 16 times per frame, while in the other it reports twice at best, and in some cases only once (think about the out-of-sync pattern 0, 10, 20, 30, 40 for the keyboard but 0, 16, 32, 48 for the screen). That is jitter, and it makes things worse: the delay varies, the worst case is poor, and because the two rates are so similar the report sometimes arrives just too late for a frame and is never early. The rendering has less time to react, or has to give up until the next frame, in a repetitive yet weird pattern.
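That jitter argument can be sketched numerically. A minimal simulation (a 60 Hz display is assumed; the function and variable names are made up for the sketch) of when a keypress becomes visible under 1ms vs 10ms polling:

```python
import math

# When does a keypress become visible? It is first seen at the next
# poll tick, then drawn at the next 60 Hz frame boundary after that.
def display_delay_ms(press_ms, poll_interval_ms, frame_ms=1000 / 60):
    seen = math.ceil(press_ms / poll_interval_ms) * poll_interval_ms
    shown = math.ceil(seen / frame_ms) * frame_ms
    return shown - press_ms

worst = {}
for poll in (1, 10):
    # sweep keypress times from 0.1 ms to 99.9 ms in 0.1 ms steps
    delays = [display_delay_ms(t / 10, poll) for t in range(1, 1000)]
    worst[poll] = max(delays)
    print(f"{poll:>2} ms polling: worst case {worst[poll]:.1f} ms to display")
```

With 10ms polling the worst case grows by several milliseconds, and it drifts in and out of phase with the frame boundaries rather than staying constant.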

      • (Score: 0) by Anonymous Coward on Sunday December 31 2017, @11:36AM

        by Anonymous Coward on Sunday December 31 2017, @11:36AM (#616117)

        Best solution to this would be to implement a virtual keyboard in suitable hardware (an Arduino for example). Then you can send a real start signal on a GPIO pin concurrently with the keypress being sent to the host. Of course if you use the same keyboard for all computers, the relative effect will be the same, but it won't hide differences between PS/2 and USB interface. On the other hand, real-world latency does include a keyboard.

      • (Score: 0) by Anonymous Coward on Sunday December 31 2017, @08:21PM

        by Anonymous Coward on Sunday December 31 2017, @08:21PM (#616199)

        Try plugging in a good ols PS/2 keyboard if your mobo still supports it and watch the latency go away.

        Real gamers use PS/2 keyboards and mice.

        Wrong. PS/2 devices can have high latency too. I've compared PS/2 keyboards with other keyboards/mice- use a ruler or similar to press keys/buttons at about the same time and measure the difference in the time. Repeat the tests and even though you don't get the absolute latency you can compare the relative latency between two devices.

        See also:
        http://www.blackboxtoolkit.com/responsedevices.html [blackboxtoolkit.com]
        http://www2.pstnet.com/eprimedevice.cfm [pstnet.com]

        And: https://danluu.com/keyboard-latency/ [danluu.com]

    • (Score: 1, Interesting) by Anonymous Coward on Sunday December 31 2017, @12:52AM

      by Anonymous Coward on Sunday December 31 2017, @12:52AM (#616029)

Not beautiful, but really fast, is my ancient FinalWord II word-processing software running on a Win 7 laptop under the DOSBox emulator. Later versions were called the Borland Sprint word processor.

Scrolling top to bottom of a medium-sized doc (100 pages, 50 lines to a "VGA" screen page) is effectively instant, at the repeat rate of the PgDn key. Jumping from home to end is instant. Same for search, and for search & replace.

FWII was written to use a swap file, so if DOS crashed, all but the most recent keystrokes could be recovered. Now it's so fast that I don't lose much of anything. The "Swapping..." legend used to display for a second every now and then; now it's rare that I ever see it - it's not up long enough to be displayed.

      While I don't use this emulation very often, it's convenient when I need to look at the source files for a number of long docs that I wrote on CP/M and MS-DOS systems.

  • (Score: 4, Informative) by Runaway1956 on Saturday December 30 2017, @08:52PM

    by Runaway1956 (2926) Subscriber Badge on Saturday December 30 2017, @08:52PM (#615971) Journal

    he tested some systems using displays with multiple refresh rates to see how refresh rate alters the lag

This prompted me to look at refresh rates on my own machine. I really had no idea where to look, so used Google for each of my monitors. One has an adjustable refresh rate, the other does not. Adjusted the refresh on that one to "hi" and the difference is noticeable. The other is not adjustable, it stays at any speed I like, as long as it is 60hz. I'll be swapping out my extra large monitor, and going back to the large, which is a mate to the one that is adjustable. Wow.

    OK - let's say that you're on Windows, and you want to pick up some responsiveness. Visit Black Viper's site, follow his guide, and turn off all of the services that you do not need. If you're on a company machine, you may not be able to turn off anything. If you use your machine for work, but it is your own machine, you might turn off a couple services. Your own machine, and you simply don't need 20 services, turn them all off - the difference is dramatic.

    http://www.blackviper.com/service-configurations/black-vipers-windows-10-service-configurations/ [blackviper.com]

    That link will take you directly to the Windows 10 service configuration page. This next link is Black Viper's home page, from which you can browse to other versions of Windows.

    http://www.blackviper.com/ [blackviper.com]

    Whether you run Windows, or a real operating system, there is no good reason to spend your time waiting on a stupid timer whirling around. Black Viper makes Windows tolerable.

  • (Score: 4, Insightful) by Anonymous Coward on Saturday December 30 2017, @10:24PM (1 child)

    by Anonymous Coward on Saturday December 30 2017, @10:24PM (#615988)

Code bloat and not programming to the architecture. Got 8 cores? Well, congrats, you only get to use one. Parallel execution is hard and poorly supported, and pipelines are longer (started with the P4, 21 stages!). You can do a lot more, but only if you know how to program, which people don't anymore. It's not hard, it just requires people to not be greedy and dumb, so the problem is impossible.

    • (Score: 0) by Anonymous Coward on Sunday December 31 2017, @05:05AM

      by Anonymous Coward on Sunday December 31 2017, @05:05AM (#616079)

      You're going to have a conniption when I say this, but Rust is very promising to help developers write multi-core applications and operating systems more safely.

      It feels a lot like C with RAII.

  • (Score: 5, Interesting) by Rich on Saturday December 30 2017, @11:09PM

    by Rich (945) on Saturday December 30 2017, @11:09PM (#616002) Journal

    Back in the day, I think I did most of my work on a IIgs and already a Mac Plus back then, I got hold of a handful of phased-out Apple //c. Eventually, I started to tinker with one of them. I looked at how the RamWorks III (iirc) card was mapped, and designed a decoder logic from 4 TTL chips, handwired, for the upper address bit of 41256 RAMs (the //c came with 4164s, which had this last bit unconnected). The upper bank of 4164s went out, 41256s went in, and were connected to my logic on their bent-up A8 pins. My first ever digital design, and it actually worked. Looking back, I have no idea how I got the old chips out without ruining the traces with my crude tools. Filled with pride about my work, I thought the machine now deserved the fullest possible deck-out. 8 MHz ZIP chip and real-time clock followed. To put that to good use, with the RAM extension it was possible to run AppleWorks (Classic) in RAM-only mode, and have a macro processor on top. Finally, there was some fast loader for it, which I included as well, on a disk that primarily held my address book.

    Result was the most responsive computer I ever typed on. Not pretty with its pixelized, text-only display, but incredibly effective. And fast starting too, cold power on to working address book in 8 seconds, from floppy.

    So, yes, I am living witness that the story has a point. :)

  • (Score: 4, Insightful) by Anonymous Coward on Sunday December 31 2017, @02:58AM

    by Anonymous Coward on Sunday December 31 2017, @02:58AM (#616047)

    Moore's Law: Every 18 months, the speed of hardware doubles.

    Gates' Law: Every 18 months, the speed of software halves.

  • (Score: 1, Insightful) by Anonymous Coward on Sunday December 31 2017, @03:05AM

    by Anonymous Coward on Sunday December 31 2017, @03:05AM (#616052)

That is really all. Most idiots can throw together something for code dot org, especially with abstracted languages like Java or Rust, but it will get slower and slower the further programmers get from the hardware. Learn assembly FIRST, then C, then other things.

  • (Score: 0) by Anonymous Coward on Sunday December 31 2017, @11:46AM (1 child)

    by Anonymous Coward on Sunday December 31 2017, @11:46AM (#616120)

    Sort of like how George R.R. Martin writes on an old mechanical typewriter, if I ever wrote a book, I'd use AppleWorks on my old //e. Mine only had the usual 128K though - not really enough to run the later versions well. Maybe I can find a 1MB expansion card somewhere. I don't think an emulator would do. You need the original screen and keyboard. A Raspberry Pi with an emulator could drive the screen, but the keyboard would be a lost cause. The //e had a really great keyboard, too - mechanical keys with a good amount of force and a good angle held just up off the desk, too, so reduced wrist strain. It's not exactly pretty - unless you're the sort that also thinks a 1967 Ford pickup is pretty. But it is pleasant to physically use.

    I think AppleWorks - and the Apple // in general - has exactly the right amount of interface. vi gives you just a blank screen with nothing. Word gives you a massive pile of bloat options, and distracts you with WYSIWYG view of the text (which is great for editing but not composing). Even LibreOffice is very bloaty and distracting.

    AppleWorks gives you enough interface so that you can always "feel" where the commands are - it's easy to get to menus and you don't really have to remember any commands - but when you write, you don't have to think about anything but your text.

    Modern programs could really take some lessons from the software of the 80s. What worked, what didn't.

    • (Score: 0) by Anonymous Coward on Sunday December 31 2017, @11:51AM

      by Anonymous Coward on Sunday December 31 2017, @11:51AM (#616121)

      but the keyboard would be a lost cause

      Or would it? It was a simple scan protocol. You could implement the keyboard scan on the Pi GPIOs. It might even work at 3.3 volts, or you could get an Arduino to handle the keyboard part.

      Still no clanking disk drives, though. You'd be stuck with fast, quiet flash storage.

  • (Score: 1, Interesting) by Anonymous Coward on Sunday December 31 2017, @12:56PM (2 children)

    by Anonymous Coward on Sunday December 31 2017, @12:56PM (#616128)

    As he discovered, but many gamers already know, without driver tweaks there are roughly four frames of latency between an application sending a frame to the GPU, and it actually being sent to the monitor. Monitor display lag [displaylag.com], which runs anywhere from 1/2 frame average up to three or four, is on top of that. Because these sources of latency are per rendered frame rather than defined by wall clock time, they naturally decrease with higher framerate. CRTs have no display lag (although it does take time to actually scan out the image).

    Gaming monitors are specially designed to minimize display lag, but laptops for the most part don't worry about it much.

    It's possible to tweak drivers to reduce the rendering latency, which like any other pipeline is actually chosen to strike a balance between latency and throughput.

    "Squishy" (non-mechanical/membrane) keyboards have a distance between the key "bump" and actually activating the keypress, which contributes to latency as well.

    Overall unless you're playing a competitive game where one frame of latency can cause you to lose, latency on a quality PC is probably not worth worrying too much about. The human brain hides latency up to around 200ms, which is (perhaps not coincidentally) also roughly the human reaction time. Now, if you want to talk about ATMs or grocery store checkouts that take two seconds to respond and still drop half your keypresses...
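A rough end-to-end budget can be summed from the stages named in this thread. The figures below are illustrative assumptions borrowed from other comments (18ms keyboard scan, 1ms USB polling, ~4 render frames, half a frame of display lag, 10ms pixel response), not measurements of any real machine:

```python
# Sum the pipeline stages: keyboard scan, USB polling, the ~4 frames
# of render queueing, monitor display lag, and panel pixel response.
# Per-frame stages shrink as the refresh rate rises.
def pipeline_ms(refresh_hz, render_frames=4, display_lag_frames=0.5,
                scan_ms=18, poll_ms=1, pixel_ms=10):
    frame_ms = 1000.0 / refresh_hz
    return (scan_ms / 2 + poll_ms / 2        # input side, average case
            + render_frames * frame_ms       # driver/GPU frame queue
            + display_lag_frames * frame_ms  # monitor buffering
            + pixel_ms)                      # pixel switching time

for hz in (60, 144):
    print(f"{hz:>3} Hz: ~{pipeline_ms(hz):.0f} ms end to end")
```

Even under these generous assumptions the 60 Hz total sits well under the ~200ms the brain hides, which is why only competitive gamers tend to notice.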

    • (Score: -1, Troll) by Anonymous Coward on Sunday December 31 2017, @01:23PM (1 child)

      by Anonymous Coward on Sunday December 31 2017, @01:23PM (#616131)

      And it's pretty clear that he still doesn't really know what he's talking about.

      At 144 Hz, each frame takes 7 ms. A change to the screen will have 0 ms to 7 ms of extra latency as it waits for the next frame boundary before getting rendered (on average,we expect half of the maximum latency, or 3.5 ms). On top of that, even though my display at home advertises a 1 ms switching time, it actually appears to take 10 ms to fully change color once the display has started changing color. When we add up the latency from waiting for the next frame to the latency of an actual color change, we get an expected latency of 7/2 + 10 = 13.5ms

      Cannot distinguish between switching time and latency. Display lag is caused by, in addition to the unavoidable delay waiting for the frame to be transmitted over the display cable, electronics in the monitor spending time doing things like rescaling the image (even if it's already in the right resolution), dithering colors to make 18-bit displays look like 24-bit displays, filling a buffer needed to translate HDMI/DVI/DisplayPort into the monitor's internal signaling format, fiddling with the brightness and contrast, and whatever other image processing the designers and marketing team thought would look good. Switching time is caused by the time it takes the actual pixels in the panel to change color once the electronics have decided to tell it to switch.

      They are not the same. All monitor manufacturers advertise the switching time. Almost nobody, outside of high-end gaming monitors, advertises the electronics-related latency. Even then, they normally just say it's good, and don't bother with actual numbers.

      • (Score: 0) by Anonymous Coward on Monday January 01 2018, @09:41AM

        by Anonymous Coward on Monday January 01 2018, @09:41AM (#616378)
        Actually it seems more like you are the one who doesn't know what he is talking about or are intentionally ignoring it.

        Firstly he's talking about overall latency. Quote: "Luu was interested in testing how the responsiveness of computers to human interaction had changed over the last three decades, "

        So it doesn't really matter what the different latencies are called, what matters is in many cases they add up to quite a high value.

        Secondly he may actually know the difference, he's stating that just because displays advertise 1ms switching times doesn't mean the latency is actually 1ms. Lots of people see the 1ms and assume the total is 1ms.
  • (Score: 0) by Anonymous Coward on Monday January 01 2018, @12:41AM

    by Anonymous Coward on Monday January 01 2018, @12:41AM (#616283)

It's the crappy software you are running.

Run something efficient and it's just fine. Run commodity crap like Windows + Office and, yeah, it's slower - between feature bloat and poor coding.

(1)