SoylentNews is people

posted by FatPhil on Thursday January 26 2017, @11:46AM   Printer-friendly
from the all-the-world's-a-Cray dept.

Arch Linux is moving ahead with preparing to deprecate i686 (x86 32-bit) support in their distribution.

Due to declining usage of Arch Linux i686, they will be phasing out official support for the architecture. Next month's ISO will be the last to offer a 32-bit Arch Linux install. Following that will be a nine-month deprecation period during which i686 packages will still receive updates.

Any Soylentils still making major use of 32-bit x86? And any of you using Arch Linux? Distrowatch still lists Arch Linux as a top 10 distribution.


Original Submission

 
This discussion has been archived. No new comments can be posted.
  • (Score: 5, Informative) by ledow (5567) on Thursday January 26 2017, @02:01PM (#458911) Homepage

    32-bit is dying on the desktop.

    Moan all you like, but it's on the way out.

    Games are dropping support for it. Operating systems are dropping support for it. Everything is 64-bit by default and works just the same. Hardware that doesn't have 64-bit drivers nowadays is dead in the water.

    32-bit will be consigned to embedded and specialist environments, and is already headed that way.

    If you buy a new computer, install a 64-bit OS. Run your 32-bit stuff virtualised if you have to, but a 32-bit OS or application set is not going to last you long enough to suffer the hassle of installing it.

    It's sad, but inevitable. Like 640Kb, FAT16, LBA48 and the rest, the days of 32-bit are numbered, and 64-bit does everything it needs to do, including running your legacy apps and VMs. And like those technologies, it's annoying to be hit with it on "working" hardware, but you will get no choice and will eventually have to upgrade anyway.

    I deployed a network 3 years ago. 64-bit head to toe. An IT consultant queried why and I explained this same thing back then. You're going to have to do it at some time, and it might as well be on a major refresh like I was doing. With appropriate testing, obviously, but there are precisely zero problems related to 64-bit that have hit us. He's since gone elsewhere, where they've had to re-do all their machine images and servers on 64-bit because their use-cases mean they need more than 4Gb nowadays, and they bought the hardware upgrades without realising they'd have to have a 64-bit OS to use them, no matter what the motherboards could support.

    I did the same at another workplace 2 years before that, and it raised a few more eyebrows because of driver support, but that wasn't my problem either - I specified machines with 64-bit drivers, got machines with 64-bit drivers, problem solved. The poor guy at the supplier had to figure out what had those and what didn't, and it's no longer an issue.

    If you're not on 64-bit, do it next time you have the choice.
    If you're "not going to touch 64-bit", prepare for a hard time next time you upgrade.
    If you're already on 64-bit, you have nothing more to do.

    Even ARM etc. - which is an entirely different use case - have 64-bit support in all their modern chips.

    And even from a base level, 4Gb isn't a lot any more. Even for casual stuff.

  • (Score: 0, Troll) by Anonymous Coward on Thursday January 26 2017, @02:37PM (#458930)

    Spoken like someone who has absolutely no idea what he's talking about.

    • (Score: 2) by chewbacon (1032) on Thursday January 26 2017, @02:41PM (#458935)

      No, there's some truth to that. Installed some hardware on my desktop yesterday and the driver disk only had a 64-bit folder for drivers.

  • (Score: 1, Informative) by Anonymous Coward on Thursday January 26 2017, @04:15PM (#458974)

    You can use more than 4GB of RAM on any common 32-bit OS without having to do anything yourself (they all use PAE out of the box).
    Thus your post contains absolutely no argument for 64-bit and comes across as completely clueless.

    • (Score: 1) by Scruffy Beard 2 (6030) on Thursday January 26 2017, @05:03PM (#459002)

      The Redmond OS does not support PAE, apparently.

      • (Score: 3, Informative) by LoRdTAW (3755) on Thursday January 26 2017, @06:13PM (#459028) Journal

        The higher end datacenter and enterprise versions of Win 2k and 2003 supported the full 36 bit PAE address space of 64GB. There were hacks and settings you could change in 2k workstation/server and XP Pro/2003 to get PAE working since they all used the same kernel.

        tl;dr Thank god we have 64 bit.

        For a Windows application to use more than 2GB of RAM (virtual addressing was limited to a 32-bit space of 4GB, with half reserved for the system - a 2-2 split), you had to use Address Windowing Extensions (AWE) to let your application get past that limit. There were settings to allow a 3-1GB split if your application needed the extra gig, though that was all dependent on your application, and less system RAM meant less cache for disk and network I/O.

        Linux supported PAE, and you used shm with mmap() to map files into memory. You could use over 4GB, but you implemented your memory as a file and had to fseek()/fread()/fwrite() to the locations where you stored your data. Not easy or friendly to use: you were effectively building a memory manager on top of this system, which, as you can imagine, was a bitch to debug.

        Both solutions were very difficult to implement, as you had to pay damn good attention to your memory management design. Neither afforded you the ability to call malloc()/realloc()/new/etc. in your process to get over 2-3GB of RAM; you were still stuck with a 32-bit process memory space. You had to implement your own goofy memory system or use a library if one existed (I think memcached is one).

        • (Score: 2) by Scruffy Beard 2 (6030) on Thursday January 26 2017, @09:54PM (#459160)

          My PentiumII-350 is running an SMP kernel with PAE support because it is recommended for that machine (debian).

          Never mind that it only physically supports 384MB of RAM.

          I guess we may not even be disagreeing since my use-case does not have any programs using more than 2GB of memory. (I do have that much swap available -- but the system would be unusable (more than it already is)).

    • (Score: 2) by ledow (5567) on Friday January 27 2017, @01:04AM (#459233) Homepage

      And still no single process can use more than 4GB. Which is kind of a killer if, as indicated, single programs are using more than that already, demanding x64 instruction sets to even load (I've seen drivers, games, and other applications all demand such), and not supporting other configurations.

      PAE is a paging hack, just like "expanded memory", "extended memory", "swap" or anything else that gets in the way of just asking for an amount of RAM from the memory allocator and getting it. Do you remember what happened to those terms?

      Anyone can write a program on an OS of any bitness that accesses as much memory as you like, so long as you don't care about speed or complexity and are willing to manage it all yourself - no direct access to hardware required in ANY way. Hell, just read and write a 2Tb file and you're done, or memory-map anything you can physically access (which can be a file or a remote network for all you know).

      But bog-standard malloc support for stuff like that isn't present. You require software designed with that in mind. And, as I say, software like that is on the way out because application developers don't care about jumping through hoops when they can just say "Use hardware and software that's been standard for over a decade".

      The argument for 64? You can jump through hoops like people did for a few years in the DOS days, the early Windows days, the early Linux days (I can remember LBA hacks and drive overlay support in Linux for hardware that didn't support it), and all had to be replaced eventually by just using a sane system with larger addressable memory directly. Or you can expect that to happen and next time you build something for yourself just select the 64-bit option, rather than having to do all that.

      Literally - lose nothing, gain ongoing support forever.

      Distros are dropping 32-bit left, right and centre.

      Microsoft granted it some relief with Windows 10 after two years of threatening to remove support; do you think the next version will have it? Server 2012 through 2016 are already 64-bit only.
      The Linux kernel lists have talked about dropping it for years, and the bodge of x32 isn't going to be supported forever.

      I'm not suggesting OH MY GOD BURN ALL THE 32'S! I'm saying its days are numbered. Like pre-Pentium support, like ISA bus drivers, like BIOS instead of UEFI. These things are dying off, dropping off supported hardware and software lists, and slowly disappearing. And the impact of 64-bit on systems you have NOW is likely nil past a reinstall or upgrade, or at worst a virtualisation of what you have.

      So rather than coast along until one day the machines just don't work or have to be kept on insecure and unmaintained software, drivers or OS, the next time something happens, install the 64-bit version and save yourself the hassle.

      Or join that one guy who still has an MCA bus in his machine and keeps trying to port the patches to Linux 3.0...

  • (Score: 0) by Anonymous Coward on Thursday January 26 2017, @05:23PM (#459008)

    "Everything is 64-bit by default and works just the same."

    There's a problem with that statement.

  • (Score: 0) by Anonymous Coward on Thursday January 26 2017, @08:37PM (#459118)

    I have a couple of old AthlonXP desktops that I only rarely power on. I recently powered one up and did all the software updates... only to realize the Firefox update killed it (no support for CPUs without SSE2), and none of the other big/mainstream browsers supported it either (the best I came up with was Dillo).

    • (Score: 1) by UncleSlacky (2859) on Thursday January 26 2017, @09:11PM (#459139)

      Midori or Qupzilla might run OK on it.

    • (Score: 2) by Scruffy Beard 2 (6030) on Thursday January 26 2017, @10:02PM (#459168)

      Firefox-esr 45.6.0 works on my PentiumII-350MHz machine (stupidly slowly). It does not even have SSE support.

      Maybe your distro is dropping i686 support as well.

      I *did* run into that problem with Chromium, though. It is particularly bad for old AMD systems because they lagged a generation behind on SSE support.