
SoylentNews is people

posted by Fnord666 on Sunday June 06 2021, @05:51PM   Printer-friendly
from the 640k-is-more-memory-than-anyone-will-ever-need dept.

Linux x86/x86_64 Will Now Always Reserve The First 1MB Of RAM - Phoronix:

The Linux x86/x86_64 kernel already had logic in place to reserve portions of the first 1MB of RAM, to keep the BIOS or kernel from potentially clobbering that space, among other reasons. Linux 5.13 is now doing away with that "wankery" and will simply always reserve the first 1MB of RAM, unconditionally.

[...] The motivation for Linux 5.13 getting that unconditional 1MB reservation in place stems from a bug report about an AMD Ryzen system that became unbootable on Linux 5.13 after the change to consolidate the kernel's early memory reservation handling. Unconditionally reserving the first 1MB makes things much simpler to handle.

The change was sent in this morning as part of x86/urgent. "Do away with all the wankery of reserving X amount of memory in the first megabyte to prevent BIOS corrupting it and simply and unconditionally reserve the whole first megabyte."

no more wankery


Original Submission

 
  • (Score: 1, Interesting) by Anonymous Coward on Sunday June 06 2021, @06:33PM (34 children)

    by Anonymous Coward on Sunday June 06 2021, @06:33PM (#1142432)

    In my youthful computer geek days I used to subscribe to some programming magazines, and I remember looking on in envy when all the 32-bit Unix workstations started to come out. In those days, you typically had a 32-bit CPU, 4MB of RAM, and a 120MB hard disk. Often the CPU was a Motorola 68020 or 68030 or, if you were very lucky, a SPARC or MIPS. There were even some ARM Unix workstations. 4MB seemed to be about the minimum for running Unix, and 1MB is 25% of that. The Linux kernel harks back to those days: in 1991, when Linux was first devised, a 386 or 486 with 4MB of RAM was quite a powerful PC, and the minimum you needed to run a proper OS.

  • (Score: 3, Informative) by sjames on Sunday June 06 2021, @06:43PM (3 children)

    by sjames (2882) on Sunday June 06 2021, @06:43PM (#1142436) Journal

    And the memory went on a full-sized daughter card where you plugged in a seemingly endless array of DIP chips out of a tube, then set DIP switches to specify the size of the chips. Then you'd check for memory errors, pull the card, and press each chip with your thumb to seat it better.

    • (Score: 3, Interesting) by Reziac on Monday June 07 2021, @03:10AM (2 children)

      by Reziac (2489) on Monday June 07 2021, @03:10AM (#1142613) Homepage

      LOL, yes, my 286 had one of those cards.... 2MB was a massive upgrade at the time, except DOS couldn't see it as system RAM (something was wrong with the card's driver). But it could see it as a RAMdisk. So I converted it into workspace, and had an astonishingly fast 286.

      --
      And there is no Alkibiades to come back and save us from ourselves.
      • (Score: 2) by sjames on Monday June 07 2021, @12:23PM (1 child)

        by sjames (2882) on Monday June 07 2021, @12:23PM (#1142712) Journal

        That was the other crazy part. Until the 386 and 32-bit OSes, 1MB was the limit for the old segmented memory model without dirty tricks and poor performance, so the extra memory was bank-switched into a 64K chunk of "high memory".

        • (Score: 2) by Reziac on Monday June 07 2021, @01:19PM

          by Reziac (2489) on Monday June 07 2021, @01:19PM (#1142723) Homepage

          This one was supposed to work as EMS, and did work on a different 286 with DOS 5 (I ran DOS 6). But not on mine. I already had RAM cram-packed with all my TSRs, and everything worked out to the last byte, so not having it as usable RAM wasn't so bad. Having the high-speed workspace was more valuable, so I lost interest in pursuing it.

          --
          And there is no Alkibiades to come back and save us from ourselves.
  • (Score: 4, Insightful) by choose another one on Sunday June 06 2021, @07:18PM (3 children)

    by choose another one (515) Subscriber Badge on Sunday June 06 2021, @07:18PM (#1142442)

    Yep, in fact I think back in the late 80s/early 90s some Unix boxes had only 2MB of RAM. The reason I think that is we had a disk quota larger than the RAM size, and I'm absolutely sure the disk quota was 4MB :-)

    When I decided to get my own hardware (for Linux) I pushed the boat out and specified 16MB of RAM. I can remember getting a "what do you need that much for" comment from a guy in the next lab -- I think he was a vi user; I wasn't, and I knew what I needed it for :-)

    Back to the topic though -- you are correct, and what is now being called "wankery" had good reason for existing back then. You very probably needed every available byte of that first 1MB, and having it reserved away at boot might well have made the difference between a system that was usable and one constantly in swap hell.

    • (Score: 2, Funny) by Anonymous Coward on Sunday June 06 2021, @08:31PM (1 child)

      by Anonymous Coward on Sunday June 06 2021, @08:31PM (#1142463)

      eight megabytes and constantly swapping?

    • (Score: 3, Interesting) by Acabatag on Monday June 07 2021, @04:10AM

      by Acabatag (2885) on Monday June 07 2021, @04:10AM (#1142633)

      I still have a running specimen of an ancient Altos box, called an 'Altos 586' because it was a five-user (serial consoles) 8086-based Unix box running Microsoft Xenix. It supports five users in 512K of RAM.

  • (Score: 4, Insightful) by krishnoid on Sunday June 06 2021, @07:44PM (3 children)

    by krishnoid (1156) on Sunday June 06 2021, @07:44PM (#1142452)

    There was also a period where RAM was just plain *expensive*. The "correct" option to get the best performance out of your computer was to add as much RAM as possible, but that was infeasible due to the expense. So there were articles on getting a motherboard with a faster bus, faster-spinning hard drives, faster/multicore CPUs etc. to improve performance.

    However, now that RAM prices are a tenth or less of what they used to be *and* Linux/Windows 7+ aggressively/opportunistically cache disk access in RAM, only a few motherboards can accommodate much more RAM, and the shipped standard is something like 8GB (more like 16GB these days), even though 24GB+ drastically improves (non-network) multi-application responsiveness. People still ask me questions about DDR3 vs. DDR4 RAM, or whether they should get faster RAM, when in my understanding RAM speed matters less than having enough of it to hold your whole application and much of your data memory-resident.

    I guess most of these machines are spec'd for web browsing and gaming, but power users who switch between a few apps for work -- e.g., Outlook, an IDE, MS Teams, a 20-tab browser running multiple heavyweight web apps, maybe a few MS Office-type documents -- could probably benefit from maxing out RAM nowadays for maybe $300-$500. The speed and durability of solid-state drives make this somewhat less of an issue, but I find it odd that RAM used to be a big deal, and now that operating systems and power-user demands could benefit from that headroom, what was a holy grail/pipe dream in the past seems to have dropped out of consideration.

    • (Score: 0) by Anonymous Coward on Sunday June 06 2021, @10:33PM

      by Anonymous Coward on Sunday June 06 2021, @10:33PM (#1142494)

      RAM speed matters; the performance difference between generations is significant. I'm also not sure many users could figure out how to use 8GB of RAM, let alone 16+, because the browser is the heaviest application in common use, and even Chrome is fine with 4GB.

    • (Score: 3, Interesting) by Runaway1956 on Monday June 07 2021, @12:51AM

      by Runaway1956 (2926) Subscriber Badge on Monday June 07 2021, @12:51AM (#1142560) Journal

      based on my understanding the RAM speed is less important than having a lot of it to hold your whole application and much of your data memory-resident.

      That is also my experience. If all of your programs fit in memory, and stay there, even ancient PC-100 memory would be "fast enough". It's the constant swapping that wears away that last nerve.

      As you note, SSDs have changed that to some extent. Still, a shortage of RAM is going to decrease the life of your SSD. Better to read from the SSD once, keep it in memory, and write back to the SSD when appropriate, than to read from the SSD continuously.

    • (Score: 3, Interesting) by Reziac on Monday June 07 2021, @03:18AM

      by Reziac (2489) on Monday June 07 2021, @03:18AM (#1142618) Homepage

      That's my experience too. 8GB is adequate, but same box with 32GB is night and day, and that's even when it's not being particularly exercised -- everything is just a whole lot snappier. And was the same with several different OSs. (Glad I got greedy and ran around filling up all my RAM capacity before the prices went stupid...)

      --
      And there is no Alkibiades to come back and save us from ourselves.
  • (Score: 1, Funny) by Anonymous Coward on Sunday June 06 2021, @07:45PM (7 children)

    by Anonymous Coward on Sunday June 06 2021, @07:45PM (#1142453)

    Pfft when I were a lad, we had 64k and we ENJOYED it. Luxury.

    • (Score: 0) by Anonymous Coward on Sunday June 06 2021, @08:26PM (3 children)

      by Anonymous Coward on Sunday June 06 2021, @08:26PM (#1142461)

      My first computer had 1k of RAM and storage was to cassette tape at 300 baud.

      • (Score: 1, Funny) by Anonymous Coward on Sunday June 06 2021, @08:35PM (2 children)

        by Anonymous Coward on Sunday June 06 2021, @08:35PM (#1142466)

        You kids had it easy.

        • (Score: 1, Funny) by Anonymous Coward on Monday June 07 2021, @03:01AM (1 child)

          by Anonymous Coward on Monday June 07 2021, @03:01AM (#1142609)

          Yeah, we had punch card operated looms, and our parents called us lazy asses (they used manual looms).

    • (Score: 3, Funny) by Thexalon on Monday June 07 2021, @03:17AM (2 children)

      by Thexalon (636) on Monday June 07 2021, @03:17AM (#1142617)

      Of course, we had it tough: We had to get up at 2 o'clock every morning, half-an-hour before we went to bed, eat a lump of gray goo from a blown-out capacitor, calculate 400 instructions perfectly on bits of scrap paper, solder 15 chips in place, and when we got home our computers would kill us and dance around on our graves singing Daisy, Daisy.

      --
      The only thing that stops a bad guy with a compiler is a good guy with a compiler.
      • (Score: 3, Funny) by Eratosthenes on Monday June 07 2021, @05:45AM (1 child)

        by Eratosthenes (13959) on Monday June 07 2021, @05:45AM (#1142646) Journal

        Was thinking of composing a "Yorkshiremen" dialogue, but now I just stand in awe of Thexalon. Daisy! In Space, no one can hear.

        • (Score: 0) by Anonymous Coward on Monday June 07 2021, @06:26AM

          by Anonymous Coward on Monday June 07 2021, @06:26AM (#1142660)

          Will I dream?

  • (Score: 3, Informative) by SomeGuy on Sunday June 06 2021, @07:46PM (8 children)

    by SomeGuy (5632) on Sunday June 06 2021, @07:46PM (#1142454)

    It was also the Unix/Linux folks who kept complaining that a GUI would waste "too much memory", even though the original Mac did it in 128K, and Windows 95 could do it in as little as 4MB.

    • (Score: 0) by Anonymous Coward on Sunday June 06 2021, @08:31PM (5 children)

      by Anonymous Coward on Sunday June 06 2021, @08:31PM (#1142464)

      I remember when the Linux people were complaining that X was "slow" and used too many resources. A friend of mine asked one of the original X developer about it, and he said that X was designed to run on 1 MIPS machines. People were worried that because it was a network protocol that somehow it would be very slow. It isn't, and particularly not when you're running it on the local machine.

      • (Score: 2) by tangomargarine on Monday June 07 2021, @06:24AM (4 children)

        by tangomargarine (667) on Monday June 07 2021, @06:24AM (#1142659)

        A friend of mine asked one of the original X developer about it, and he said that X was designed to run on 1 MIPS machines. People were worried that because it was a network protocol that somehow it would be very slow.

        It isn't, and particularly not when you're running it on the local machine.

        Notice the sudden shift in tenses during your post...

        --
        "Is that really true?" "I just spent the last hour telling you to think for yourself! Didn't you hear anything I said?"
        • (Score: 0) by Anonymous Coward on Monday June 07 2021, @02:27PM (3 children)

          by Anonymous Coward on Monday June 07 2021, @02:27PM (#1142743)

          No kidding. X's fundamental design is wrong, in that one should build a graphics system with fast local performance and build a network layer (IF this is even to be done!) on top of that. Experience has long shown that drawing primitives sent across the network are a failed idea. It's far too low-level a way to implement GUIs across a network.

          The Smalltalk-72 programming language made the same mistake and implemented message passing in a network-transparent manner. Method calls were terribly slow, and later versions of Smalltalk implemented message passing using virtual methods, the same as in Java or C++. Objective-C, an amalgamation of Smalltalk and C, implemented the original Smalltalk-72 message-passing technique of network-transparent method calls. As a result, it is slow. Apple has long been phasing out Objective-C in favor of other languages.

          • (Score: 0) by Anonymous Coward on Monday June 07 2021, @02:56PM (1 child)

            by Anonymous Coward on Monday June 07 2021, @02:56PM (#1142752)

            Experience has long shown that drawing primitives sent across the network are a failed idea.

            If the client and server are the same machine, that is if you are running the application locally, the data isn't sent across the network. I love the flexibility that X gives for running GUI apps on a LAN. It means I can have one monitor and one keyboard and several machines without a KVM.

            • (Score: 0) by Anonymous Coward on Monday June 07 2021, @03:52PM

              by Anonymous Coward on Monday June 07 2021, @03:52PM (#1142765)

              Modern X doesn't send drawing primitives across the network. The X extensions in use update the remote framebuffer instead. MIT guessed wrong when they came up with the design of X.

          • (Score: 0) by Anonymous Coward on Monday June 07 2021, @05:23PM

            by Anonymous Coward on Monday June 07 2021, @05:23PM (#1142813)

            Correction: I need to correct my programming language post: the languages I mentioned did not, I believe, implement NETWORK-TRANSPARENT message processing, but they did do dynamic message lookup through an intermediate method-lookup broker. That is one step below network transparency, but similar in some ways.

    • (Score: 0) by Anonymous Coward on Monday June 07 2021, @01:29AM (1 child)

      by Anonymous Coward on Monday June 07 2021, @01:29AM (#1142571)

      128K was not enough. A mere year later, Apple released the "Fat Mac" with 512K of RAM.

      • (Score: 3, Informative) by mechanicjay on Monday June 07 2021, @06:43PM

        Yep, though the OG 128K release model didn't even make it a full year.

        • Feb '84: 128K machines released.
        • Sept '84: 512K machines released.

        I have an actual Feb '84 128K Mac -- it is possibly the most useless computer I've ever used. It's fine as a demo, a sneak peek of what the platform might be capable of, but not really usable. Adding an external floppy drive helps quite a bit, since it lets you keep the internal disk as the "system disk" so you don't have to keep swapping it back in at random times. It's neat as an artifact, but I have far older and more capable machines that I use on a regular basis.

        --
        My VMS box beat up your Windows box.
  • (Score: 1, Informative) by Anonymous Coward on Sunday June 06 2021, @08:09PM (4 children)

    by Anonymous Coward on Sunday June 06 2021, @08:09PM (#1142456)

    4MB of RAM and a 120MB hard disk

    To think that my cheap wireless router has more of both.

    • (Score: 0) by Anonymous Coward on Monday June 07 2021, @12:39AM (3 children)

      by Anonymous Coward on Monday June 07 2021, @12:39AM (#1142555)

      One day the cheap wireless router will have 256GB and a million cores. Think about that.

      • (Score: 3, Insightful) by maxwell demon on Monday June 07 2021, @05:07AM (2 children)

        by maxwell demon (1608) on Monday June 07 2021, @05:07AM (#1142642) Journal

        I doubt it. With the size of transistors approaching the size of atoms, Moore's law is coming to an end.

        --
        The Tao of math: The numbers you can count are not the real numbers.
        • (Score: 2) by coolgopher on Monday June 07 2021, @08:48AM (1 child)

          by coolgopher (1157) on Monday June 07 2021, @08:48AM (#1142688)

          Meh, atoms are mostly empty space - surely we can cram in higher densities there!

          • (Score: 0) by Anonymous Coward on Monday June 07 2021, @02:57PM

            by Anonymous Coward on Monday June 07 2021, @02:57PM (#1142754)

            Neutron star memory?