
SoylentNews is people

posted by Fnord666 on Wednesday March 31 2021, @12:25PM
from the bigger-faster-cheaper? dept.

Samsung's 512GB DDR5 module is a showcase for the future of RAM:

Samsung has unveiled a new RAM module that shows the potential of DDR5 memory in terms of speed and capacity. The 512GB DDR5 module is the first to use High-K Metal Gate (HKMG) tech, delivering 7,200 Mbps speeds — over double that of DDR4, Samsung said. Right now, it's aimed at data-hungry supercomputing, AI and machine learning functions, but DDR5 will eventually find its way to regular PCs, boosting gaming and other applications.

[...] With 7,200 Mbps speeds, Samsung's latest module would deliver around 57.6 GB/s transfer speeds on a single channel. In Samsung's press release, Intel noted that the memory would be compatible with its next-gen "Sapphire Rapids" Xeon Scalable processors. That architecture will use an eight-channel DDR5 memory controller, so we could see multi-terabyte memory configurations with memory transfer speeds as high as 460 GB/s. Meanwhile, the first consumer PCs could arrive in 2022 when AMD unveils its Zen 4 platform, which is rumored to support DDR5.
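Those figures check out with some quick arithmetic (a back-of-the-envelope sketch, assuming a standard 64-bit DDR channel):

```python
# Back-of-the-envelope check of the bandwidth figures quoted above.
# Assumption: a standard 64-bit (8-byte) DDR channel.

def channel_bandwidth_gbps(transfers_mbps: int, bus_bytes: int = 8) -> float:
    """Peak bandwidth in GB/s for one memory channel."""
    # Mbps per pin * bytes per transfer = MB/s; divide by 1000 for GB/s
    return transfers_mbps * bus_bytes / 1000

single = channel_bandwidth_gbps(7200)   # 57.6 GB/s, as quoted
total = 8 * single                      # 460.8 GB/s across eight channels
print(single, total)
```

7,200 Mbps per pin over an 8-byte bus gives 57.6 GB/s per channel, and Sapphire Rapids' eight channels give 460.8 GB/s, matching the rounded 460 GB/s figure above.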

Previously:
SK Hynix Ready to Ship 16 Gb DDR5 Dies, Has Its Own 64 GB DDR5-4800 Modules
JEDEC Releases DDR5 Memory Specification
SK Hynix Announces Plans for DDR5-8400 Memory, and More


Original Submission

Related Stories

SK Hynix Announces Plans for DDR5-8400 Memory, and More 6 comments

SK Hynix: Up to DDR5-8400 at 1.1 Volts

Back in November last year, we reported that SK Hynix had developed and deployed its first DDR5 DRAM. Fast forward to the present, and we also know SK Hynix has recently been working on its DDR5-6400 DRAM, but today the company has showcased that it has plans to offer up to DDR5-8400, with on-die ECC, and an operating voltage of just 1.1 Volts.

With CPU core counts rising amid the fierce battle between Intel and AMD in the desktop, professional, and now mobile markets, the demand to increase throughput performance is high on the agenda. Memory bandwidth by comparison has not been increasing as much, and at some level the beast needs to be fed. Announcing more technical details on its official website, SK Hynix has been working diligently on perfecting its DDR5 chips with capacity for up to 64 Gb per chip.

Micron will begin selling High Bandwidth Memory (HBM) this year, entering the market alongside Samsung and SK Hynix and potentially lowering prices:

Bundled in their latest earnings call, Micron has revealed that later this year the company will finally introduce its first HBM DRAM for bandwidth-hungry applications. The move will enable the company to address the market for high-bandwidth devices such as flagship GPUs and network processors, which in the last five years have turned to HBM to meet their ever-growing bandwidth needs. And as the third and final of the "big three" memory manufacturers to enter the HBM market, this means that HBM2 memory will finally be available from all three companies, introducing a new wrinkle of competition into that market.

Also at Wccftech.

See also: Cadence DDR5 Update: Launching at 4800 MT/s, Over 12 DDR5 SoCs in Development


Original Submission

JEDEC Releases DDR5 Memory Specification 11 comments

DDR5 Memory Specification Released: Setting the Stage for DDR5-6400 And Beyond

We'll start with a brief look at capacity and density, as this is the most-straightforward change to the standard compared to DDR4. Designed to span several years (if not longer), DDR5 will allow for individual memory chips up to 64Gbit in density, which is 4x higher than DDR4's 16Gbit density maximum. Combined with die stacking, which allows for up to 8 dies to be stacked as a single chip, then a 40 element LRDIMM can reach an effective memory capacity of 2TB. Or for the more humble unbuffered DIMM, this would mean we'll eventually see DIMM capacities reach 128GB for your typical dual rank configuration.
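Running those capacity numbers (a rough sketch; the split of the 40 LRDIMM elements between data and ECC is an assumption on my part, not stated above):

```python
# Rough capacity arithmetic using the densities quoted above.
GBIT_TO_GB = 1 / 8

chip_gb = 64 * GBIT_TO_GB        # one 64 Gbit die = 8 GB
stacked_chip_gb = 8 * chip_gb    # 8-die stack = 64 GB per package

# Dual-rank unbuffered DIMM: 2 ranks x 8 data chips of 64 Gbit each.
udimm_gb = 2 * 8 * chip_gb       # 128 GB, as quoted

# A 40-element LRDIMM with 8-die stacks reaching 2 TB effective implies
# 32 of the 40 packages hold data (the remainder presumably carrying
# ECC -- an assumption, not stated in the article).
lrdimm_effective_tb = 32 * stacked_chip_gb / 1024   # 2.0 TB
print(udimm_gb, lrdimm_effective_tb)
```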

[...] For DDR5, JEDEC is looking to start things off much more aggressively than usual for a DDR memory specification. Typically a new standard picks up where the last one left off, such as with the DDR3 to DDR4 transition, where DDR3 officially stopped at 1.6Gbps and DDR4 started from there. However, for DDR5 JEDEC is aiming much higher, with the group expecting to launch at 4.8Gbps, some 50% faster than the official 3.2Gbps max speed of DDR4. And in the years afterwards, the current version of the specification allows for data rates up to 6.4Gbps, doubling the official peak of DDR4.

Of course, sly enthusiasts will note that DDR4 already goes above the official maximum of 3.2Gbps (sometimes well above), and it's likely that DDR5 will eventually go a similar route. The underlying goal, regardless of specific figures, is to double the amount of bandwidth available today from a single DIMM. So don't be too surprised if SK Hynix indeed hits their goal of DDR5-8400 later this decade.

[...] JEDEC is also using the introduction of the DDR5 memory standard to make a fairly important change to how voltage regulation works for DIMMs. In short, voltage regulation is being moved from the motherboard to the individual DIMM, leaving DIMMs responsible for their own voltage regulation needs. This means that DIMMs will now include an integrated voltage regulator, and this goes for everything from UDIMMs to LRDIMMs.

JEDEC is dubbing this "pay as you go" voltage regulation, and is aiming to improve/simplify a few different aspects of DDR5 with it. The most significant change is that by moving voltage regulation on to the DIMMs themselves, voltage regulation is no longer the responsibility of the motherboard. Motherboards in turn will no longer need to be built for the worst-case scenario – such as driving 16 massive LRDIMMs – simplifying motherboard design and reining in costs to a degree. Of course, the flip side of this argument is that it moves those costs over to the DIMM itself, but then system builders are at least only having to buy as much voltage regulation hardware as they have DIMMs, and hence the PAYGO philosophy.

"On-die ECC" is mentioned in the press release and slides. If you can figure out what that means, let us know.

See also: Micron Drives DDR5 Memory Adoption with Technology Enablement Program

Previously: DDR5 Standard to be Finalized by JEDEC in 2018
DDR5-4400 Test Chip Demonstrated
Cadence and Micron Plan Production of 16 Gb DDR5 Chips in 2019
SK Hynix Announces Plans for DDR5-8400 Memory, and More


Original Submission

SK Hynix Ready to Ship 16 Gb DDR5 Dies, Has Its Own 64 GB DDR5-4800 Modules 7 comments

DDR5 is Coming: First 64GB DDR5-4800 Modules from SK Hynix

DDR5 is the next stage of platform memory for use in the majority of major compute platforms. The specification (as released in July 2020) brings the main voltage down from 1.2 V to 1.1 V, increases the maximum silicon die density by a factor of 4, doubles the maximum data rate, doubles the burst length, and doubles the number of bank groups. Simply put, the JEDEC DDR5 specification allows for a 128 GB unbuffered module running at DDR5-6400. RDIMMs and LRDIMMs should be able to go much higher, power permitting.

[...] SK Hynix's announcement today is that they are ready to start shipping DDR5 ECC memory to module manufacturers – specifically 16 gigabit dies built on its 1Ynm process that support DDR5-4800 to DDR5-5600 at 1.1 volts. With the right packaging technology (such as 3D TSV), SK Hynix says that partners can build 256 GB LRDIMMs. Additional binning of the chips for better-than-JEDEC speeds will have to be done by the module manufacturers themselves. SK Hynix also appears to have its own modules, specifically 32GB and 64GB RDIMMs at DDR5-4800, and has previously promised to offer memory up to DDR5-8400.

[...] As part of the announcement, it was interesting to see Intel as one of the lead partners for these modules. Intel has committed to enabling DDR5 on its Sapphire Rapids Xeon processor platform, due for initial launch in late 2021/2022. AMD was not mentioned with the announcement, and neither were any Arm partners.

SK Hynix says DDR5 is expected to be 10% of the global market in 2021, increasing to 43% in 2024. The crossover point for consumer platforms is somewhat blurred at this point, as we're probably only half-way through (or less than half) of the DDR4 cycle. Traditionally we expect a cost crossover between old and new technology when they are equal in market share; however, the additional costs in voltage regulation that DDR5 requires are likely to drive up module costs, scaling from standard power delivery on JEDEC modules up to a beefier solution on the overclocked modules. It should, however, make motherboards cheaper in that regard.

See also: Insights into DDR5 Sub-timings and Latencies

Previously: DDR5 Standard to be Finalized by JEDEC in 2018
DDR5-4400 Test Chip Demonstrated
Cadence and Micron Plan Production of 16 Gb DDR5 Chips in 2019
SK Hynix Announces Plans for DDR5-8400 Memory, and More
JEDEC Releases DDR5 Memory Specification


Original Submission

This discussion has been archived. No new comments can be posted.
  • (Score: 1, Interesting) by Anonymous Coward on Wednesday March 31 2021, @01:37PM (9 children)

    by Anonymous Coward on Wednesday March 31 2021, @01:37PM (#1131590)

    I'm curious if anyone uses "RAM Drives" [wikipedia.org] today.

    I used to think they were good for browser cache, distributed computing projects, and other uses. Are they a real thing today or is this a relic of the past?

    • (Score: 4, Funny) by DannyB on Wednesday March 31 2021, @02:11PM

      by DannyB (5839) Subscriber Badge on Wednesday March 31 2021, @02:11PM (#1131606) Journal

      Protip: if you use a "ram disk" (as it was once called), then be sure you use virtual memory with a large swap area to ensure you have sufficient memory available.

      --
      The lower I set my standards the more accomplishments I have.
    • (Score: 0) by Anonymous Coward on Wednesday March 31 2021, @02:16PM

      by Anonymous Coward on Wednesday March 31 2021, @02:16PM (#1131607)

      Basically every Linux distro uses tmpfs these days.

    • (Score: 0) by Anonymous Coward on Wednesday March 31 2021, @03:23PM

      by Anonymous Coward on Wednesday March 31 2021, @03:23PM (#1131625)

      A modern I-Ram with DDR5 would be cool...
      https://en.wikipedia.org/wiki/I-RAM [wikipedia.org]

    • (Score: 3, Interesting) by nostyle on Wednesday March 31 2021, @05:57PM

      by nostyle (11497) on Wednesday March 31 2021, @05:57PM (#1131699) Journal

      Running nearly entirely out of RAM here.

      I only reboot my box every six months or so - uptime right now is 209 days. Hence taking a few minutes/hours to reboot/configure my environment is acceptable to me. I hate the idea that a bad spinning disk could hose/crash my system. I hate the fact that the lifetime of SD/SSD drives is write limited - with little or no warning of impending failures.

      Consequently, I boot off of DVD-ROM (...try to hack those files you crackers out there...) using the "-toram" switch to load the entire OS into RAM. Then, post-boot, I remap a few directories onto an SSD or SD card so that my downloads and projects/documents survive reboots, and large but infrequently written caches (such as apt) do not exhaust RAM. I sometimes configure a zram swap space to effectively increase the RAM available to the OS.

      The result is that there is zero latency in the response of my machine, and there are almost never any writes to persistent storage. There is no spinning media evar; I never worry about SSD wear; and the machine is dead quiet since I'm running fanless chromebook hardware.

      The only trouble with this system comes when running off an Ubuntu-based derivative, where I find some systemd-related task has a serious memory leak - currently having leaked an entire gigabyte since the last reboot. Having 8 GB RAM available, this is livable albeit annoying. When running off a Slackware image, there are no noticeable leaks, and multi-year uptimes are not unusual. Beyond this, there is only the danger of stray cosmic rays flipping bits of RAM - events which are reasonably rare.

      --
      I would love to have a box with 512 GB RAM - but I don't actually have a use-case to justify it yet.

    • (Score: 0) by Anonymous Coward on Wednesday March 31 2021, @08:57PM (1 child)

      by Anonymous Coward on Wednesday March 31 2021, @08:57PM (#1131812)
    • (Score: 2) by Freeman on Wednesday March 31 2021, @09:21PM

      by Freeman (732) on Wednesday March 31 2021, @09:21PM (#1131827) Journal

      In the event you're talking about virtual, certainly. In the event you're talking about the hardware RAM drives where you stuck some RAM on a PCI Express card and plugged it into a computer, that was more of a gimmick than anything. With PCI Express NVME drives, you're getting the best of both speed and reliability.

      --
      Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
    • (Score: 2, Informative) by amigasource on Thursday April 01 2021, @05:37PM

      by amigasource (1738) on Thursday April 01 2021, @05:37PM (#1132203) Homepage

      Amiga OS for the win! Use my RAMDISK every day!

      --
      Please visit www.AmigaSource.com !!! Since 2001... Your BEST source for Amiga information. Again...
    • (Score: 0) by Anonymous Coward on Thursday April 01 2021, @10:52PM

      by Anonymous Coward on Thursday April 01 2021, @10:52PM (#1132303)

      If you define 'RAM drive' to include Linux's tmpfs then yes. Pretty much every Linux system today mounts tmpfs on /tmp and /var/run by default.
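      A quick way to see this for yourself (a Linux-only sketch; it just parses /proc/mounts):

```python
# List mount points backed by tmpfs (RAM-backed filesystem) on Linux;
# /tmp and /run typically show up here on modern distros.
def list_tmpfs_mounts(mounts_file: str = "/proc/mounts") -> list[str]:
    """Return mount points whose filesystem type is tmpfs."""
    points = []
    with open(mounts_file) as f:
        for line in f:
            # /proc/mounts fields: device, mount point, fs type, options...
            device, mount_point, fs_type = line.split()[:3]
            if fs_type == "tmpfs":
                points.append(mount_point)
    return points

print(list_tmpfs_mounts())
```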

  • (Score: 2) by takyon on Wednesday March 31 2021, @01:53PM

    by takyon (881) <takyonNO@SPAMsoylentnews.org> on Wednesday March 31 2021, @01:53PM (#1131595) Journal

    I don't think any of the consumer memory modules out there use TSV stacked dies like this 512 GB module. It would be nice if that led to a cost-per-bit reduction (fat chance).

    T-Force Gaming Confirms Development of Next-Gen DDR5 Memory Modules With Overclocking Support, Can Push Voltages Over 2.6V [wccftech.com]

    T-Force also states that DDR5 memory has far greater room for voltage adjustment when it comes to overclocking support. This is primarily due to the upgraded power management ICs (PMIC) that allows for voltages over 2.6V. It is also detailed that existing DDR4 memory modules handled their voltage conversion through the motherboard but that changes with DDR5. The components that are required for the voltage conversion are now moved over to the memory DIMM itself, reducing voltage wear and noise generation while simultaneously offering increased room for overclocking.

    That seems very high and a big deal but I don't overclock memory so...

    China starts mass production of DDR5 memory [videocardz.com]
    Chinese Memory Maker Commences DDR5 Mass Production, Expected To Launch With Intel Alder Lake Desktop CPUs [wccftech.com]

    Micron memory, Chinese module.

    Alder Lake is still on track for around a September/October launch, but it will be launched with DDR4 and DDR5 motherboards. The majority of buyers might choose DDR4. Zen 4 will probably launch around mid-2022, DDR5 only.

    --
    [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
  • (Score: 0, Touché) by Anonymous Coward on Wednesday March 31 2021, @01:55PM (26 children)

    by Anonymous Coward on Wednesday March 31 2021, @01:55PM (#1131597)

    I don't want more ram.
    In fact I prefer a lot less for EVERYONE.
    Computers were a lot better with 1MB of ram than they are now.

    • (Score: 2) by takyon on Wednesday March 31 2021, @01:57PM (8 children)

      by takyon (881) <takyonNO@SPAMsoylentnews.org> on Wednesday March 31 2021, @01:57PM (#1131599) Journal

      Objectively wrong. Replace storage with RAM.

      --
      [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
      • (Score: 1, Insightful) by Anonymous Coward on Wednesday March 31 2021, @02:05PM (7 children)

        by Anonymous Coward on Wednesday March 31 2021, @02:05PM (#1131603)

        Storage too, but a small, finite amount of RAM like 1MB would prevent the sloppy "programmers" of recent times from adding yet another abstraction layer on top of the existing ones, and force them to actually care about code size and thus speed.
        Yeah, more RAM makes big LUTs possible and all that shit that may increase performance, but it's mostly irrelevant for the majority of programs anyway.

        • (Score: 4, Interesting) by takyon on Wednesday March 31 2021, @02:40PM (5 children)

          by takyon (881) <takyonNO@SPAMsoylentnews.org> on Wednesday March 31 2021, @02:40PM (#1131613) Journal

          To your original point, the cat is out of the bag. Average RAM per PC/tablet/smartphone isn't going to go down, maybe only reset as smaller devices come out (smartwatches currently have about 1 GB to 1.5 GB, which should go up by the time they are projecting Star Wars style holograms).

          Large amounts of RAM enable "bad" behavior, but don't prevent "good" behavior. A small amount of RAM doesn't give you a choice.

          Sloppy, abstract code (like my linked extension) usually runs everywhere (especially browser-based) and is easy to write. It's a good thing. Don't run sloppy code if you don't like it. We may be reaching the peak of abstraction anyway. If everyone suddenly had 8 TB of memory instead of 8 GB, would the bloat be able to grow to use up all of it?

          If most programmers are sloppy, a competent programmer has more value, even if that's restricted to niche areas like supercomputing.

          Finally, I think a good way to address the "problem" would be to participate in or support programming contests that restrict you to using limited hardware or resources (e.g. demoscene art). I know that limiting program size is popular, but I'm not sure about memory usage.

          https://www.youtube.com/watch?v=4UqwP2-FIWo [youtube.com]
          https://www.youtube.com/watch?v=JZ6ZzJeWgpY [youtube.com]

          --
          [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
          • (Score: 2) by DannyB on Wednesday March 31 2021, @04:20PM (4 children)

            by DannyB (5839) Subscriber Badge on Wednesday March 31 2021, @04:20PM (#1131663) Journal

            Sometimes more computer resources actually buy you features. Sometimes these features are what you once would have considered amazing. Yet those amazing features are nearly invisible. You don't think of them. Other than in terms of how modern software uses up so much more memory and cpu cycles.

            For example, the web browser box I'm typing this into right now does spell checking and highlights Miss Spelled words. (A cousin of Miss Direction and Miss Management)

            Word processors now routinely grammar check as you type. Now even email clients spell check and some even grammar check.

            Modern IDEs (used as alternatives to notepad to write code) offer all kinds of huge assistance at your fingertips. Smartly suggesting the next thing you might be wanting to type in your code. Code templates. Refactorings. All sorts of actually amazing things I would have amazingly found amazing just a few decades ago. Like when I used Classic Mac OS, or got my first Linux box. Heck my wristwatch (Linux, btw) is more powerful than my first Linux PC was in 1999.

            So is it bloat, or is it features? Or maybe some of both. What we call bloat might also be called "shorter development time" or "lower development effort". Which you hinted at:

            "and is easy to write"

            It is what it is.

            I for one welcome our bigger memory / faster cpu overlords.

            --
            The lower I set my standards the more accomplishments I have.
            • (Score: 2) by takyon on Wednesday March 31 2021, @04:44PM (3 children)

              by takyon (881) <takyonNO@SPAMsoylentnews.org> on Wednesday March 31 2021, @04:44PM (#1131678) Journal

              Loading all Earth mapping data [gearthblog.com] or a future 100 billion star catalog [spaceengine.org] could easily take up terabytes of memory*. A niche thing, but it would allow you to zip around instantly for your own amusement or animations.

              * "Universal memory" doesn't need a distinction between memory and storage. :P

              --
              [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
              • (Score: 2) by DannyB on Wednesday March 31 2021, @05:10PM (2 children)

                by DannyB (5839) Subscriber Badge on Wednesday March 31 2021, @05:10PM (#1131690) Journal

                That was an interesting and great example.

                There will come a point where the distinction between memory and storage will disappear. Rather, permanent non volatile storage will be as fast as what we now call "memory".

                That will upend decades of system architecture assumptions. Especially in the area of OS design.

                The ancient Palm Pilot had the illusion of working this way. Only because when turned off it went into a deep sleep. So what was in memory seemed to be saved on storage. But the design of how applications worked is what I find interesting. How does the design of entire systems change when you no longer have this artificial distinction between memory and storage?

                --
                The lower I set my standards the more accomplishments I have.
                • (Score: 2, Interesting) by aixylinux on Thursday April 01 2021, @12:48PM (1 child)

                  by aixylinux (7294) on Thursday April 01 2021, @12:48PM (#1132110)
                  Consider the single-level-storage architecture of the AS/400 (IBM System i).

                  https://en.wikipedia.org/wiki/IBM_System_i

                  If you are interested in computer system architectures, and all you know are Intel/Windows/Linux/Unix, you owe it to yourself to study the AS/400. The AS/400 has been rehosted many times on different chips, and it has never even been necessary to recompile the OS, much less the applications.

                  https://www.amazon.com/Inside-AS-400-Frank-Soltis/dp/1882419669
                  • (Score: 2) by DannyB on Thursday April 01 2021, @02:31PM

                    by DannyB (5839) Subscriber Badge on Thursday April 01 2021, @02:31PM (#1132137) Journal

                    The TIMI instruction set vaguely causes me to think of LLVM.

                    I don't mean to say this is comparable, but back in the day, on small computers (e.g., Apple II, DEC PDP, etc.), there was the UCSD p-System. To port it, you built a p-Code emulator, typically about 1 or 2K words. Then the entire OS and all applications could run without recompiling. Cross-platform compiled bytecode compatibility long before Java.

                    --
                    The lower I set my standards the more accomplishments I have.
        • (Score: 1) by melyan on Wednesday March 31 2021, @07:06PM

          by melyan (14385) on Wednesday March 31 2021, @07:06PM (#1131734) Journal

          Ego. It's not about RAM. It's about you feeling superior. Ego ruins everything.

    • (Score: 4, Informative) by DannyB on Wednesday March 31 2021, @02:27PM (11 children)

      by DannyB (5839) Subscriber Badge on Wednesday March 31 2021, @02:27PM (#1131610) Journal

      As we look at What's new in Java 16 [malloc.se], we find this:

      Summary:

      • Sub-millisecond max pause times
      • The 10 ms max pause time is now well under 1 ms, even on multi-terabyte heaps
      • ZGC now has O(1) pause times. In other words, they execute in constant time and do not increase with the heap, live-set, or root-set size (or anything else for that matter).
      • Results of the SPECjbb 2015 benchmark: on a machine with a 3 TB heap, 224 hyper-threads (Intel), and ~2100 Java threads, max pause time was 0.5 ms (500 µs), with average pause times of 50 µs
      • ZGC now has pause times in the microsecond domain, with average pause times of ~50µs and max pause times of ~500µs. Pause times are unaffected by the heap, live-set and root-set size.

      Last September, when Java 15 came out, they had raised the maximum heap size from a paltry 4 TB to a more reasonable 16 TB. Now in Java 16, I can find no stated upper limit on heap size. The closest guidance I could find was whatever limit the OS imposes on heap size when launching a process. On an IBM web site (IIRC), there was a limitation on some Linux system of 128 TB, because the kernel reserves 128 TB for kernel space, leaving only 128 TB for user space.

      With advances in both GC, and improved memory modules, such as described in TFA, we will be able to write bigger and more effective hello world programs. (See: Java Hello World: Enterprise Edition [github.com]) One criticism of this hello world program is that it fails to make use of XML for a configuration file to specify where and how the "hello world" message text is located, and in what human language it is translated to.

      --
      The lower I set my standards the more accomplishments I have.
      • (Score: 4, Funny) by takyon on Wednesday March 31 2021, @02:43PM (5 children)

        by takyon (881) <takyonNO@SPAMsoylentnews.org> on Wednesday March 31 2021, @02:43PM (#1131614) Journal

        If your machine doesn't have at least 1 petabyte of RAM, why even live?

        --
        [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
        • (Score: 2) by DannyB on Wednesday March 31 2021, @03:49PM (2 children)

          by DannyB (5839) Subscriber Badge on Wednesday March 31 2021, @03:49PM (#1131638) Journal

          If you suffer from insufficient memory, there still is hope. Use virtual memory. Then create a ramdisk. Then use the ramdisk as swap space for more virtual memory. Repeat until things are running smoothly.

          --
          The lower I set my standards the more accomplishments I have.
          • (Score: 0) by Anonymous Coward on Wednesday March 31 2021, @04:16PM (1 child)

            by Anonymous Coward on Wednesday March 31 2021, @04:16PM (#1131656)

            You need to try that joke a third time on this page because mod points are happiness and are so valuable

            • (Score: 2) by DannyB on Wednesday March 31 2021, @04:24PM

              by DannyB (5839) Subscriber Badge on Wednesday March 31 2021, @04:24PM (#1131667) Journal

              Okay: Do you remember a program called Soft Ram? Or how about Ram Doubler 9.0! [amazon.com]

              --
              The lower I set my standards the more accomplishments I have.
        • (Score: 0) by Anonymous Coward on Thursday April 01 2021, @03:04AM (1 child)

          by Anonymous Coward on Thursday April 01 2021, @03:04AM (#1131963)

          1 petabyte ought to be enough for anybody.

          • (Score: 0) by Anonymous Coward on Thursday April 01 2021, @06:14AM

            by Anonymous Coward on Thursday April 01 2021, @06:14AM (#1132028)

            No, no! That was Gill Bates saying 640K was enough for anybody, before he started injecting chips into everyone under the pretence of a vaccine just like they did with Osama bin Laden, and then started GPS tracking all of us and keeping records on when we did doody or peepee.

      • (Score: 2) by EEMac on Wednesday March 31 2021, @02:43PM (4 children)

        by EEMac (6423) on Wednesday March 31 2021, @02:43PM (#1131615)

        > ZGC now has O(1) pause times

        Sounds *fantastic*. What dark magic is this?! [malloc.se]

        But it has a price [malloc.se] . . .

        > The most important tuning option for ZGC is setting the max heap size (-Xmx). Since ZGC is a concurrent collector a max heap size must be selected such that, 1) the heap can accommodate the live-set of your application, and 2) there is enough headroom in the heap to allow allocations to be serviced while the GC is running.

        I remember setting memory size for applications back in classic MacOS. I guess that time has come again.

        • (Score: 3, Informative) by DannyB on Wednesday March 31 2021, @03:43PM (3 children)

          by DannyB (5839) Subscriber Badge on Wednesday March 31 2021, @03:43PM (#1131632) Journal

          Large Java applications often set the maximum heap size and have done so for years. The idea is to avoid repeatedly allocating memory from the OS and handing it back for every little growth and shrinkage of the overall heap. For a few versions now, Java will release memory back to the OS if, after some threshold, the workload is not using all it has available. On the flip side, if memory demand grows beyond the max heap size, and the OS can satisfy such an increase, then Java will expand its heap beyond your specified maximum -- but only for as long as it is needed. E.g., if a thread servicing a request briefly needs a large additional gob of memory ("gob" being the technical term), then it will get it. GC will reclaim it asap, and that above-max allocation will be returned to the OS. All this means it is less critical these days to get the memory allocation parameter exactly right. For small programs under 4 GB of heap, specifying a max heap is often skipped (such as for a desktop application, or a teeny tiny server).

          Since the modern GCs, particularly ZGC, and also Red Hat's Shenandoah GC and Azul's Zing GC, are concurrent, the overall garbage collection rate depends on the number of cores available. You want the collection rate to be similar to the allocation rate. People who maintain large applications typically know these parameters. There are great tools in Java to monitor these things, along with all sorts of knobs and dials to tweak. Java offers multiple different GCs to choose from, and each GC has different performance characteristics which may be appealing. For example, the Parallel GC has great throughput but doesn't care much about latency, which is great for some purposes. The G1 collector gives you a good overall balance, great for most workloads. Very few actually need ZGC, Shenandoah, or Zing.

          While Zing is only available commercially from Azul, the ZGC and Red Hat's Shenandoah GC are available in Open JDK from any of Adopt OpenJDK, IBM, SAP, Red Hat, Amazon, Azul, and a few others. One thing you'll notice about those companies is that they work on big servers, and big servers often run Java. Red Hat and IBM (the former acquired recently by the latter) both invest significantly in Java development. Why? Because it's where their customers are. Yes, Red Hat.

          I also remember memory allocation in Classic Mac OS. Ah, those were fun days.

          --
          The lower I set my standards the more accomplishments I have.
          • (Score: 2) by EEMac on Thursday April 01 2021, @12:10AM (1 child)

            by EEMac (6423) on Thursday April 01 2021, @12:10AM (#1131892)

            Today I learned. Thank you!

          • (Score: 0) by Anonymous Coward on Thursday April 01 2021, @11:12PM

            by Anonymous Coward on Thursday April 01 2021, @11:12PM (#1132309)

            It's more insidious than that. Tracing collectors have a size/speed trade-off in that the faster they become, the less memory efficient they are. IIRC, to get a tracing collector to run as fast as direct collection requires at least 4x the RAM. To get the sub-millisecond times DannyB is talking about would require substantial space inefficiency.

            There are three different heap collection strategies in use:
            Direct collection. Best space and time efficiency, but can only handle tree structures. Used in C and by Rust's single-owner pointers.
            Counting collectors. Can handle DAGs but leak cycles. Space efficient but slower than direct collection. Manual implementations exist in C; used by many managed languages such as Python.
            Tracing collectors. Can handle memory cycles. Space inefficient, and there is no guarantee if or when destructors will run. The most efficient ones don't support destructors at all. Unsuitable for memory- and latency-constrained systems.
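            The cycle-handling difference is easy to demonstrate on the JVM: a reference cycle that a pure counting collector would leak gets reclaimed by the tracing collector. A rough sketch (note that `System.gc()` is only a hint, hence the retry loop):

```java
import java.lang.ref.WeakReference;

public class CycleReclaim {
    static class Node { Node other; }

    static boolean cycleWasReclaimed() throws InterruptedException {
        Node a = new Node();
        Node b = new Node();
        a.other = b;   // a -> b -> a: a cycle that pure reference
        b.other = a;   // counting would leak (refcounts never hit zero)
        WeakReference<Node> probe = new WeakReference<>(a);
        a = null;      // drop all external references; only the
        b = null;      // internal cycle keeps the nodes "alive"
        // System.gc() is only a hint, so poll until the tracing
        // collector actually reclaims the unreachable cycle.
        for (int i = 0; i < 50 && probe.get() != null; i++) {
            System.gc();
            Thread.sleep(10);
        }
        return probe.get() == null;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("cycle reclaimed: " + cycleWasReclaimed());
    }
}
```

            The weak reference acts as a probe: it is cleared only once the collector has determined the cycle is unreachable.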

    • (Score: 2) by VLM on Wednesday March 31 2021, @02:46PM (3 children)

      by VLM (445) on Wednesday March 31 2021, @02:46PM (#1131616)

      Come over to the virtualization world.

      Somehow vmware nsx-t seems to need like 40 GB per node for all its virtualized networking nonsense by the time you add everything up.

      Meanwhile "because you can" you end up with lots of little VMs all configured by ansible (or old timey puppet or similar)

      I don't have any 1 MB VMs, and even my first Linux box in '93 had 4 megs, but I do have a lot of 1 GB VMs just doing appliance-like things.

      There is a situation where giving VMs more RAM lets them buffer more stuff in memory, reducing IO pressure. You can add memory, spread it around to various VMs, and watch your daily IOPS drop. So just because I "can" run FreeBSD on 512 megs or whatever doesn't mean I do.

      • (Score: 2) by DannyB on Wednesday March 31 2021, @04:02PM (2 children)

        by DannyB (5839) Subscriber Badge on Wednesday March 31 2021, @04:02PM (#1131647) Journal

        Somehow vmware nsx-t seems to need like 40 GB per node for all its virtualized networking nonsense by the time you add everything up.

        Sounds horrible. At first. But does it actually save you money overall, despite the cost in bytes and cycles?

        There is too much focus on bytes and cpu cycles instead of focus on dollars. Once, long ago, saving bytes and cpu cycles was how you saved dollars because machines were expensive and developers were cheap. Now it's the reverse: developers expensive (and whiny) and machines cheap, reliable and commodity off the shelf.

        I have never administered VMware myself. But we use it in both our data centers and virtual data centers (yes, that's an actual thing now, wtf). I only access the VMs that I maintain. I make sure the apps work; they make sure the infrastructure works. And work it does. I have looked up the model numbers of some of the CPU chips my VMs run on, and boy, those are way expensive. Thousands of dollars' worth of CPU chip just to go into one socket on the board. I'm sure I'll never lay eyes on this hardware, sitting somewhere far away.

        I occasionally get to see screen shots of vmware, or someone using it remotely.

        --
        The lower I set my standards the more accomplishments I have.
        • (Score: 2) by VLM on Monday April 05 2021, @03:27PM (1 child)

          by VLM (445) on Monday April 05 2021, @03:27PM (#1133493)

          I occasionally get to see screen shots of vmware, or someone using it remotely.

          Ask for limited admin rights; they gave me some. vSphere is pretty nice. I like being able to template VMs and clone them. The high-level monitoring is pretty nice too.

          You can also set it up at home for a modest expense under an educational license of nominal fee.

          • (Score: 2) by DannyB on Monday April 05 2021, @04:33PM

            by DannyB (5839) Subscriber Badge on Monday April 05 2021, @04:33PM (#1133517) Journal

            That is interesting.

            I have no need, especially for a fee, to learn how to do this. I tend to be automatically skeptical about learning something that costs money to learn, unless there is a very clear reason to learn it. And in that situation, my employer would pay for that learning. However, at work I have no need to manage the VMs and data centers. We have people who do that. I am part of the people who develop and maintain applications.

            --
            The lower I set my standards the more accomplishments I have.
    • (Score: 2) by Tork on Thursday April 01 2021, @02:16AM

      by Tork (3914) Subscriber Badge on Thursday April 01 2021, @02:16AM (#1131951)
      You are only one porn-quest away from regretting your words.
      --
      🏳️‍🌈 Proud Ally 🏳️‍🌈