posted by Fnord666 on Wednesday March 31 2021, @12:25PM
from the bigger-faster-cheaper? dept.

Samsung's 512GB DDR5 module is a showcase for the future of RAM:

Samsung has unveiled a new RAM module that shows the potential of DDR5 memory in terms of speed and capacity. The 512GB DDR5 module is the first to use High-K Metal Gate (HKMG) tech, delivering 7,200 Mbps speeds — over double that of DDR4, Samsung said. Right now, it's aimed at data-hungry supercomputing, AI and machine learning functions, but DDR5 will eventually find its way to regular PCs, boosting gaming and other applications.

[...] With 7,200 Mbps speeds, Samsung's latest module would deliver around 57.6 GB/s transfer speeds on a single channel. In Samsung's press release, Intel noted that the memory would be compatible with its next-gen "Sapphire Rapids" Xeon Scalable processors. That architecture will use an eight-channel DDR5 memory controller, so we could see multi-terabyte memory configurations with memory transfer speeds as high as 460 GB/s. Meanwhile, the first consumer PCs could arrive in 2022 when AMD unveils its Zen 4 platform, which is rumored to support DDR5.
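
As a quick sanity check on those figures (a back-of-the-envelope sketch; the usual 64-bit channel width and the eight-channel count are taken as given from the specs above):

    // BandwidthCheck.java -- back-of-the-envelope DDR5 bandwidth arithmetic
    public class BandwidthCheck {
        public static void main(String[] args) {
            double transfersPerSecond = 7_200e6;   // 7,200 MT/s, per the article
            double bytesPerTransfer   = 8;         // assumes the usual 64-bit (8-byte) channel width
            double perChannel   = transfersPerSecond * bytesPerTransfer / 1e9;
            double eightChannel = perChannel * 8;  // Sapphire Rapids' eight-channel controller
            System.out.printf("per channel: %.1f GB/s, eight channels: %.1f GB/s%n",
                    perChannel, eightChannel);
            // prints: per channel: 57.6 GB/s, eight channels: 460.8 GB/s
        }
    }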

Previously:
SK Hynix Ready to Ship 16 Gb DDR5 Dies, Has Its Own 64 GB DDR5-4800 Modules
JEDEC Releases DDR5 Memory Specification
SK Hynix Announces Plans for DDR5-8400 Memory, and More


Original Submission

 
This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 1, Interesting) by Anonymous Coward on Wednesday March 31 2021, @01:37PM (9 children)

    by Anonymous Coward on Wednesday March 31 2021, @01:37PM (#1131590)

    I'm curious if anyone uses "RAM drives" [wikipedia.org] today.

    I used to think they were good for browser cache, distributed computing projects, and other uses. Are they a real thing today or is this a relic of the past?

    • (Score: 4, Funny) by DannyB on Wednesday March 31 2021, @02:11PM

      by DannyB (5839) Subscriber Badge on Wednesday March 31 2021, @02:11PM (#1131606) Journal

      Protip: if you use a "ram disk" (as it was once called), then be sure you use virtual memory with a large swap area to ensure you have sufficient memory available.

      --
      When trying to solve a problem don't ask who suffers from the problem, ask who profits from the problem.
    • (Score: 0) by Anonymous Coward on Wednesday March 31 2021, @02:16PM

      by Anonymous Coward on Wednesday March 31 2021, @02:16PM (#1131607)

      Basically every Linux distro uses tmpfs these days.

    • (Score: 0) by Anonymous Coward on Wednesday March 31 2021, @03:23PM

      by Anonymous Coward on Wednesday March 31 2021, @03:23PM (#1131625)

      A modern I-Ram with DDR5 would be cool...
      https://en.wikipedia.org/wiki/I-RAM [wikipedia.org]

    • (Score: 3, Interesting) by nostyle on Wednesday March 31 2021, @05:57PM

      by nostyle (11497) on Wednesday March 31 2021, @05:57PM (#1131699) Journal

      Running nearly entirely out of RAM here.

      I only reboot my box every six months or so - uptime right now is 209 days. Hence taking a few minutes/hours to reboot/configure my environment is acceptable to me. I hate the idea that a bad spinning disk could hose/crash my system. I hate the fact that the lifetime of SD/SSD drives is write limited - with little or no warning of impending failures.

      Consequently, I boot off of DVD-ROM (...try to hack those files you crackers out there...) using the "-toram" switch to load the entire OS into RAM. Then, post-boot, I remap a few directories onto an SSD or SD card so that my downloads and projects/documents survive reboots, and large but infrequently written caches (such as apt) do not exhaust RAM. I sometimes configure a zram swap space to effectively increase the RAM available to the OS.

      The result is that there is zero latency in the response of my machine, and there are almost never any writes to persistent storage. There is no spinning media evar; I never worry about SSD wear; and the machine is dead quiet since I'm running fanless chromebook hardware.

      The only trouble with this system comes when running off an Ubuntu-based derivative, where I find some systemd-related task has a serious memory leak - currently having leaked an entire gigabyte since the last reboot. Having 8 GB RAM available, this is livable albeit annoying. When running off a Slackware image, there are no noticeable leaks, and multi-year uptimes are not unusual. Beyond this, there is only the danger of stray cosmic rays flipping bits of RAM - events which are reasonably rare.

      --
      I would love to have a box with 512 GB RAM - but I don't actually have a use-case to justify it yet.

    • (Score: 0) by Anonymous Coward on Wednesday March 31 2021, @08:57PM (1 child)

      by Anonymous Coward on Wednesday March 31 2021, @08:57PM (#1131812)
    • (Score: 2) by Freeman on Wednesday March 31 2021, @09:21PM

      by Freeman (732) on Wednesday March 31 2021, @09:21PM (#1131827) Journal

      If you're talking about virtual RAM drives, certainly. If you're talking about the hardware RAM drives where you stuck some RAM on a PCI Express card and plugged it into a computer, those were more of a gimmick than anything. With PCI Express NVMe drives, you get the best of both speed and reliability.

      --
      Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
    • (Score: 2, Informative) by amigasource on Thursday April 01 2021, @05:37PM

      by amigasource (1738) on Thursday April 01 2021, @05:37PM (#1132203) Homepage

      Amiga OS for the win! Use my RAMDISK every day!

      --
      Please visit www.AmigaSource.com !!! Since 2001... Your BEST source for Amiga information. Again...
    • (Score: 0) by Anonymous Coward on Thursday April 01 2021, @10:52PM

      by Anonymous Coward on Thursday April 01 2021, @10:52PM (#1132303)

      If you define 'RAM drive' to include Linux's tmpfs then yes. Pretty much every Linux system today mounts tmpfs on /tmp and /var/run by default.
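
      For the curious, a minimal sketch of the difference (it assumes /dev/shm is tmpfs-mounted and /var/tmp is disk-backed, which is typical on Linux but not guaranteed):

          import java.nio.file.*;

          // TmpfsDemo.java -- time a synced 64 MB write to a tmpfs path vs. a disk-backed path
          public class TmpfsDemo {
              public static void main(String[] args) throws Exception {
                  byte[] block = new byte[64 * 1024 * 1024];            // 64 MB of zeroes
                  Path ram  = Path.of("/dev/shm/tmpfs-demo.bin");       // tmpfs on most distros (assumed)
                  Path disk = Path.of("/var/tmp/tmpfs-demo.bin");       // usually disk-backed (assumed)
                  for (Path p : new Path[]{ram, disk}) {
                      long t0 = System.nanoTime();
                      Files.write(p, block, StandardOpenOption.CREATE,
                              StandardOpenOption.WRITE, StandardOpenOption.SYNC);
                      System.out.println(p + ": " + (System.nanoTime() - t0) / 1_000_000 + " ms");
                      Files.deleteIfExists(p);
                  }
              }
          }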

  • (Score: 2) by takyon on Wednesday March 31 2021, @01:53PM

    by takyon (881) <takyonNO@SPAMsoylentnews.org> on Wednesday March 31 2021, @01:53PM (#1131595) Journal

    I don't think any of the consumer memory modules out there use TSV stacked dies like this 512 GB module. It would be nice if that led to a cost-per-bit reduction (fat chance).

    T-Force Gaming Confirms Development of Next-Gen DDR5 Memory Modules With Overclocking Support, Can Push Voltages Over 2.6V [wccftech.com]

    T-Force also states that DDR5 memory has far greater room for voltage adjustment when it comes to overclocking support. This is primarily due to the upgraded power management ICs (PMIC) that allows for voltages over 2.6V. It is also detailed that existing DDR4 memory modules handled their voltage conversion through the motherboard but that changes with DDR5. The components that are required for the voltage conversion are now moved over to the memory DIMM itself, reducing voltage wear and noise generation while simultaneously offering increased room for overclocking.

    That seems very high and a big deal but I don't overclock memory so...

    China starts mass production of DDR5 memory [videocardz.com]
    Chinese Memory Maker Commences DDR5 Mass Production, Expected To Launch With Intel Alder Lake Desktop CPUs [wccftech.com]

    Micron memory, Chinese module.

    Alder Lake is still on track for around a September/October launch, but it will be launched with DDR4 and DDR5 motherboards. The majority of buyers might choose DDR4. Zen 4 will probably launch around mid-2022, DDR5 only.

    --
    [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
  • (Score: 0, Touché) by Anonymous Coward on Wednesday March 31 2021, @01:55PM (26 children)

    by Anonymous Coward on Wednesday March 31 2021, @01:55PM (#1131597)

    I don't want more ram.
    In fact I prefer a lot less for EVERYONE.
    Computers were a lot better with 1MB of ram than they are now.

    • (Score: 2) by takyon on Wednesday March 31 2021, @01:57PM (8 children)

      by takyon (881) <takyonNO@SPAMsoylentnews.org> on Wednesday March 31 2021, @01:57PM (#1131599) Journal

      Objectively wrong. Replace storage with RAM.

      --
      [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
      • (Score: 1, Insightful) by Anonymous Coward on Wednesday March 31 2021, @02:05PM (7 children)

        by Anonymous Coward on Wednesday March 31 2021, @02:05PM (#1131603)

        Storage too, but a finite (if large) amount of RAM like 1MB would stop the sloppy "programmers" of recent times from piling yet another abstraction layer on top of the existing ones, and would force them to actually care about code size and thus speed.
        Yeah, more RAM makes big LUTs possible and all that shit that may increase performance, but it's mostly irrelevant for the majority of programs anyway.

        • (Score: 4, Interesting) by takyon on Wednesday March 31 2021, @02:40PM (5 children)

          by takyon (881) <takyonNO@SPAMsoylentnews.org> on Wednesday March 31 2021, @02:40PM (#1131613) Journal

          To your original point, the cat is out of the bag. Average RAM per PC/tablet/smartphone isn't going to go down, maybe only reset as smaller devices come out (smartwatches currently have about 1 GB to 1.5 GB, which should go up by the time they are projecting Star Wars style holograms).

          Large amounts of RAM enable "bad" behavior, but don't prevent "good" behavior. A small amount of RAM doesn't give you a choice.

          Sloppy, abstract code (like my linked extension) usually runs everywhere (especially browser-based) and is easy to write. It's a good thing. Don't run sloppy code if you don't like it. We may be reaching the peak of abstraction anyway. If everyone suddenly had 8 TB of memory instead of 8 GB, would the bloat be able to grow to use up all of it?

          If most programmers are sloppy, a competent programmer has more value, even if that's restricted to niche areas like supercomputing.

          Finally, I think a good way to address the "problem" would be to participate in or support programming contests that restrict you to using limited hardware or resources (e.g. demoscene art). I know that limiting program size is popular, but I'm not sure about memory usage.

          https://www.youtube.com/watch?v=4UqwP2-FIWo [youtube.com]
          https://www.youtube.com/watch?v=JZ6ZzJeWgpY [youtube.com]

          --
          [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
          • (Score: 2) by DannyB on Wednesday March 31 2021, @04:20PM (4 children)

            by DannyB (5839) Subscriber Badge on Wednesday March 31 2021, @04:20PM (#1131663) Journal

            Sometimes more computer resources actually buy you features. Sometimes these features are what you once would have considered amazing. Yet those amazing features are nearly invisible. You don't think of them. Other than in terms of how modern software uses up so much more memory and cpu cycles.

            For example, the web browser box I'm typing this into right now does spell checking and highlights Miss Spelled words. (A cousin of Miss Direction and Miss Management)

            Word processors now routinely grammar check as you type. Now even email clients spell check and some even grammar check.

            Modern IDEs (used as alternatives to notepad to write code) offer all kinds of huge assistance at your fingertips. Smartly suggesting the next thing you might be wanting to type in your code. Code templates. Refactorings. All sorts of actually amazing things I would have amazingly found amazing just a few decades ago. Like when I used Classic Mac OS, or got my first Linux box. Heck my wristwatch (Linux, btw) is more powerful than my first Linux PC was in 1999.

            So is it bloat, or is it features? Or maybe some of both. What we call bloat might also be called "shorter development time" or "lower development effort". Which you hinted at:

            "and is easy to write"

            It is what it is.

            I for one welcome our bigger memory / faster cpu overlords.

            --
            When trying to solve a problem don't ask who suffers from the problem, ask who profits from the problem.
            • (Score: 2) by takyon on Wednesday March 31 2021, @04:44PM (3 children)

              by takyon (881) <takyonNO@SPAMsoylentnews.org> on Wednesday March 31 2021, @04:44PM (#1131678) Journal

              Loading all Earth mapping data [gearthblog.com] or a future 100 billion star catalog [spaceengine.org] could easily take up terabytes of memory*. A niche thing, but it would allow you to zip around instantly for your own amusement or animations.

              * "Universal memory" doesn't need a distinction between memory and storage. :P

              --
              [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
              • (Score: 2) by DannyB on Wednesday March 31 2021, @05:10PM (2 children)

                by DannyB (5839) Subscriber Badge on Wednesday March 31 2021, @05:10PM (#1131690) Journal

                That was an interesting and great example.

                There will come a point where the distinction between memory and storage will disappear. Rather, permanent non volatile storage will be as fast as what we now call "memory".

                That will upend decades of system architecture assumptions. Especially in the area of OS design.

                The ancient Palm Pilot had the illusion of working this way. Only because when turned off it went into a deep sleep. So what was in memory seemed to be saved on storage. But the design of how applications worked is what I find interesting. How does the design of entire systems change when you no longer have this artificial distinction between memory and storage?

                --
                When trying to solve a problem don't ask who suffers from the problem, ask who profits from the problem.
                • (Score: 2, Interesting) by aixylinux on Thursday April 01 2021, @12:48PM (1 child)

                  by aixylinux (7294) on Thursday April 01 2021, @12:48PM (#1132110)
                  Consider the single-level-storage architecture of the AS/400 (IBM System i).

                  https://en.wikipedia.org/wiki/IBM_System_i

                  If you are interested in computer system architectures, and all you know are Intel/Windows/Linux/Unix, you owe it to yourself to study the AS/400. The AS/400 has been rehosted many times on different chips, and it has never even been necessary to recompile the OS, much less the applications.

                  https://www.amazon.com/Inside-AS-400-Frank-Soltis/dp/1882419669
                  • (Score: 2) by DannyB on Thursday April 01 2021, @02:31PM

                    by DannyB (5839) Subscriber Badge on Thursday April 01 2021, @02:31PM (#1132137) Journal

                    The TIMI instruction set vaguely causes me to think of LLVM.

                    I don't mean to say this is comparable, but back in the day, on small computers (e.g., Apple II, DEC PDP, etc.), there was the UCSD p-System. To port it, you built a p-code emulator, typically about 1 or 2K words. Then the entire OS and all applications could run without recompiling. Cross-platform compiled bytecode compatibility long before Java.
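
                    For flavor, a toy stack-machine interpreter in the same spirit (a sketch only; the opcodes are made up for the example, not actual UCSD p-codes):

                        // TinyVM.java -- port this little interpreter to new hardware and the
                        // same bytecode runs unchanged, which is the whole p-code idea.
                        public class TinyVM {
                            static final int PUSH = 0, ADD = 1, PRINT = 2, HALT = 3;

                            public static void main(String[] args) {
                                int[] program = {PUSH, 2, PUSH, 40, ADD, PRINT, HALT};
                                int[] stack = new int[16];
                                int sp = 0, pc = 0;
                                while (true) {
                                    switch (program[pc++]) {
                                        case PUSH:  stack[sp++] = program[pc++]; break;
                                        case ADD:   stack[sp - 2] += stack[sp - 1]; sp--; break;
                                        case PRINT: System.out.println(stack[--sp]); break;  // prints 42
                                        case HALT:  return;
                                    }
                                }
                            }
                        }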

                    --
                    When trying to solve a problem don't ask who suffers from the problem, ask who profits from the problem.
        • (Score: 1) by melyan on Wednesday March 31 2021, @07:06PM

          by melyan (14385) on Wednesday March 31 2021, @07:06PM (#1131734) Journal

          Ego. It's not about RAM. It's about you feeling superior. Ego ruins everything.

    • (Score: 4, Informative) by DannyB on Wednesday March 31 2021, @02:27PM (11 children)

      by DannyB (5839) Subscriber Badge on Wednesday March 31 2021, @02:27PM (#1131610) Journal

      As we look at What's new in Java 16 [malloc.se], we find this:

      Summary:

      • Sub-millisecond Max Pause Times
      • What used to be a 10 ms max pause time is now well under 1 ms
      • . . . on multi terabyte heaps
      • ZGC now has O(1) pause times. In other words, they execute in constant time and do not increase with the heap, live-set, or root-set size (or anything else for that matter).
      • In the SPECjbb 2015 benchmark, on a machine with a 3 TB heap, 224 hyperthreads (Intel), and ~2100 Java threads, the max pause time was 0.5 ms (500 µs), with average pause times of 50 µs
      • ZGC now has pause times in the microsecond domain, with average pause times of ~50µs and max pause times of ~500µs. Pause times are unaffected by the heap, live-set and root-set size.

      Last September, when Java 15 came out, they had raised the maximum heap size from a paltry 4 TB to a more reasonable 16 TB. Now in Java 16, I can find no stated upper limit on heap size. The closest guidance I could find was whatever limit the OS imposes on heap size when launching a process. On an IBM web site (IIRC), there was a limitation on some Linux system of 128 TB because the kernel reserved 128 TB for kernel space and only 128 TB for user space.
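
      If you want to see what ceiling your JVM actually got, here is a minimal sketch (the flag values are illustrative; whether the OS lets you reserve that much address space depends on its overcommit settings):

          // HeapInfo.java -- run with something like: java -XX:+UseZGC -Xmx16384g HeapInfo
          // The JVM largely reserves address space up front and commits physical pages on demand,
          // so a heap this large may or may not be accepted on a given box.
          public class HeapInfo {
              public static void main(String[] args) {
                  long maxBytes = Runtime.getRuntime().maxMemory();
                  System.out.printf("max heap the JVM will use: %.1f GB%n", maxBytes / 1e9);
              }
          }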

      With advances in both GC and improved memory modules, such as described in TFA, we will be able to write bigger and more effective hello world programs. (See: Java Hello World: Enterprise Edition [github.com]) One criticism of this hello world program is that it fails to make use of an XML configuration file to specify where and how the "hello world" message text is located, and into what human language it is translated.

      --
      When trying to solve a problem don't ask who suffers from the problem, ask who profits from the problem.
      • (Score: 4, Funny) by takyon on Wednesday March 31 2021, @02:43PM (5 children)

        by takyon (881) <takyonNO@SPAMsoylentnews.org> on Wednesday March 31 2021, @02:43PM (#1131614) Journal

        If your machine doesn't have at least 1 petabyte of RAM, why even live?

        --
        [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
        • (Score: 2) by DannyB on Wednesday March 31 2021, @03:49PM (2 children)

          by DannyB (5839) Subscriber Badge on Wednesday March 31 2021, @03:49PM (#1131638) Journal

          If you suffer from insufficient memory, there still is hope. Use virtual memory. Then create a ramdisk. Then use the ramdisk as swap space for more virtual memory. Repeat until things are running smoothly.

          --
          When trying to solve a problem don't ask who suffers from the problem, ask who profits from the problem.
          • (Score: 0) by Anonymous Coward on Wednesday March 31 2021, @04:16PM (1 child)

            by Anonymous Coward on Wednesday March 31 2021, @04:16PM (#1131656)

            You need to try that joke a third time on this page because mod points are happiness and are so valuable

            • (Score: 2) by DannyB on Wednesday March 31 2021, @04:24PM

              by DannyB (5839) Subscriber Badge on Wednesday March 31 2021, @04:24PM (#1131667) Journal

              Okay: Do you remember a program called Soft Ram? Or how about Ram Doubler 9.0! [amazon.com]

              --
              When trying to solve a problem don't ask who suffers from the problem, ask who profits from the problem.
        • (Score: 0) by Anonymous Coward on Thursday April 01 2021, @03:04AM (1 child)

          by Anonymous Coward on Thursday April 01 2021, @03:04AM (#1131963)

          1 petabyte ought to be enough for anybody.

          • (Score: 0) by Anonymous Coward on Thursday April 01 2021, @06:14AM

            by Anonymous Coward on Thursday April 01 2021, @06:14AM (#1132028)

            No, no! That was Gill Bates saying 640K was enough for anybody, before he started injecting chips into everyone under the pretence of a vaccine just like they did with Osama bin Laden, and then started GPS tracking all of us and keeping records on when we did doody or peepee.

      • (Score: 2) by EEMac on Wednesday March 31 2021, @02:43PM (4 children)

        by EEMac (6423) on Wednesday March 31 2021, @02:43PM (#1131615)

        > ZGC now has O(1) pause times

        Sounds *fantastic*. What dark magic is this?! [malloc.se]

        But it has a price [malloc.se] . . .

        > The most important tuning option for ZGC is setting the max heap size (-Xmx). Since ZGC is a concurrent collector a max heap size must be selected such that, 1) the heap can accommodate the live-set of your application, and 2) there is enough headroom in the heap to allow allocations to be serviced while the GC is running.

        I remember setting memory size for applications back in classic MacOS. I guess that time has come again.

        • (Score: 3, Informative) by DannyB on Wednesday March 31 2021, @03:43PM (3 children)

          by DannyB (5839) Subscriber Badge on Wednesday March 31 2021, @03:43PM (#1131632) Journal

          Large Java applications have often set the maximum heap size, and have done so for years. The idea is to avoid repeatedly allocating memory from, and then giving it back to, the OS for every little growth and shrinkage of the overall heap. For a few versions now, Java will release memory back to the OS if, after some threshold, the workload is not using all it has available. On the flip side, if memory demand grows beyond the max heap size, and the OS can satisfy such an increase, then Java will expand its heap beyond your specified maximum, but only for as long as it is needed. E.g., if a thread servicing a request briefly needs a large additional gob of memory ("gob" being the technical term), it will get it. GC will reclaim it asap, and that above-max allocation will be returned to the OS. All of this means it is less critical these days to get the memory allocation parameters exactly right. For small programs under 4 GB of heap (such as a desktop application, or a teeny tiny server), specifying a max heap is often omitted.

          Since the modern GCs (ZGC in particular, but also Red Hat's Shenandoah GC and Azul's Zing GC) are affected by the number of cores available, the overall garbage collection rate can be impacted by core count. You want the collection rate to keep up with the allocation rate. People who maintain large applications typically know these parameters. There are great tools in Java to monitor these things, along with all sorts of knobs and dials to tweak. Java offers multiple GCs to choose from, and each has different performance characteristics that may be appealing. For example, the Parallel GC has great throughput but doesn't care much about latency, which is great for some purposes. The G1 collector gives you a good overall balance, which suits most workloads. Very few actually need ZGC, Shenandoah, or Zing.
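
          If you want to poke at those monitoring hooks yourself, a minimal sketch (the bean names differ per collector; the allocation loop just gives the GC some work to do):

              import java.lang.management.GarbageCollectorMXBean;
              import java.lang.management.ManagementFactory;

              // GcStats.java -- run with -XX:+UseParallelGC, -XX:+UseG1GC, -XX:+UseZGC, etc. to compare
              public class GcStats {
                  public static void main(String[] args) {
                      for (int i = 0; i < 1_000_000; i++) {
                          byte[] junk = new byte[1024];   // short-lived garbage for the collector to chew on
                      }
                      for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
                          System.out.printf("%s: %d collections, %d ms total%n",
                                  gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
                      }
                  }
              }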

          While Zing is only available commercially from Azul, the ZGC and Red Hat's Shenandoah GC are available in Open JDK from any of Adopt OpenJDK, IBM, SAP, Red Hat, Amazon, Azul, and a few others. One thing you'll notice about those companies is that they work on big servers, and big servers often run Java. Red Hat and IBM (the former acquired recently by the latter) both invest significantly in Java development. Why? Because it's where their customers are. Yes, Red Hat.

          I also remember memory allocation in Classic Mac OS. Ah, those were fun days.

          --
          When trying to solve a problem don't ask who suffers from the problem, ask who profits from the problem.
          • (Score: 2) by EEMac on Thursday April 01 2021, @12:10AM (1 child)

            by EEMac (6423) on Thursday April 01 2021, @12:10AM (#1131892)

            Today I learned. Thank you!

          • (Score: 0) by Anonymous Coward on Thursday April 01 2021, @11:12PM

            by Anonymous Coward on Thursday April 01 2021, @11:12PM (#1132309)

            It's more insidious than that. Tracing collectors have a size/speed trade-off: the faster they become, the less memory efficient they are. IIRC, to get a tracing collector to run as fast as direct collection requires at least 4x the RAM. To get the sub-millisecond times DannyB is talking about would require substantial space inefficiency.

            There are three different heap collection strategies in use (see the sketch after this list):
            • Direct collection. Best space and time efficiency, but can only handle tree structures. Used in C and by Rust's singleton pointers.
            • Counting collectors. Can handle DAGs but leak cycles. Space efficient but slower than direct collection. Manual implementations exist in C; used in many managed languages such as Python.
            • Tracing collectors. Can handle memory cycles. Space inefficient, and there is no guarantee if or when destructors will run; the most efficient ones don't support destructors at all. Unsuitable for memory-latency-constrained systems.
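
            For what it's worth, the cycle-handling difference is easy to demonstrate in Java (a sketch; System.gc() is only a hint, hence the retry loop):

                import java.lang.ref.WeakReference;

                // CycleDemo.java -- two objects that only reference each other.
                // A pure counting collector would leak this pair; Java's tracing GC reclaims it.
                public class CycleDemo {
                    static class Node { Node next; }

                    public static void main(String[] args) throws InterruptedException {
                        Node a = new Node();
                        Node b = new Node();
                        a.next = b;
                        b.next = a;                          // reference cycle
                        WeakReference<Node> probe = new WeakReference<>(a);
                        a = null;
                        b = null;                            // drop the roots; only the cycle remains
                        for (int i = 0; i < 10 && probe.get() != null; i++) {
                            System.gc();                     // a hint, not a guarantee
                            Thread.sleep(50);
                        }
                        System.out.println("cycle collected: " + (probe.get() == null));
                    }
                }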

    • (Score: 2) by VLM on Wednesday March 31 2021, @02:46PM (3 children)

      by VLM (445) on Wednesday March 31 2021, @02:46PM (#1131616)

      Come over to the virtualization world.

      Somehow vmware nsx-t seems to need like 40 GB per node for all its virtualized networking nonsense by the time you add everything up.

      Meanwhile "because you can" you end up with lots of little VMs all configured by ansible (or old timey puppet or similar)

      I don't have any 1 MB VMs, and even my first Linux box in '93 had 4 megs, but I do have a lot of 1 GB VMs just doing appliance-like things.

      There is a situation where giving VMs more RAM lets them buffer more stuff in memory, reducing I/O pressure. You can add memory, spread it around to various VMs, and watch your daily IOPS drop. So just because I "can" run FreeBSD on 512 megs or whatever doesn't mean I do.

      • (Score: 2) by DannyB on Wednesday March 31 2021, @04:02PM (2 children)

        by DannyB (5839) Subscriber Badge on Wednesday March 31 2021, @04:02PM (#1131647) Journal

        Somehow vmware nsx-t seems to need like 40 GB per node for all its virtualized networking nonsense by the time you add everything up.

        Sounds horrible. At first. But does it actually save you more money, ignoring the cost in bytes and cycles?

        There is too much focus on bytes and cpu cycles instead of focus on dollars. Once, long ago, saving bytes and cpu cycles was how you saved dollars because machines were expensive and developers were cheap. Now it's the reverse: developers expensive (and whiny) and machines cheap, reliable and commodity off the shelf.

        I have never touched vmware myself, but we use it in both our data centers and virtual data centers (yes, that's an actual thing now, wtf). I only access the VMs that I maintain: I make sure the apps work, they make sure the infrastructure works. And work it does. I have looked up the model numbers of some of the CPU chips my VMs run on, and boy, those are way expensive. Thousands of dollars worth of CPU just to go into one socket on the board. I'm sure I'll never lay eyes on this hardware, sitting somewhere far away.

        I occasionally get to see screen shots of vmware, or someone using it remotely.

        --
        When trying to solve a problem don't ask who suffers from the problem, ask who profits from the problem.
        • (Score: 2) by VLM on Monday April 05 2021, @03:27PM (1 child)

          by VLM (445) on Monday April 05 2021, @03:27PM (#1133493)

          I occasionally get to see screen shots of vmware, or someone using it remotely.

          Ask for limited admin rights, they gave me some. vSphere is pretty nice. I like being able to template VMs and clone them. Also the high level monitoring is pretty nice.

          You can also set it up at home for a modest expense, under an educational license with a nominal fee.

          • (Score: 2) by DannyB on Monday April 05 2021, @04:33PM

            by DannyB (5839) Subscriber Badge on Monday April 05 2021, @04:33PM (#1133517) Journal

            That is interesting.

            I have no need, especially for a fee, to learn how to do this. I tend to be automatically skeptical about learning something that costs money to learn, unless there is a very clear reason to learn it. And in that situation, my employer would pay for that learning. However, at work I have no need to manage the VMs and data centers. We have people who do that. I am part of the people who develop and maintain applications.

            --
            When trying to solve a problem don't ask who suffers from the problem, ask who profits from the problem.
    • (Score: 2) by Tork on Thursday April 01 2021, @02:16AM

      by Tork (3914) Subscriber Badge on Thursday April 01 2021, @02:16AM (#1131951)
      You are only one porn-quest away from regretting your words.
      --
      🏳️‍🌈 Proud Ally 🏳️‍🌈