posted by martyb on Tuesday October 23 2018, @05:15PM   Printer-friendly
from the keeping-up-with-the-Joneses^W^W-Samsung dept.

Cadence & Micron DDR5 Update: 16 Gb Chips on Track for 2019

Earlier this year Cadence and Micron performed the industry's first public demonstration of next-generation DDR5 memory. At a TSMC event earlier this month the two companies provided some updates concerning development of the new memory technology. It appears the spec has not yet been finalized at JEDEC, but Micron still expects to start production of DDR5 memory chips in late 2019.

As noted back in May, the headline feature of DDR5 SDRAM is higher chip capacity, not just higher performance and lower power consumption. DDR5 is expected to bring I/O speeds of 4266 to 6400 MT/s, with the supply voltage dropping to 1.1 V with an allowable fluctuation range of 3% (i.e., ±0.033 V). It is also expected to use two independent 32/40-bit channels per module (without/with ECC, respectively). Furthermore, DDR5 will have improved command bus efficiency (because each channel will have its own 7-bit Address (Add)/Command (Cmd) bus), better refresh schemes, and more bank groups for additional performance. In fact, Cadence goes as far as saying that the improved functionality of DDR5 will enable 36% higher real-world bandwidth compared to DDR4 even at the same 3200 MT/s (a claim that will have to be put to the test), and once 4800 MT/s speeds kick in, actual bandwidth will be 87% higher than DDR4-3200. Meanwhile, one of the most important features of DDR5 will be monolithic chip density beyond 16 Gb.
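As a back-of-the-envelope check on those percentages (our own arithmetic, not Cadence's figures), peak bandwidth is just transfer rate times bus width. For a standard 64-bit module, the raw-clock gain of DDR5-4800 over DDR4-3200 is only 50%, so the claimed 87% real-world gain must come mostly from the bus-efficiency features rather than clocks alone:

```python
def peak_bandwidth_gbs(mts, bus_width_bits=64):
    """Theoretical peak bandwidth in GB/s: transfers/s times bytes per transfer."""
    return mts * (bus_width_bits // 8) / 1000

ddr4_3200 = peak_bandwidth_gbs(3200)   # 25.6 GB/s
ddr5_4800 = peak_bandwidth_gbs(4800)   # 38.4 GB/s

# Peak-rate gain is only 50%, well short of the claimed 87% effective gain,
# so the rest would have to come from command-bus and refresh improvements.
print(round(ddr5_4800 / ddr4_3200 - 1, 2))   # 0.5
```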

Leading DRAM makers already have monolithic DDR4 chips featuring a 16 Gb capacity, but those devices cannot offer extreme clocks or I/O speeds because of the laws of physics. Therefore, companies like Micron have a lot of work to do to bring together high DRAM densities and performance in the DDR5 era. In particular, Micron is concerned about variable retention time and other atomic-level occurrences once the production technologies used for DRAM reach 10–12 nm. Meanwhile, the DDR5 Add/Cmd bus already features on-die termination to make signals cleaner and to improve stability at high data rates. Furthermore, high-end DDR5 DIMMs will have their own voltage regulators and PMICs. Long story short, while the DDR5 standard is tailored to wed performance and density, there is still a lot of magic to be done by DRAM manufacturers.

Previously: DDR5 Standard to be Finalized by JEDEC in 2018
DDR5-4400 Test Chip Demonstrated


Original Submission

 
This discussion has been archived. No new comments can be posted.
  • (Score: 2) by bzipitidoo on Tuesday October 23 2018, @06:57PM (6 children)

    by bzipitidoo (4388) on Tuesday October 23 2018, @06:57PM (#752563) Journal

    There are so many areas within a computer system that operate with a good deal of independence these days that it's hard to follow them all. It's been a long time since the "Central" in CPU meant god-like central control of everything, back when there was One Clock to Rule Them All.

    Let's see, the subsystems are:

    CPU
    GPU
    memory
    bus (PCI-E)
    USB
    networking
    HDD or SSD
    audio

    Of those, perhaps only audio is no big deal?

    • (Score: 4, Interesting) by takyon on Tuesday October 23 2018, @08:07PM (5 children)

      by takyon (881) <reversethis-{gro ... s} {ta} {noykat}> on Tuesday October 23 2018, @08:07PM (#752584) Journal

      The biggest deal for performance in recent memory has been the transition from HDDs to SSDs.

      A more transient thing has been the usual industry collusion causing DRAM prices to effectively triple.

      We may see something new get added to your list: a neuromorphic or tensor processing unit. Lots of smartphones have been including a variation on "AI" hardware in their SoCs, and I could see it getting added to desktop PCs. Perhaps integrated with the CPU.

      --
      [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
      • (Score: 2) by jasassin on Wednesday October 24 2018, @12:43AM (4 children)

        by jasassin (3566) <jasassin@gmail.com> on Wednesday October 24 2018, @12:43AM (#752688) Homepage Journal

        The biggest deal for performance in recent memory has been the transition from HDDs to SSDs.

        I think this will continue to be the most important factor for everyday use in the foreseeable future. I don't have one, because I'm concerned about their reliability and longevity.

        I'd love to hear positive and negative comments from long term owners of SSDs.

        --
        jasassin@gmail.com GPG Key ID: 0xE6462C68A9A3DB5A
        • (Score: 2) by bzipitidoo on Wednesday October 24 2018, @04:31AM (2 children)

          by bzipitidoo (4388) on Wednesday October 24 2018, @04:31AM (#752790) Journal

          In recent years, I've had more trouble with HDDs than ever. That AMD machine I mentioned is now on its 3rd HDD, the first 2 having failed prematurely. Meanwhile, every one of my older hard drives is still working, or was when I last used them.

          The SSD in my newest machine is a year and a half old, and so far, it's been working perfectly. I even keep it pretty close to full. And yeah, I even have a swap partition on it. That used to be a very bad idea, one of the prime ways to send your SSD to an early grave. But SSDs are a lot more robust these days, and with wear leveling and their much higher lifetime limit on the number of write cycles they can take, swap no longer poses that problem. Nevertheless, I have plenty of RAM, so the swap is seldom used. One thing I use swap for is to slow everything down if memory runs short. Gives me time to notice there's a problem and take intelligent action before the process killer starts bumping off processes, or worse things happen. That worked fine with an HDD-- their slowness was an advantage in that scenario. But now, with an SSD, file I/O may be too fast.

          Anyway, no matter which you use, you know what you're supposed to do: backup regularly. That does more than protect you from failing hardware. It also protects from human error.

          • (Score: 0) by Anonymous Coward on Friday October 26 2018, @07:47PM (1 child)

            by Anonymous Coward on Friday October 26 2018, @07:47PM (#754199)

            Seagate has been notorious for their failure rates. Toshiba/HGST are currently the best on most models, and WD falls in between the two, although closer to Seagate in reliability.

            I haven't had a serious failure on any of the latter three brands in 10-20 years, with 10+ year old drives still running with only a few stable bad-sector patches on them. Seagate, on the other hand, has given me a complete drive failure on a 3 TB drive and some drive-electronics failures on older 500 GB-2 TB drives using the same or similar controller modules.

            • (Score: 2) by bzipitidoo on Friday October 26 2018, @10:06PM

              by bzipitidoo (4388) on Friday October 26 2018, @10:06PM (#754259) Journal

              Yeah, I've heard Seagate is the least reliable these days, but the two that failed on me were WD drives. First one, a 1 TB Caviar Green, failed in just 9 months. The next one was a Caviar Black, and it failed after 3 years. I was determined to get a different brand after that, but there's not a lot to choose from anymore. Went with a Fujitsu, which is still working today. Meh, seems SSD is the way to go from now on.

        • (Score: 0) by Anonymous Coward on Wednesday October 24 2018, @09:37AM

          by Anonymous Coward on Wednesday October 24 2018, @09:37AM (#752878)

          SSD reliability has long since surpassed HDD reliability. Buy a name brand (Samsung, Corsair, Western Digital, or Intel perhaps if you have more money than brains) and keep an eye on the drive-life SMART counter. If you are convinced that your workload is high enough to wear out an SSD before you upgrade the machine, buy Optane, which has a nearly unlimited write lifetime. Hint: this is unlikely; even with swap on the SSD and half a dozen Windows updaters running in the background, you are unlikely to write more than a TB per year.
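To put rough numbers on that claim (the TBW rating and workloads below are illustrative assumptions, not specs for any particular drive):

```python
def years_to_wear_out(tbw_rating_tb, tb_written_per_year):
    """Naive endurance estimate: rated terabytes-written divided by yearly writes."""
    return tbw_rating_tb / tb_written_per_year

# A mid-range consumer SSD might be rated around 600 TBW (assumed figure).
# At the ~1 TB/year desktop workload the comment describes:
print(years_to_wear_out(600, 1.0))                    # 600.0 years, in theory

# Even a heavy 50 GB/day workload (~18.25 TB/year) far outlasts the machine:
print(round(years_to_wear_out(600, 0.050 * 365), 1))  # 32.9 years
```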

  • (Score: 2) by DannyB on Tuesday October 23 2018, @10:03PM

    by DannyB (5839) Subscriber Badge on Tuesday October 23 2018, @10:03PM (#752600) Journal

    In the Java world there are two new GCs for multi-terabyte heaps (yes, you read that right) that aim for 10 ms GC pause times: Red Hat's Shenandoah and Oracle's ZGC. Both are in OpenJDK (GPL + Classpath exception). There is also a commercial GC for big heaps, Azul Systems' Zing, which has been around a while and is known to be good for heaps of hundreds of gigabytes.

    I have heard of such systems having up to 768 cores.

    At long last it will be common to have enough memory, and low enough GC pause time for a decent Java Hello Whirrled program.
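For reference, a minimal sketch of trying those collectors (flag names from the OpenJDK docs of that era; the heap size and `MyApp` class are illustrative placeholders):

```shell
# ZGC, experimental in JDK 11 at the time, so it must be unlocked first:
java -XX:+UnlockExperimentalVMOptions -XX:+UseZGC -Xmx16g MyApp

# Shenandoah, available in Red Hat / some OpenJDK builds:
java -XX:+UnlockExperimentalVMOptions -XX:+UseShenandoahGC -Xmx16g MyApp
```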

    --
    Poverty exists not because we cannot feed the poor, but because we cannot satisfy the rich.