
posted by martyb on Monday June 04 2018, @10:35AM
from the remember-when-a-hard-disk-held-20MB? dept.

Samsung Unveils 32 GB DDR4-2666 SO-DIMMs

Samsung on Wednesday introduced its first consumer products based on the 16 Gb DDR4 memory chips it demonstrated earlier this year. The new SO-DIMMs are aimed at high-performance notebooks that benefit from both the speed and the capacity of their memory modules.

Samsung's new 32 GB DDR4 SO-DIMMs based on 16 Gb DDR4 memory ICs (integrated circuits) are rated for a 2666 MT/s data transfer rate at 1.2 V. Because the 16 Gb memory chips are made using Samsung's 10 nm-class process technology, the new module is claimed to be 39% more energy efficient than the company's previous-gen 16 GB SO-DIMM based on 20 nm-class ICs. According to Samsung, a laptop equipped with 64 GB of new memory consumes 4.578 W in active mode, whereas a notebook outfitted with 64 GB of previous-gen DDR4 consumes 7.456 W in active mode.
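
That claimed saving checks out against the quoted system figures; a quick sketch in Python, using only the numbers above:

    prev_gen, new_gen = 7.456, 4.578   # watts, 64 GB in active mode
    saving = 1 - new_gen / prev_gen    # ~0.386
    print(f"{saving:.0%}")             # -> 39%, matching Samsung's claim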

Insert obligatory ECC comment here.

Samsung press release. Also at Tom's Hardware and DigiTimes.


Original Submission

Related Stories

HP Footnote Leads Intel to Confirm Support for 128 GB of DRAM for 9th-Generation Processors 21 comments

Following HP's announcement of new ZBook mobile workstations, Intel has confirmed that the memory controller in 9th generation Intel Core processors will support up to 128 GB of DRAM. AMD's memory controller should also support 128 GB of DRAM:

Normally mainstream processors only support 64GB, by virtue of two memory channels, two DIMMs per memory channel (2DPC), and the maximum size of a standard consumer UDIMM being 16GB of DDR4, meaning 4x16GB = 64GB. However the launch of two different technologies, both double height double capacity 32GB DDR4 modules from Zadak and G.Skill, as well as new 16Gb DDR4 chips coming from Samsung, means that technically in a consumer system with four memory slots, up to 128GB might be possible.

With AMD, the company has previously stated that its memory controller can support future memory that comes to market (with qualification); Intel, however, has been steadfast in limiting the memory support on its chips to what is specifically within the specification. HP is now pre-empting the change in its latest launch with the following footnote:

1. 128GB memory planned to be available in December 2018

This has forced Intel into a statement, which reads as follows:

The new 9th Gen Intel Core processors' memory controller is capable of supporting DDR4 16Gb die density DIMMs, which will allow the processors to support a total system memory capacity of up to 128GB when populating both motherboard memory channels with 2 DIMMs per Channel (2DPC) using these DIMMs. As DDR4 16Gb die density DIMMs have only recently become available, we are now validating them, targeting an update in a few months' time.
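
The capacity arithmetic is straightforward to check; a quick sketch in Python of the dual-channel limits described above:

    channels = 2
    dimms_per_channel = 2                             # 2DPC
    for dimm_gb in (16, 32):
        print(channels * dimms_per_channel * dimm_gb)   # 64 GB, then 128 GB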

Here's an example of the double height, double capacity 32 GB memory modules from G.Skill, which use 8 Gb DRAM chips.

These are the Samsung 32 GB SO-DIMM DDR4 modules for laptops mentioned in the article. They are of a normal size but use Samsung's latest 16 Gb chips instead of 8 Gb.


Original Submission

Samsung Shows Off 256 GB Server Memory Modules Using 16 Gb Chips 4 comments

Samsung's plans to make 256 GB memory modules using 16 Gb chips are moving forward:

Samsung this week demonstrated its first 256 GB memory module for upcoming servers. The new Registered DIMM (RDIMM) is based on Samsung's 16 Gb DDR4 memory devices introduced earlier this year and takes advantage of the company's 3DS (three-dimensional stacking) packaging. The new module will offer higher performance and lower power consumption than two 128 GB LRDIMMs used today.

Samsung's 256 GB DDR4 Registered DIMM with ECC carries 36 memory packages featuring 8 GB (64 Gbit) of capacity each, along with IDT's 4RCD0229K register chip (to buffer address and command signals and increase the number of ranks supported by a memory channel). The packages are based on four single-die 16 Gb components that are interconnected using through-silicon vias (TSVs). Architecturally, the 256 GB module is octal ranked as it features two physical ranks and four logical ranks.
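
The package count follows from the standard ECC overhead of 8 check bits per 64 data bits; a quick sketch using only the figures quoted above:

    gb_per_die = 16 / 8                 # a 16 Gb die stores 2 GB
    gb_per_package = 4 * gb_per_die     # four-die TSV stack -> 8 GB, as stated
    raw_gb = 36 * gb_per_package        # 288 GB across all 36 packages
    print(raw_gb * 64 / 72)             # 256.0 GB of data; the rest holds ECC bits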

1 TB can't be too far behind.

Previously: Samsung Mass Produces 128 GB DDR4 Server Memory
Samsung Shows Off New 64 GB Server Memory Modules Using 16 Gb Chips, Promises 128-256 GB This Year
Samsung Unveils 32 GB Laptop DDR4 Modules


Original Submission

  • (Score: 2) by FakeBeldin on Monday June 04 2018, @11:26AM (1 child)

    by FakeBeldin (3360) on Monday June 04 2018, @11:26AM (#688321) Journal

    How prone would these chips be to rowhammer attacks [wikipedia.org]?

    • (Score: 4, Insightful) by FatPhil on Monday June 04 2018, @02:58PM

      Given that later-generation DDR3 chips mitigated against rowhammer, you'd hope that the same defences are in place; the actual tech/protocol differences between DDR3 and DDR4 are minimal, so it's more of a physical upgrade.
      --
      Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
  • (Score: 2) by Runaway1956 on Monday June 04 2018, @11:48AM (5 children)

    by Runaway1956 (2926) Subscriber Badge on Monday June 04 2018, @11:48AM (#688325) Journal

    Computer programs demand more and more memory, all the time. The OS demands more and more memory, all the time. For example, WinXP ran really nicely if you could give it a full gig of memory. It ran well on 512 meg, but could choke on occasion. Try running any late version of Windows on a single gig of memory. It will truly suck.

    I've noted a time or two that my "desktop", or whatever you may choose to call it, has 24 gig of memory. It never writes ANYTHING to virtual memory. Browsers, games, newsfeeds, utilities, terminals - they all stay open for weeks at a time, and still it never touches virtual memory. It truly is sweet, never having to wait for something to first write to disk, then read "memory" back from disk. It just never happens here.

    32 gig modules? Presumably, laptops will be offered with two or more modules. Holy CRAP! Never, never, never wait for virtual memory!

    Sure, some people could load up 32 or 64 gig of memory, and want more. Such people are NOT typical. My son, the mathematician? He's perfectly happy with 32 gig of memory. He doesn't use it all. His primary bottleneck is bus speed, and his secondary bottleneck is CPU speed. I guess on a rare occasion he uses most of his memory, but he isn't bumping into his upper limit on a regular basis.

    Personally, I've never used a laptop that had enough memory. At best, laptops seem to have barely adequate memory. The only more or less legitimate use of virtual memory on a laptop is the sleep feature.

    I could be motivated to invest in a real laptop, given all of that memory space!

    • (Score: 0) by Anonymous Coward on Monday June 04 2018, @01:05PM (2 children)

      by Anonymous Coward on Monday June 04 2018, @01:05PM (#688333)

      I could be motivated to invest in a real laptop, given all of that memory space!

      Invest?
      Pray tell, what exactly will you do with it that will recoup the money you "invested"?

      • (Score: 2) by Runaway1956 on Monday June 04 2018, @02:37PM

        by Runaway1956 (2926) Subscriber Badge on Monday June 04 2018, @02:37PM (#688378) Journal

        I mean "invest" in a similar manner to sport divers who invest in bigger, lighter, aluminum air tanks, or mountain climbers who invest in higher quality rope, pitons, and carabiners. I don't use a computer to make my living, but I'm willing to "invest" in something that can be compared to my desktop. 32 gig of RAM, an octocore CPU, and a sweet Nvidia 1080? It may not exactly keep up with my server that serves as a desktop, but it would be in the ballpark. Then again, applications optimized to make full use of that GPU would fly better than my twelve-core rig sporting a GPU three generations old.

      • (Score: 2) by bob_super on Monday June 04 2018, @05:31PM

        by bob_super (1357) on Monday June 04 2018, @05:31PM (#688451)

        My former job would "invest" in giving us the ability to manipulate or compile the latest FPGAs on our laptops. 32 GB was required three years ago; 64 GB would clearly help today.
        Obviously, the other parameter, processor speed, has not doubled in the last three years, so an actual compile would be insanely long (overnight to multiple days) and reserved for major issues, but at least analyzing and tweaking the results would benefit from more elbow room.
        The beefy compile server at the factory is not always reachable, often by design (.mil customers).

    • (Score: 2) by DannyB on Monday June 04 2018, @05:30PM

      by DannyB (5839) Subscriber Badge on Monday June 04 2018, @05:30PM (#688449) Journal

      My desktops have a minimum of 32 GB of memory, but no swap. I don't want swapping putting wear on the SSDs. I'm coming up on 3 years of no swap. Works perfectly. If for some reason I ever needed swap, I could create a swap file. It would be as efficient as a partition, because an SSD has no seek or rotational latency. Every sector is as near as every other.
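
      For anyone wanting to try the swap-file route, a hedged sketch of the usual Linux steps, driven from Python (assumes root and the standard util-linux tools; fallocate may not work on every filesystem):

          import subprocess

          # Create an 8 GiB swap file, lock down permissions, format, and enable it.
          subprocess.run(["fallocate", "-l", "8G", "/swapfile"], check=True)
          subprocess.run(["chmod", "600", "/swapfile"], check=True)  # swapon warns otherwise
          subprocess.run(["mkswap", "/swapfile"], check=True)
          subprocess.run(["swapon", "/swapfile"], check=True)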

      I start using memory if I run VMs. Or if I run a certain Java program that parses huge data files creating a model in memory.

      My Pixelbook has 8 GB memory. The Ubuntu on it has 11 GB of swap, just in case -- because I occasionally might launch Eclipse and related programs on it. But I don't treat it as a development machine.

      As for programs demanding more and more memory, they also demand more and more CPU cycles. There's a reason for that: software is getting far more sophisticated. There is a huge difference in features between Notepad and LibreOffice Writer, or between Notepad and Eclipse. Writer and Eclipse can both edit text, but each brings a vast feature set. Things we now take for granted. Spell checking as you type. And as you type in your source code.

      Or Excel takes so much more memory than Lotus 1-2-3. But modern Excel has way more features and sophistication.

      Developers put more into software, making it bigger and slower. People like the features, but pay for them with Moore's Law.

      As long as my SSDs don't begin developing a vibration, I won't worry about it.

      --
      What doesn't kill me makes me weaker for next time.
    • (Score: 0) by Anonymous Coward on Monday June 04 2018, @06:28PM

      by Anonymous Coward on Monday June 04 2018, @06:28PM (#688479)

      When I built my most recent computer earlier this year, I just maxed it out at 64GB and turned off swap and figured I'd never have to worry about running out of memory. It would be nice to have a laptop with the same amount of memory.

  • (Score: 3, Interesting) by FatPhil on Monday June 04 2018, @02:26PM (3 children)

    It doesn't just apply to processors; it's about *processes*, so even banks of dumb memory cells count.

    "10nm-class" is being defined as having "twice the density" of "20nm-class" (thus scaled by area, not linearly), which is consistent with the marketing scheme they've had for about a decade now. But the previous chips came out 4 years ago. That's half the rate of density doubling that had been remarkably consistent until pretty recently, and it provides evidence that we really are beginning to stretch the limits of lithography. And whilst improvements are being promised, we don't know how quickly they'll appear; even then, the current technology will quite soon be hitting blockages put in the way by the laws of physics themselves. And I don't see any viable replacement tech.
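
    To make the area-versus-linear point concrete, a quick sketch (the node names are marketing labels, not measured dimensions):

        import math

        shrink = math.sqrt(2)   # doubling bit density shrinks each linear dimension by ~1.41x
        print(20 / shrink)      # a "20nm-class" feature maps to ~14 nm, still within "10nm-class" (10-19 nm)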

    Moore's Law's not dead, but it looks like it's on crutches. It's utterly amazing that his observation held so true for so long, to be honest.

    I'm not too worried; there's still an enormous amount of waste caused by a dogged attachment to legacy architectures. Radical new processor designs could save the day, or at least slow down the rot for a while. And maybe software bloat will follow a J curve rather than an S curve, and we'll all strive for "lightweight" again rather than "feature packed".
    --
    Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
  • (Score: 1, Interesting) by Anonymous Coward on Monday June 04 2018, @08:22PM

    by Anonymous Coward on Monday June 04 2018, @08:22PM (#688550)

    Possibly a silly thought, and one that probably carries a severe performance cost, but we have software RAID in Linux for storage; what would happen if we implemented software ECC for memory? How feasible is it? And what funky features and use cases could it enable if we allowed only specific chunks of memory space to be software-ECC-enabled (with varying levels of redundancy), much like you can chop up block devices in Linux with LVM2 and do all sorts of mix-and-match and weird layering with RAID on top of that?
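
    Here's a toy illustration of the idea: a single-error-correcting Hamming(7,4) code in pure Python, protecting 4 data bits with 3 check bits. Real DRAM ECC uses 8 check bits per 64-bit word in hardware, and a practical software variant would need kernel support to scrub memory on access, so this is purely illustrative:

        def encode(nibble):
            """Pack 4 data bits into a 7-bit Hamming codeword."""
            d = [(nibble >> i) & 1 for i in range(4)]
            p1 = d[0] ^ d[1] ^ d[3]   # covers codeword positions 1,3,5,7
            p2 = d[0] ^ d[2] ^ d[3]   # covers positions 2,3,6,7
            p3 = d[1] ^ d[2] ^ d[3]   # covers positions 4,5,6,7
            bits = [p1, p2, d[0], p3, d[1], d[2], d[3]]   # positions 1..7
            return sum(b << i for i, b in enumerate(bits))

        def decode(word):
            """Return the 4 data bits, correcting up to one flipped bit."""
            bits = [(word >> i) & 1 for i in range(7)]
            s1 = bits[0] ^ bits[2] ^ bits[4] ^ bits[6]
            s2 = bits[1] ^ bits[2] ^ bits[5] ^ bits[6]
            s3 = bits[3] ^ bits[4] ^ bits[5] ^ bits[6]
            syndrome = s1 | (s2 << 1) | (s3 << 2)   # 1-based error position
            if syndrome:
                bits[syndrome - 1] ^= 1             # repair the flipped bit
            return bits[2] | (bits[4] << 1) | (bits[5] << 2) | (bits[6] << 3)

        # Flip any single bit of the codeword; decode still recovers the data.
        cw = encode(0b1011)
        assert decode(cw ^ (1 << 3)) == 0b1011

    Varying the redundancy per region, as suggested, would then just mean picking a different code per chunk: plain parity for detection only, SECDED for correction, or RAID-style XOR across whole pages.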
