posted by martyb on Tuesday February 25 2020, @08:00PM
from the lotsa-bits dept.

Samsung Starts Mass Production of Second-Gen 16GB LPDDR5 RAM for Future Premium Smartphones

Samsung has announced that it will kick off mass production of the world's first 16GB LPDDR5 RAM package for future smartphones. Last year, the Korean giant said it had started mass production of 12GB LPDDR5 RAM. For 2020, Samsung has turned the production dial to the next notch and claims that the new RAM packages will let users enjoy enhanced 5G and AI features, from graphics-rich gaming to smart photography.

According to the company, the data transfer rate for the 16GB LPDDR5 [package] is 5500Mb/s (megabits per second), making it significantly faster than the previous-generation LPDDR4X RAM package, which tops out at 4266Mb/s. That's not the only benefit of these chips: compared to an 8GB LPDDR4X package, the new mobile DRAM can deliver more than 20 percent power savings while offering twice the memory capacity.
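To put those per-pin figures in package terms, here is a back-of-the-envelope sketch; the 64-bit package interface width is an assumption (typical for flagship-phone DRAM packages), not something stated in the press release.

```java
// Rough bandwidth math for the figures above.
// Assumption (not from the press release): a 64-bit-wide package interface.
public class BandwidthCheck {
    public static void main(String[] args) {
        int busWidthBits = 64;          // assumed package interface width
        double lpddr5PerPin = 5500e6;   // 5500 Mb/s per pin
        double lpddr4xPerPin = 4266e6;  // 4266 Mb/s per pin

        double lpddr5BytesPerSec = lpddr5PerPin * busWidthBits / 8;
        double lpddr4xBytesPerSec = lpddr4xPerPin * busWidthBits / 8;

        System.out.printf("LPDDR5:  %.1f GB/s%n", lpddr5BytesPerSec / 1e9);  // ~44.0 GB/s
        System.out.printf("LPDDR4X: %.1f GB/s%n", lpddr4xBytesPerSec / 1e9); // ~34.1 GB/s
        System.out.printf("Speedup: %.0f%%%n",
                (lpddr5PerPin / lpddr4xPerPin - 1) * 100);                   // ~29%
    }
}
```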

16 GB DRAM packages could also be used in single board computers and other compact systems. For example, the BCM2711 SoC used in the Raspberry Pi 4 Model B can theoretically address up to 16 GB of memory.

Samsung press release. Also at AnandTech.

Previously: Samsung Announces 8 GB DRAM Package for Mobile Devices
Samsung Announces LPDDR5 DRAM Prototype Before Specification is Finalized
Samsung Begins Mass Producing 12 GB DRAM Packages for Smartphones
Samsung Mass Producing LPDDR5 DRAM (12 Gb x 8 for 12 GB Packages)
Get Ready for Smartphones with 16 GB of RAM


Original Submission

 
  • (Score: 2) by DannyB (5839) Subscriber Badge on Wednesday February 26 2020, @03:19PM (#962885) Journal (1 child)

    Memory closely integrated with processors at the chip level makes sense. You would upgrade memory and processing power together.

    Another thing I think will eventually happen, though it will be controversial:

    Hardware-assisted GC

    Note that nearly every mainstream language of the last two freaking decades has garbage collection. Remember "Lisp machines" from the 1980s? Like Symbolics? Their systems didn't execute Lisp especially fast, but they provided hardware-level assistance for GC, which made GC amazingly fast.

    Look at the amazing things the JVM (Java Virtual Machine) has done with GC. If only the JVM's GC could benefit all the other languages (Python, JavaScript, Go, Lisps, etc.). Of course, those languages could use the JVM as a runtime. And GraalVM _might_ make something like that happen, where lots of different languages run in the same runtime, can transparently call each other's functions and classes, and share a common set of underlying data types. Red Hat's Shenandoah and Oracle's open-source ZGC are amazing garbage collector technology: terabytes of memory with 1 ms GC pause times. Now imagine if you had hardware assistance for GC. (BTW, why is Red Hat investing so much in Java development? I thought they were a Linux company? Could Red Hat, which is a publicly traded company, have some economic reason, i.e. Java making them lots of money?)
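    For the curious: on recent JDKs those collectors are a command-line flag away (the flags below are real JVM options; ZGC was experimental before JDK 15, Shenandoah ships in Red Hat and most OpenJDK builds, and `MyApp` is just a placeholder class name):

    ```
    java -XX:+UseZGC -Xmx8g MyApp           # Oracle's ZGC
    java -XX:+UseShenandoahGC -Xmx8g MyApp  # Red Hat's Shenandoah
    ```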

    Rationale: GC is an economic reality. Ignore the whining of the C programmers in the peanut gallery for a moment. They'll jump up and down and accuse other professionals of not knowing how to manage memory. Ignore it. Why do we use high-level languages (like C) instead of assembly language? Answer: human productivity! Our code would be so much more efficient if we wrote EVERYTHING, including this SN board, directly in assembly language!!! So why don't we??? Because, as C programmers are simply unwilling to admit, the economic reality is that programmers are vastly more productive in higher and ever higher level languages. Sure, there is an efficiency cost. But we're optimizing for dollars, not for bytes and CPU cycles. Hardware is cheap; developer time is expensive.

    Slight aside: ARM processors already have some hardware provision (Jazelle) for executing JVM bytecodes (gasp! omg!).

    I'm surprised that modern Intel or AMD designs haven't introduced some hardware assistance for GC.

    Symbolics hardware, IIRC, had extra bits in each memory word (36-bit words, I think) to "tag" the type of information in every word. That gave the hardware an efficient way to find all words that happened to be pointers, and a way to mark all words that were "reachable" from the root set, etc.
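    To make that concrete, here is a toy sketch of what per-word tags buy a collector (my own illustration, not Symbolics microcode): when every word carries a tag saying whether its payload is a pointer, the mark phase can trace the heap with no per-type layout tables at all.

    ```java
    import java.util.ArrayDeque;
    import java.util.Deque;

    // Toy model of a tagged-word heap: each word carries a tag bit saying
    // whether its payload is a pointer (a heap index) or raw data.
    public class TaggedHeapMark {
        static final int POINTER = 1, DATA = 0;

        static int[] tag;       // tag bit per word (POINTER or DATA)
        static long[] word;     // payload: heap index if POINTER, raw bits if DATA
        static boolean[] marked;

        // Mark everything reachable from the root set.
        static void mark(int[] roots) {
            Deque<Integer> stack = new ArrayDeque<>();
            for (int r : roots) stack.push(r);
            while (!stack.isEmpty()) {
                int i = stack.pop();
                if (marked[i]) continue;
                marked[i] = true;
                // The tag bit alone tells us whether to follow the payload.
                if (tag[i] == POINTER) stack.push((int) word[i]);
            }
        }

        public static void main(String[] args) {
            tag    = new int[]  {POINTER, DATA, POINTER, DATA, DATA};
            word   = new long[] {2,       42,   3,       7,    9};
            marked = new boolean[tag.length];
            mark(new int[]{0});  // root set contains only word 0
            for (int i = 0; i < tag.length; i++)
                System.out.println("word " + i + " marked=" + marked[i]);
            // Words 0, 2, and 3 are reachable; 1 and 4 are garbage.
        }
    }
    ```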

    Maybe this can happen if memory and processing elements become highly integrated and interconnected. Hardware design will follow the money just as programming languages and technology stacks do.

    Others will go on believing that system design must stand still to conform to a romantic idealism that was the major economic reality once upon a time.

    --
    To transfer files: right-click on file, pick Copy. Unplug mouse, plug mouse into other computer. Right-click, paste.
  • (Score: 2) by DannyB (5839) Subscriber Badge on Wednesday February 26 2020, @03:33PM (#962890) Journal

    I just reposted this as a Journal entry. [soylentnews.org]

    --
    To transfer files: right-click on file, pick Copy. Unplug mouse, plug mouse into other computer. Right-click, paste.