
posted by martyb on Tuesday February 25 2020, @08:00PM   Printer-friendly
from the lotsa-bits dept.

Samsung Starts Mass Production of Second-Gen 16GB LPDDR5 RAM for Future Premium Smartphones

Samsung has announced that it will kick off mass production of the world's first 16GB LPDDR5 RAM package for future smartphones. Last year, the Korean giant stated that it had started mass production of 12GB LPDDR5 RAM. For 2020, Samsung has turned the production dial to the next phase and claims that the new RAM packages will enable users to experience enhanced 5G and AI features, ranging from graphics-rich gaming to smart photography.

According to the company, the data transfer rate for the 16GB LPDDR5 [package] is 5500Mb/s (megabits per second), making it significantly faster than the previous-generation LPDDR4X RAM package, which peaks at 4266Mb/s. That's not the only benefit of these chips: compared to an 8GB LPDDR4X package, the new mobile DRAM can deliver more than 20 percent power savings while offering twice the memory capacity.
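Some back-of-the-envelope math on those figures (the press release quotes per-pin rates; the 64-bit package interface below is an assumption, not something Samsung states):

public class Lpddr5Math {
    public static void main(String[] args) {
        double lpddr5  = 5500.0; // Mb/s per pin, quoted for the 16GB LPDDR5 package
        double lpddr4x = 4266.0; // Mb/s per pin, quoted for LPDDR4X

        // Per-pin speedup: 5500 / 4266 - 1 ~= 29%
        System.out.printf("Per-pin speedup: ~%.0f%%%n", (lpddr5 / lpddr4x - 1) * 100);

        // Peak package bandwidth, assuming a 64-bit (x64) interface:
        // 5500 Mb/s x 64 pins / 8 bits per byte = 44,000 MB/s = 44 GB/s
        System.out.printf("Peak bandwidth: ~%.0f GB/s%n", lpddr5 * 64 / 8 / 1000);
    }
}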

16 GB DRAM packages could also be used in single-board computers and other compact systems. For example, the BCM2711 SoC used in the Raspberry Pi 4 Model B can theoretically address up to 16 GB of memory.

Samsung press release. Also at AnandTech.

Previously: Samsung Announces 8 GB DRAM Package for Mobile Devices
Samsung Announces LPDDR5 DRAM Prototype Before Specification is Finalized
Samsung Begins Mass Producing 12 GB DRAM Packages for Smartphones
Samsung Mass Producing LPDDR5 DRAM (12 Gb x 8 for 12 GB Packages)
Get Ready for Smartphones with 16 GB of RAM


Original Submission

 
  • (Score: 4, Interesting) by DannyB on Tuesday February 25 2020, @08:54PM (8 children)

    by DannyB (5839) Subscriber Badge on Tuesday February 25 2020, @08:54PM (#962544) Journal

    Imagine cheap $35 computer boards that have four CPU cores, gigabytes of memory, and fast solid-state storage.

    In January 1975, an Altair 8800 with 1 KB of memory was expensive, bulky, heavy, slow, and power-hungry, and it required hand-loading a boot loader via the front panel.

    Imagine the power of packing that large Altair 8800 box with these hypothetical cheap computer boards I speak of.

    Now think of our future. In time, you can expect small, inexpensive computers that have hundreds of CPU cores and astonishing amounts of memory, capable of running software of untold complexity and abstraction. And C programmers will still whine that their low-level language has the best performance, and pretend that it somehow matters. Developer time will still cost money. Most problems to solve will be even more complex than today's.

    --
    Don't put a mindless tool of corporations in the white house; vote ChatGPT for 2024!
  • (Score: 1, Insightful) by Anonymous Coward on Tuesday February 25 2020, @09:16PM (3 children)

    by Anonymous Coward on Tuesday February 25 2020, @09:16PM (#962549)

    ... and all that power will be used to try to sell you stuff you don't need.

    • (Score: 4, Funny) by takyon on Tuesday February 25 2020, @09:17PM (2 children)

      by takyon (881) <{takyon} {at} {soylentnews.org}> on Tuesday February 25 2020, @09:17PM (#962552) Journal

      Like a foldable smartphone with 32 GB of RAM?

      --
      [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
      • (Score: 0) by Anonymous Coward on Wednesday February 26 2020, @01:48AM

        by Anonymous Coward on Wednesday February 26 2020, @01:48AM (#962655)

        will 32GB even be enough to containerize a docker instance on a Hackintosh-Air ARM64 edition?

      • (Score: 2) by DannyB on Wednesday February 26 2020, @02:58PM

        by DannyB (5839) Subscriber Badge on Wednesday February 26 2020, @02:58PM (#962867) Journal

        What I want is for a phone to fold eight times to become so small it disappears into the quantum foam so that it doesn't cause a large bulge in my pocket.

        This would be an improvement on George Jetson's car folding into a briefcase that is too heavy to lift.

        --
        Don't put a mindless tool of corporations in the white house; vote ChatGPT for 2024!
  • (Score: 5, Interesting) by takyon on Tuesday February 25 2020, @09:16PM (3 children)

    by takyon (881) <{takyon} {at} {soylentnews.org}> on Tuesday February 25 2020, @09:16PM (#962550) Journal

    The future is going to be monolithic 3D chips with x86, ARM, or RISC-V cores and memory placed just tens or hundreds of nanometers away from the cores. The performance increase will be so vast that single board computers or docked smartphones could replace most desktops. Ideally, we will see a universal memory technology with much higher density and lower cost than today's DRAM, one also capable of replacing NAND.

    I don't know how many more layers of abstraction can be added. But if there is a demand to optimize software for every iota of performance, such as for supercomputing, then someone will get paid to do it.

    --
    [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
    • (Score: -1, Offtopic) by Anonymous Coward on Wednesday February 26 2020, @07:40AM

      by Anonymous Coward on Wednesday February 26 2020, @07:40AM (#962760)

      Gu tossed Xiu Xiang's project onto the lawn. He reached into the drive compartment and flipped out the loose hull section. A ragged and maybe disdainful cheer rose from the kids behind him. "Hey, dork! There has to be a latch. Why didn't you scam the lock?"

      Gu didn't seem to hear. He leaned forward to look into the interior. Juan edged closer. The compartment was in shadow, but he could see well enough. Not counting damage, it looked just like the manual said. There were some processor nodes and fiber leading to the dozens of other nodes and sensors and effectors. There was the steering servo. Along the bottom, just missed by Gu's cutting, was the DC bus to the left-front wheel. The rest was empty space. The capacitor and power cells were in the back.

      Gu stared into the shadows. There was no fire, no explosion. Even if he had chopped into the back, the safeties would have prevented any spectacular outcome. But Juan saw more and more error flags float into view. A junk wagon would be coming real soon.

      Gu's shoulders slumped, and Juan got a closer look at the component boxes. Every one had physical signage: "No user-serviceable parts within".

      The old guy stood and took a step away from the car. Behind them, Chumlig and now Williams were on the scene, herding the students back into the tent. For the most part, the kids were fully stoked by all the insanity. None of them, not even the Radner brothers, ever had the courage to run amok. When they committed something major, it was usually done in software, like what the guy had shouted from the crowd.

      Xiu Xiang gathered up her weird, Gu-improved, project. She was shaking her head and mumbling to herself. She unplugged the gadget and took a step toward Robert Gu. "I object to your appropriation of my toy!" she said. There was an odd expression on her face. "Though you did improve it with that extra bend." Gu didn't respond. She hesitated. "And I never would have run it with line power!"

      Gu waved at the guts of the dead car. "It's Russian dolls all the way down, isn't it, Orozco?"

      Juan didn't bother to look up "Russian dolls". "It's just throwaway stuff, Professor Gu. Why would anyone want to fool with it?"

    • (Score: 2) by DannyB on Wednesday February 26 2020, @03:19PM (1 child)

      by DannyB (5839) Subscriber Badge on Wednesday February 26 2020, @03:19PM (#962885) Journal

      Memory closely integrated with processors at the chip level makes sense. You would upgrade memory and processing power together.

      Another thing I think will eventually happen, though it will be controversial:

      Hardware-assisted GC

      Note that virtually every mainstream language of the last two freaking decades has garbage collection. Remember "Lisp machines" from the 1980s? Like Symbolics? Their systems didn't execute Lisp especially fast, but they provided hardware-level assistance for GC, which made GC amazingly fast.

      I look at the amazing things the JVM (Java Virtual Machine) has done with GC. If only the JVM's GC could benefit all other languages (Python, JavaScript, Go, Lisps, etc.). Of course, those languages could use the JVM as a runtime. And GraalVM _might_ make something like that happen, where lots of different languages run in the same runtime, can transparently call each other's functions and classes, and share a common set of underlying data types. Red Hat's Shenandoah and Oracle's open-source ZGC are amazing garbage collector technology: terabytes of memory with 1 ms GC pause times. Now imagine if you had hardware assistance for GC. (BTW, why is Red Hat investing so much in Java development? I thought they were a Linux company? Could Red Hat, which is a publicly traded company, have some economic reason, like Java making them lots of money?)
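      You can watch what your own JVM's collectors are doing with nothing but the standard java.lang.management API. A minimal sketch (the class name and the allocation loop are just illustration):

      import java.lang.management.GarbageCollectorMXBean;
      import java.lang.management.ManagementFactory;

      // Prints cumulative GC counts and pause times from inside the running JVM.
      // Try it under different collectors, e.g.:
      //   java -XX:+UseZGC GcPauseProbe            (production-ready in JDK 15+)
      //   java -XX:+UseShenandoahGC GcPauseProbe   (Red Hat builds; upstream JDK 12+)
      public class GcPauseProbe {
          static volatile int sink; // keeps the JIT from eliminating the loop

          public static void main(String[] args) {
              for (int i = 0; i < 1_000_000; i++) {
                  byte[] junk = new byte[1024]; // churn the heap to provoke collections
                  sink += junk.length;
              }
              for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
                  System.out.printf("%s: %d collections, %d ms total%n",
                          gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
              }
          }
      }

      Run it once with the default collector and once with ZGC or Shenandoah, and compare the totals.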

      Rationale: GC is an economic reality. Ignore the whining of the C programmers in the peanut gallery for a moment. They'll jump up and down and accuse other professionals of not knowing how to manage memory. Ignore it. Why do we use high-level languages (like C) instead of assembly language? Answer: human productivity! Our code would be so much more efficient if we wrote EVERYTHING, including this SN board, directly in assembly language!!! So why don't we??? Because, as C programmers are simply unwilling to admit, the economic reality is that programmers are vastly more productive in ever-higher-level languages. Sure, there is an efficiency cost to this. But we're optimizing for dollars, not for bytes and CPU cycles. Hardware is cheap; developer time is expensive.

      Slight aside: ARM processors already have some hardware provision for executing JVM bytecodes, namely Jazelle (gasp! omg!).

      I'm surprised that modern Intel or AMD designs haven't introduced some hardware assistance for GC.

      Symbolics hardware, IIRC, had extra bits in each memory word (36-bit words, I think) to "tag" the type of information in every word. That gave the hardware a way to efficiently find all words that happened to be pointers, and a way to mark all words that were "reachable" from the root set, etc.
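      For illustration, here's the idea of word tagging sketched in plain Java. The 3-bit tag encoding is invented for the example (it is not the actual Symbolics format); the point is that a tag-aware memory system could do the isPointer test in silicon while scanning:

      public class TaggedWord {
          // Low 3 bits of each 64-bit word hold a type tag (hypothetical encoding).
          static final long TAG_MASK    = 0b111;
          static final long TAG_FIXNUM  = 0b000; // small integer stored in the word itself
          static final long TAG_POINTER = 0b001; // word holds a heap address

          static long makeFixnum(long n)      { return (n << 3) | TAG_FIXNUM; }
          static long makePointer(long addr)  { return (addr << 3) | TAG_POINTER; }
          static boolean isPointer(long word) { return (word & TAG_MASK) == TAG_POINTER; }

          public static void main(String[] args) {
              long[] heap = { makeFixnum(42), makePointer(0x1000), makeFixnum(7) };
              // A GC mark phase can identify pointers in one linear scan,
              // with no per-object type metadata lookups:
              for (long w : heap) {
                  System.out.println(isPointer(w)
                          ? "pointer -> 0x" + Long.toHexString(w >>> 3)
                          : "fixnum " + (w >>> 3));
              }
          }
      }

      Tagged hardware did that classification for free on every memory access, which is exactly the kind of assist I mean.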

      Maybe this can happen if memory and processing elements become highly integrated and interconnected. Hardware design will follow the money just as programming languages and technology stacks do.

      Others will believe that system design will stand still to conform to a romantic idealism that was the major economic reality once upon a time.

      --
      Don't put a mindless tool of corporations in the white house; vote ChatGPT for 2024!
      • (Score: 2) by DannyB on Wednesday February 26 2020, @03:33PM

        by DannyB (5839) Subscriber Badge on Wednesday February 26 2020, @03:33PM (#962890) Journal

        I just reposted this as a Journal entry. [soylentnews.org]

        --
        Don't put a mindless tool of corporations in the white house; vote ChatGPT for 2024!