
posted by chromas on Tuesday September 11 2018, @03:03AM   Printer-friendly
from the nano-SoCs dept.

Samsung Foundry Updates: 8LPU Added, EUVL on Track for HVM in 2019

Samsung recently hosted its Samsung Foundry Forum 2018 in Japan, where it made several significant foundry announcements. Besides reiterating plans to start high-volume manufacturing (HVM) using extreme ultraviolet lithography (EUVL) tools in the coming quarters, along with reaffirming plans to use gate-all-around FETs (GAAFETs) with its 3 nm node, the company also added its brand-new 8LPU process technology to its roadmap. Samsung Foundry's general roadmap was announced earlier this year, so at SFF in Japan the contract maker of semiconductors reiterated some of its plans, made certain corrections, and provided additional details about its roadmap.

First up, Samsung added another fabrication technology to its family of manufacturing processes based on its 10 nm node. The new tech is called 8LPU (low power ultimate) and, by Samsung's usual classification, is a process for SoCs that require both high clocks and high transistor density. Samsung's 8LPP technology, which qualified for production last year, is a development of Samsung's 10 nm node that uses narrower metal pitches to deliver a 10% area reduction (at the same complexity) as well as 10% lower power consumption (at the same frequency and complexity) compared to the 10LPP process. 8LPU is a further evolution of the platform that likely increases transistor density and frequency potential vs. 8LPP. Meanwhile, Samsung does not disclose how it managed to improve 8LPU over 8LPP, or whether the gains come from advances in design rules, use of a new library, or a shrink of metal pitches. Samsung's 8LPP and 8LPU technologies are aimed at customers who need higher performance, lower power, and/or higher transistor density than Samsung's 10LPP, 10LPC, and 10LPU processes can offer, but who cannot gain access to Samsung's 7LPP or more advanced manufacturing technologies that use EUVL. Risk production using 8LPU was set to start in 2018, so expect high-volume manufacturing to commence next year at Samsung's Fab S1 in Giheung, South Korea.
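As a quick sanity check of the quoted figures, the scaling works out as follows. The 10% numbers come from the article; the baseline die area and power are made-up placeholders, not Samsung data.

```python
# Back-of-envelope check of Samsung's quoted 8LPP gains over 10LPP.
# The ~10% area and ~10% power reductions are from the article;
# the baseline numbers are purely illustrative.

def scaled(value, reduction_pct):
    """Apply a percentage reduction to a baseline value."""
    return value * (1 - reduction_pct / 100)

baseline_area_mm2 = 100.0   # hypothetical 10LPP die area
baseline_power_w = 5.0      # hypothetical 10LPP power at fixed frequency

area_8lpp = scaled(baseline_area_mm2, 10)   # 90.0 mm^2
power_8lpp = scaled(baseline_power_w, 10)   # 4.5 W

print(f"8LPP die: {area_8lpp:.1f} mm^2, {power_8lpp:.2f} W")
```

Note that these per-node percentages compound across generations, which is why foundries quote them relative to the immediately preceding process.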

[...] By the time the new production line in Hwaseong becomes operational, Samsung Foundry promises to start risk production using its 5/4 nm node. As reported earlier this year, Samsung is prepping 5LPE, 4LPE, and 4LPP fabrication technologies, but this list will likely expand eventually. Based on what Samsung has disclosed about all three manufacturing processes so far, they will share certain similarities, which will simplify migration from 5LPE all the way to 4LPP, though the company does not elaborate. [...] One of the unexpected things that Samsung Foundry announced was the start of risk production on its 3 nm node as early as 2020, at least a year ahead of earlier expectations. Samsung's 3 nm will be the first node to use the company's own GAAFET implementation, called MBCFET (multi-bridge-channel FET), and will officially include at least two process technologies: 3GAAE and 3GAAP (3 nm gate-all-around early/plus).

Previously: Samsung Roadmap Includes "5nm", "4nm" and "3nm" Manufacturing Nodes
Samsung Plans to Make "5nm" Chips Starting in 2019-2020

Related: GlobalFoundries Abandons "7nm LP" Node, TSMC and Samsung to Pick Up the Slack


Original Submission

Related Stories

Samsung Roadmap Includes "5nm", "4nm" and "3nm" Manufacturing Nodes 10 comments

Samsung has replaced planned "6nm" and "5nm" nodes with a new "5nm" node on its roadmap, and plans to continue scaling down to "3nm", which will use gate-all-around transistors instead of FinFETs (fin field-effect transistors). Extreme ultraviolet lithography (EUV) will be required for everything below "7nm" (TSMC and GlobalFoundries will initially produce "7nm" chips without EUV):

Last year Samsung said that its 7LPP manufacturing technology would be followed by 5LPP and 6LPP in 2019 (risk production). The new roadmap mentions neither process, but introduces 5LPE (5 nm low power early), which promises to "allow greater area scaling and ultra-low power benefits" when compared to 7LPP. It is unclear when Samsung plans to start using 5LPE for commercial products, but since it is set to replace 7LPP, expect the tech to be ready for risk production in 2019.

[...] Samsung will have two 4 nm process technologies instead of one: 4LPE and 4LPP. Both will be based on proven FinFETs, and use of this transistor structure is expected to allow a timely ramp to stable yields. Meanwhile, the manufacturer claims that its 4 nm nodes will enable higher performance and geometry scaling when compared to 5LPE, but is not elaborating beyond that (in fact, even the key differences between the three technologies are unclear). Furthermore, Samsung claims that 4LPE/4LPP will enable easy migration from 5LPE, but is not providing any details.

[...] The most advanced process technologies that Samsung announced this week are 3GAAE/3GAAP (3 nm gate-all-around early/plus). Both will rely on Samsung's own GAAFET implementation, which the company calls MBCFET (multi-bridge-channel FET), but again, Samsung is not elaborating on any details. The only thing it does say is that the MBCFET has been in development since 2002, so it will have taken nearly two decades to get the tech from early concept to production.

MBCFETs are intended to enable Samsung to continue increasing transistor density while reducing power consumption and increasing the performance of its SoCs. Since the 3GAAE/3GAAP technologies are three or four generations away, it is hard to make predictions about their actual benefits. What is safe to say is that 3GAAE will be Samsung's fifth-generation EUV process technology and will therefore make extensive use of EUV tools. As a result, the success of EUV in general will have a clear impact on Samsung's technologies several years down the road.

Previously: Samsung Plans a "4nm" Process

Related: IBM Demonstrates 5nm Chip With Horizontal Gate-All-Around Transistors
"3nm" Test Chip Taped Out by Imec and Cadence
TSMC Details Scaling/Performance Gains Expected From "5nm CLN5" Process


Original Submission

Samsung Plans to Make "5nm" Chips Starting in 2019-2020 5 comments

Samsung is preparing to manufacture 7LPP and 5LPE process ARM chips:

Samsung has said its chip foundry building Arm Cortex-A76-based processors will use 7nm process tech in the second half of the year, with 5nm product expected mid-2019 using the extreme ultraviolet (EUV) lithography process.

The A76 64-bit chips will be able to pass 3GHz in clock speed. Back in May we wrote: "Arm reckoned a 3GHz 7nm A76 single core is up to 35 per cent faster than a 2.8GHz 10nm Cortex-A75, as found in Qualcomm's Snapdragon 845, when running mixed integer and floating-point math benchmarks albeit in a simulator."
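As a rough illustration of that claim, only a small part of the quoted speedup can come from the higher clock. The 35%, 3.0 GHz, and 2.8 GHz figures are from the quote above; the split into clock and per-clock contributions is back-of-envelope arithmetic, not an Arm figure.

```python
# Decompose Arm's quoted "up to 35% faster" claim into a clock-speed
# component and an implied per-clock (IPC) component.

speedup = 1.35               # A76 @ 3.0 GHz vs A75 @ 2.8 GHz (quoted)
clock_ratio = 3.0 / 2.8      # ~1.071x from frequency alone
ipc_uplift = speedup / clock_ratio

print(f"clock contributes ~{(clock_ratio - 1) * 100:.0f}%")
print(f"implied per-clock gain ~{(ipc_uplift - 1) * 100:.0f}%")
```

In other words, roughly 7% comes from frequency and the implied per-clock improvement is on the order of 26%, assuming the benchmark scales linearly with clock.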

[...] Samsung eventually envisages moving to a 3nm Gate-All-Around Early (3GAAE) node on its process technology roadmap. Catch up, Intel, if you can.

Also at AnandTech.

Previously: Samsung Roadmap Includes "5nm", "4nm" and "3nm" Manufacturing Nodes

Related: Samsung's 10nm Chips in Mass Production, "6nm" on the Roadmap (obsolete)
Moore's Law: Not Dead? Intel Says its 10nm Chips Will Beat Samsung's
Samsung Plans a "4nm" Process


Original Submission

GlobalFoundries Abandons "7nm LP" Node, TSMC and Samsung to Pick Up the Slack 15 comments

GlobalFoundries has halted development of its "7nm" low power node, will fire 5% of its staff, and will also halt most development of smaller nodes (such as "5nm" and "3nm"):

GlobalFoundries on Monday announced an important strategy shift. The contract maker of semiconductors decided to cease development of bleeding edge manufacturing technologies and stop all work on its 7LP (7 nm) fabrication processes, which will not be used for any client. Instead, the company will focus on specialized process technologies for clients in emerging high-growth markets. These technologies will initially be based on the company's 14LPP/12LP platform and will include RF, embedded memory, and low power features. Because of the strategy shift, GF will cut 5% of its staff as well as renegotiate its WSA and IP-related deals with AMD and IBM. In a bid to understand more what is going on, we sat down with Gary Patton, CTO of GlobalFoundries.

[...] Along with the cancellation of the 7LP, GlobalFoundries essentially canned all pathfinding and research operations for 5 nm and 3 nm nodes. The company will continue to work with the IBM Research Alliance (in Albany, NY) until the end of this year, but GlobalFoundries is not sure it makes sense to invest in R&D for 'bleeding edge' nodes given that it does not plan to use them any time soon. The manufacturer will continue to cooperate with IMEC, which works on a broader set of technologies that will be useful for GF's upcoming specialized fabrication processes, but obviously it will refocus its priorities there as well (more on GF's future process technologies later in this article).

So, the key takeaway here is that while the 7LP platform was a bit behind TSMC's CLN7FF when it comes to HVM – and GlobalFoundries has never been first to market with leading edge bulk manufacturing technologies anyway – there were no issues with the fabrication process itself. Rather there were deeper economic reasons behind the decision.

GlobalFoundries would have needed to use deep ultraviolet (DUV) instead of extreme ultraviolet (EUV) lithography for its initial "7nm" chips. It would have also required billions of dollars of investment to succeed on the "7nm" node, only to make fewer "7nm" chips than its competitors. The change in plans will require further renegotiation of GlobalFoundries' and AMD's Wafer Supply Agreement (WSA).

Meanwhile, AMD will move most of its business over to TSMC, although it may consider using Samsung.

Samsung Completes Development of "5nm" Node; TSMC Reveals "6nm" Node 8 comments

Samsung Completes Development of 5nm EUV Process Technology

Samsung Foundry this week announced that it has completed development of its first-generation 5 nm fabrication process (previously dubbed 5LPE). The manufacturing technology uses extreme ultraviolet lithography (EUVL) and is set to provide significant performance, power, and area advantages when compared to Samsung's 7 nm process (known as 7LPP). Meanwhile, Samsung stresses that IP developed for 7LPP can be also used for chips to be made using 5LPE.

Samsung's 5 nm technology continues to use FinFET transistors, but with a new standard cell architecture as well as a mix of DUV and EUV step-and-scan systems. When compared to 7LPP, Samsung says that their 5LPE fabrication process will enable chip developers to reduce power consumption by 20% or improve performance by 10%. Furthermore, the company promises an increase in logic area efficiency of up to 25%.

One interesting technology that will eventually be on Samsung's roadmap: "gate-all-around" field effect transistors.

Meanwhile, TSMC has announced a new node, "6nm", which will allow for smaller die sizes than "7nm" with no improvements to performance or power consumption. It is also not better than the TSMC "7nm+" node, which will use extreme ultraviolet lithography:

TSMC this week unveiled its new 6 nm (CLN6FF, N6) manufacturing technology, which is set to deliver a considerably higher transistor density when compared to the company's 7 nm (CLN7FF, N7) fabrication process. An evolution of TSMC's 7nm node, N6 will continue to use the same design rules, making it easier for companies to get started on the new process. The technology will be used for risk production of chips starting Q1 2020.

TSMC states that its N6 fabrication technology offers 18% higher logic density when compared to the company's N7 process (1st-gen 7 nm, DUV-only), yet offers the same performance and power consumption. Furthermore, according to TSMC, N6 'leverages new capabilities in extreme ultraviolet lithography (EUVL)' gained from N7+, but the company does not disclose how exactly it uses EUV in this particular technology. Meanwhile, N6 uses the same design rules as N7 and enables chip developers to re-use the same design ecosystem (e.g., tools), which will let them lower development costs. Essentially, N6 allows designs developed using N7 design rules to shrink in die size by around 15% while re-using familiar IP for additional cost savings.
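The density and die-size figures are self-consistent: 18% higher density implies roughly a 15% shrink for the same design. A quick check (figures from the article, arithmetic ours):

```python
# If N6 packs ~18% more logic into the same area, a fixed design
# should shrink by roughly 1 - 1/1.18, matching TSMC's quoted ~15%.

density_gain = 1.18                      # N6 vs N7 logic density
die_shrink = 1 - 1 / density_gain

print(f"expected die shrink: {die_shrink * 100:.1f}%")
```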

See table in article.

Previously: Samsung Discusses Foundry Plans Down to "3nm"
TSMC's "5nm" (CLN5FF) Process On-Track for High-Volume Manufacturing in 2020


Original Submission

  • (Score: 4, Interesting) by bzipitidoo on Tuesday September 11 2018, @03:35AM (11 children)

    by bzipitidoo (4388) on Tuesday September 11 2018, @03:35AM (#733043) Journal

    We hit a speed plateau in the 3 to 4 GHz range back in the 2000s, and I thought the end of Moore's Law might be nigh. Instead, we moved from 32bit to 64bit CPUs, and exploded the core count from 1 to 4 or more. These days, isn't 2 about the minimum number of cores available in consumer grade CPUs? Next, the ongoing move from HDD to SSD has delivered another huge speed boost.

    This talk of 7nm all the way down to 3nm is huge. Currently, we're at 14nm, aren't we, or is that so last year now? Still, to go from 14nm to 3nm, assuming we're not being treated to marketing hype, is a massive leap forward, and I really wonder how much further things can go. 1nm? 0.5nm? I suspect we'll soon hit a plateau (or bottom) on that, but I'm not counting Moore's Law done even if we do. There's just way too many other areas to improve.

    Like, why not a 128bit or 256bit CPU architecture for one thing? Maybe another new memory bus, DDR6. Or, TriDR1 for Triple Data Rate 1, or more like QDR1, since computers work so much better with base 2 and 4 than with base 3? And maybe PCI Express is getting a bit long in the tooth? How can a freaking _serial_ standard have beaten a parallel one? We may be seeing a "Parallel PCI" (PPCI) bus next decade. Or perhaps the whole mess will be ditched in favor of "Super SoC". Then there's the old trick of implementing common software routines in hardware. Lot more mining to do there, why should only video decoding be implemented in hardware? And of course our software has gotten very bloated. Would be a lot of effort, but there is doubtless an awful lot of fat to trim there.

    • (Score: 2, Funny) by ShadowSystems on Tuesday September 11 2018, @04:19AM

      by ShadowSystems (6185) <{ShadowSystems} {at} {Gmail.com}> on Tuesday September 11 2018, @04:19AM (#733055)

      3NM? Bah. Call me when they start in on sub-Planck levels.
      64bit processors? Bah. Wake me when they start producing 1Terabit ones.
      SSD's with 3Gbps read/write speeds? Meh. Call me when they get it into the Quadrillions of bytes per Attosecond.
      Storeage space measured in mere terabytes? *Dismissive hand wave* We're not even TRYING to get into the Googolplexibyte ranges yet.
      *Shakes a palsied fist*
      Danged whippersnappers an'yer crappy ancient technology.
      Why, forward in MY day we willhavehad TroogolplexiHertz speed (like Googole is 100 zeros only think one *trillion* zeros), OctoZetaExabyte wide architecture CPU's with a quintzillion cores, coupled with so much RAM & storeage space that they had to resort to Disaster Area Math to even express it, and all fitting down in a subatomic sized SOC that could be snorted up yer nose.
      And that's the way we will LIKE it!
      Now get outta' my Primordial Slurpee!
      *Massive cough*

      (Shout from off screen)BERSER!(/UU Admin).
      I think I need my dried frog pills again...
      =-)p

    • (Score: 0) by fakefuck39 on Tuesday September 11 2018, @07:31AM (3 children)

      by fakefuck39 (6620) on Tuesday September 11 2018, @07:31AM (#733072)

      because you're on a tech site, and you have to actually know something so your posts don't sound like you're the retard you are.

      1. the transistors are not the same when you get this small, or you get leakage, and physics gets in the way. that 3nm transistor is not the same design as the 22nm one, nor does it have proportional power and performance gains scaling down lithography size.

      2. 32bit is not slower than 64bit. if you don't realize why register size is not related to speed, you shouldn't just go back to reddit, go to aol lol.

      3. serial is faster than parallel. I'm not sure how you cannot understand that. even from a protocol pov, you don't need to include tags for multiple device management. an electrical wire on a parallel bus does not transmit "multiple currents" - parallel is an abstraction, hence slower

      4. you don't know what a core is. I'm not sure how that's possible if you've ever had a computer, but there it is. as a hint, it's not even called a core in AIX. we've had systems with multiple processors forever. that does not speed up things - it only lets you run more things.

      5. video decoding is not the only thing implemented in hardware. CPUs of many brands have had generic vector execution engines, used for all kinds of stuff in software, for many decades.

      what the hell are you even talking about anywise? you're clearly in the middle of your msoffice class for adults, at the community college. pay attention - you really do need to learn something related to computers.

      • (Score: 0) by Anonymous Coward on Tuesday September 11 2018, @03:22PM (1 child)

        by Anonymous Coward on Tuesday September 11 2018, @03:22PM (#733172)

        Desperate to sound smart much lol? This must be you: https://news.ycombinator.com/item?id=17958648 [ycombinator.com]. Go back to HN and dropping deuces or whatever it is you are supposedly good at.

        • (Score: 0) by fakefuck39 on Thursday September 13 2018, @01:18AM

          by fakefuck39 (6620) on Thursday September 13 2018, @01:18AM (#733944)

          well, there's being smart, which being in the industry for 20 years helps with. then there's being a retard. he sees some guy's comment that shits on people. he checks the post history and sees that this guy only comes to soylent to shit on people. then he comes to the conclusion that this guy must have an account on the heavily-moderated NH where such comments are not allowed. let me ask you retard - did you make the original comment I replied to? both show the same level of "smart"

      • (Score: 0) by Anonymous Coward on Tuesday September 11 2018, @03:43PM

        by Anonymous Coward on Tuesday September 11 2018, @03:43PM (#733178)

        ^^ Pointless bile from a Soylent regular.
        If there are a lot of people like you, no wonder so many don't have friends...

    • (Score: 2) by takyon on Tuesday September 11 2018, @09:54AM (2 children)

      by takyon (881) <{takyon} {at} {soylentnews.org}> on Tuesday September 11 2018, @09:54AM (#733084) Journal

      Filling in beyond what fakefuck said...

      Feature sizes aren't accurately described by the number they brand the process with. "14nm", "7nm", "3nm", etc. are marketing lies, although parts are getting somewhat smaller, faster, and more power efficient. There is talk about nodes smaller than "3nm" [semiengineering.com], and "0.5nm" is probably as low as they can claim without being laughed out of the building. After that, we probably have to move towards stacked layers of transistors, which requires solving heat dissipation issues. Or something much more hypothetical like femtotechnology [hplusmagazine.com]. More likely, more effort will be focused on neuromorphic and quantum computers, which won't work the same as a classical computer but could deliver great results in some areas.
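To put that "marketing" point in numbers: if node names were literal linear feature sizes, going from "14nm" to "3nm" would imply a density gain far beyond anything the foundries actually claim. Illustrative arithmetic only, not any foundry's figure:

```python
# If "14nm" -> "3nm" were literal linear shrinks, transistor density
# would scale with the square of the ratio of the names.

name_ratio = 14 / 3
ideal_density_gain = name_ratio ** 2     # ~21.8x if names were literal

print(f"implied density gain: {ideal_density_gain:.1f}x")
```

Actual generation-to-generation density gains are far smaller, which is why the names are best read as marketing labels.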

      HDD to SSD transition is a great speedup, but has nothing to do with Moore's law.

      A lot of chips can get up to 5 GHz [wccftech.com], usually only with "turbo" features. Which is fine since a short burst of performance is what most home users need.

      If AMD doubles the core count of "7nm" Zen 2 chips, desktop Ryzen could have 8 cores minimum.

      DDR5 [wikipedia.org] comes first. There may not be a need for DDR6, and I would rather see something like HBM4, stacked close to the processor (hard to make that user upgradeable?).

      --
      [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
      • (Score: 1) by exaeta on Tuesday September 11 2018, @09:38PM (1 child)

        by exaeta (6957) on Tuesday September 11 2018, @09:38PM (#733333) Homepage Journal

        Or you know, instead of making them smaller, the focus could shift to materials. So we'll get new material semiconductors every so often that can switch faster. We aren't going to make 1/2 atom wide transistors, but there's no reason we can't have a 5THz clock rate if we find something better than silicon.

        --
        The Government is a Bird
    • (Score: 2) by Freeman on Tuesday September 11 2018, @03:58PM

      by Freeman (732) on Tuesday September 11 2018, @03:58PM (#733182) Journal

      There's still plenty of single-core devices running around. Especially in the low end consumer market.

      --
      Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
    • (Score: 2) by LoRdTAW on Tuesday September 11 2018, @04:23PM (1 child)

      by LoRdTAW (3755) on Tuesday September 11 2018, @04:23PM (#733185) Journal

      Like, why not a 128bit or 256bit CPU architecture for one thing?

      more bits != a faster processor, unless you work with a lot of really big numbers. Besides, that's what AVX instructions are for.

      Maybe another new memory bus, DDR6. Or, TriDR1 for Triple Data Rate 1, or more like QDR1, since computers work so much better with base 2 and 4 than with base 3?

      Maybe. But for now the problem is stuffing too many CPU's on one die with a comparatively dinky memory controller. We only get away with it at the desktop level because most people don't crunch through huge sets of data in memory across multiple cpu's. The larger caches on the multi "core" monsters hide this shortcoming for most desktop loads. But some loads have demonstrated the weakness of the monster multicore, like the bandwidth issue of the 32-core AMD Threadripper. It's a balancing act that the user has to consider. Someone building an HPC might use more physical processor sockets with lower core counts to mitigate memory bandwidth issues.

      And maybe PCI Express is getting a bit long in the tooth? How can a freaking _serial_ standard have beaten a parallel one? We may be seeing a "Parallel PCI" (PPCI) bus next decade.

      No need for parallel PCI, as PCIe links may consist of multiple lanes, up to x32. Video cards use 16 lanes to make one x16 link. So there's your parallel PCI. The serial protocol allows for this flexibility, where parallel PCI is stuck at a fixed bit width. The advantages are numerous, the big ones being pin count reduction and a severely reduced trace count, which makes trace management at the PCB level easier when equal trace lengths must be maintained for phasing. It can also lower board costs by reducing the number of layers needed. Win-win.
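To put numbers on that lane-bundling point: the per-lane figures below are the commonly cited approximate usable bandwidths after encoding overhead, and should be treated as ballpark assumptions rather than spec-exact values.

```python
# Aggregate bandwidth of a PCIe link is per-lane bandwidth times the
# number of bundled serial lanes (approximate GB/s, one direction).

per_lane_gbps = {
    "PCIe 3.0": 0.985,   # ~8 GT/s with 128b/130b encoding
    "PCIe 4.0": 1.969,   # ~16 GT/s with 128b/130b encoding
}

def link_bandwidth(gen, lanes):
    """Aggregate one-direction bandwidth of an xN link in GB/s."""
    return per_lane_gbps[gen] * lanes

# A GPU's x16 link is literally 16 serial lanes run in parallel:
print(f"PCIe 3.0 x16: {link_bandwidth('PCIe 3.0', 16):.2f} GB/s")
```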

      Or perhaps the whole mess will be ditched in favor of "Super SoC".

      We're there already. AMD has a nice set of SoC's for embedded use, the G series (I have a PCengines APU2). Intel also has SoC's for embedded, and I believe mobile/industrial use as well. The desktop still demands a bit of flexibility and performance, so the pin count on those packages is needed for lots of PCIe lanes (NVMe uses up to 4 lanes per "disk"), processor links, and two or more 64-bit memory channels. I'm sure we will see low/mid range desktop and mobile SoC's in the future as the desktop shrinks down to smaller sizes.

      Then there's the old trick of implementing common software routines in hardware. Lot more mining to do there, why should only video decoding be implemented in hardware?

      The APU and heterogeneous computing are going to come into play here eventually. And while hardware video decoders may appear redundant when paired with beefy multicore CPU's, they actually level the playing field by removing the CPU variable from the equation. With a hardware decoder, it doesn't matter if your CPU is a 1GHz single core or a 4GHz 16-core; it's going to play back properly.

      And of course our software has gotten very bloated. Would be a lot of effort, but there is doubtless an awful lot of fat to trim there.

      Personally I hold these two projects as pinnacles of software simplicity: OpenBSD and Plan 9.

      • (Score: 2) by bzipitidoo on Tuesday September 11 2018, @08:17PM

        by bzipitidoo (4388) on Tuesday September 11 2018, @08:17PM (#733287) Journal

        Really it comes down to more parallelism. Which combination of a wider (128bit or more) CPU, more cores, more AVX sorts of instructions, wider buses, offloading common algorithms to dedicated hardware, and other techniques will make for the fastest computer is the question.

        The parallelism in a 1980s era home computer was practically none. For instance, the 6502 CPU in the Apple II had to do everything, and I do mean everything, even low level stuff such as pulsing the floppy drive arm stepper motor at the correct intervals and in the correct order to move the arm in or out, reading the individual bits on the floppy disk, and generating sound by clicking the speaker hundreds of times per second. It didn't take long for falling hardware costs and the desire for more speed to make it viable to move everything they could to what were basically built in embedded computer systems designed to run whatever piece of hardware was being offloaded.

  • (Score: 0) by Anonymous Coward on Tuesday September 11 2018, @01:57PM (1 child)

    by Anonymous Coward on Tuesday September 11 2018, @01:57PM (#733143)

    3nm? it's probably the end.
    in the past on the software side, a new computer would be bogged down with bloatware and "updates" until it was as fast or slow as the old one and then it was time to buy a new one, probably rolling the dung heap problem onto the next generation of smaller and faster chips.

    i fear that chip design has been aided by software and computers and that 90% of improvements enjoyed just come from smallyfying the chips and adding more transistors without thought.

    so in the same way that nothing is assemblered on the software side anymore but rather everythin is bot.net errr.. dot.net in the same manner making traces in chips smaller without thought might leave us with a computer thats super easy to program and a marvel of engineering at 3nm but doesnt run or do anything more meaningful then 10 years ago ... thus not allowing for anything truely new and innovative that REALLY needs all that processing power?

    • (Score: 2) by takyon on Tuesday September 11 2018, @02:07PM

      by takyon (881) <{takyon} {at} {soylentnews.org}> on Tuesday September 11 2018, @02:07PM (#733148) Journal

      doesnt run or do anything more meaningful then 10 years ago

      Just your illusion. GPUs and supercomputers have gotten much faster. CPUs are faster, even before considering multiple cores. Obviously you are running the wrong bloated software or OS, or don't have any use case for the increased performance.

      --
      [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]