posted by Fnord666 on Sunday February 25 2018, @06:08AM
from the bringing-processors-to-light dept.

AnandTech's Ian Cutress interviewed Dr. Gary Patton, CTO of GlobalFoundries. A number of topics were discussed, including the eventual use of ASML's extreme ultraviolet lithography (EUV) for the "7nm" node:

Q13: With EUV still in the process of being brought up, and the way it is with counting masks and pellicle support coming through, is there ever a mentality of 7nm not getting EUV, and that 7nm could end up a purely optical transition? Do you fully expect EUV to come in at 7nm?

GP: I absolutely believe that EUV is here. It's coming, I absolutely believe it so. As you've seen with the machines we are installing in the clean room, we have placed a big bet on it. As Tom (Thomas Caulfield) was saying, it's a pretty high scale investment. I think if you look at the tool itself, for example, ASML has demonstrated 250W with it. This is pretty repeatable, so I think that it looks in good shape. There are some challenges with the collector availability. They are getting close, I think around 75% availability now is pretty solid, but they have to get to 85%, and they are cranking these tools out. Even with this as a work in progress, there are going to be a lot of tools out on the field, and that is going to also help with improving the performance and control of the tools. The tools we have here are the ultimate tools, the ultimate manufacturing versions.

The lithographic resist is a little bit of a challenge, but we are still trying to optimize that. I don't see that as a show stopper, as we are managing throughout bring up. I think the real challenge is the masks, and I feel very good about the pellicle process. They have made a lot of progress, and they have shown it can handle 250W. The biggest issue has been that you lose a bit of power - so you've done all this work to get to 250W, and then you just lost 20% of that. So it has to go up another 10%, so it's closer to 90%, in terms of a loss to be viable. For contacts and vias, we can run without pellicles. We have the right inspection infrastructure to manage that, and then bring the pellicles in when they are ready.

[...] Q17: Does the first generation of 7LP target higher frequency clocks than 14LPP?

GP: Definitely. It is a big performance boost - we quoted around 40%. I don't know how that exactly will translate into frequency, but I would guess that it should be able to get up in the 5GHz range, I would expect.


Original Submission

 
  • (Score: 4, Insightful) by bzipitidoo on Sunday February 25 2018, @03:15PM (5 children)

    by bzipitidoo (4388) on Sunday February 25 2018, @03:15PM (#643451) Journal

    Used to be the clock speeds were the numbers everyone worshipped. Then in the early 2000s, increases in clock speeds stalled. Couldn't push past 4 GHz, and that's where we're still at today.

    Now it's transistor size. And that doesn't give a performance gain so much as a cut in power usage so that desktop computers don't need to occupy a box the size of a toilet tank and run cooling fans so fast they sound like a jet engine. Meanwhile, the real performance gains are "CISCy" things like the SSE4 instructions, multiple cores, and the almighty GPU.

    Maybe soon, software performance and size will get more attention. For instance, replacing the X server with Wayland. Our software has collected a ton of cruft over the years, as we hastily slap programs together, throwing in every library function under the sun. Whenever I see the size that apt reports for a new package, I wonder how such simple functionality can require dozens of megabytes' worth of binaries. Okay, sure, 64-bit code is bigger than 32-bit code, by something like 20%. The ability to handle UTF-8 rather than just ASCII, for applications that deal with text, is another space burner. Standards such as HTML/XML and PDF were not designed with space efficiency in mind, and then there are a lot of utilities that bloat those inefficient formats up by as much as a factor of 3 in their processing. PDF is especially notorious for bloat.

    For instance, I took a close look at a PDF that originated from a scan of a printed document. They could have just made the document into a collection of images, one per page, using the PDF format as a mere container, but they used some tools to OCR the text, so that if you try to cut and paste words from the PDF document, that actually works. Nice to have that. But OCR is not very good, and no one bothered to go over the document manually. Too much time and effort to have a human do that. So they kept the images in the PDF, positioning each image directly over the text, so that human readers would see the scan instead of whatever mistakes were in the OCR job.

    Also, the tools that they used to do all this did not generate efficient PDF lingo. I looked into that too. The PDF language is stack based: a simple stream of parameters and function calls. They seem to like to do it one word at a time, so that the spacing between the letters can be different for each word. This is typical of what's in a PDF, for the text parts anyway: "14.40 0 TD 1 1 1 rg 0.58 Tc 0 Tw (languages ) Tj". That says to draw the text "languages" at a position 14.4 units to the right and 0 units below the previous item, with the fill colour set to white (the "1 1 1 rg" command), a spacing of 0.58 units between each letter (the Tc command), and a spacing of 0 between each word (the Tw command). A good PDF optimizer strips out the "0 Tw", because 0 is the default value and saying it is merely redundant, and removes the trailing 0 from 14.40. "0 Tw" is also redundant on another level: since the document is printed one word at a time, the position of the next word will be set by the TD command preceding it, so it doesn't even matter what value Tw was given. That's a good example of the unnecessary bloat that PDF tools tend to pack into a PDF.

    But the PDF format itself forces a great deal of bloat to be in there. Especially ugly is "1 1 1 rg" over and over, once for each word, but that has to be there. They blow this off by mumbling that data compression smashes all that repetitive crap down to nearly nothing, so it doesn't matter. But it does. To be viewed, it has to be uncompressed at some point, and all that redundancy takes more memory and time for even the fastest computers to process. Anyway, the result of all that is a 680k pdf (900k uncompressed) for a lousy 10-page document, because it had to have both the images of the scans and the OCR results. The pdf optimizer I tried reduced that to 300k (470k uncompressed), so the tools used to generate and process the pdf nearly doubled its size with all the bloat they packed in. But if the OCR were near perfect, so good that it isn't necessary to keep the images, that pdf could have been much, much smaller, maybe 50k, and it still would be bloated. The plain text conversion was only 31k, and 11k when compressed with gzip.
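    Just to make that clean-up concrete, here is a minimal Python sketch of the sort of rewriting an optimizer does. It only operates on an already-decompressed content-stream fragment like the one quoted above, not on a whole pdf file, and it is an illustration rather than the actual tool I used:

        import re

        # One text-showing operation from a decompressed PDF content stream.
        stream = "14.40 0 TD 1 1 1 rg 0.58 Tc 0 Tw (languages ) Tj"

        # Drop "0 Tw": zero word spacing is the default, so writing it out is redundant.
        stream = re.sub(r"\b0 Tw\s*", "", stream)

        # Trim trailing zeros from decimal numbers, e.g. "14.40" becomes "14.4".
        stream = re.sub(r"(\d+\.\d*?)0+\b", r"\1", stream)

        print(stream)  # 14.4 0 TD 1 1 1 rg 0.58 Tc (languages ) Tj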

    The pdf example was an especially bad case, turning 31k of text into 900k of pdf that the compression they are counting on to save them from their own wastefulness can only reduce to 680k, and even the optimizer couldn't do better than 300k. I've also observed Microsoft's mangling of HTML to add all this Microsoft specific garbage. Can't be simple tags, which are bad enough, no, every one has got to have an MS font and ID and so on in the attributes, and be wrapped in a SPAN and/or a DIV tag as well. Our software, formats, and methods are brimming over with such waste, and we count too much on increasing computer speed and capacity to render that wastefulness unimportant.

  • (Score: 2) by Aiwendil on Sunday February 25 2018, @04:22PM (1 child)

    by Aiwendil (531) on Sunday February 25 2018, @04:22PM (#643474) Journal

    PDF also suffers from the fact that you need to understand the tools you use. At work, lots of stuff ships as PDF (and many items are often pulled to mobile devices in the field); one of our worst examples was a single 50 MB PDF (try opening that on a 2010-era smartphone). After a while they decided to split it, but even so that only changed the problem from 1×50 MB to 5×10 MB. After some tweaking we managed to get it down to about 5×1.3 MB, though that is with images embedded. But yes, it took quite a hefty collaboration, getting every tool in the chain set just so, to get it down in size and get everything to cooperate.

    Another fun example is the PDF exporter in another program: a filled circle about 5 mm across takes about 5 KB each, while drawing it as a few thick-lined circles instead gets it down to, well, sub-kilobyte. A single page for the projects we use that exporter for tends to have 40 to about 400 dots on it.

    But yeah, the format allows for quite a few nasty pitfalls, and people are too lazy to try variations and spot patterns as needed.

    • (Score: 2) by frojack on Sunday February 25 2018, @08:17PM

      by frojack (1554) on Sunday February 25 2018, @08:17PM (#643555) Journal

      If you do nothing else but ask your exporter tool to compress images and write to PDF, then open that PDF with an actual PDF tool and print it to a new PDF file, you can get some phenomenal size reduction.

      Sometimes just printing to PDF format is all you need, no exporting. Yes, you may lose some of the functionality of PDFs in the process, but none that you care about in most cases.
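      As a rough sketch of that "open it and re-print it" step (assuming Ghostscript is installed; the file names here are just placeholders), something like this rewrites a PDF with its images downsampled and recompressed:

          import subprocess

          # Run the PDF back through Ghostscript's pdfwrite device; the /ebook
          # preset recompresses and downsamples embedded images to roughly 150 dpi.
          subprocess.run([
              "gs", "-sDEVICE=pdfwrite",
              "-dPDFSETTINGS=/ebook",
              "-dNOPAUSE", "-dBATCH", "-dQUIET",
              "-sOutputFile=smaller.pdf",
              "original.pdf",
          ], check=True)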

      --
      No, you are mistaken. I've always had this sig.
  • (Score: 3, Informative) by takyon on Sunday February 25 2018, @08:13PM (2 children)

    by takyon (881) <takyonNO@SPAMsoylentnews.org> on Sunday February 25 2018, @08:13PM (#643552) Journal

    1. We did push past 4 GHz. The Ryzen line of processors goes up to 4.2 GHz. Intel Core i9 [wikipedia.org] does even better (these are "turbo" speeds of course). Zen+ [wccftech.com] is likely to improve clock speeds further.

    2. "Now it's transistor size." And then you list some good things that happen because of the relative (let's not venture into the weeds of marketing lies) decrease in transistor sizes. It is a good thing that computers are running cooler and quieter, and fitting into smaller form factors. The omnipresence of multicore systems forces developers to at least try to parallelize what they can, and now we have systems with 8-18 cores readily available to consumers, which means that games should be exploiting additional cores past 4 or even 8 (PS4/XB1 have up to 7 usable). And GPUs legitimately get better every generation since they have always been doing a parallel task, and they can also be used for stuff like machine learning.

    3. 31k/470k of text into 900k of pdf isn't a good example. You could, of course, optimize it or produce optimized PDFs yourself, but the ship has mostly sailed. Storage capacities are big enough now that nobody really pays much attention to another megabyte-sized file unless they are dealing with millions or billions of them. And if we suddenly stopped improving single-threaded CPU performance even by the middling single-digit percentages that some generations have delivered, that wouldn't limit our ability to open the file.

    There are still performance gains to be realized from scaling down, and there seems to be a clear path from "10-14nm" to about "3-5nm". And maybe lower. And there could be incredible gains beyond the horizon if we ever figure out how to make a stacked CPU with thousands of full cores, or switch to a new material that lets clock speeds go even higher. Instead of optimizing to decrease a 1 MB PDF to 300 KB, we will see more stuff like this [soylentnews.org].

    Our software, formats, and methods are brimming over with such waste, and we count too much on increasing computer speed and capacity to render that wastefulness unimportant.

    We've already paved over the waste in many cases. The most wasteful PDF is loadable for most (maybe not your e-reader or 1.5 W TDP device). Newer video encoding standards increase computational complexity, but then off-load it with fixed function hardware decoders in the CPU/GPU. A sudden halt in performance scaling won't affect the PDF much. Maybe VR or supercomputer users will do the most optimization.

    --
    [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
    • (Score: 2) by frojack on Sunday February 25 2018, @08:31PM (1 child)

      by frojack (1554) on Sunday February 25 2018, @08:31PM (#643562) Journal

      The omnipresence of multicore systems forces developers to at least try to parallelize what they can,

      What they can parallelize isn't that obvious to the programmer working in a high-level language. There just aren't that many tools to do this, nor much functionality for it in the commonly available compilers.

      Calling a subroutine to spin off a long-running report usually means your system waits for that task to finish. Launching a separate job that the OS can run asynchronously might actually use a bit more memory (not as much as you may think), but your main application moves on, and the report gets there when it gets there, which is usually faster.
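      In Python terms, the difference is roughly this (the report script name is made up for illustration):

          import subprocess

          # Blocking: the caller sits and waits until the report is finished.
          subprocess.run(["python3", "generate_report.py", "--out", "monthly.pdf"])

          # Asynchronous: hand the job to the OS and keep going; the report file
          # shows up whenever the child process finishes.
          job = subprocess.Popen(["python3", "generate_report.py", "--out", "monthly.pdf"])
          # ... the main application carries on here; job.poll() is None while
          # the report is still running.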

      But the tendency of software developers to put useless progress bars (Just Like on TV) into everything just because they can wastes cores, slows down everything else using the display stack, and often ends up single-pathing the whole application.

      Most compiler platforms (regardless of language) aren't geared for this kind of development. And most developers aren't either.

      --
      No, you are mistaken. I've always had this sig.
      • (Score: 2) by takyon on Sunday February 25 2018, @08:53PM

        by takyon (881) <takyonNO@SPAMsoylentnews.org> on Sunday February 25 2018, @08:53PM (#643570) Journal

        If the developers and compilers won't do it, maybe we'll need machine learning compilers (yes, "AI") to do it, because increased core counts will probably be the biggest source of performance gains in the future, and users can have an 8+ core system today.

        Games (especially ones with a lot going on like Skyrim) can be parallelized to an extent. Video applications like Premiere Pro, Handbrake, etc. can take the cores.

        If a company won't hire a developer who can optimize for multiple cores, maybe the software doesn't really need to be optimized very much. Or someone else will come out with software that does it faster.

        --
        [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]