posted by Fnord666 on Wednesday July 29 2020, @03:46AM
from the golden-parachutes-are-not-for-engineers dept.

ZDNet

Intel is revamping its technology leadership in a bid to turn around its manufacturing unit after announcing delays in its 7nm processes.

Last week, Intel said in its second-quarter earnings report that its 7nm products would be delayed. Rival AMD is already shipping 7nm parts, manufactured by TSMC. Since Intel's earnings report and market-cap hit, analysts have been speculating that the chip giant may exit manufacturing.

In other words, Intel needed to revamp its technology organization. Under Monday's reorg, Dr. Ann Kelleher will lead technology development. She had led Intel manufacturing. Kelleher will focus on developing 7nm and 5nm processes. Murthy Renduchintala, Intel's chief engineering officer, will depart Aug. 3.

Intel is also separating its Technology, Systems Architecture and Client Group unit into teams focused on technology development; manufacturing and operations; design engineering; architecture, software and graphics; and supply chain.

Safe to say Intel will be best positioned to fire three executives at the next slippage; I guess that may make the stock rebound faster than firing a single one.


Original Submission

This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 4, Insightful) by bzipitidoo on Wednesday July 29 2020, @05:52PM (3 children)

    by bzipitidoo (4388) on Wednesday July 29 2020, @05:52PM (#1028196) Journal

    I find the upgrade treadmill irresistible. I can hold off for a while, but eventually the difference in performance and capability becomes so gross that the only sensible thing to do is abandon the old computer. Any kind of breakdown, like an HDD going bad, is a whole bag of nails in the coffin. I'm still using a few decade-old machines, but they are clearly obsolete. No SSE4 instructions. 45nm and 65nm dies. They consume so much more power than a low-power but still more performant 14nm part that the difference in the cost of electricity to run those boxes for a year is a significant portion of the cost of a new computer. It might be a difference of 500 kWh, and at 10 cents per kWh, that's $50 in electricity costs. And that's not factoring in the performance difference.
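    The electricity math above can be sketched as a quick back-of-the-envelope calculation; the 500 kWh gap and $0.10/kWh rate are the commenter's own figures, not measured values:

    ```python
    # Back-of-the-envelope: annual cost of the power-draw gap between
    # an old 45/65nm box and a newer low-power 14nm machine.
    # Figures are the commenter's: ~500 kWh/year extra at $0.10/kWh.
    def annual_power_cost(extra_kwh_per_year: float, price_per_kwh: float) -> float:
        """Extra electricity cost per year for keeping the older machine."""
        return extra_kwh_per_year * price_per_kwh

    cost = annual_power_cost(500, 0.10)
    print(f"${cost:.2f} per year")  # → $50.00 per year
    ```

    Plug in your local rate and the wattage gap under typical load to see where the break-even point falls for your own hardware.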

    There are a ton of enhancements that have been added over the years. Support for virtualization, with the VT-x and AMD-V stuff, has improved, and there's room for still more improvement there. I have completely abandoned 32-bit x86 machines. Somewhere in there, MP4 decode in hardware crept in. We may expect AV1 codecs, and even AV2, in hardware eventually. Core counts are really climbing, and now a 2-core chip is low end. Integrated graphics are pretty common now, and their performance doesn't completely suck like the early Intel 845G crap from circa 2000. Another addition there is OpenCL and Vulkan support. Then there's fixes for the massive class of Spectre vulnerabilities. Old though my old machines are, they aren't so old as to predate Spectre. I wonder: is it really such a performance penalty to completely fix Spectre?
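    Features like SSE4, VT-x, and AMD-V show up as flags in Linux's /proc/cpuinfo, so checking what a given box supports is a one-liner's worth of parsing. A minimal sketch (the flag names "sse4_1", "vmx" for VT-x, and "svm" for AMD-V are the kernel's; the file only exists on Linux):

    ```python
    # Sketch: check which ISA/virtualization features a CPU advertises,
    # by parsing /proc/cpuinfo-style text. Pure function so it can be
    # exercised with sample text on any OS.
    def parse_cpu_flags(cpuinfo_text: str) -> set:
        """Return the set of feature flags from the first 'flags' line."""
        for line in cpuinfo_text.splitlines():
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
        return set()

    def has_features(flags: set, wanted: list) -> dict:
        """Map each wanted feature name to whether the CPU reports it."""
        return {f: f in flags for f in wanted}

    sample = "processor : 0\nflags : fpu sse sse2 sse4_1 sse4_2 vmx aes\n"
    print(has_features(parse_cpu_flags(sample), ["sse4_1", "vmx", "svm"]))
    # → {'sse4_1': True, 'vmx': True, 'svm': False}
    ```

    On a real Linux machine you would feed it `open("/proc/cpuinfo").read()` instead of the sample string.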

    In short, as long as CPU makers can keep adding massive enhancements, and as long as Moore's Law holds, they'll have a market. There are just too many things left to do. I don't see it ending anytime in the near future. I guess it could easily continue for another 50 years.

  • (Score: 0) by Anonymous Coward on Wednesday July 29 2020, @08:16PM (2 children)

    by Anonymous Coward on Wednesday July 29 2020, @08:16PM (#1028274)

    Either requires signed firmware or has built-in signed backdoors.

    Most commercially available ARM devices since the mid-1990s have been the same thing (BREW plus a proprietary OS, then TrustZone plus stage-1 signing).

    Even GPUs (since Maxwell v2 on the NVIDIA side, and between R800 and Vega on the AMD side, depending on what restrictions you accept) are now signed out the ass.

    I run older hardware because that was the last time hardware was trustworthy. Most of it still runs the latest software, and a lot of what doesn't run natively can run it under a binary recompiler like Intel's instruction emulator or, in some cases, qemu. Very few of the instructions added after SSE2 are 'mandatory', although some of them will solve latency issues at different points in a codebase. Most software didn't even need that while Windows 7 was still supported, though I imagine we may see that change now that Windows 10 is the minimum supported platform. Linux, on the other hand, can still support any x86_64 CPU with PCIe paired with a modern GPU, which limits the need to upgrade if you have at least 8GB of RAM and a modern GPU running at x8-x16 PCIe 1.1 lanes.

    Speaking from personal experience, very few games actually REQUIRE the full PCIe bandwidth we have available today, even if you wanted to run them at 120Hz. The later OpenGL and Vulkan standards helped eliminate a lot of the bus bandwidth that earlier generations of hardware demanded while setting up all geometry on the CPU and dumping it to the GPU every frame.