
posted by cmn32480 on Wednesday May 23 2018, @06:47PM
from the your-computer-is-not-a-fast-PDP-11 dept.

A very interesting article in ACM Queue by David Chisnall.

In the wake of the recent Meltdown and Spectre vulnerabilities, it's worth spending some time looking at root causes. Both of these vulnerabilities involved processors speculatively executing instructions past some kind of access check and allowing the attacker to observe the results via a side channel. The features that led to these vulnerabilities, along with several others, were added to let C programmers continue to believe they were programming in a low-level language, when this hasn't been the case for decades.
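
For the curious, the canonical bounds-check-bypass gadget from the Spectre paper (Kocher et al.) shows what "speculatively executing instructions past some kind of access check" looks like in C. The array names follow the paper's example; the 4096-byte stride is just there to put each possible secret byte value on its own page so the cache side channel can tell them apart:

    #include <stddef.h>
    #include <stdint.h>

    uint8_t array1[16];
    size_t  array1_size = 16;
    uint8_t array2[256 * 4096];

    /* Spectre variant 1: train the branch predictor with in-bounds
     * values of x, then pass an out-of-bounds x. The CPU speculatively
     * executes the body before the bounds check resolves; the secret
     * byte array1[x] selects which line of array2 is pulled into the
     * cache, and the attacker recovers it afterwards by timing reads
     * of array2. */
    void victim_function(size_t x) {
        if (x < array1_size) {
            volatile uint8_t temp = array2[array1[x] * 4096];
            (void)temp;
        }
    }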


Original Submission

 
  • (Score: 5, Insightful) by jmorris on Wednesday May 23 2018, @09:28PM (5 children)

    by jmorris (4844) on Wednesday May 23 2018, @09:28PM (#683287)

    The article repeats a classic mistake. We have been here before: let's make the CPU expose really low-level details, and since the compiler and language know what the program is actually trying to do, they can generate better code to use all those raw CPU bits. That thinking led to the wreck known as the Itanic.

    It failed because they failed to realize that the strength of C, x86, POSIX, and Win32 is the binding contract across time that each provides. Yes, you can build a highly optimized CPU core and expose all its compromises and optimizations to run really fast on $current_year's silicon. Add on the shiniest research-OS ideas. And if you are making a PlayStation, you might sell enough units that developers and tool makers will invest the effort to extract that potential for some games with the shelf life of produce. If you are truly fortunate, they will extract that maximum performance before the hardware is obsolete. Then ten years go by, the silicon world has changed entirely, your architecture is hopelessly obsolete, legacy code won't build well (if at all) on the new hardware, and you are basically left with emulation. Nobody is likely to port mainstream software to such a platform. Ask Intel and HP: they bet big on Itanium, built it, and nobody came.

    The one real problem the article exposes is cache transparency. That needs fixing. Put a few GB of HBM on the CPU package, scale back the cache, and then let the OS explicitly handle the NUMA issues if there is off-chip RAM; a sketch of that kind of explicit placement follows below. Explicitly managing the cache at the application level is simply asking for a trainwreck, because all of that tech changes over time.
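
    A minimal sketch of what OS-assisted explicit placement looks like today on Linux with libnuma (the two-node layout and the choice of node 0 as the fast on-package memory are assumptions for illustration; link with -lnuma):

        #include <numa.h>   /* libnuma */
        #include <stdio.h>

        int main(void) {
            if (numa_available() < 0) {
                fprintf(stderr, "no NUMA support on this machine\n");
                return 1;
            }

            size_t len = 64 * 1024 * 1024;

            /* Ask the kernel for memory physically backed by node 0
             * (the hypothetical on-package HBM) instead of letting the
             * default first-touch policy decide where it lands. */
            void *fast = numa_alloc_onnode(len, 0);
            if (fast == NULL)
                return 1;

            /* ... keep the hot data structures in 'fast' ... */

            numa_free(fast, len);
            return 0;
        }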

    The other problem is bloat. CPUs have to cheat outrageously to keep up with the increasing inability of programmers to write efficient programs in any language. Netscape Navigator used to run well in 8MB; now Firefox can use up 8GB and wants more. Does it do a thousand times as much? It does not. Full "Office" suites with TrueType, embedded graphics, DDE/OLE and such ran on machines with that same 8MB. Modern ones do some more things, but again, do they really do hundreds of times as much? They certainly consume hundreds of times the memory, which drives the ever-increasing demand for faster chips and the cutting of corners.

  • (Score: 1, Insightful) by Anonymous Coward on Wednesday May 23 2018, @10:24PM

    by Anonymous Coward on Wednesday May 23 2018, @10:24PM (#683303)

    > Does it do a thousand times as much? It does not.

    It's arguable whether modern browsers do more than NN (they certainly support more), but modern web pages and apps certainly do more.

    ONLYOFFICE, Google Docs, and Microsoft Word all let you edit Word documents right in the browser; in ONLYOFFICE's case the implementation is basically pure JavaScript. So I'm not surprised at modern browsers' memory footprints. They weren't designed to be engines for full-blown applications, but here we are.

  • (Score: 2) by meustrus on Wednesday May 23 2018, @10:42PM (1 child)

    by meustrus (4961) on Wednesday May 23 2018, @10:42PM (#683310)

    CPUs have to cheat so outrageously to keep up with the increasing inability of programmers to write efficient programs in any language.

    Hardware vs. software performance has always been a bit of a chicken-and-egg problem. You can't just say that the CPUs get better at giving performance to the lazy, because a lot of software was built based on that level of performance.

    You can idolize the programmers of yore if you want to, but the fact is that they wrote more efficient code because they had to. No programmer starts out building everything right. We all start by making something work, and only after it doesn't work fast enough do we ever go back and try to make it faster. The same goes for memory efficiency, avoiding I/O latency, maintainability, and any other metrics you can come up with for what makes "good" code.

    It's the same with SSDs. The performance boost from replacing a spinning platter with an SSD has grown over time, because all software is now developed on machines that have them. The programmer never experiences the high latency of spinning-disk I/O, so lots of software ships with synchronous file system access (see the sketch below).
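
    To make that concrete, here is a minimal sketch of the two styles using standard POSIX calls (error handling omitted, the file descriptor is assumed to be open already, and on older glibc you link with -lrt for the AIO half):

        #include <aio.h>
        #include <errno.h>
        #include <string.h>
        #include <unistd.h>

        static char buf[4096];

        void blocking_style(int fd) {
            /* Synchronous: on a spinning disk this call can stall the
             * thread for milliseconds; on the developer's SSD it returns
             * almost instantly, which is why the cost goes unnoticed. */
            read(fd, buf, sizeof buf);
        }

        void async_style(int fd) {
            /* Asynchronous (POSIX AIO): queue the read, keep working,
             * and collect the result later. */
            struct aiocb cb;
            memset(&cb, 0, sizeof cb);
            cb.aio_fildes = fd;
            cb.aio_buf    = buf;
            cb.aio_nbytes = sizeof buf;
            aio_read(&cb);

            while (aio_error(&cb) == EINPROGRESS) {
                /* ... overlap useful work here ... */
            }
            aio_return(&cb);  /* bytes read, or -1 on error */
        }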

    It's a self-perpetuating cycle. And it just happens to benefit the hardware manufacturer, who gets to keep selling new chips that are better at running the code that people started writing for the last set of chips.

    --
    If there isn't at least one reference or primary source, it's not +1 Informative. Maybe the underused +1 Interesting?
    • (Score: 2) by Wootery on Thursday May 24 2018, @01:02PM

      by Wootery (2341) on Thursday May 24 2018, @01:02PM (#683526)

      You can't just say that the CPUs get better at giving performance to the lazy, because a lot of software was built based on that level of performance.

      Of course we can. 'Built based on that level of performance' doesn't mean we can't compare the functionality-to-hardware-capability ratio and conclude that it's plummeted over the years.

      'High-performance' applications like the Unreal Engine or scientific modelling succeed in making good use of modern hardware. Desktop operating systems and word processors, on the other hand, do much the same as they did 20 years ago, but with vastly higher hardware requirements.

      it just happens to benefit the hardware manufacturer, who gets to keep selling new chips that are better at running the code that people started writing for the last set of chips.

      Well, kinda. I'm more inclined to credit competition in the hardware markets. If AMD and ARM imploded tomorrow, you think Intel would keep working hard on improving their products?

  • (Score: 5, Informative) by letssee on Thursday May 24 2018, @08:49AM

    by letssee (2537) on Thursday May 24 2018, @08:49AM (#683472)

    I was with you until the whining over bloat.

    Yes, Firefox does 1000x more than Netscape (memory-wise, anyway). Just compare the data size of a complete website from the nineties with one of today: easily a factor-of-1000 increase.

  • (Score: 2) by Freeman on Thursday May 24 2018, @03:45PM

    by Freeman (732) on Thursday May 24 2018, @03:45PM (#683593) Journal

    No, they don't provide 1000x more functionality, but your resolution sure is higher. Eye candy has driven the PC market just about as much as anything.

    --
    Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"