
posted by cmn32480 on Wednesday May 23 2018, @06:47PM
from the your-computer-is-not-a-fast-PDP-11 dept.

Very interesting article in ACM Queue by David Chisnall.

In the wake of the recent Meltdown and Spectre vulnerabilities, it's worth spending some time looking at root causes. Both of these vulnerabilities involved processors speculatively executing instructions past some kind of access check and allowing the attacker to observe the results via a side channel. The features that led to these vulnerabilities, along with several others, were added to let C programmers continue to believe they were programming in a low-level language, when this hasn't been the case for decades.


Original Submission

 
This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 5, Insightful) by Snotnose on Wednesday May 23 2018, @07:08PM (14 children)

    by Snotnose (1623) on Wednesday May 23 2018, @07:08PM (#683218)

    C used to be a low-level language; now it's not. Not because of the language, but because of the hardware it runs on. Back in the 80s it was routine to compile your C code, figure out the slow parts, look at the assembler, and rewrite the slow parts in assembler. Hell, I remember embedding 8086 instructions into my C code, which gcc happily (well, grumpily, but it would do it) assembled and integrated.

    I still remember the first time I couldn't hand-code a routine to run faster than C. It was a fax machine driven by an NSC 32016. I had x milliseconds to read each row of pixels while scanning the document, and I couldn't quite do it. Not even in assembly. I don't remember the final fix, be it hardware or software, but I spent a good 6 weeks on that.
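    For anyone who never saw that era, here is a minimal sketch of what embedding an instruction in C looks like, written with GCC's modern extended-asm syntax rather than whatever an 80s compiler accepted (x86-only, purely illustrative):

        #include <stdint.h>
        #include <stdio.h>

        /* Byte-swap a 32-bit value with one x86 instruction via GCC's extended
         * inline assembly.  "+r" marks x as a register operand that is both
         * read and written.  A modern compiler emits the same bswap from plain
         * C anyway, which is part of why this habit died out. */
        static uint32_t bswap32_asm(uint32_t x)
        {
            __asm__("bswap %0" : "+r"(x));
            return x;
        }

        int main(void)
        {
            printf("%08x\n", (unsigned)bswap32_asm(0x12345678u));  /* 78563412 */
            return 0;
        }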

    --
    Why shouldn't we judge a book by its cover? It's got the author, title, and a summary of what the book's about.
  • (Score: 5, Insightful) by vux984 on Wednesday May 23 2018, @08:03PM (6 children)

    by vux984 (5045) on Wednesday May 23 2018, @08:03PM (#683249)

    "C used to be a low level language, and now it's not."

    I think that accuses C of failing somehow. And that's not the case.

    Even assembler and raw machine language are not low-level languages by the metric the article is using here. And C is a touch above those. The problem isn't that C is higher level than it was, because it's still just as close to assembly and machine language as it has always been.

    No, the issue is simply that CPUs are more complex. Even if I were writing my hello-world user program in raw assembler, I wouldn't have to explicitly load values from main memory into the cache. The CPU does it for me. MOV EAX,[address] is as low as it gets -- I can hand-assemble it to binary if you really like, but it's still not my problem to sort out whether [address] is cached or not.

    C is no further from the bare metal than it ever was, but 'the bare metal' is a lot more complicated and functional in its own right now. There is no direct programmer control over a lot of what it does. It's an interesting question to ponder whether or not there should be.
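    To make that concrete: about the closest a C (or assembly) programmer gets to the cache today is a hint the hardware is free to ignore. A minimal sketch, assuming GCC/Clang's __builtin_prefetch:

        #include <stddef.h>

        /* Sum an array, asking the CPU to start fetching data a few iterations
         * ahead.  __builtin_prefetch is only a request: the hardware prefetcher,
         * cache replacement, and memory scheduling all stay out of the
         * programmer's hands, exactly as with a bare MOV in assembly. */
        long sum_with_prefetch(const long *a, size_t n)
        {
            long total = 0;
            for (size_t i = 0; i < n; i++) {
                if (i + 16 < n)
                    __builtin_prefetch(&a[i + 16], 0 /* read */, 1 /* low locality */);
                total += a[i];
            }
            return total;
        }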

    • (Score: 5, Insightful) by jmorris on Wednesday May 23 2018, @09:28PM (5 children)

      by jmorris (4844) on Wednesday May 23 2018, @09:28PM (#683287)

      The article is repeating a classic mistake. We have been here before. Let's make the CPU expose really low-level details, and since the compiler and language know what the program is actually trying to do, they can generate better code to utilize all these raw CPU bits. That thinking led to the wreck known as Itanic.

      It failed because they failed to realize that the strength of C, x86, POSIX and Win32 is the binding contract across time that each of them provides. Yes, you can build a highly optimized CPU core and expose all its compromises and optimizations to run really fast in $current_year's silicon. Add on the shiniest research-OS ideas. And if you are making a PlayStation you might sell enough units that developers and tool makers will invest the effort to extract the potential for some games that have the shelf life of produce. And if you are truly fortunate, they will extract that maximum performance before the hardware is obsolete. Then ten years go by, the silicon world has changed entirely, your architecture is hopelessly obsolete, legacy code won't build well, if at all, on new hardware, and you are basically left with emulation. But nobody is likely to port mainstream software to such a platform. Ask Intel and HP: they bet big and lost with Itanium when they built it and nobody came.

      The one real problem the article exposed is cache transparency. That needs fixing. Put a few GB of HBM on the CPU package, scale back the cache, and then let the OS explicitly handle the NUMA issues if there is off-chip RAM. Explicitly handling the cache at the application level is simply asking for a trainwreck, as all of that tech changes over time.
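      As a rough sketch of what "let the OS handle it explicitly" can look like from user space today, here is a hedged example using Linux's libnuma; it assumes a multi-node machine and says nothing about how a future HBM-on-package part would actually be exposed:

          #include <numa.h>     /* Linux libnuma; link with -lnuma */
          #include <stdio.h>

          int main(void)
          {
              /* numa_available() returns -1 if the kernel has no NUMA support. */
              if (numa_available() < 0) {
                  fprintf(stderr, "no NUMA support on this system\n");
                  return 1;
              }

              /* Place a 64 MB buffer explicitly on node 0 instead of relying
               * on the kernel's default first-touch policy. */
              size_t len = 64u << 20;
              void *buf = numa_alloc_onnode(len, 0);
              if (buf == NULL) {
                  fprintf(stderr, "numa_alloc_onnode failed\n");
                  return 1;
              }

              /* ... touch and use buf from threads running near node 0 ... */

              numa_free(buf, len);
              return 0;
          }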

      The other problem is bloat. CPUs have to cheat so outrageously to keep up with the increasing inability of programmers to write efficient programs in any language. Netscape Navigator used to run well in 8 MB; now Firefox can use up 8 GB and wants more. Does it do a thousand times as much? It does not. Full "Office" suites with TrueType, embedded graphics, DDE/OLE and such ran on machines with that same 8 MB. Modern ones do some more things, but again, do they really do hundreds of times as much? They certainly consume hundreds of times the memory. Which drives the ever-increasing demand for faster chips and cutting corners.

      • (Score: 1, Insightful) by Anonymous Coward on Wednesday May 23 2018, @10:24PM

        by Anonymous Coward on Wednesday May 23 2018, @10:24PM (#683303)

        > Does it do a thousand times as much? It does not.

        It's arguable whether modern browsers do more than NN (they certainly support more), but modern web pages and apps certainly do more.

        ONLYOFFICE/Google Docs/Microsoft Word let you edit Word documents right in the browser; in ONLYOFFICE's case the implementation is basically pure JavaScript. I'm not surprised at modern browsers' memory footprint. They weren't designed to be engines for full-blown applications, but here we are.

      • (Score: 2) by meustrus on Wednesday May 23 2018, @10:42PM (1 child)

        by meustrus (4961) on Wednesday May 23 2018, @10:42PM (#683310)

        CPUs have to cheat so outrageously to keep up with the increasing inability of programmers to write efficient programs in any language.

        Hardware vs software performance has always been a bit of a chicken-and-the-egg problem. You can't just say that the CPUs get better at giving performance to the lazy, because a lot of software was built based on that level of performance.

        You can idolize the programmers of yore if you want to, but the fact is that they wrote more efficient code because they had to. No programmer starts out building everything right. We all start by making something work, and only after it doesn't work fast enough do we ever go back and try to make it faster. The same goes for memory efficiency, avoiding I/O latency, maintainability, and any other metrics you can come up with for what makes "good" code.

        It's the same with SSDs. The performance boost from replacing a spinning platter with an SSD has grown over time, because all software these days is developed on machines that have them. The programmer does not experience the high latency of spinning-disk I/O, so lots of software now ships with synchronous file system access.
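        A small sketch of the habit being described, next to one way of not blocking; plain POSIX with error handling trimmed (on older glibc the aio_* calls need -lrt):

            #include <aio.h>
            #include <errno.h>
            #include <string.h>
            #include <unistd.h>

            /* The common pattern: a synchronous read on the calling thread.
             * On an SSD this returns almost immediately; a cold read on a
             * spinning disk can stall the thread for several milliseconds. */
            ssize_t read_blocking(int fd, void *buf, size_t len)
            {
                return read(fd, buf, len);
            }

            /* One way to overlap the wait: POSIX asynchronous I/O.  The
             * request is queued and the caller can do other work, checking
             * aio_error() until the read completes. */
            ssize_t read_overlapped(int fd, void *buf, size_t len)
            {
                struct aiocb cb;
                memset(&cb, 0, sizeof cb);
                cb.aio_fildes = fd;
                cb.aio_buf    = buf;
                cb.aio_nbytes = len;
                cb.aio_offset = 0;

                if (aio_read(&cb) != 0)
                    return -1;

                while (aio_error(&cb) == EINPROGRESS) {
                    /* ... do useful work here instead of sleeping on the disk ... */
                }
                return aio_return(&cb);
            }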

        It's a self-perpetuating cycle. And it just happens to benefit the hardware manufacturer, who gets to keep selling new chips that are better at running the code that people started writing for the last set of chips.

        --
        If there isn't at least one reference or primary source, it's not +1 Informative. Maybe the underused +1 Interesting?
        • (Score: 2) by Wootery on Thursday May 24 2018, @01:02PM

          by Wootery (2341) on Thursday May 24 2018, @01:02PM (#683526)

          You can't just say that the CPUs get better at giving performance to the lazy, because a lot of software was built based on that level of performance.

          Of course we can. 'Built based on that level of performance' doesn't mean we can't compare the functionality-to-hardware-capability ratio and conclude that it's plummeted over the years.

          'High-performance' applications like the Unreal Engine or scientific modelling succeed in making good use of modern hardware. Desktop operating systems and word processors, on the other hand, do much the same as they did 20 years ago, but with vastly higher hardware requirements.

          it just happens to benefit the hardware manufacturer, who gets to keep selling new chips that are better at running the code that people started writing for the last set of chips.

          Well, kinda. I'm more inclined to credit competition in the hardware markets. If AMD and ARM imploded tomorrow, you think Intel would keep working hard on improving their products?

      • (Score: 5, Informative) by letssee on Thursday May 24 2018, @08:49AM

        by letssee (2537) on Thursday May 24 2018, @08:49AM (#683472)

        I was with you until the whining over bloat.

        Yes, Firefox does 1000x more than Netscape (memory-wise, anyway). Just compare the data size of a complete website from the nineties with one of today. Easily a factor-of-1000 increase.

      • (Score: 2) by Freeman on Thursday May 24 2018, @03:45PM

        by Freeman (732) on Thursday May 24 2018, @03:45PM (#683593) Journal

        No, they don't provide 1000x more functionality, but your resolution sure is higher. Eye candy has driven the PC market just about as much as anything.

        --
        Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
  • (Score: 5, Insightful) by crafoo on Wednesday May 23 2018, @08:13PM (2 children)

    by crafoo (6639) on Wednesday May 23 2018, @08:13PM (#683256)

    Accurate.
    CPUs are not low-level devices anymore. They run microcode and have a large amount of functionality that is either undocumented, incorrectly documented, and/or just completely hidden from the owner. Writing applications that run on top of CPU microcode is not low-level programming. That really has nothing to do with which language you use to do it.

    • (Score: 2, Interesting) by pTamok on Thursday May 24 2018, @10:28AM

      by pTamok (3042) on Thursday May 24 2018, @10:28AM (#683490)

      The more things change, the more they stay the same.

      I used to know a technical genius who specialised in VAX/VMS programming for oil exploration companies. His job was to write programs that analysed the geophysical data coming back from the geological survey teams - basically, huge amounts of data from listening to the echoes after precise explosions (or at least, loud noises). Time was money, and he was paid to optimise, so he not only programmed in VAX assembler but also re-programmed the microcode of the CPUs to get better performance for these highly specific tasks. The DEC VAX-11/780 loaded its microcode from a floppy, so it was entirely possible to modify it, and indeed there was even a Pascal compiler that targeted the 11/780's microcode [dtic.mil] as its output. DEC provided support tools for people to be able to do this: "user microprogramming". See references here: https://people.cs.clemson.edu/~mark/uprog.html [clemson.edu]

      ...a study of pattern matching on the VAX-11/750 and reported that a microcoded implementation was 14-28 times faster in execution performance than hand-coded assembly language or a compiled C program [Lar82]. In 1978, Bell, Mudge, and McNamara estimated microprogramming productivity at 700 microinstructions per man year [Bel78]. This meant that microprogramming was five to ten times more expensive than conventional programming of same task, but it was estimated that microprogramming was usually ten times faster in performance.

    • (Score: 0) by Anonymous Coward on Friday May 25 2018, @03:08PM

      by Anonymous Coward on Friday May 25 2018, @03:08PM (#684042)

      How is writing microcode any different than writing programs against a control matrix?

  • (Score: 2) by JoeMerchant on Wednesday May 23 2018, @09:48PM

    by JoeMerchant (3937) on Wednesday May 23 2018, @09:48PM (#683295)

    We used a couple of 6811 C compilers back in the 90s: one from a French company called Cosmic, which produced pretty good 6811 code, and some godforsaken port of a Z80 compiler that also output 6811 instructions but often resulted in 10x the code size and 1/10th the speed, or worse. The same code would compile and run on both compilers, but with Cosmic I usually couldn't improve on the generated assembly - not often enough to worry about, anyway. That other compiler should have been booted on day one, but it can be hard to separate developers from their preferred tools - especially when the developer was your boss.

    --
    🌻🌻 [google.com]
  • (Score: 2) by sjames on Thursday May 24 2018, @01:33AM (2 children)

    by sjames (2882) on Thursday May 24 2018, @01:33AM (#683352) Journal

    C was always considered a mid-level language. Higher level than assembly, but lower level than FORTRAN.

    These days, with out-of-order and speculative execution, even asm isn't as low-level as it used to be.

    • (Score: 1) by anubi on Thursday May 24 2018, @07:55AM (1 child)

      by anubi (2828) on Thursday May 24 2018, @07:55AM (#683464) Journal

      Yeh... I always saw "C" as like a super macro assembler... with nearly everything done by macros neatly defined in standard libraries.

      My Borland Turbo C would let me do inline assembly if I had to... and that was really efficient when I wrote device drivers, where I had to do a lot of bit-fiddling.

      It was kinda like mortar. Where Fortran and Cobol were more like bricks.

      I might write the primitives to a tape transport or display driver in assembly.

      Or write the primitives to a database engine in C.

      But I will take Fortran or Cobol any day to build the program to interface to US.
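      For flavour, a hedged sketch of the kind of bit-fiddling Turbo C was handy for: driving a device register directly, here the classic PC COM1 UART, using Borland's outportb()/inportb() helpers from dos.h (compiler-specific; the same loop could be written with the asm keyword instead):

          #include <dos.h>    /* Borland-specific: inportb()/outportb() */

          #define COM1_DATA 0x3F8            /* COM1 transmit/receive register */
          #define COM1_LSR  (COM1_DATA + 5)  /* line status register */
          #define LSR_THRE  0x20             /* bit 5: transmit holding register empty */

          /* Spin until the UART can accept a byte, then write it to the data
           * port.  Bit-fiddling on I/O ports in plain C; on a 16-bit DOS
           * compiler this is essentially as close to the hardware as
           * hand-written assembly. */
          void com1_putc(unsigned char c)
          {
              while ((inportb(COM1_LSR) & LSR_THRE) == 0)
                  ;                          /* wait for the transmitter to drain */
              outportb(COM1_DATA, c);
          }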

      --
      "Prove all things; hold fast that which is good." [KJV: I Thessalonians 5:21]
      • (Score: 2) by sjames on Sunday May 27 2018, @06:03PM

        by sjames (2882) on Sunday May 27 2018, @06:03PM (#684863) Journal

        I thought of C in much the same way. Back in the days before the optimizers got sophisticated, I think it was fairly apt.

        I do think C is over-used these days. There is no reason to be using a mid-level language for UI.