posted by janrinok on Saturday November 12 2016, @06:27AM
from the we've-got-to-try dept.

At the 2015 Kernel Summit, Kees Cook said, he talked mostly about the things that the community could be doing to improve the security of the kernel. In 2016, instead, he was there to talk about what had actually been done. Kernel hardening, he reminded the group, is not about access control or fixing bugs. Instead, it is about the kernel protecting itself, eliminating classes of exploits, and reducing its attack surface. There is still a lot to be done in this area, but the picture is better than it was one year ago.

One area of progress is in the integration of GCC plugins into the build system. The plugins in the kernel now are mostly examples, but there will be more interesting ones coming in the future. Plugins are currently supported for the x86, arm, and arm64 architectures; he would like to see that list grow, but he needs help from the architecture maintainers to validate the changes. Plugins are also not yet used for routine kernel compile testing, since it is hard to get the relevant sites to install the needed dependencies.
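
For readers who have not looked at one, a GCC plugin is just a shared object that exposes a plugin_init() entry point and registers callbacks with the compiler; GCC loads it with -fplugin=, and the kernel build wires this up when CONFIG_GCC_PLUGINS is enabled. The skeleton below is a hypothetical, minimal sketch in the spirit of the plugins under scripts/gcc-plugins/ (the example_* names are made up for illustration); building even this much requires the GCC plugin development headers, which is exactly the dependency Kees said is hard to get installed on routine compile-testing machines.

    /* Hypothetical, minimal GCC plugin skeleton -- illustrative only,
     * not one of the plugins discussed in the talk. */
    #include "gcc-plugin.h"
    #include "plugin-version.h"

    /* GCC refuses to load plugins that do not declare GPL compatibility. */
    int plugin_is_GPL_compatible;

    static struct plugin_info example_plugin_info = {
            .version = "0.1",
            .help    = "example do-nothing plugin",
    };

    /* Invoked at the start of each translation unit; a real hardening
     * plugin would register an instrumentation pass here instead. */
    static void example_start_unit(void *gcc_data, void *user_data)
    {
            /* no-op: only proves the plugin was loaded */
    }

    int plugin_init(struct plugin_name_args *plugin_info,
                    struct plugin_gcc_version *version)
    {
            /* Reject a GCC other than the one the plugin was built against. */
            if (!plugin_default_version_check(version, &gcc_version))
                    return 1;

            register_callback(plugin_info->base_name, PLUGIN_INFO,
                              NULL, &example_plugin_info);
            register_callback(plugin_info->base_name, PLUGIN_START_UNIT,
                              example_start_unit, NULL);
            return 0;
    }

Loading it for a single compile is then just a matter of passing something like gcc -fplugin=./example_plugin.so, which is roughly what the kernel's plugin Makefile arranges for every object file.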

Linus asked how much plugins would slow the kernel build process; linux-next maintainer Stephen Rothwell also expressed interest in that question, noting that "some of us do compiles all day." Kees responded that there hadn't been a lot of benchmarking done, but that the cost was "not negligible." It is, though, an important part of protecting the kernel.


Original Submission

 
  • (Score: 2) by Rich (945) on Saturday November 12 2016, @09:03PM (#426125) Journal

    On a 25 GHz NS32032, the whole Oberon System would compile itself in 15 milliseconds. The 32032 wasn't that fast, of course. But that kind of turnaround would certainly be welcome to the person who complained "we're compiling all day" in the article ref'd in the submission.

  • (Score: 2) by RamiK (1813) on Saturday November 12 2016, @11:47PM (#426157)

    Oberon's memory model isn't C's and wasn't portable. The late 80's CISCs (National 32000 & Motorola's 68000) were all transitional products. For a brief period of roughly 5-7 years, the die-per-wafer yields during those years allowed CISC to compete against RISC. Now here's the thing, Writing a compiler for these very clean and human readable CISC designs was a joy. You didn't have to stage your compilation. Pipelining was a straightforwards text-book prelude and postlude. The 32000 especially had all generic registers so you didn't have to really worry about anything. It was so good that the Plan 9 (pseudo) assembler kept the 32000 instructions and would wrap native (68000, x86, ARM, MIPS...) instructions to those just because Ken liked working on those machines so much. In did, the Go assembler was Plan 9's assembler up to 1.5. Recently, it was rewritten in Go. But it still uses those very same instructions and wraps around native assembly just because it's so convenient to work with. Pike's ACME still uses Oberon's GUI since, like Ken's love for the 32000, Pike love Oberon's GUI.

    Which leads us to the problem: Oberon's design and code, like a DSP's, weren't very portable when it came to pipelining and managing different registers for different instructions. As a result, Oberon couldn't be made to run as efficiently on later hardware, which is why the latest Oberon implementation targets an FPGA board. So when you say Oberon compiles in under 15ms, you're talking about a VERY small and targeted compilation phase and code base that shouldn't be treated as general-purpose so much as compared to DSPs and their respective compilers and (limited) kernels.

    Regardless, for what it's worth, the good stuff from Oberon made it into golang in a C-like, portable fashion, and the latest garbage-collector research is being poured into Go. More importantly, when I mentioned simulators, I had the Mill architecture [wikipedia.org] in mind, which enforces type safety on the die through metadata [millcomputing.com] values. On those machines, while C will run extremely well, I suspect we'll also see garbage-collected, type-safe languages getting some REALLY good compilers.

    --
    compiling...