

posted by martyb on Tuesday March 22 2016, @08:09AM
from the we're-gonna-create-our-own-mistakez! dept.

There's a new operating system that wants to do away with the old mistakes and cruft in other operating systems. It's called Redox OS and is available on GitHub. It aims to create an alternative OS that can run almost all Linux executables with only minimal modifications. It features an ecosystem built in the Rust programming language, which the developers hope will improve correctness and security over other OSes. They are not afraid to prioritize correctness over compatibility; the philosophy is that "Redox isn't afraid of dropping the bad parts of POSIX while preserving modest Linux API compatibility."

Redox levels harsh criticisms at other OSes, saying "...we will not replicate the mistakes made by others. This is probably the most important tenet of Redox. In the past, bad design choices were made by Linux, Unix, BSD, HURD, and so on. We all make mistakes, that's no secret, but there is no reason to repeat others' mistakes." Not stopping there, the Redox documentation contains blunt critiques of Plan 9, the GPL, and other mainstays.

Redox OS seems to be supported on the i386 and x86_64 platforms. The aims are a microkernel design, implementation in Rust, an optional GUI (Orbital), newlib for C programs, an MIT license, drivers in userspace, common Unix commands included, and plans for ZFS.

They want to do away with syscalls that stick around forever and with drivers for hardware that hasn't been possible to buy for a long time. They also aim to provide a codebase that doesn't require you to navigate around 25 million lines of code, as Linux does.
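
As a rough illustration of the compatibility goal (a sketch, not taken from the Redox documentation): a Rust program that sticks to the portable standard library contains nothing Linux-specific, so the same source should, in principle, build both for a Linux target and for a Redox target.

    use std::fs;
    use std::io::Write;

    fn main() -> std::io::Result<()> {
        // Plain std file I/O: nothing here is Linux-specific, so the same
        // source can be compiled for a Linux target or, in principle, for
        // a Redox target, since Redox supports the Rust standard library.
        let mut file = fs::File::create("hello.txt")?;
        file.write_all(b"written through the portable std API\n")?;

        let contents = fs::read_to_string("hello.txt")?;
        print!("{}", contents);
        Ok(())
    }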

Perhaps the mathematically proven L4 microkernel is something to consider over the monolithic kernel approach where any single driver can wreck the system? One aspect to look out for is if they map the graphic cards into user space.


Original Submission

 
  • (Score: 3, Insightful) by ThePhilips on Tuesday March 22 2016, @09:11AM

    by ThePhilips (5677) on Tuesday March 22 2016, @09:11AM (#321483)

    Perhaps the mathematically proven L4 microkernel is something to consider over the monolithic kernel approach where any single driver can wreck the system?

    But this is just the perspective of developers, who are a ridiculously small minority compared to users of the kernels.

    From the perspective of users, it makes no sense to do more checks/etc. during run-time, since the code doesn't change during run-time. After the system has been tested, the checks/etc. are pure, useless overhead. (A microkernel's isolation features are also a sort of check for invalid memory accesses.)

    Or, as a direct example: would you as a gamer rather play a game which never crashes at 25fps, or a game which might or might not crash once per week at 50fps? And that pretty much sums up why the "better" microkernels never took off: it's not the developers with their unsafe languages/etc., it is the users.

    One aspect to look out for is if they map the graphic cards into user space.

    As soon as you allow unfettered access to IO registers from user-space, all the security promises fly out the window. And you can't have a driver in user-space without unfettered access to IO registers.

  • (Score: 3, Informative) by mth on Tuesday March 22 2016, @09:47AM

    by mth (2848) on Tuesday March 22 2016, @09:47AM (#321496) Homepage

    In modern Linux, graphics drivers have a large user space part that builds command buffers and a small kernel driver that verifies command buffers and sends them to the hardware. I think that's about as good as it's going to get with current PC hardware. If a system has an IOMMU, perhaps more could be moved to user space. I read something about AMD putting IOMMUs in their APUs, but I don't know if that's already present or something on their roadmap.
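
    A minimal sketch of that split (illustrative only, not taken from any real driver): userspace builds a command buffer freely, while a small trusted "kernel-side" check refuses any command that references a buffer the process was never granted, or that runs out of bounds, before anything is sent to the hardware.

        use std::collections::HashSet;

        // One GPU command as userspace might encode it (hypothetical format).
        struct Command {
            buffer_handle: u32, // which buffer the command touches
            offset: usize,      // where inside that buffer
            len: usize,
        }

        // The small, trusted verification step: commands may only reference
        // handles that were previously granted to this client, and must stay
        // within the buffer bounds.
        fn verify(commands: &[Command], granted: &HashSet<u32>, buffer_len: usize) -> Result<(), String> {
            for (i, cmd) in commands.iter().enumerate() {
                if !granted.contains(&cmd.buffer_handle) {
                    return Err(format!("command {i}: handle {} not granted", cmd.buffer_handle));
                }
                if cmd.offset.checked_add(cmd.len).map_or(true, |end| end > buffer_len) {
                    return Err(format!("command {i}: out-of-bounds access"));
                }
            }
            Ok(()) // only now would the buffer be handed to the hardware
        }

        fn main() {
            let granted: HashSet<u32> = [1, 2].into_iter().collect();
            let cmds = vec![
                Command { buffer_handle: 1, offset: 0, len: 64 },
                Command { buffer_handle: 7, offset: 0, len: 16 }, // never granted
            ];
            println!("{:?}", verify(&cmds, &granted, 4096));
        }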

  • (Score: 3, Insightful) by VLM on Tuesday March 22 2016, @11:46AM

    by VLM (445) on Tuesday March 22 2016, @11:46AM (#321542)

    since the code doesn't change during run-time

    Um, that would be nice. Usually true, other than when someone is breaking in.

    • (Score: 2) by ThePhilips on Tuesday March 22 2016, @12:09PM

      by ThePhilips (5677) on Tuesday March 22 2016, @12:09PM (#321552)

      The NX bit [wikipedia.org] has taken care of that for some time now. It allows memory to be made either writable or executable, but not both.
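
      A minimal sketch of that W^X policy from userspace, assuming a Linux-like system and the Rust libc crate (an illustrative example, not tied to any particular OS beyond mmap/mprotect): the page is first mapped writable but not executable, and only becomes executable after write permission is dropped, so it is never both at once.

          // Cargo.toml: libc = "0.2"
          use libc::{mmap, mprotect, munmap, MAP_ANONYMOUS, MAP_FAILED, MAP_PRIVATE,
                     PROT_EXEC, PROT_READ, PROT_WRITE};
          use std::ptr;

          fn main() {
              let len = 4096;
              unsafe {
                  // Allocate one page, initially readable and writable but NOT executable.
                  let page = mmap(ptr::null_mut(), len, PROT_READ | PROT_WRITE,
                                  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
                  assert_ne!(page, MAP_FAILED);

                  // ... generated code would be written into the page here ...

                  // Flip the page to read + execute, dropping write permission:
                  // it is never writable and executable at the same time.
                  assert_eq!(mprotect(page, len, PROT_READ | PROT_EXEC), 0);

                  munmap(page, len);
              }
          }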

      But past all the HW and SW protection mechanisms come the logical errors. And the logical errors are independent of the language. If a hacker can convince an application to delete all its data, or overwrite it with junk, no amount of abstract safety features will help.

      Otherwise, as a system developer, I do not mind - in fact, I welcome - such experiments. The advent of another systems programming language besides C could only be positive news. But I do not have high expectations for the OS rewrite. If they were really serious about Rust as a systems language, as a first step they should have tried to integrate support into the BSD or Linux kernels, to allow writing drivers completely in Rust. But since they have started from the wrong end - a rewrite of an OS - I do not have much hope of them succeeding.

      • (Score: 2) by Pino P on Tuesday March 22 2016, @03:16PM

        by Pino P (4721) on Tuesday March 22 2016, @03:16PM (#321666) Journal

        The NX bit does not defend against return-oriented programming.

        • (Score: 0) by Anonymous Coward on Tuesday March 22 2016, @05:55PM

          by Anonymous Coward on Tuesday March 22 2016, @05:55PM (#321753)

          I haven't been keeping up lately and hadn't heard about that technique [wikipedia.org]. That's quite a fancy way to smash the stack!

  • (Score: 3, Informative) by TheRaven on Tuesday March 22 2016, @01:45PM

    by TheRaven (270) on Tuesday March 22 2016, @01:45PM (#321592) Journal

    Perhaps the mathematically proven L4 microkernel is something to consider over the monolithic kernel approach where any single driver can wreck the system?

    Whoever wrote this has no credibility. There is no mathematically proven L4 kernel. There is seL4, which is a formally verified microkernel that is inspired by L4 (but not an L4 implementation). It was a whole 8 hours between the public release of seL4 and the first security hole being identified, because it was something that wasn't part of their formal specification (which also makes a number of assumptions about the MMU behaviour that are not always true given known hardware bugs).

    The authors of seL4 put the cost at around 30 times that of developing with state-of-the-art informal software development methodology (i.e. detailed design, comprehensive test suites, and so on).

    As soon as you allow unfettered access to IO registers from user-space, all the security promises fly out the window. And you can't have a driver in user-space without unfettered access to IO registers.

    What you say is true, assuming that you want to run on old hardware. If you are running on anything even vaguely modern, then as long as you correctly set up the IOMMU then this is not a problem. It's worth noting (to give a concrete example) that nVidia drivers have had direct access to I/O registers from userspace for several generations of hardware - all that the kernel part of the driver does is set up the initial mapping of the control registers and map and unmap memory segments. Nothing on the fast path for typical operation involves the kernel at all.

    --
    sudo mod me up
    • (Score: 2) by ThePhilips on Tuesday March 22 2016, @02:02PM

      by ThePhilips (5677) on Tuesday March 22 2016, @02:02PM (#321602)

      What you say is true, assuming that you want to run on old hardware.

      Any DMA-capable piece of hardware (today that is pretty much every piece of hardware) can be used to completely bypass any security mechanism, because it, duh, allows RAM to be accessed directly without involving the CPU.

      I have already had the experience of (inadvertently) sending my stack over the network. And of receiving network packets into the stack.

      nVidia drivers have had direct access to I/O registers from userspace for several generations of hardware

      Not really. All "dangerous" commands still have to go via the kernel part. They have access only to the IO-mmapped registers relevant to the graphics pipeline. User-space can fill the GPU's pipeline with data and commands without calling the kernel, but flushing/syncing/etc. (as well as configuration) is still done via the kernel. IIRC, they have to call the kernel at least once per frame.

      • (Score: 5, Informative) by TheRaven on Tuesday March 22 2016, @02:22PM

        by TheRaven (270) on Tuesday March 22 2016, @02:22PM (#321620) Journal

        Any DMA-capable piece of hardware (today that is pretty much every piece of hardware) can be used to completely bypass any security mechanism, because it, duh, allows RAM to be accessed directly without involving the CPU.

        No, it accesses RAM via the IOMMU (if one exists, which it does on all modern hardware - even without a full IOMMU [which does translation as well as protection], AMD CPUs have had a device exclusion vector for a decade, which allows the host CPU to restrict the physical pages that a device may access). The device can only access memory if there are valid mappings from the device virtual address space to the physical address space[1].

        The IOMMU does the same thing for the device that the CPU's MMU does for unprivileged code: it performs translation and permission checks on each virtual address and prevents unauthorised reads and writes.
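
        A toy model of that check (purely illustrative; no real IOMMU uses this table format): each device-visible page either has an entry with permissions or it doesn't, and a DMA access is refused unless a matching entry allows it.

            use std::collections::HashMap;

            const PAGE_SIZE: u64 = 4096;

            #[derive(Clone, Copy)]
            struct Entry {
                phys_page: u64,
                readable: bool,
                writable: bool,
            }

            // Toy IOMMU: maps device-virtual page numbers to physical pages
            // with permissions, the same way an MMU does for a process.
            struct Iommu {
                table: HashMap<u64, Entry>,
            }

            impl Iommu {
                // Translate a device-virtual address for a read or a write,
                // or refuse the access entirely.
                fn translate(&self, dev_addr: u64, write: bool) -> Result<u64, &'static str> {
                    let entry = self.table.get(&(dev_addr / PAGE_SIZE)).ok_or("no mapping: DMA blocked")?;
                    if write && !entry.writable { return Err("not writable: DMA blocked"); }
                    if !write && !entry.readable { return Err("not readable: DMA blocked"); }
                    Ok(entry.phys_page * PAGE_SIZE + dev_addr % PAGE_SIZE)
                }
            }

            fn main() {
                let mut table = HashMap::new();
                // The OS grants the device exactly one read/write page.
                table.insert(0x10, Entry { phys_page: 0x8_0000, readable: true, writable: true });
                let iommu = Iommu { table };

                println!("{:?}", iommu.translate(0x10 * PAGE_SIZE + 8, true)); // Ok(translated address)
                println!("{:?}", iommu.translate(0x99 * PAGE_SIZE, false));    // Err: unmapped, blocked
            }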

        Note that many modern operating systems either misconfigure or don't bother to configure the IOMMU. Ubuntu is particularly fun, as it will advertise that the IOMMU is configured, even when it isn't.

        Not really. All "dangerous" commands still have to go via the kernel part. They have access only to the IO-mmapped registers relevant to the graphics pipeline. User-space can fill the GPU's pipeline with data and commands without calling the kernel, but flushing/syncing/etc. (as well as configuration) is still done via the kernel. IIRC, they have to call the kernel at least once per frame.

        This is simply not true (and please, go and read the driver code if you don't believe me - I'm speaking from first-hand experience here). Flushing the command buffer is done by writing to the producer (memory mapped device I/O) register from userspace. Userspace can poll for space in the ring buffer by reading another memory-mapped register.
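
        A simplified model of that producer/consumer scheme (not nVidia's actual register layout; the names are made up): userspace appends commands to a shared ring and advances a producer index, the device drains entries and advances a consumer index, and no kernel call appears anywhere on the submission path. In the real driver the two indices would be memory-mapped device registers and the ring would live in memory visible to both the process and the card; here everything is ordinary memory.

            // Toy command ring shared between "userspace" and the "device".
            struct Ring {
                slots: Vec<Option<u32>>, // command payloads (just numbers here)
                producer: usize,         // advanced by userspace ("doorbell" write)
                consumer: usize,         // advanced by the device
            }

            impl Ring {
                fn new(len: usize) -> Self {
                    Ring { slots: vec![None; len], producer: 0, consumer: 0 }
                }

                // Userspace side: poll for space, write the command, bump the
                // producer index. In the real scheme that last store is the
                // write to the memory-mapped producer register.
                fn submit(&mut self, cmd: u32) -> bool {
                    let next = (self.producer + 1) % self.slots.len();
                    if next == self.consumer {
                        return false; // ring full: caller polls and retries
                    }
                    self.slots[self.producer] = Some(cmd);
                    self.producer = next;
                    true
                }

                // Device side: drain whatever the producer has published.
                fn device_drain(&mut self) {
                    while self.consumer != self.producer {
                        let cmd = self.slots[self.consumer].take();
                        println!("device executes command {:?}", cmd);
                        self.consumer = (self.consumer + 1) % self.slots.len();
                    }
                }
            }

            fn main() {
                let mut ring = Ring::new(4);
                for cmd in [1u32, 2, 3] {
                    assert!(ring.submit(cmd));
                }
                ring.device_drain();
            }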

        Graphics cards aren't the only devices to prefer this mode of operation. Infiniband devices have supported it forever and recent(ish) high-end NICs all support a kernel-bypass mode, where the device provides a virtual instance via SR-IOV that can all be mapped directly to userspace processes (or guest VMs). This is needed if you want to move the device driver entirely into the userspace process that's using it (increasingly common these days), but for a microkernel to grant direct access to a single driver process you just need an IOMMU. This isn't even a topic of research - several microkernels do this already.

        [1] Note that PCIe has some odd modes, such as permitting a device to indicate that it has a pre-translated physical address. This can be disabled, but the host OS must remember to do so.

        --
        sudo mod me up
        • (Score: 2) by ThePhilips on Tuesday March 22 2016, @02:42PM

          by ThePhilips (5677) on Tuesday March 22 2016, @02:42PM (#321636)

          The IOMMU does the same thing for the device that the CPU's MMU does for unprivileged code: it performs translation and permission checks on each virtual address and prevents unauthorised reads and writes.

          That's interesting.

          But "virtual address" you are talking about? The DMA works *always* on physical RAM. Or is it different kind of "virtual" memory?

          All the CPU memory protections I have seen work in concert with and rely on virtual memory. The "virtual memory address space" is process-specific - external hardware doesn't know anything about it.

          And how is this IOMMU going to protect anything, when
          (1) the virtual-to-physical mapping is 1:1, while the same physical memory can have multiple virtual addresses (for different processes, and consequently with different permissions), and
          (2) even a very basic system can have a rather huge number of virtual mappings, and no hardware is ever going to be flexible enough to allow an unlimited number of configuration entries (or it would have to go to RAM for the configuration, and suffer the same performance penalty as the virtual memory machinery).

          In the Linux kernel's API for DMA memory, I've seen traces that it can potentially do something special with the memory, but so far I haven't worked with a single arch that actually implements anything there (I have worked only with PPC, ARM and Intel). Out of interest I looked deeper, but only found handling for architectures with the limitation that not all memory is DMA-able. But nothing anywhere close to protection against a malicious DMA access. (The same limitation applied to the older ISA hardware on Intel architectures, which could only access the first megabyte of the RAM with DMA.)

          • (Score: 3, Informative) by TheRaven on Tuesday March 22 2016, @05:00PM

            by TheRaven (270) on Tuesday March 22 2016, @05:00PM (#321719) Journal

            But "virtual address" you are talking about? The DMA works *always* on physical RAM

            No it doesn't. It only works on physical memory in the absence of an IOMMU. With an IOMMU, it works on a device virtual address.

            And how is this IOMMU going to protect anything, when (1) the virtual-to-physical mapping is 1:1, while the same physical memory can have multiple virtual addresses (for different processes, and consequently with different permissions), and

            Virtual to physical mapping isn't 1:1, it's N:1. That's how shared memory works - multiple virtual pages corresponding to the same physical page.
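
            A small demonstration of that N:1 relationship, assuming Linux and the Rust libc crate (memfd_create is Linux-specific): one physical backing object mapped at two different virtual addresses, where a write through the first mapping is visible through the second.

                use std::ffi::CString;

                fn main() {
                    unsafe {
                        let len = 4096usize;
                        let name = CString::new("demo").unwrap();

                        // One physical backing object...
                        let fd = libc::memfd_create(name.as_ptr(), 0);
                        assert!(fd >= 0);
                        assert_eq!(libc::ftruncate(fd, len as libc::off_t), 0);

                        // ...mapped at two different virtual addresses (N:1).
                        let a = libc::mmap(std::ptr::null_mut(), len,
                                           libc::PROT_READ | libc::PROT_WRITE,
                                           libc::MAP_SHARED, fd, 0) as *mut u8;
                        let b = libc::mmap(std::ptr::null_mut(), len,
                                           libc::PROT_READ | libc::PROT_WRITE,
                                           libc::MAP_SHARED, fd, 0) as *mut u8;
                        assert_ne!(a, b); // two distinct virtual pages

                        *a = 42;            // write through the first mapping
                        assert_eq!(*b, 42); // visible through the second: same physical page
                        println!("both mappings see {}", *b);
                    }
                }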

            In the Linux kernel's API for DMA memory, I've seen traces that it can potentially do something special with the memory, but so far I haven't worked with a single arch that actually implements anything there (I have worked only with PPC, ARM and Intel).

            Most of the Linux APIs were introduced for PowerPC (contributed by IBM over a decade ago), so if you've worked on PowerPC then I'm quite surprised that you haven't come across them. They're used to prevent device driver errors affecting other parts of the system on IBM hardware. They're also used for device pass-through in virtualisation environments.

            Just looked for Intel/AMD IOMMU specs and they are all filed under "Virtualization" and "I/O Virtualization"

            Of course they are, just as memory protection is under 'virtual memory'. What do you think virtualisation means?

            IOW, the tech is not intended to be used by the host - but rather to implement IO for the guest OS. In that case, the "virtual address" makes sense: it is the physical address of the guest OS, but the virtual memory address of the VM software.

            You're confusing marketing with functionality. You're also completely missing the common uses. Kernel bypass is most commonly used in graphics cards and high-end network cards, without a hypervisor (though hypervisors do use the same mechanisms for device pass through). As I said, nVidia cards for the last few generations have all supported kernel bypass. The kernel sets up memory maps, but all commands are submitted directly from userspace to the card without the kernel being involved. The kernel simply sets up mappings for memory that both the process and the device can access. High-end network cards work in the same way (Infiniband has worked like this for 20+ years), with the kernel setting up memory maps and the userspace driver initiating DMA to and from the rings that are in memory shared between the device and the userspace process.

            If you don't want to believe me, then go and read the driver code. Or read the reverse-engineered docs for the nVidia cards from the Nouveau project.

            --
            sudo mod me up
            • (Score: 2) by RamiK on Wednesday March 23 2016, @01:04AM

              by RamiK (1813) on Wednesday March 23 2016, @01:04AM (#321907)

              Most of the Linux APIs were introduced for PowerPC

              He might have worked on Macs or PowerQUICC. Not all PPCs were/are at feature parity, especially when it comes to virtualization*, which, similarly to ECC memory, was sometimes offered as a server "premium" feature.

              As for seL4, many production L4-family kernels fork off the proven seL4 code base and add patches to address hardware bugs. It's not a "security hole" to have a proven and correct core serve as a main branch that you occasionally fork production branches off of and patch for hardware-specific issues. It's simply a different development model.

              As for RedoxOS, while I personally believe it's a waste of time seriously developing anything new targeting the x86's metal, Rust still needs to prove itself by at least developing its own toy research operating system. It's the price you pay for calling yourself a systems programming language. D is in the same boat. Go had the sense to avoid it.

              *virtualization is the modern term for general memory/device protections, like you said.

              --
              compiling...
        • (Score: 2) by ThePhilips on Tuesday March 22 2016, @02:47PM

          by ThePhilips (5677) on Tuesday March 22 2016, @02:47PM (#321639)

          Just looked for Intel/AMD IOMMU specs and they are all filed under "Virtualization" and "I/O Virtualization". IOW, the tech is not intended to be used by the host - but rather to implement IO for the guest OS. In that case, the "virtual address" makes sense: it is the physical address of the guest OS, but the virtual memory address of the VM software.