
posted by martyb on Tuesday March 22 2016, @08:09AM   Printer-friendly
from the we're-gonna-create-our-own-mistakez! dept.

There's a new operating system that wants to do away with the old mistakes and cruft in other operating systems. It's called Redox OS and is available on GitHub. It aims to be an alternative OS that can run almost all Linux executables with only minimal modifications. It features an ecosystem written in the Rust programming language, which they hope will improve correctness and security over other OSes. They are not afraid to prioritize correctness over compatibility; the philosophy is that "Redox isn't afraid of dropping the bad parts of POSIX while preserving modest Linux API compatibility."

Redox levels harsh criticisms at other OSes, saying "...we will not replicate the mistakes made by others. This is probably the most important tenet of Redox. In the past, bad design choices were made by Linux, Unix, BSD, HURD, and so on. We all make mistakes, that's no secret, but there is no reason to repeat others' mistakes." Not stopping there, the Redox documentation contains blunt critiques of Plan 9, the GPL, and other mainstays.

Redox OS seems to be supported on the i386 and x86_64 platforms. The aims are a microkernel design, implementation in Rust, an optional GUI (Orbital), newlib for C programs, an MIT license, drivers in userspace, common Unix commands included, and plans for ZFS.

They want to do away with syscalls that stay around forever and drivers for hardware that hasn't been available to buy for a long time. They also offer a codebase that doesn't require you to navigate around 25 million lines of code, as Linux does.

Perhaps the formally verified seL4 microkernel is something to consider over the monolithic kernel approach, where any single driver can wreck the system? One aspect to look out for is whether they map graphics cards into user space.


Original Submission

 
This discussion has been archived. No new comments can be posted.

The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 2) by ThePhilips on Tuesday March 22 2016, @02:42PM

    by ThePhilips (5677) on Tuesday March 22 2016, @02:42PM (#321636)

    The IOMMU does the same thing for the device that the CPU's MMU does for unprivileged code: it performs translation and permission checks on each virtual address and prevents unauthorised reads and writes.

    That's interesting.

    But what "virtual address" are you talking about? DMA *always* works on physical RAM. Or is this a different kind of "virtual" memory?

    All the CPU memory protections I have seen work in concert with and rely on virtual memory. The "virtual memory address space" is process-specific - external hardware doesn't know anything about it.
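    [A toy model of the translation being discussed may help here. This is a sketch only: the struct and names are invented and correspond to no real page-table format. A per-process (or, for an IOMMU, per-device) table maps virtual page numbers to physical frames, and every access is checked against the mapping's permissions:]

    ```rust
    use std::collections::HashMap;

    const PAGE_SIZE: u64 = 4096;

    // Toy page table: virtual page number -> (physical frame, writable?).
    // This is the structure an MMU consults for a process, and an IOMMU
    // consults for a device; names here are illustrative only.
    struct PageTable {
        entries: HashMap<u64, (u64, bool)>,
    }

    impl PageTable {
        // Translate a virtual address, rejecting unmapped pages and
        // writes to read-only mappings.
        fn translate(&self, vaddr: u64, write: bool) -> Option<u64> {
            let (frame, writable) = *self.entries.get(&(vaddr / PAGE_SIZE))?;
            if write && !writable {
                return None; // permission fault: read-only mapping
            }
            Some(frame * PAGE_SIZE + vaddr % PAGE_SIZE)
        }
    }

    fn main() {
        let mut entries = HashMap::new();
        entries.insert(2, (7, false)); // virtual page 2 -> frame 7, read-only
        let pt = PageTable { entries };

        // Read through the mapping succeeds; write and unmapped access fail.
        assert_eq!(pt.translate(2 * PAGE_SIZE + 42, false), Some(7 * PAGE_SIZE + 42));
        assert_eq!(pt.translate(2 * PAGE_SIZE + 42, true), None);
        assert_eq!(pt.translate(5 * PAGE_SIZE, false), None);
        println!("ok");
    }
    ```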

    And how is this IOMMU going to protect anything, when
    (1) the virtual-to-physical mapping is 1:1, while the same physical memory can have multiple virtual addresses (for different processes, and consequently with different permissions), and
    (2) even a very basic system can have a rather huge number of virtual mappings, and no hardware is ever going to be flexible enough to allow an unlimited number of configuration entries (or it would have to go to RAM for the configuration, and suffer the same performance penalty as the virtual memory machinery)?

    In the Linux kernel, in the API for DMA memory, I've seen traces that it can potentially do something special with the memory, but so far I haven't worked with a single arch that implements anything there (I have only worked with PPC, ARM and Intel). Out of interest I looked deeper, but only found handling for architectures with the limitation that not all memory is DMA-able. Nothing anywhere close to protection against malicious DMA access. (The same limitation applied to the older ISA hardware on Intel architectures, which could only DMA to the first megabyte of RAM.)

  • (Score: 3, Informative) by TheRaven on Tuesday March 22 2016, @05:00PM

    by TheRaven (270) on Tuesday March 22 2016, @05:00PM (#321719) Journal

    But what "virtual address" are you talking about? DMA *always* works on physical RAM

    No it doesn't. It only works on physical memory in the absence of an IOMMU. With an IOMMU, it works on a device virtual address.

    And how is this IOMMU going to protect anything, when (1) the virtual-to-physical mapping is 1:1, while the same physical memory can have multiple virtual addresses (for different processes, and consequently with different permissions), and

    Virtual-to-physical mapping isn't 1:1, it's N:1. That's how shared memory works - multiple virtual pages corresponding to the same physical page.
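    [The N:1 point can be shown in miniature. A sketch only, with invented page and frame numbers: two processes map different virtual pages, with different permissions, onto the same physical frame.]

    ```rust
    use std::collections::HashMap;

    // Two toy per-process page tables, each mapping
    // virtual page -> (physical frame, writable?).
    // All numbers and permissions are made up for illustration.
    fn main() {
        let proc_a: HashMap<u64, (u64, bool)> = HashMap::from([(3, (42, true))]);
        let proc_b: HashMap<u64, (u64, bool)> = HashMap::from([(9, (42, false))]);

        // Different virtual pages, different permissions, same physical
        // frame: the mapping is N:1, which is exactly shared memory.
        assert_eq!(proc_a[&3].0, proc_b[&9].0);
        assert!(proc_a[&3].1 && !proc_b[&9].1);
        println!("frame 42 shared: A writable, B read-only");
    }
    ```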

    In the Linux kernel, in the API for DMA memory, I've seen traces that it can potentially do something special with the memory, but so far I haven't worked with a single arch that implements anything there (I have only worked with PPC, ARM and Intel).

    Most of the Linux APIs were introduced for PowerPC (contributed by IBM over a decade ago), so if you've worked on PowerPC then I'm quite surprised that you haven't come across them. They're used to prevent device driver errors affecting other parts of the system on IBM hardware. They're also used for device pass-through in virtualisation environments.

    Just looked for Intel/AMD IOMMU specs and they are all filed under "Virtualization" and "I/O Virtualization"

    Of course they are, just as memory protection is under 'virtual memory'. What do you think virtualisation means?

    IOW, the tech is not intended to be used by the host - but rather to implement IO for the guest OS. In that case, the "virtual address" makes sense: it is the physical address of the guest OS, but the virtual memory address of the VM software.

    You're confusing marketing with functionality. You're also completely missing the common uses. Kernel bypass is most commonly used in graphics cards and high-end network cards, without a hypervisor (though hypervisors do use the same mechanisms for device pass-through). As I said, nVidia cards for the last few generations have all supported kernel bypass. The kernel sets up memory maps, but all commands are submitted directly from userspace to the card without the kernel being involved. The kernel simply sets up mappings for memory that both the process and the device can access. High-end network cards work in the same way (Infiniband has worked like this for 20+ years), with the kernel setting up memory maps and the userspace driver initiating DMA to and from the rings that are in memory shared between the device and the userspace process.

    If you don't want to believe me, then go and read the driver code. Or read the reverse-engineered docs for the nVidia cards from the Nouveau project.
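    [The submission-ring idea can be sketched in a few lines. This is a toy model under invented names, not any real card's ring format: once the kernel has mapped the ring into both address spaces, the "driver" side publishes commands and the "device" side drains them, with no syscall per command.]

    ```rust
    // Toy command ring of the kind shared between a userspace driver and a
    // device. The kernel's only job is to set up the shared mapping; after
    // that, commands flow without kernel involvement. Names are invented.
    struct Ring {
        slots: [u64; 8],
        head: usize, // next slot the "device" consumes
        tail: usize, // next slot the "driver" fills
    }

    impl Ring {
        fn new() -> Self {
            Ring { slots: [0; 8], head: 0, tail: 0 }
        }

        // Userspace side: publish a command without involving the kernel.
        // One slot stays empty so that full and empty are distinguishable.
        fn submit(&mut self, cmd: u64) -> bool {
            if (self.tail + 1) % 8 == self.head {
                return false; // ring full
            }
            self.slots[self.tail] = cmd;
            self.tail = (self.tail + 1) % 8;
            true
        }

        // Device side: drain the next command, if any, in FIFO order.
        fn consume(&mut self) -> Option<u64> {
            if self.head == self.tail {
                return None; // ring empty
            }
            let cmd = self.slots[self.head];
            self.head = (self.head + 1) % 8;
            Some(cmd)
        }
    }

    fn main() {
        let mut ring = Ring::new();
        assert!(ring.submit(0xA1));
        assert!(ring.submit(0xB2));
        assert_eq!(ring.consume(), Some(0xA1));
        assert_eq!(ring.consume(), Some(0xB2));
        assert_eq!(ring.consume(), None);
        println!("ok");
    }
    ```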

    --
    sudo mod me up
    • (Score: 2) by RamiK on Wednesday March 23 2016, @01:04AM

      by RamiK (1813) on Wednesday March 23 2016, @01:04AM (#321907)

      Most of the Linux APIs were introduced for PowerPC

      He might have worked on Macs or PowerQUICC. Not all PPCs were/are at feature parity, especially when it comes to virtualization*, which, similarly to ECC memory, was sometimes offered as a server "premium" feature.

      As for seL4, many production L4 family kernels fork off the proven seL4 code base and add patches to address hardware bugs. It's not a "security hole" to have a proven and correct core serve as a main branch that you occasionally fork production branches off and patch for hardware specific issues. It's simply a different development model.

      As for RedoxOS, while I personally believe it's a waste of time seriously developing anything new targeting x86 bare metal, Rust still needs to prove itself by at least developing its own toy research operating system. It's the price you pay for calling yourself a systems programming language. D is in the same boat. Go had the sense to avoid it.

      *virtualization is the modern term for general memory/device protections, like you said.

      --
      compiling...