SoylentNews is people

Journal of cafebabe (894)

The Fine Print: The following are owned by whoever posted them. We are not responsible for them in any way.
Tuesday April 17 2018,
03:01 PM
OS

I'm quite impressed with the concept of an exo-kernel. It is one of many variants in which functionality is statically or dynamically linked with user-space code. This variant could be concisely described as applying the principles of micro-controller development to desktop applications and servers.

In the case of micro-controllers, it is typical to include what you need, when you need it, and deploy on bare hardware. Need SPI to access Micro SD and read-only FAT32 to play MP3 audio? Well, include those libraries. Otherwise don't.
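A sketch of that build style in C, assuming a hypothetical CFG_MP3_PLAYBACK build flag of my own invention: compile with the flag and the MP3 path is linked in; omit it and neither the code nor its tables ever reach the firmware image.

```c
#include <string.h>

/* Illustrative sketch of include-what-you-need.  CFG_MP3_PLAYBACK is a
 * hypothetical flag, not a real vendor SDK option.  Compiled without
 * it, the decoder below does not exist in the binary at all. */

#ifdef CFG_MP3_PLAYBACK
static const char *mp3_decoder_name(void) { return "mp3"; }
#endif

/* Returns the name of the audio pipeline this build was configured with. */
const char *audio_pipeline(void)
{
#ifdef CFG_MP3_PLAYBACK
    return mp3_decoder_name();
#else
    return "none";
#endif
}
```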

In the case of desktop applications, it is possible to include stub libraries when executing within one process or a legacy operating system or include full libraries to deploy on bare hardware or a virtual machine.

In the case of a server, the general trend is towards containers of various guises. While there are good reasons to aggregate under-utilized systems into one physical server, peak performance may be significantly reduced. For x86, the penalty was historically 15% due to Intel's wilful violation of the Popek and Goldberg virtualization requirements. After Spectre and Meltdown, some servers incur more than one third additional overhead. Ignoring performance penalties, container bloat and the associated technical debt, the trend is to place each network service and application in its own container. This creates numerous failure modes because containers start in a random order: an init system avoids race conditions within one container but, when each service runs in a separate container, that trivial safeguard is defeated.
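The workaround every containerized service ends up carrying is a retry loop against its dependencies. A minimal sketch of the usual capped exponential backoff (the function name, base delay and cap are illustrative, not taken from any init system):

```c
#include <stdint.h>

/* When nothing guarantees that a container's dependencies start first,
 * each service reimplements a retry loop.  Sketch of capped exponential
 * backoff: 100 ms first retry, doubling, never more than 30 s. */
uint32_t backoff_ms(unsigned attempt)
{
    const uint32_t base_ms = 100;    /* first retry after 100 ms  */
    const uint32_t cap_ms  = 30000;  /* never wait more than 30 s */
    uint32_t delay = base_ms;

    while (attempt-- > 0) {
        if (delay >= cap_ms / 2)     /* clamp before doubling overflows the cap */
            return cap_ms;
        delay *= 2;
    }
    return delay;
}
```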

Regardless, in the case of a server, an application may require a JavaScript Just In Time compiler, read-only NFS access to obtain source code for compilation and a database connection. All of this may run inside a container with externally enforced privileges. However, there is considerable overhead to provide network connections within the container's kernel-space while the compiler (and application) run in user-space. In the unlikely event that a malicious party escapes from the JavaScript, nothing is gained if network connections are managed in a separate memory-space. If we wish to optimize for the common case, we should have application and networking all in user-space or all in kernel-space. Either option requires a small elevation of privileges but the increased efficiency is considerable compared to the increased risk.

Running an application inside a container may require a fixed allocation of memory unless there is an agreed channel to request more. People may recoil in horror at the concept of provisioning memory and storage for applications but the alternative is the arrangement popularized by Microsoft and Apple where virtual memory is over-committed until a system becomes unstable and unresponsive. The default should be a system which is as secure and responsive as an 8 bit computer - and one which provides an overview of what it is doing at all times.
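Provisioned memory can be as simple as a fixed arena handed to the application at start-up, with allocation failing loudly rather than over-committing. A sketch in C, with illustrative names and an arbitrary arena size:

```c
#include <stddef.h>
#include <stdint.h>

/* Sketch of provisioned memory: the application receives a fixed arena
 * and a bump allocator inside it.  When the arena is spent, allocation
 * fails instead of silently over-committing. */
#define ARENA_SIZE 4096

static uint8_t arena[ARENA_SIZE];
static size_t  arena_used = 0;

void *arena_alloc(size_t n)
{
    n = (n + 7) & ~(size_t)7;           /* round up to 8-byte alignment */
    if (n > ARENA_SIZE - arena_used)
        return NULL;                    /* no silent over-commit */
    void *p = &arena[arena_used];
    arena_used += n;
    return p;
}

size_t arena_remaining(void) { return ARENA_SIZE - arena_used; }
```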

Similar arrangements may apply to storage. It is possible to have an arrangement where a kernel enforces access to local storage partitions and ensures that file meta-data is vaguely consistent but applications otherwise have raw access to sectors. If this seems similar to the UCSD p-code filing system, a Xerox Alto or my ideal filing system, that is entirely understandable. Xerox implementations of OO, GUIs and storage remain contentious but storage is the least explored.
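In that storage arrangement, the kernel's job largely reduces to a bounds check: the application may touch any sector, provided the range lies inside its partition. A sketch of that check (the struct and function names are mine, not from any existing kernel):

```c
#include <stdint.h>
#include <stdbool.h>

/* The kernel enforces the partition extent; the application otherwise
 * has raw access to sectors and may lay out its own on-disk structures
 * without being able to scribble on its neighbours. */
typedef struct {
    uint64_t first_sector;   /* inclusive */
    uint64_t sector_count;
} partition_t;

/* True if [sector, sector + count) lies entirely inside the partition. */
bool sector_range_ok(const partition_t *p, uint64_t sector, uint64_t count)
{
    if (count == 0 || sector < p->first_sector)
        return false;
    uint64_t end = p->first_sector + p->sector_count;
    /* overflow-safe form: require count <= end - sector */
    return sector < end && count <= end - sector;
}
```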

The concept of an exo-kernel makes this feasible at the current scale of complexity and has certain benefits. For example, I previously proposed use of an untrusted computer for multi-media and trustworthy computers for physical security and process control. Trustworthy computers currently fall into three cases:-

  1. Relatively trustworthy micro-controllers of 40MHz or more. These have limited power dissipation and may be programmed on-site to user requirements. This limits the ability to implement unwanted functionality. It may be possible to access micro-controller memory via radio but this is a tedious task if each site has a bespoke configuration.
  2. Legacy 8 bit computers of 2MHz or less. Tampered firmware must work within very limited resources. It is also slow and difficult to tamper with a system which was constructed 10 years or more after an attack was devised.
  3. A mini-computer design which is likely to run at 0.1MHz or less. Cannot rely upon security by obscurity but a surface mount 8 bit micro-coded mini-computer simulating a 64 bit virtual machine is, at present, an unusual case for an aspiring attacker.

In the previous proposal, there is a strict separation of multi-media and physical processes with the exception that some audio functionality may be available on trustworthy devices. This was limited to a micro-controller which may encode or decode lossy voice data, decode lossy MP3 audio or decode lossless balanced ternary Ambisonics at reduced quality. Slower devices may decode monophonic lossless balanced ternary audio at low quality. The current proposal offers more choices for current and future hardware. As one of many choices, the Contiki operating system is worth consideration. It was originally a GUI operating system for the Commodore 64 with optional networking. It is now typically used on AVR micro-controllers without a GUI. I previously assumed that Contiki was written in optimized 6502 assembly and then re-written for other systems but this is completely wrong. It is 100% portable C which runs on the Commodore 64, MacOS, Linux, AVR, ARM and more.

How does Contiki achieve cross-platform portability with no architecture-specific assembly to save processor registers during context switches? That's easy. It doesn't context switch because it implements shared-memory, co-operative multi-tasking. How else would you expect it to work on systems without a super-user mode or memory management? I've suffered Amiga WorkBench 1.3, Apple MacOS 6 and RISC OS, so I know that co-operative multi-tasking is flaky. However, when everything is written in a strict subset of C and statically checked, larger implementations are less flaky than those legacy systems.
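Contiki implements this with protothreads: each task stores only the line number at which to resume and yields by returning, so no registers are saved and no assembly is needed. The following is a minimal re-derivation of the trick from scratch, not Contiki's actual pt.h:

```c
/* A protothread is just an integer: the line to resume at.  The switch
 * on that integer jumps back into the middle of the function (the same
 * language rule Duff's device exploits), which is why this is portable
 * C with no context switch. */
typedef struct { int resume_line; } pt_t;

#define PT_BEGIN(pt)  switch ((pt)->resume_line) { case 0:
#define PT_YIELD(pt)  do { (pt)->resume_line = __LINE__; return 1; \
                           case __LINE__:; } while (0)
#define PT_END(pt)    } (pt)->resume_line = 0; return 0

/* A task that increments a counter once per scheduling turn. */
int count_task(pt_t *pt, int *counter)
{
    PT_BEGIN(pt);
    for (;;) {
        (*counter)++;
        PT_YIELD(pt);   /* give the other tasks a turn */
    }
    PT_END(pt);
}

/* Drive the task co-operatively for n turns; returns the final count. */
int run_turns(int n)
{
    pt_t pt = {0};
    int counter = 0;
    while (n--)
        count_task(&pt, &counter);
    return counter;
}
```

A real scheduler simply calls each task's function in a loop; a task that never returns hangs everything, which is exactly the flakiness co-operative multi-tasking is known for.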

Contiki's example applications include a text web browser and desktop calculator. Typically, these are compiled together as one program and deployed as one system image. The process list is typically fixed at compilation but it is possible to load additional functionality into a running process. This is akin to adding a plug-in or dynamic library. Although it is possible to have dynamic libraries and suchlike, this increases system requirements. Specifically, it requires a filing system and some platform-specific understanding of the library format. Although there is a suggested GUI and Internet Protocol stack, there are no assumptions about audio, interrupts or filing system. Although Contiki is not advertised as an exo-kernel, it is entirely compatible with the philosophy of including what you want, when you want.
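Without a filing system or a platform library format, "loading additional functionality" can be as simple as registering a function pointer in a fixed table, the way a statically linked plug-in would. A sketch of that arrangement, with illustrative names and a table size fixed at compilation:

```c
#include <stddef.h>
#include <string.h>

/* Plug-in table sketch: capacity is fixed at build time, in keeping
 * with provisioned resources; registration at run time adds behaviour
 * without a dynamic linker. */
#define MAX_PLUGINS 8

typedef int (*plugin_fn)(int);

static struct { const char *name; plugin_fn fn; } plugins[MAX_PLUGINS];
static size_t nplugins = 0;

int plugin_register(const char *name, plugin_fn fn)
{
    if (nplugins == MAX_PLUGINS)
        return -1;              /* table full: fixed at compilation */
    plugins[nplugins].name = name;
    plugins[nplugins].fn = fn;
    nplugins++;
    return 0;
}

int plugin_call(const char *name, int arg)
{
    for (size_t i = 0; i < nplugins; i++)
        if (strcmp(plugins[i].name, name) == 0)
            return plugins[i].fn(arg);
    return -1;                  /* unknown plug-in */
}

static int double_it(int x) { return 2 * x; }

/* Register an example plug-in into the running image and invoke it. */
int plugin_demo(void)
{
    plugin_register("double", double_it);
    return plugin_call("double", 21);
}
```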

With relatively little work, it would be possible to make a text console window system and/or web browser and/or streaming media player with the responsiveness, stability and security of an Amiga 500 running a Mod player on interrupt. It is also possible to migrate unmodified binaries to a real-time operating system. In this arrangement, all GUI tasks run co-operatively in shared memory in the bottom priority thread. All real-time processes pre-empt the GUI. If the GUI goes astray, it can be re-initialized in a fraction of a second with minimal loss of state and without affecting critical tasks. This arrangement also allows development and testing under Unix via XWindows or Aqua. In the long-term, it may be possible to use Contiki as a scaffold and then entirely discard its code.

If media player plug-ins are restricted to one scripting language (such as Lua which runs happily on many micro-controllers), it is possible to make a media player interface which is vastly more responsive than Kodi - even when running on vastly inferior hardware. As an example, an 84MHz Atmel micro-controller may drive a VGA display and play stereo audio at 31kHz. Similar micro-controllers are available in bulk for less than US$1. Although this arrangement has a strict playback rate and no facility for video decode, it is otherwise superior to a 900MHz Raspberry Pi running Kodi.
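As a back-of-envelope check on that claim, the per-sample cycle budget at those figures is pure arithmetic (the function name is my own):

```c
#include <stdint.h>

/* Cycle budget per audio sample when the sample clock is tied to the
 * display, as on the 84MHz part described above.  Figures come from
 * the text; the function is purely illustrative arithmetic. */
uint32_t cycles_per_sample(uint32_t cpu_hz, uint32_t sample_hz)
{
    return cpu_hz / sample_hz;
}
```

At the quoted 84MHz and 31kHz this leaves roughly 2,700 cycles per sample for mixing and the interface, which is why a fixed playback rate is comfortable where video decode is not.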

The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 0) by Anonymous Coward on Wednesday April 18 2018, @12:24AM (#668351)

    My ideal operating system is anything I can get for free. I don't care if it's open source, which means I am hated by open source evangelists, despite the fact that I've been writing open source code for 30 years. I don't care if it's Linux, which means I am despised by Linux fanatics, despite the fact that I've been using Linux for 20 years.

    The problem with Linux is the community. The community literally needs to die. And I mean every rabid Linux fanatic literally needs to drop dead.
