Device drivers commonly execute in the kernel to achieve high performance and easy access to kernel services. However, this comes at the price of decreased reliability and increased programming difficulty. Driver programmers are unable to use user-mode development tools and must instead use cumbersome kernel tools. Faults in kernel drivers can cause the entire operating system to crash. User-mode drivers have long been seen as a solution to this problem, but suffer from either poor performance or new interfaces that require a rewrite of existing drivers.
This paper introduces the Microdrivers architecture, which achieves high performance and compatibility by leaving critical-path code in the kernel and moving the rest of the driver code to a user-mode process. This allows data-handling operations critical to I/O performance to run at full speed, while management operations such as initialization and configuration run at reduced speed in user mode. To achieve compatibility, we present DriverSlicer, a tool that splits existing kernel drivers into a kernel-level component and a user-level component using a small number of programmer annotations. Experiments show that as much as 65% of driver code can be removed from the kernel without affecting common-case performance, and that only 1-6% of the code requires annotations.
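To make the split concrete, here is a minimal C sketch of the idea. The annotation macros, type names, and function names are all hypothetical, invented for illustration; they are not DriverSlicer's actual syntax.

    /* Hypothetical annotation macros -- illustrative only. They mark
     * which half of the split driver a function belongs to, so a
     * DriverSlicer-style tool knows which side of the kernel/user
     * boundary to place it on. */
    #define KERNEL_CRITICAL    /* data path: stays in the kernel */
    #define USER_NONCRITICAL   /* management: moved to the user process */

    struct packet;   /* opaque stand-ins for real driver types */
    struct device;

    /* Called on every I/O; performance-critical, so it stays kernel-side. */
    KERNEL_CRITICAL int sample_xmit(struct device *dev, struct packet *pkt)
    {
        /* ...place pkt on the hardware ring and ring the doorbell... */
        return 0;
    }

    /* Called once at load time; an upcall to the user-mode half of the
     * driver is acceptable here, since speed doesn't matter. */
    USER_NONCRITICAL int sample_init(struct device *dev)
    {
        /* ...probe hardware, allocate rings, register with the stack... */
        return 0;
    }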
(Score: 3, Interesting) by TheRaven on Monday November 16 2015, @06:08PM
so basically they act as interfaces to the hardware? isn't that the definition of a driver?
No, that's not the usual definition. Drivers are both interfaces and abstractions. A SATA disk driver doesn't just hand a SATA command queue to the rest of the system; it exposes an abstraction that stores and retrieves blocks (and provides other functionality), and it presents the same interface to the next layer in the stack as SAS or other disk drivers do.
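Roughly, in C (types and names invented for illustration, loosely in the spirit of something like Linux's block_device_operations table):

    /* Invented-for-illustration ops table. Every disk driver -- SATA,
     * SAS, NVMe -- fills in the same table, so the layers above never
     * see the transport underneath. */
    struct block_dev_ops {
        int (*read_blocks)(void *dev, unsigned long lba,
                           unsigned count, void *buf);
        int (*write_blocks)(void *dev, unsigned long lba,
                            unsigned count, const void *buf);
        int (*flush)(void *dev);
    };

    /* A SATA driver's job is to translate these generic calls into
     * SATA command-queue entries; a SAS driver translates the same
     * calls into SAS commands. The filesystem above is none the wiser. */
    static int sata_read_blocks(void *dev, unsigned long lba,
                                unsigned count, void *buf)
    {
        /* ...build an NCQ READ command, post it, wait for completion... */
        return 0;
    }

    static const struct block_dev_ops sata_ops = {
        .read_blocks = sata_read_blocks,
        /* .write_blocks and .flush filled in similarly */
    };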
LOL! right because the kernel is so restrictive, eh? xD
Yes, the kernel provides a load of abstraction that handles the generic case well. It hides all of the network-transport-specific details behind the socket interface, for example. That's great if you want code that doesn't have to care about those things, but if you want code that runs a subset of UDP or TCP really fast, then having all of that code get out of the way and letting you push packets into the device rings directly is going to be a lot faster.
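As a rough sketch in C: the socket call below is the real BSD API, but the ring types are invented stand-ins for a netmap/DPDK-style interface, not any particular library's.

    #include <string.h>
    #include <sys/socket.h>

    /* Portable path: the kernel's socket abstraction does routing,
     * ARP, UDP/IP header construction, and buffering for you. */
    static void send_via_socket(int sock, const struct sockaddr *dst,
                                socklen_t dlen, const void *payload,
                                size_t len)
    {
        (void)sendto(sock, payload, len, 0, dst, dlen);
    }

    /* Kernel-bypass path (invented types): the application owns
     * pre-built frames and writes them straight into the NIC's TX
     * ring, skipping the socket layer entirely. */
    struct tx_slot { void *buf; unsigned short len; };
    struct tx_ring { struct tx_slot *slots; unsigned head, num_slots; };

    static void send_via_ring(struct tx_ring *ring,
                              const void *frame, unsigned short len)
    {
        struct tx_slot *slot = &ring->slots[ring->head];
        memcpy(slot->buf, frame, len);   /* frame already has headers */
        slot->len = len;
        ring->head = (ring->head + 1) % ring->num_slots;
        /* ...then tell the NIC the ring advanced (doorbell/sync call) */
    }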
not bothering with the rest of your message because what you wrote indicates you are out of your depth.
Don't read my message then; go and look at how modern GPU drivers work (I've worked on some in the past), or read some of the recent SOSP papers. For example, compare the performance of Namestorm to BIND and then look at why Namestorm is faster (read the paper).
Or, you know, argue with some real examples. I can back up everything that I've said with shipping code in production systems and with papers in top-tier operating system venues. You can back up what you've said with... what exactly?
sudo mod me up