
posted by takyon on Wednesday September 30 2020, @01:46AM
from the embrace dept.

Open source's Eric Raymond: Windows 10 will soon be just an emulation layer on Linux kernel

Will Windows lose the last phase of the desktop wars to Linux? Noted open-source advocate Eric Raymond thinks so.

Celebrated open-source software advocate and author Eric Raymond, who's long argued Linux will rule the desktop, reckons it won't be long before Windows 10 becomes an emulation layer over a Linux kernel.

[...] Looking further into the future, Raymond sees Microsoft killing off Windows emulation altogether after it reaches the point where everything under the Windows user interface has already moved to Linux.

"Third-party software providers stop shipping Windows binaries in favor of ELF binaries with a pure Linux API... and Linux finally wins the desktop wars, not by displacing Windows but by co-opting it. Perhaps this is always how it had to be," Raymond projects.

Is It Time for Windows and Linux to Converge?

Last phase of the desktop wars?

The two most intriguing developments in the recent evolution of the Microsoft Windows operating system are the Windows Subsystem for Linux (WSL) and the porting of the Microsoft Edge browser to Ubuntu.

For those of you not keeping up, WSL allows unmodified Linux binaries to run under Windows 10. No emulation, no shim layer, they just load and go.
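
As a concrete illustration of "load and go" (a sketch, not from the article; the file name hello.c is hypothetical): a plain C program compiled into an ordinary ELF binary on any Linux box runs under WSL unmodified, and the stock wsl.exe launcher can invoke it from the Windows side.

    #include <stdio.h>
    #include <sys/utsname.h>

    /* Build on Linux (or inside WSL) with:  gcc -o hello hello.c
       Run from the Windows side with:       wsl ./hello          */
    int main(void) {
        struct utsname u;
        /* uname() reports the kernel this ELF is actually running on;
           under WSL 2 it prints a genuine Linux kernel version. */
        if (uname(&u) == 0)
            printf("Running on %s %s\n", u.sysname, u.release);
        return 0;
    }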

[...] Proton is the emulation layer that allows Windows games distributed on Steam to run over Linux. It's not perfect yet, but it's getting close. I myself use it to play World of Warships on the Great Beast.

The thing about games is that they are the most demanding possible stress test for a Windows emulation layer, much more so than business software. We may already be at the point where Proton-like technology is entirely good enough to run Windows business software over Linux. If not, we will be soon.

So, you're a Microsoft corporate strategist. What's the profit-maximizing path forward given all these factors?

It's this: Microsoft Windows becomes a Proton-like emulation layer over a Linux kernel, with the layer getting thinner over time as more of the support lands in the mainline kernel sources. The economic motive is that Microsoft sheds an ever-larger fraction of its development costs as less and less has to be done in-house.

If you think this is fantasy, think again. The best evidence that it's already the plan is that Microsoft has already ported Edge to run under Linux. There is only one way that makes any sense, and that is as a trial run for freeing the rest of the Windows utility suite from depending on any emulation layer.

So, the end state this all points at is: New Windows is mostly a Linux kernel, there's an old-Windows emulation over it, but Edge and the rest of the Windows user-land utilities don't use the emulation. The emulation layer is there for games and other legacy third-party software.

Also at The Register.

Previously: Windows 10 Will Soon Ship with a Full, Open Source, GPLed Linux Kernel
Call Me Crazy, but Windows 11 Could Run On Linux
Microsoft Windows Linux for Everybody


Original Submission #1 | Original Submission #2 | Original Submission #3

 
  • (Score: 4, Informative) by RS3 on Wednesday September 30 2020, @03:39AM (6 children)

    Maybe I'm missing a few things, and maybe I don't understand the definition of "monolithic kernel". ALL of my Linux machines, including live servers I admin, run dozens of modprobed modules. You only need a very few drivers to get the OS started (some kind of disk and filesystem driver at minimum), then you load modules for everything else, so I'm not sure how much less "monolithic" you can make a kernel. I'm hoping you'll 'splain. :)

    BTW, check out the kernel size on an Alpine Linux machine (that's not running X). I don't have one running at this second, but if you're curious I'll get the stats. And it'll be with a stock kernel, not one I've customized (don't need to!).
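
    For reference, the smallest possible loadable module looks something like this (a sketch assuming the usual kbuild setup; "hello" is a hypothetical module name). The point relevant to the monolithic question: once modprobe links it in, this code runs in the kernel's own address space with full kernel privileges.

        #include <linux/init.h>
        #include <linux/module.h>

        /* Minimal loadable module: `modprobe hello` links this into the
           running kernel, where it executes in the same address space as
           every other kernel subsystem, which is why loadable modules
           don't make the kernel any less "monolithic". */
        static int __init hello_init(void)
        {
                pr_info("hello: now living in kernel address space\n");
                return 0;
        }

        static void __exit hello_exit(void)
        {
                pr_info("hello: unloading\n");
        }

        module_init(hello_init);
        module_exit(hello_exit);
        MODULE_LICENSE("GPL");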

  • (Score: 5, Informative) by Grishnakh on Wednesday September 30 2020, @03:58AM (5 children)

    Those modules all still live in the same address space as one another and the rest of the kernel code; that's what makes it a monolithic kernel. It's not completely monolithic in the textbook sense, though; like Windows, it's sometimes called a "hybrid kernel".

    A true microkernel doesn't have drivers in the same address space as the rest of the kernel, and they can only communicate with the kernel through message-passing, which of course is really wasteful and involves extra context switches. That's why true microkernels are so rare; they're just not very efficient. They're usually used for specialty embedded OSes where performance isn't very important but safety and security are.
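
    To make the message-passing cost concrete, here's a userspace analogy (a sketch, not how any particular microkernel implements it): the "driver" lives in its own process and can only be reached by sending a request and blocking for a reply, and each hop costs a context switch that a monolithic kernel's plain function call avoids.

        #include <stdio.h>
        #include <string.h>
        #include <unistd.h>
        #include <sys/socket.h>

        /* Toy request/reply message, microkernel style. */
        struct msg { int op; char data[32]; };

        int main(void) {
            int sv[2];
            /* A real microkernel brokers messages between separate
               address spaces; a socketpair between forked processes
               is the closest userspace stand-in. */
            if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0) return 1;

            if (fork() == 0) {              /* the "driver" process  */
                struct msg m;
                read(sv[1], &m, sizeof m);  /* block for a request   */
                strcpy(m.data, "block 0 contents");
                write(sv[1], &m, sizeof m); /* send the reply back   */
                _exit(0);
            }

            struct msg m = { .op = 1 };     /* "client": read request */
            write(sv[0], &m, sizeof m);     /* hop 1: context switch  */
            read(sv[0], &m, sizeof m);      /* hop 2: context switch  */
            printf("driver replied: %s\n", m.data);
            return 0;
        }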

    • (Score: 2) by RS3 on Wednesday September 30 2020, @05:00AM (2 children)

      Thank you, that is incredibly informative. I studied this stuff starting a long time ago. IIRC, Ring 2 was supposed to be for hardware drivers, but for some reason (maybe context switching was too costly) it has rarely been used. Not sure if Intel / AMD / ARM / whoever could make it more efficient.

      That said, there are so many CPU and RAM vulnerabilities that I have to ask: what's the point, and would you trust them to do it right? (No, I would not...)

      • (Score: 2) by Grishnakh on Wednesday September 30 2020, @07:21PM (1 child)

        Intel CPUs were (and I guess still are) designed with four "rings", 0 through 3. The kernel lives in ring 0, I think, and userspace in ring 3. I have no idea if anyone's ever used the other two rings, but they're still there for backwards compatibility; no modern OS uses them.

        • (Score: 3, Interesting) by RS3 on Wednesday September 30 2020, @07:53PM

          D'oh, I meant ring 1 for drivers. IIRC, ring 2 would be for the "hardware abstraction layer" / IO stuff. But yes, AFAIK no OS uses or ever used rings 1 or 2.

          I did some Novell work 25 years ago, and the NetWare server was screaming fast compared to any Windows OS (9x, NT, whatever). AFAIK NetWare ran a "flat memory model": no segments / selectors, just 32-bit addresses (even though CPUs were coming out with 36 address lines, called "PAE"), no ring / privilege changes, and almost no address calculations done by the CPU (well, very few: indexed addressing / offsets, of course).

          Someday I'll do some research. Ideally you'd have a microkernel at ring 0, drivers at ring 1, the HAL at ring 2, and apps at ring 3, but the overhead is a problem (there's a sketch of checking which ring you're in at the end of this comment).

          There are many ways to do memory protection more efficiently, but I'll give Intel (and MS) much credit for maintaining backward compatibility. Of course, one can strongly argue that this backward compatibility has ushered us into the security nightmare we all live with and deal with every day. How many computers are brought to their knees by resource-hungry anti-malware that doesn't always work anyway? Sigh.

          And as I mentioned before, the CPUs themselves have so many internal vulnerabilities, often due to cache, stack, and branch prediction (Spectre, etc.); even RAM is vulnerable (RAMBleed, Rowhammer, side-channel attacks...). Can't win... Glad I still have a couple of '486 and '386 motherboards. May need them someday! :)
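
          On the rings point, here's a quick sketch (x86, GCC inline asm; my own illustration) that shows which ring you're actually running in: the low two bits of the CS selector hold the Current Privilege Level, so any ordinary process prints 3, with rings 1 and 2 sitting unused in between.

              #include <stdio.h>
              #include <stdint.h>

              int main(void) {
                  uint16_t cs;
                  /* The low two bits of CS are the Current Privilege
                     Level (CPL): 0 = kernel, 3 = user. Userspace always
                     prints 3; rings 1 and 2 go unused on mainstream OSes. */
                  __asm__ volatile ("mov %%cs, %0" : "=r"(cs));
                  printf("running in ring %u\n", cs & 3);
                  return 0;
              }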

    • (Score: 0) by Anonymous Coward on Wednesday September 30 2020, @07:15PM

      What is the performance loss, and the safety + security gain, in %?

    • (Score: 2) by sjames on Thursday October 01 2020, @04:59AM

      Just to enlarge on an already great answer, the early Linux kernels didn't have modules at all. If you wanted to add a driver, you re-configured and recompiled. Many commercial Unix kernels came with object files and a mini-linker so you could add new driver objects and re-link to make a new kernel image.