After announcing his company was abandoning Unity for GNOME, Shuttleworth posted a thank-you note to the Unity community Friday on Google Plus, but added on Saturday:
"I used to think that it was a privilege to serve people who also loved the idea of service, but now I think many members of the free software community are just deeply anti-social types who love to hate on whatever is mainstream. When Windows was mainstream they hated on it. Rationally, Windows does many things well and deserves respect for those. And when Canonical went mainstream, it became the focus of irrational hatred too. The very same muppets would write about how terrible it was that IOS/Android had no competition and then how terrible it was that Canonical was investing in (free software!) compositing and convergence. Fuck that shit."
"The whole Mir hate-fest boggled my mind - it's free software that does something invisible really well. It became a political topic as irrational as climate change or gun control, where being on one side or the other was a sign of tribal allegiance. We have a problem in the community when people choose to hate free software instead of loving that someone cares enough to take their life's work and make it freely available."
Shuttleworth says "I came to be disgusted with the hate" directed at Canonical's display server Mir, adding that it "changed my opinion of the free software community."
Full story here.
So, what's the alternative? Should everyone just settle for whoever has the first-mover advantage in low-level software, regardless of the relative merits of each solution? Should all FOSS projects be equal, though some more equal than others?
I agree that standards do make things much easier for everyone. However, deciding on and implementing a standard will always be much more straightforward and less painful under an organisational hierarchy. While the FOSS model will ultimately decide a standard or standards through community usage (hopefully based on merit, but you never know...), you have to accept that the path to doing so will be messier and more painful.
I would not want to see Mir ditched just because it would make things hard for a few people for a while. I'd rather it live or die on its own merits. If it's a better solution, then the FOSS ecosystem as a whole will ultimately be better off for it.
Should all FOSS projects be equal, though some more equal than others?
Unfortunately, yes. What would you think if some small group decided to fork the Linux kernel, make a different kernel that wasn't fully compatible, and then demanded that everyone write their software to work on that kernel too? And look at all the hoopla over systemd adoption (and at least there, most software isn't affected by the init system, only other low-level system services). There's a reason there have been a lot of pushes to try to standardize certain elements of the Linux/FOSS software stack, such as with LSB: fragmentation causes more problems than it solves. We should have learned this lesson long ago: the whole reason UNIX died is because of fragmentation. Windows came along offering a single, standard platform for high-end software vendors, and they all ended up abandoning UNIX because they could write one version for Win32 and be done, instead of having to write 5-10 different versions for UNIXes.
Should everyone just settle for whoever has the first-mover advantage in low-level software, regardless of the relative merits of each solution?
Remember too that Wayland wasn't the invention of some small group of outsiders: it was the product of the de facto group of FOSS display stack experts, the people who were already giving us X and the many important extensions to it which were badly needed for desktop Linux to work. Sorry if that sounds like an appeal to authority, but that team had already proven its worth and importance to the community with their X.org work, so it's perfectly reasonable that a small team from a single company acting against this standardization effort, in an apparent display of NIH, was shunned.
at least there, most software isn't affected by the init system
Don't worry. The systemd devs are working hard on fixing that. [debian.org]
That's a shame to see. But, there are two separate issues in that bug:
1) A gratuitous dependency of procps on libsystemd
2) The requirement to mount a separate /usr (if any) in the initramfs
The first is the real problem, dragging in unnecessary library dependencies in a minimalistic core tool. Extended functionality should likely be in a separate tool, or dynamically loaded from a module.
The latter was done ages ago (by me) when I was maintaining sysvinit, and is independent of any init system in use. There are numerous long-standing issues with mounting /usr part way through the init sequence, and this resolves all of them.

It was done so that it would continue to work properly for all users, irrespective of their init system, with the exception of users *not* using an initramfs who were mounting a separate /usr. This subset of the userbase is vanishingly small. You can continue to boot without an initramfs if /usr is on /. And you can continue to use a separate /usr so long as you use an initramfs. It's the combination of the two that's not supported (separate /usr and no initramfs).

I did a lot of work to make this minimally disruptive--you can even mount separate / and /usr from NFS in the initramfs. Even a separate encrypted /etc.
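The supported combinations can be sketched as a tiny shell function (a hypothetical illustration of the rule described above, not code from sysvinit):

```shell
# Hypothetical sketch: which boot configurations are supported?
# A separate /usr is fine with an initramfs; booting without an
# initramfs is fine when /usr lives on /; only the combination of
# a separate /usr *and* no initramfs is unsupported.
boot_config_supported() {
    separate_usr=$1   # "yes" if /usr is a separate filesystem
    initramfs=$2      # "yes" if an initramfs is used
    if [ "$separate_usr" = yes ] && [ "$initramfs" = no ]; then
        echo unsupported
    else
        echo supported
    fi
}

boot_config_supported yes yes   # separate /usr, with initramfs
boot_config_supported no  no    # /usr on /, no initramfs
boot_config_supported yes no    # separate /usr, no initramfs
```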
Back when I maintained procps, it only depended on libc. It didn't even trust libc very much. It was prepared for some kinds of kernel bugs that could hang it, producing output right up until it hit the kernel problem. When I was asked to support SELinux, I insisted on doing so without the external library.
I gave up maintaining it. Sorry. I got a full-time job and had 10 kids. I also couldn't really deal with Red Hat hacking in incompatible changes, such as stealing command option letters that were reserved for other things. One can't say "no" to a distribution that is able to patch your code with piles of shit.
Now you know another way that free software projects die.
-- Albert Cahalan
So you had 2 kids?
I'm in the same situation with some of my own software, and it was one of the reasons I ended up dropping sysvinit maintenance in Debian, and quitting the project entirely (RSI was the primary reason). As a lone volunteer, there's little a single person can do when faced with the juggernaut of paid commercial developers ramming their changes in. Times have changed a lot since free software development in the '90s. I don't like where things have ended up going.
Right now, I'm in the same situation as the tmux developers. My stuff won't work properly on a "modern" Linux system because it falls foul of the systemd session management and mount namespaces. But I don't really want to hack in special support logic and dependencies upon junk like libsystemd. Not just because it compromises portability and maintainability, but also because I object to the necessity of "fixing" perfectly working code just because a bunch of people decided to break decades-old systems programming contracts. I don't think breaking POSIX is at all acceptable, and I've moved more and more to FreeBSD as a result. If I had the same cavalier attitude to basic systems compatibility in my day job, I'd be fired for reckless incompetence, yet Red Hat permits this as the foundation of their flagship enterprise product.
the whole reason UNIX died is because of fragmentation
That sounds very retcon-ish. Having lived through that era, the problem was tying hardware to software (before Windows, all OSes came from your hardware manufacturer, just like OS X today), and both upfront costs and support costs were milked until people hated the OS provider. Also no security patches.
Back in the days before package managers and GNU autoconf and all that, it still wasn't all that much work to compile C from one machine on another. There were a lot of stupid address-width assumptions in the old days. Also, floating point was weird.
Agreed. Commercial Unix died because it was FAR more expensive (SW & RISC workstation HW were tied together) than Windows and PCs, and PCs running the new Windows NT were becoming more capable. That's it. In the heyday of commercial Unix, software vendors happily developed their star product for only one or two Unix variants. This was back when only Unix HW & SW were up to the task.
The solution, I think, is to accept the fact that plays like this are going to face criticism. If the play is in fact a good one, then arguments or experiments should show it. Don't forget this is Mark Shuttleworth, who has a history of responding poorly to criticism. It may be that his problem is he can't separate the rational criticism from the insults. Or he may be incapable of dealing with criticism.
I think Mir would have been bad for the community if it caught on the way PulseAudio (aka ESD¹) did. It was fine until people started targeting PA, which we're stuck with now. Mir also had some worrying aspects, I forget what now, but if it was a requirement I think we'd be in a worse world than the one where Wayland is an option.
1. Yes, the Enlightenment Sound Daemon, the one everyone hated because the latency was bad. They changed the name, the latency is still bad.
> PulseAudio (aka ESD)
I think you're mistaken. PulseAudio was written by Lennart Poettering. It was intended to replace ESD; that doesn't mean it was ESD. Check the README:
Copyright 2004 Lennart Poettering <mzcbylcnhqvb (at) 0pointer (dot)
It is intended to be an improved drop-in replacement for the
Enlightened Sound Daemon (ESOUND).
[...]
Acknowledgements
Eric B. Mitchell for writing ESOUND
-- http://freedesktop.org/software/pulseaudio/releases/polypaudio-0.1.tar.gz [freedesktop.org]
The code is not ESD, but the problems, the performance, and the difficulty of getting rid of it are pretty much the same.
I think Mir would have been bad for the community if it caught on the way PulseAudio [...] did. It was fine until people started targeting PA, which we're stuck with now.
It is incredibly easy to avoid running pulseaudio today if you do not want to use it. Hardly anybody actually writes applications which call audio output APIs directly; it is much more common to use something like libsdl or libao or openal, which support many different audio drivers.
One thing to watch out for is distros installing an ALSA default pcm that automatically spawns pulseaudio and directs output to that. Such configuration can be deleted.
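As a sketch of what deleting that configuration amounts to: a per-user override along these lines points the ALSA default pcm straight at the hardware instead of the pulseaudio plugin (a hypothetical example; `card 0` is an assumption about which device is your sound card):

```
# ~/.asoundrc -- bypass any distro-provided pulseaudio redirection
# (hypothetical example; "card 0" assumes your card is device 0,
# check `aplay -l` for the actual numbering)
pcm.!default {
    type hw
    card 0
}
ctl.!default {
    type hw
    card 0
}
```

With this in place, ALSA clients open the hardware device directly rather than auto-spawning pulseaudio via the distro's default-pcm configuration.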
The merit of Mir was never clear. It was trying to solve the same problem as Wayland, in a similar but incompatible way. To be able to justify the extra effort needed to support it, it would have to be significantly different or better than Wayland.
It was making things hard for parts of the system that were already understaffed. So when Canonical decided to throw money at creating more work for the developers rather than helping them out, I can understand why they weren't lining up to support Mir.
You come very, *very* close to hitting the point I've been screaming in my head all the way through this discussion:
We need standard INTERFACES, but as much as possible we should avoid standard IMPLEMENTATIONS.
How many mail clients are there? Friggin thousands probably. Which one do you need? Well...for most users, pick one that looks pretty, it really doesn't matter. They all do IMAP and POP3 and SMTP and all of that, so as long as they talk to the server the same way, you can switch from one to the other to the next ten times a day every day and it doesn't matter. Your mail server won't care, your desktop won't care, you aren't going to be forced to reinstall your OS or rip apart half your system or write new code to make that change.
Obviously there are cases where the interface must change. There are places where you must break backward compatibility. But if you do it right, you can swap the new one in and pull the old one out without anyone really noticing. And then you make your changes and your custom extensions, IN AN OPEN AND TRANSPARENT WAY (ie, no M$ E-E-E strategy). Monoculture means attacks are easier (you already know what software to target) and more profitable (the same attack is effective against everyone). So it's a security risk, as well as just making it harder to build my computer the way I want.
Parent mentions that we tend to target a single kernel (ie, the Linux kernel)...but I work with several different kernels every day and I barely notice the differences. Linux (in a half dozen different versions, and a few different architectures), AIX, FreeBSD (also various architectures)...yes, some software only runs on x86, some only runs on ARM, some only runs on BSD, some only runs on Linux. But the vast majority of stuff I use works on any of them, because they're all Unix, so they all share a pretty common interface. We already run the same display server with many different kernels under it and many different display managers above it...so I don't see why can't we run multiple display servers too.
My dream is a system where I can swap out any single component at any time with virtually no impact. When the next Heartbleed comes, targeting OpenSSL, half the world might already be safe because LibreSSL doesn't have the bug, and the other half can just switch over as soon as their bosses approve the change. Meanwhile OpenSSL gets their stuff fixed and maybe the next attack targets Libre. Just like how I decided one day I didn't like KDE, so I switched to Enlightenment. And I can still use the same apps, and the same backends, and didn't really need to change the system configuration in any way. Our networks would be in a much better state if the entire stack could shift that easily. But instead we're going in the other direction, with systemd incorporating as much of the system as possible, and now the major display managers starting to rely on systemd components as well...we've already passed the point where people who ask how to remove systemd are generally advised to just reinstall a whole different distro, if not switch to an entirely different kernel. That's absolutely insane.
Nobody in Linuxland with its fetishization of "freedom" will be part of a standardization process. That would require compromise, leadership, and organization. Thus, the competing "standards", where "standard" means a single implementation. The only reason we got Linux was because Linus and the rest had the POSIX standard to implement, and it got a GUI because X11 was already written for Unix. All it needed was some adaptation. When there was a choice, Linux has been kind of dismissive of existing standards because Linux developers think they can do better. History shows that usually they can't.