
posted by n1 on Tuesday April 11 2017, @01:29AM   Printer-friendly
from the politics dept.

After announcing his company was abandoning Unity for GNOME, Shuttleworth posted a thank-you note to the Unity community Friday on Google Plus, but added on Saturday:

"I used to think that it was a privilege to serve people who also loved the idea of service, but now I think many members of the free software community are just deeply anti-social types who love to hate on whatever is mainstream. When Windows was mainstream they hated on it. Rationally, Windows does many things well and deserves respect for those. And when Canonical went mainstream, it became the focus of irrational hatred too. The very same muppets would write about how terrible it was that IOS/Android had no competition and then how terrible it was that Canonical was investing in (free software!) compositing and convergence. Fuck that shit."

"The whole Mir hate-fest boggled my mind - it's free software that does something invisible really well. It became a political topic as irrational as climate change or gun control, where being on one side or the other was a sign of tribal allegiance. We have a problem in the community when people choose to hate free software instead of loving that someone cares enough to take their life's work and make it freely available."

Of Canonical's display server Mir, Shuttleworth says "I came to be disgusted with the hate," adding that it "changed my opinion of the free software community."

Full story here.


Original Submission

 
This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 5, Insightful) by Grishnakh on Tuesday April 11 2017, @02:29AM (17 children)

    by Grishnakh (2831) on Tuesday April 11 2017, @02:29AM (#492083)

    As I said in a post above: the objection was that it was contributing to Linux fragmentation for a critical low-level infrastructural component, and creating more unnecessary work for downstream projects such as KDE/Qt and Gnome/Gtk+ which need to talk to the display server.

    It's not trivial to make something like Qt communicate properly with a display server, and adding yet another one just adds more work for those developers. If they don't do it, then their software won't work very well on systems which use that display server. In the old days, this wasn't hard: everyone used X, so that was the single standard all graphical applications and toolkits had to work with. Then along came Wayland, which promised to be better and more modern in multiple ways. But then came Canonical with their own, incompatible Mir, so now devs had three different display servers to write and test code for.
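
    To make the cost concrete, here's a rough sketch of what backend selection in a hypothetical toolkit looks like once a third server is in play. The names are made up for illustration, not any real toolkit's API; WAYLAND_DISPLAY is the standard Wayland variable, and MIR_SOCKET is, if memory serves, what Mir used. Every branch below is a separate pile of connection, window, and buffer code to write, maintain, and test.

        #include <stdlib.h>

        /* Hypothetical sketch, not a real toolkit API: the point is that
         * every display server is a separate code path to implement,
         * maintain and test. */
        enum display_backend { BACKEND_X11, BACKEND_WAYLAND, BACKEND_MIR };

        static enum display_backend pick_backend(void)
        {
            /* Typical runtime detection via environment variables. */
            if (getenv("WAYLAND_DISPLAY")) return BACKEND_WAYLAND;
            if (getenv("MIR_SOCKET"))      return BACKEND_MIR;
            return BACKEND_X11;
        }

        static void create_window(int width, int height)
        {
            (void)width; (void)height;
            switch (pick_backend()) {
            case BACKEND_X11:     /* open an X connection, create and map an X window */
                break;
            case BACKEND_WAYLAND: /* bind wl_compositor, create a surface, attach a buffer */
                break;
            case BACKEND_MIR:     /* connect via libmirclient, create a Mir surface */
                break;
            }
        }

        int main(void)
        {
            create_window(800, 600);
            return 0;
        }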

    This isn't like some top-level application program that you can either take or leave.

    The problem isn't the licensing; the problem is that, just as we generally work with a single kernel (the Linux kernel), no one wants to work with multiple display servers. People, for very good reason, want standardization for low-level infrastructure components. Fragmentation at that level does not help the cause of Linux adoption; it just adds more work for developers, which means less time available to make things work well and actually compete with other platforms (namely Windows and Mac).

    Free Software gives us choices, and I’d rather have choices than lack them.

    There's such a thing as too many choices. A display server that no one uses and no one supports isn't very helpful, but when a major distro uses it, that demands attention, attention that draws away from other important work. Ignoring it means your software won't work on that distro. This is a prime example of why standardization in FOSS is important. Having lots of choices for text editors is fine; having lots of choices for desktop environments is a little more problematic, but still not that bad since they all generally run each other's applications; having lots of choices for display servers and other low-level infrastructure is a giant problem. You'll never get a working Linux desktop if you can't standardize on the display server.

  • (Score: 0) by Anonymous Coward on Tuesday April 11 2017, @02:45AM (16 children)

    by Anonymous Coward on Tuesday April 11 2017, @02:45AM (#492092)

    Agreed. Standards matter because when they are adopted, they remove complexity, improve interoperability, reduce the learning curve, and result in more system stability because more programmers can focus their development efforts on the same code.

    Linux (the OS) only got to where it is today because it was an almost complete implementation of the well-defined POSIX standard. Think about that.

    • (Score: 4, Interesting) by Mykl on Tuesday April 11 2017, @03:19AM (15 children)

      by Mykl (1112) on Tuesday April 11 2017, @03:19AM (#492107)

      So, what's the alternative? Should everyone just settle for whoever has the first-mover advantage in low-level software, regardless of the relative merits of each solution? Should all FOSS projects be equal, though some more equal than others?

      I agree that standards do make things much easier for everyone. However, deciding on and implementing a standard will always be much more straightforward and less painful under an organisational hierarchy. While the FOSS model will ultimately decide a standard or standards through community usage (hopefully based on merit, but you never know...), you have to accept that the path to doing so will be more messy and painful.

      I would not want to see Mir ditched just because it will make things hard for a few people for a while. I'd rather it lives or dies on its own merits. If it's a better solution, then the FOSS ecosystem as a whole will be ultimately better off for it.

      • (Score: 3, Insightful) by Grishnakh on Tuesday April 11 2017, @03:31AM (7 children)

        by Grishnakh (2831) on Tuesday April 11 2017, @03:31AM (#492113)

        Should all FOSS projects be equal, though some more equal than others?

        Unfortunately, yes. What would you think if some small group decided to fork the Linux kernel, and make a different kernel that wasn't fully compatible, and then demanded that everyone write their software to work on that kernel too? And look at all the hoopla over systemd adoption (and at least there, most software isn't affected by the init system, only other low-level system services). There's a reason there have been a lot of pushes to try to standardize certain elements of the Linux/FOSS software stack, such as with LSB: fragmentation causes more problems than it solves. We should have learned this lesson long ago: the whole reason UNIX died is because of fragmentation. Windows came along offering a single, standard platform for high-end software vendors, and they all ended up abandoning UNIX because they could write one version for Win32 and be done, instead of having to write 5-10 different versions for UNIXes.

        Should everyone just settle for whoever has the first-mover advantage in low-level software, regardless of the relative merits of each solution?

        Remember too that Wayland wasn't the invention of some small group of outsiders: it was the product of the de facto group of FOSS display stack experts, the people who were already giving us X and the many important extensions to it which were badly needed for desktop Linux to work. Sorry if that sounds like an appeal to authority, but that team had already proven its worth and importance to the community with their X.org work, so it's perfectly reasonable that a small team from a single company, acting against this standardization effort in an apparent display of NIH, was shunned.

        • (Score: 5, Informative) by MadTinfoilHatter on Tuesday April 11 2017, @06:07AM (4 children)

          by MadTinfoilHatter (4635) on Tuesday April 11 2017, @06:07AM (#492153)

          at least there, most software isn't affected by the init system

          Don't worry. The systemd devs are working hard on fixing that. [debian.org]

          • (Score: 5, Informative) by rleigh on Tuesday April 11 2017, @08:33AM (3 children)

            by rleigh (4887) on Tuesday April 11 2017, @08:33AM (#492183) Homepage

            That's a shame to see. But, there are two separate issues in that bug:

            1) A gratuitous dependency of procps on libsystemd
            2) The requirement to mount a separate /usr (if any) in the initramfs

            The first is the real problem, dragging in unnecessary library dependencies in a minimalistic core tool. Extended functionality should likely be in a separate tool, or dynamically loaded from a module.
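
            Something along these lines is what I mean by loading it from a module (a rough sketch; "optional_feature" is a placeholder symbol, not a real libsystemd function): the core tool carries no link-time dependency at all and only touches the library if it happens to be installed.

                #include <dlfcn.h>
                #include <stdio.h>

                /* Sketch: load an optional library at runtime instead of
                 * linking against it.  "optional_feature" is a placeholder,
                 * not a real libsystemd symbol. */
                static void try_optional_feature(void)
                {
                    void *handle = dlopen("libsystemd.so.0", RTLD_LAZY | RTLD_LOCAL);
                    if (handle == NULL)
                        return;  /* library absent: plain behaviour, no error */

                    int (*optional_feature)(void) =
                        (int (*)(void))dlsym(handle, "optional_feature");
                    if (optional_feature != NULL)
                        printf("extended info: %d\n", optional_feature());

                    dlclose(handle);
                }

                int main(void)
                {
                    try_optional_feature();
                    return 0;
                }

            (On older glibc you link this with -ldl; since glibc 2.34 the dl functions are in libc itself.)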

            The latter was done ages ago (by me) when I was maintaining sysvinit, and is independent of any init system in use. There are numerous long-standing issues with mounting /usr part way through the init sequence, and this resolves all of them. It was done so that it would continue to work properly for all users, irrespective of their init system, with the exception of users *not* using an initramfs who were mounting a separate /usr. This subset of the userbase is vanishingly small.

            You can continue to boot without an initramfs if /usr is on /. And you can continue to use a separate /usr so long as you use an initramfs. It's the combination of the two that's not supported (separate /usr and no initramfs). I did a lot of work to make this minimally disruptive--you can even mount separate / and /usr from NFS in the initramfs. Even a separate encrypted /etc.

            • (Score: 2, Interesting) by Anonymous Coward on Tuesday April 11 2017, @04:37PM (2 children)

              by Anonymous Coward on Tuesday April 11 2017, @04:37PM (#492343)

              Back when I maintained procps, it only depended on libc. It didn't even trust libc very much. It was prepared for some kinds of kernel bug that could hang it, producing output until hitting the kernel problem. When I was asked to support SELinux, I insisted on doing so without the external library.

              I gave up maintaining it. Sorry. I got a full-time job and had 10 kids. I also couldn't really deal with Red Hat hacking in incompatible changes, such as stealing command option letters that were reserved for other things. One can't say "no" to a distribution that is able to patch your code with piles of shit.

              Now you know another way that free software projects die.

              -- Albert Cahalan

              • (Score: 1) by DBeemer on Friday April 14 2017, @09:18AM

                by DBeemer (6398) on Friday April 14 2017, @09:18AM (#493890)

                So you had 2 kids?

              • (Score: 2) by rleigh on Friday April 14 2017, @06:52PM

                by rleigh (4887) on Friday April 14 2017, @06:52PM (#494149) Homepage

                I'm in the same situation with some of my own software, and it was one of the reasons I ended up dropping sysvinit maintenance in Debian, and quitting the project entirely (RSI was the primary reason). As a lone volunteer, there's little a single person can do when faced with the juggernaut of paid commercial developers ramming their changes in. Times have changed a lot since free software development in the '90s. I don't like where things have ended up going.

                Right now, I'm in the same situation as the tmux developers. My stuff won't work on a "modern" Linux system properly because it falls foul of the systemd session management and mount namespaces. But I don't really want to hack in special support logic and dependencies upon junk like libsystemd. Not just because it compromises the portability and maintainability, but also because I object to the necessity of "fixing" perfectly working code just because a bunch of people decided to break decades old systems programming contracts. I don't think breaking POSIX is at all acceptable, and I've moved more and more to FreeBSD as a result. If I had the same cavalier attitude to basic systems compatibility in my day job, I'd be fired for reckless incompetence, yet RedHat permit this as the foundation of their flagship enterprise product.

        • (Score: 3, Informative) by VLM on Tuesday April 11 2017, @01:45PM (1 child)

          by VLM (445) Subscriber Badge on Tuesday April 11 2017, @01:45PM (#492247)

          the whole reason UNIX died is because of fragmentation

          That sounds very retcon-ish. Having lived through that era, I'd say the problem was tying hardware to software (before Windows, all OSes came from your hardware mfgr, just like OSX today), and both upfront and support costs were milked until people hated the OS provider. Also, no security patches.

          Back in the days before package managers and GNU autoconf and all that, it still wasn't all that much work to compile C from one machine on another. There were a lot of stupid address-width assumptions in the old days. Also, floating point was weird.

          • (Score: 0) by Anonymous Coward on Wednesday April 12 2017, @01:25AM

            by Anonymous Coward on Wednesday April 12 2017, @01:25AM (#492566)

            Agreed. Commercial Unix died because it was FAR more expensive (SW & RISC workstation HW were tied together) than Windows and PCs, and PCs running the new Windows NT were becoming more capable. That's it.
            In the heyday of commercial Unix, software vendors happily developed their star product for only one or two Unix variants. This was back when only Unix HW & SW were up to the task.

      • (Score: 0) by Anonymous Coward on Tuesday April 11 2017, @06:33AM (3 children)

        by Anonymous Coward on Tuesday April 11 2017, @06:33AM (#492158)

        The solution, I think, is to accept the fact that plays like this are going to face criticism. If the play is in fact a good one, then arguments or experiments should show it. Don't forget this is Mark Shuttleworth, who has a history of responding poorly to criticism. It may be that his problem is he can't separate the rational criticism from the insults. Or he may be incapable of dealing with criticism.

        I think Mir would have been bad for the community if it caught on the way PulseAudio (aka ESD¹) did. It was fine until people started targeting PA, which we're stuck with now. Mir also had some worrying aspects, I forget what now, but if it was a requirement I think we'd be in a worse world than the one where Wayland is an option.

        1. Yes, the Enlightenment Sound Daemon, the one everyone hated because the latency was bad. They changed the name, the latency is still bad.

        • (Score: 2) by butthurt on Tuesday April 11 2017, @08:03AM (1 child)

          by butthurt (6141) on Tuesday April 11 2017, @08:03AM (#492171) Journal

          > PulseAudio (aka ESD

          I think you're mistaken. PulseAudio was written by Lennart Poettering; it was intended to replace ESD, but that doesn't mean it was ESD. Check the README:

          Copyright 2004 Lennart Poettering <mzcbylcnhqvb (at) 0pointer (dot) de>
          [...]
          It is intended to be an improved drop-in replacement for the
          Enlightened Sound Daemon (ESOUND).
          [...]
          Acknowledgements

                Eric B. Mitchell for writing ESOUND

          -- http://freedesktop.org/software/pulseaudio/releases/polypaudio-0.1.tar.gz [freedesktop.org]

          • (Score: 0) by Anonymous Coward on Tuesday April 11 2017, @11:34AM

            by Anonymous Coward on Tuesday April 11 2017, @11:34AM (#492219)

            The code is not ESD, but the problems, the performance, and the difficulty of getting rid of it are pretty much the same.

        • (Score: 0) by Anonymous Coward on Tuesday April 11 2017, @05:47PM

          by Anonymous Coward on Tuesday April 11 2017, @05:47PM (#492377)

          I think Mir would have been bad for the community if it caught on the way PulseAudio [...] did. It was fine until people started targeting PA, which we're stuck with now.

          It is incredibly easy to avoid running pulseaudio today if you do not want to use it. Hardly anybody actually writes applications that call audio output APIs directly; it is much more common to use something like libsdl, libao, or openal, which support many different audio drivers.
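
          For example, a tone generator written against libao looks roughly like this (a sketch from memory, so check the libao docs for exact details); it never names ALSA, OSS, or PulseAudio, and whatever output driver libao selects at runtime is the one that gets used:

              #include <ao/ao.h>
              #include <math.h>
              #include <string.h>

              /* Play one second of a 440 Hz tone through whatever output
               * driver libao chooses; the application never names a
               * specific sound system. */
              int main(void)
              {
                  ao_initialize();

                  ao_sample_format fmt;
                  memset(&fmt, 0, sizeof fmt);
                  fmt.bits        = 16;
                  fmt.channels    = 1;
                  fmt.rate        = 44100;
                  fmt.byte_format = AO_FMT_LITTLE;

                  ao_device *dev = ao_open_live(ao_default_driver_id(), &fmt, NULL);
                  if (dev == NULL)
                      return 1;

                  static short samples[44100];
                  for (int i = 0; i < 44100; i++)
                      samples[i] = (short)(32767.0 * sin(2.0 * 3.14159265358979 * 440.0 * i / 44100.0));

                  ao_play(dev, (char *)samples, sizeof samples);

                  ao_close(dev);
                  ao_shutdown();
                  return 0;
              }

          (Build with something like: cc tone.c -lao -lm.)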

          One thing to watch out for is distros installing an ALSA default pcm that automatically spawns pulseaudio and directs output to that. Such configuration can be deleted.

      • (Score: 3, Insightful) by mth on Tuesday April 11 2017, @07:03AM

        by mth (2848) on Tuesday April 11 2017, @07:03AM (#492162) Homepage

        So, what's the alternative? Should everyone just settle for whoever has the first-mover advantage in low-level software, regardless of the relative merits of each solution? Should all FOSS projects be equal, though some more equal than others?

        The merit of Mir was never clear. It was trying to solve the same problem as Wayland, in a similar but incompatible way. To justify the extra effort needed to support it, it would have to be significantly different from, or better than, Wayland.

        I would not want to see Mir ditched just because it will make things hard for a few people for a while. I'd rather it lives or dies on its own merits. If it's a better solution, then the FOSS ecosystem as a whole will be ultimately better off for it.

        It was making things hard for parts of the system that were already understaffed. So when Canonical decides to throw money at creating more work for the developers rather than helping them out, I can understand why they're not lining up to support Mir.

      • (Score: 3, Insightful) by urza9814 on Tuesday April 11 2017, @07:13PM (1 child)

        by urza9814 (3954) on Tuesday April 11 2017, @07:13PM (#492405) Journal

        I agree that standards do make things much easier for everyone. However, deciding on and implementing a standard will always be much more straightforward and less painful under an organisational hierarchy. While the FOSS model will ultimately decide a standard or standards through community usage (hopefully based on merit, but you never know...), you have to accept that the path to doing so will be more messy and painful.

        You come very, *very* close to hitting the point I've been screaming in my head all the way through this discussion:

        We need standard INTERFACES, but as much as possible we should avoid standard IMPLEMENTATIONS.

        How many mail clients are there? Friggin thousands probably. Which one do you need? Well...for most users, pick one that looks pretty, it really doesn't matter. They all do IMAP and POP3 and SMTP and all of that, so as long as they talk to the server the same way, you can switch from one to the other to the next ten times a day every day and it doesn't matter. Your mail server won't care, your desktop won't care, you aren't going to be forced to reinstall your OS or rip apart half your system or write new code to make that change.
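
        And the reason they're interchangeable is that the "interface" is just a documented wire protocol. Here's a bare-bones sketch of the client side of an SMTP session (RFC 5321) over a plain socket; the hostname and addresses are placeholders, and a real client obviously checks the reply codes rather than just printing them. Any program that emits these same lines works with any conforming server, which is the whole point:

            #include <netdb.h>
            #include <stdio.h>
            #include <string.h>
            #include <sys/socket.h>
            #include <sys/types.h>
            #include <unistd.h>

            /* Send one line of the SMTP dialogue, then print the reply. */
            static void chat(int fd, const char *line)
            {
                char buf[512];
                ssize_t n;

                write(fd, line, strlen(line));
                write(fd, "\r\n", 2);
                n = read(fd, buf, sizeof buf - 1);
                if (n > 0) {
                    buf[n] = '\0';
                    printf("S: %s", buf);
                }
            }

            int main(void)
            {
                struct addrinfo hints, *res;
                char banner[512];
                ssize_t n;
                int fd;

                memset(&hints, 0, sizeof hints);
                hints.ai_socktype = SOCK_STREAM;
                if (getaddrinfo("mail.example.org", "25", &hints, &res) != 0)
                    return 1;

                fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
                if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) != 0)
                    return 1;

                n = read(fd, banner, sizeof banner - 1);   /* "220 ..." greeting */
                if (n > 0) { banner[n] = '\0'; printf("S: %s", banner); }

                chat(fd, "HELO client.example.org");
                chat(fd, "MAIL FROM:<alice@example.org>");
                chat(fd, "RCPT TO:<bob@example.org>");
                chat(fd, "DATA");
                chat(fd, "Subject: hello\r\n\r\nAny client speaking SMTP can send this.\r\n.");
                chat(fd, "QUIT");

                freeaddrinfo(res);
                close(fd);
                return 0;
            }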

        Obviously there are cases where the interface must change. There are places where you must break backward compatibility. But if you do it right, you can swap the new one in and pull the old one out without anyone really noticing. And then you make your changes and your custom extensions, IN AN OPEN AND TRANSPARENT WAY (ie, no M$ E-E-E strategy). Monoculture means attacks are easier (you already know what software to target) and more profitable (the same attack is effective against everyone). So it's a security risk, as well as just making it harder to build my computer the way I want.

        Parent mentions that we tend to target a single kernel (ie, the Linux kernel)...but I work with several different kernels every day and I barely notice the differences. Linux (in a half dozen different versions, and a few different architectures), AIX, FreeBSD (also various architectures)...yes, some software only runs on x86, some only runs on ARM, some only runs on BSD, some only runs on Linux. But the vast majority of stuff I use works on any of them, because they're all Unix, so they all share a pretty common interface. We already run the same display server with many different kernels under it and many different display managers above it...so I don't see why we can't run multiple display servers too.

        My dream is a system where I can swap out any single component at any time with virtually no impact. When the next Heartbleed comes, targeting OpenSSL, half the world might already be safe because LibreSSL doesn't have the bug, and the other half can just switch over as soon as their bosses approve the change. Meanwhile OpenSSL gets their stuff fixed and maybe the next attack targets Libre. Just like how I decided one day I didn't like KDE, so I switched to Enlightenment. And I can still use the same apps, and the same backends, and didn't really need to change the system configuration in any way. Our networks would be in a much better state if the entire stack could shift that easily. But instead we're going in the other direction, with systemD incorporating as much of the system as possible, and now the major display managers starting to rely on systemD components as well...we've already passed the point where people who ask how to remove systemD are generally advised to just reinstall a whole different distro, if not switching to an entirely different kernel. That's absolutely insane.

        • (Score: 1, Insightful) by Anonymous Coward on Wednesday April 12 2017, @01:16AM

          by Anonymous Coward on Wednesday April 12 2017, @01:16AM (#492562)

          Nobody in Linuxland with its fetishization of "freedom" will be part of a standardization process. That would require compromise, leadership, and organization. Thus, the competing "standards", where "standard" means a single implementation. The only reason we got Linux was because Linus and the rest had the POSIX standard to implement, and it got a GUI because X11 was already written for Unix. All it needed was some adaptation. When there was a choice, Linux has been kind of dismissive of existing standards because Linux developers think they can do better. History shows that usually they can't.