Phoronix has an article up about some interesting ideas of Lennart Poettering about what could be a possible future for Linux:
Lennart Poettering of systemd and PulseAudio fame has published a lengthy blog post that shares his vision for how he wishes to change how Linux software systems are put together to address a wide variety of issues. The Btrfs file-system and systemd play big roles in his new vision. Long story short, Lennart is trying to tackle how Linux distributions and software systems themselves are assembled to improve security, to deal with the challenges upstream software vendors face in integrating into many different distributions, and to address his view that "the classic Linux distribution scheme is frequently not what end users want."
(Score: 1, Insightful) by Anonymous Coward on Monday September 01 2014, @10:48PM
Unfortunately it is a mix of 'yeah we need that' and 'wtf'.
The wtf is going to create a shitstorm.
Some distros seem intent on being unique when they do not really need to be. For example, see the installer wars of the 90s. We ended up with no fewer than 5 different ways to install software on Linux. That's not good. That's confusing. I can't take my knowledge and move around between distros. I have to look up each one to see what damn quirk they put in. I can't assume that the upstream hasn't tweaked the package in some odd way. I have to look up the documentation every time.
We ended up with several dozen versions of SysV init scripts. So as a developer I have to learn the quirks of 3-4 of the major ones and hope someone from the communities of each will help me out.
Some of this crap, like 'how do we start our OS' and 'how do we install our OS', should have been bashed out years ago. Instead it's been used as part of the identity of each distro, as if that were a 'good' thing. We should circle the wagons a bit and move experimentation on such simple core stuff out to forked distros.
So yeah, we *need* this junk. However, it's 'let's drag the whole kitchen sink in'. So instead of a 'yeah, that's a better way' we end up reinventing tried-and-true tools and putting them in the init process. You can argue the init service should just 'start it and watch it' and let each module handle whatever it is doing. Instead it's a generic system trying to do the job of specific tools, reinventing the same issues we have dealt with for years in Windows. Even MS is trying to pull its OS apart into more modules, not fewer.
(Score: 1) by Anonymous Coward on Monday September 01 2014, @11:28PM
5 different ways to install software on Linux. That's not good
One Microsoft Way:
It's not just an address, it's the thinking that gets you an easily-exploited monoculture.
No, thanks.
-- gewg_
(Score: 0) by Anonymous Coward on Monday September 01 2014, @11:39PM
Just 5? That all? Guess we need to work on improving that number.
(Score: 2) by el_oscuro on Tuesday September 02 2014, @12:53AM
It seems that there are about 4 main ways to install software on Linux these days:
1. sudo apt-get install from debian repository
2. sudo yum install from yum repository
3. extract from tarball
4. ./configure, make, and make install
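For methods 3 and 4, a minimal sketch of what that looks like in practice. The tiny tarball fabricated here (the `foo-1.0` name and contents are made up) is just a stand-in for a real upstream release archive, so the extraction commands are self-contained:

```shell
# Method 3: extract from a tarball. We build a throwaway archive first
# so there's something to extract; normally you'd download it from upstream.
mkdir -p /tmp/demo-src/foo-1.0
echo 'echo foo 1.0' > /tmp/demo-src/foo-1.0/foo.sh
tar -czf /tmp/foo-1.0.tar.gz -C /tmp/demo-src foo-1.0

tar -xzf /tmp/foo-1.0.tar.gz -C /tmp   # the actual "install": unpack it
ls /tmp/foo-1.0                        # extracted sources/scripts land here

# Method 4, for autotools-based projects (shown as comments; it needs
# real sources with a configure script to actually run):
#   ./configure --prefix=/usr/local
#   make
#   sudo make install
```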
By contrast, EACH program in Windows has a completely separate installer. Most involve extracting from a zipfile, running a setup.exe or .msi installer, then following some prompts. The process can vary considerably by program. Updates and patches are also handled differently by each program instead of a central update process like debian and yum repositories have.
I'll take the Linux installation processes anytime over Windows.
SoylentNews is Bacon! [nueskes.com]
(Score: 0) by Anonymous Coward on Tuesday September 02 2014, @01:11AM
What is with all this sudo rubbish :P I only ever use that on OSX.
IIRC there was a proposal a couple of years back for fat binaries and OSX-style .app folders on Linux. It didn't get very far, although I was supportive. If LP is involved in realising something like this, I withdraw any and all support. Furthermore, the only 'sand-boxing' I want to see is around systemd and its access to anything that is not part of the fucking init system.
(Score: 2) by tangomargarine on Tuesday September 02 2014, @03:04PM
You think installing software shouldn't require rights elevation? WTF? And Windows does it, too, only they call it UAC, so it's on all 3 of Windows, Mac, and Linux.
IIRC there was a proposal a couple of years back for fat binaries and OSX style .app folders on linux.
Oh, you mean all installed programs should be "portable"?
It didn't get very far
Hence why we're continuing to do it the sane way (sudo).
"Is that really true?" "I just spent the last hour telling you to think for yourself! Didn't you hear anything I said?"
(Score: 3, Informative) by cykros on Tuesday September 02 2014, @07:54PM
Installing software DOESN'T require privilege escalation. Nothing stops you from installing into your own ~/bin, ~/lib, etc.
Installing it system-wide, of course, does, and should, as system-critical files and directories require elevated privileges to write to them for a good reason.
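As a minimal sketch of that user-local route (the one-line `hello` script here is a throwaway stand-in for any real binary or script you've built or downloaded):

```shell
# Install a tool into your own ~/bin with no root access required.
mkdir -p "$HOME/bin"
printf '#!/bin/sh\necho hello from userland\n' > "$HOME/bin/hello"
chmod +x "$HOME/bin/hello"

# Many distros add ~/bin to PATH automatically when it exists; if not:
export PATH="$HOME/bin:$PATH"

hello   # prints "hello from userland"
```

The same idea scales up: autotools projects take `./configure --prefix="$HOME/.local"` so that `make install` never needs sudo.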
(Score: 2) by tangomargarine on Tuesday September 02 2014, @08:24PM
Ah, yes, good point. I'm dimly aware there's some debate about where to install stuff to that I'm not privy to.
"Is that really true?" "I just spent the last hour telling you to think for yourself! Didn't you hear anything I said?"
(Score: 2) by cykros on Monday September 15 2014, @09:01PM
It's less a matter of debate and more just something to consider when installing software. By installing system-wide, you're generally aiming to make the software available to more than one user on the system, whereas by installing into a home directory you're putting in software that is only to be used by that user (with normal permissions set up, anyway). That makes system-wide installs more suitable if you are using a multi-user system, such as a public shell server or shared work server. Furthermore, the decision can be affected by your partitioning (or extra-drive) scheme. In many cases, installing software into your /home partition means you'll retain it even if you choose to reinstall your root system.
(Score: 2) by DECbot on Tuesday September 02 2014, @02:06AM
I believe that you left out dpkg, but I understand why you might leave that off the list. If there's a .deb, there's likely already a package in the repository, or you can just add the PPA. There aren't too many systems out there that'd have dpkg and not have apt.
cats~$ sudo chown -R us /home/base
(Score: 1, Informative) by Anonymous Coward on Tuesday September 02 2014, @08:06AM
dpkg is what apt-get works on top of. It's the same thing, just lower level. Normally you shouldn't touch dpkg directly.
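A quick sketch of that layering: apt resolves dependencies and fetches packages, then hands the individual .debs to dpkg. The query below is safe to run; the install commands are left as comments since they need root and a real .deb (the `foo_1.0_amd64.deb` filename is made up):

```shell
# dpkg queries work directly; apt is the dependency-resolving layer above it.
if command -v dpkg >/dev/null 2>&1; then
    dpkg -s dpkg | head -n 3    # status of the dpkg package itself
else
    echo "not a dpkg-based system"
fi

# For comparison (needs root; sketch only, don't run blindly):
#   sudo dpkg -i foo_1.0_amd64.deb   # install one .deb, no dep resolution
#   sudo apt-get install -f          # let apt pull in any missing dependencies
```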
(Score: 2, Funny) by citizenr on Tuesday September 02 2014, @04:21AM
You missed Lennart Poettering's preferred one:
/bin/systemd -e -x89fe85aa6 /u "name of package to install goes here"
Of course, it's temporary; the plan is to migrate this interface to IPC only.
(Score: 0) by Anonymous Coward on Tuesday September 02 2014, @07:34AM
Of course that's temporary. Didn't you read that he is now interested in btrfs? I'm sure he'll ultimately just migrate all systemd functionality into btrfs, making sure that you cannot boot a Linux system from anything else but btrfs.
(Score: 2) by VLM on Tuesday September 02 2014, @12:07PM
On the bright side, after that destruction is unleashed, going back to JCL on MVS/360 will be an upgrade, so we can expect some retrocomputer action once modern linux is ruined.
(Score: 2) by meisterister on Wednesday September 03 2014, @05:57AM
Wait, don't give him any ideas!
BTW, I'm going to BSD if this SystemD stuff goes on for much longer.
(May or may not have been) Posted from my K6-2, Athlon XP, or Pentium I/II/III.