I voted for ext2/ext3 but use ext4. It is probably one of the most well tested and understood operating systems (second to the FAT family). For the longest time, people have tried to get me to use different ones, but I always end up back with the ext2/ext3/ext4 family. Most other file systems I've used (which include HFS, FAT, NTFS, BTRFS, ReiserFS, and BeFS) have all lost data on me due to corruption of some kind or another. The winner is BTRFS: I installed the OS, rebooted, used the system for a bit, shut it down, started it up the next day, and when I rebooted it to install updates, it balked due to disk errors; from formatting to corruption in less than 6 hours.
Ditto. ext4, and I have used it on my desktop systems since before it entered the mainline kernel tree. For production systems I manage, they got ext4 root filesystems as soon as it was rolled into Debian/Ubuntu as a selectable filesystem option, and ext4 for data partitions earlier than that -- pretty much as soon as the FS was no longer marked "experimental" in mainline. For DB loads especially, ext4 is noticeably better than ext3.
Yeah, the eds are noobs. I'll fix it real quick...
I voted for ext2/ext3 but use ext4. It is probably one of the most well tested and understood operating systems (second to the FAT family).
Except for the fact that ext4 is a filesystem, not an operating system.
Something similar. Tested and understood - I keep reading about the advantages of alternative file systems. But, when it comes time to create a file system, I just rely on what I know. Ehhh - call it lazy, I guess.
Ditto the experience with ReiserFS. I got half excited about it, all those years ago. I worked with it for awhile, had some problems, then the author went apeshit, then to prison. So much for reliable support. I just fell back to Ext4, because it just works.
Can I vote more than once?... I simply don't have a single "everyday" computer, I have 3: a Mac tower where I code some every day, a Windows 8 box where I game some every day, and a MacBook that I use lying on the couch or in bed to watch stupid YouTube videos.
yeah, i have been using CBM dos recently... sometimes for longer than I do my NTFS, EXT3 exFAT and FAT32 storage.
Came here to say this, I've got a Win10 box and a Linux box.
I assumed the question meant the computer you're currently using.
"Your computer" in the question suggests that the it is assumed that you have and use only one computer. I use several over the course of the day so could justifiably answer NTFS, HFS+ and ext4, plus FAT32 if we're to count ubiquitous portable storage devices, but we're not allowed multiple answers.
And what about the people who spend all day using their Apple pocket computers? What file system do iPhones use and why isn't that an option?
This shit is getting olde.
Use the Voting Booth.
I do not have a "system", I have "systems", some with BTRFS, ext4, ext2, FAT, exFAT, NTFS, ...
So what is the goal?
The main system I'm on is a Win 7 PC, used mainly for gaming, PLC software which is Windows-only, and everyday nonsense. NTFS. My two Linux dev PCs are both running EXT4 on their main disks. Though one has a 4TB mdadm RAID 1 formatted XFS.
Linux laptop, a Lenovo T410i, is EXT4 on an SSD. Funny story about that laptop. Before I put Linux on it, I bought an active DisplayPort to HDMI converter to hook it to my TV. No audio. Searched the net and found that Lenovo never included DP-HDMI audio pass-through in their driver and there was NO FIX. Booted Linux Mint from a thumb drive and opened a video only to have audio play through the HDMI port. Windows 7: 0, Linux: 1.
Another thing that drove me crazy was the way stupid Lenovo designed their fucked up boot process. The 120GB disk has three partitions in this order: 100MB boot, ~115GB system (Windows), 4GB restore. Now the restore partition is at the end of the disk. You'd think those pricks would put it after boot like Dell does, right? Nope. So when I tried moving the whole mess to a 256GB SSD and expanding the system partition, the boot partition for some reason looks for the restore partition at a specific sector. If it doesn't see the restore partition, you get an error and it won't boot. WTF! I dug around but found nothing to help me fix the problem. So the only way to use the entire 256GB SSD was to copy the partition layout verbatim and create another partition after the fucking restore partition. That was when the Mint boot drive was plugged back in and Windows was obliterated.
I had a friend who had a similar problem. I believe the way he got around it was to boot into Windows, create the partition for the additional data after the restore partition, and then use the Disk Manager to create a spanned volume of the two partitions. To Windows, it appeared as just one large partition.
I believe LVM on Linux can do a similar thing.
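For anyone curious, a rough sketch of how that might look with LVM; the device names (/dev/sda5, /dev/sda6) and the volume group/LV names are placeholders for whatever partitions you actually have:

```shell
# Pool two partitions into one logical volume (hypothetical device names).
pvcreate /dev/sda5 /dev/sda6             # mark both partitions as LVM physical volumes
vgcreate datavg /dev/sda5 /dev/sda6      # group them into one volume group
lvcreate -l 100%FREE -n spanned datavg   # one LV spanning all the free space
mkfs.ext4 /dev/datavg/spanned            # format it as a single filesystem
```

Like the Windows spanned volume, this just concatenates the partitions; there's no redundancy, so losing either partition loses the volume.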
You are using a boot manager and probably UEFI booting. You probably have to mess around with bcdedit to get it to work correctly. That 100MB partition is a telltale sign of it.
I actually like FAT32. I don't like the 4GB file limit, but aside from that it has everything a file system should have. It stores files, and it retrieves files. Sure, it has approximately zero attempt at error tolerance/recoverability, but this is made up for by the ease of creating backups (and the fact that FAT32 can be accessed by nearly anything). I can duplicate a whole partition with one XCOPY. How do you duplicate a file system which is encumbered by metadata, forks, permissions, multiple versions of the same file, symbolic links, etc.? With Windows on NTFS (IIRC it was Win7) I couldn't even use DIR /S anymore because one of their stupid USER\LOCALS~1\APPLIC~1 or some such directory linked back on itself and got the system stuck in an infinite loop.
Over-achieving file systems have been a PITA since the old days. On 68K Macs you could download files with a terminal program but couldn't do anything with them without somehow getting the file type/creator codes sorted out. One time I used LHA to make an archive of my entire Amiga harddisk, and then later when I tried to restore everything it was somewhat screwed up because of metadata that hadn't been preserved.
I duplicate with cp -a, the GNU command. Preserves permissions, symlinks, everything. For backups I use mksquashfs. E.g., "mksquashfs ./* ../mybackup.sqfs -comp xz -Xbcj x86 -noappend -no-xattrs"
The FreeBSD handbook used to say that dump was the best. I am curious whether you can dump with one filesystem, then restore with another.
The reason I want to test it is that I don't see any obvious distinctions between the BSD (using UFS, presumably) and Linux (using ext2 variants, presumably) versions of the tools.
If both file systems respect all the same security metadata - then probably yes. Obviously, you couldn't dump to a FAT file system, then restore all the security. But, you do specify *nix-like file systems, so you could probably get the job done.
but aside from that it has everything a file system should have
Really? I can think of a lot of things that FAT32 lacks. Off the top of my head:
That's just the list of basic features that I'd expect from a filesystem. Things like constant-time snapshots, support for large numbers of small files or for very large files, journalling, dynamic filesystem sizes, and so on are so far beyond its capability that it's not even worth comparing them.
There are two FAT tables on the disk. The problem, as with any mirror system, is that when you find an inconsistency the software has to guess which copy is correct or compare the two. Windows and most disk checkers assume that most people can't tell which copy is right without a side-by-side comparison, or won't want to go through the trouble, so they just report the error and guess.
Really? I can think of a lot of things that FAT32 lacks.
Yes, but you assume this is a bad thing...
An efficient way of storing small files. A 100-byte file will use 64KB on disk on a large FAT32 filesystem. Not much for a single file, but that overhead adds up really quickly when you have a lot of small files.
Why rely on the file system to do your job for you? Combine the small files into one large file yourself. For example, DOOM had its game data stored in a .WAD file instead of cluttering up your disk with a thousand bitmaps. (Lazy developers might do like Torchlight and just stuff all 20,000 files of their game assets into one giant .ZIP file. And then wonder why their game spends so much time loading.)
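The small-file overhead quoted above is easy to put numbers on. A quick back-of-the-envelope sketch in Python, assuming 64 KiB clusters (the largest FAT32 allows; smaller volumes use smaller clusters):

```python
# Slack-space estimate for FAT32: every file occupies a whole number of
# clusters, so a 100-byte file on a 64 KiB-cluster volume still consumes
# one full 64 KiB cluster.
CLUSTER = 64 * 1024  # assumed cluster size in bytes

def on_disk_size(file_bytes, cluster=CLUSTER):
    """Bytes actually consumed on disk (minimum one cluster)."""
    clusters = max(1, -(-file_bytes // cluster))  # ceiling division
    return clusters * cluster

print(on_disk_size(100))  # 65536 -> a 100-byte file eats 64 KiB
# 10,000 small files of 100 bytes each waste about 650 MB of pure slack:
print(sum(on_disk_size(100) - 100 for _ in range(10_000)))  # 654360000
```

Packing assets into one archive, WAD-style as described above, sidesteps exactly this per-file rounding.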
Support for even UNIX file permissions, let alone ACLs. Any metadata support for storing things like security labels for use with a MAC framework.
This is a good example of something I don't even want to deal with on my personal, single-user system.
An efficient allocation policy for large files. FAT uses a linked list of blocks to manage the free list on disk, so to get even vaguely efficient performance you need to cache the whole thing in RAM. To efficiently allocate large files, you must also construct another data structure to sort contiguous chunks by size. This burns RAM, and it's also very easy for these structures to become inconsistent with respect to the real filesystem.
It's true that this is not ideal. If I were designing my own filesystem I would not implement it this way. But still, if you have 15,000,000 clusters, with 32 bits per cluster making up the FAT, that's 60MB of RAM. Not a huge deal when you have GBs of RAM. AMD's Radeon drivers waste more RAM than that.
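The arithmetic above checks out; a quick sanity check, using the same figures from the comment:

```python
# Caching the whole FAT in RAM for a large FAT32 volume:
# 15,000,000 clusters at 32 bits (4 bytes) per FAT entry.
clusters = 15_000_000
fat_bytes = clusters * 4
print(fat_bytes)  # 60000000 -> 60 MB in decimal units (~57 MiB)

# For scale: with 64 KiB clusters, that many clusters is roughly a 915 GiB volume.
print(clusters * 64 * 1024 // 2**30)  # 915
```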
Any kind of resilience at all. I'm not even talking about things like ZFS's block-level checksums. There's absolutely no redundancy and no checksum storage.
The storage device itself does data integrity checking. So basically the error would have to occur on the motherboard somewhere. This is possible, but experience suggests that it is pretty rare. Back in the old days I would test system stability by creating a .ZIP file and then extracting it again to watch for CRC errors. I found a lot of flaky RAM, motherboards, ribbon cables, etc. by doing this. Although I think the worst instances of file system corruption were caused by programs running wild, writing garbage to the disk because of Win9x's half-assed memory protection. But ever since L2 cache got integrated with the CPU, and Intel and AMD started supplying most of the chipsets, flaky hardware has largely disappeared (except for laptops with chronic overheating problems). The only times I've had file system corruption on hardware of this millennium is when a driver BSODs, and then I might get some lost clusters or cross-linked files, which I usually just delete and go about my business.
One time I used LHA to make an archive of my entire Amiga harddisk, and then later when I tried to restore everything it was somewhat screwed up because of metadata that hadn't been preserved.
From what I remember, Amiga files did not have metadata. If you wanted to associate information with a file (basically icon and program parameters), you needed to put it in an associated .info file (e.g., foo.info for file foo), which was an ordinary file with a specific data structure.
I think there was a flag marking executable files as such. Although it could have been some other issue, I'm not certain at this point.
Bad news for the btrfs folks that it's not even a *choice* here--and no one seems to be complaining about it.
I use ext4 almost everywhere on nonremovable devices, with the exception of a 400-ish GB btrfs partition mounted with lzo compression, where I store kernel source code & kernel compiles.
I tried btrfs with the default gzip first (I have a nice fast processor on that machine) but found that I could pretty reliably generate a kernel panic just by (a) using cp to put a bunch of files on it, or (b) compiling a kernel whose sources are hosted there. Changing to lzo made the problem go away. Thought about filing a bug but figured "your file system is crashy" was probably already reported.
I use a raid 1 of two 1TB drives with BTRFS for file backup. I would have preferred to use ZFS, but BTRFS has better linux support and the main feature I wanted from ZFS: checksums on data and metadata. It's reassuring to start a BTRFS scrub and see the results come back OK without having to run hashdeep or some other SHA/MD5 utility on my files.
That said, hashdeep is still great: http://md5deep.sourceforge.net/ [sourceforge.net]
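For anyone who hasn't used it: the core of what hashdeep does is simple enough to sketch. A minimal, hypothetical stand-in in Python (no audit mode, no match files; it just walks a tree and records one SHA-256 per file so the set can be re-checked later):

```python
# Minimal hashdeep-style tree hasher: one SHA-256 digest per file.
import hashlib
import os

def hash_tree(root):
    """Return {relative_path: sha256_hexdigest} for every file under root."""
    digests = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            h = hashlib.sha256()
            with open(path, "rb") as f:
                # Read in 1 MiB chunks so large files don't load into RAM at once.
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    h.update(chunk)
            digests[os.path.relpath(path, root)] = h.hexdigest()
    return digests
```

Run it once, stash the dict, run it again later and diff the two to spot silent changes; which is exactly the kind of check a btrfs scrub gives you for free at the block level.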
I have seen numerous comments that BTRFS dies under moderate load.
My impression was that it was supposed to be a GPL-compatible imitation of ZFS. It seems concerning that something that is supposed to enhance file integrity actively makes it worse.
I wonder how much of the problems reported are due to hardware problems like too little or even bad memory. I have been putting off trying ZFS for years due to lack of machines with the recommended 4GB of (ECC preferred) memory. Essentially, the "server grade" filesystem requires (what used to be) "server grade" hardware to be used effectively.
Of course, now I hear that both ZFS and BTRFS are maintained by Oracle. If Oracle wanted, they could simply release ZFS under a GPL compatible license.
You may have seen some old reports. I have been using BTRFS for a couple years with no issues at all. It's set up for mirroring. I do recommend staying away from other raid modes though since the last time I tested that it crashed and burned under a simulated disk failure. "RAID1" worked fine under the same test.
If Oracle wanted, they could simply release ZFS under a GPL compatible license.
This is true if and only if Oracle is the only contributor to ZFS. Is this the case?
Good question. I don't know, and am too lazy to look it up at the moment.
Even if there are outside contributors, if they are limited in number, they can be asked to sign off on a GPL-compatible license as well. Code contributions from hold-outs can be re-written in the worst-case.
Code contributions from hold-outs can be re-written
You mean reimplement an internal API? The legal feasibility of that depends on whether Oracle loses its appeal in another pending lawsuit.
No. ZFS was forked after the source code was released in 2005, but it was released under the CDDL, and later that year the FSF concluded that the CDDL was not legally compatible with the GPL; that is why it is not in the Linux kernel.
So the actual development of ZFS is very federated. You have the BSDs and Linux projects each porting from the Sun code while Oracle still controls the closed-source code used in Solaris. OpenZFS is the umbrella project for ZFS that gives a common way to test compatibility between different ZFS ports. It was determined that versioning ZFS was impractical because the distributed development did not support the use of common release numbers (and then you'd likely have to work with Oracle). So a flag system was implemented by which ZFS file systems can be shared between different ZFS ports if the receiving system supports the flags used by the sending system.
I guess Oracle could GPL their code, but then it would become a real license mess.
OpenZFS Wiki [wikipedia.org]
ZFS Wiki [wikipedia.org]
Tried to mod up, modded down instead :P
I prefer it, since I've had such good experience with ext4. It's the most stable filesystem I have ever seen, with remarkable error recovery. It's very hard to kill. All my systems use it as default. I have btrfs on a thumbdrive because I wanted the compression, but I've had btrfs die on me before. I wouldn't use it for mission critical stuff. ext4 is the best choice for stability on linux I'd say.
An elder nerd advised me to use XFS on my current array, and I have since had exactly zero issues with it. Even when a drives started to fail it recovered gracefully. The documentation is easy to read and the tools are easy to use (as far as filesystems go). It is important to note that so far I'm only trusting it to ephemeral data and nothing too important, but out of this 8 year long experiment I am wholly convinced to use XFS on the next array unless I come in to some sweet hardware that would make switching to ZFS or BTRFS worth while.
I have XFS on some large drives because after formatting it offered more available space than the ext (ext3, I guess) fs. Now, whether that is a fake advantage, simply a matter of preallocating stuff for metadata, I dunno. Anyway, with the only downside being that the fsck + badblocks options don't work, I never had problems with it.
XFS is a good filesystem. But it's not perfect. At work, we lost an entire openstack cluster just before Christmas, due to loss of the XFS storage. Likely a transient disk or memory hardware error, but it proved to be completely unrecoverable, even with the XFS tools.
Red Hat seems to be falling back to XFS + LVM in the absence of Btrfs being anywhere remotely near production readiness. But XFS doesn't go much beyond metadata journalling; it's still very much a filesystem of the 90s, albeit a good one. It doesn't do data journalling, it doesn't do block-level hashing/checksumming, and it can't self-heal or scrub itself. There is zero protection from data errors.
This is an area where there's a good bit of cognitive dissonance going on at the moment. The harsh truth of the matter is that Linux doesn't have a top notch native filesystem *at all* right now. You can use ZFS if you are able to use third-party modules. And at work we use expensive IBM GPFS stuff. But while Linux has a huge number of filesytems provided natively, they are all, for one reason or another, crap in different ways.
I've been trying out NILFS2 on a new system. So far it isn't crap, and its auto-snapshot capability has already saved me from an rm -r I later wanted to undo.
Not that I would describe ext4 and XFS as crap, either, though.
One nasty thing about XFS now is it can't be resized smaller. http://xfs.org/index.php/XFS_FAQ#Q:_Is_there_a_way_to_make_a_XFS_filesystem_larger_or_smaller.3F [xfs.org] and http://xfs.org/index.php/Shrinking_Support [xfs.org]
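In practice that means a grow-only workflow. A hedged sketch, with /dev/vg0/data and /mnt/data as placeholder names for your own LV and mount point:

```shell
# Growing an XFS filesystem online works fine:
lvextend -L +50G /dev/vg0/data   # grow the underlying volume first
xfs_growfs /mnt/data             # xfs_growfs operates on the mount point

# There is no shrink counterpart; going smaller means dump, re-mkfs, restore:
# xfsdump -f /backup/data.dump /mnt/data
# mkfs.xfs -f /dev/vg0/data      # after shrinking the LV
# xfsrestore -f /backup/data.dump /mnt/data
```

So if you might ever need to reclaim space from an XFS partition, it pays to leave the headroom in LVM rather than giving XFS the whole disk up front.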
I'd like to thank systemd for pushing me into the arms of freebsd and ZFS
Sure at the time I was all WTF is this why are they destroying Linux? Why ruin something that works? Like a gang of rabid deletionists on wikipedia they just can't tolerate not destroying stuff that works...
But the grass really is greener on the other side in *BSD-land. It really is better engineered and better designed and "just works" more often.
I think the *BSD foundations and corporations should donate money toward systemd development; best recruitment tool EVER. What *BSD needs now is for Linux to grow another couple of competing audio systems and maybe integrate GRUB into systemd (if it's not already). I know, make it impossible to run emacs on a systemd machine. Hell, make it impossible to run anything but NANO as an editor. And make systemd incompatible with gcc too. The future of *BSD is going to be awesome thanks to the systemd developers!
Why the hell do packages require systemd?!? Seriously, wtf.
You should weigh in on the SN OS thread..
I think the *BSD foundations and corporations should donate money toward systemd development
I nominated Lennart Poettering for a lifetime contributors award at one of the BSD conferences a couple of years back (sadly, he didn't get the award). Both PulseAudio and SystemD have caused spikes in FreeBSD adoption, and both have led to some very competent developers deciding to start contributing to FreeBSD. I'm looking forward to his next project. My guess is that it's going to be a shell with natural language processing integrated (and no POSIX sh compatibility) that he will persuade distributions to install as /bin/sh.
I call it systemd-poshd.
I voted by most representation. So, between work and home, there's ext3, reiserfs, zfs, reiserfs, ext4, and NTFS. The two reiser boxes were built before he was found guilty and have had no problems FS-wise -- they are also the two home machines that get the most use. One is a file server with LVM2 and fakeraid and the other is my desktop, again LVM2 and fakeraid. There might be more ZFS in the near future because the ext3/reiserfs machines are on Ubuntu 12.04 and I don't like the performance of 14.04 on my laptop (fresh install w/ ext4). So, when I get around to nuking and paving my laptop with FreeBSD, and perhaps if I get bold enough to attempt ZFS on 32-bit machines, ZFS will be the filesystem of my choice.
I'm still not sure what to do with the 12.04 LTS desktop when support stops. It runs great now, with a good number of games working under wine. Reinstalling everything under another OS will be a pain. Because of systemd, it won't pass the Hairyfeet challenge to get to 16.04 and I'm not emotionally ready for systemd or bsd on it. Worst yet, I don't have time to be a system admin to facilitate and tune a change nor the time to backport updates myself.
I used ReiserFS v3 for a long time. Tried v4 for a while. But that's all been over 10 years ago. Though btrfs is supposed to be all the things originally planned for Reiser4, plus more, so there's no longer any reason to use Reiser at all.
If it's an old laptop, just leave it alone. I still have my first nettop computer that came with Windows 98. It still works, but it is so slow and limited on memory it is useless for newer software. Chuck the old one into storage, buy a new, more powerful laptop and pave that one over instead.
Doing ZFS on 32-bit is bad news. If your CPU is 32-bit only you're better off with UFS on FreeBSD (also because it's likely single-core only). RAM isn't quite the issue many make it out to be with ZFS. I've run 3GB fine, and 2GB is pretty doable too with tuning, assuming you're not doing anything intense on the drive.
I'd do a "do_release_upgrade" to 14.04LTS and leave it there for now.. You've got two more years on 14.04 to figure out where you're going, Linux-wise.. I'm staying on 14.04 till close to EOL to see what my options are for a non-systemd distro, as I've tried 16.04 and I aint going there...Probably back to my "Linux_roots", that being Slackware...
I'm still not sure what to do with the 12.04 LTS desktop when support stops.
Given your username I would suggest VMS with Files-11.
QNX 4 has the most instances in use in my life.
I've never had an issue with it since I've started using it 5 years ago. Performance is very good and xfsfreeze is something everyone can appreciate after it was made a standard feature of the Linux kernel for any filesystem.
I have seven computers I use (well, one of them I need to find a Linux distro that will work on it before it's used): the future Linux box, whose file system will depend on distro; one large laptop running FAT32 (Windows 7); a small dual-boot laptop with Kubuntu and Win 7, FAT32; two Android tablets, and I have no idea what file systems they use; one Android phone; one Android TV that can read media files from a thumb drive, so I guess it's FAT32 too, although I suppose it could be FAT16 (not likely).
Linux on the desktop? Linux already won now that desktops are a minority of computers. These days, most computers (including phones and "smart" TVs) are running Android, meaning Linux has taken over the computing world.
What interests me is those of you who choose a file system rather than using whatever comes with an OS. What are your criteria for making your choice?
Most distributions either have or make it very easy to have the NTFS drivers. And they are useful for occasional use.
But WTF? For your computer's main file system?
The NTFS drivers are reverse engineered from an undocumented proprietary spec. Those drivers are maybe safe. Like having a Samsung Galaxy Note 7.
And the same story goes for HFS+ drivers. Occasionally useful for quick temporary compatibility with some media. But not for a system's filesystem.
The title of the poll is "what is your computer's file system?". The computer may be a Mac (HFS+) or, for the really unfortunate, Windows (NTFS). It is reported that there are people that run those, believe it or not.
I would love to know where on the spectrum you fall...
You can pry my copy of WinFS from my cold dead hands. But if I'm a zombie I'll bite ya.
ZFS on FreeBSD since a couple of years back (FreeBSD 10.0).
ZFS on Linux as the rootfs (works since Ubuntu 16.04, though you have to install by hand since the installer doesn't support it yet).
I've used Btrfs a good bit; it's the only file system I've ever used that resulted in repeated data loss, loss of the ability to write, and performance problems. Maybe it will eventually become production ready, but after it being terrible for so long, I'm not holding my breath.
Using ZFS over encrypted partitions, and the bottleneck is the encryption, not ZFS per se. And that because I'm running them on old CPUs without aesni support. Even on FreeBSD 11-STABLE amd64 boxes with only 4 GB RAM, which run just fine as desktop machines. So ZFS is actually a viable alternative. I was running UFS for a very long time, and wary of switching to ZFS for day-to-day uses, but I went to an all-ZFS-setup with the switch to FreeBSD 11 even on oldish hardware, and I'm not looking back. It's all too convenient to give up the upsides.
On the machines hanging off this KVM switch:
This (active) machine: ext3, ext4 and NTFS partitions (ex Primary/CAD Workstation, now test/transfer box)
The machine beside it: ext4 (Primary Workstation)
Next machine: NTFS and ext3 (Games Machine)
Next machine: NTFS, VFAT, ext3 (DAW box)
Next machine: ext4 and NTFS (Media/Graphics/CAD Workstation, Secondary Games Machine, Secondary DAW)
Next machine: ext3 and VFAT (Media streaming to either this monitor and/or the television)
The NFS/SMB mounts on these are exported/shared ext3 filesystems from the general server upstairs
FFS2 with full disk encryption on an SSD.
Works great here. 8 core Xeon with 32GB ECC memory.
Current machine has two NTFS partitions, three ext3 partitions and a FAT-32 partition.
Other machine has FAT and NILFS2, and various other devices scattered around my hovel have various incarnations of FAT, NTFS, and ext
Has anyone got anything bad to say about NILFS2?
I, like most people, have limited experience with NILFS2. The only bad things I can think of offhand are that NILFS has absolutely horrendous performance in certain circumstances (but that mostly only shows in benchmarks, not actual use) and that if the system cleaner isn't working properly, you can run out of disk space in a hurry (which murders performance, as every operation then requires a cleaning first).
MS's shenanigans convinced me to switch my computers to Linux more than a year ago. My system disks or partitions are thus formatted with ext4, but all my data are still in the NTFS disks/partitions from the Windows days.
Put it on an SSD and you've got UFS running on UFS.
I had to wait and see what would be the top answer to the poll, so I could win.
Seriously though, I tried XFS for about two weeks and it blew up. I tried ZFS and the people on #freebsd called me an idiot for using ZFS with 2GB because I asked why it was so slow.
I use ext4. For normal home-user use cases, ext is fast... super fast! The fastest, most stable filesystem I have ever used.
I love zpool, but I need speed more.
I've been running a many-TB fileserver with JFS as its filesystem for many years with great success. I've also used JFS as the filesystem for the operating system on multiple Linux installs, also with great success. It's my go-to filesystem on Linux, and will remain so for a while. I initially picked it because it was mature code and it was oriented towards using a minimum of resources.
Before that I had used ReiserFS, but due to the main programmer of that filesystem being incarcerated, I chose to migrate away from it.
Gotta keep those punch cards organized!
Recently converted from all EXT4 to mostly BTRFS and just a couple of small partitions using EXT2.