  • (Score: 2) by TheRaven (270) on Saturday February 11 2017, @04:07PM (#465799) Journal

    but aside from that it has everything a file system should have

    Really? I can think of a lot of things that FAT32 lacks. Off the top of my head:

    • An efficient way of storing small files. A 100-byte file will use 64KB on disk on a large FAT32 filesystem. Not much for a single file, but that overhead adds up really quickly when you have a lot of small files (see the sketch after this list).
    • Support for even UNIX file permissions, let alone ACLs.
    • Any metadata support for storing things like security labels for use with a MAC framework.
    • An efficient allocation policy for large files. FAT uses a linked list of blocks to manage the free list on disk, so to get even vaguely efficient performance you need to cache the whole thing in RAM. To efficiently allocate large files, you must also construct another data structure to sort contiguous chunks by size. This burns RAM and it's also very easy for these structures to become inconsistent with respect to the real filesystem, which brings me to:
    • Any kind of resilience at all. I'm not even talking about things like ZFS's block-level checksums. There's absolutely no redundancy and no checksum storage for any of the on-disk metadata structures, so it's really easy for a single-bit error to lead to massive filesystem corruption.
    • Support for hard or symbolic links.

    That's just the list of basic features that I'd expect from a filesystem. Things like constant-time snapshots, support for large numbers of small files or for very large files, journalling, dynamic filesystem sizes, and so on are so far beyond its capability that it's not even worth comparing them.
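
    A toy Python sketch of the small-file overhead and the FAT-as-linked-list points above, assuming 64KB clusters and a made-up eight-entry table (real FAT32 uses 28-bit entries and a range of end-of-chain markers, so this is only an illustration):

        # Toy sketch, not real FAT32 parsing: cluster slack for small files, and
        # the FAT as a linked list that has to be scanned to find free space.
        CLUSTER_SIZE = 64 * 1024            # 64KB clusters on a large FAT32 volume
        FREE, EOC = 0x0, 0x0FFFFFFF         # simplified free / end-of-chain markers

        def on_disk_size(file_bytes):
            # Every file occupies a whole number of clusters.
            clusters = max(1, -(-file_bytes // CLUSTER_SIZE))   # ceiling division
            return clusters * CLUSTER_SIZE

        print(on_disk_size(100))            # 65536 bytes on disk for 100 bytes of data

        # Entry i of the FAT names the cluster that follows cluster i.
        fat = [EOC, 3, FREE, 5, EOC, EOC, FREE, FREE]           # toy table

        def walk_chain(first_cluster):
            # Follow one file's cluster chain: a singly linked list stored in the FAT.
            c = first_cluster
            while c != EOC:
                yield c
                c = fat[c]

        def free_clusters():
            # Free-space lookup is a linear scan of the whole table.
            return [i for i, entry in enumerate(fat) if entry == FREE]

        print(list(walk_chain(1)))          # [1, 3, 5]
        print(free_clusters())              # [2, 6, 7]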

    --
    sudo mod me up
  • (Score: 0) by Anonymous Coward on Saturday February 11 2017, @06:17PM (#465841)

    There are two FAT tables on the disk. The problem, as with any mirror scheme, is that when an inconsistency is found, the software has to either compare the two copies or guess which one is correct. Windows and most disk checkers assume that most people wouldn't be able to tell which copy is right without a side-by-side comparison, or wouldn't want to go through the trouble, so they just report the error and guess.
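
    Roughly what that amounts to, sketched in Python with two made-up in-memory copies of the table (nothing here is actual chkdsk logic): the mismatch is detectable, but without a checksum there is nothing to say which copy is right.

        # Two mirrored FAT copies, no checksums: a disagreement can be found,
        # but the checker can only report it and pick a copy.
        def compare_fats(copy1, copy2):
            return [(i, a, b) for i, (a, b) in enumerate(zip(copy1, copy2)) if a != b]

        fat_copy1 = [0x0FFFFFFF, 3, 0, 5, 0x0FFFFFFF]
        fat_copy2 = [0x0FFFFFFF, 3, 0, 4, 0x0FFFFFFF]   # single corrupted entry

        for index, a, b in compare_fats(fat_copy1, fat_copy2):
            print(f"FAT mismatch at entry {index}: copy1={a:#x} copy2={b:#x}")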

  • (Score: 2) by shortscreen (2252) Subscriber Badge on Saturday February 11 2017, @09:16PM (#465890) Journal

    Really? I can think of a lot of things that FAT32 lacks.

    Yes, but you assume this is a bad thing...

    An efficient way of storing small files. A 100-byte file will use 64KB on disk on a large FAT32 filesystem. Not much for a single file, but that overhead adds up really quickly when you have a lot of small files.

    Why rely on the file system to do your job for you? Combine the small files into one large file yourself. For example, DOOM had its game data stored in a .WAD file instead of cluttering up your disk with a thousand bitmaps. (Lazy developers might do like Torchlight and just stuff all 20,000 files of their game assets into one giant .ZIP file. And then wonder why their game spends so much time loading.)
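
    For what it's worth, the .WAD idea boils down to something like the Python sketch below: concatenate the small files into one blob and keep a tiny index of offsets and lengths, so the filesystem only ever sees one large file (the file names are made up for illustration).

        import os

        def pack(paths, archive_path):
            # Append each small file to one big blob and remember where it landed.
            index = {}
            with open(archive_path, "wb") as out:
                for path in paths:
                    with open(path, "rb") as f:
                        data = f.read()
                    index[os.path.basename(path)] = (out.tell(), len(data))
                    out.write(data)
            return index

        def read_entry(archive_path, index, name):
            # Seek straight to the stored offset instead of opening another file.
            offset, length = index[name]
            with open(archive_path, "rb") as f:
                f.seek(offset)
                return f.read(length)

        # idx = pack(["sprite1.bmp", "sprite2.bmp"], "assets.dat")
        # data = read_entry("assets.dat", idx, "sprite1.bmp")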

    Support for even UNIX file permissions, let alone ACLs.
    Any metadata support for storing things like security labels for use with a MAC framework.

    This is a good example of something I don't even want to deal with on my personal, single-user system.

    An efficient allocation policy for large files. FAT uses a linked list of blocks to manage the free list on disk, so to get even vaguely efficient performance you need to cache the whole thing in RAM. To efficiently allocate large files, you must also construct another data structure to sort contiguous chunks by size. This burns RAM and it's also very easy for these structures to become inconsistent with respect to the real filesystem

    It's true that this is not ideal. If I were designing my own filesystem I would not implement it this way. But still, if you have 15,000,000 clusters, with 32 bits per cluster making up the FAT, that's 60MB of RAM. Not a huge deal when you have GBs of RAM. AMD's Radeon drivers waste more RAM than that.
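
    Back-of-the-envelope, with a volume size assumed here just to give roughly that many clusters:

        volume_bytes = 480 * 1024**3                    # assumed ~480GB volume
        cluster_bytes = 32 * 1024                       # 32KB clusters
        clusters = volume_bytes // cluster_bytes        # 15,728,640 clusters
        fat_bytes = clusters * 4                        # one 32-bit entry per cluster
        print(f"{clusters:,} clusters -> ~{fat_bytes // 1024**2}MB FAT cached in RAM")   # ~60MB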

    Any kind of resilience at all. I'm not even talking about things like ZFS's block-level checksums. There's absolutely no redundancy and no checksum storage

    The storage device itself does data integrity checking, so basically the error would have to occur somewhere on the motherboard. That's possible, but experience suggests it is pretty rare.

    Back in the old days I would test system stability by creating a .ZIP file and then extracting it again to watch for CRC errors. I found a lot of flaky RAM, motherboards, ribbon cables, etc. by doing this, although I think the worst instances of file system corruption were caused by programs running wild and writing garbage to the disk thanks to Win9x's half-assed memory protection.

    But ever since L2 cache got integrated into the CPU, and Intel and AMD started supplying most of the chipsets, flaky hardware has largely disappeared (except for laptops with chronic overheating problems). The only times I've had file system corruption on hardware from this millennium are when a driver BSODs, and then I might get some lost clusters or cross-linked files, which I usually just delete before going about my business.
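
    For what it's worth, the old .ZIP trick above takes only a few lines of Python today with the standard zipfile module (the file name and payload size are arbitrary): compress a chunk of random data, read it back, and let the CRC check plus a byte-for-byte compare flag any corruption.

        import os, zipfile

        payload = os.urandom(64 * 1024 * 1024)          # 64MB of random data

        with zipfile.ZipFile("stress.zip", "w", zipfile.ZIP_DEFLATED) as zf:
            zf.writestr("blob.bin", payload)

        with zipfile.ZipFile("stress.zip") as zf:
            assert zf.testzip() is None                 # CRC check of every member
            assert zf.read("blob.bin") == payload       # byte-for-byte compare

        print("no corruption detected this pass")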