  • (Score: 2) by shortscreen (2252) Subscriber Badge on Saturday February 11 2017, @09:16PM (#465890) Journal

    > Really? I can think of a lot of things that FAT32 lacks.

    Yes, but you assume this is a bad thing...

    > An efficient way of storing small files. A 100-byte file will use 64KB on disk on a large FAT32 filesystem. Not much for a single file, but that overhead adds up really quickly when you have a lot of small files.

    Why rely on the file system to do your job for you? Combine the small files into one large file yourself. For example, DOOM stored its game data in a .WAD file instead of cluttering up your disk with a thousand bitmaps. (Lazy developers might do what Torchlight did and just stuff all 20,000 files of their game assets into one giant .ZIP file, and then wonder why their game spends so much time loading.)
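
    For illustration, here's a minimal sketch in C of that pack-file idea. The format is made up for this example (a file count, a fixed-size directory of name/offset/size entries, then the raw data back to back); it is not the real WAD or Torchlight layout, and it ignores endianness and struct-padding portability.

        /* Minimal pack-file writer: combines many small files into one big
         * file so each one no longer wastes a whole FAT32 cluster.
         * Hypothetical layout: [uint32 count][count directory entries][data]. */
        #include <stdio.h>
        #include <stdint.h>
        #include <stdlib.h>
        #include <string.h>

        struct entry {
            char     name[56];   /* original file name, NUL-padded   */
            uint32_t offset;     /* where this file's data starts    */
            uint32_t size;       /* how many bytes of data it has    */
        };

        int main(int argc, char **argv)
        {
            if (argc < 3) {
                fprintf(stderr, "usage: %s out.pak file1 [file2 ...]\n", argv[0]);
                return 1;
            }

            uint32_t count = (uint32_t)(argc - 2);
            struct entry *dir = calloc(count, sizeof *dir);
            FILE *out = fopen(argv[1], "wb");
            if (!dir || !out) { perror("setup"); return 1; }

            /* Data starts right after the count and the directory. */
            uint32_t offset = (uint32_t)(sizeof count + count * sizeof *dir);
            fseek(out, (long)offset, SEEK_SET);

            for (uint32_t i = 0; i < count; i++) {
                FILE *in = fopen(argv[i + 2], "rb");
                if (!in) { perror(argv[i + 2]); return 1; }

                strncpy(dir[i].name, argv[i + 2], sizeof dir[i].name - 1);
                dir[i].offset = offset;

                char buf[4096];
                size_t n;
                while ((n = fread(buf, 1, sizeof buf, in)) > 0) {
                    fwrite(buf, 1, n, out);
                    offset += (uint32_t)n;
                }
                dir[i].size = offset - dir[i].offset;
                fclose(in);
            }

            /* Rewind and fill in the count and the directory at the front. */
            fseek(out, 0, SEEK_SET);
            fwrite(&count, sizeof count, 1, out);
            fwrite(dir, sizeof *dir, count, out);
            fclose(out);
            free(dir);

            printf("packed %u files into %s\n", count, argv[1]);
            return 0;
        }

    A reader then does one small header read plus a single seek per asset, instead of thousands of separate directory lookups and cluster allocations.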

    > Support for even UNIX file permissions, let alone ACLs.
    > Any metadata support for storing things like security labels for use with a MAC framework.

    This is a good example of something I don't even want to deal with on my personal, single-user system.

    > An efficient allocation policy for large files. FAT uses a linked list of blocks to manage the free list on disk, so to get even vaguely efficient performance you need to cache the whole thing in RAM. To efficiently allocate large files, you must also construct another data structure to sort contiguous chunks by size. This burns RAM and it's also very easy for these structures to become inconsistent with respect to the real filesystem.

    It's true that this is not ideal. If I were designing my own filesystem, I would not implement it this way. But still, if you have 15,000,000 clusters and a 32-bit FAT entry per cluster, the whole table is 60MB of RAM. Not a huge deal when you have GBs of RAM. AMD's Radeon drivers waste more RAM than that.
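
    To sanity-check that figure, the arithmetic is just one 32-bit entry per cluster; the 15,000,000-cluster volume is the example above, not a measured value:

        /* Rough size of a FAT32 allocation table mirrored in RAM.
         * One 32-bit entry per cluster; cluster count is the example above. */
        #include <stdio.h>
        #include <stdint.h>

        int main(void)
        {
            uint64_t clusters        = 15000000;   /* clusters on the volume      */
            uint64_t bytes_per_entry = 4;          /* each FAT32 entry is 32 bits */
            uint64_t fat_bytes       = clusters * bytes_per_entry;

            /* Prints: FAT in RAM: 60000000 bytes (~60 MB) */
            printf("FAT in RAM: %llu bytes (~%llu MB)\n",
                   (unsigned long long)fat_bytes,
                   (unsigned long long)(fat_bytes / 1000000));
            return 0;
        }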

    > Any kind of resilience at all. I'm not even talking about things like ZFS's block-level checksums. There's absolutely no redundancy and no checksum storage.

    The storage device itself does data integrity checking, so basically the error would have to occur on the motherboard somewhere. This is possible, but experience suggests that it is pretty rare. Back in the old days I would test system stability by creating a .ZIP file and then extracting it again to watch for CRC errors. I found a lot of flaky RAM, motherboards, ribbon cables, etc. by doing this. Although I think the worst instances of file system corruption were caused by programs running wild, writing garbage to the disk because of Win9x's half-assed memory protection.

    But ever since L2 cache got integrated with the CPU, and Intel and AMD started supplying most of the chipsets, flaky hardware has largely disappeared (except for laptops with chronic overheating problems). The only times I've had file system corruption on hardware of this millennium is when a driver BSODs, and then I might get some lost clusters or cross-linked files, which I usually just delete and go about my business.
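
    For what it's worth, that round-trip test boils down to checksumming the same data before and after it passes through the path you don't trust. A bare-bones sketch of the check in C, using the standard CRC-32 polynomial (the same one ZIP uses); the suspect path is left as a placeholder comment:

        /* Integrity round-trip check: CRC the data, push it through the path
         * you don't trust (RAM, cable, disk), CRC it again, compare. */
        #include <stdio.h>
        #include <stdint.h>
        #include <string.h>

        /* Bitwise CRC-32, reflected polynomial 0xEDB88320 (as used by ZIP). */
        static uint32_t crc32_buf(const void *data, size_t len)
        {
            const uint8_t *p = data;
            uint32_t crc = 0xFFFFFFFFu;

            while (len--) {
                crc ^= *p++;
                for (int bit = 0; bit < 8; bit++) {
                    if (crc & 1)
                        crc = (crc >> 1) ^ 0xEDB88320u;
                    else
                        crc >>= 1;
                }
            }
            return ~crc;
        }

        int main(void)
        {
            const char msg[] = "The quick brown fox jumps over the lazy dog";

            uint32_t before = crc32_buf(msg, strlen(msg));

            /* ... write the data out, read it back, copy it around, etc. ... */

            uint32_t after = crc32_buf(msg, strlen(msg));

            printf("CRC32 before: %08X  after: %08X  -> %s\n",
                   (unsigned)before, (unsigned)after,
                   before == after ? "OK" : "corrupted");
            return 0;
        }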
