  • (Score: 3, Interesting) by shortscreen on Saturday February 04 2017, @11:38PM

    by shortscreen (2252) on Saturday February 04 2017, @11:38PM (#462968) Journal

    I actually like FAT32. I don't like the 4GB file limit, but aside from that it has everything a file system should have. It stores files, and it retrieves files. Sure, it makes approximately zero attempt at error tolerance or recoverability, but that is made up for by the ease of creating backups (and the fact that FAT32 can be read by nearly anything). I can duplicate a whole partition with one XCOPY. How do you duplicate a file system that is encumbered by metadata, forks, permissions, multiple versions of the same file, symbolic links, etc.? With Windows on NTFS (IIRC it was Win7) I couldn't even use DIR /S anymore, because one of their stupid USER\LOCALS~1\APPLIC~1 or some such directories linked back on itself and sent the command into an infinite loop.
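
    For what it's worth, the one-command duplication looks roughly like this; the switches shown are only a sketch and vary a bit between DOS and Windows versions:

        REM copy everything from C: to D:, including empty directories,
        REM hidden/system files, and file attributes
        xcopy C:\*.* D:\ /E /H /K /C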

    Over-achieving file systems have been a PITA since the old days. On 68K Macs you could download files with a terminal program but couldn't do anything with them until you somehow got the file type/creator codes sorted out. One time I used LHA to make an archive of my entire Amiga hard disk, and later, when I tried to restore everything, it was somewhat screwed up because of metadata that hadn't been preserved.

  • (Score: 3, Informative) by Subsentient on Sunday February 05 2017, @01:23AM

    by Subsentient (1111) on Sunday February 05 2017, @01:23AM (#462985) Homepage Journal

    I duplicate with cp -a, the GNU command. Preserves permissions, symlinks, everything. For backups I use mksquashfs.
    E.g. "mksquashfs ./* ../mybackup.sqfs -comp xz -Xbcj x86 -noappend -no-xattrs"

    --
    "It is no measure of health to be well adjusted to a profoundly sick society." -Jiddu Krishnamurti
  • (Score: 2) by Scruffy Beard 2 on Monday February 06 2017, @04:06AM

    by Scruffy Beard 2 (6030) on Monday February 06 2017, @04:06AM (#463289)

    The FreeBSD handbook used to say that dump was the best backup tool. I am curious whether you can dump from one filesystem, then restore onto another.

    The reason I want to test it is that I don't see any obvious distinction between the BSD (using UFS, presumably) and Linux (using ext2 variants, presumably) versions of the tools.
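
    A quick way to test it, sketched here with made-up device and mount-point names, would be to pipe a level-0 dump straight into restore running on the target filesystem:

        # dump the source filesystem to stdout, rebuild it under /mnt/target
        dump -0af - /dev/source_partition | (cd /mnt/target && restore -rf -)

    Since restore rebuilds the tree through ordinary file operations, in principle the target filesystem only needs to support whatever metadata the source recorded.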

    • (Score: 2) by Runaway1956 on Monday February 06 2017, @08:00AM

      by Runaway1956 (2926) Subscriber Badge on Monday February 06 2017, @08:00AM (#463349) Journal

      If both file systems respect the same security metadata, then probably yes. Obviously, you couldn't dump to a FAT file system and then restore all the security. But you do specify *nix-like file systems, so you could probably get the job done.

  • (Score: 2) by TheRaven on Saturday February 11 2017, @04:07PM

    by TheRaven (270) on Saturday February 11 2017, @04:07PM (#465799) Journal

    but aside from that it has everything a file system should have

    Really? I can think of a lot of things that FAT32 lacks. Off the top of my head:

    • An efficient way of storing small files. A 100-byte file will use 64KB on disk on a large FAT32 filesystem. Not much for a single file, but that overhead adds up really quickly when you have a lot of small files (a rough way to measure this is sketched at the end of this comment).
    • Support for even UNIX file permissions, let alone ACLs.
    • Any metadata support for storing things like security labels for use with a MAC framework.
    • An efficient allocation policy for large files. FAT uses a linked list of blocks to manage the free list on disk, so to get even vaguely efficient performance you need to cache the whole thing in RAM. To efficiently allocate large files, you must also construct another data structure to sort contiguous chunks by size. This burns RAM and it's also very easy for these structures to become inconsistent with respect to the real filesystem, which brings me to:
    • Any kind of resilience at all. I'm not even talking about things like ZFS's block-level checksums. There's absolutely no redundancy and no checksum storage with any of the on-disk metadata structures, so it's really easy for a single-bit error to lead to massive filesystem corruption.
    • Support for hard or symbolic links.

    That's just the list of basic features that I'd expect from a filesystem. Things like constant-time snapshots, support for large numbers of small files or for very large files, journalling, dynamic filesystem sizes, and so on are so far beyond its capability that it's not even worth comparing them.
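
    To put a number on the small-file point, here is a rough way to estimate the slack a tree of files would have on 64 KiB clusters (GNU find assumed; the cluster size is just an example, so adjust it to match the volume):

        # compare real file sizes with sizes rounded up to whole 64 KiB clusters
        find . -type f -printf '%s\n' | awk -v c=65536 \
            '{ real += $1; alloc += int(($1 + c - 1) / c) * c }
             END { printf "data %d bytes, allocated %d bytes, slack %d bytes\n", real, alloc, alloc - real }'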

    --
    sudo mod me up
    • (Score: 0) by Anonymous Coward on Saturday February 11 2017, @06:17PM

      by Anonymous Coward on Saturday February 11 2017, @06:17PM (#465841)

      There are two copies of the FAT on the disk. The problem, as with any mirrored system, is that when an inconsistency is found the software has to either guess which copy is correct or compare them. Windows and most disk checkers assume that most people couldn't tell without a side-by-side comparison, or wouldn't want to go through the trouble, so they just report the error and guess.

    • (Score: 2) by shortscreen on Saturday February 11 2017, @09:16PM

      by shortscreen (2252) on Saturday February 11 2017, @09:16PM (#465890) Journal

      Really? I can think of a lot of things that FAT32 lacks.

      Yes, but you assume this is a bad thing...

      An efficient way of storing small files. A 100-byte file will use 64KB on disk on a large FAT32 filesystem. Not much for a single file, but that overhead adds up really quickly when you have a lot of small files.

      Why rely on the file system to do your job for you? Combine the small files into one large file yourself. For example, DOOM stored its game data in a .WAD file instead of cluttering up your disk with a thousand bitmaps. (Lazy developers might do what Torchlight did and just stuff all 20,000 files of their game assets into one giant .ZIP file, and then wonder why their game spends so much time loading.)
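
      As a crude stand-in for a WAD-style container, even plain tar shows the idea: one big file on disk, with individual assets streamed out on demand (the file names here are made up):

          # pack thousands of small assets into a single archive file
          tar -cf assets.tar assets/
          # later, stream one member to stdout without unpacking the rest
          tar -xOf assets.tar assets/sprite001.bmp > sprite001.bmp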

      Support for even UNIX file permissions, let alone ACLs.
      Any metadata support for storing things like security labels for use with a MAC framework.

      This is a good example of something I don't even want to deal with on my personal, single-user system.

      An efficient allocation policy for large files. FAT uses a linked list of blocks to manage the free list on disk, so to get even vaguely efficient performance you need to cache the whole thing in RAM. To efficiently allocate large files, you must also construct another data structure to sort contiguous chunks by size. This burns RAM and it's also very easy for these structures to become inconsistent with respect to the real filesystem

      It's true that this is not ideal. If I were designing my own filesystem I would not implement it this way. But still, if you have 15,000,000 clusters, with a 32-bit FAT entry per cluster, that's 60MB of RAM for the whole FAT. Not a huge deal when you have GBs of RAM. AMD's Radeon drivers waste more RAM than that.
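
      As a quick sanity check on that figure (cluster count as assumed above):

          # 15,000,000 FAT entries at 4 bytes each
          echo $((15000000 * 4))    # 60000000 bytes, i.e. roughly 60 MB (~57 MiB)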

      Any kind of resilience at all. I'm not even talking about things like ZFS's block-level checksums. There's absolutely no redundancy and no checksum storage

      The storage device itself does data integrity checking, so basically the error would have to occur somewhere on the motherboard. That's possible, but experience suggests it is pretty rare. Back in the old days I would test system stability by creating a .ZIP file and then extracting it again to watch for CRC errors. I found a lot of flaky RAM, motherboards, ribbon cables, etc. by doing this. Although I think the worst instances of file system corruption were caused by programs running wild and writing garbage to the disk because of Win9x's half-assed memory protection.

      Ever since L2 cache got integrated into the CPU, and Intel and AMD started supplying most of the chipsets, flaky hardware has largely disappeared (except for laptops with chronic overheating problems). The only times I've had file system corruption on hardware of this millennium are when a driver BSODs, and then I might get some lost clusters or cross-linked files, which I usually just delete before going about my business.
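
      The .ZIP round-trip test mentioned above is easy to reproduce with modern tools; a minimal sketch, with the input path being just an example:

          # archive a large tree, then verify every member's CRC
          zip -qr /tmp/stresstest.zip /usr/share
          unzip -tq /tmp/stresstest.zip && echo "no CRC errors"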

  • (Score: 2) by KritonK on Thursday February 23 2017, @02:56PM

    by KritonK (465) on Thursday February 23 2017, @02:56PM (#470716)

    One time I used LHA to make an archive of my entire Amiga harddisk, and then later when I tried to restore everything it was somewhat screwed up because of metadata that hadn't been preserved.

    From what I remember, Amiga files did not have metadata. If you wanted to associate information with a file (basically icon and program parameters), you needed to put it in an associated .info file (e.g., foo.info for file foo), which was an ordinary file with a specific data structure.

    • (Score: 2) by shortscreen on Saturday February 25 2017, @04:55AM

      by shortscreen (2252) on Saturday February 25 2017, @04:55AM (#471423) Journal

      I think there was a flag marking executable files as such, although it could have been some other issue; I'm not certain at this point.