
SoylentNews is people

posted by mrpg on Thursday December 22, @12:41AM

Kernel 6.2 promises multiple filesystem improvements:

The forthcoming Linux kernel 6.2 should see improved filesystem handling, including performance gains for SD cards and USB keys, as well as FUSE. As for the next-gen storage subsystems... not so much.

Even for a mature OS kernel, considerable improvements are still being made to Linux's handling of existing disk formats, and more should land when kernel 6.2 appears at some point in early 2023.

A patch from Sony engineer Yuezhang Mo makes it quicker to create new files or directories on an exFAT disk with lots of files on it – and the more files there are, the bigger the improvement. This follows the same programmer's earlier patch to improve exFAT handling, in March.

Following Microsoft publishing the exFAT spec in 2019 and its inclusion in the Linux kernel in 2020, exFAT support has steadily improved. Just recently Linux gained the ability to repair exFAT volumes, thanks to a patch from Samsung developer Namjae Jeon, who maintains the out-of-tree exFAT driver for old kernels – such as the one used in Android. Its commit history shows lots of contributions from the Sony programmer. Another Samsung engineer, Jaegeuk Kim, contributed a patch to improve F2FS, the Flash-Friendly File System.


Original Submission

This discussion was created by mrpg (5708) for logged-in users only, but now has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 0) by Anonymous Coward on Thursday December 22, @04:39AM (5 children)

    by Anonymous Coward on Thursday December 22, @04:39AM (#1283563)

    As a normal user, it sounds like the filesystem improvements are good for me. The article bemoans the lack of development with next-gen filesystems, but what are those about? I assume those provide the most advantages for cloud systems, big volumes spread over many individual disks? As a regular user, should I be putting ZSys, OpenZFS, or Stratis on my list of things to keep an eye on?

    • (Score: 2) by sjames on Thursday December 22, @06:45AM (4 children)

      by sjames (2882) on Thursday December 22, @06:45AM (#1283569) Journal

      For a regular user on a PC, BTRFS is probably more interesting (but DON'T use RAID 5/6 ever).
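As a sketch of the setup being recommended (device names are hypothetical, and this is destructive): a two-disk BTRFS mirror uses the RAID 1 profile for both data and metadata, avoiding the problematic RAID 5/6 profiles entirely.

```shell
# Hypothetical devices; mkfs DESTROYS any existing data on them.
# -m raid1: mirror metadata  -d raid1: mirror data
mkfs.btrfs -m raid1 -d raid1 /dev/sdb /dev/sdc
```

Requires root and real block devices, so treat it as a sketch rather than something to paste blindly.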

      • (Score: 2) by turgid on Thursday December 22, @05:55PM (3 children)

        by turgid (4318) Subscriber Badge on Thursday December 22, @05:55PM (#1283613) Journal

        I am intrigued. Do go on.

        • (Score: 3, Informative) by sjames on Thursday December 22, @06:52PM (2 children)

          by sjames (2882) on Thursday December 22, @06:52PM (#1283623) Journal

          BTRFS maintains checksums on data so that it can detect bitrot rather than silently returning bad data. In RAID 1 mode, it can also correct that bitrot. I would argue that that is a much bigger issue on a PC than a server. The "sphere of impact" is smaller, but the problem is more likely to happen.
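Those checksums get exercised by a scrub, which walks every block and verifies it; in RAID 1 mode a bad copy is rewritten from the good mirror. A sketch, assuming a hypothetical mount point and root privileges:

```shell
sudo btrfs scrub start -B /mnt/data   # -B: run in foreground, print a summary when done
sudo btrfs scrub status /mnt/data     # per-device error counts from the last run
```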

          Snapshotting can be a huge benefit. I can use it (for example) to do a trial dist-upgrade and if there are any issues, I can roll back the entire upgrade and be back in business in a few minutes.
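That trial-upgrade workflow might look roughly like this – the subvolume paths are hypothetical, since layouts vary by distro:

```shell
# Snapshot the root subvolume before the risky operation.
sudo btrfs subvolume snapshot / /.snapshots/pre-upgrade
sudo apt dist-upgrade
# If it goes wrong, roll back by booting the snapshot, e.g.:
# sudo btrfs subvolume set-default /.snapshots/pre-upgrade
```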

          Adding an extra drive and rebalancing is trivially easy, as is swapping out a failing drive.
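A sketch of those operations, with hypothetical devices and mount point:

```shell
sudo btrfs device add /dev/sdd /mnt/data              # grow a mounted filesystem online
sudo btrfs balance start /mnt/data                    # redistribute data across all disks
sudo btrfs replace start /dev/sdb /dev/sde /mnt/data  # swap out a failing drive in one step
```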

          ZFS also has those capabilities, but it has a much heavier-weight, "enterprisey" admin model that can be a real PITA, and it is less versatile when it comes to expanding. ZFS also really needs a ton more memory than a workstation PC is likely to have.

          My setup is 2 8TB drives in a RAID 1 configuration with a 12TB external USB drive that gets updated regularly via rsync.

          • (Score: 2) by turgid on Thursday December 22, @07:16PM (1 child)

            by turgid (4318) Subscriber Badge on Thursday December 22, @07:16PM (#1283624) Journal

            I was thinking about using ZFS (after trying it out on my own PC). It's on my to-do list. It's interesting that you mention BTRFS; that's been around for some years now, I believe. When you say ZFS needs a "ton more memory", how much? I suppose it depends on the size of your storage and how you configure it, but ZFS has been around for nearly 20 years, back when 32GB was considered "a ton." I have machines (PCs) with that much, and more. Have you got any references about their relative performance and reliability?

            • (Score: 2) by sjames on Friday December 23, @04:53AM

              by sjames (2882) on Friday December 23, @04:53AM (#1283684) Journal
              The base requirement isn't so bad these days. Some say 8GB + 1GB/TB of storage. Others say that won't perform very well. The big thing is that it can require a lot more if the filesystem needs to be repaired and it can get scragged if it runs out of memory. But keep in mind, the (for example) 32GB would need to be available for the filesystem, not used for applications. The last time I looked, it was strenuously recommended that it be ECC.
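Treating that "8GB + 1GB/TB" figure as the rule of thumb it is (not an official requirement), the arithmetic works out like this for a hypothetical pool size:

```shell
# Rule of thumb from the comment above: 8 GB base + 1 GB per TB of pool capacity.
pool_tb=16
echo "$((8 + pool_tb)) GB suggested for a ${pool_tb} TB pool"
```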

              It's not unreachable, for sure, but it strikes me that BTRFS will be a better fit for a typical desktop. It's always possible your desktops are atypical, of course :-)

              I have run ZFS on servers and currently run BTRFS. BTRFS seems to fit better when machines may be upgraded piecemeal, while ZFS works fine in the more enterprise-like periodic forklift upgrades. I haven't done the comparison, but from other figures I've seen, ZFS's performance advantage doesn't really show up until you go with high-end 15K RPM drives and a bus to match. Again, not too outrageous, but not typical of a desktop. I'm not sure if SSDs give ZFS more advantage or negate it.
