Hey,

> Today we discovered a few more things and discussed them on IRC. Here’s
> a summary.

Nice summary :)

> We could take this opportunity to reformat /gnu with btrfs, which
> performs quite a bit more poorly than ext4 but would be immune to
> fragmentation. It’s not clear that fragmentation matters here. It
> could just be that the problem is exclusively caused by having these
> incredibly large, flat /gnu/store, /gnu/store/.links, and
> /gnu/store/trash directories.
>
> A possible alternative for this file system might also be XFS, which
> performs well when presented with unreasonably large directories.
>
> It may be a good idea to come up with realistic test scenarios that we
> could test with each of these three file systems at scale.

We could compare xfs, btrfs, and ext4 performance on a store subset, 1 TiB
for instance, that we would create on the SAN.

A realistic test scenario could be:

- Time the copy of new items to the test store.
- Time the removal of randomly picked items from the test store.
- Time the creation of nar archives from the test store.

That would allow us to choose the file system with the best performance
for our use case, regardless of fragmentation.

Now, fragmentation may or may not be a problem, as you mentioned. What we
could do is repeat the same tests on a test store that is created and
removed N times, to simulate file-system aging. This is more or less what
is done in this article[1], by "git pulling" a repository N times and
testing read performance. For them, btrfs > xfs > ext4 in terms of
performance, but we might draw different conclusions for our specific use
case.

Do you think this is realistic? If so, we can start working on some test
scripts.

Thanks,

Mathieu

[1]: https://www.usenix.org/system/files/hotstorage19-paper-conway.pdf
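
P.S. To make the idea more concrete, here is a rough, untested sketch of
such a benchmark loop, in Python. Every detail in it is a placeholder:
the paths (/gnu/store as the source of sample items, /mnt/test-store as
the mount point of the file system under test), the item counts, and
plain 'tar' as a stand-in for nar creation. It also assumes it runs as
root, since copied store items keep their read-only permissions.

#!/usr/bin/env python3
# Sketch: time copies, random removals, and archiving on a test store,
# repeated over several rounds to approximate file-system aging.
import random
import shutil
import subprocess
import time
from pathlib import Path

SOURCE_STORE = Path("/gnu/store")     # where sample items are taken from
TEST_STORE = Path("/mnt/test-store")  # mount point of the fs under test
ITEMS_PER_ROUND = 1000                # items copied per round
REMOVALS_PER_ROUND = 500              # items removed/archived per round
ROUNDS = 10                           # N create/remove cycles (aging)


def timed(label, thunk):
    """Run THUNK, print how long it took, and return its result."""
    start = time.monotonic()
    result = thunk()
    print(f"{label}: {time.monotonic() - start:.1f}s")
    return result


def copy_items(items):
    """Copy store items into the test store, skipping ones already there."""
    for item in items:
        target = TEST_STORE / item.name
        if not target.exists():
            shutil.copytree(item, target, symlinks=True)


def remove_items(count):
    """Remove COUNT randomly picked items from the test store."""
    pool = list(TEST_STORE.iterdir())
    for victim in random.sample(pool, min(count, len(pool))):
        shutil.rmtree(victim)


def archive_items(count):
    """Archive COUNT random items; tar to stdout stands in for nar creation."""
    pool = list(TEST_STORE.iterdir())
    for item in random.sample(pool, min(count, len(pool))):
        # Write the archive to stdout and discard it, so tar really reads
        # the data (GNU tar skips reading when the archive file is /dev/null).
        subprocess.run(
            ["tar", "-cf", "-", "-C", str(item.parent), item.name],
            stdout=subprocess.DEVNULL, check=True)


def main():
    # Only plain store items; skip .links and other hidden directories.
    candidates = [p for p in SOURCE_STORE.iterdir()
                  if p.is_dir() and not p.name.startswith(".")]
    for n in range(ROUNDS):
        print(f"--- round {n + 1}/{ROUNDS} ---")
        sample = random.sample(candidates,
                               min(ITEMS_PER_ROUND, len(candidates)))
        timed("copy", lambda: copy_items(sample))
        timed("archive", lambda: archive_items(REMOVALS_PER_ROUND))
        timed("remove", lambda: remove_items(REMOVALS_PER_ROUND))


if __name__ == "__main__":
    main()

The same script would simply be re-run with TEST_STORE mounted on xfs,
btrfs, and ext4 in turn, so the numbers stay comparable across file
systems.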