Date: Fri, 26 Mar 2021 12:22:19 +0000
To: 47379@debbugs.gnu.org, Efraim Flashner, Maxime Devos
From: raid5atemyhomework
Subject: "statfs" test in tests/syscall.scm fails with BTRFS file systems.

> btrfs balance moves the free space around so that you have fewer blocks
> with extra freed space.
> I normally run 'btrfs balance start -dusage=70 -musage=80 $mountpoint'.
> (unless I have it backwards)

I think you do?  Usually the numbers for `musage` are smaller, I think.

There is some old advice that you should only balance data and never balance metadata, i.e. `btrfs balance start -dusage=70 $mountpoint`.  This is because 1Gb blocks are assigned to either data or metadata, and it's possible for excessive balances to result in a situation where the metadata gets only a single 1Gb block and the rest of storage is assigned to data.  Then the single metadata 1Gb block gets filled, and when new metadata is needed --- such as to rebalance the large number of data blocks so they take up fewer 1Gb blocks and more blocks can be assigned to metadata --- the filesystem is unable to continue operating due to the lack of metadata space, and you are stuck in a condition where you cannot delete data, delete snapshots, or rebalance data.

This is old advice, since the "GlobalReserve" (not so new, I think; it was added way back in 4.x? 3.x?) should provide space for temporary metadata operations in such a case.  Personally I'd rather just let metadata be "loose" and unbalanced to avoid the situation altogether; metadata is fairly tiny, so it taking up more than one 1Gb block usually means it has two 1Gb blocks, maybe three at a stretch if you've been doing a lot of file creation and deletion events.

Another piece of old advice is to balance regularly.  For example, have a daily `btrfs balance start -dusage=50 -dlimit=2 $mountpoint` --- the `dlimit` makes it so that balancing stops when two 1Gb blocks of data have been merged into some other half-filled 1Gb blocks of data.  If you have never balanced your BTRFS system, you might want to wait for some low-utilization time period, do a full `btrfs balance start -dusage=90 $mountpoint` without a `dlimit`, then schedule a daily balance of `-dusage=50 -dlimit=2` afterwards.
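For concreteness, the two balance invocations above could be wrapped in a small script like this.  A sketch only: the `/` mountpoint is a placeholder, and the script prints the commands instead of executing them, since an actual balance needs root on a real BTRFS mount.

```shell
#!/bin/sh
# Sketch of the balance recipe above.  "/" is a placeholder mountpoint;
# the commands are echoed rather than run, since balancing needs root.
mountpoint="${mountpoint:-/}"

# One-off full data balance, for a low-utilization time period:
full_balance="btrfs balance start -dusage=90 $mountpoint"

# Daily incremental balance: only touch data blocks at most 50% used,
# and stop after relocating 2 of them (dlimit=2).
daily_balance="btrfs balance start -dusage=50 -dlimit=2 $mountpoint"

echo "$full_balance"
echo "$daily_balance"
```

Replacing the `echo`s with the commands themselves (run as root) turns this into something a cron or mcron job could call daily.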
On the other hand, if you're using SSDs, be aware that balancing leads to writing, which lowers your drive's longevity (but the point of `dlimit` is to prevent excessive amounts of daily work, and if you're regularly writing to your disk (almost) every day anyway, a small `dusage` and `dlimit` would be within the noise of your daily-work-activity writes).

You also want to do a regular `btrfs scrub start $mountpoint`: once a week for consumer-quality drives, once a month for enterprise-quality drives; if you're not sure which one you have, go weekly.  This advice is typical for ZFS but should still apply to BTRFS.

On SSDs (or other storage with TRIM commands) you might want to do a scheduled trim regularly, once a week or once every two weeks, in order to take allocation pressure off the SSD and let it get better wear-levelling.  This is generally done via `fstrim` without any BTRFS-specific commands.  Old advice is to avoid the `discard` mount option (in some cases it can trim so often that the SSD lifetime is significantly reduced), but that's supposed to be fixed, so maybe with a recent version you can mount `-o discard`.  Personally I'd still use an explicit scheduled trim.  Do try to schedule this at low-activity times, though; unless you've got SATA 3.1 (hard to check --- most drives/controllers just say "SATA 3" or "SATA III", which may or may not include SATA 3.1 support), or SAS, or real SCSI, trim commands are slow.

Finally, you might also want to do explicit defragmentation periodically, like once every week or two.  This is a separate issue from balancing: balancing ensures you don't have lots of half-used blocks, while defragging means files try to have as much of their data in the same 1Gb block.

See also https://github.com/kdave/btrfsmaintenance for a package that does btrfs maintenance for you, including balancing, scrubbing, trimming, and defragging, and schedules those at "recommended" times as well.
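Put together, the cadence described above might look something like the following as a root crontab.  Purely illustrative: the mountpoint `/`, the specific times, and the weekly-scrub choice are my assumptions, to be adjusted per machine.

```shell
# Illustrative root crontab for the maintenance cadence above; the
# mountpoint, times, and frequencies are placeholders, not upstream
# recommendations.

# Daily incremental data balance at 03:00:
0 3 * * *   btrfs balance start -dusage=50 -dlimit=2 /

# Weekly scrub, Sunday 04:00 (monthly would do for enterprise drives):
0 4 * * 0   btrfs scrub start /

# Trim on the 1st and 15th at 05:00, via plain fstrim:
0 5 1,15 * *   fstrim --verbose /
```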
I think it might also have auto-snapshotting, though that is a bit more fraught, as snapshots are fairly heavyweight on BTRFS.  Do note that it's crontab/systemd-based, though, so it needs a good amount of glue code if you want to use it in Guix.  It's available on Debian as the `btrfsmaintenance` package.  It's also got a lot of settings, so you'd be up for a fairly comprehensive configuration system to adapt it for Guix.

Going back on topic... It looks like the test assumes "free" should equal "available", but that is something that is likely not to work on ***all*** copy-on-write filesystems --- including ZFS and bcachefs, not just BTRFS.  In particular, most copy-on-write filesystems (BTRFS, ZFS, and bcachefs) support transparent compression, meaning "available" is often an estimated multiple of "free".  Probably the test should either explicitly use a specific filesystem that is simple enough that "free" == "available" most of the time (maybe `tmpfs` would work?  Or create a 1Gb file in `/tmp` with `dd if=/dev/zero`, make an `ext4` filesystem in it, and loop-mount it), or it should just remove that particular test.

Thanks
raid5atemyhomework
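P.S.  A sketch of what the ext4-in-a-file idea could look like concretely.  The paths and sizes are made up, and the loop mount needs root, so by default the script below only prints each command; set `DO_IT=yes` and run it as root to actually execute.

```shell
#!/bin/sh
# Sketch of a file-backed ext4 mount whose statfs "free" equals
# "available".  Paths and sizes are illustrative only.
img=/tmp/statfs-test.img
mnt=/tmp/statfs-test-mnt

run() {
    # Only execute when explicitly requested; otherwise show the plan.
    if [ "${DO_IT:-no}" = yes ]; then "$@"; else echo "would run: $*"; fi
}

run dd if=/dev/zero of="$img" bs=1M count=64
run mkfs.ext4 -F -m 0 "$img"   # -m 0: no reserved blocks, so free == available
run mkdir -p "$mnt"
run mount -o loop "$img" "$mnt"
# GNU stat: %f is f_bfree, %a is f_bavail, as statfs(2) reports them.
run stat -f -c 'free=%f avail=%a' "$mnt"
run umount "$mnt"
```

(The `-m 0` matters: on a default ext4, `f_bavail` is `f_bfree` minus the root-reserved blocks, so "free" and "available" would still differ slightly.)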