cgroups increase to 65555, preventing Docker from starting

  • Open
Details
2 participants
  • Dr. Arne Babenhauserheide
  • Sergiu Ivanov
Owner
unassigned
Submitted by
Dr. Arne Babenhauserheide
Severity
normal
Dr. Arne Babenhauserheide wrote on 30 Aug 2023 10:31
(address . bug-guix@gnu.org)
87msy9sz6j.fsf@web.de
Hi,

when I try to start a Docker container, I get the following error:

docker: Error response from daemon: failed to create shim task: OCI
runtime create failed: runc create failed: unable to start container
process: unable to apply cgroup configuration: mkdir
/sys/fs/cgroup/docker/dac0a619a2d6f980095c74a6a2b82a31bbfef721d5bc80fe9a9fb94fe48cfa37:
no space left on device: unknown.

Checking the number of cgroups shows that it is at the limit (if I read
it right; no guarantees, cgroups aren't my strong suit), but this is
after only a few days of uptime:

$ find /sys/fs/cgroup/ -type d | wc -l
65534
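
A quick way to see where those directories accumulate (a rough sketch,
assuming GNU findutils and coreutils are available) is to count them
per top-level subtree:

$ find /sys/fs/cgroup/ -mindepth 1 -maxdepth 1 -type d \
    -exec sh -c 'printf "%7d %s\n" "$(find "$1" -type d | wc -l)" "$1"' _ {} \; \
  | sort -rn

Whichever subtree dominates the count is probably where the groups come
from, e.g. the docker subtree from the error above.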

$ cat /proc/cgroups
#subsys_name hierarchy num_cgroups enabled
cpuset 0 65536 1
cpu 0 65536 1
cpuacct 0 65536 1
blkio 0 65536 1
memory 0 65536 1
devices 0 65536 1
freezer 0 65536 1
net_cls 0 65536 1
perf_event 0 65536 1
net_prio 0 65536 1
hugetlb 0 65536 1
pids 0 65536 1
misc 0 65536 1
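
The hierarchy column being 0 for every controller suggests this is the
unified (cgroup v2) hierarchy. If so, and assuming the standard mount at
/sys/fs/cgroup, the root's core interface files show the configured
limits and how many groups are merely "dying" (already removed from the
tree but still pinned by the kernel):

$ cat /sys/fs/cgroup/cgroup.stat
$ cat /sys/fs/cgroup/cgroup.max.descendants
$ cat /sys/fs/cgroup/cgroup.max.depth

Comparing nr_descendants and nr_dying_descendants from cgroup.stat with
the directory count above helps tell whether something is really
creating tens of thousands of groups or whether old ones just aren't
being released.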

Is Guix using cgroups for building?
Is something broken in Guix, or do I have to hunt elsewhere for fixes?
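
One way to test the first question, assuming a small package that can
be cheaply rebuilt, would be to compare the count around a forced
build:

$ find /sys/fs/cgroup/ -type d | wc -l
$ guix build --check --no-grafts hello
$ find /sys/fs/cgroup/ -type d | wc -l

If the count stays flat across the build, the growth has to come from
somewhere else, e.g. long-running services or containers.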

Other people seem to have similar problems, but only after massive
docker usage:

Best wishes,
Arne
--
Being unpolitical
means being political,
without noticing it.
draketo.de

Sergiu Ivanov wrote on 2 Sep 2023 22:25
(address . 65614@debbugs.gnu.org)
87fs3wiaf0.fsf@colimite.fr
Hello,

I have a similar issue, related to cgroups as well: after a recent
update, I cannot start my LXC containers anymore.

This is how I generally start one of my LXC containers, called arch:

# lxc-start -n arch

Adding the -F (foreground) switch produces this more detailed error log:


lxc-start: arch: cgroups/cgfsng.c: cg_legacy_set_data: 2678 No such file or directory - Failed to setup limits for the "devices" controller. The controller seems to be unused by "cgfsng" cgroup driver or not enabled on the cgroup hierarchy
lxc-start: arch: cgroups/cgfsng.c: cgfsng_setup_limits_legacy: 2745 No such file or directory - Failed to set "devices.deny" to "a"
lxc-start: arch: start.c: lxc_spawn: 1896 Failed to setup legacy device cgroup controller limits
lxc-start: arch: start.c: __lxc_start: 2074 Failed to spawn container "arch"
lxc-start: arch: tools/lxc_start.c: main: 306 The container failed to start
lxc-start: arch: tools/lxc_start.c: main: 311 Additional information can be obtained by setting the --logfile and --logpriority options


I tried tinkering a little with the cgroups-related options in my
container configuration, but that had zero impact on the error message.
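
In case it helps: the failing "devices" limit is a legacy (cgroup v1)
controller, so a first thing to check, assuming lxc-checkconfig and a
readable /proc are available, is which hierarchies the host actually
has mounted:

# grep cgroup /proc/self/mounts
# lxc-checkconfig | grep -i cgroup

If only the unified (v2) hierarchy shows up, the legacy lxc.cgroup.*
device settings can't be applied there; whether switching the
configuration to the lxc.cgroup2.* variants (LXC 4 and later) helps is
a guess from the error message, not something I have verified.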

--
Sergiu