cuirass: inconsistent SQL query execution times.

  • Done
Details
4 participants
  • Ludovic Courtès
  • Christopher Baines
  • Maxim Cournoyer
  • Mathieu Othacehe
Owner
unassigned
Submitted by
Mathieu Othacehe
Severity
normal
Mathieu Othacehe wrote on 7 Oct 2020 18:04
(address . bug-guix@gnu.org)
87a6wyhviz.fsf@gnu.org
Hello,

Over the last few weeks I made sure that all Cuirass SQL queries were
using indexes. As the "Builds" and "Outputs" tables can be really large,
having queries covered by indexes is imperative for consistent query
durations.

However, I observed that some queries have inconsistent durations.

2020-10-07T17:59:09 Database worker unresponsive for 5 seconds (db-get-builds-max).
2020-10-07T17:59:13 GET /build/3183151/details
2020-10-07T17:59:13 GET /eval/92
2020-10-07T17:59:14 builds request took 3.66e-4 seconds
2020-10-07T17:59:14 Database worker unresponsive for 5 seconds (db-get-builds-max).
2020-10-07T17:59:15 GET /eval/13234
2020-10-07T17:59:16 GET /build/3146487/details
2020-10-07T17:59:19 Database worker unresponsive for 5 seconds (db-get-builds-max).

"db-get-builds-max" query has the following query plan:

QUERY PLAN
|--SEARCH TABLE Builds USING INDEX Builds_stoptime_id (stoptime=?)
`--SCALAR SUBQUERY 1
   `--SEARCH TABLE Builds USING INDEX Builds_evaluation_index (evaluation=?)

so it should always be reasonably fast. The logs seem to indicate that
the worker running one of those queries is busy for more than 10
seconds here. My understanding of SQLite's WAL mode is that writers
should not block readers, so I don't get why this query can be
intermittently so slow.
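For reference, a plan like the one above can be reproduced from the
sqlite3 shell with EXPLAIN QUERY PLAN. The query below is only a guess
at the shape of db-get-builds-max (the actual SQL lives in Cuirass),
with a literal in place of the bound parameter:

-- Sketch only: an illustrative approximation of db-get-builds-max.
EXPLAIN QUERY PLAN
SELECT * FROM Builds
WHERE stoptime = (SELECT max(stoptime) FROM Builds
                  WHERE evaluation = 42);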

Thanks,

Mathieu
Mathieu Othacehe wrote on 14 Oct 2020 18:53
(address . 43850@debbugs.gnu.org)
87362gg35r.fsf@gnu.org
Hello,

I pushed and deployed several patches that:

- update metrics in a single transaction
- register builds in a single transaction
- use a single write database worker, queuing queries and submitting
them in batches, in a single transaction (see the sketch below)
- optimize some SQLite parameters (decrease WAL size, use more RAM
caching)
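Concretely, flushing the queue amounts to something like the following
sketch; the table, the values, and the pragma settings are made up for
illustration and are not the ones actually deployed:

-- Hypothetical scratch table, so the sketch is self-contained.
CREATE TABLE IF NOT EXISTS Metrics (name TEXT, value INTEGER);
PRAGMA journal_size_limit = 4194304; -- cap the WAL at ~4 MiB (assumed value)
PRAGMA cache_size = -65536;          -- ~64 MiB page cache (negative means KiB)
BEGIN IMMEDIATE;                     -- take the write lock once per batch
INSERT INTO Metrics (name, value) VALUES ('builds-per-hour', 42);
INSERT INTO Metrics (name, value) VALUES ('pending-builds', 7);
COMMIT;                              -- a single commit for all queued writes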

This should improve the situation, even if I still observe some
inconsistent execution durations.

I still have a few improvement ideas such as pre-allocating a large
database file to decrease fragmentation and running periodic vacuums.

Thanks,

Mathieu
Mathieu Othacehe wrote on 20 Oct 2020 13:45
(address . 43850@debbugs.gnu.org)
87362986k7.fsf@gnu.org
Hello,

> This should improve the situation, even if I still observe some
> inconsistent execution durations.

I tried to use the following two pragmas:

PRAGMA synchronous = OFF
PRAGMA mmap_size = 10737418240

hoping that mmapping the Cuirass database and disabling synchronisation
would help, but there were still inconsistent durations.
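Note that the value actually granted can be lower than requested:
mmap_size is capped by the compile-time SQLITE_MAX_MMAP_SIZE, which
defaults to about 2 GiB on most builds, so it is worth reading the
pragma back:

PRAGMA mmap_size = 10737418240; -- request a 10 GiB mapping
PRAGMA mmap_size;               -- read back the limit actually in effect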

I have now copied the database to a tmpfs-mounted directory to check
whether those inconsistent durations are caused solely by the I/O
pressure on berlin.

Thanks,

Mathieu
Mathieu Othacehe wrote on 22 Oct 2020 13:49
(address . 43850@debbugs.gnu.org)
87blguzdig.fsf@gnu.org
Hello,

> I have now copied the database to a tmpfs-mounted directory to check
> whether those inconsistent durations are caused solely by the I/O
> pressure on berlin.

This helps a lot. The Cuirass web service has been running smoothly for
two days, without any inconsistent query times.

I'm considering using a tmpfs-backed database for good. The problem is
that we would need a save/restore mechanism in case Berlin
reboots.
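Such a save/restore could presumably build on SQLite's online backup
facilities; a sketch, with made-up file names:

-- From a shell, the backup API copies a consistent image even while
-- Cuirass is writing:
--   sqlite3 /var/lib/cuirass_tmpfs/cuirass.db ".backup /var/cuirass/backup.db"
-- Since SQLite 3.27, the same is available from SQL, defragmenting on
-- the way:
VACUUM INTO '/var/cuirass/backup.db';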

WDYT?

Thanks,

Mathieu
Ludovic Courtès wrote on 26 Oct 2020 23:59
(name . Mathieu Othacehe)(address . othacehe@gnu.org)(address . 43850@debbugs.gnu.org)
87blgo8ufs.fsf@gnu.org
Hi,

Mathieu Othacehe <othacehe@gnu.org> skribis:

>> I have now copied the database to a tmpfs-mounted directory to check
>> whether those inconsistent durations are caused solely by the I/O
>> pressure on berlin.
>
> This helps a lot. The Cuirass web service has been running smoothly for
> two days, without any inconsistent query times.

Interesting.

> I'm considering using a tmpfs-backed database for good. The problem is
> that we would need a save/restore mechanism in case Berlin
> reboots.

Hmm sounds risky, no?

I wonder if we could instead ensure no I/O-intensive workload runs on
that machine. I’m thinking in particular of the derivations that produce
ISO/qcow images, which are not offloaded but maybe should be.

WDYT? Do you think that’d be enough? Or is tmpfs our only hope?

Thanks,
Ludo’.
Christopher Baines wrote on 27 Oct 2020 10:28
(address . 43850@debbugs.gnu.org)
87d014yq3t.fsf@cbaines.net
Ludovic Courtès <ludo@gnu.org> writes:

> Hi,
>
> Mathieu Othacehe <othacehe@gnu.org> skribis:
>
>>> I have now copied the database to a tmpfs-mounted directory to check
>>> whether those inconsistent durations are caused solely by the I/O
>>> pressure on berlin.
>>
>> This helps a lot. The Cuirass web service has been running smoothly for
>> two days, without any inconsistent query times.
>
> Interesting.
>
>> I'm considering using a tmpfs-backed database for good. The problem is
>> that we would need a save/restore mechanism in case Berlin
>> reboots.
>
> Hmm sounds risky, no?
>
> I wonder if we could instead ensure no I/O-intensive workload runs on
> that machine. I’m thinking in particular of the derivations that produce
> ISO/qcow images, which are not offloaded but maybe should be.
>
> WDYT? Do you think that’d be enough? Or is tmpfs our only hope?

I think Ricardo mentioned that the machine running Cuirass uses an SSD
for the root filesystem, so moving the database there may help?

Mathieu Othacehe wrote on 27 Oct 2020 15:10
(name . Christopher Baines)(address . mail@cbaines.net)
871rhjyd38.fsf@gnu.org
Hello Chris,

> I think Ricardo mentioned that the machine running Cuirass uses an SSD
> for the root filesystem, so moving the database there may help?

Looks like the database was already on the SSD before my tmpfs
experiment.

mathieu@berlin ~$ df -h
Filesystem      Size  Used  Avail Use% Mounted on
none             95G     0    95G   0% /dev
/dev/sda1       916G  321G   549G  37% /
/dev/sdb1        37T   34T   2.6T  94% /gnu
tmpfs            95G  8.0K    95G   1% /dev/shm
tmpfs            10G  2.4G   7.7G  24% /var/lib/cuirass_tmpfs

I don't really get why I/O pressure on /dev/sdb could impact /dev/sda.

Thanks,

Mathieu
Mathieu Othacehe wrote on 27 Oct 2020 19:11
(name . Christopher Baines)(address . mail@cbaines.net)
87lffrh73j.fsf@gnu.org
> I don't really get why I/O pressure on /dev/sdb could impact /dev/sda.

Turns out /tmp is mounted on /dev/sda, so all the build and ISO
production output is first written to /dev/sda before being copied to
the store on /dev/sdb.

Reducing the build activity of berlin, as Ludo proposed, should then
help.

Thanks,

Mathieu
Maxim Cournoyer wrote on 31 Oct 2020 05:33
(name . Mathieu Othacehe)(address . othacehe@gnu.org)
87wnz7xbej.fsf@gmail.com
Hello,

Mathieu Othacehe <othacehe@gnu.org> writes:

> Hello Chris,
>
>> I think Ricardo mentioned that the machine running Cuirass uses an SSD
>> for the root filesystem, so moving the database there may help?
>
> Looks like the database was already on the SSD before my tmpfs
> experiment.
>
> mathieu@berlin ~$ df -h
> Filesystem      Size  Used  Avail Use% Mounted on
> none             95G     0    95G   0% /dev
> /dev/sda1       916G  321G   549G  37% /
> /dev/sdb1        37T   34T   2.6T  94% /gnu
> tmpfs            95G  8.0K    95G   1% /dev/shm
> tmpfs            10G  2.4G   7.7G  24% /var/lib/cuirass_tmpfs
>
> I don't really get why I/O pressure on /dev/sdb could impact /dev/sda.
>
> Thanks,
>
> Mathieu

As an aside, running

sudo sqlite3 /var/guix/db/db.sqlite vacuum

shaved some 40 MB off my large database file:

-rw-r--r-- 1 root root 468889600 Oct 31 00:16 db.sqlite
-rw-r--r-- 1 root root 510648320 Oct 28 23:36 db.sqlite.bak

Perhaps we should run 'vacuum' when invoking 'guix gc' or at other key
places (where lots of data gets removed from the DB). There's also the
auto_vacuum PRAGMA, which is currently not enabled:

sqlite3 /var/guix/db/db.sqlite 'pragma auto_vacuum'
0

But the latter doesn't necessarily sound like a good idea:

Note, however, that auto-vacuum only truncates the freelist pages from
the file. Auto-vacuum does not defragment the database nor repack
individual database pages the way that the VACUUM command does. In fact,
because it moves pages around within the file, auto-vacuum can actually
make fragmentation worse. [0]
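For completeness, there is a middle ground: incremental vacuuming,
which reclaims free pages in small steps. Note that switching
auto_vacuum on an existing database only takes effect after a full
VACUUM; a sketch:

PRAGMA auto_vacuum = INCREMENTAL;  -- recorded, but not yet in effect
VACUUM;                            -- rebuild the file once so the mode applies
PRAGMA incremental_vacuum(1000);   -- later, reclaim up to 1000 free pages at a time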


Maxim
Mathieu Othacehe wrote on 1 Nov 2020 19:23
(name . Maxim Cournoyer)(address . maxim.cournoyer@gmail.com)
87v9epkkbx.fsf@gnu.org
Hey Maxim,

> Perhaps we should run 'vacuum' when invoking 'guix gc' or at some other
> key places (where lots of data gets removed from the DB). There's also
> the auto_vacuum PRAGMA, which is not enabled currently:

Vacuuming periodically seems important; however, it didn't result in
noticeable improvements during high I/O pressure when I tested it.

I still think that we should vacuum periodically, even though it means
that all database writes are postponed for the duration of the
vacuuming.

Having a separate fiber in charge of that operation could work, I guess.

Thanks,

Mathieu
Mathieu Othacehe wrote on 25 Mar 2021 13:46
(address . 43850-done@debbugs.gnu.org)
87mtur5rku.fsf@gnu.org
Closing, as this is no longer an issue since the switch to PostgreSQL.

Mathieu
Closed