offloading should fall back to local build after n tries

Status: Open
Participants: Ludovic Courtès, Maxim Cournoyer, ng0, zimoun
Owner: unassigned
Submitted by: ng0
Severity: normal
(address . bug-guix@gnu.org)
8760ppr3q3.fsf@we.make.ritual.n0.is
When I forgot that my build machine was offline and did not pass
--no-build-hook, offloading kept trying forever, until I had to cancel
the build, boot the build machine, and start the build again.

A solution could be a config option or default behavior which, after
failing to offload n times, gives up and uses the local builder.

Is this desired at all? Setups like Hydra could run into problems, but for
small setups with the same architecture, could there be a solution beyond
--no-build-hook?
--
ng0
Ludovic Courtès wrote on 26 Sep 2016 11:20
(name . ng0)(address . ngillmann@runbox.com)(address . 24496@debbugs.gnu.org)
87r387nhjg.fsf@gnu.org
Hello!

ng0 <ngillmann@runbox.com> skribis:

> When I forgot that my build machine is offline and I did not pass
> --no-build-hook, the offloading keeps trying forever until I had to
> cancel the build, boot the build-machine and started the build again.
>
> A solution could be a config option or default behavior which after
> failing to offload for n times gives up and uses the local builder.
>
> Is this desired at all? Setups like hydra could get problems, but for
> small setups with the same architecture there could be a solution beyond
> --no-build-hook?

Like you say, on a Hydra-style setup this could be a problem: the
front-end machine may have --max-jobs=0, meaning that it cannot perform
builds on its own.

So I guess we would need a command-line option to select a different
behavior. I’m not sure how to do that because ‘guix offload’ is
“hidden” behind ‘guix-daemon’, so there’s no obvious place for such an
option.

In the meantime, you could also hack up your machines.scm: it would
return a list where unreachable machines have been filtered out.
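
For illustration, here is a minimal sketch of such a machines.scm, assuming
a hypothetical ‘reachable?’ predicate (one possible definition appears later
in this thread); the file’s last expression is the list of machines the
daemon will actually use:

;; /etc/guix/machines.scm (sketch): keep only machines that currently
;; respond.  ‘reachable?’ is a made-up predicate, not part of Guix.
(define %candidate-machines
  (list (build-machine …)    ;machine 1
        (build-machine …)))  ;machine 2

(filter reachable? %candidate-machines)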

Ludo’.
ng0 wrote:
(name . Ludovic Courtès)(address . ludo@gnu.org)(address . 24496@debbugs.gnu.org)
87vax8nis5.fsf@we.make.ritual.n0.is
Ludovic Courtès <ludo@gnu.org> writes:

> Hello!
>
> ng0 <ngillmann@runbox.com> skribis:
>
>> When I forgot that my build machine is offline and I did not pass
>> --no-build-hook, the offloading keeps trying forever until I had to
>> cancel the build, boot the build-machine and started the build again.
>>
>> A solution could be a config option or default behavior which after
>> failing to offload for n times gives up and uses the local builder.
>>
>> Is this desired at all? Setups like hydra could get problems, but for
>> small setups with the same architecture there could be a solution beyond
>> --no-build-hook?
>
> Like you say, on Hydra-style setup this could be a problem: the
> front-end machine may have --max-jobs=0, meaning that it cannot perform
> builds on its own.
>
> So I guess we would need a command-line option to select a different
> behavior. I’m not sure how to do that because ‘guix offload’ is
> “hidden” behind ‘guix-daemon’, so there’s no obvious place for such an
> option.

Could the daemon run with --enable-hydra-style or --disable-hydra-style,
where --disable-hydra-style would allow falling back to a local build if,
after a defined time (keeping slow connections in mind), the machine did
not reply?

> In the meantime, you could also hack up your machines.scm: it would
> return a list where unreachable machines have been filtered out.

How can I achieve this?

And to append to this bug: it seems to me that offloading requires one
lsh key for each build machine, and that you cannot address them
directly. Say I want to create a setup where I build on machine 1 AND
machine 2: having two x86_64 machines in machines.scm only selects one of
them (if both were working; see the linked thread) and builds on
whichever is accessible first. If, however, the first machine is somehow
blocked and fails, and the lsh connection is therefore terminated, the
build does not happen at all.

Leaving the problems aside, what I want to do, in short: how could I
build on both systems at the same time when I want to?

> Ludo’.
>

--
ng0
Ludovic Courtès wrote on 5 Oct 2016 13:36
(name . ng0)(address . ngillmann@runbox.com)(address . 24496@debbugs.gnu.org)
87a8ej81u3.fsf@gnu.org
ng0 <ngillmann@runbox.com> skribis:

> Ludovic Courtès <ludo@gnu.org> writes:

[...]

>> Like you say, on Hydra-style setup this could be a problem: the
>> front-end machine may have --max-jobs=0, meaning that it cannot perform
>> builds on its own.
>>
>> So I guess we would need a command-line option to select a different
>> behavior. I’m not sure how to do that because ‘guix offload’ is
>> “hidden” behind ‘guix-daemon’, so there’s no obvious place for such an
>> option.
>
> Could the daemon run with --enable-hydra-style or --disable-hydra-style
> and --disable-hydra-style would allow falling back to local build if
> after a defined time - keeping slow connections in mind - the machine
> did not reply.

That would be too ad-hoc IMO, and the problem mentioned above remains.

>> In the meantime, you could also hack up your machines.scm: it would
>> return a list where unreachable machines have been filtered out.
>
> How can I achieve this?

Something like:

(define the-machine (build-machine …))

(if (managed-to-connect-timely the-machine)
    (list the-machine)
    '())

… where ‘managed-to-connect-timely’ would try to connect to the
machine with a timeout.
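
For the record, here is a rough sketch of what ‘managed-to-connect-timely’
could look like, using only Guile’s POSIX socket interface.  It merely
checks that a TCP connection to the SSH port can be opened within a few
seconds (so a host that is up but refuses the connection still counts as
reachable), and the host name below is a placeholder; unlike the snippet
above, it takes the host name rather than the machine object:

(define (managed-to-connect-timely host)
  ;; Return #t if a TCP connection to HOST's SSH port can be established
  ;; within 5 seconds, #f otherwise.
  (catch #t
    (lambda ()
      (let* ((ai   (car (getaddrinfo host "22")))
             (sock (socket (addrinfo:fam ai) SOCK_STREAM 0)))
        ;; Make the socket non-blocking so 'connect' returns immediately.
        (fcntl sock F_SETFL (logior O_NONBLOCK (fcntl sock F_GETFL)))
        (catch 'system-error                 ;EINPROGRESS is expected here
          (lambda () (connect sock (addrinfo:addr ai)))
          (const #f))
        ;; The socket becomes writable once the connection attempt has
        ;; completed (or failed); give up after 5 seconds.
        (let ((writable (cadr (select '() (list sock) '() 5))))
          (close-port sock)
          (pair? writable))))
    (lambda _ #f)))                          ;DNS failure and the like

(define the-machine (build-machine …))

(if (managed-to-connect-timely "build1.example.org")   ;placeholder host name
    (list the-machine)
    '())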

> And to append to this bug: it seems to me that offloading requires 1
> lsh-key for each
> build-machine.

The main machine needs to be able to connect to each build machine over
SSH, so indeed, that requires proper SSH key registration (host keys and
authorized user keys).

> and that you can not directly address them (say I want to create some
> system where I want to build on machine 1 AND machine 2. Having 2
> x86_64 in machines.scm only selects one of them (if 2 were working,
> see linked thread) and builds on the one which is accessible first. If
> however the first machine is somehow blocked and it fails, therefore
> terminates lsh connection, the build does not happen at all.

The code that selects machines is in (guix scripts offload),
specifically ‘choose-build-machine’. It tries to choose the “best”
machine, which means, roughly, the fastest and least loaded one.

HTH,
Ludo’.
zimoun wrote on 16 Dec 2021 13:52
(name . Ludovic Courtès)(address . ludo@gnu.org)
868rwkiuf5.fsf@gmail.com
Hi,

I am just hitting this old bug#24496 [1].

On Mon, 26 Sep 2016 at 18:20, ludo@gnu.org (Ludovic Courtès) wrote:
> ng0 <ngillmann@runbox.com> skribis:
>
>> When I forgot that my build machine is offline and I did not pass
>> --no-build-hook, the offloading keeps trying forever until I had to
>> cancel the build, boot the build-machine and started the build again.

[...]

> Like you say, on Hydra-style setup this could be a problem: the
> front-end machine may have --max-jobs=0, meaning that it cannot perform
> builds on its own.
>
> So I guess we would need a command-line option to select a different
> behavior. I’m not sure how to do that because ‘guix offload’ is
> “hidden” behind ‘guix-daemon’, so there’s no obvious place for such an
> option.

When the build machine used for offloading is offline and the master
daemon runs with --max-jobs=0, I would expect X tries (each leading to a
timeout) and then a failure with a hint, where X is defined by the user.
WDYT?
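
As a rough illustration of that behaviour (made-up names, not actual Guix or
daemon code), the fallback could amount to a bounded retry loop, where
‘try-offload’ wraps a single offload attempt:

(define (offload-with-retries try-offload max-tries)
  ;; Call TRY-OFFLOAD, a thunk returning #t on success, at most MAX-TRIES
  ;; times; give up with a hint once the limit is reached.
  (let loop ((attempt 1))
    (or (try-offload)
        (if (< attempt max-tries)
            (loop (+ attempt 1))
            (begin
              (format (current-error-port)
                      "offloading failed after ~a attempts; hint: check the build machine or pass --no-build-hook~%"
                      max-tries)
              #f)))))

In a real implementation the retry logic would live in ‘guix offload’ or the
daemon itself, and falling back to a local build would additionally require
--max-jobs to be non-zero.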


> In the meantime, you could also hack up your machines.scm: it would
> return a list where unreachable machines have been filtered out.

Maybe, this could be done by “guix offload”.


Cheers,
simon


Ludovic Courtès wrote on 17 Dec 2021 16:33
(name . zimoun)(address . zimon.toutoune@gmail.com)
878rwjqm91.fsf@gnu.org
Hi!

zimoun <zimon.toutoune@gmail.com> skribis:

> I am just hitting this old bug#24496 [1].
>
> On Mon, 26 Sep 2016 at 18:20, ludo@gnu.org (Ludovic Courtès) wrote:
>> ng0 <ngillmann@runbox.com> skribis:
>>
>>> When I forgot that my build machine is offline and I did not pass
>>> --no-build-hook, the offloading keeps trying forever until I had to
>>> cancel the build, boot the build-machine and started the build again.
>
> [...]
>
>> Like you say, on Hydra-style setup this could be a problem: the
>> front-end machine may have --max-jobs=0, meaning that it cannot perform
>> builds on its own.
>>
>> So I guess we would need a command-line option to select a different
>> behavior. I’m not sure how to do that because ‘guix offload’ is
>> “hidden” behind ‘guix-daemon’, so there’s no obvious place for such an
>> option.
>
> When the build machine used to offload is offline and the master daemon
> is --max-jobs=0, I expect X tries (leading to timeout) and then just
> fails with a hint, where X is defined by user. WDYT?
>
>
>> In the meantime, you could also hack up your machines.scm: it would
>> return a list where unreachable machines have been filtered out.
>
> Maybe, this could be done by “guix offload”.

Prior to commit efbf5fdd01817ea75de369e3dd2761a85f8f7dd5, this was the
case: an unreachable machine would have ‘machine-load’ return +inf.0,
and so it would be discarded from the list of candidates.

However, I think this behavior was unintentionally lost in
efbf5fdd01817ea75de369e3dd2761a85f8f7dd5. Maxim, WDYT?

Thanks,
Ludo’.
Maxim Cournoyer wrote on 17 Dec 2021 22:57
(name . Ludovic Courtès)(address . ludo@gnu.org)
87lf0i6gj6.fsf@gmail.com
Hello Ludovic,

Ludovic Courtès <ludo@gnu.org> writes:

> Hi!
>
> zimoun <zimon.toutoune@gmail.com> skribis:
>
>> I am just hitting this old bug#24496 [1].
>>
>> On Mon, 26 Sep 2016 at 18:20, ludo@gnu.org (Ludovic Courtès) wrote:
>>> ng0 <ngillmann@runbox.com> skribis:
>>>
>>>> When I forgot that my build machine is offline and I did not pass
>>>> --no-build-hook, the offloading keeps trying forever until I had to
>>>> cancel the build, boot the build-machine and started the build again.
>>
>> [...]
>>
>>> Like you say, on Hydra-style setup this could be a problem: the
>>> front-end machine may have --max-jobs=0, meaning that it cannot perform
>>> builds on its own.
>>>
>>> So I guess we would need a command-line option to select a different
>>> behavior. I’m not sure how to do that because ‘guix offload’ is
>>> “hidden” behind ‘guix-daemon’, so there’s no obvious place for such an
>>> option.
>>
>> When the build machine used to offload is offline and the master daemon
>> is --max-jobs=0, I expect X tries (leading to timeout) and then just
>> fails with a hint, where X is defined by user. WDYT?
>>
>>
>>> In the meantime, you could also hack up your machines.scm: it would
>>> return a list where unreachable machines have been filtered out.
>>
>> Maybe, this could be done by “guix offload”.
>
> Prior to commit efbf5fdd01817ea75de369e3dd2761a85f8f7dd5, this was the
> case: an unreachable machine would have ‘machine-load’ return +inf.0,
> and so it would be discarded from the list of candidates.
>
> However, I think this behavior was unintentionally lost in
> efbf5fdd01817ea75de369e3dd2761a85f8f7dd5. Maxim, WDYT?

I just reviewed this commit, and don't see anywhere where the behavior
would have changed. The discarding happens here:

-          (if (and node (< load 2.) (>= space %minimum-disk-space))
+          (if (and node
+                   (or (not threshold) (< load threshold))
+                   (>= space %minimum-disk-space))

Previously, ‘load’ could be set to +inf.0; now it is a float between 0.0
and 1.0, with ‘threshold’ defaulting to 0.6.
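
As a small illustration of the two checks (example numbers only): with the
old code an unreachable machine reported an infinite load, which could never
pass, while the new code compares a normalized load against the threshold:

(< +inf.0 2.)   ;=> #f  — old check: machine with load +inf.0 is discarded
(< 0.4 0.6)     ;=> #t  — new check: normalized load below the 0.6 threshold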

As far as I remember, this has always been a problem for me (busy
offload machines being forever retried with no fallback to the local
machine).

Thanks,

Maxim
zimoun wrote on 18 Dec 2021 01:10
86tuf6rcvq.fsf@gmail.com
Hi,

I have not checked all the details, since the code of “guix offload” is
run by root, IIUC, and so it is not as easy to debug as usual. :-)

On Fri, 17 Dec 2021 at 16:57, Maxim Cournoyer <maxim.cournoyer@gmail.com> wrote:

>> However, I think this behavior was unintentionally lost in
>> efbf5fdd01817ea75de369e3dd2761a85f8f7dd5. Maxim, WDYT?
>
> I just reviewed this commit, and don't see anywhere where the behavior
> would have changed. The discarding happens here:

[...]

> previously load could be set to +inf.0. Now it is a float between 0.0
> and 1.0, with threshold defaulting to 0.6.

My /etc/guix/machines.scm contains only one machine, and the daemon runs
with --max-jobs=0.

Because the machine is unreachable, IIUC, ’node’ is (or should be) false
and ’load’ is thus not involved, I guess. Indeed, ’report-load’
displays nothing, and instead I get:

The following derivation will be built:
/gnu/store/c1qicg17ygn1a0biq0q4mkprzy4p2x74-hello-2.10.drv
process 75621 acquired build slot '/var/guix/offload/x.x.x.x:22/0'
guix offload: error: failed to connect to 'x.x.x.x': Timeout connecting to x.x.x.x
waiting for locks or build slots...
process 75621 acquired build slot '/var/guix/offload/x.x.x.x:22/0'
guix offload: error: failed to connect to 'x.x.x.x': Timeout connecting to x.x.x.x
process 75621 acquired build slot '/var/guix/offload/x.x.x.x:22/0'
guix offload: error: failed to connect to 'x.x.x.x': Timeout connecting to x.x.x.x
process 75621 acquired build slot '/var/guix/offload/x.x.x.x:22/0'
guix offload: error: failed to connect to 'x.x.x.x': Timeout connecting to x.x.x.x
process 75621 acquired build slot '/var/guix/offload/x.x.x.x:22/0'
C-c C-c


Well, if the machine is not reachable, then ’session’ is false, right?

@@ -472,11 +480,15 @@ (define (machine-faster? m1 m2)
         (let* ((session (false-if-exception (open-ssh-session best
                                                               %short-timeout)))
                (node (and session (remote-inferior session)))
-               (load (and node (normalized-load best (node-load node))))
+               (load (and node (node-load node)))
+               (threshold (build-machine-overload-threshold best))
                (space (and node (node-free-disk-space node))))
+          (when load (report-load best load))
           (when node (close-inferior node))
           (when session (disconnect! session))
-          (if (and node (< load 2.) (>= space %minimum-disk-space))
+          (if (and node
+                   (or (not threshold) (< load threshold))
+                   (>= space %minimum-disk-space))
[...]
              (begin
                ;; BEST is unsuitable, so try the next one.
                (when (and space (< space %minimum-disk-space))
                  (format (current-error-port)
                          "skipping machine '~a' because it is low \
on disk space (~,2f MiB free)~%"
                          (build-machine-name best)
                          (/ space (expt 2 20) 1.)))
                (release-build-slot slot)
                (loop others)))))

Therefore, the ‘else’ branch is taken, and so the code does ‘(loop others)’.

However, I do not see why ‘others’ is not empty (there is only one
machine in /etc/guix/machines.scm). Well, the message «waiting for locks
or build slots...» suggests that something is being restarted, and that
it is not that ‘loop’ we are observing but another one.

On the daemon side, I do not know what ‘waitingForAWhile’ and
‘lastWokenUp’ mean.

/* If we are polling goals that are waiting for a lock, then wake
   up after a few seconds at most. */
if (!waitingForAWhile.empty()) {
    useTimeout = true;
    if (lastWokenUp == 0)
        printMsg(lvlError, "waiting for locks or build slots...");
    if (lastWokenUp == 0 || lastWokenUp > before) lastWokenUp = before;
    timeout.tv_sec = std::max((time_t) 1, (time_t) (lastWokenUp + settings.pollInterval - before));
} else lastWokenUp = 0;


Bah, it requires more investigation, and I agree with Maxim that
efbf5fdd01817ea75de369e3dd2761a85f8f7dd5 is probably not the issue here.

Cheers,
simon
Ludovic Courtès wrote on 21 Dec 2021 15:28
(name . Maxim Cournoyer)(address . maxim.cournoyer@gmail.com)
878rwe6nho.fsf@gnu.org
Hi,

Maxim Cournoyer <maxim.cournoyer@gmail.com> skribis:

> I just reviewed this commit, and don't see anywhere where the behavior
> would have changed. The discarding happens here:
>
> - (if (and node (< load 2.) (>= space %minimum-disk-space))
> + (if (and node
> + (or (not threshold) (< load threshold))
> + (>= space %minimum-disk-space))
>
> previously load could be set to +inf.0. Now it is a float between 0.0
> and 1.0, with threshold defaulting to 0.6.

Ah alright, so we’re fine.

> As far as I remember, this has always been a problem for me (busy
> offload machines being forever retried with no fallback to the local
> machine).

OK, I guess I’m overlooking something.

Thanks,
Ludo’.
To comment on this conversation, send an email to 24496@debbugs.gnu.org.