Hi Ricardo,

Ricardo Wurmus <rekado@elephly.net> writes:
>>>>> This has built fine on berlin. We have a completed build for
>>>>> /gnu/store/3c28p8b07709isd9jlcnnnyrpgz4ndz8-libdrm-2.4.97.
>>>>
>>>> What kind of hardware was it built on?
>>>
>>> I’m not sure. We’re using a few Overdrive 1000 machines that have quite
>>> a bit more RAM than the other armhf nodes.
>>
>> Are there any other kinds of build slaves that build armhf binaries for
>> Berlin?
>
> Yes. We have a Beagleboard (x15.sjd.se), which is set up for 2 parallel
> builds, and we use the QEMU service on 5 of our x86_64 machines to build
> for armhf. (We do the same for aarch64, but using 5 different nodes.)

So, many of the armhf builds are done in an emulator. This is exactly
what I was curious about. One problem with doing this is that tests
performed during these builds do not necessarily reflect what will
happen on real armhf hardware.

I'll give just one example of where this approach will fail badly: tests
of thread synchronization. The memory models used in ARM and x86_64 are
quite different, and an ARM emulator running on x86_64 will effectively
have a much stronger memory model than real ARM hardware does.

It's much harder to perform safe thread synchronization on ARM than on
x86_64. Many programmers use idioms that they believe are safe, and
which work on x86_64, but are buggy on many architectures with weaker
memory models. Those are the kinds of bugs that will *not* be
discovered by test suites when we perform the builds under QEMU.
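
To make that concrete, here is a minimal sketch (my own illustration,
not code from any package we build) of the classic message-passing
idiom written with a plain flag instead of atomics. Compile it with
-O0 so that compiler reordering is out of the picture and only the
hardware's memory model is in play; all of the names are made up:

#include <pthread.h>
#include <stdio.h>

static int data = 0;
static volatile int ready = 0;  /* plain flag: no atomics, no barriers */

static void *
producer (void *arg)
{
  data = 42;   /* store 1: the payload */
  ready = 1;   /* store 2: the flag */
  return NULL;
}

static void *
consumer (void *arg)
{
  while (!ready)
    ;          /* spin until the flag becomes visible */

  /* On x86_64, stores become visible in program order (TSO), so
     'data' is guaranteed to be 42 here.  On real ARM hardware the
     two stores in 'producer' may be observed in either order, so
     this can print 0.  Under QEMU on an x86_64 host, the guest
     code inherits the host's stronger ordering, so the bug is
     unlikely ever to show up.  */
  printf ("data = %d\n", data);
  return NULL;
}

int
main (void)
{
  pthread_t p, c;
  pthread_create (&c, NULL, consumer, NULL);
  pthread_create (&p, NULL, producer, NULL);
  pthread_join (p, NULL);
  pthread_join (c, NULL);
  return 0;
}

The portable fix is to store 'ready' with release semantics and load it
with acquire semantics (e.g. C11's atomic_store_explicit and
atomic_load_explicit), which compile to the required barriers on ARM and
to ordinary stores and loads on x86_64. A test containing a race like
the one above can pass every time under emulation and still fail
intermittently on real armhf hardware.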

I hope that we will soon phase out the practice of performing builds
within emulators.

In the meantime, it would be good to know which machine built 'libdrm'
for armhf. Was that information recorded?

Can you find the failed NSS build log from the X15? It would be useful
to see which tests failed, and whether they're the same ones that
failed on hydra-slave3, which is a Novena with 4 GB of RAM.

Thanks!
Mark