Hi Ludo,

On Wednesday, 29.09.2021 at 23:47 +0200, Ludovic Courtès wrote:
> [...]
>
> > but zimoun and I still disagree on the target.  zimoun says (guix
> > packages) for reasons unknown to me, whereas I say (gnu packages),
> > because it's closer to where it's used and doesn't imply that this
> > is going to be a part of the (guix) download schemes anytime soon.
>
> (gnu packages) is higher-level: it’s part of the distro and includes
> CLI helpers such as ‘specification->package’.  So I think (guix …) is
> somewhat more appropriate.
>
> (That said, what matters more to me is how we’re going to replace it
> with a proper solution.)

(gnu packages) being high-level is part of the reason I want it there.
Stuff that's hidden quite deep inside (guix something) will be slower
to change and replace with the proper solution.  When you pull on a
lever, the outside moves faster :)

> > > A better solution IMO would be to improve the ‘snippet’ mechanism
> > > in the first place.  ‘computed-origin-method’ improves on it in
> > > two ways: (1) lazy evaluation of the gexp, and (2) allows the use
> > > of a different base name.
> > >
> > > I would think #2 is addressed by the ‘file-name’ field (isn’t
> > > it?).
> > >
> > > As for #1, it can be addressed by making the ‘snippet’ field
> > > delayed or thunked.  It’s a one-line change; the only thing we
> > > need is to measure, or attempt to measure, the impact it has on
> > > module load time.
> > >
> > > Thoughts?
>
> > This would work for packages whose source is some base source with
> > patches or snippets applied, as is indeed the case for linux and
> > icecat.  However, there are also other potential uses for computed
> > origins.
>
> It’s hard for me to talk about potential uses in the abstract. :-)
>
> There might be cases where an origin simply isn’t the right tool and
> one would prefer ‘computed-file’ or something else.  It really
> depends on the context.
>
> [...]
>
> > I think that some version of `computed-origin-method' will
> > eventually need to become public API, as such packages may not
> > always be best described as "a base package with a snippet".  If we
> > had recursive origins – i.e. origins that can take origins as
> > inputs – we might be able to do some of that, but I don't think it
> > would necessarily work for linux-libre or icecat, as with those you
> > don't want the tainted versions to be kept around.  Perhaps this
> > could be worked around by not interning the intermediate origins,
> > but only using their file names inside the temporary directory in
> > which the snippet is applied?
>
> “Recursive origins” are a bit of a stretch as a concept IMO; what you
> describe is a case where I’d probably use ‘computed-file’ instead.

In other words, we could/should use computed-file for linux-libre and
icecat?  If we reasonably can, would it make sense to use it in lieu of
computed-origin-method, to actually advertise the existence of
computed-file to Guix users/packagers?
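To make that concrete, the kind of thing I imagine would look roughly
like the following.  This is an untested sketch: the variable names
and the version in the file name are made up, and the gexp merely
creates its output instead of doing the actual deblobbing and
repacking.

  (use-modules (guix gexp)
               (guix packages)
               (gnu packages linux))

  ;; A file-like object standing in for the deblobbed tarball.  The
  ;; real gexp would unpack the upstream tarball, run the deblob
  ;; scripts and deblob-check, and repack everything into #$output;
  ;; here it only creates the output so the sketch is self-contained.
  (define deblobbed-linux-source
    (computed-file
     "linux-libre-5.14.tar.xz"
     #~(call-with-output-file #$output (const ""))))

  ;; Since ‘source’ accepts any file-like object, not just origins,
  ;; the package itself would stay declarative:
  (define linux-libre/computed-source
    (package
      (inherit linux-libre)
      (source deblobbed-linux-source)))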
> > Another thing is that the final act of the linux-libre promise is
> > not the packing of the tarball, but the deblob-check.  Guix
> > currently lacks a way of modeling such checks in their origin, but
> > I'd argue it would need one if we wanted to do computed origins via
> > snippets.  This is not required by icecat, and so one
> > "simplification" could be that computed-origin-method would not
> > require the user to create a tarball, but instead simply provide a
> > name for the tarball and a directory to create it from (via a
> > promise again).
>
> Ah, I had overlooked that ‘deblob-check’ bit.  It could be that
> allowing for custom pack-and-repack procedures would be enough to
> address it.

I think asking users to supply their own implementation of a 200-line
function is a bit much when it only does part of the job.  On the
other hand, the promise for linux-libre takes 400 lines and the one
for icecat more than 600, but I think there are some things we ought
to factor out.  In particular, looking up tools like tar or gzip, and
even the actual packing, are always the same.  What we can't currently
control is the top-level directory name and the output name.  Both
could be customized by supplying a "repack-name" field, used as the
basis for both the directory name and the tarball name (see the sketch
in the P.S. below).  Another thing we can't easily control is
extraneous inputs to the patches, although the patch-inputs field
*does* exist.

> > A combination of the above might make computed origins obsolete for
> > good, but the question remains whether that is a better design.
> > What do y'all think?
>
> The design goal is to have clearly identified types: <package>,
> <origin>, <operating-system>.  For each of these, we want some
> flexibility: build system, origin method, etc.  However, beyond some
> level of stretching, it may be clearer to just use the catch-all
> ‘computed-file’ or to devise a new type.  After all, that’s how
> <origin> came to be (we could have used <package> instead with a
> suitable build system).
>
> There’s a tension between “purely declarative” and “flexible”, and
> it’s about striking a balance, subjectively.

To be fair, I did think that "computed-tarball" might be a good
abstraction in some sense, but on the other hand origins already are
computed tarballs with a record interface.

On a somewhat related note, origins have this weird situation going on
where some things, like git or svn checkouts, need to be defined
through them, whereas others may pass unhindered.  I feel that this
contributes to the equation source = origin, which might have caused
"computed-origin-method" to exist in the first place.

What do you think?

Liliana
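P.S. To illustrate the "repack-name" idea above: neither the field nor
its behaviour exists today, and the hash below is only a placeholder,
but the interface I picture would look roughly like this.

  (use-modules (guix download) (guix gexp) (guix packages))

  ;; Hypothetical: ‘repack-name’ does not exist in <origin> today.  It
  ;; would name both the top-level directory and the tarball produced
  ;; when the snippet result is repacked, without the user having to
  ;; write any of the packing code.
  (origin
    (method url-fetch)
    (uri "https://example.org/foo-1.0.tar.gz")
    (sha256
     (base32 "0000000000000000000000000000000000000000000000000000"))
    (modules '((guix build utils)))
    (repack-name "foo-cleaned-1.0")          ;hypothetical field
    (snippet #~(delete-file-recursively "blobs")))

Looking up tar and xz, setting the directory name and producing the
tarball would then all happen behind the scenes, just like the
existing snippet machinery already does, only with a name we control.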