[PATCH 0/5] Distributing substitutes over IPFS

Open
Submitted by Ludovic Courtès.
Details
6 participants
  • Alex Griffin
  • Alex Potsides
  • Hector Sanjuan
  • Ludovic Courtès
  • Pierre Neidhardt
  • Molly Mackinlay
Owner
unassigned
Severity
normal
Ludovic Courtès wrote on 29 Dec 2018 00:12
(address . guix-patches@gnu.org)
20181228231205.8068-1-ludo@gnu.org
Hello Guix!
Here is a first draft adding support to distribute and retrieve substitutes over IPFS. This builds on discussions at the R-B Summit with Héctor Sanjuan of IPFS, lewo of Nix, Pierre Neidhardt, and also on the work Florian Paul Schmidt posted on guix-devel last month.
The IPFS daemon exposes an HTTP API and the (guix ipfs) module provides bindings to a subset of that API. This module also implements a custom “directory” format to store directory trees in IPFS (IPFS already provides “UnixFS” and “tar”, but they store too many or too few file attributes).
‘guix publish’ and ‘guix substitute’ use (guix ipfs) to store and retrieve store items. Complete directory trees are stored in IPFS “as is”, rather than as compressed archives (nars). This allows for deduplication in IPFS. ‘guix publish’ adds a new “IPFS” field in narinfos and ‘guix substitute’ can then query those objects over IPFS. So the idea is that you still get narinfos over HTTP(S), and then you have the option of downloading substitutes over IPFS.
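Concretely, a capable client keeps fetching narinfos over HTTP(S) and merely gains one optional field. The following Python sketch (an illustration only, not Guix code; the store path and CID values are invented) shows the intended client-side decision:

```python
def parse_narinfo(text):
    """Parse the line-oriented "Key: value" narinfo format into a dict."""
    info = {}
    for line in text.splitlines():
        key, sep, value = line.partition(": ")
        if sep:
            info[key] = value
    return info

# A narinfo as 'guix publish --ipfs' would serve it: the usual fields,
# plus the new, optional "IPFS" field (values invented for illustration).
narinfo = """\
StorePath: /gnu/store/0000000000000000000000000000000000-hello-2.10
URL: nar/gzip/0000000000000000000000000000000000-hello-2.10
Compression: gzip
NarSize: 41184
IPFS: QmInventedExampleCid
"""

info = parse_narinfo(narinfo)
# Fetch the substitute over IPFS when the field is present, HTTP otherwise.
transport = "ipfs" if "IPFS" in info else "http"
```

Older clients simply ignore the unknown field, which is what makes the extension backward-compatible.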
I’ve pushed these patches in ‘wip-ipfs-substitutes’. This is rough on the edges and probably buggy, but the adventurous among us might want to give it a spin. :-)
Thanks,
Ludo’.
Ludovic Courtès (5):
  Add (guix json).
  tests: 'file=?' now recurses on directories.
  Add (guix ipfs).
  publish: Add IPFS support.
  DRAFT substitute: Add IPFS support.
 Makefile.am                 |   3 +
 doc/guix.texi               |  33 +++++
 guix/ipfs.scm               | 250 ++++++++++++++++++++++++++++++++++++
 guix/json.scm               |  63 +++++++++
 guix/scripts/publish.scm    |  67 +++++++---
 guix/scripts/substitute.scm | 106 ++++++++-------
 guix/swh.scm                |  35 +----
 guix/tests.scm              |  26 +++-
 tests/ipfs.scm              |  55 ++++++++
 9 files changed, 535 insertions(+), 103 deletions(-)
 create mode 100644 guix/ipfs.scm
 create mode 100644 guix/json.scm
 create mode 100644 tests/ipfs.scm
-- 
2.20.1
Ludovic Courtès wrote on 29 Dec 2018 00:15
[PATCH 2/5] tests: 'file=?' now recurses on directories.
(address . 33899@debbugs.gnu.org) (name . Ludovic Courtès) (address . ludo@gnu.org)
20181228231554.8220-2-ludo@gnu.org
* guix/tests.scm (not-dot?): New procedure.
(file=?)[executable?]: New procedure.
In 'regular case, check whether the executable bit is preserved.
Add 'directory case.
---
 guix/tests.scm | 26 ++++++++++++++++++++++----
 1 file changed, 22 insertions(+), 4 deletions(-)
diff --git a/guix/tests.scm b/guix/tests.scm
index f4948148c4..c9ae2718e4 100644
--- a/guix/tests.scm
+++ b/guix/tests.scm
@@ -26,9 +26,12 @@
   #:use-module (gcrypt hash)
   #:use-module (guix build-system gnu)
   #:use-module (gnu packages bootstrap)
+  #:use-module (srfi srfi-1)
+  #:use-module (srfi srfi-26)
   #:use-module (srfi srfi-34)
   #:use-module (srfi srfi-64)
   #:use-module (rnrs bytevectors)
+  #:use-module (ice-9 ftw)
   #:use-module (ice-9 binary-ports)
   #:use-module (web uri)
   #:export (open-connection-for-tests
@@ -138,16 +141,31 @@ too expensive to build entirely in the test store."
              (loop (1+ i)))
            bv))))
 
+(define (not-dot? entry)
+  (not (member entry '("." ".."))))
+
 (define (file=? a b)
-  "Return true if files A and B have the same type and same content."
+  "Return true if files A and B have the same type and same content,
+recursively."
+  (define (executable? file)
+    (->bool (logand (stat:mode (lstat file)) #o100)))
+
   (and (eq? (stat:type (lstat a)) (stat:type (lstat b)))
        (case (stat:type (lstat a))
          ((regular)
-          (equal?
-           (call-with-input-file a get-bytevector-all)
-           (call-with-input-file b get-bytevector-all)))
+          (and (eqv? (executable? a) (executable? b))
+               (equal?
+                (call-with-input-file a get-bytevector-all)
+                (call-with-input-file b get-bytevector-all))))
          ((symlink)
           (string=? (readlink a) (readlink b)))
+         ((directory)
+          (let ((lst1 (scandir a not-dot?))
+                (lst2 (scandir b not-dot?)))
+            (and (equal? lst1 lst2)
+                 (every file=?
+                        (map (cut string-append a "/" <>) lst1)
+                        (map (cut string-append b "/" <>) lst2)))))
         (else
          (error "what?" (lstat a))))))
-- 
2.20.1
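For readers who do not speak Scheme, the new 'file=?' behavior can be sketched in Python: same file type, same content, same executable bit for regular files, and per-entry recursion for directories (a rough translation for illustration, not the patch itself):

```python
import os
import stat

def file_eq(a, b):
    """Compare two file-system entries the way the patched 'file=?' does:
    same type, same content, matching executable bit, recursing on
    directories."""
    sa, sb = os.lstat(a), os.lstat(b)
    if stat.S_IFMT(sa.st_mode) != stat.S_IFMT(sb.st_mode):
        return False
    if stat.S_ISREG(sa.st_mode):
        # Like the new 'executable?' helper: check the owner-execute bit.
        if bool(sa.st_mode & 0o100) != bool(sb.st_mode & 0o100):
            return False
        with open(a, "rb") as fa, open(b, "rb") as fb:
            return fa.read() == fb.read()
    if stat.S_ISLNK(sa.st_mode):
        return os.readlink(a) == os.readlink(b)
    if stat.S_ISDIR(sa.st_mode):
        # Like scandir, compare sorted listings, then recurse entry by entry.
        la, lb = sorted(os.listdir(a)), sorted(os.listdir(b))
        return la == lb and all(
            file_eq(os.path.join(a, e), os.path.join(b, e)) for e in la)
    raise ValueError("what?", a)
```

The executable-bit check matters here because the IPFS “directory” format deliberately preserves that bit and nothing else beyond type and content.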
Ludovic Courtès wrote on 29 Dec 2018 00:15
[PATCH 1/5] Add (guix json).
(address . 33899@debbugs.gnu.org) (name . Ludovic Courtès) (address . ludo@gnu.org)
20181228231554.8220-1-ludo@gnu.org
* guix/swh.scm: Use (guix json).
(define-json-reader, define-json-mapping): Move to...
* guix/json.scm: ... here.  New file.
* Makefile.am (MODULES): Add it.
---
 Makefile.am   |  1 +
 guix/json.scm | 63 +++++++++++++++++++++++++++++++++++++++++++++++++++
 guix/swh.scm  | 35 +---------------------------
 3 files changed, 65 insertions(+), 34 deletions(-)
 create mode 100644 guix/json.scm
diff --git a/Makefile.am b/Makefile.am
index 0e5ca02ed3..da3720e3a6 100644
--- a/Makefile.am
+++ b/Makefile.am
@@ -77,6 +77,7 @@ MODULES = \
   guix/discovery.scm \
   guix/git-download.scm \
   guix/hg-download.scm \
+  guix/json.scm \
   guix/swh.scm \
   guix/monads.scm \
   guix/monad-repl.scm \
diff --git a/guix/json.scm b/guix/json.scm
new file mode 100644
index 0000000000..d446f6894e
--- /dev/null
+++ b/guix/json.scm
@@ -0,0 +1,63 @@
+;;; GNU Guix --- Functional package management for GNU
+;;; Copyright © 2018 Ludovic Courtès <ludo@gnu.org>
+;;;
+;;; This file is part of GNU Guix.
+;;;
+;;; GNU Guix is free software; you can redistribute it and/or modify it
+;;; under the terms of the GNU General Public License as published by
+;;; the Free Software Foundation; either version 3 of the License, or (at
+;;; your option) any later version.
+;;;
+;;; GNU Guix is distributed in the hope that it will be useful, but
+;;; WITHOUT ANY WARRANTY; without even the implied warranty of
+;;; MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+;;; GNU General Public License for more details.
+;;;
+;;; You should have received a copy of the GNU General Public License
+;;; along with GNU Guix.  If not, see <http://www.gnu.org/licenses/>.
+
+(define-module (guix json)
+  #:use-module (json)
+  #:use-module (srfi srfi-9)
+  #:export (define-json-mapping))
+
+;;; Commentary:
+;;;
+;;; This module provides tools to define mappings from JSON objects to SRFI-9
+;;; records.  This is useful when writing bindings to HTTP APIs.
+;;;
+;;; Code:
+
+(define-syntax-rule (define-json-reader json->record ctor spec ...)
+  "Define JSON->RECORD as a procedure that converts a JSON representation,
+read from a port, string, or hash table, into a record created by CTOR and
+following SPEC, a series of field specifications."
+  (define (json->record input)
+    (let ((table (cond ((port? input)
+                        (json->scm input))
+                       ((string? input)
+                        (json-string->scm input))
+                       ((hash-table? input)
+                        input))))
+      (let-syntax ((extract-field (syntax-rules ()
+                                    ((_ table (field key json->value))
+                                     (json->value (hash-ref table key)))
+                                    ((_ table (field key))
+                                     (hash-ref table key))
+                                    ((_ table (field))
+                                     (hash-ref table
+                                               (symbol->string 'field))))))
+        (ctor (extract-field table spec) ...)))))
+
+(define-syntax-rule (define-json-mapping rtd ctor pred json->record
+                      (field getter spec ...) ...)
+  "Define RTD as a record type with the given FIELDs and GETTERs, à la SRFI-9,
+and define JSON->RECORD as a conversion from JSON to a record of this type."
+  (begin
+    (define-record-type rtd
+      (ctor field ...)
+      pred
+      (field getter) ...)
+
+    (define-json-reader json->record ctor
+      (field spec ...) ...)))
diff --git a/guix/swh.scm b/guix/swh.scm
index 89cddb2bdd..c5f2153a22 100644
--- a/guix/swh.scm
+++ b/guix/swh.scm
@@ -23,6 +23,7 @@
   #:use-module (web client)
   #:use-module (web response)
   #:use-module (json)
+  #:use-module (guix json)
   #:use-module (srfi srfi-1)
   #:use-module (srfi srfi-9)
   #:use-module (srfi srfi-11)
@@ -127,40 +128,6 @@
       url
       (string-append url "/")))
 
-(define-syntax-rule (define-json-reader json->record ctor spec ...)
-  "Define JSON->RECORD as a procedure that converts a JSON representation,
-read from a port, string, or hash table, into a record created by CTOR and
-following SPEC, a series of field specifications."
-  (define (json->record input)
-    (let ((table (cond ((port? input)
-                        (json->scm input))
-                       ((string? input)
-                        (json-string->scm input))
-                       ((hash-table? input)
-                        input))))
-      (let-syntax ((extract-field (syntax-rules ()
-                                    ((_ table (field key json->value))
-                                     (json->value (hash-ref table key)))
-                                    ((_ table (field key))
-                                     (hash-ref table key))
-                                    ((_ table (field))
-                                     (hash-ref table
-                                               (symbol->string 'field))))))
-        (ctor (extract-field table spec) ...)))))
-
-(define-syntax-rule (define-json-mapping rtd ctor pred json->record
-                      (field getter spec ...) ...)
-  "Define RTD as a record type with the given FIELDs and GETTERs, à la SRFI-9,
-and define JSON->RECORD as a conversion from JSON to a record of this type."
-  (begin
-    (define-record-type rtd
-      (ctor field ...)
-      pred
-      (field getter) ...)
-
-    (define-json-reader json->record ctor
-      (field spec ...) ...)))
-
 (define %date-regexp
   ;; Match strings like "2014-11-17T22:09:38+01:00" or
   ;; "2018-09-30T23:20:07.815449+00:00".
-- 
2.20.1
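The essence of 'define-json-mapping' is a field spec of the form (field key converter), where the JSON key and the converter are both optional. A loose Python analogue may make that shape clearer (the `json_mapping` helper is invented for this illustration; only the `<content>` field names mirror the series):

```python
from collections import namedtuple

def json_mapping(name, fields):
    """Tiny analogue of 'define-json-mapping': FIELDS maps record field
    names to (json-key, converter) pairs, either of which may be None.
    Returns the record type and a JSON-dict -> record reader."""
    rtd = namedtuple(name, list(fields))
    def reader(table):
        values = []
        for field, (key, convert) in fields.items():
            # No key means "use the field name", like the (field) spec case.
            value = table.get(key or field)
            values.append(convert(value) if convert else value)
        return rtd(*values)
    return rtd, reader

# Mirrors the <content> mapping of patch 3: "Name", "Hash", and a "Size"
# that is converted from string to number.
Content, json_to_content = json_mapping("Content", {
    "name": ("Name", None),
    "hash": ("Hash", None),
    "size": ("Size", int),
})

c = json_to_content({"Name": "foo", "Hash": "QmX", "Size": "42"})
```

The Scheme version does the same dispatch at macro-expansion time, so no per-field runtime table is needed.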
Ludovic Courtès wrote on 29 Dec 2018 00:15
[PATCH 5/5] DRAFT substitute: Add IPFS support.
(address . 33899@debbugs.gnu.org) (name . Ludovic Courtès) (address . ludo@gnu.org)
20181228231554.8220-5-ludo@gnu.org
Missing:
  - documentation
  - command-line options
  - progress report when downloading over IPFS
  - fallback when we fail to fetch from IPFS
* guix/scripts/substitute.scm (<narinfo>)[ipfs]: New field.
(read-narinfo): Read "IPFS".
(process-substitution/http): New procedure, with code formerly in
'process-substitution'.
(process-substitution): Check for IPFS and call 'ipfs:restore-file-tree'
when IPFS is true.
---
 guix/scripts/substitute.scm | 106 +++++++++++++++++++++---------------
 1 file changed, 61 insertions(+), 45 deletions(-)
diff --git a/guix/scripts/substitute.scm b/guix/scripts/substitute.scm
index 53b1777241..8be15e4f13 100755
--- a/guix/scripts/substitute.scm
+++ b/guix/scripts/substitute.scm
@@ -42,6 +42,7 @@
   #:use-module (guix progress)
   #:use-module ((guix build syscalls)
                 #:select (set-thread-name))
+  #:use-module ((guix ipfs) #:prefix ipfs:)
   #:use-module (ice-9 rdelim)
   #:use-module (ice-9 regex)
   #:use-module (ice-9 match)
@@ -281,7 +282,7 @@ failure, return #f and #f."
 (define-record-type <narinfo>
   (%make-narinfo path uri uri-base compression file-hash file-size nar-hash nar-size
-                 references deriver system signature contents)
+                 references deriver system ipfs signature contents)
   narinfo?
   (path         narinfo-path)
   (uri          narinfo-uri)
@@ -294,6 +295,7 @@ failure, return #f and #f."
   (references   narinfo-references)
   (deriver      narinfo-deriver)
   (system       narinfo-system)
+  (ipfs         narinfo-ipfs)
   (signature    narinfo-signature)      ; canonical sexp
   ;; The original contents of a narinfo file.  This field is needed because we
   ;; want to preserve the exact textual representation for verification purposes.
@@ -335,7 +337,7 @@ s-expression: ~s~%")
   "Return a narinfo constructor for narinfos originating from CACHE-URL.  STR
must contain the original contents of a narinfo file."
   (lambda (path url compression file-hash file-size nar-hash nar-size
-                references deriver system signature)
+                references deriver system ipfs signature)
     "Return a new <narinfo> object."
     (%make-narinfo path
                    ;; Handle the case where URL is a relative URL.
@@ -352,6 +354,7 @@ must contain the original contents of a narinfo file."
                      ((or #f "") #f)
                      (_ deriver))
                    system
+                   ipfs
                    (false-if-exception
                     (and=> signature narinfo-signature->canonical-sexp))
                    str)))
@@ -386,7 +389,7 @@ No authentication and authorization checks are performed here!"
                     (narinfo-maker str url)
                     '("StorePath" "URL" "Compression"
                       "FileHash" "FileSize" "NarHash" "NarSize"
-                      "References" "Deriver" "System"
+                      "References" "Deriver" "System" "IPFS"
                       "Signature"))))
 
 (define (narinfo-sha256 narinfo)
@@ -947,13 +950,58 @@ authorized substitutes."
         (wtf
          (error "unknown `--query' command" wtf))))
 
+(define* (process-substitution/http narinfo destination uri
+                                    #:key print-build-trace?)
+  (unless print-build-trace?
+    (format (current-error-port)
+            (G_ "Downloading ~a...~%") (uri->string uri)))
+
+  (let*-values (((raw download-size)
+                 ;; Note that Hydra currently generates Nars on the fly
+                 ;; and doesn't specify a Content-Length, so
+                 ;; DOWNLOAD-SIZE is #f in practice.
+                 (fetch uri #:buffered? #f #:timeout? #f))
+                ((progress)
+                 (let* ((comp     (narinfo-compression narinfo))
+                        (dl-size  (or download-size
+                                      (and (equal? comp "none")
+                                           (narinfo-size narinfo))))
+                        (reporter (if print-build-trace?
+                                      (progress-reporter/trace
+                                       destination
+                                       (uri->string uri) dl-size
+                                       (current-error-port))
+                                      (progress-reporter/file
+                                       (uri->string uri) dl-size
+                                       (current-error-port)
+                                       #:abbreviation nar-uri-abbreviation))))
+                   (progress-report-port reporter raw)))
+                ((input pids)
+                 ;; NOTE: This 'progress' port of current process will be
+                 ;; closed here, while the child process doing the
+                 ;; reporting will close it upon exit.
+                 (decompressed-port (and=> (narinfo-compression narinfo)
+                                           string->symbol)
+                                    progress)))
+    ;; Unpack the Nar at INPUT into DESTINATION.
+    (restore-file input destination)
+    (close-port input)
+
+    ;; Wait for the reporter to finish.
+    (every (compose zero? cdr waitpid) pids)
+
+    ;; Skip a line after what 'progress-reporter/file' printed, and another
+    ;; one to visually separate substitutions.
+    (display "\n\n" (current-error-port))))
+
 (define* (process-substitution store-item destination
                                #:key cache-urls acl print-build-trace?)
   "Substitute STORE-ITEM (a store file name) from CACHE-URLS, and write it to
DESTINATION as a nar file.  Verify the substitute against ACL."
   (let* ((narinfo (lookup-narinfo cache-urls store-item
                                   (cut valid-narinfo? <> acl)))
-         (uri     (and=> narinfo narinfo-uri)))
+         (uri     (and=> narinfo narinfo-uri))
+         (ipfs    (and=> narinfo narinfo-ipfs)))
     (unless uri
       (leave (G_ "no valid substitute for '~a'~%")
              store-item))
@@ -961,47 +1009,15 @@ DESTINATION as a nar file.  Verify the substitute against ACL."
     ;; Tell the daemon what the expected hash of the Nar itself is.
     (format #t "~a~%" (narinfo-hash narinfo))
 
-    (unless print-build-trace?
-      (format (current-error-port)
-              (G_ "Downloading ~a...~%") (uri->string uri)))
-
-    (let*-values (((raw download-size)
-                   ;; Note that Hydra currently generates Nars on the fly
-                   ;; and doesn't specify a Content-Length, so
-                   ;; DOWNLOAD-SIZE is #f in practice.
-                   (fetch uri #:buffered? #f #:timeout? #f))
-                  ((progress)
-                   (let* ((comp     (narinfo-compression narinfo))
-                          (dl-size  (or download-size
-                                        (and (equal? comp "none")
-                                             (narinfo-size narinfo))))
-                          (reporter (if print-build-trace?
-                                        (progress-reporter/trace
-                                         destination
-                                         (uri->string uri) dl-size
-                                         (current-error-port))
-                                        (progress-reporter/file
-                                         (uri->string uri) dl-size
-                                         (current-error-port)
-                                         #:abbreviation nar-uri-abbreviation))))
-                     (progress-report-port reporter raw)))
-                  ((input pids)
-                   ;; NOTE: This 'progress' port of current process will be
-                   ;; closed here, while the child process doing the
-                   ;; reporting will close it upon exit.
-                   (decompressed-port (and=> (narinfo-compression narinfo)
-                                             string->symbol)
-                                      progress)))
-      ;; Unpack the Nar at INPUT into DESTINATION.
-      (restore-file input destination)
-      (close-port input)
-
-      ;; Wait for the reporter to finish.
-      (every (compose zero? cdr waitpid) pids)
-
-      ;; Skip a line after what 'progress-reporter/file' printed, and another
-      ;; one to visually separate substitutions.
-      (display "\n\n" (current-error-port)))))
+    (if ipfs
+        (begin
+          (unless print-build-trace?
+            (format (current-error-port)
+                    (G_ "Downloading from IPFS ~s...~%") ipfs))
+          (ipfs:restore-file-tree ipfs destination))
+        (process-substitution/http narinfo destination uri
+                                   #:print-build-trace?
+                                   print-build-trace?))))
 
 ;;;
-- 
2.20.1
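The patch header lists “fallback when we fail to fetch from IPFS” as missing: as posted, a narinfo with an IPFS field commits the client to IPFS. One way such a fallback could look, sketched in Python with injected fetchers so the policy itself is testable (hypothetical names, not Guix code):

```python
def fetch_substitute(narinfo, destination, fetch_ipfs, fetch_http):
    """Prefer the IPFS object when the narinfo advertises one, but fall
    back to HTTP on failure.  NARINFO is a dict of narinfo fields;
    FETCH_IPFS and FETCH_HTTP are callables doing the actual transfers."""
    cid = narinfo.get("IPFS")
    if cid:
        try:
            return fetch_ipfs(cid, destination)
        except OSError:
            # E.g. the local IPFS daemon is not running, or no peer has
            # the object; retry over plain HTTP(S).
            pass
    return fetch_http(narinfo["URL"], destination)
```

With this shape, 'process-substitution' would treat the IPFS field as a hint rather than a mandate, which matches the cover letter's “you have the option” framing.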
Ludovic Courtès wrote on 29 Dec 2018 00:15
[PATCH 4/5] publish: Add IPFS support.
(address . 33899@debbugs.gnu.org) (name . Ludovic Courtès) (address . ludo@gnu.org)
20181228231554.8220-4-ludo@gnu.org
* guix/scripts/publish.scm (show-help, %options): Add '--ipfs'.
(narinfo-string): Add IPFS parameter and honor it.
(render-narinfo/cached): Add #:ipfs? and honor it.
(bake-narinfo+nar, make-request-handler, run-publish-server): Likewise.
(guix-publish): Honor '--ipfs' and parameterize %IPFS-BASE-URL.
---
 doc/guix.texi            | 33 ++++++++++++++++++++
 guix/scripts/publish.scm | 67 ++++++++++++++++++++++++++++------------
 2 files changed, 80 insertions(+), 20 deletions(-)
diff --git a/doc/guix.texi b/doc/guix.texi
index fcb5b8c088..f2af5a1558 100644
--- a/doc/guix.texi
+++ b/doc/guix.texi
@@ -8470,6 +8470,15 @@ caching of the archives before they are sent to clients---see below for
 details.  The @command{guix weather} command provides a handy way to check
 what a server provides (@pxref{Invoking guix weather}).
 
+@cindex peer-to-peer, substitute distribution
+@cindex distributed storage, of substitutes
+@cindex IPFS, for substitutes
+It is also possible to publish substitutes over @uref{https://ipfs.io, IPFS},
+a distributed, peer-to-peer storage mechanism.  To enable it, pass the
+@option{--ipfs} option alongside @option{--cache}, and make sure you're
+running @command{ipfs daemon}.  Capable clients will then be able to choose
+whether to fetch substitutes over HTTP or over IPFS.
+
 As a bonus, @command{guix publish} also serves as a content-addressed
 mirror for source files referenced in @code{origin} records
 (@pxref{origin Reference}).  For instance, assuming @command{guix
@@ -8560,6 +8569,30 @@ thread per CPU core is created, but this can be customized.  See
 When @option{--ttl} is used, cached entries are automatically deleted
 when they have expired.
 
+@item --ipfs[=@var{gateway}]
+When used in conjunction with @option{--cache}, instruct @command{guix
+publish} to publish substitutes over the @uref{https://ipfs.io, IPFS
+distributed data store} in addition to HTTP.
+
+@quotation Note
+As of version @value{VERSION}, IPFS support is experimental.  You're welcome
+to share your experience with the developers by emailing
+@email{guix-devel@@gnu.org}!
+@end quotation
+
+The IPFS HTTP interface must be reachable at @var{gateway}, by default
+@code{localhost:5001}.  To get it up and running, it is usually enough to
+install IPFS and start the IPFS daemon:
+
+@example
+$ guix package -i go-ipfs
+$ ipfs init
+$ ipfs daemon
+@end example
+
+For more information on how to get started with IPFS, please refer to the
+@uref{https://docs.ipfs.io/introduction/usage/, IPFS documentation}.
+
 @item --workers=@var{N}
 When @option{--cache} is used, request the allocation of @var{N} worker
 threads to ``bake'' archives.
diff --git a/guix/scripts/publish.scm b/guix/scripts/publish.scm
index a236f3e45c..2accd632ab 100644
--- a/guix/scripts/publish.scm
+++ b/guix/scripts/publish.scm
@@ -59,6 +59,7 @@
   #:use-module ((guix build utils)
                 #:select (dump-port mkdir-p find-files))
   #:use-module ((guix build syscalls)
                 #:select (set-thread-name))
+  #:use-module ((guix ipfs) #:prefix ipfs:)
   #:export (%public-key
             %private-key
 
@@ -78,6 +79,8 @@ Publish ~a over HTTP.\n") %store-directory)
       compress archives at LEVEL"))
   (display (G_ "
   -c, --cache=DIRECTORY  cache published items to DIRECTORY"))
+  (display (G_ "
+      --ipfs[=GATEWAY]   publish items over IPFS via GATEWAY"))
   (display (G_ "
       --workers=N        use N workers to bake items"))
   (display (G_ "
@@ -168,6 +171,10 @@ compression disabled~%"))
         (option '(#\c "cache") #t #f
                 (lambda (opt name arg result)
                   (alist-cons 'cache arg result)))
+        (option '("ipfs") #f #t
+                (lambda (opt name arg result)
+                  (alist-cons 'ipfs (or arg (ipfs:%ipfs-base-url))
+                              result)))
         (option '("workers") #t #f
                 (lambda (opt name arg result)
                   (alist-cons 'workers (string->number* arg)
@@ -237,12 +244,15 @@ compression disabled~%"))
 
 (define* (narinfo-string store store-path key
                          #:key (compression %no-compression)
-                         (nar-path "nar") file-size)
+                         (nar-path "nar") file-size ipfs)
   "Generate a narinfo key/value string for STORE-PATH; an exception is raised
 if STORE-PATH is invalid.  Produce a URL that corresponds to COMPRESSION.  The
 narinfo is signed with KEY.  NAR-PATH specifies the prefix for nar URLs.
+
 Optionally, FILE-SIZE can specify the size in bytes of the compressed NAR; it
-informs the client of how much needs to be downloaded."
+informs the client of how much needs to be downloaded.
+
+When IPFS is true, it is the IPFS object identifier for STORE-PATH."
   (let* ((path-info  (query-path-info store store-path))
          (compression (actual-compression store-path compression))
          (url        (encode-and-join-uri-path
@@ -295,7 +305,12 @@ References: ~a~%~a"
                            (apply throw args))))))
          (signature  (base64-encode-string
                      (canonical-sexp->string (signed-string info)))))
-    (format #f "~aSignature: 1;~a;~a~%" info (gethostname) signature)))
+    (format #f "~aSignature: 1;~a;~a~%~a" info (gethostname) signature
+
+            ;; Append IPFS info below the signed part.
+            (if ipfs
+                (string-append "IPFS: " ipfs "\n")
+                ""))))
 
 (define* (not-found request
                     #:key (phrase "Resource not found")
@@ -406,10 +421,12 @@ items.  Failing that, we could eventually have to recompute them and return
 (define* (render-narinfo/cached store request hash
                                 #:key ttl (compression %no-compression)
                                 (nar-path "nar")
-                                cache pool)
+                                cache pool ipfs?)
   "Respond to the narinfo request for REQUEST.  If the narinfo is available in
 CACHE, then send it; otherwise, return 404 and \"bake\" that nar and narinfo
-requested using POOL."
+requested using POOL.
+
+When IPFS? is true, additionally publish binaries over IPFS."
   (define (delete-entry narinfo)
     ;; Delete NARINFO and the corresponding nar from CACHE.
     (let ((nar (string-append (string-drop-right narinfo
@@ -447,7 +464,8 @@ requested using POOL."
                        (bake-narinfo+nar cache item
                                          #:ttl ttl
                                          #:compression compression
-                                         #:nar-path nar-path)))
+                                         #:nar-path nar-path
+                                         #:ipfs? ipfs?)))
 
                  (when ttl
                    (single-baker 'cache-cleanup
@@ -465,7 +483,7 @@ requested using POOL."
 (define* (bake-narinfo+nar cache item
                            #:key ttl (compression %no-compression)
-                           (nar-path "/nar"))
+                           (nar-path "/nar") ipfs?)
   "Write the narinfo and nar for ITEM to CACHE."
   (let* ((compression (actual-compression item compression))
          (nar         (nar-cache-file cache item
@@ -502,7 +520,11 @@ requested using POOL."
                       #:nar-path nar-path
                       #:compression compression
                       #:file-size (and=> (stat nar #f)
-                                         stat:size))
+                                         stat:size)
+                      #:ipfs
+                      (and ipfs?
+                           (ipfs:content-name
+                            (ipfs:add-file-tree item))))
        port))))))
 
 ;; XXX: Declare the 'X-Nar-Compression' HTTP header, which is in fact for
@@ -766,7 +788,8 @@ blocking."
                                cache pool narinfo-ttl
                                (nar-path "nar")
-                               (compression %no-compression))
+                               (compression %no-compression)
+                               ipfs?)
   (define nar-path?
     (let ((expected (split-and-decode-uri-path nar-path)))
       (cut equal? expected <>)))
@@ -793,7 +816,8 @@ blocking."
                                        #:pool pool
                                        #:ttl narinfo-ttl
                                        #:nar-path nar-path
-                                       #:compression compression)
+                                       #:compression compression
+                                       #:ipfs? ipfs?)
                (render-narinfo store request hash
                                #:ttl narinfo-ttl
                                #:nar-path nar-path
@@ -847,13 +871,14 @@ blocking."
 (define* (run-publish-server socket store
                              #:key (compression %no-compression)
                              (nar-path "nar") narinfo-ttl
-                             cache pool)
+                             cache pool ipfs?)
   (run-server (make-request-handler store
                                     #:cache cache
                                     #:pool pool
                                     #:nar-path nar-path
                                     #:narinfo-ttl narinfo-ttl
-                                    #:compression compression)
+                                    #:compression compression
+                                    #:ipfs? ipfs?)
               concurrent-http-server
               `(#:socket ,socket)))
 
@@ -902,6 +927,7 @@ blocking."
          (repl-port   (assoc-ref opts 'repl))
          (cache       (assoc-ref opts 'cache))
          (workers     (assoc-ref opts 'workers))
+         (ipfs        (assoc-ref opts 'ipfs))
 
          ;; Read the key right away so that (1) we fail early on if we can't
          ;; access them, and (2) we can then drop privileges.
@@ -930,14 +956,15 @@ consider using the '--user' option!~%")))
                   (set-thread-name "guix publish")
                   (with-store store
-                    (run-publish-server socket store
-                                        #:cache cache
-                                        #:pool (and cache (make-pool workers
-                                                                     #:thread-name
-                                                                     "publish worker"))
-                                        #:nar-path nar-path
-                                        #:compression compression
-                                        #:narinfo-ttl ttl))))))
+                    (parameterize ((ipfs:%ipfs-base-url ipfs))
+                      (run-publish-server socket store
+                                          #:cache cache
+                                          #:pool (and cache (make-pool workers
+                                                                       #:thread-name
+                                                                       "publish worker"))
+                                          #:nar-path nar-path
+                                          #:compression compression
+                                          #:narinfo-ttl ttl)))))))))
 
 ;;; Local Variables:
 ;;; eval: (put 'single-baker 'scheme-indent-function 1)
-- 
2.20.1
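A detail worth noting in 'narinfo-string' above: the "IPFS" field is appended after the signed part, so existing signatures and clients that verify them keep working unchanged. A Python sketch of that layout (illustrative only; `narinfo_with_signature` is a made-up helper mirroring the patched `"~aSignature: 1;~a;~a~%~a"` format string):

```python
def narinfo_with_signature(info, hostname, signature, ipfs=None):
    """Lay out a narinfo the way the patched narinfo-string does: INFO is
    the signed key/value text, then the Signature line, then -- outside
    the signed region -- the optional IPFS field."""
    out = f"{info}Signature: 1;{hostname};{signature}\n"
    if ipfs:
        out += f"IPFS: {ipfs}\n"   # appended below the signed part
    return out
```

Keeping the field unsigned is a trade-off: it preserves compatibility, but it also means the IPFS CID itself is not covered by the server's signature, only the nar hash is.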
Ludovic Courtès wrote on 29 Dec 2018 00:15
[PATCH 3/5] Add (guix ipfs).
(address . 33899@debbugs.gnu.org) (name . Ludovic Courtès) (address . ludo@gnu.org)
20181228231554.8220-3-ludo@gnu.org
* guix/ipfs.scm, tests/ipfs.scm: New files.
* Makefile.am (MODULES, SCM_TESTS): Add them.
---
 Makefile.am    |   2 +
 guix/ipfs.scm  | 250 +++++++++++++++++++++++++++++++++++++++++++++++++
 tests/ipfs.scm |  55 +++++++++++
 3 files changed, 307 insertions(+)
 create mode 100644 guix/ipfs.scm
 create mode 100644 tests/ipfs.scm
diff --git a/Makefile.am b/Makefile.am
index da3720e3a6..975d83db6c 100644
--- a/Makefile.am
+++ b/Makefile.am
@@ -101,6 +101,7 @@ MODULES = \
   guix/cve.scm \
   guix/workers.scm \
   guix/zlib.scm \
+  guix/ipfs.scm \
   guix/build-system.scm \
   guix/build-system/android-ndk.scm \
   guix/build-system/ant.scm \
@@ -384,6 +385,7 @@ SCM_TESTS = \
   tests/cve.scm \
   tests/workers.scm \
   tests/zlib.scm \
+  tests/ipfs.scm \
   tests/file-systems.scm \
   tests/uuid.scm \
   tests/system.scm \
diff --git a/guix/ipfs.scm b/guix/ipfs.scm
new file mode 100644
index 0000000000..e941feda6f
--- /dev/null
+++ b/guix/ipfs.scm
@@ -0,0 +1,250 @@
+;;; GNU Guix --- Functional package management for GNU
+;;; Copyright © 2018 Ludovic Courtès <ludo@gnu.org>
+;;;
+;;; This file is part of GNU Guix.
+;;;
+;;; GNU Guix is free software; you can redistribute it and/or modify it
+;;; under the terms of the GNU General Public License as published by
+;;; the Free Software Foundation; either version 3 of the License, or (at
+;;; your option) any later version.
+;;;
+;;; GNU Guix is distributed in the hope that it will be useful, but
+;;; WITHOUT ANY WARRANTY; without even the implied warranty of
+;;; MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+;;; GNU General Public License for more details.
+;;;
+;;; You should have received a copy of the GNU General Public License
+;;; along with GNU Guix.  If not, see <http://www.gnu.org/licenses/>.
+
+(define-module (guix ipfs)
+  #:use-module (guix json)
+  #:use-module (guix base64)
+  #:use-module ((guix build utils) #:select (dump-port))
+  #:use-module (srfi srfi-1)
+  #:use-module (srfi srfi-11)
+  #:use-module (srfi srfi-26)
+  #:use-module (rnrs io ports)
+  #:use-module (rnrs bytevectors)
+  #:use-module (ice-9 match)
+  #:use-module (ice-9 ftw)
+  #:use-module (web uri)
+  #:use-module (web client)
+  #:use-module (web response)
+  #:export (%ipfs-base-url
+            add-file
+            add-file-tree
+            restore-file-tree
+
+            content?
+            content-name
+            content-hash
+            content-size
+
+            add-empty-directory
+            add-to-directory
+            read-contents
+            publish-name))
+
+;;; Commentary:
+;;;
+;;; This module implements bindings for the HTTP interface of the IPFS
+;;; gateway, documented here: <https://docs.ipfs.io/reference/api/http/>.  It
+;;; allows you to add and retrieve files over IPFS, and a few other things.
+;;;
+;;; Code:
+
+(define %ipfs-base-url
+  ;; URL of the IPFS gateway.
+  (make-parameter "http://localhost:5001"))
+
+(define* (call url decode #:optional (method http-post)
+               #:key body (false-if-404? #t) (headers '()))
+  "Invoke the endpoint at URL using METHOD.  Decode the resulting JSON body
+using DECODE, a one-argument procedure that takes an input port; when DECODE
+is false, return the input port.  When FALSE-IF-404? is true, return #f upon
+404 responses."
+  (let*-values (((response port)
+                 (method url #:streaming? #t
+                         #:body body
+
+                         ;; Always pass "Connection: close".
+                         #:keep-alive? #f
+                         #:headers `((connection close)
+                                     ,@headers))))
+    (cond ((= 200 (response-code response))
+           (if decode
+               (let ((result (decode port)))
+                 (close-port port)
+                 result)
+               port))
+          ((and false-if-404?
+                (= 404 (response-code response)))
+           (close-port port)
+           #f)
+          (else
+           (close-port port)
+           (throw 'ipfs-error url response)))))
+
+;; Result of a file addition.
+(define-json-mapping <content> make-content content?
+  json->content
+  (name content-name "Name")
+  (hash content-hash "Hash")
+  (bytes content-bytes "Bytes")
+  (size content-size "Size" string->number))
+
+;; Result of a 'patch/add-link' operation.
+(define-json-mapping <directory> make-directory directory?
+  json->directory
+  (hash directory-hash "Hash")
+  (links directory-links "Links" json->links))
+
+;; A "link".
+(define-json-mapping <link> make-link link?
+  json->link
+  (name link-name "Name")
+  (hash link-hash "Hash")
+  (size link-size "Size" string->number))
+
+;; A "binding", also known as a "name".
+(define-json-mapping <binding> make-binding binding?
+  json->binding
+  (name binding-name "Name")
+  (value binding-value "Value"))
+
+(define (json->links json)
+  (match json
+    (#f    '())
+    (links (map json->link links))))
+
+(define %multipart-boundary
+  ;; XXX: We might want to find a more reliable boundary.
+  (string-append (make-string 24 #\-) "2698127afd7425a6"))
+
+(define (bytevector->form-data bv port)
+  "Write to PORT a 'multipart/form-data' representation of BV."
+  (display (string-append "--" %multipart-boundary "\r\n"
+                          "Content-Disposition: form-data\r\n"
+                          "Content-Type: application/octet-stream\r\n\r\n")
+           port)
+  (put-bytevector port bv)
+  (display (string-append "\r\n--" %multipart-boundary "--\r\n")
+           port))
+
+(define* (add-data data #:key (name "file.txt") recursive?)
+  "Add DATA, a bytevector, to IPFS.  Return a content object representing it."
+  (call (string-append (%ipfs-base-url)
+                       "/api/v0/add?arg=" (uri-encode name)
+                       "&recursive="
+                       (if recursive? "true" "false"))
+        json->content
+        #:headers
+        `((content-type
+           . (multipart/form-data
+              (boundary . ,%multipart-boundary))))
+        #:body
+        (call-with-bytevector-output-port
+         (lambda (port)
+           (bytevector->form-data data port)))))
+
+(define (not-dot? entry)
+  (not (member entry '("." ".."))))
+
+(define (file-tree->sexp file)
+  "Add FILE, recursively, to the IPFS, and return an sexp representing the
+directory's tree structure.
+
+Unlike IPFS's own \"UnixFS\" structure, this format preserves exactly what we
+need: like the nar format, it preserves the executable bit, but does not save
+the mtime or other Unixy attributes irrelevant in the store."
+  ;; The natural approach would be to insert each directory listing as an
+  ;; object of its own in IPFS.  However, this does not buy us much in terms
+  ;; of deduplication, but it does cause a lot of extra round trips when
+  ;; fetching it.  Thus, this sexp is "flat" in that only the leaves are
+  ;; inserted into the IPFS.
+  (let ((st (lstat file)))
+    (match (stat:type st)
+      ('directory
+       (let* ((parent  file)
+              (entries (map (lambda (file)
+                              `(entry ,file
+                                      ,(file-tree->sexp
+                                        (string-append parent "/" file))))
+                            (scandir file not-dot?)))
+              (size    (fold (lambda (entry total)
+                               (match entry
+                                 (('entry name (kind value size))
+                                  (+ total size))))
+                             0
+                             entries)))
+         `(directory ,entries ,size)))
+      ('symlink
+       `(symlink ,(readlink file) 0))
+      ('regular
+       (let ((size (stat:size st)))
+         (if (zero? (logand (stat:mode st) #o100))
+             `(file ,(content-name (add-file file)) ,size)
+             `(executable ,(content-name (add-file file)) ,size)))))))
+
+(define (add-file-tree file)
+  "Add FILE to the IPFS, recursively, using our own canonical directory
+format.  Return the resulting content object."
+  (add-data (string->utf8 (object->string
+                           `(file-tree (version 0)
+                                       ,(file-tree->sexp file))))))
+
+(define (restore-file-tree object file)
+  "Restore to FILE the tree pointed to by OBJECT."
+  (let restore ((tree (match (read (read-contents object))
+                        (('file-tree ('version 0) tree)
+                         tree)))
+                (file file))
+    (match tree
+      (('file object size)
+       (call-with-output-file file
+         (lambda (output)
+           (dump-port (read-contents object) output))))
+      (('executable object size)
+       (call-with-output-file file
+         (lambda (output)
+           (dump-port (read-contents object) output)))
+       (chmod file #o555))
+      (('symlink target size)
+       (symlink target file))
+      (('directory (('entry names entries) ...) size)
+       (mkdir file)
+       (for-each restore entries
+                 (map (cut string-append file "/" <>) names))))))
+
+(define* (add-file file #:key (name (basename file)))
+  "Add FILE under NAME to the IPFS and return a content object for it."
+  (add-data (match (call-with-input-file file get-bytevector-all)
+              ((? eof-object?) #vu8())
+              (bv bv))
+            #:name name))
+
+(define* (add-empty-directory #:key (name "directory"))
+  "Return a content object for an empty directory."
+  (add-data #vu8() #:recursive? #t #:name name))
+
+(define* (add-to-directory directory file name)
+  "Add FILE to DIRECTORY under NAME, and return the resulting directory.
+DIRECTORY and FILE must be hashes identifying objects in the IPFS store."
+  (call (string-append (%ipfs-base-url)
+                       "/api/v0/object/patch/add-link?arg="
+                       (uri-encode directory)
+                       "&arg=" (uri-encode name) "&arg=" (uri-encode file)
+                       "&create=true")
+        json->directory))
+
+(define* (read-contents object #:key offset length)
+  "Return an input port to read the content of OBJECT from."
+  (call (string-append (%ipfs-base-url)
+                       "/api/v0/cat?arg=" object)
+        #f))
+
+(define* (publish-name object)
+  "Publish OBJECT under the current peer ID."
+  (call (string-append (%ipfs-base-url)
+                       "/api/v0/name/publish?arg=" object)
+        json->binding))
diff --git a/tests/ipfs.scm b/tests/ipfs.scm
new file mode 100644
index 0000000000..3b662b22bd
--- /dev/null
+++ b/tests/ipfs.scm
@@ -0,0 +1,55 @@
+;;; GNU Guix --- Functional package management for GNU
+;;; Copyright © 2018 Ludovic Courtès <ludo@gnu.org>
+;;;
+;;; This file is part of GNU Guix.
+;;;
+;;; GNU Guix is free software; you can redistribute it and/or modify it
+;;; under the terms of the GNU General Public License as published by
+;;; the Free Software Foundation; either version 3 of the License, or (at
+;;; your option) any later version.
+;;;
+;;; GNU Guix is distributed in the hope that it will be useful, but
+;;; WITHOUT ANY WARRANTY; without even the implied warranty of
+;;; MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+;;; GNU General Public License for more details.
+;;;
+;;; You should have received a copy of the GNU General Public License
+;;; along with GNU Guix.  If not, see <http://www.gnu.org/licenses/>.
+
+(define-module (test-ipfs)
+  #:use-module (guix ipfs)
+  #:use-module ((guix utils) #:select (call-with-temporary-directory))
+  #:use-module (guix tests)
+  #:use-module (web uri)
+  #:use-module (srfi srfi-64))
+
+;; Test the (guix ipfs) module.
+
+(define (ipfs-gateway-running?)
+  "Return true if the IPFS gateway is running at %IPFS-BASE-URL."
+  (let* ((uri    (string->uri (%ipfs-base-url)))
+         (socket (socket AF_INET SOCK_STREAM 0)))
+    (define connected?
+      (catch 'system-error
+        (lambda ()
+          (format (current-error-port)
+                  "probing IPFS gateway at localhost:~a...~%"
+                  (uri-port uri))
+          (connect socket AF_INET INADDR_LOOPBACK (uri-port uri))
+          #t)
+        (const #f)))
+
+    (close-port socket)
+    connected?))
+
+(unless (ipfs-gateway-running?)
+  (test-skip 1))
+
+(test-assert "add-file-tree + restore-file-tree"
+  (call-with-temporary-directory
+   (lambda (directory)
+     (let* ((source  (dirname (search-path %load-path "guix/base32.scm")))
+            (target  (string-append directory "/r"))
+            (content (pk 'content (add-file-tree source))))
+       (restore-file-tree (content-name content) target)
+       (file=? source target)))))
-- 
2.20.1
Hector Sanjuan wrote on 7 Jan 2019 15:43
Re: [PATCH 0/5] Distributing substitutes over IPFS
(name . 33899@debbugs.gnu.org)(address . 33899@debbugs.gnu.org)
NIQ7NwCAv476krhArm683GoWozb7fXZrcRzXorY2s-X_QAZ71GhHzrFj7788bRBP3fpklDWfxy-we7928SLdmd-iairDLG_sNDRjbJwxYyo=@hector.link
‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
On Saturday, December 29, 2018 12:12 AM, Ludovic Courtès <ludo@gnu.org> wrote:
> Here is a first draft adding support to distribute and retrieve substitutes
> over IPFS.  [...]
>
> I’ve pushed these patches in ‘wip-ipfs-substitutes’.  This is rough on the
> edges and probably buggy, but the adventurous among us might want to give
> it a spin. :-)

Hey! Happy new year! This is great news. I'm very glad to see this. I haven't tried this yet, but looking at the code there are a couple of things to point out.
1) The doc strings usually refer to the IPFS HTTP API as GATEWAY. go-ipfs has a read/write API (on :5001) and a read-only API that we call the "gateway", which runs on :8080. The gateway, apart from handling most of the read-only methods from the HTTP API, also handles paths like "/ipfs/<cid>" or "/ipns/<name>" gracefully, and returns an autogenerated webpage for directory-type CIDs. The gateway does not allow publishing. Therefore I think the doc strings should say "IPFS daemon API" rather than "GATEWAY".
2) I'm not proficient enough in Scheme to grasp the details of the "directory" format. If I understand it right, you keep a separate manifest object listing the directory structure, the contents, and the executable bit for each file. Thus, when adding a store item you add all the files separately, plus this manifest; and when retrieving a store item you fetch the manifest and reconstruct the tree by fetching the contents listed in it (and applying the executable flag). Is this correct? This works, but it can be improved:
You can add all the files/folders in a single request. If I'm reading it right, each file is currently added separately (and gets pinned separately). It would probably make sense to add them all in a single request, letting IPFS store the directory structure as "unixfs". You can additionally add the sexp file with the directory structure and executable flags as an extra file in the root folder. This would allow fetching the whole thing with a single request too (/api/v0/get?arg=<hash>), and pinning a single hash recursively (rather than each file separately). After getting the whole thing, you will need to chmod +x things accordingly.
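That last step, re-applying executable bits after fetching the tree, could look roughly like the following. This is a Python sketch under stated assumptions: the manifest here is a flat list of (path, executable?) pairs, which is an illustrative stand-in for the patch's sexp-based "file-tree" format, and `apply_exec_bits` is a hypothetical helper, not part of Guix.

```python
import os
import stat
import tempfile

def apply_exec_bits(root, manifest):
    """Walk MANIFEST, a list of (relative-path, executable?) pairs recorded
    when the store item was added, and set the executable bits on the files
    that had them.  (Hypothetical helper; the real patch encodes this
    information in its sexp-based file-tree format.)"""
    for relpath, executable in manifest:
        if executable:
            path = os.path.join(root, relpath)
            mode = os.stat(path).st_mode
            os.chmod(path, mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)

# Usage: pretend we just ran `ipfs get` into a temporary directory.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "bin"))
open(os.path.join(root, "bin", "hello"), "w").close()
apply_exec_bits(root, [("bin/hello", True)])
```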
It will probably take some trial and error to get the multipart right so as to upload everything in a single request. The Go HTTP client code doing this can be found at:
https://github.com/ipfs/go-ipfs-files/blob/master/multifilereader.go#L96
As you see, a directory part in the multipart will have the Content-Type header set to "application/x-directory". The best way to see how "abspath" etc. are set is probably to sniff an `ipfs add -r <testfolder>` operation (localhost:5001).
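As a starting point for that trial and error, here is a minimal sketch of such a multipart/form-data body for POST /api/v0/add. The "application/x-directory" content type comes from the discussion above; the exact part names and any Abspath-style headers are assumptions that should be verified by sniffing a real `ipfs add -r`, as Hector suggests.

```python
import io
import uuid

def multipart_add_body(files):
    """Build a multipart/form-data body mimicking what `ipfs add -r`
    sends.  FILES maps a path to either bytes (a regular file) or None
    (a directory part, sent with content type application/x-directory).
    Returns (boundary, body-bytes)."""
    boundary = uuid.uuid4().hex
    out = io.BytesIO()
    for path, data in files.items():
        out.write(b"--" + boundary.encode() + b"\r\n")
        out.write(b'Content-Disposition: form-data; name="file"; filename="'
                  + path.encode() + b'"\r\n')
        if data is None:
            # Directory entry: no payload, special content type.
            out.write(b"Content-Type: application/x-directory\r\n\r\n")
        else:
            out.write(b"Content-Type: application/octet-stream\r\n\r\n")
            out.write(data)
        out.write(b"\r\n")
    out.write(b"--" + boundary.encode() + b"--\r\n")
    return boundary, out.getvalue()

boundary, body = multipart_add_body({"pkg": None, "pkg/hello": b"#!/bin/sh\n"})
# The request would then be sent as:
#   POST /api/v0/add  with
#   Content-Type: multipart/form-data; boundary=<boundary>
```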
Once UnixFSv2 lands, you will be in a position to just drop the sexp file altogether.
Let me know if you have any doubts; I'll do my best to answer them. In the meantime I'll try to get more familiar with Guix.
Cheers,
Hector
PS. There is a place where it says "ifps" instead of "ipfs". A very common typo.
Ludovic Courtès wrote on 14 Jan 2019 14:17
(name . Hector Sanjuan)(address . code@hector.link)
87r2dfv0nj.fsf@gnu.org
Hi Hector,
Happy new year to you too! :-)
Hector Sanjuan <code@hector.link> skribis:
> 1) The doc strings usually refer to the IPFS HTTP API as GATEWAY. go-ipfs
> has a read/write API (on :5001) and a read-only API that we call "gateway"
> and which runs on :8080. The gateway, apart from handling most of the
> read-only methods from the HTTP API, also handles paths like "/ipfs/<cid>"
> or "/ipns/<name>" gracefully, and returns an autogenerated webpage for
> directory-type CIDs. The gateway does not allow to "publish". Therefore I think
> the doc strings should say "IPFS daemon API" rather than "GATEWAY".
Indeed, I’ll change that.
> 2) I'm not proficient enough in Scheme to grasp the details of the
> "directory" format. If I understand it right, you keep a separate manifest
> object listing the directory structure, the contents and the executable bit
> for each. Thus, when adding a store item you add all the files separately and
> this manifest. And when retrieving a store item you fetch the manifest and
> reconstruct the tree by fetching the contents in it (and applying the
> executable flag). Is this correct? This works, but it can be improved:
That’s correct.
> You can add all the files/folders in a single request. If I'm
> reading it right, now each file is added separately (and gets pinned
> separately). It would probably make sense to add it all in a single request,
> letting IPFS store the directory structure as "unixfs". You can
> additionally add the sexp file with the dir-structure and executable flags
> as an extra file to the root folder. This would allow to fetch the whole thing
> with a single request too /api/v0/get?arg=<hash>. And to pin a single hash
> recursively (and not each separately). After getting the whole thing, you
> will need to chmod +x things accordingly.
Yes, I’m well aware of “unixfs”. The problem, as I see it, is that it stores “too much” in one way (we don’t need to store the mtimes or permissions; we could ignore them upon reconstruction, though), and “not enough” in another (the executable bit is lost, IIUC).
> It will probably need some trial and error to get the multipart right
> to upload all in a single request. The Go code HTTP clients doing
> this can be found at:
>
> https://github.com/ipfs/go-ipfs-files/blob/master/multifilereader.go#L96
>
> As you see, a directory part in the multipart will have the Content-Type header
> set to "application/x-directory". The best way to see how "abspath" etc is set
> is probably to sniff an `ipfs add -r <testfolder>` operation (localhost:5001).
>
> Once UnixFSv2 lands, you will be in a position to just drop the sexp file
> altogether.
Yes, that makes sense. In the meantime, I guess we have to keep using our own format.
What are the performance implications of adding and retrieving files one by one like I did? I understand we’re doing N HTTP requests to the local IPFS daemon where “ipfs add -r” makes a single request, but this alone can’t be much of a problem since communication is happening locally. Does pinning each file separately somehow incur additional overhead?
Thanks for your feedback!
Ludo’.
Hector Sanjuan wrote on 18 Jan 2019 10:08
(name . Ludovic Courtès)(address . ludo@gnu.org)
neM1uqJ3yxqbJiTzV6-q6R-8GNGjv7l_7TJhhQIGXpDQbLoS8yIYrJ4KxKYmFwpi1O9YePH3d5i3fknYgv7nfuMrXFgYoxsk_Xxgs9_Sd2U=@hector.link
‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
On Monday, January 14, 2019 2:17 PM, Ludovic Courtès <ludo@gnu.org> wrote:
> Yes, I’m well aware of “unixfs”. The problems, as I see it, is that it
> stores “too much” in a way (we don’t need to store the mtimes or
> permissions; we could ignore them upon reconstruction though), and “not
> enough” in another way (the executable bit is lost, IIUC.)
Actually, the only metadata that Unixfs stores is size: https://github.com/ipfs/go-unixfs/blob/master/pb/unixfs.proto. By all means the amount of metadata is negligible compared to the actual data stored, and it serves to give you a progress bar when you are downloading.
Having IPFS understand which files are part of a single item is important because you can pin/unpin, diff, and patch all of them as a whole. Unixfs also takes care of handling the case where directories need to be sharded because there are too many entries. When the user puts the single root hash in ipfs.io/ipfs/<hash>, it will correctly display the underlying files, and people will be able to navigate the actual tree with both the web and the CLI. Note that every file added to IPFS gets wrapped as a Unixfs block anyway; you are just saving some "directory" nodes by adding them separately.
There is an alternative way, which is using IPLD to implement a custom block format that carries the executable bit information and nothing else. But I don't see significant advantages at this point for the extra work it requires.
> What are the performance implications of adding and retrieving files one
> by one like I did? I understand we’re doing N HTTP requests to the
> local IPFS daemon where “ipfs add -r” makes a single request, but this
> alone can’t be much of a problem since communication is happening
> locally. Does pinning each file separately somehow incur additional
> overhead?
Yes, pinning separately is slow and incurs overhead. Pins are stored in a merkle tree themselves, so each pin involves reading, patching, and saving that tree. This gets quite slow when you have very large pinsets because the pin blocks grow. Your pinset will grow very large if you do this. Additionally, the pinning operation itself requires a global lock, making it slower still.
But even if it were fast, you would not have a way to easily unpin anything that becomes obsolete, or to get an overview of where things belong. It is also unlikely that a single IPFS daemon will be able to store everything you build, so you might soon find yourself using IPFS Cluster to distribute the storage across multiple nodes, and then you will effectively be adding remotely.

> Thanks for your feedback!
>
> Ludo’.
Thanks for working on this!
Hector
Ludovic Courtès wrote on 18 Jan 2019 10:52
(name . Hector Sanjuan)(address . code@hector.link)
8736pqthqm.fsf@gnu.org
Hello,
Hector Sanjuan <code@hector.link> skribis:
> ‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
> On Monday, January 14, 2019 2:17 PM, Ludovic Courtès <ludo@gnu.org> wrote:
[...]
>> Yes, I’m well aware of “unixfs”. The problems, as I see it, is that it
>> stores “too much” in a way (we don’t need to store the mtimes or
>> permissions; we could ignore them upon reconstruction though), and “not
>> enough” in another way (the executable bit is lost, IIUC.)
>
> Actually the only metadata that Unixfs stores is size:
> https://github.com/ipfs/go-unixfs/blob/master/pb/unixfs.proto and by all
> means the amount of metadata is negligible for the actual data stored
> and serves to give you a progress bar when you are downloading.
Yes, the format I came up with also store the size so we can eventuallydisplay a progress bar.
> Having IPFS understand what files are part of a single item is important
> because you can pin/unpin, diff, patch all of them as a whole. Unixfs
> also takes care of handling the case where the directories need to
> be sharded because there are too many entries.
Isn’t there a way, then, to achieve the same behavior with the custom format? The /api/v0/add entry point has a ‘pin’ argument; I suppose we could leave it false except when we add the top-level “directory” node. Wouldn’t that give us behavior similar to that of Unixfs?
> When the user puts the single root hash in ipfs.io/ipfs/<hash>, it
> will display correctly the underlying files and the people will be
> able to navigate the actual tree with both web and cli.
Right, though that’s less important in my view.
> Note that every file added to IPFS is getting wrapped as a Unixfs
> block anyways. You are just saving some "directory" nodes by adding
> them separately.
Hmm, weird. When I do /api/v0/add, I’m really just passing a bytevector; there’s no notion of a “file” here, AFAICS. Or am I missing something?
> Yes, pinning separately is slow and incurs in overhead. Pins are stored
> in a merkle tree themselves so it involves reading, patching and saving.
> This gets quite slow when you have very large pinsets because your pins
> block size grow. Your pinset will grow very large if you do this.
> Additionally the pinning operation itself requires global lock making it
> more slow.
OK, I see.
> But, even if it was fast, you will not have a way to easily unpin
> anything that becomes obsolete or have an overview of to where things
> belong. It is also unlikely that a single IPFS daemon will be able to
> store everything you build, so you might find yourself using IPFS Cluster
> soon to distribute the storage across multiple nodes and then you will
> be effectively adding remotely.
Currently, ‘guix publish’ stores things as long as they are requested, and then for the duration specified with ‘--ttl’. I suppose we could have similar behavior with IPFS: if an item hasn’t been requested for the specified duration, then we unpin it.
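That TTL policy can be sketched in a few lines. This is a hypothetical illustration, not Guix or IPFS code: `PinReaper` and its method names are made up, and a real implementation would call /api/v0/pin/rm where this sketch merely records the CID.

```python
import time

class PinReaper:
    """Sketch of the TTL policy described above: remember when each
    pinned item was last requested and unpin items idle longer than TTL.
    (Hypothetical class; `unpin` would hit /api/v0/pin/rm in reality.)"""
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.last_request = {}   # CID -> timestamp of last request
        self.unpinned = []       # stand-in for actual pin/rm calls

    def requested(self, cid, now=None):
        self.last_request[cid] = time.time() if now is None else now

    def sweep(self, now=None):
        now = time.time() if now is None else now
        for cid, last in list(self.last_request.items()):
            if now - last > self.ttl:
                self.unpinned.append(cid)
                del self.last_request[cid]

reaper = PinReaper(ttl_seconds=3600)
reaper.requested("QmFoo", now=0)      # last seen an hour-plus ago
reaper.requested("QmBar", now=3000)   # recently requested
reaper.sweep(now=4000)
print(reaper.unpinned)  # → ['QmFoo']
```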
Does that make sense?
Thanks for your help!
Ludo’.
Hector Sanjuan wrote on 18 Jan 2019 12:26
(name . Ludovic Courtès)(address . ludo@gnu.org)
pa258vxfuaaiuhOEdEpZPTqfDHe4hoICI7EPyJ284kk6qVWc_VakU3cwxbx46BVXobg0LMWyrjE_uX6nJ6XoJj78-mfjtsejqHlx3bmkNFw=@hector.link
‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
On Friday, January 18, 2019 10:52 AM, Ludovic Courtès <ludo@gnu.org> wrote:
> Isn’t there a way, then, to achieve the same behavior with the custom
> format? The /api/v0/add entry point has a ‘pin’ argument; I suppose we
> could leave it to false except when we add the top-level “directory”
> node? Wouldn’t that give us behavior similar to that of Unixfs?
Yes. What you could do is add every file flatly/separately (with pin=false) and, at the end, add an IPLD object with references to all the files that you added, including the exec bit information (and size?). This is just a JSON file:
{
  "name": "package name",
  "contents": [
    {
      "path": "/file/path",        # so you know where to extract it later
      "exec": true,
      "ipfs": { "/": "Qmhash..." }
    },
    ...
  ]
}
This needs to be added to IPFS with the /api/v0/dag/put endpoint (this converts it to CBOR; IPLD-CBOR is the actual block format used here). When this object is pinned (?pin=true), it will pin everything referenced from it recursively, in the way we want.
So this will be quite similar to unixfs. But note that if this blob ever grows over the 2M block-size limit, because you have a package with many files, you will need to start solving problems that unixfs solves automatically now (directory sharding).
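A quick way to estimate when that 2M limit would bite is to build the manifest and measure its serialized size. A sketch, with assumptions stated: the manifest shape follows Hector's JSON example above, and JSON size is used as a rough (conservative) proxy for the dag/put CBOR encoding, which would be somewhat smaller.

```python
import json

BLOCK_SIZE_LIMIT = 2 * 1024 * 1024   # the 2M block-size limit mentioned above

def make_manifest(name, entries):
    """Build an IPLD-style manifest: ENTRIES is a list of
    (path, executable?, cid) triples, mirroring Hector's JSON example."""
    return {
        "name": name,
        "contents": [
            {"path": path, "exec": ex, "ipfs": {"/": cid}}
            for path, ex, cid in entries
        ],
    }

def needs_sharding(manifest):
    """True if the serialized manifest would exceed the block-size limit,
    i.e. we would have to split it ourselves (UnixFS shards for us)."""
    return len(json.dumps(manifest).encode()) > BLOCK_SIZE_LIMIT

small = make_manifest("hello-2.10", [("/bin/hello", True, "QmExampleHash")])
print(needs_sharding(small))  # → False
```

In practice a package needs tens of thousands of files before the manifest approaches the limit, which suggests most store items would fit in a single block.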
Because IPLD-CBOR is supported, ipfs, the gateway, etc. will know how to display these manifests, the info in them, and their links.

> > When the user puts the single root hash in ipfs.io/ipfs/<hash>, it
> > will display correctly the underlying files and the people will be
> > able to navigate the actual tree with both web and cli.
>
> Right, though that’s less important in my view.
>
> > Note that every file added to IPFS is getting wrapped as a Unixfs
> > block anyways. You are just saving some "directory" nodes by adding
> > them separately.
>
> Hmm weird. When I do /api/v0/add, I’m really just passing a
> bytevector; there’s no notion of a “file” here, AFAICS. Or am I missing
> something?
They are wrapped in Unixfs blocks anyway by default. From the moment a file is >256K, it gets chunked into several pieces, and a Unixfs block (or multiple, for a really big file) is necessary to reference them. In this case the root hash will be a Unixfs node with links to the parts.
There is a "raw-leaves" option which does not wrap the individual blocks with unixfs, so if the file is small enough not to be chunked, you can avoid the default unixfs wrapping this way.

> > Yes, pinning separately is slow and incurs in overhead. [...]
>
> OK, I see.
I should add that even if you want to /add all files separately (and then put the IPLD manifest I described above), you can still add them all in the same request (it becomes easier, as you just need to put more parts in the multipart and don't have to worry about names/folders/paths).
The /add endpoint will forcefully close the HTTP connection for every /add (long story), and small delays might add up to a big one. This is especially relevant when using IPFS Cluster, where /add might send the blocks somewhere else and needs to do some other things.

> > But, even if it was fast, you will not have a way to easily unpin
> > anything that becomes obsolete or have an overview of to where things
> > belong. [...]
>
> Currently, ‘guix publish’ stores things as long as they are requested,
> and then for the duration specified with ‘--ttl’. I suppose we could
> have similar behavior with IPFS: if an item hasn’t been requested for
> the specified duration, then we unpin it.
>
> Does that make sense?
Yes, in fact I wanted IPFS Cluster to support a TTL so that things are automatically unpinned when it expires, too.
> Thanks for your help!
>
> Ludo’.
Thanks!
Hector
Alex Griffin wrote on 13 May 2019 20:51
(address . 33899@debbugs.gnu.org)
5e3fb8d9-1a83-4031-ab02-c4e10e2be1ea@www.fastmail.com
Do I understand correctly that the only reason you don't just store nar files is for deduplication? Reading [this page][1] suggests to me that you might be overthinking it. IPFS already uses a content-driven chunking algorithm that might provide good enough deduplication on its own. It also looks like you can use your own chunker, so a future improvement could be implementing a custom chunker that makes sure to split nar files at the file boundaries within them.
[1]: https://github.com/ipfs/archives
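The content-driven chunking Alex mentions can be illustrated with a toy rolling-hash chunker. This is a sketch of the idea only: IPFS's real Rabin and buzhash chunkers use different hash functions and parameters, and this toy hash is not one of them.

```python
def chunk(data, mask=0x3F, min_size=16):
    """Split DATA at positions where a rolling hash of recent bytes
    matches MASK.  Because boundaries depend only on nearby content,
    identical content tends to produce identical chunks even when its
    offset in the stream shifts -- the property deduplication relies on."""
    chunks, start, h = [], 0, 0
    for i, byte in enumerate(data):
        h = ((h << 1) ^ byte) & 0xFFFFFFFF  # toy rolling hash (not Rabin)
        if i - start >= min_size and (h & mask) == mask:
            chunks.append(data[start:i + 1])
            start, h = i + 1, 0
    if start < len(data):
        chunks.append(data[start:])
    return chunks

nar_like = bytes(range(256)) * 16   # stand-in for a nar's contents
chunks = chunk(nar_like)
```

A custom chunker for nars, as suggested above, would additionally force boundaries at the file boundaries recorded in the nar header, so that each member file deduplicates against its standalone copy.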
-- Alex Griffin
Pierre Neidhardt wrote on 1 Jul 2019 23:36
87zhlxe8t9.fsf@ambrevar.xyz
Hi!
(Re-sending to debbugs, sorry for the double email :p)
A little update/recap after many months! :)
I talked with Héctor and some other people from IPFS, and I reviewed Ludo's patch, so now I have a somewhat better understanding of the current state of affairs.
- We could store the substitutes as tarballs on IPFS, but this has some possible downsides:
- We would need to use IPFS' tar chunker to deduplicate the content of the tarball. But the tar chunker is not well maintained currently, and it's not clear whether it's reproducible at the moment, so it would need some more work.
- Tarballs might induce some performance cost. Nix had attempted something similar in the past and this may have incurred a significant performance penalty, although this remains to be confirmed. Lewo?
- Ludo's patch stores all files on IPFS individually. This way we don't need to touch the tar chunker, so it's less work :) This raises some other issues however:
- Extra metadata: IPFS stores files on UnixFSv1 which does not include the executable bit.
- Right now we store a s-exp manifest with a list of files and a list of executable bits. But maybe we don't have to roll out our own.
- UnixFSv1 has some metadata field, but Héctor and Alex did not recommend using it (not sure why though).
- We could use UnixFSv2 but it's not released yet and it's unclear when it's going to be released. So we can't really count on it right now.
- IPLD: As Héctor suggested in the previous email, we could leverage IPLD and generate a JSON object that references the files with their paths together with an "executable?" property. A problem would arise if this IPLD object grows over the 2M block-size limit because then we would have to shard it (something that UnixFS would do automatically for us).
- Flat storage vs. tree storage: Right now we are storing the files separately, but this has some shortcomings, namely that we need multiple "get" requests instead of just one, and that IPFS does not "know" that those files are related. (We lose the web view of the tree, etc.) Storing them as a tree could be better. I don't understand whether that would work with the "IPLD manifest" suggested above. Héctor?
- Pinning: Pinning all files separately incurs an overhead. It's enough to just pin the IPLD object since it propagates recursively. When adding a tree, then it's no problem since pinning is only done once.
- IPFS endpoint calls: instead of adding each file individually, it's possible to add them all in one go. Can we add all files at once while using a flat storage? (I.e. not adding them all under a common root folder.)
To sum up, here is what remains to be done on the current patch:
- Add all files in one go without pinning them.
- Store as the file tree? Can we still use the IPLD object to reference the files in the tree? Otherwise use the "raw-leaves" option to avoid wrapping small files in UnixFS blocks.
- Remove the Scheme manifest if IPLD can do the job.
- Generate the IPLD object and pin it.
Any corrections? Thoughts?
Cheers!
--
Pierre Neidhardt
https://ambrevar.xyz/
Pierre Neidhardt wrote on 6 Jul 2019 10:44
87imsfd030.fsf@ambrevar.xyz
Link to the Nix integration discussion:
https://github.com/NixOS/nix/issues/859
--
Pierre Neidhardt
https://ambrevar.xyz/
Ludovic Courtès wrote on 12 Jul 2019 22:15
(name . Pierre Neidhardt)(address . mail@ambrevar.xyz)
87ftnbf1rt.fsf@gnu.org
Hello!
Pierre Neidhardt <mail@ambrevar.xyz> skribis:
> A little update/recap after many months! :)
Thank you, and apologies for the delay!
> - Extra metadata: IPFS stores files on UnixFSv1 which does not
>   include the executable bit.
>
> - Right now we store a s-exp manifest with a list of files and a
>   list of executable bits. But maybe we don't have to roll out our own.
>
> - UnixFSv1 has some metadata field, but Héctor and Alex did not
>   recommend using it (not sure why though).
>
> - We could use UnixFSv2 but it's not released yet and it's unclear when
>   it's going to be released. So we can't really count on it right now.
UnixFSv1 is not an option because it lacks the executable bit; UnixFSv2 would be appropriate, though it stores timestamps that we don’t need (not necessarily a problem).
> - Flat storage vs. tree storage: Right now we are storing the files
>   separately, but this has some shortcomings, namely we need multiple
>   "get" requests instead of just one, and that IPFS does
>   not "know" that those files are related. (We lose the web view of
>   the tree, etc.) Storing them as tree could be better.
>   I don't understand if that would work with the "IPLD manifest"
>   suggested above. Héctor?
I don’t consider the web view a strong argument :-) since we could write
tools to deal with whatever format we use.
Regarding multiple GET requests: we could pipeline them, and it seems
more like an implementation detail to me. The real question is whether
making separate GET requests prevents some optimization in IPFS.
> - Pinning: Pinning all files separately incurs an overhead. It's
>   enough to just pin the IPLD object since it propagates recursively.
>   When adding a tree, then it's no problem since pinning is only done once.
Where’s the overhead exactly?
> - IPFS endpoint calls: instead of adding each file individually, it's
>   possible to add them all in one go. Can we add all files at once
>   while using a flat storage? (I.e. not adding them all under a common
>   root folder.)
Again, is the concern that we’re making one GET and thus one round trip
per file, or is there some additional cost under the hood?
Thanks for the summary and explanations!
Ludo’.
Molly Mackinlay wrote on 12 Jul 2019 22:02
(name . Pierre Neidhardt)(address . mail@ambrevar.xyz)
CA+UaANWE=VKwVaF_uxLTf7jc9wNVRhU105GBqER6fFTwa3zr8Q@mail.gmail.com
Thanks for the update Pierre! Also adding Alex, Jessica, Eric and Andrew
from the package managers discussions at IPFS Camp as FYI.
Generating the ipld manifest with the metadata and the tree of files should
also be fine AFAIK - I’m sure Hector and Eric can expand more on how to
compose them, but data storage format shouldn’t make a big difference for
the ipld manifest.
On Mon, Jul 1, 2019 at 2:36 PM Pierre Neidhardt <mail@ambrevar.xyz> wrote:

> Hi!
>
> (Re-sending to debbugs, sorry for the double email :p)
>
> A little update/recap after many months! :)
>
> I talked with Héctor and some other people from IPFS + I reviewed Ludo's
> patch so now I have a little better understanding of the current state
> of affairs.
>
> - We could store the substitutes as tarballs on IPFS, but this has
>   some possible downsides:
>
>   - We would need to use IPFS' tar chunker to deduplicate the content of
>     the tarball. But the tar chunker is not well maintained currently,
>     and it's not clear whether it's reproducible at the moment, so it
>     would need some more work.
>
>   - Tarballs might induce some performance cost. Nix had attempted
>     something similar in the past and this may have incurred a
>     significant performance penalty, although this remains to be
>     confirmed. Lewo?
>
> - Ludo's patch stores all files on IPFS individually. This way we don't
>   need to touch the tar chunker, so it's less work :)
>   This raises some other issues however:
>
>   - Extra metadata: IPFS stores files on UnixFSv1 which does not
>     include the executable bit.
>
>     - Right now we store a s-exp manifest with a list of files and a
>       list of executable bits. But maybe we don't have to roll out our
>       own.
>
>     - UnixFSv1 has some metadata field, but Héctor and Alex did not
>       recommend using it (not sure why though).
>
>     - We could use UnixFSv2 but it's not released yet and it's unclear
>       when it's going to be released. So we can't really count on it
>       right now.
>
>   - IPLD: As Héctor suggested in the previous email, we could leverage
>     IPLD and generate a JSON object that references the files with
>     their paths together with an "executable?" property.
>     A problem would arise if this IPLD object grows over the 2M
>     block-size limit because then we would have to shard it (something
>     that UnixFS would do automatically for us).
>
> - Flat storage vs. tree storage: Right now we are storing the files
>   separately, but this has some shortcomings, namely we need multiple
>   "get" requests instead of just one, and that IPFS does
>   not "know" that those files are related. (We lose the web view of
>   the tree, etc.) Storing them as a tree could be better.
>   I don't understand if that would work with the "IPLD manifest"
>   suggested above. Héctor?
>
> - Pinning: Pinning all files separately incurs an overhead. It's
>   enough to just pin the IPLD object since it propagates recursively.
>   When adding a tree, then it's no problem since pinning is only done
>   once.
>
> - IPFS endpoint calls: instead of adding each file individually, it's
>   possible to add them all in one go. Can we add all files at once
>   while using a flat storage? (I.e. not adding them all under a common
>   root folder.)
>
> To sum up, here is what remains to be done on the current patch:
>
> - Add all files in one go without pinning them.
> - Store as the file tree? Can we still use the IPLD object to reference
>   the files in the tree? Else use the "raw-leaves" option to avoid
>   wrapping small files in UnixFS blocks.
> - Remove the Scheme manifest if IPLD can do.
> - Generate the IPLD object and pin it.
>
> Any corrections?
> Thoughts?
>
> Cheers!
>
> --
> Pierre Neidhardt
> https://ambrevar.xyz/
Hector Sanjuan wrote on 15 Jul 2019 00:31
(name . Ludovic Courtès)(address . ludo@gnu.org)
HAv_FWZRa_dCADY6U07oX-E2vTrtcxk8ltkpfvTpYCsIh4O5PuItHGIh6w-g5GGmoqVEfkjDUOplpUErdGEPAiFYdrjjDhwd9RJ4OyjjGgY=@hector.link
Hey! Thanks for reviving this discussion!
‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
On Friday, July 12, 2019 10:15 PM, Ludovic Courtès <ludo@gnu.org> wrote:
> [...]
>
> > - Pinning: Pinning all files separately incurs an overhead. It's
> >   enough to just pin the IPLD object since it propagates recursively.
> >   When adding a tree, then it's no problem since pinning is only done once.
>
> Where’s the overhead exactly?
There are reasons why we are proposing to create a single DAG with an
IPLD object at the root. Pinning has a big overhead because it
involves locking, reading, parsing, and writing an internal pin-DAG. This
is especially relevant when the pinset is very large.
Doing multiple GET requests also has overhead, like being unable to use
a single bitswap session (which, when downloading something new, means a
big overhead since every request will have to find providers).
And it's not just the web view, it's the ability to walk/traverse all
the objects related to a given root natively, which also allows comparing
multiple trees and being more efficient for some things ("pin update"
for example). Your original idea is to create a manifest with
references to different parts. I'm just asking you to
create that manifest in a format where those references are understood
not only by you, the file creator, but by IPFS and any tool that can
read IPLD, by making this an IPLD object (which is just JSON).
The process of adding "something" to IPFS is as follows.
---
1. Add to IPFS: multipart upload equivalent to "ipfs add -r":

~/ipfstest $ ipfs add -r -q .
QmXVgwHR2c8KiPPxaoZAj4M4oNGW1qjZSsxMNE8sLWZWTP

2. Add the manifest as an IPLD object: dag/put a JSON file like:

cat <<EOF | ipfs dag put
{
  "executables": ["ipfstest/1.txt"],
  "root": { "/": "QmXVgwHR2c8KiPPxaoZAj4M4oNGW1qjZSsxMNE8sLWZWTP" }
}
EOF
---
That's it. "QmXVgwHR2c8KiPPxaoZAj4M4oNGW1qjZSsxMNE8sLWZWTP" is the root
of your package files.

"bafyreievcw5qoowhepwskcxybochrui65bbtsliuy7r6kyail4w5lyqnjm"
is the root of your manifest file with the list of executables
and a pointer to the other root.
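Hector's two-step recipe can be sketched in code. The following is a minimal Python sketch, assuming the manifest layout shown above; the `make_manifest` helper is hypothetical, the root CID would come from a prior `ipfs add -r -q`, and the actual `dag put` call is omitted since it requires a running daemon:

```python
import json

# Hypothetical value: in practice this is the CID printed by `ipfs add -r -q .`.
ROOT_CID = "QmXVgwHR2c8KiPPxaoZAj4M4oNGW1qjZSsxMNE8sLWZWTP"

def make_manifest(root_cid, executables):
    """Build the IPLD manifest Hector describes: a list of executable
    paths plus an IPLD link ({"/": cid}) to the UnixFS root of the files."""
    return {"executables": sorted(executables),
            "root": {"/": root_cid}}

manifest = make_manifest(ROOT_CID, ["ipfstest/1.txt"])
# This JSON payload is what would be piped to `ipfs dag put` (or POSTed
# to the daemon's dag/put endpoint); the call itself is not made here.
payload = json.dumps(manifest, sort_keys=True)
print(payload)
```

Pinning the resulting manifest CID then recursively pins the whole package tree, which is the point of the single-root design.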

Hector
Ludovic Courtès wrote on 15 Jul 2019 11:24
(name . Hector Sanjuan)(address . code@hector.link)
87ef2rr6om.fsf@gnu.org
Hello Héctor! :-)
Hector Sanjuan <code@hector.link> skribis:
> On Friday, July 12, 2019 10:15 PM, Ludovic Courtès <ludo@gnu.org> wrote:
[...]
> There are reasons why we are proposing to create a single DAG with an
> IPLD object at the root. Pinning has a big overhead because it
> involves locking, reading, parsing, and writing an internal pin-DAG.
>
> [...]
>
> I'm just asking you to create that manifest in a format where those
> references are understood not only by you, the file creator, but by
> IPFS and any tool that can read IPLD, by making this an IPLD object
> (which is just JSON).
OK, I see. Put this way, it seems like creating a DAG with an IPLD
object as its root is pretty compelling.
Thanks for clarifying!
Ludo’.
Pierre Neidhardt wrote on 15 Jul 2019 12:10
87zhlf7glw.fsf@ambrevar.xyz
Héctor mentioned a possible issue with the IPLD manifest growing too big
(in case of too many files in a package), that is, above 2MB.
Then we would need to implement some form of sharding.
Héctor, do you confirm? Any idea on how to tackle this elegantly?
--
Pierre Neidhardt
https://ambrevar.xyz/
Hector Sanjuan wrote on 15 Jul 2019 12:21
(name . Pierre Neidhardt)(address . mail@ambrevar.xyz)
TbeM3tgSHfcbwcjeZ6QdzDTvWhrkzeTEnWCbfJxJfx4sPZRGupGThXLKTPafQnN2u-MrQ_zcST3TawHVICLzaQAq9gG11BXkNNqgzBkXg6U=@hector.link
‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
On Monday, July 15, 2019 12:10 PM, Pierre Neidhardt <mail@ambrevar.xyz> wrote:
> Héctor mentioned a possible issue with the IPLD manifest growing too big
> (in case of too many files in a package), that is, above 2MB.
> Then we would need to implement some form of sharding.
>
> Héctor, do you confirm? Any idea on how to tackle this elegantly?
Doing the DAG node the way I proposed it (referencing a single root) should be
ok... Unless you put too many executable files in that list, it should largely
stay within the 2MB limit.
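Should the executables list ever outgrow the limit anyway, one conceivable approach is to split it across several manifest blocks, each kept under the block size. This is a purely illustrative Python sketch, not an existing IPFS feature; the `shard_executables` helper and the size heuristic are assumptions:

```python
import json

BLOCK_LIMIT = 2 * 1024 * 1024  # the 2MB IPLD block-size limit discussed above

def shard_executables(paths, limit=BLOCK_LIMIT):
    """Greedily split a list of executable paths into chunks whose JSON
    encoding stays within the block limit.  Each chunk could then be put
    as its own IPLD node, with a small parent node linking to the shards."""
    shards, current = [], []
    for path in paths:
        candidate = current + [path]
        if len(json.dumps(candidate).encode("utf-8")) > limit and current:
            shards.append(current)   # current chunk is full; start a new one
            current = [path]
        else:
            current = candidate
    if current:
        shards.append(current)
    return shards
```

In practice, as Hector notes, a plain executables list rarely approaches 2MB, so the single-manifest layout should usually suffice.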

--
Hector
Alex Potsides wrote on 15 Jul 2019 11:20
(name . Molly Mackinlay)(address . molly@protocol.ai)
CAL7-xizWV2EEtyCf=3HbEdDaTdz3qjskVqZfh6LrF-QBCfKNGg@mail.gmail.com
The reason not to use the UnixFSv1 metadata field was that it's in the spec
(https://github.com/ipfs/specs/tree/master/unixfs#data-format) but it's not
really been implemented. As it stands in v1, you'd have to add explicit
metadata types to the spec (executable, owner?, group?, etc.) because
protobufs need to know about everything ahead of time, and each
implementation would have to update to implement those. This is all possible
& not a technical blocker, but since most effort is centred around UnixFSv2,
the timescales might not fit with people's requirements.
The more pragmatic approach Hector suggested was to wrap a CID that
resolves to the UnixFSv1 file in a JSON object that you could use to store
application-specific metadata - something similar to the UnixFSv1.5 section
(https://github.com/ipfs/camp/blob/master/DEEP_DIVES/package-managers/README.md#unixfs-v15)
in our notes from the Package Managers deep dive we did at camp.
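A minimal sketch of such a wrapper object in Python; the field names (`content`, `mode`, `mtime`) are illustrative, not part of any published spec:

```python
def wrap_with_metadata(cid, mode=None, mtime=None):
    """Wrap a CID resolving to a plain UnixFSv1 file in a JSON-style
    object carrying application-specific metadata, in the spirit of the
    UnixFSv1.5 notes linked above.  Field names here are made up."""
    node = {"content": {"/": cid}}   # IPLD link to the untouched UnixFSv1 file
    if mode is not None:
        node["mode"] = mode          # e.g. 0o755 for an executable
    if mtime is not None:
        node["mtime"] = mtime        # optional timestamp, if ever needed
    return node

print(wrap_with_metadata("QmXVgwHR2c8KiPPxaoZAj4M4oNGW1qjZSsxMNE8sLWZWTP",
                         mode=0o755))
```

The underlying file blocks stay plain UnixFSv1 (so they still deduplicate), while the tiny wrapper node carries whatever attributes the application needs.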
a.





On Fri, Jul 12, 2019 at 9:03 PM Molly Mackinlay <molly@protocol.ai> wrote: