[PATCH] gnu: Add gpt4all-backend

Status: Open
Submitted by: Andy Tai
Owner: unassigned
Severity: normal

Andy Tai wrote on 14 Aug 2023 14:09
To: guix-patches@gnu.org, Andy Tai <atai@atai.org>
Message-ID: <b7d71736646f8ef8bc2ace7bd9cc392b313abd14.1692014931.git.atai@atai.org>
* gnu/packages/machine-learning.scm (gpt4all-backend): New variable.
---
gnu/packages/machine-learning.scm | 51 +++++++++++++++++++++++++++++++
1 file changed, 51 insertions(+)

diff --git a/gnu/packages/machine-learning.scm b/gnu/packages/machine-learning.scm
index c32180615b..4c9d56fbe2 100644
--- a/gnu/packages/machine-learning.scm
+++ b/gnu/packages/machine-learning.scm
@@ -4801,3 +4801,54 @@ (define-public oneapi-dnnl
"OneAPI Deep Neural Network Library (oneDNN) is a cross-platform
performance library of basic building blocks for deep learning applications.")
(license license:asl2.0)))
+
+
+(define-public gpt4all-backend
+ (let ((commit "108d950874e457ced4d5d1f0569dfb43bbd25734")
+ (version "0.3.0")
+ (revision "1"))
+ (package
+ (name "gpt4all-backend")
+ (version (git-version version revision commit))
+ (source (origin
+ (method git-fetch)
+ (uri (git-reference
+ (url "https://github.com/nomic-ai/gpt4all")
+ (commit commit)
+ (recursive? #t)))
+ (file-name (git-file-name name version))
+ (sha256
+ (base32
+ "1x2w3x5abqkwjp043ijqfsmsm72783a245y5bri9bs1cpfyr0xyr"))))
+ (build-system cmake-build-system)
+ ;; (inputs (list llama-cpp))
+ ;; TODO: when upstream supports using system installed llama-cpp, use it
+ (outputs '("out"))
+ (arguments
+ (list #:tests? #f ;no test target
+ #:configure-flags #~(list (string-append
+ "-DCMAKE_INSTALL_PREFIX="
+ #$output))
+ #:phases #~(modify-phases %standard-phases
+ (add-after 'unpack 'chdir
+ (lambda _
+ (chdir "gpt4all-backend")))
+ (add-after 'chdir 'fix-install-path
+ (lambda _
+ (substitute* "CMakeLists.txt"
+ (("CMAKE_INSTALL_PREFIX")
+ "CMAKE_INSTALL_PREFIX_ignored"))))
+ (replace 'install
+ (lambda _
+ (mkdir-p #$output)
+ (invoke "cmake" "-P" "cmake_install.cmake"))))))
+ (home-page "https://gpt4all.io/index.html")
+ (synopsis "C/C model backend used by GPT4All for inference on the CPU")
+ (description
+ "GPT4All backend acts as a universal library/wrapper for all models that
+the GPT4All ecosystem supports. Language bindings are built on top of this universal
+library. The native GPT4all Chat application directly uses this library for all
+inference.")
+ (license license:expat))))
+
+

base-commit: b15381460ed74e72792ff182dd2ca7a06ba59b0c
--
2.41.0
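
The TODO above refers to upstream's vendored llama.cpp: at this commit the CMake build only links against the copy pulled in by the recursive checkout, hence the commented-out input. Should upstream gain support for an external llama.cpp, a follow-up could take it from Guix instead; a purely hypothetical sketch, assuming the same module context as the patch (llama-cpp is the package already defined in machine-learning.scm, and the variant name is made up):

    ;; Hypothetical follow-up, not part of this patch: once upstream can
    ;; link against an external llama.cpp, add the Guix package as an input.
    (define gpt4all-backend-with-system-llama-cpp
      (package
        (inherit gpt4all-backend)
        (inputs (list llama-cpp))))
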
Andy Tai wrote on 14 Aug 2023 14:56
[PATCH v2] gnu: Add gpt4all-backend
To: 65287@debbugs.gnu.org, Andy Tai <atai@atai.org>
Message-ID: <837e0362b70ffb44f87a9002306a0efc178cee79.1692017713.git.atai@atai.org>
* gnu/packages/machine-learning.scm (gpt4all-backend): New variable.
---
gnu/packages/machine-learning.scm | 56 +++++++++++++++++++++++++++++++
1 file changed, 56 insertions(+)

diff --git a/gnu/packages/machine-learning.scm b/gnu/packages/machine-learning.scm
index c32180615b..7af7ba8b5f 100644
--- a/gnu/packages/machine-learning.scm
+++ b/gnu/packages/machine-learning.scm
@@ -4801,3 +4801,59 @@ (define-public oneapi-dnnl
"OneAPI Deep Neural Network Library (oneDNN) is a cross-platform
performance library of basic building blocks for deep learning applications.")
(license license:asl2.0)))
+
+
+(define-public gpt4all-backend
+ (let ((commit "108d950874e457ced4d5d1f0569dfb43bbd25734")
+ (version "0.3.0")
+ (revision "1"))
+ (package
+ (name "gpt4all-backend")
+ (version (git-version version revision commit))
+ (source (origin
+ (method git-fetch)
+ (uri (git-reference
+ (url "https://github.com/nomic-ai/gpt4all")
+ (commit commit)
+ (recursive? #t)))
+ (file-name (git-file-name name version))
+ (sha256
+ (base32
+ "1x2w3x5abqkwjp043ijqfsmsm72783a245y5bri9bs1cpfyr0xyr"))))
+ (build-system cmake-build-system)
+ ;; (inputs (list llama-cpp))
+ ;; TODO: when upstream supports using system installed llama-cpp, use it
+ (arguments
+ (list #:tests? #f ;no test target
+ #:configure-flags #~(list (string-append
+ "-DCMAKE_INSTALL_PREFIX="
+ #$output))
+ #:phases #~(modify-phases %standard-phases
+ (add-after 'unpack 'chdir
+ (lambda _
+ (mkdir-p #$output) ;ensure it exists
+ (chdir "gpt4all-backend")))
+ (add-after 'chdir 'fix-install-path
+ (lambda _
+ (substitute* "CMakeLists.txt"
+ (("CMAKE_INSTALL_PREFIX")
+ "CMAKE_INSTALL_PREFIX_ignored"))))
+ (replace 'install
+ (lambda* (#:key outputs #:allow-other-keys)
+ (let* ((out #$output)
+ (lib (string-append out "/lib")))
+ (mkdir-p lib)
+ ;; Install the .so targets.
+ (for-each (lambda (file)
+ (install-file file lib))
+ (find-files "." "\\.so"))))))))
+ (home-page "https://gpt4all.io/index.html")
+ (synopsis "C/C model backend used by GPT4All for inference on the CPU")
+ (description
+ "GPT4All backend acts as a universal library/wrapper for all models that
+the GPT4All ecosystem supports. Language bindings are built on top of this universal
+library. The native GPT4all Chat application directly uses this library for all
+inference.")
+ (license license:expat))))
+
+

base-commit: b15381460ed74e72792ff182dd2ca7a06ba59b0c
--
2.41.0
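
Compared with v1, the replaced 'install phase above no longer runs CMake's generated install script; it copies the built shared objects by hand using find-files and install-file from (guix build utils). A self-contained sketch of that idiom outside any package definition (the target directory is a made-up placeholder, not something from the patch):

    (use-modules (guix build utils))

    ;; Copy every shared object found under the current directory into LIB,
    ;; creating LIB first; the same pattern as the replaced 'install phase.
    (define (install-shared-objects lib)
      (mkdir-p lib)
      (for-each (lambda (file)
                  (install-file file lib))
                (find-files "." "\\.so")))

    (install-shared-objects "/tmp/example/lib")  ;placeholder target
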
Andy Tai wrote on 14 Aug 2023 15:00
[PATCH v3] gnu: Add gpt4all-backend
To: 65287@debbugs.gnu.org, Andy Tai <atai@atai.org>
Message-ID: <94fc5008f51f3a5594e2ab995c9cca06e8bc3843.1692018019.git.atai@atai.org>
* gnu/packages/machine-learning.scm (gpt4all-backend): New variable.
---
gnu/packages/machine-learning.scm | 56 +++++++++++++++++++++++++++++++
1 file changed, 56 insertions(+)

diff --git a/gnu/packages/machine-learning.scm b/gnu/packages/machine-learning.scm
index c32180615b..6ba78f35c6 100644
--- a/gnu/packages/machine-learning.scm
+++ b/gnu/packages/machine-learning.scm
@@ -4801,3 +4801,59 @@ (define-public oneapi-dnnl
"OneAPI Deep Neural Network Library (oneDNN) is a cross-platform
performance library of basic building blocks for deep learning applications.")
(license license:asl2.0)))
+
+
+(define-public gpt4all-backend
+ (let ((commit "108d950874e457ced4d5d1f0569dfb43bbd25734")
+ (version "0.3.0")
+ (revision "1"))
+ (package
+ (name "gpt4all-backend")
+ (version (git-version version revision commit))
+ (source (origin
+ (method git-fetch)
+ (uri (git-reference
+ (url "https://github.com/nomic-ai/gpt4all")
+ (commit commit)
+ (recursive? #t)))
+ (file-name (git-file-name name version))
+ (sha256
+ (base32
+ "1x2w3x5abqkwjp043ijqfsmsm72783a245y5bri9bs1cpfyr0xyr"))))
+ (build-system cmake-build-system)
+ ;; (inputs (list llama-cpp))
+ ;; TODO: when upstream supports using system installed llama-cpp, use it
+ (arguments
+ (list #:tests? #f ;no test target
+ #:configure-flags #~(list (string-append
+ "-DCMAKE_INSTALL_PREFIX="
+ #$output))
+ #:phases #~(modify-phases %standard-phases
+ (add-after 'unpack 'chdir
+ (lambda _
+ (mkdir-p #$output) ;ensure it exists
+ (chdir "gpt4all-backend")))
+ (add-after 'chdir 'fix-install-path
+ (lambda _
+ (substitute* "CMakeLists.txt"
+ (("CMAKE_INSTALL_PREFIX")
+ "CMAKE_INSTALL_PREFIX_ignored"))))
+ (replace 'install
+ (lambda* (#:key outputs #:allow-other-keys)
+ (let* ((out #$output)
+ (lib (string-append out "/lib")))
+ (mkdir-p lib)
+ ;; Install the .so targets.
+ (for-each (lambda (file)
+ (install-file file lib))
+ (find-files "." "\\.so"))))))))
+ (home-page "https://gpt4all.io/index.html")
+ (synopsis "C/C++ model backend used by GPT4All for inference on the CPU")
+ (description
+ "GPT4All backend acts as a universal library/wrapper for all models that
+the GPT4All ecosystem supports. Language bindings are built on top of this universal
+library. The native GPT4all Chat application directly uses this library for all
+inference.")
+ (license license:expat))))
+
+

base-commit: b15381460ed74e72792ff182dd2ca7a06ba59b0c
--
2.41.0
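
A patch like this is normally verified from a Guix checkout with the pre-inst-env wrapper, for example (standard commands, not taken from the thread):

    ./pre-inst-env guix build gpt4all-backend
    ./pre-inst-env guix lint gpt4all-backend
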
To comment on this conversation, send an email to 65287@debbugs.gnu.org.

To respond to this issue using the mumi CLI, first switch to it:
  mumi current 65287
Then you may apply the latest patchset in this issue (with sign-off):
  mumi am -- -s
Or compose a reply to this issue:
  mumi compose
Or send patches to this issue:
  mumi send-email *.patch