[PATCH 0/8] Upscale your anime pictures, now with 99% less malware

Status: Open
Participant: Liliana Marie Prikler
Owner: unassigned
Submitted by: Liliana Marie Prikler
Severity: normal
Liliana Marie Prikler wrote on 26 Nov 2022 12:47
(address . guix-patches@gnu.org)
a9bdd1f5c70f0879615809184653358d7f07def3.camel@gmail.com
Hi Guix,

A certain post on This Week in GNOME [1], in particular its mention of [2],
got me curious as to what Tencent is up to with their machine learning
stuff.
As it turns out, ncnn is a rather powerful platform that allows running
neural networks on your GPU via Vulkan. That's right, Vulkan, none of
that CUDA bs. Sadly, the training part appears to still be done using
good ol' pytorch, so I packaged the python variant as well. To convert
models from one to the other, you do have to jump through some hoops
via onnx, but ncnn comes with a tool that translates from onnx's format
to theirs, so it should hopefully also be useful in other applications.
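
If you want to try that route yourself, something along these lines should
work (a rough sketch only; the stand-in model and file names below are
purely illustrative, not anything shipped by these packages):

import torch
import torch.nn as nn

net = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU())  # stand-in model
net.eval()

dummy = torch.randn(1, 3, 224, 224)              # example input shape
torch.onnx.export(net, dummy, "net.onnx", opset_version=11)

# ncnn's converter then turns the ONNX file into the .param/.bin pair that
# ncnn-based tools load, roughly:  onnx2ncnn net.onnx net.param net.bin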

In order to keep the Guix package clean, I removed "everything" that
depends on CUDA and also dropped (some/most? of) the bits that download
pretrained models over the aether. However, I only performed manual
tests with a pretrained model for real-esrgan-ncnn, so take this with a
grain of salt.

Anyway, I hope you have some good, freedom-respecting time with this.

Cheers


Liliana Marie Prikler (8):
gnu: Add ncnn.
gnu: Add real-esrgan-ncnn.
gnu: Add python-addict.
gnu: Add python-basicsr.
gnu: Add python-filterpy.
gnu: Add python-facexlib.
gnu: Add python-gfpgan.
gnu: Add python-real-esrgan.

gnu/local.mk | 2 +
gnu/packages/machine-learning.scm | 318 ++
.../patches/python-basicsr-fuck-nvidia.patch | 3233 +++++++++++++++++
.../python-gfpgan-unfuse-leaky-relu.patch | 57 +
...real-resgan-ncnn-simplify-model-path.patch | 195 +
gnu/packages/python-science.scm | 50 +
gnu/packages/python-xyz.scm | 18 +
7 files changed, 3873 insertions(+)
create mode 100644 gnu/packages/patches/python-basicsr-fuck-nvidia.patch
create mode 100644 gnu/packages/patches/python-gfpgan-unfuse-leaky-relu.patch
create mode 100644 gnu/packages/patches/real-resgan-ncnn-simplify-model-path.patch


base-commit: 7e0ad0dd0f2829d6f3776648ba7c88acf9888d7a
--
2.38.1
Liliana Marie Prikler wrote on 20 Nov 2022 00:14
[PATCH 1/8] gnu: Add ncnn.
(address . 59607@debbugs.gnu.org)
a9149be6281538a384cd71d3de658246b502f177.camel@gmail.com
* gnu/packages/machine-learning.scm (ncnn): New variable.
---
gnu/packages/machine-learning.scm | 32 +++++++++++++++++++++++++++++++
1 file changed, 32 insertions(+)

diff --git a/gnu/packages/machine-learning.scm b/gnu/packages/machine-learning.scm
index fbc06f96b6..e984e3004b 100644
--- a/gnu/packages/machine-learning.scm
+++ b/gnu/packages/machine-learning.scm
@@ -102,6 +102,7 @@ (define-module (gnu packages machine-learning)
#:use-module (gnu packages swig)
#:use-module (gnu packages tls)
#:use-module (gnu packages video)
+ #:use-module (gnu packages vulkan)
#:use-module (gnu packages web)
#:use-module (gnu packages xml)
#:use-module (gnu packages xdisorg)
@@ -749,6 +750,37 @@ (define (delete-ifdefs file)
in terms of new algorithms.")
(license license:gpl3+)))
+(define-public ncnn
+ (package
+ (name "ncnn")
+ (version "20220729")
+ (source (origin
+ (method git-fetch)
+ (uri (git-reference
+ (url "https://github.com/Tencent/ncnn")
+ (commit version)))
+ (file-name (git-file-name name version))
+ (sha256
+ (base32 "02na1crxph8m3sqb1c32v83ppxjcmaxyncql89q5mf9ggddmx5c5"))))
+ (build-system cmake-build-system)
+ (arguments
+ (list #:configure-flags
+ #~(list "-DNCNN_AVX=OFF"
+ "-DNCNN_BUILD_TESTS=TRUE"
+ "-DNCNN_SYSTEM_GLSLANG=ON"
+ (string-append "-DGLSLANG_TARGET_DIR="
+ #$(this-package-input "glslang")
+ "/lib/cmake")
+ "-DNCNN_VULKAN=ON")
+ #:tests? #f)) ; XXX: half of the tests fail
+ (inputs (list glslang vulkan-headers vulkan-loader))
+ (native-inputs (list protobuf))
+ (home-page "https://github.com/Tencent/ncnn")
+ (synopsis "Neural network for mobile platforms")
+ (description "NCNN is a framework for building neural networks written in
+C++. It supports parallel computing as well as GPU acceleration via Vulkan.")
+ (license license:bsd-3)))
+
(define-public onnx
(package
(name "onnx")
--
2.38.1
Liliana Marie Prikler wrote on 20 Nov 2022 02:16
[PATCH 2/8] gnu: Add real-esrgan-ncnn.
(address . 59607@debbugs.gnu.org)
3823ee69ed8ec08192f0c2ebc26fe8b3d399b381.camel@gmail.com
* gnu/packages/machine-learning.scm (real-esrgan-ncnn): New variable.
---
gnu/packages/machine-learning.scm | 44 ++++
...real-resgan-ncnn-simplify-model-path.patch | 195 ++++++++++++++++++
2 files changed, 239 insertions(+)
create mode 100644 gnu/packages/patches/real-resgan-ncnn-simplify-model-path.patch

diff --git a/gnu/packages/machine-learning.scm b/gnu/packages/machine-learning.scm
index e984e3004b..0566f4bd69 100644
--- a/gnu/packages/machine-learning.scm
+++ b/gnu/packages/machine-learning.scm
@@ -781,6 +781,50 @@ (define-public ncnn
C++. It supports parallel computing as well as GPU acceleration via Vulkan.")
(license license:bsd-3)))
+(define-public real-esrgan-ncnn
+ (package
+ (name "real-esrgan-ncnn")
+ (version "0.2.0")
+ (source (origin
+ (method git-fetch)
+ (uri (git-reference
+ (url "https://github.com/xinntao/Real-ESRGAN-ncnn-vulkan")
+ (commit (string-append "v" version))))
+ (file-name (git-file-name name version))
+ (patches
+ (search-patches
+ "real-resgan-ncnn-simplify-model-path.patch"))
+ (sha256
+ (base32 "1hlrq8b4848vgj2shcxz68d98p9wd5mf619v5d04pwg40s85zqqp"))))
+ (build-system cmake-build-system)
+ (arguments
+ (list #:tests? #f ; No tests
+ #:configure-flags
+ #~(list "-DUSE_SYSTEM_NCNN=TRUE"
+ "-DUSE_SYSTEM_WEBP=TRUE"
+ (string-append "-DGLSLANG_TARGET_DIR="
+ #$(this-package-input "glslang")
+ "/lib/cmake"))
+ #:phases #~(modify-phases %standard-phases
+ (add-after 'unpack 'chdir
+ (lambda _
+ (chdir "src")))
+ (replace 'install
+ (lambda* (#:key outputs #:allow-other-keys)
+ (let ((bin (string-append (assoc-ref outputs "out")
+ "/bin")))
+ (mkdir-p bin)
+ (install-file "realesrgan-ncnn-vulkan" bin)))))))
+ (inputs (list glslang libwebp ncnn vulkan-headers vulkan-loader))
+ (home-page "https://github.com/xinntao/Real-ESRGAN")
+ (synopsis "Restore low-resolution images")
+ (description "Real-ESRGAN is a @acronym{GAN, Generative Adversarial Network}
+aiming to restore low-resolution images. The techniques used are described in
+the paper 'Real-ESRGAN: Training Real-World Blind Super-Resolution with Pure
+Synthetic Data' by Xintao Wang, Liangbin Xie, Chao Dong, and Ying Shan.
+This package provides an implementation built on top of ncnn.")
+ (license license:expat)))
+
(define-public onnx
(package
(name "onnx")
diff --git a/gnu/packages/patches/real-resgan-ncnn-simplify-model-path.patch b/gnu/packages/patches/real-resgan-ncnn-simplify-model-path.patch
new file mode 100644
index 0000000000..9a02269718
--- /dev/null
+++ b/gnu/packages/patches/real-resgan-ncnn-simplify-model-path.patch
@@ -0,0 +1,195 @@
+diff --git a/src/main.cpp b/src/main.cpp
+index ebe0e62..ddfb742 100644
+--- a/src/main.cpp
++++ b/src/main.cpp
+@@ -109,8 +109,7 @@ static void print_usage()
+ fprintf(stderr, " -o output-path output image path (jpg/png/webp) or directory\n");
+ fprintf(stderr, " -s scale upscale ratio (can be 2, 3, 4. default=4)\n");
+ fprintf(stderr, " -t tile-size tile size (>=32/0=auto, default=0) can be 0,0,0 for multi-gpu\n");
+- fprintf(stderr, " -m model-path folder path to the pre-trained models. default=models\n");
+- fprintf(stderr, " -n model-name model name (default=realesr-animevideov3, can be realesr-animevideov3 | realesrgan-x4plus | realesrgan-x4plus-anime | realesrnet-x4plus)\n");
++ fprintf(stderr, " -m model-path model and parameter file name (sans .bin and .param extension)\n");
+ fprintf(stderr, " -g gpu-id gpu device to use (default=auto) can be 0,1,2 for multi-gpu\n");
+ fprintf(stderr, " -j load:proc:save thread count for load/proc/save (default=1:2:2) can be 1:2,2,2:2 for multi-gpu\n");
+ fprintf(stderr, " -x enable tta mode\n");
+@@ -438,8 +437,7 @@ int main(int argc, char** argv)
+ path_t outputpath;
+ int scale = 4;
+ std::vector<int> tilesize;
+- path_t model = PATHSTR("models");
+- path_t modelname = PATHSTR("realesr-animevideov3");
++ path_t model = PATHSTR("");
+ std::vector<int> gpuid;
+ int jobs_load = 1;
+ std::vector<int> jobs_proc;
+@@ -451,7 +449,7 @@ int main(int argc, char** argv)
+ #if _WIN32
+ setlocale(LC_ALL, "");
+ wchar_t opt;
+- while ((opt = getopt(argc, argv, L"i:o:s:t:m:n:g:j:f:vxh")) != (wchar_t)-1)
++ while ((opt = getopt(argc, argv, L"i:o:t:m:g:j:f:vxh")) != (wchar_t)-1)
+ {
+ switch (opt)
+ {
+@@ -461,18 +459,12 @@ int main(int argc, char** argv)
+ case L'o':
+ outputpath = optarg;
+ break;
+- case L's':
+- scale = _wtoi(optarg);
+- break;
+ case L't':
+ tilesize = parse_optarg_int_array(optarg);
+ break;
+ case L'm':
+ model = optarg;
+ break;
+- case L'n':
+- modelname = optarg;
+- break;
+ case L'g':
+ gpuid = parse_optarg_int_array(optarg);
+ break;
+@@ -497,7 +489,7 @@ int main(int argc, char** argv)
+ }
+ #else // _WIN32
+ int opt;
+- while ((opt = getopt(argc, argv, "i:o:s:t:m:n:g:j:f:vxh")) != -1)
++ while ((opt = getopt(argc, argv, "i:o:t:m:g:j:f:vxh")) != -1)
+ {
+ switch (opt)
+ {
+@@ -507,18 +499,12 @@ int main(int argc, char** argv)
+ case 'o':
+ outputpath = optarg;
+ break;
+- case 's':
+- scale = atoi(optarg);
+- break;
+ case 't':
+ tilesize = parse_optarg_int_array(optarg);
+ break;
+ case 'm':
+ model = optarg;
+ break;
+- case 'n':
+- modelname = optarg;
+- break;
+ case 'g':
+ gpuid = parse_optarg_int_array(optarg);
+ break;
+@@ -549,6 +535,12 @@ int main(int argc, char** argv)
+ return -1;
+ }
+
++ if (model.empty())
++ {
++ fprintf(stderr, "no model given\n");
++ return -1;
++ }
++
+ if (tilesize.size() != (gpuid.empty() ? 1 : gpuid.size()) && !tilesize.empty())
+ {
+ fprintf(stderr, "invalid tilesize argument\n");
+@@ -671,61 +663,17 @@ int main(int argc, char** argv)
+ }
+ }
+
+- int prepadding = 0;
+-
+- if (model.find(PATHSTR("models")) != path_t::npos
+- || model.find(PATHSTR("models2")) != path_t::npos)
+- {
+- prepadding = 10;
+- }
+- else
+- {
+- fprintf(stderr, "unknown model dir type\n");
+- return -1;
+- }
++ int prepadding = 10;
+
+- // if (modelname.find(PATHSTR("realesrgan-x4plus")) != path_t::npos
+- // || modelname.find(PATHSTR("realesrnet-x4plus")) != path_t::npos
+- // || modelname.find(PATHSTR("esrgan-x4")) != path_t::npos)
+- // {}
+- // else
+- // {
+- // fprintf(stderr, "unknown model name\n");
+- // return -1;
+- // }
+
+ #if _WIN32
+- wchar_t parampath[256];
+- wchar_t modelpath[256];
+-
+- if (modelname == PATHSTR("realesr-animevideov3"))
+- {
+- swprintf(parampath, 256, L"%s/%s-x%s.param", model.c_str(), modelname.c_str(), std::to_string(scale));
+- swprintf(modelpath, 256, L"%s/%s-x%s.bin", model.c_str(), modelname.c_str(), std::to_string(scale));
+- }
+- else{
+- swprintf(parampath, 256, L"%s/%s.param", model.c_str(), modelname.c_str());
+- swprintf(modelpath, 256, L"%s/%s.bin", model.c_str(), modelname.c_str());
+- }
+-
++ path_t parampath = model + L".param";
++ path_t modelpath = model + L".bin";
+ #else
+- char parampath[256];
+- char modelpath[256];
+-
+- if (modelname == PATHSTR("realesr-animevideov3"))
+- {
+- sprintf(parampath, "%s/%s-x%s.param", model.c_str(), modelname.c_str(), std::to_string(scale).c_str());
+- sprintf(modelpath, "%s/%s-x%s.bin", model.c_str(), modelname.c_str(), std::to_string(scale).c_str());
+- }
+- else{
+- sprintf(parampath, "%s/%s.param", model.c_str(), modelname.c_str());
+- sprintf(modelpath, "%s/%s.bin", model.c_str(), modelname.c_str());
+- }
++ path_t parampath = model + ".param";
++ path_t modelpath = model + ".bin";
+ #endif
+
+- path_t paramfullpath = sanitize_filepath(parampath);
+- path_t modelfullpath = sanitize_filepath(modelpath);
+-
+ #if _WIN32
+ CoInitializeEx(NULL, COINIT_MULTITHREADED);
+ #endif
+@@ -781,17 +729,14 @@ int main(int argc, char** argv)
+ uint32_t heap_budget = ncnn::get_gpu_device(gpuid[i])->get_heap_budget();
+
+ // more fine-grained tilesize policy here
+- if (model.find(PATHSTR("models")) != path_t::npos)
+- {
+- if (heap_budget > 1900)
+- tilesize[i] = 200;
+- else if (heap_budget > 550)
+- tilesize[i] = 100;
+- else if (heap_budget > 190)
+- tilesize[i] = 64;
+- else
+- tilesize[i] = 32;
+- }
++ if (heap_budget > 1900)
++ tilesize[i] = 200;
++ else if (heap_budget > 550)
++ tilesize[i] = 100;
++ else if (heap_budget > 190)
++ tilesize[i] = 64;
++ else
++ tilesize[i] = 32;
+ }
+
+ {
+@@ -801,7 +746,7 @@ int main(int argc, char** argv)
+ {
+ realesrgan[i] = new RealESRGAN(gpuid[i], tta_mode);
+
+- realesrgan[i]->load(paramfullpath, modelfullpath);
++ realesrgan[i]->load(parampath, modelpath);
+
+ realesrgan[i]->scale = scale;
+ realesrgan[i]->tilesize = tilesize[i];
--
2.38.1
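
For what it's worth, after this patch the -m flag names the model files
directly (FOO resolves to FOO.param and FOO.bin) instead of a models
directory plus -n, and the -s flag is dropped. A hypothetical invocation,
with purely illustrative paths, could look like this:

import subprocess

# -m points at the basename of the .param/.bin pair; no -n or -s needed.
subprocess.run(
    ["realesrgan-ncnn-vulkan",
     "-i", "input.png",
     "-o", "upscaled.png",
     "-m", "/path/to/models/realesr-animevideov3-x4"],
    check=True)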
Liliana Marie Prikler wrote on 20 Nov 2022 10:10
[PATCH 3/8] gnu: Add python-addict.
(address . 59607@debbugs.gnu.org)
06c7b8e263ad8d7191cadc00d1704e07ff7f5664.camel@gmail.com
* gnu/packages/python-xyz.scm (python-addict): New variable.
---
gnu/packages/python-xyz.scm | 18 ++++++++++++++++++
1 file changed, 18 insertions(+)

diff --git a/gnu/packages/python-xyz.scm b/gnu/packages/python-xyz.scm
index 7fefbc5bff..d0591b4614 100644
--- a/gnu/packages/python-xyz.scm
+++ b/gnu/packages/python-xyz.scm
@@ -581,6 +581,24 @@ (define-public python-dotmap
dictionary, can be convert to a dictionary, and is ordered by insertion.")
(license license:expat)))
+(define-public python-addict
+ (package
+ (name "python-addict")
+ (version "2.4.0")
+ (source (origin
+ (method url-fetch)
+ (uri (pypi-uri "addict" version))
+ (sha256
+ (base32
+ "1574sicy5ydx9pvva3lbx8qp56z9jbdwbj26aqgjhyh61q723cmk"))))
+ (build-system python-build-system)
+ (home-page "https://github.com/mewwts/addict")
+ (synopsis "Dictionaries with items as attributes")
+ (description
+ "This package provides a dictionary type, whose values are both gettable
+and settable using attributes, in addition to standard item-syntax.")
+ (license license:expat)))
+
(define-public python-twodict
(package
(name "python-twodict")
--
2.38.1
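
As a quick illustration of the data structure python-addict provides (a
minimal sketch based on the upstream API; not part of the patch):

from addict import Dict

conf = Dict()
conf.model.scale = 4               # attribute access auto-creates nested Dicts
conf["model"]["name"] = "x4plus"   # plain item syntax works on the same data
print(conf.model.name)             # -> x4plus
print(conf.to_dict())              # -> {'model': {'scale': 4, 'name': 'x4plus'}}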
Liliana Marie Prikler wrote on 25 Nov 2022 21:37
[PATCH 5/8] gnu: Add python-filterpy.
(address . 59607@debbugs.gnu.org)
7657246e20fc5dede55b714864d8eae90ec73dfd.camel@gmail.com
* gnu/packages/python-science.scm (python-filterpy): New variable.
---
gnu/packages/python-science.scm | 50 +++++++++++++++++++++++++++++++++
1 file changed, 50 insertions(+)

diff --git a/gnu/packages/python-science.scm b/gnu/packages/python-science.scm
index 52fe1460bb..d69e43be4e 100644
--- a/gnu/packages/python-science.scm
+++ b/gnu/packages/python-science.scm
@@ -634,6 +634,56 @@ (define-public python-fbpca
analysis} (PCA), SVD, and eigendecompositions via randomized methods")
(license license:bsd-3)))
+(define-public python-filterpy
+ (package
+ (name "python-filterpy")
+ (version "1.4.5")
+ (source (origin
+ (method git-fetch)
+ (uri (git-reference
+ (url "https://github.com/rlabbe/filterpy")
+ (commit version)))
+ (sha256
+ (base32
+ "1i7v7jfi0ysc2rz9fkxyrmdbh4a1ahcn6vgjajj0zs6wla3jnmxm"))))
+ (build-system python-build-system)
+ (arguments
+ (list #:phases
+ #~(modify-phases %standard-phases
+ (add-before 'check 'mark-failing-tests
+ (lambda _
+ (substitute* "filterpy/kalman/tests/test_kf.py"
+ (("from __future__ import .*" all)
+ (string-append all "\nimport pytest\n"))
+ (("def test_(noisy_1d|steadystate)" all)
+ (string-append "@pytest.mark.xfail()\n"
+ all)))
+ (substitute* "filterpy/kalman/tests/test_fm.py"
+ (("from pytest import .*" all)
+ (string-append all "\nimport pytest\n"))
+ (("def test_noisy_1d" all)
+ (string-append "@pytest.mark.xfail()\n"
+ all)))
+ (substitute* "filterpy/stats/tests/test_stats.py"
+ (("from __future__ import .*" all)
+ (string-append all "\nimport pytest\n"))
+ (("def test_mahalanobis" all)
+ (string-append "@pytest.mark.xfail()\n"
+ all)))))
+ (replace 'check
+ (lambda* (#:key tests? #:allow-other-keys)
+ (when tests?
+ (invoke "pytest" "-vv")))))))
+ (propagated-inputs (list python-numpy python-scipy))
+ (native-inputs (list python-pytest))
+ (home-page "https://filterpy.readthedocs.io/")
+ (synopsis "Kalman and Bayesian filters")
+ (description
+ "This package provides implementations of various filters, such as the
+Kalman filter, its extended and unscented variants, recursive least squares,
+and others.")
+ (license license:expat)))
+
(define-public python-geosketch
(package
(name "python-geosketch")
--
2.38.1
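
For context, here is the kind of thing python-filterpy is used for: a
minimal constant-velocity Kalman filter, loosely following the library's
documented API (all numbers are illustrative):

import numpy as np
from filterpy.kalman import KalmanFilter
from filterpy.common import Q_discrete_white_noise

# State is [position, velocity]; we only measure position.
kf = KalmanFilter(dim_x=2, dim_z=1)
kf.x = np.array([0., 0.])                 # initial state
kf.F = np.array([[1., 1.],
                 [0., 1.]])               # constant-velocity transition
kf.H = np.array([[1., 0.]])               # measurement function
kf.P *= 1000.                             # start out very uncertain
kf.R = 5.                                 # measurement noise
kf.Q = Q_discrete_white_noise(dim=2, dt=1., var=0.1)

for z in (1.0, 2.1, 2.9, 4.2):            # fake position measurements
    kf.predict()
    kf.update(z)

print(kf.x)                               # filtered [position, velocity]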
Liliana Marie Prikler wrote on 20 Nov 2022 17:25
[PATCH 4/8] gnu: Add python-basicsr.
(address . 59607@debbugs.gnu.org)
d06f0940078e449917736ca7f65e1411553ab42f.camel@gmail.com
* gnu/packages/patches/python-basicsr-fuck-nvidia.patch: New file.
* gnu/local.mk (dist_patch_DATA): Register it here.
* gnu/packages/machine-learning.scm (python-basicsr): New variable.
---
gnu/local.mk | 1 +
gnu/packages/machine-learning.scm | 66 +
.../patches/python-basicsr-fuck-nvidia.patch | 3233 +++++++++++++++++
3 files changed, 3300 insertions(+)
create mode 100644 gnu/packages/patches/python-basicsr-fuck-nvidia.patch

diff --git a/gnu/local.mk b/gnu/local.mk
index 7278c50e4f..8dd1abe07a 100644
--- a/gnu/local.mk
+++ b/gnu/local.mk
@@ -1720,6 +1720,7 @@ dist_patch_DATA = \
%D%/packages/patches/python-apsw-3.39.2.1-test-fix.patch \
%D%/packages/patches/python-aionotify-0.2.0-py3.8.patch \
%D%/packages/patches/python-argcomplete-1.11.1-fish31.patch \
+ %D%/packages/patches/python-basicsr-fuck-nvidia.patch \
%D%/packages/patches/python-cross-compile.patch \
%D%/packages/patches/python-configobj-setuptools.patch \
%D%/packages/patches/python-dateutil-pytest-compat.patch \
diff --git a/gnu/packages/machine-learning.scm b/gnu/packages/machine-learning.scm
index 0566f4bd69..a5767a2c31 100644
--- a/gnu/packages/machine-learning.scm
+++ b/gnu/packages/machine-learning.scm
@@ -750,6 +750,72 @@ (define (delete-ifdefs file)
in terms of new algorithms.")
(license license:gpl3+)))
+(define-public python-basicsr
+ (package
+ (name "python-basicsr")
+ (version "1.4.2")
+ (source (origin
+ (method git-fetch)
+ (uri
+ (git-reference
+ (url "https://github.com/XPixelGroup/BasicSR")
+ (commit (string-append "v" version))))
+ (patches
+ (search-patches
+ "python-basicsr-fuck-nvidia.patch"))
+ (modules '((guix build utils)))
+ (snippet
+ #~(begin (substitute* (find-files "." "\\.py")
+ (("\\.cuda\\(\\)") "")
+ (("pretrained=True") "weights=None"))
+ ;; Instead of images files, a custom lmdb is used
+ (delete-file-recursively "tests/data")))
+ (sha256
+ (base32
+ "0qjk1hf1qjla3f6hb8fd6dv9w3b77568z8g17mlcxl91bp031z2i"))))
+ (build-system python-build-system)
+ (arguments
+ (list
+ #:phases
+ #~(modify-phases %standard-phases
+ (add-after 'unpack 'fix-requirements
+ (lambda _
+ (substitute* "requirements.txt"
+ (("opencv-python") "") ; installed without egg-info
+ (("tb-nightly") ""))))
+ (add-before 'check 'pre-check
+ (lambda _
+ (setenv "HOME" (getcwd))
+ ;; Missing data...
+ (delete-file-recursively "tests/test_data")
+ ;; Model is fetched over the web
+ (delete-file-recursively "tests/test_models")))
+ (replace 'check
+ (lambda* (#:key tests? #:allow-other-keys)
+ (when tests?
+ (invoke "pytest" "-vv")))))))
+ (propagated-inputs (list opencv ; used via python bindings
+ python-addict
+ python-future
+ python-lmdb
+ python-numpy
+ python-pillow
+ python-pyyaml
+ python-requests
+ python-scikit-image
+ python-scipy
+ python-pytorch
+ python-torchvision
+ python-tqdm
+ python-yapf))
+ (native-inputs (list lmdb python-cython python-pytest))
+ (home-page "https://github.com/xinntao/BasicSR")
+ (synopsis "Image and Video Super-Resolution Toolbox")
+ (description "BasicSR is a pytorch-based toolbox to perform image restoration
+tasks such as super-scaling, denoising, deblurring, and removal of JPEG
+artifacts.")
+ (license license:asl2.0)))
+
(define-public ncnn
(package
(name "ncnn")
diff --git a/gnu/packages/patches/python-basicsr-fuck-nvidia.patch b/gnu/packages/patches/python-basicsr-fuck-nvidia.patch
new file mode 100644
index 0000000000..30cc1cb9ad
--- /dev/null
+++ b/gnu/packages/patches/python-basicsr-fuck-nvidia.patch
@@ -0,0 +1,3233 @@
+diff --git a/basicsr/archs/arch_util.py b/basicsr/archs/arch_util.py
+index 11b82a7..875b2b6 100644
+--- a/basicsr/archs/arch_util.py
++++ b/basicsr/archs/arch_util.py
+@@ -10,7 +10,7 @@ from torch.nn import functional as F
+ from torch.nn import init as init
+ from torch.nn.modules.batchnorm import _BatchNorm
+
+-from basicsr.ops.dcn import ModulatedDeformConvPack, modulated_deform_conv
++from basicsr.ops.dcn import ModulatedDeformConvPack
+ from basicsr.utils import get_root_logger
+
+
+@@ -228,12 +228,8 @@ class DCNv2Pack(ModulatedDeformConvPack):
+ logger = get_root_logger()
+ logger.warning(f'Offset abs mean is {offset_absmean}, larger than 50.')
+
+- if LooseVersion(torchvision.__version__) >= LooseVersion('0.9.0'):
+- return torchvision.ops.deform_conv2d(x, offset, self.weight, self.bias, self.stride, self.padding,
+- self.dilation, mask)
+- else:
+- return modulated_deform_conv(x, offset, mask, self.weight, self.bias, self.stride, self.padding,
+- self.dilation, self.groups, self.deformable_groups)
++ return torchvision.ops.deform_conv2d(x, offset, self.weight, self.bias, self.stride, self.padding,
++ self.dilation, mask)
+
+
+ def _no_grad_trunc_normal_(tensor, mean, std, a, b):
+diff --git a/basicsr/archs/basicvsrpp_arch.py b/basicsr/archs/basicvsrpp_arch.py
+index d9699cb..e726b8b 100644
+--- a/basicsr/archs/basicvsrpp_arch.py
++++ b/basicsr/archs/basicvsrpp_arch.py
+@@ -69,14 +69,6 @@ class BasicVSRPlusPlus(nn.Module):
+ self.backbone = nn.ModuleDict()
+ modules = ['backward_1', 'forward_1', 'backward_2', 'forward_2']
+ for i, module in enumerate(modules):
+- if torch.cuda.is_available():
+- self.deform_align[module] = SecondOrderDeformableAlignment(
+- 2 * mid_channels,
+- mid_channels,
+- 3,
+- padding=1,
+- deformable_groups=16,
+- max_residue_magnitude=max_residue_magnitude)
+ self.backbone[module] = ConvResidualBlocks((2 + i) * mid_channels, mid_channels, num_blocks)
+
+ # upsampling module
+diff --git a/basicsr/archs/stylegan2_arch.py b/basicsr/archs/stylegan2_arch.py
+index 9ab37f5..42cb08c 100644
+--- a/basicsr/archs/stylegan2_arch.py
++++ b/basicsr/archs/stylegan2_arch.py
+@@ -4,7 +4,6 @@ import torch
+ from torch import nn
+ from torch.nn import functional as F
+
+-from basicsr.ops.fused_act import FusedLeakyReLU, fused_leaky_relu
+ from basicsr.ops.upfirdn2d import upfirdn2d
+ from basicsr.utils.registry import ARCH_REGISTRY
+
+@@ -141,8 +140,7 @@ class EqualLinear(nn.Module):
+ bias. Default: ``True``.
+ bias_init_val (float): Bias initialized value. Default: 0.
+ lr_mul (float): Learning rate multiplier. Default: 1.
+- activation (None | str): The activation after ``linear`` operation.
+- Supported: 'fused_lrelu', None. Default: None.
++ activation (None | str): Ignored.
+ """
+
+ def __init__(self, in_channels, out_channels, bias=True, bias_init_val=0, lr_mul=1, activation=None):
+@@ -150,10 +148,7 @@ class EqualLinear(nn.Module):
+ self.in_channels = in_channels
+ self.out_channels = out_channels
+ self.lr_mul = lr_mul
+- self.activation = activation
+- if self.activation not in ['fused_lrelu', None]:
+- raise ValueError(f'Wrong activation value in EqualLinear: {activation}'
+- "Supported ones are: ['fused_lrelu', None].")
++ self.activation = None
+ self.scale = (1 / math.sqrt(in_channels)) * lr_mul
+
+ self.weight = nn.Parameter(torch.randn(out_channels, in_channels).div_(lr_mul))
+@@ -167,12 +162,7 @@ class EqualLinear(nn.Module):
+ bias = None
+ else:
+ bias = self.bias * self.lr_mul
+- if self.activation == 'fused_lrelu':
+- out = F.linear(x, self.weight * self.scale)
+- out = fused_leaky_relu(out, bias)
+- else:
+- out = F.linear(x, self.weight * self.scale, bias=bias)
+- return out
++ return F.linear(x, self.weight * self.scale, bias=bias)
+
+ def __repr__(self):
+ return (f'{self.__class__.__name__}(in_channels={self.in_channels}, '
+@@ -318,7 +308,7 @@ class StyleConv(nn.Module):
+ sample_mode=sample_mode,
+ resample_kernel=resample_kernel)
+ self.weight = nn.Parameter(torch.zeros(1)) # for noise injection
+- self.activate = FusedLeakyReLU(out_channels)
++ self.activate = ScaledLeakyReLU()
+
+ def forward(self, x, style, noise=None):
+ # modulate
+@@ -693,10 +683,7 @@ class ConvLayer(nn.Sequential):
+ and not activate))
+ # activation
+ if activate:
+- if bias:
+- layers.append(FusedLeakyReLU(out_channels))
+- else:
+- layers.append(ScaledLeakyReLU(0.2))
++ layers.append(ScaledLeakyReLU(0.2))
+
+ super(ConvLayer, self).__init__(*layers)
+
+diff --git a/basicsr/data/prefetch_dataloader.py b/basicsr/data/prefetch_dataloader.py
+index 5088425..0cf35e6 100644
+--- a/basicsr/data/prefetch_dataloader.py
++++ b/basicsr/data/prefetch_dataloader.py
+@@ -99,7 +99,7 @@ class CUDAPrefetcher():
+ self.loader = iter(loader)
+ self.opt = opt
+ self.stream = torch.cuda.Stream()
+- self.device = torch.device('cuda' if opt['num_gpu'] != 0 else 'cpu')
++ self.device = torch.device('cpu')
+ self.preload()
+
+ def preload(self):
+diff --git a/basicsr/models/base_model.py b/basicsr/models/base_model.py
+index 05c8d2e..36442a2 100644
+--- a/basicsr/models/base_model.py
++++ b/basicsr/models/base_model.py
+@@ -15,7 +15,7 @@ class BaseModel():
+
+ def __init__(self, opt):
+ self.opt = opt
+- self.device = torch.device('cuda' if opt['num_gpu'] != 0 else 'cpu')
++ self.device = torch.device('cpu')
+ self.is_train = opt['is_train']
+ self.schedulers = []
+ self.optimizers = []
+@@ -91,14 +91,7 @@ class BaseModel():
+ Args:
+ net (nn.Module)
+ """
+- net = net.to(self.device)
+- if self.opt['dist']:
+- find_unused_parameters = self.opt.get('find_unused_parameters', False)
+- net = DistributedDataParallel(
+- net, device_ids=[torch.cuda.current_device()], find_unused_parameters=find_unused_parameters)
+- elif self.opt['num_gpu'] > 1:
+- net = DataParallel(net)
+- return net
++ return net.to(self.device)
+
+ def get_optimizer(self, optim_type, params, lr, **kwargs):
+ if optim_type == 'Adam':
+diff --git a/basicsr/ops/dcn/__init__.py b/basicsr/ops/dcn/__init__.py
+index 32e3592..68033e0 100644
+--- a/basicsr/ops/dcn/__init__.py
++++ b/basicsr/ops/dcn/__init__.py
+@@ -1,7 +1,4 @@
+-from .deform_conv import (DeformConv, DeformConvPack, ModulatedDeformConv, ModulatedDeformConvPack, deform_conv,
+- modulated_deform_conv)
++from .deform_conv import (DeformConv, DeformConvPack, ModulatedDeformConv, ModulatedDeformConvPack)
+
+ __all__ = [
+- 'DeformConv', 'DeformConvPack', 'ModulatedDeformConv', 'ModulatedDeformConvPack', 'deform_conv',
+- 'modulated_deform_conv'
+-]
++ 'DeformConv', 'DeformConvPack', 'ModulatedDeformConv', 'ModulatedDeformConvPack',]
+diff --git a/basicsr/ops/dcn/deform_conv.py b/basicsr/ops/dcn/deform_conv.py
+index 6268ca8..38ced57 100644
+--- a/basicsr/ops/dcn/deform_conv.py
++++ b/basicsr/ops/dcn/deform_conv.py
+@@ -2,191 +2,9 @@ import math
+ import os
+ import torch
+ from torch import nn as nn
+-from torch.autograd import Function
+-from torch.autograd.function import once_differentiable
+ from torch.nn import functional as F
+ from torch.nn.modules.utils import _pair, _single
+
+-BASICSR_JIT = os.getenv('BASICSR_JIT')
+-if BASICSR_JIT == 'True':
+- from torch.utils.cpp_extension import load
+- module_path = os.path.dirname(__file__)
+- deform_conv_ext = load(
+- 'deform_conv',
+- sources=[
+- os.path.join(module_path, 'src', 'deform_conv_ext.cpp'),
+- os.path.join(module_path, 'src', 'deform_conv_cuda.cpp'),
+- os.path.join(module_path, 'src', 'deform_conv_cuda_kernel.cu'),
+- ],
+- )
+-else:
+- try:
+- from . import deform_conv_ext
+- except ImportError:
+- pass
+- # avoid annoying print output
+- # print(f'Cannot import deform_conv_ext. Error: {error}. You may need to: \n '
+- # '1. compile with BASICSR_EXT=True. or\n '
+- # '2. set BASICSR_JIT=True during running')
+-
+-
+-class DeformConvFunction(Function):
+-
+- @staticmethod
+- def forward(ctx,
+- input,
+- offset,
+- weight,
+- stride=1,
+- padding=0,
+- dilation=1,
+- groups=1,
+- deformable_groups=1,
+- im2col_step=64):
+- if input is not None and input.dim() != 4:
+- raise ValueError(f'Expected 4D tensor as input, got {input.dim()}D tensor instead.')
+- ctx.stride = _pair(stride)
+- ctx.padding = _pair(padding)
+- ctx.dilation = _pair(dilation)
+- ctx.groups = groups
+- ctx.deformable_groups = deformable_groups
+- ctx.im2col_step = im2col_step
+-
+- ctx.save_for_backward(input, offset, weight)
+-
+- output = input.new_empty(DeformConvFunction._output_size(input, weight, ctx.padding, ctx.dilation, ctx.stride))
+-
+- ctx.bufs_ = [input.new_empty(0), input.new_empty(0)] # columns, ones
+-
+- if not input.is_cuda:
+- raise NotImplementedError
+- else:
+- cur_im2col_step = min(ctx.im2col_step, input.shape[0])
+- assert (input.shape[0] % cur_im2col_step) == 0, 'im2col step must divide batchsize'
+- deform_conv_ext.deform_conv_forward(input, weight,
+- offset, output, ctx.bufs_[0], ctx.bufs_[1], weight.size(3),
+- weight.size(2), ctx.stride[1], ctx.stride[0], ctx.padding[1],
+- ctx.padding[0], ctx.dilation[1], ctx.dilation[0], ctx.groups,
+- ctx.deformable_groups, cur_im2col_step)
+- return output
+-
+- @staticmethod
+- @once_differentiable
+- def backward(ctx, grad_output):
+- input, offset, weight = ctx.saved_tensors
+-
+- grad_input = grad_offset = grad_weight = None
+-
+- if not grad_output.is_cuda:
+- raise NotImplementedError
+- else:
+- cur_im2col_step = min(ctx.im2col_step, input.shape[0])
+- assert (input.shape[0] % cur_im2col_step) == 0, 'im2col step must divide batchsize'
+-
+- if ctx.needs_input_grad[0] or ctx.needs_input_grad[1]:
+- grad_input = torch.zeros_like(input)
+- grad_offset = torch.zeros_like(offset)
+- deform_conv_ext.deform_conv_backward_input(input, offset, grad_output, grad_input,
+- grad_offset, weight, ctx.bufs_[0], weight.size(3),
+- weight.size(2), ctx.stride[1], ctx.stride[0], ctx.padding[1],
+- ctx.padding[0], ctx.dilation[1], ctx.dilation[0], ctx.groups,
+- ctx.deformable_groups, cur_im2col_step)
+-
+- if ctx.needs_input_grad[2]:
+- grad_weight = torch.zeros_like(weight)
+- deform_conv_ext.deform_conv_backward_parameters(input, offset, grad_output, grad_weight,
+- ctx.bufs_[0], ctx.bufs_[1], weight.size(3),
+- weight.size(2), ctx.stride[1], ctx.stride[0],
+- ctx.padding[1], ctx.padding[0], ctx.dilation[1],
+- ctx.dilation[0], ctx.groups, ctx.deformable_groups, 1,
+- cur_im2col_step)
+-
+- return (grad_input, grad_offset, grad_weight, None, None, None, None, None)
+-
+- @staticmethod
+- def _output_size(input, weight, padding, dilation, stride):
+- channels = weight.size(0)
+- output_size = (input.size(0), channels)
+- for d in range(input.dim() - 2):
+- in_size = input.size(d + 2)
+- pad = padding[d]
+- kernel = dilation[d] * (weight.size(d + 2) - 1) + 1
+- stride_ = stride[d]
+- output_size += ((in_size + (2 * pad) - kernel) // stride_ + 1, )
+- if not all(map(lambda s: s > 0, output_size)):
+- raise ValueError(f'convolution input is too small (output would be {"x".join(map(str, output_size))})')
+- return output_size
+-
+-
+-class ModulatedDeformConvFunction(Function):
+-
+- @staticmethod
+- def forward(ctx,
+- input,
+- offset,
+- mask,
+- weight,
+- bias=None,
+- stride=1,
+- padding=0,
+- dilation=1,
+- groups=1,
+- deformable_groups=1):
+- ctx.stride = stride
+- ctx.padding = padding
+- ctx.dilation = dilation
+- ctx.groups = groups
+- ctx.deformable_groups = deformable_groups
+- ctx.with_bias = bias is not None
+- if not ctx.with_bias:
+- bias = input.new_empty(1) # fake tensor
+- if not input.is_cuda:
+- raise NotImplementedError
+- if weight.requires_grad or mask.requires_grad or offset.requires_grad or input.requires_grad:
+- ctx.save_for_backward(input, offset, mask, weight, bias)
+- output = input.new_empty(ModulatedDeformConvFunction._infer_shape(ctx, input, weight))
+- ctx._bufs = [input.new_empty(0), input.new_empty(0)]
+- deform_conv_ext.modulated_deform_conv_forward(input, weight, bias, ctx._bufs[0], offset, mask, output,
+- ctx._bufs[1], weight.shape[2], weight.shape[3], ctx.stride,
+- ctx.stride, ctx.padding, ctx.padding, ctx.dilation, ctx.dilation,
+- ctx.groups, ctx.deformable_groups, ctx.with_bias)
+- return output
+-
+- @staticmethod
+- @once_differentiable
+- def backward(ctx, grad_output):
+- if not grad_output.is_cuda:
+- raise NotImplementedError
+- input, offset, mask, weight, bias = ctx.saved_tensors
+- grad_input = torch.zeros_like(input)
+- grad_offset = torch.zeros_like(offset)
+- grad_mask = torch.zeros_like(mask)
+- grad_weight = torch.zeros_like(weight)
+- grad_bias = torch.zeros_like(bias)
+-
[This message was truncated.]
Liliana Marie Prikler wrote on 25 Nov 2022 23:31
[PATCH 6/8] gnu: Add python-facexlib.
(address . 59607@debbugs.gnu.org)
237e70d8b319a981b9b992e4d7577aca8a1e988f.camel@gmail.com
* gnu/packages/machine-learning.scm (python-facexlib): New variable.
---
gnu/packages/machine-learning.scm | 43 +++++++++++++++++++++++++++++++
1 file changed, 43 insertions(+)

diff --git a/gnu/packages/machine-learning.scm b/gnu/packages/machine-learning.scm
index a5767a2c31..6116834578 100644
--- a/gnu/packages/machine-learning.scm
+++ b/gnu/packages/machine-learning.scm
@@ -816,6 +816,49 @@ (define-public python-basicsr
artifacts.")
(license license:asl2.0)))
+(define-public python-facexlib
+ (package
+ (name "python-facexlib")
+ (version "0.2.5")
+ (source (origin
+ (method url-fetch)
+ (uri (pypi-uri "facexlib" version))
+ (modules '((guix build utils)))
+ (snippet
+ #~(begin (substitute* (find-files "." "\\.py$")
+ (("\\.cuda\\(\\)") "")
+ (("'cuda' if .* else ") "")
+ (("'cuda'") "'cpu'"))))
+ (sha256
+ (base32
+ "1r378mb167k2hynrn1wsi78xbh2aw6x68i8f70nmcqsxxp20rqii"))))
+ (build-system python-build-system)
+ (arguments
+ (list #:tests? #f ; No tests
+ #:phases
+ #~(modify-phases %standard-phases
+ (add-after 'unpack 'fix-requirements
+ (lambda _
+ (substitute* "requirements.txt"
+ (("opencv-python") ""))))))) ; installed without egg-info
+ (propagated-inputs (list opencv
+ python-filterpy
+ python-numba
+ python-numpy
+ python-pillow
+ python-scipy
+ python-pytorch
+ python-torchvision
+ python-tqdm))
+ (native-inputs (list python-cython))
+ (home-page "https://github.com/xinntao/facexlib")
+ (synopsis "Basic face library")
+ (description "This package provides a collection of face-related
+functions, such as detection, alignment, recognition or tracking.")
+ (license (list license:gpl3+
+ license:asl2.0
+ license:expat))))
+
(define-public ncnn
(package
(name "ncnn")
--
2.38.1
Liliana Marie Prikler wrote on 22 Nov 2022 21:55
[PATCH 8/8] gnu: Add python-real-esrgan.
(address . 59607@debbugs.gnu.org)
54e7f9a4a00e598b67876573dd91dd49095e966e.camel@gmail.com
* gnu/packages/machine-learning.scm (python-real-esrgan): New variable.
---
gnu/packages/machine-learning.scm | 62 +++++++++++++++++++++++++++++++
1 file changed, 62 insertions(+)

diff --git a/gnu/packages/machine-learning.scm b/gnu/packages/machine-learning.scm
index 73263962f5..c49886f50d 100644
--- a/gnu/packages/machine-learning.scm
+++ b/gnu/packages/machine-learning.scm
@@ -930,6 +930,68 @@ (define-public python-gfpgan
images.")
(license license:asl2.0)))
+(define-public python-real-esrgan
+ (package
+ (name "python-real-esrgan")
+ (version "0.3.0")
+ (source (origin
+ (method git-fetch)
+ (uri (git-reference
+ (url "https://github.com/xinntao/Real-ESRGAN")
+ (commit (string-append "v" version))))
+ (file-name (git-file-name name version))
+ (modules '((guix build utils)))
+ (snippet
+ #~(begin (substitute* (find-files "." "\\.py")
+ (("\\.cuda\\(\\)") ""))))
+ (sha256
+ (base32
+ "19qp8af1m50zawcnb31hq41nicad5k4hz7ffwizana17w069ilx5"))))
+ (build-system python-build-system)
+ (arguments
+ (list
+ #:phases
+ #~(modify-phases %standard-phases
+ (add-after 'unpack 'fix-api
+ (lambda _
+ (substitute* (find-files "." "\\.py$")
+ (("from basicsr\\.losses\\.losses import .*")
+ "import basicsr.losses\n")
+ (("GANLoss")
+ "basicsr.losses.gan_loss.GANLoss")
+ (("(L1|Perceptual)Loss" loss)
+ (string-append "basicsr.losses.basic_loss." loss)))))
+ (add-after 'unpack 'fix-requirements
+ (lambda _
+ (substitute* "requirements.txt"
+ (("opencv-python") "")))) ; installed without egg-info
+ (add-before 'check 'pre-check
+ (lambda _
+ (setenv "HOME" (getcwd))
+ ;; Test requires pretrained model.
+ (delete-file "tests/test_utils.py")))
+ (replace 'check
+ (lambda* (#:key tests? #:allow-other-keys)
+ (when tests?
+ (invoke "pytest" "-vv")))))))
+ (propagated-inputs (list opencv
+ python-basicsr
+ python-facexlib
+ python-gfpgan
+ python-numpy
+ python-pillow
+ python-pytorch
+ python-torchvision
+ python-tqdm))
+ (native-inputs (list python-cython python-pytest))
+ (home-page "https://github.com/xinntao/Real-ESRGAN")
+ (synopsis "Restore low-resolution images")
+ (description "Real-ESRGAN is a @acronym{GAN, Generative Adversarial Network}
+aiming to restore low-resolution images. The techniques used are described in
+the paper 'Real-ESRGAN: Training Real-World Blind Super-Resolution with Pure
+Synthetic Data' by Xintao Wang, Liangbin Xie, Chao Dong, and Ying Shan.")
+ (license license:bsd-3)))
+
(define-public ncnn
(package
(name "ncnn")
--
2.38.1
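
For reference, upstream documents roughly the following usage for the
Python package. This is only a sketch: the pretrained weights file is not
shipped by the Guix package and has to be obtained separately, and the
paths are illustrative.

import cv2
from basicsr.archs.rrdbnet_arch import RRDBNet
from realesrgan import RealESRGANer

# RRDBNet hyperparameters matching the x4plus weights (per upstream docs).
model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23,
                num_grow_ch=32, scale=4)
upsampler = RealESRGANer(scale=4,
                         model_path="RealESRGAN_x4plus.pth",  # not shipped
                         model=model, tile=0, tile_pad=10, pre_pad=0,
                         half=False)                          # CPU-friendly

img = cv2.imread("input.png", cv2.IMREAD_UNCHANGED)
output, _ = upsampler.enhance(img, outscale=4)
cv2.imwrite("upscaled.png", output)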
Liliana Marie Prikler wrote on 26 Nov 2022 11:46
[PATCH 7/8] gnu: Add python-gfpgan.
(address . 59607@debbugs.gnu.org)
9f038ba9a340cb10fea00673080c5d67d0a71cb9.camel@gmail.com
* gnu/packages/patches/python-gfpgan-unfuse-leaky-relu.patch: New file.
* gnu/local.mk (dist_patch_DATA): Register it here.
* gnu/packages/machine-learning.scm (python-gfpgan): New variable.
---
gnu/local.mk | 1 +
gnu/packages/machine-learning.scm | 71 +++++++++++++++++++
.../python-gfpgan-unfuse-leaky-relu.patch | 57 +++++++++++++++
3 files changed, 129 insertions(+)
create mode 100644 gnu/packages/patches/python-gfpgan-unfuse-leaky-relu.patch

diff --git a/gnu/local.mk b/gnu/local.mk
index 8dd1abe07a..9312bec6d4 100644
--- a/gnu/local.mk
+++ b/gnu/local.mk
@@ -1729,6 +1729,7 @@ dist_patch_DATA = \
%D%/packages/patches/python-execnet-read-only-fix.patch \
%D%/packages/patches/python-fixtures-remove-monkeypatch-test.patch \
%D%/packages/patches/python-flask-restful-werkzeug-compat.patch \
+ %D%/packages/patches/python-gfpgan-unfuse-leaky-relu.patch \
%D%/packages/patches/python-ipython-documentation-chars.patch \
%D%/packages/patches/python-ipython-documentation-repro.patch \
%D%/packages/patches/python-keras-integration-test.patch \
diff --git a/gnu/packages/machine-learning.scm b/gnu/packages/machine-learning.scm
index 6116834578..73263962f5 100644
--- a/gnu/packages/machine-learning.scm
+++ b/gnu/packages/machine-learning.scm
@@ -859,6 +859,77 @@ (define-public python-facexlib
license:asl2.0
license:expat))))
+(define-public python-gfpgan
+ (package
+ (name "python-gfpgan")
+ (version "1.3.8")
+ (source (origin
+ (method git-fetch)
+ (uri (git-reference
+ (url "https://github.com/TencentARC/GFPGAN")
+ (commit (string-append "v" version))))
+ (file-name (git-file-name name version))
+ (patches (search-patches
+ "python-gfpgan-unfuse-leaky-relu.patch"))
+ (modules '((guix build utils)))
+ (snippet
+ #~(begin (substitute* (find-files "." "\\.py$")
+ (("\\.cuda\\(\\)") "")
+ (("torch\\.cuda\\.is_available\\(\\)") "False")
+ (("'cuda' if False else ") "")
+ (("'cuda'") "'cpu'"))))
+ (sha256
+ (base32
+ "1s2wn3r4h1if35yvacaihk2r9fswdh16nxgh229g07p2562pgcky"))))
+ (build-system python-build-system)
+ (arguments
+ (list
+ #:phases
+ #~(modify-phases %standard-phases
+ (add-after 'unpack 'fix-api
+ (lambda _
+ (substitute* (find-files "." "\\.py$")
+ (("from basicsr\\.losses\\.losses import .*")
+ "import basicsr.losses\n")
+ (("GANLoss")
+ "basicsr.losses.gan_loss.GANLoss")
+ (("(L1|Perceptual)Loss" loss)
+ (string-append "basicsr.losses.basic_loss." loss)))))
+ (add-after 'unpack 'fix-requirements
+ (lambda _
+ (substitute* "requirements.txt"
+ (("opencv-python") "") ; installed without egg-info
+ (("tb-nightly") ""))))
+ (add-before 'check 'delete-failing-tests
+ (lambda _
+ ;; XXX: Possibly genuine errors
+ (delete-file "tests/test_ffhq_degradation_dataset.py")
+ (delete-file "tests/test_gfpgan_model.py")
+ ;; Test requires pretrained model.
+ (delete-file "tests/test_utils.py")))
+ (replace 'check
+ (lambda* (#:key tests? #:allow-other-keys)
+ (when tests?
+ (invoke "pytest" "-vv")))))))
+ (propagated-inputs (list opencv
+ python-basicsr
+ python-facexlib
+ python-lmdb
+ python-numpy
+ python-pyyaml
+ python-scipy
+ python-pytorch
+ python-torchvision
+ python-tqdm
+ python-yapf))
+ (native-inputs (list python-cython python-pytest))
+ (home-page "https://github.com/TencentARC/GFPGAN")
+ (synopsis "Restore images of faces")
+ (description "GFPGAN uses generative adversarial networks to remove
+degradations, such as blur, noise, or compression artifacts, from facial
+images.")
+ (license license:asl2.0)))
+
(define-public ncnn
(package
(name "ncnn")
diff --git a/gnu/packages/patches/python-gfpgan-unfuse-leaky-relu.patch b/gnu/packages/patches/python-gfpgan-unfuse-leaky-relu.patch
new file mode 100644
index 0000000000..cd09312676
--- /dev/null
+++ b/gnu/packages/patches/python-gfpgan-unfuse-leaky-relu.patch
@@ -0,0 +1,57 @@
+diff --git a/gfpgan/archs/gfpganv1_arch.py b/gfpgan/archs/gfpganv1_arch.py
+index eaf3162..34ae5a2 100644
+--- a/gfpgan/archs/gfpganv1_arch.py
++++ b/gfpgan/archs/gfpganv1_arch.py
+@@ -3,7 +3,6 @@ import random
+ import torch
+ from basicsr.archs.stylegan2_arch import (ConvLayer, EqualConv2d, EqualLinear, ResBlock, ScaledLeakyReLU,
+ StyleGAN2Generator)
+-from basicsr.ops.fused_act import FusedLeakyReLU
+ from basicsr.utils.registry import ARCH_REGISTRY
+ from torch import nn
+ from torch.nn import functional as F
+@@ -170,10 +169,7 @@ class ConvUpLayer(nn.Module):
+
+ # activation
+ if activate:
+- if bias:
+- self.activation = FusedLeakyReLU(out_channels)
+- else:
+- self.activation = ScaledLeakyReLU(0.2)
++ self.activation = ScaledLeakyReLU(0.2)
+ else:
+ self.activation = None
+
+diff --git a/gfpgan/archs/stylegan2_bilinear_arch.py b/gfpgan/archs/stylegan2_bilinear_arch.py
+index 1342ee3..5cffb44 100644
+--- a/gfpgan/archs/stylegan2_bilinear_arch.py
++++ b/gfpgan/archs/stylegan2_bilinear_arch.py
+@@ -1,7 +1,6 @@
+ import math
+ import random
+ import torch
+-from basicsr.ops.fused_act import FusedLeakyReLU, fused_leaky_relu
+ from basicsr.utils.registry import ARCH_REGISTRY
+ from torch import nn
+ from torch.nn import functional as F
+@@ -190,7 +189,7 @@ class StyleConv(nn.Module):
+ sample_mode=sample_mode,
+ interpolation_mode=interpolation_mode)
+ self.weight = nn.Parameter(torch.zeros(1)) # for noise injection
+- self.activate = FusedLeakyReLU(out_channels)
++ self.activate = ScaledLeakyReLU()
+
+ def forward(self, x, style, noise=None):
+ # modulate
+@@ -568,10 +567,7 @@ class ConvLayer(nn.Sequential):
+ and not activate))
+ # activation
+ if activate:
+- if bias:
+- layers.append(FusedLeakyReLU(out_channels))
+- else:
+- layers.append(ScaledLeakyReLU(0.2))
++ layers.append(ScaledLeakyReLU(0.2))
+
+ super(ConvLayer, self).__init__(*layers)
+
--
2.38.1
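
For reference, a sketch of how GFPGAN is typically driven from Python,
loosely following upstream's inference script; the pretrained weights file
is not shipped by this package and the paths are illustrative:

import cv2
from gfpgan import GFPGANer

restorer = GFPGANer(model_path="GFPGANv1.3.pth",  # weights not shipped
                    upscale=2, arch="clean", channel_multiplier=2,
                    bg_upsampler=None)

img = cv2.imread("face.png", cv2.IMREAD_COLOR)
cropped_faces, restored_faces, restored_img = restorer.enhance(
    img, has_aligned=False, only_center_face=False, paste_back=True)
cv2.imwrite("face_restored.png", restored_img)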