Based on the original GFPGAN, built with gradio, tailored for my use case.
- Simple to use, one-click inference
- Can take advantage of an AMD GPU and release it after each run to save power
- Allows retrieval of previously processed images in case of connection failure
- Easy deployment and replication through Docker image
- Now supports GPEN
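The GPU release behaviour can be observed from the host with ROCm's `rocm-smi` utility; a minimal sketch, assuming the ROCm tools are installed (they are not part of this project):

```sh
# Watch GPU power draw and VRAM usage fall back to idle once a run finishes.
# rocm-smi ships with the ROCm utilities; output columns vary by GPU.
watch -n 1 rocm-smi
```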
Web UI screenshot
- Rename the corresponding `pyproject.*.toml` file of your target runtime to `pyproject.toml`
- Install dependencies with Poetry: `poetry install --no-root`
- Run with `scripts/run_cpu.sh`, or `scripts/run_rocm.sh` if an AMD GPU is available.
Note that running on AMD GPUs requires the ROCm packages; installation instructions can be found in the ollama development docs.
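To confirm that the ROCm stack and the ROCm build of PyTorch are working before starting the app, a quick sanity check can look like the following (a sketch; it assumes ROCm's `rocminfo` utility and a Poetry-managed environment):

```sh
# List the GPUs ROCm can see (rocminfo is part of the ROCm utilities).
rocminfo | grep -i "marketing name"

# The ROCm build of PyTorch exposes the AMD GPU through the cuda API;
# this should print True on a working setup.
poetry run python -c "import torch; print(torch.cuda.is_available())"
```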
The Poetry package manager is required to generate a `requirements.txt` file for the Docker build.
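The export itself is handled by the build scripts; for reference, the plugin's export command looks roughly like this (the flags are illustrative and may differ from what the scripts actually use):

```sh
# Presumed shape of the export step run inside the build scripts.
poetry export -f requirements.txt --output requirements.txt --without-hashes
```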
- Install the export plugin: `poetry self add poetry-plugin-export`
- Lock dependencies: `poetry lock`
- Build with ROCm: `./scripts/build_docker_rocm.sh`
- Or build CPU-only: `./scripts/build_docker_cpu.sh`
- Download all models to the `./weights` directory: `./scripts/download_models.sh`
- Rename the corresponding `docker-compose.*.yaml` file of your target runtime to `docker-compose.yaml`
- Start the container: `docker compose up -d`
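For reference, the ROCm image needs the AMD GPU devices passed through to the container, which the ROCm compose file is expected to declare. A hypothetical `docker run` equivalent (image name, host port, and volume path are placeholders, not this project's actual values):

```sh
# Hypothetical docker run equivalent of the ROCm compose service.
# /dev/kfd and /dev/dri expose the AMD GPU; the video group is commonly
# required for GPU access. Image name, port, and volume path are placeholders.
docker run -d \
  --device=/dev/kfd --device=/dev/dri \
  --group-add video \
  -v "$(pwd)/weights:/app/weights" \
  -p 7860:7860 \
  gfpgan-webui:rocm
```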
Note that the ROCm image is massive: almost 20 GB compressed and about 60 GB stored locally. The CPU-only image is available on Docker Hub, but the ROCm version has to be built locally.