Quickstart
Get the benchmark running in a few simple steps. This guide assumes you have access to a SLURM cluster.
Environment Setup
First, connect to your SLURM cluster. This benchmark is designed to run on high-performance computing (HPC) clusters such as Jean-Zay (IDRIS, France) or similar SLURM-based systems.
Load required modules (example for Jean-Zay):
module load pytorch-gpu/py3/2.7.0
This loads PyTorch with GPU support and Python 3. Check your cluster’s documentation for the equivalent module names.
Installation
Clone the benchmark repository:
git clone https://github.com/bmalezieux/benchmark_invprob_inference.git
cd benchmark_invprob_inference
Install BenchOpt and the benchmark package:
pip install benchopt
pip install .
Running the Benchmark
Run the benchmark with this command from the project root:

benchopt run . \
  --parallel-config ./configs/config_parallel.yml \
  --config ./configs/highres_imaging.yml
What each argument does:
--parallel-config — SLURM execution settings (number of GPUs per job, CPU cores, walltime)
--config — Experiment definition (dataset, solvers, image sizes, noise levels, parameters)
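For orientation only, a parallel config might look roughly like the sketch below. The exact schema and key names are defined by BenchOpt and your SLURM setup, so treat every field here as an assumption and rely on the Configuration Guide and the files shipped in ./configs for the real values.

```yaml
# Hypothetical sketch — actual key names come from BenchOpt's
# parallel-run schema and your cluster's SLURM configuration.
backend: submitit
slurm_time: "02:00:00"    # walltime per job
slurm_gres: "gpu:1"       # GPUs per job
slurm_cpus_per_task: 10   # CPU cores per job
```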
See Configuration Guide for details on customizing configurations.
What happens during execution:
Configuration parsing — BenchOpt reads both configs and generates a grid of experiments
Job submission — Each job executes one complete reconstruction pipeline: a solver (PnP) running on a specific dataset and parameter combination
Parallel execution — Jobs run concurrently across the cluster, and each job can use multiple GPUs if specified in the SLURM config
Results collection — Convergence curves (PSNR), runtime, and memory usage are saved for each job
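The grid expansion in the first step is essentially a Cartesian product over the values listed in the experiment config. The sketch below illustrates the idea with hypothetical parameter names and values, not ones read from the real config files:

```python
from itertools import product

# Illustrative parameter lists; the real ones come from the experiment config.
datasets = ["highres_imaging"]
solvers = ["PnP"]
noise_levels = [0.01, 0.05]
image_sizes = [256, 512]

# One job per combination of dataset, solver, noise level, and image size.
jobs = [
    {"dataset": d, "solver": s, "noise": n, "size": sz}
    for d, s, n, sz in product(datasets, solvers, noise_levels, image_sizes)
]
print(len(jobs))  # 4 jobs: 1 dataset x 1 solver x 2 noise levels x 2 sizes
```

Each dictionary corresponds to one SLURM job running a complete reconstruction pipeline.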
Viewing Results
After the benchmark completes, open the HTML report:
outputs/benchmark_invprob_inference.html
The report includes:
Runtime comparisons across solvers and configurations
Convergence curves (PSNR vs iterations)
Memory and computational resource usage
Interactive plots for detailed exploration
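The convergence curves in the report are measured in PSNR (peak signal-to-noise ratio). As a quick reference, here is the standard formula as a minimal NumPy sketch; this is a generic definition, not the benchmark's internal implementation:

```python
import numpy as np

def psnr(reference, estimate, data_range=1.0):
    """Peak signal-to-noise ratio in dB; higher means a closer reconstruction."""
    mse = np.mean((np.asarray(reference) - np.asarray(estimate)) ** 2)
    return 10.0 * np.log10(data_range**2 / mse)

# Example: a uniform error of 0.1 on a [0, 1] image gives MSE = 0.01, i.e. 20 dB.
clean = np.zeros((8, 8))
noisy = clean + 0.1
print(round(psnr(clean, noisy), 1))  # 20.0
```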