PLAN-Lab/RewardFlow
$\color{orange}{\textbf{{[CVPR 2026]}}}$ RewardFlow: Generate Images by Optimizing What You Reward

RewardFlow is an inversion-free framework that steers pretrained diffusion and flow-matching models at inference time using multi-reward Langevin dynamics. It combines differentiable rewards for semantic alignment, perceptual fidelity, localized grounding, object consistency, and human preference. RewardFlow achieves state-of-the-art zero-shot fidelity and alignment without fine-tuning.
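The guidance idea can be sketched abstractly: at each step, the sample is nudged along the gradient of a weighted sum of reward functions, with injected noise, as in Langevin dynamics. The toy 1-D sketch below is illustrative only; the reward names, weights, and step sizes are hypothetical and do not reflect the repository's actual API.

```python
import math
import random

# Toy differentiable rewards on a scalar "latent" x (stand-ins for the
# semantic-alignment / perceptual-fidelity rewards in the paper).
rewards = {
    "alignment": lambda x: -(x - 2.0) ** 2,   # peaked at x = 2
    "fidelity":  lambda x: -(x - 1.0) ** 2,   # peaked at x = 1
}
weights = {"alignment": 0.7, "fidelity": 0.3}

def grad(f, x, eps=1e-4):
    """Finite-difference gradient of a scalar reward."""
    return (f(x + eps) - f(x - eps)) / (2 * eps)

def langevin_step(x, step=0.05, temp=0.01):
    """One multi-reward Langevin update: ascend the weighted reward
    sum and inject Gaussian noise scaled by the temperature."""
    g = sum(w * grad(rewards[k], x) for k, w in weights.items())
    return x + step * g + math.sqrt(2 * step * temp) * random.gauss(0, 1)

random.seed(0)
x = 0.0
for _ in range(500):
    x = langevin_step(x)
# x drifts toward the weighted optimum of the combined reward (near 1.7)
```

In the actual method these updates steer high-dimensional diffusion/flow latents at inference time, with the gradients supplied by differentiable reward models rather than finite differences.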

RewardFlow Teaser

Overview

This repository contains the RewardFlow code and scripts. Key points:

  • Single-image inference runs out of the box (test_rewardflow.py).
  • RewardFlow targets faithful image generation. Any image generator carries a probabilistic inductive bias; RewardFlow mitigates the resulting misalignment by enforcing multi-reward alignment between text and image. This may reduce FID slightly, but prompt-aware consistency is at its best.
  • All code was tested on an A100 80GB. There is a known SDP-Numel error on Ada and Litz architectures.
  • Dynamic execution is available: only the reward functions that fit in memory are loaded, avoiding OOM issues.
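The dynamic-execution policy above can be sketched as a greedy, memory-aware loader. The reward names, per-model sizes, and budget below are hypothetical placeholders, not the repository's actual configuration:

```python
# Approximate GPU-memory footprint of each reward model, in GB (hypothetical).
REWARD_SIZES = {
    "clip_alignment": 4.0,
    "lpips_fidelity": 2.0,
    "grounding": 6.0,
    "preference": 8.0,
}

def select_rewards(free_gb, priority):
    """Greedily load reward models in priority order, skipping any
    that would exceed the remaining memory budget."""
    loaded, remaining = [], free_gb
    for name in priority:
        size = REWARD_SIZES[name]
        if size <= remaining:
            loaded.append(name)
            remaining -= size
    return loaded

# With ~10 GB free, the larger grounding/preference rewards are skipped.
chosen = select_rewards(
    10.0, ["clip_alignment", "lpips_fidelity", "grounding", "preference"]
)
```

A priority ordering like this lets the most important rewards survive on smaller GPUs while the rest are dropped instead of triggering an OOM.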

Repository Layout

  • test_rewardflow.py: quick single-image RewardFlow inference test.
  • download.py: helper script to download model files from Hugging Face.

End-to-End Setup and Run

Run all commands from the repository root (adjust the path to your clone location):

cd /data/home/onkar/offical_code/RewardFlow

1) Create Environment

conda create -n rewardflow python=3.10 -y
conda activate rewardflow
pip install --upgrade pip

Install PyTorch for your CUDA version (example for CUDA 12.4):

pip install torch torchvision --index-url https://download.pytorch.org/whl/cu124

Install RewardFlow / diffusers code and runtime dependencies:

pip install -e ".[torch]"
pip install torchmetrics transformers bitsandbytes sentencepiece opencv-python timm pillow

Notes:

  • eval.py uses torchmetrics, transformers, and a DINO model loaded via torch.hub.
  • First metric run may download model artifacts (internet required).

2) Download RewardFlow Weights

export HF_TOKEN=hf_your_token_here   # only needed for private access
python download.py \
  --repo-id onkarsus13/RewardFlow \
  --local-dir /data/onkar/models/RewardFlow

3) Run Quick Inference Test (test_rewardflow.py)

test_rewardflow.py has hardcoded paths. Update these two values first:

  • model_dir (set to your downloaded weights path, e.g. /data/onkar/models/RewardFlow)
  • input image path in Image.open(...) (point to a real image under annotation_images)
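The two edits might look like the following. The weights path matches the download example above; the image filename is a hypothetical placeholder:

```python
# In test_rewardflow.py, point model_dir at the downloaded weights:
model_dir = "/data/onkar/models/RewardFlow"

# ...and point the Image.open(...) call at a real file, e.g.:
# image = Image.open("annotation_images/your_image.jpg")
```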

Then run:

python test_rewardflow.py

Contact

Please contact onkarsus13@gmail.com if you face any challenges running the code.

Citation

⭐ If you find this work useful, please cite our paper:

@inproceedings{rewardflow2026,
  title     = {RewardFlow: Generate Images by Optimizing What You Reward},
  author    = {Susladkar, Onkar Kishor and Jang, Dong-Hwan and Prakash, Tushar and Juvekar, Adheesh Sunil and Shah, Vedant and Barik, Ayush and Bashir, Nabeel and Wahed, Muntasir and Shrirao, Ritish and Lourentzou, Ismini},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2026}
}
