RewardFlow is an inversion-free framework that steers pretrained diffusion and flow-matching models at inference time using multi-reward Langevin dynamics. It combines differentiable rewards for semantic alignment, perceptual fidelity, localized grounding, object consistency, and human preference. RewardFlow achieves state-of-the-art zero-shot fidelity and alignment without fine-tuning.
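At a high level, the reward-guided update resembles a Langevin step that follows the model's score plus a weighted sum of reward gradients. The sketch below is illustrative only — the function names, weights, and toy rewards are made up and do not reflect the repository's actual API:

```python
import numpy as np

def reward_guided_langevin_step(x, score_fn, reward_grads, weights, step=0.01, rng=None):
    """One illustrative multi-reward Langevin update: move along the model
    score plus weighted reward gradients, then inject Gaussian noise.
    (Hypothetical sketch; the real RewardFlow update lives in this repo.)"""
    rng = rng or np.random.default_rng(0)
    drift = score_fn(x)
    for grad_fn, w in zip(reward_grads, weights):
        drift = drift + w * grad_fn(x)  # e.g. a semantic-alignment or preference reward
    noise = np.sqrt(2.0 * step) * rng.standard_normal(x.shape)
    return x + step * drift + noise

# Toy usage: a quadratic "score" pulling x toward 0, one reward pulling toward 1.
x = np.ones((4, 4))
x_next = reward_guided_langevin_step(
    x,
    score_fn=lambda z: -z,
    reward_grads=[lambda z: (1.0 - z)],
    weights=[0.5],
)
```

Because all rewards are differentiable, their gradients can simply be summed into the drift term, which is what lets several rewards (semantic, perceptual, grounding, consistency, preference) steer the same sampling trajectory.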
This repository contains RewardFlow code and scripts for:
- Running single-image inference (`test_rewardflow.py`); the code runs smoothly for a single image.
- Faithful image generation: any image-generator model carries a probabilistic inductive bias, and RewardFlow mitigates it by enforcing multi-reward alignment between the text and the image. This may cost a little in FID, but prompt-aware consistency is at its best.
- All code was tested on an A100 80GB. There is an SDP-Numel error on Ada and Litz architectures.
- Dynamic execution: only the reward functions that fit in memory are loaded, to avoid OOM issues.
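The dynamic-execution idea can be sketched as a simple memory-budget check: estimate each reward model's footprint and load only those that fit. The names and sizes below are made up for illustration; real code would query the actual free GPU memory (e.g. via `torch.cuda.mem_get_info()`):

```python
def select_rewards(reward_sizes_gb, free_gb):
    """Greedily pick reward models that fit in the remaining memory budget,
    smallest first. Illustrative sketch only."""
    loaded, remaining = [], free_gb
    for name, size in sorted(reward_sizes_gb.items(), key=lambda kv: kv[1]):
        if size <= remaining:
            loaded.append(name)
            remaining -= size
    return loaded

# Hypothetical footprints (GB) for the five reward families.
sizes = {"semantic": 4.0, "perceptual": 2.0, "grounding": 6.0,
         "consistency": 3.0, "preference": 8.0}
active = select_rewards(sizes, free_gb=10.0)  # e.g. on a smaller GPU
```

On a GPU with less headroom than an A100 80GB, this kind of policy simply drops the largest rewards rather than crashing with an OOM.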
- `test_rewardflow.py`: quick single-image RewardFlow inference test.
- `download.py`: helper script to download model files from Hugging Face.
Run all commands from the repo root:

```bash
cd /data/home/onkar/offical_code/RewardFlow
conda create -n rewardflow python=3.10 -y
conda activate rewardflow
pip install --upgrade pip
```

Install PyTorch for your CUDA version (example for CUDA 12.4):

```bash
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu124
```

Install RewardFlow / diffusers code and runtime dependencies:

```bash
pip install -e ".[torch]"
pip install torchmetrics transformers bitsandbytes sentencepiece opencv-python timm pillow
```

Notes:
- `eval.py` uses `torchmetrics`, `transformers`, and a DINO model loaded via `torch.hub`.
- The first metric run may download model artifacts (internet required).
```bash
export HF_TOKEN=hf_your_token_here  # only needed for private access
python download.py \
    --repo-id onkarsus13/RewardFlow \
    --local-dir /data/onkar/models/RewardFlow
```

`test_rewardflow.py` has hardcoded paths. Update these two values first:
- `model_dir` (set to your downloaded weights path, e.g. `/data/onkar/models/RewardFlow`)
- the input image path in `Image.open(...)` (point to a real image under `annotation_images`)
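For reference, the two edits look roughly like this (the image filename below is a hypothetical example — point it at any real file in your `annotation_images` directory):

```python
# Hypothetical excerpt mirroring the two hardcoded values in test_rewardflow.py.
model_dir = "/data/onkar/models/RewardFlow"   # your downloaded weights path
image_path = "annotation_images/example.jpg"  # hypothetical filename; use a real image
# later in the script the image is opened, e.g.:
# image = Image.open(image_path)
```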
Then run:

```bash
python test_rewardflow.py
```

Please contact onkarsus13@gmail.com if you face any issues running the code.
⭐ If you find this work useful, please cite our paper
```bibtex
@inproceedings{rewardflow2026,
  title     = {RewardFlow: Generate Images by Optimizing What You Reward},
  author    = {Susladkar, Onkar Kishor and Jang, Dong-Hwan and Prakash, Tushar and Juvekar, Adheesh Sunil and Shah, Vedant and Barik, Ayush and Bashir, Nabeel and Wahed, Muntasir and Shrirao, Ritish and Lourentzou, Ismini},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2026}
}
```