CompVis Group @ LMU Munich
This repository contains the official implementation of the paper "TREAD: Token Routing for Efficient Architecture-agnostic Diffusion Training".
We propose TREAD, a new method that increases the efficiency of diffusion training by improving both iteration speed and performance at the same time. To this end, we use uni-directional token transportation to modulate the information flow in the network.
In order to train a diffusion model, we offer a minimalistic training script in `train.py`. In its simplest form, it can be started using:

```bash
accelerate launch train.py model=tread
```
or
```bash
accelerate launch train.py model=dit
```
with `configs/config.yaml` holding all the relevant information and settings for the actual training run. Please adjust this as needed before training.
Note: We expect precomputed latents in this version.
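For reference, latents can be precomputed with an off-the-shelf VAE. The sketch below is illustrative only and is not the repository's preprocessing pipeline; the checkpoint name, the 256x256 resolution, and the standard SD-VAE scaling factor of 0.18215 are assumptions.

```python
# Hypothetical latent precomputation sketch (not the repo's own script).
import torch
from diffusers import AutoencoderKL
from torchvision import transforms
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-ema").eval().to(device)

to_tensor = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(256),
    transforms.ToTensor(),
    transforms.Normalize([0.5], [0.5]),  # map pixels to [-1, 1]
])

@torch.no_grad()
def encode(path: str) -> torch.Tensor:
    x = to_tensor(Image.open(path).convert("RGB")).unsqueeze(0).to(device)
    # 0.18215 is the standard SD-VAE scaling factor (assumed here)
    return vae.encode(x).latent_dist.sample() * 0.18215
```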
Under `model`, one can choose between `dit` and `tread`, the two preconfigured variants: the former is the standard DiT, while the latter is the same backbone supported by TREAD. How these changes are implemented can be seen in `dit.py` and `routing_module.py`.
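Conceptually, the routing selects a random subset of tokens, sends only that subset through a span of blocks, and scatters the results back into the full sequence afterwards. Below is a minimal sketch of that idea, assuming a ViT-style `(B, N, D)` token layout; names and details are illustrative and differ from `routing_module.py`.

```python
# Minimal token-routing sketch (illustrative, not the repo's implementation).
import torch

def route_tokens(x: torch.Tensor, blocks, rate: float = 0.5) -> torch.Tensor:
    """Process only a (1 - rate) subset of tokens through `blocks`,
    then scatter the outputs back into the full sequence."""
    B, N, D = x.shape
    n_keep = int(N * (1.0 - rate))
    # Random per-sample permutation; keep the first n_keep token indices.
    idx = torch.argsort(torch.rand(B, N, device=x.device), dim=1)
    keep = idx[:, :n_keep]

    kept = torch.gather(x, 1, keep.unsqueeze(-1).expand(-1, -1, D))
    for blk in blocks:
        kept = blk(kept)  # only the kept tokens pay compute here

    out = x.clone()  # routed tokens bypass the blocks unchanged
    out.scatter_(1, keep.unsqueeze(-1).expand(-1, -1, D), kept)
    return out
```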
In our paper, we show that TREAD also works on other architectures. In practice, one needs to be more careful with the routing process to respect the characteristics of the specific architecture, since some (e.g., RWKV, Mamba) have a spatial bias. For simplicity, we only provide code for the Transformer architecture, as it is the most widely used while being robust and easy to work with.
For most experiments we use EDM training and sampling to stay consistent with prior work, and the FID calculation is done via the ADM evaluation suite. We provide `fid.py` to evaluate our models during training using the same reference batches as ADM.
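For orientation, the quantity underlying any such evaluation is the Fréchet distance between Gaussians fitted to Inception feature statistics. A minimal sketch of just that formula (the statistics and reference batches come from the ADM suite):

```python
# Frechet distance between two Gaussians N(mu1, sigma1) and N(mu2, sigma2),
# the core of the FID metric.
import numpy as np
from scipy import linalg

def frechet_distance(mu1, sigma1, mu2, sigma2):
    diff = mu1 - mu2
    # Matrix square root of the covariance product
    covmean, _ = linalg.sqrtm(sigma1 @ sigma2, disp=False)
    if np.iscomplexobj(covmean):  # drop tiny imaginary parts from numerics
        covmean = covmean.real
    return diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean)
```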
TREAD works great during training! How about inference?
It turns out TREAD can also be applied during guided inference to gain additional performance and reduce FLOPs at the same time!
Instead of dropping the class label as in classifier-free guidance (CFG), we guide with a delta between selection rates. Since the selection rate used in training (0.5) generalizes to other rates, this can be tuned at inference time only (see the sketch after the list below).
We demonstrate this in `rf.py`, which contains minimal flow matching code for training and sampling:

- `sample`: normal sampling
- `sample_tread`: TREAD sampling 🔥
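As a rough illustration of the idea (not the actual `rf.py` code), rate-delta guidance can be sketched as a CFG-style extrapolation between two forward passes at different selection rates, here inside a plain Euler loop for flow matching. The `model(x, t, y, rate=...)` interface, the guidance direction, and all hyperparameters are assumptions:

```python
# Hypothetical sketch of rate-delta guided Euler sampling; see rf.py for the
# actual sample / sample_tread implementations.
import torch

@torch.no_grad()
def sample_tread_sketch(model, x, y, steps=50, w=1.5, rate_hi=0.5, rate_lo=0.0):
    dt = 1.0 / steps
    for i in range(steps):
        t = torch.full((x.shape[0],), i * dt, device=x.device)
        # Analogous to CFG, but the weak branch routes more tokens
        # (rate_hi) instead of dropping the class label.
        v_weak = model(x, t, y, rate=rate_hi)
        v_strong = model(x, t, y, rate=rate_lo)
        v = v_weak + w * (v_strong - v_weak)
        x = x + v * dt  # Euler step along the learned velocity field
    return x
```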
If you use this codebase or otherwise found our work valuable, please cite our paper:
```bibtex
@article{krause2025tread,
  title={TREAD: Token Routing for Efficient Architecture-agnostic Diffusion Training},
  author={Krause, Felix and Phan, Timy and Gui, Ming and Baumann, Stefan Andreas and Hu, Vincent Tao and Ommer, Bj{\"o}rn},
  journal={arXiv preprint arXiv:2501.04765},
  year={2025}
}
```
Thanks to the open-source codebases DiT, MaskDiT, ADM, and EDM, on which our codebase is built.