Source code of 'Uncertainty-Guided Refinement for Fine-Grained Salient Object Detection', accepted by TIP 2025. The manuscript is available on arXiv and IEEE Xplore.
The code is tested with Python 3.9.13 and PyTorch 1.11.0. Details can be found in `requirements.txt`.
All datasets used can be downloaded here [arrr].
We use the training set of DUTS to train our UGRAN.
We use the testing sets of DUTS, ECSSD, HKU-IS, PASCAL-S, DUT-O, and SOD to test our UGRAN. After downloading, put them into the `/datasets` folder.
Your `/datasets` folder should look like this:
```
-- datasets
   |-- DUT-O
   |   |-- imgs
   |   |-- gt
   |-- DUTS-TR
   |   |-- imgs
   |   |-- gt
   |-- ECSSD
   |   |-- imgs
   |   |-- gt
   ...
```
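Before launching training, it can help to verify the layout above. The following is a minimal sketch (not part of the official code; `check_datasets` is a hypothetical helper) that checks each dataset folder for the expected `imgs` and `gt` sub-folders:

```python
import os

# Hypothetical helper: verify that each dataset directory under `root`
# contains the `imgs` and `gt` sub-folders expected by the data loader.
def check_datasets(root, names=("DUT-O", "DUTS-TR", "ECSSD")):
    missing = []
    for name in names:
        for sub in ("imgs", "gt"):
            path = os.path.join(root, name, sub)
            if not os.path.isdir(path):
                missing.append(path)
    return missing  # empty list means the layout looks correct

if __name__ == "__main__":
    problems = check_datasets("datasets")
    if problems:
        print("Missing folders:")
        for p in problems:
            print(" ", p)
    else:
        print("Dataset layout looks correct.")
```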
- Download the pretrained backbone weights and put them into the `pretrained_model/` folder. ResNet [uxcz] and SwinTransformer are currently supported.
- Run `python train_test.py --train=True --test=True --eval=True --record='record.txt'` for training and testing. The predictions will be in the `preds/` folder and the training records will be in the `record.txt` file.
Pre-calculated saliency maps: UGRAN-R [b7fx], UGRAN-S [gfxr]
Pre-trained weights: UGRAN-R [c3eq], UGRAN-S [n7tr]
For PR curve and F curve, we use the code provided by this repo: [BASNet, CVPR-2019].
For MAE, weighted F-measure, E-measure, and S-measure, we use the code provided by this repo: [PySODMetrics].
Our idea is inspired by InSPyReNet and MiNet. Thanks for their excellent work. We also appreciate the data loading and enhancement code provided by plemeri, as well as the efficient evaluation tool provided by lartpang.
If you find our work helpful, please cite:
```bibtex
@ARTICLE{10960487,
  author={Yuan, Yao and Gao, Pan and Dai, Qun and Qin, Jie and Xiang, Wei},
  journal={IEEE Transactions on Image Processing},
  title={Uncertainty-Guided Refinement for Fine-Grained Salient Object Detection},
  year={2025},
  volume={34},
  number={},
  pages={2301-2314},
  doi={10.1109/TIP.2025.3557562}}
```