🏆 ICCV 2025 Highlight Paper 🏆

This repository is the official implementation of ETCH, a novel body fitting pipeline that estimates cloth-to-body surface mapping through locally approximate SE(3) equivariance, encoding tightness as displacement vectors from the cloth to the underlying body.

News 🚩

  • [2025-08-15] All pretrained models are now also available on Hugging Face!

  • [2025-08-04] We release the All-in-One model, which is trained on the 4D-Dress, CAPE, and Generative datasets, 94,501 samples in total. Please download the all-in-one model from here.

  • [2025-08-04] We release the code for ETCH. Please feel free to try it out!

Overview

Our key novelty is modeling cloth-to-body SE(3)-equivariant tightness vectors for clothed humans, abbreviated as ETCH, which resembles "etching" from the outer clothing down to the inner body.

Following this outer-to-inner mapping, ETCH regresses sparse body markers, simplifying clothed human fitting into an inner-body marker fitting task.
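
To make the idea concrete, here is a minimal, purely illustrative sketch of this outer-to-inner mapping (the array names, shapes, and random inputs are placeholders, not the actual ETCH model code): predicted tightness vectors displace points sampled on the cloth surface onto the underlying body.

# Illustration only: "etching" cloth-surface points inward with tightness vectors.
import numpy as np

# Assumed inputs: N points sampled on the clothed scan and per-point tightness
# predictions (direction + magnitude) from the network.
cloth_points = np.random.rand(1024, 3)           # (N, 3) points on the outer cloth surface
tightness_dirs = np.random.randn(1024, 3)        # (N, 3) predicted displacement directions
tightness_dirs /= np.linalg.norm(tightness_dirs, axis=1, keepdims=True)
tightness_mags = np.random.rand(1024, 1)         # (N, 1) predicted displacement magnitudes

# Displace each cloth point inward to its estimated body-surface location;
# sparse body markers are then aggregated from these inner points for fitting.
inner_body_points = cloth_points + tightness_dirs * tightness_mags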

Environment Setup ⚙️

conda env create -f environment.yml
conda activate etch
cd external
git clone https://github.com/facebookresearch/theseus.git && cd theseus
pip install -e .
cd ../..

Data Preparation 📃

  1. Please note that we have placed data samples in the datafolder folder for convenience.

  2. Generate Anchor Points with Tightness Vectors (for training)

    python scripts/generate_infopoints.py
  3. Get split IDs (pkl file)

    python scripts/get_splitted_ids_{datasetname}.py
  4. For body_models, please download them from this link and place them under the datafolder/ folder.

  5. Please note that the original data requires some preprocessing before the steps above:

    For the 4D-Dress dataset, we apply zero-translation (mesh.apply_translation(-translation)) to both the original scan and the body model (see the sketch after this list);

    For the CAPE dataset, we use the processed meshes extracted from PTF; we noticed that their SMPL body meshes differ marginally from the original SMPL body meshes but are more precise.
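
For the 4D-Dress zero-translation step, a minimal sketch using trimesh could look as follows (the file paths and the source of translation are assumptions; adapt them to how your copy of the data stores the global translation):

# Hypothetical preprocessing sketch for 4D-Dress (paths and translation source are assumptions).
import numpy as np
import trimesh

scan = trimesh.load("path/to/scan.obj", process=False)
body = trimesh.load("path/to/mesh_smpl.obj", process=False)

# `translation` is the sample's global translation (e.g. read from the dataset's info file).
translation = np.load("path/to/info.npz")["translation"]

# Apply the same zero-translation to both the scan and the SMPL body mesh.
scan.apply_translation(-translation)
body.apply_translation(-translation)

scan.export("path/to/scan_zero_trans.obj")
body.export("path/to/mesh_smpl_zero_trans.obj")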

Dataset Organization 📂

The dataset folder tree looks like this:

datafolder/
├── datasetfolder/
│   ├── model/ # scans
│   │   ├── id_0
│   │   │   └── id_0.obj
│   ├── smpl(h)/ # body models
│   │   ├── id_0
│   │   │   ├── info_id_0.npz
│   │   │   └── mesh_smpl_id_0.obj # SMPL body mesh
├── useful_data_datasetname/
├── gt_datasetname_data/
│   ├── npz/
│   │   └── id_0.npz
│   └── ply 
│       └── id_0.ply

Please refer to the datafolder folder for more details.
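
As a quick sanity check that a sample follows this layout, a small sketch like the one below loads one ID's files (the dataset/ID names and the .npz keys are assumptions; inspect your own files to confirm):

# Hypothetical layout check for one sample (folder names and .npz contents are assumptions).
import os
import numpy as np
import trimesh

datafolder = "datafolder"
dataset, sample_id = "datasetfolder", "id_0"

scan = trimesh.load(os.path.join(datafolder, dataset, "model", sample_id, f"{sample_id}.obj"), process=False)
body = trimesh.load(os.path.join(datafolder, dataset, "smpl", sample_id, f"mesh_smpl_{sample_id}.obj"), process=False)
gt = np.load(os.path.join(datafolder, f"gt_{dataset}_data", "npz", f"{sample_id}.npz"))

print("scan vertices:", scan.vertices.shape)
print("SMPL vertices:", body.vertices.shape)
print("ground-truth keys:", list(gt.keys()))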

Training 🚀

CUDA_VISIBLE_DEVICES=0 python src/train.py --batch_size 2 --i datasetname_settingname 
# batch_size should be <= num_data; if you only have the sample data, set batch_size to 1

Evaluation 📊

CUDA_VISIBLE_DEVICES=0 python src/eval.py --batch_size 3 --model_path path_to_pretrained_model --i datasetname_settingname

# Please note that train_ids has no overlap with val_ids. The sample data comes from train_ids, so to test the pretrained model on the sample data, set activated_ids_path to the train_ids.pkl file so the samples can be selected.
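
To inspect which IDs a split file contains before evaluating, a minimal sketch (the file path is an assumption based on the split step above; adjust it to wherever your pkl file was written):

# Quick look at a split file (path is an assumption; adapt to your setup).
import pickle

with open("datafolder/useful_data_datasetname/train_ids.pkl", "rb") as f:
    train_ids = pickle.load(f)

print(f"{len(train_ids)} training IDs, e.g. {list(train_ids)[:5]}")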

Pretrained Model used in the paper

Please download the pretrained model used in the paper from Hugging Face or here.

🔥 All-in-One Model 🔥

We provide the All-in-One model, which is trained on the 4D-Dress, CAPE, and Generative datasets, 94,501 samples in total. Please download the all-in-one model from Hugging Face or here.

For demo inference, you can use the following command:

CUDA_VISIBLE_DEVICES=0 python src/inference_demo.py --scan_path path_to_scan_obj_file --gender gender --model_path path_to_allinone_pretrained_model

Please note that during training of the All-in-One model and in the inference_demo.py file, we center the scan before feeding it to the model, and then re-center the predicted SMPL mesh back to the original scan. For more details, please refer to the src/inference_demo.py file.
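
Conceptually, the centering and re-centering could look like the sketch below (an illustration only, not the exact code in src/inference_demo.py; using the bounding-box center as the offset is an assumption):

# Illustrative centering / re-centering sketch (not the exact inference_demo.py code).
import trimesh

scan = trimesh.load("path/to/scan.obj", process=False)

# Center the scan before feeding it to the model (bounding-box center used here as an assumption).
offset = scan.bounding_box.centroid.copy()
scan.apply_translation(-offset)

# ... run the All-in-One model on the centered scan to obtain `pred_smpl_mesh` ...
pred_smpl_mesh = trimesh.Trimesh()  # placeholder for the model's predicted SMPL mesh

# Re-center the predicted SMPL mesh back into the original scan's coordinate frame.
pred_smpl_mesh.apply_translation(offset)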

We also provide an animation function, which can be used to animate the scan with the predicted SMPL mesh. Please refer to the src/animation.py file for more details.

Citation

@inproceedings{li2025etch,
  title     = {{ETCH: Generalizing Body Fitting to Clothed Humans via Equivariant Tightness}},
  author    = {Li, Boqian and Feng, Haiwen and Cai, Zeyu and Black, Michael J. and Xiu, Yuliang},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  year      = {2025}
}

Acknowledgments

We thank Marilyn Keller for the help in Blender rendering, Brent Yi for fruitful discussions, Ailing Zeng and Yiyu Zhuang for the HuGe100K dataset, Jingyi Wu and Xiaoben Li for their help during the rebuttal and in building this open-source project, and the members of Endless AI Lab for their help and discussions. This work is funded by the Research Center for Industries of the Future (RCIF) at Westlake University and the Westlake Education Foundation. Yuliang Xiu also received funding from the Max Planck Institute for Intelligent Systems.

Here are some great resources we benefit from:

Contributors

Kudos to all of our amazing contributors! This open-source project is made possible by the contributions of the following individuals:

License

ETCH is released under the MIT License.

Disclosure

While MJB is a co-founder and Chief Scientist at Meshcapade, his research in this project was performed solely at, and funded solely by, the Max Planck Society.

Contact

For technical questions, please contact Boqian Li via boqianlihuster@gmail.com.
