For the task of hanging clothes, learning how to insert a hanger into a garment is crucial but has seldom been explored in robotics. In this work, we address the problem of inserting a hanger into various unseen garments that are initially laid out flat on a table. The task is challenging due to its long-horizon nature, the high degrees of freedom of the garments, and the scarcity of training data. To simplify learning, we first decompose the task into several stages. We then formulate each stage as a policy learning problem and propose a low-dimensional action parameterization. To overcome the challenge of limited data, we build our own simulator and create 144 synthetic clothing assets to collect high-quality training data efficiently. Our approach takes single-view depth images and object masks as input, which mitigates the Sim2Real appearance gap and generalizes well to new garments. Extensive experiments in both simulation and the real world validate the proposed method. Trained on diverse garments in simulation, our method achieves a 75% success rate on 8 different unseen garments in the real world.
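The abstract describes per-stage policies that map a single-view depth image and an object mask to a low-dimensional action. The snippet below is only a minimal sketch of what such a stage policy could look like, not the authors' actual architecture: it assumes a small CNN encoder over stacked depth and mask channels, and a 4-D action head (e.g., a 2-D pick point and a 2-D place point in image coordinates) as a stand-in for the paper's action parameterization.

```python
# Hypothetical sketch of one per-stage policy; architecture and action
# dimensionality are assumptions, not taken from the paper.
import torch
import torch.nn as nn

class StagePolicy(nn.Module):
    def __init__(self, action_dim: int = 4):
        super().__init__()
        # Two input channels: single-view depth image and binary object mask.
        self.encoder = nn.Sequential(
            nn.Conv2d(2, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Regress the low-dimensional action parameters for this stage.
        self.head = nn.Sequential(
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, action_dim),
        )

    def forward(self, depth: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        x = torch.cat([depth, mask], dim=1)   # (B, 2, H, W)
        return self.head(self.encoder(x))     # (B, action_dim)

# Usage: one such policy per stage of the decomposed task.
policy = StagePolicy(action_dim=4)
depth = torch.rand(1, 1, 128, 128)                      # depth image
mask = torch.randint(0, 2, (1, 1, 128, 128)).float()    # garment mask
action = policy(depth, mask)                            # low-dim action
```

Using depth and masks rather than RGB keeps the input modality close between simulation and the real camera, which is what the abstract credits for the small Sim2Real appearance gap.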
@misc{chen2024robohangerlearninggeneralizablerobotic,
  title={RoboHanger: Learning Generalizable Robotic Hanger Insertion for Diverse Garments},
  author={Yuxing Chen and Songlin Wei and Bowen Xiao and Jiangran Lyu and Jiayi Chen and Feng Zhu and He Wang},
  year={2024},
  eprint={2412.01083},
  archivePrefix={arXiv},
  primaryClass={cs.RO},
  url={https://arxiv.org/abs/2412.01083},
}