SpeedFolding

Learning Efficient Bimanual Folding of Garments

Best RoboCup Paper Award
Best Paper Award

Yahav Avigal*¹, Lars Berscheid*¹,², Tamim Asfour², Torsten Kröger², Ken Goldberg¹

¹UC Berkeley   ²Karlsruhe Institute of Technology (KIT)   *Equal contribution

Abstract

Folding garments reliably and efficiently is a long-standing challenge in robotic manipulation. An intuitive approach is to first manipulate the garment to a canonical smooth configuration before folding, a task that has proven challenging due to the complex dynamics and high-dimensional configuration space of garments. In this work, we develop SpeedFolding, a reliable and efficient bimanual system that manipulates an initially crumpled garment to (1) a smooth and (2) a subsequent folded configuration following user-defined instructions given as folding lines. Our primary contribution is a novel neural network architecture that predicts two corresponding poses to parameterize a diverse set of bimanual action primitives. After learning from 4300 human-annotated or self-supervised actions, the robot is able to fold garments from a random initial configuration in under 120 s on average with a success rate of 93%. Real-world experiments show that the system generalizes to unseen garments of different color, shape, and stiffness. SpeedFolding reduces folding time by over 30% compared to baselines and is 5-10x faster than prior works that require 10 to 20 min per fold.

Conference Video

Set of Primitives

We define a set of manipulation primitives to smooth and then fold a garment from an arbitrary initial state. Each primitive is parameterized by its pick and place poses, which are either learned or computed by a heuristic. In particular, we learn (1) the two pick poses for a fling primitive, (2) the two pick poses for a drag primitive, and (3) the corresponding poses for a pick-and-place primitive. To predict these poses, we introduce the novel BiMaMa-Net architecture for BiManual Manipulation.
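To make this parameterization concrete, the minimal sketch below (class and field names are illustrative, not taken from our code) represents a bimanual primitive by its type and two planar poses, matching the x, y, theta convention used in the dataset further down this page:

    from dataclasses import dataclass
    from enum import Enum


    class PrimitiveType(Enum):
        FLING = "fling"
        DRAG = "drag"
        PICK_AND_PLACE = "pick-and-place"


    @dataclass
    class PlanarPose:
        """Planar grasp pose: x, y relative to the image dimensions, theta the gripper orientation."""
        x: float
        y: float
        theta: float


    @dataclass
    class BimanualAction:
        """A single primitive, parameterized by two corresponding poses.

        For fling and drag these are the left and right pick poses; for
        pick-and-place they correspond to the pick and the place pose.
        """
        type: PrimitiveType
        pose_a: PlanarPose
        pose_b: PlanarPose


    # Example: a fling parameterized by two predicted pick poses
    action = BimanualAction(
        type=PrimitiveType.FLING,
        pose_a=PlanarPose(x=0.42, y=0.55, theta=1.2),
        pose_b=PlanarPose(x=0.61, y=0.53, theta=-0.4),
    )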

Primitives for Smoothing

Fling Primitive

Drag Primitive

Pick-and-place Primitive

Training

To scale self-supervised training to over 2000 actions, the robot needs to collect data with as little human intervention as possible. The timelapse video below shows the robot interacting with the garment for around 40 min, performing the set of primitives above to learn smoothing. The collected data is available below.

Timelapse (60x)
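One simple self-supervised signal for smoothing, sketched here purely as an illustration (the rewards in the dataset below may be computed differently, e.g. from the before/after images and the trained Ready to Fold predictor), is the change in garment coverage between the segmentation masks recorded before and after an action:

    import numpy as np


    def coverage(mask: np.ndarray) -> float:
        """Fraction of the workspace covered by the garment, given a binary segmentation mask."""
        return float(np.count_nonzero(mask)) / mask.size


    def smoothing_reward(mask_before: np.ndarray, mask_after: np.ndarray) -> float:
        """Illustrative reward: relative gain in coverage after an action.

        Positive if the action spread the garment out, negative if it crumpled it further.
        """
        c_before, c_after = coverage(mask_before), coverage(mask_after)
        return (c_after - c_before) / max(c_before, 1e-6)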

End-to-end Folding

The example videos below show end-to-end folding of a T-shirt from an arbitrary initial configuration. On average, SpeedFolding takes less than 120 s and achieves a success rate of 93%, significantly outperforming prior works that require 10 to 20 min per fold.

(3x)


Supplementary Material

Instruction Templates

We define the goal state for each class of garment as a sequence of folding lines on top of a template mask. Besides their order, the folding lines also contain information about the direction of the fold. The template mask itself is fitted to a segmented image of the smoothed garment using particle swarm optimization.

T-Shirt

Towel
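As an illustration of this fitting step, the sketch below uses the pyswarms library to search over a similarity transform (translation, rotation, scale) that maximizes the intersection-over-union between the warped template mask and the segmented garment mask. The cost function, parameter bounds, and PSO hyperparameters are assumptions for illustration and are not taken from our implementation.

    import cv2
    import numpy as np
    import pyswarms as ps


    def fit_template(template: np.ndarray, garment: np.ndarray) -> np.ndarray:
        """Fit a binary template mask to a binary garment mask of the same size.

        Searches a similarity transform (tx, ty, angle in degrees, scale) that
        maximizes the intersection-over-union (IoU) between both masks.
        """
        h, w = garment.shape

        def neg_iou(particles: np.ndarray) -> np.ndarray:
            costs = []
            for tx, ty, angle, scale in particles:
                m = cv2.getRotationMatrix2D((w / 2, h / 2), angle, scale)
                m[:, 2] += (tx, ty)
                warped = cv2.warpAffine(template.astype(np.uint8), m, (w, h)) > 0
                intersection = np.logical_and(warped, garment).sum()
                union = np.logical_or(warped, garment).sum()
                costs.append(-intersection / max(union, 1))
            # pyswarms minimizes the cost, so we return the negative IoU
            return np.array(costs)

        bounds = (np.array([-w / 2, -h / 2, -180.0, 0.5]),
                  np.array([w / 2, h / 2, 180.0, 2.0]))
        optimizer = ps.single.GlobalBestPSO(n_particles=32, dimensions=4,
                                            options={"c1": 1.5, "c2": 1.5, "w": 0.7},
                                            bounds=bounds)
        best_cost, best_pos = optimizer.optimize(neg_iou, iters=60)
        return best_pos  # (tx, ty, angle, scale) of the best-fitting transform

The fitted transform can then be used to map the predefined folding lines from the template onto the observed garment.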

Dataset

We've collected a dataset of over 4400 real-world actions, either human-annotated or through self-supervision. The complete dataset can be downloaded here (15.2 GB). Camera images are stored in the collection directory and are grouped by their respective episode_id. Both color and depth images are saved as *.png files; parameters for the depth encoding are available in the repository. Additionally, we provide the segmented mask of the garment as well as orthographic projections of all images, resulting in six images per action in total. All action information is stored in the SQLite database in the root directory, in particular in the actions table. Each action has an episode_id, an integer action_id, and a json data column. Among other fields, each action contains a type field and two planar poses, each with x, y, and theta coordinates relative to the image dimensions. In contrast to human-annotated actions, self-supervised ones have a numeric reward field (which can also be computed from the before/after images and a trained Ready to Fold predictor). Here are some example actions:

episode_id                 reward  type
2022-02-14-09-39-29-655     0.857  fling
2022-02-11-15-38-43-497     0.783  fling
2021-12-21-17-52-37-900         -  fling
2022-02-06-13-20-32-986         -  fling-to-fold
2021-12-29-14-36-38-653     0.198  fling
2021-12-29-14-37-48-278    -0.371  fling
2022-02-11-15-25-22-509     0.220  drag
2022-01-05-10-14-10-154         -  pick-and-place
2022-01-31-15-23-42-223     0.123  pick-and-place
2022-02-04-16-05-11-145         -  ready-to-fold
2022-02-04-15-00-19-830         -  ready-to-fold
2021-12-28-11-44-54-592         -  ready-to-fold

Download Dataset (15.2 GB)

More information can be found in our repository.