Preprint, Working Paper. Year: 2024

ViViDex: Learning Vision-based Dexterous Manipulation from Human Videos

Abstract

In this work, we aim to learn a unified vision-based policy for multi-fingered robot hands to manipulate a variety of objects in diverse poses. Though prior work has shown the benefits of using human videos for policy learning, performance gains have been limited by the noise in estimated trajectories. Moreover, reliance on privileged object information such as ground-truth object states further limits applicability in realistic scenarios. To address these limitations, we propose a new framework, ViViDex, to improve vision-based policy learning from human videos. It first uses reinforcement learning with trajectory-guided rewards to train state-based policies for each video, obtaining trajectories that are both visually natural and physically plausible. We then roll out successful episodes from the state-based policies and train a unified visual policy without using any privileged information. We propose a coordinate transformation to further enhance the visual point cloud representation, and compare behavior cloning and diffusion policy for the visual policy training. Experiments both in simulation and on the real robot demonstrate that ViViDex outperforms state-of-the-art approaches on three dexterous manipulation tasks.
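The abstract does not spell out the proposed coordinate transformation; as a rough illustration of the general idea, the sketch below re-expresses an observed object point cloud in a hand-centric frame before it would be fed to a visual policy. The frame names, the 4x4 pose convention, and the `transform_points` helper are assumptions for illustration, not the paper's implementation.

```python
# Minimal sketch (assumed, not the paper's code): map an object point cloud
# from the world frame into the robot wrist frame so the visual policy sees
# hand-relative geometry.
import numpy as np

def transform_points(points_world: np.ndarray, wrist_pose_world: np.ndarray) -> np.ndarray:
    """Map Nx3 points from the world frame into the wrist frame.

    wrist_pose_world is a 4x4 homogeneous transform giving the wrist pose
    expressed in the world frame; its inverse maps world -> wrist coordinates.
    """
    world_to_wrist = np.linalg.inv(wrist_pose_world)
    homogeneous = np.hstack([points_world, np.ones((points_world.shape[0], 1))])
    return (homogeneous @ world_to_wrist.T)[:, :3]

# Hypothetical usage: 1024 scene points and an identity wrist pose.
points = np.random.rand(1024, 3)
wrist_pose = np.eye(4)
points_in_wrist_frame = transform_points(points, wrist_pose)
```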
Main file
2404.15709v2.pdf (2.66 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-04721332, version 1 (04-10-2024)

Identifiers

Cite

Zerui Chen, Shizhe Chen, Etienne Arlaud, Ivan Laptev, Cordelia Schmid. ViViDex: Learning Vision-based Dexterous Manipulation from Human Videos. 2024. ⟨hal-04721332⟩