Document Type

Article

Source Publication Title

PETRA 2021

Abstract

In this paper, we present a novel method for learning end-to-end visuomotor policies for robotic manipulators. The method learns state-action mappings in a supervised manner from video demonstrations and robot trajectories. We show that the robot learns to perform different tasks by associating image features with the corresponding movement primitives of different grasp poses. To evaluate the effectiveness of the proposed learning method, we conduct experiments with a PR2 robot in a simulation environment, assessing the system's ability to perform manipulation tasks.
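
The sketch below illustrates the general idea of supervised state-action mapping described in the abstract; it is not the authors' implementation. A small convolutional network maps camera images to one of several movement-primitive (grasp-pose) classes and is trained with a standard classification loss on image-label pairs derived from demonstrations. All layer sizes, class counts, and variable names are illustrative assumptions.

```python
import torch
import torch.nn as nn

class VisuomotorPolicy(nn.Module):
    """Illustrative image-to-primitive classifier (not the paper's model)."""

    def __init__(self, num_primitives: int = 4):
        super().__init__()
        # Convolutional feature extractor for RGB camera frames.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        # Linear head over movement-primitive / grasp-pose classes.
        self.head = nn.Linear(32 * 4 * 4, num_primitives)

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        feats = self.features(images).flatten(1)
        return self.head(feats)

# One supervised training step on (image, primitive label) pairs,
# standing in for pairs extracted from video demonstrations and
# the associated robot trajectories.
model = VisuomotorPolicy()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

images = torch.randn(8, 3, 64, 64)    # placeholder batch of camera frames
labels = torch.randint(0, 4, (8,))    # placeholder primitive labels

logits = model(images)
loss = loss_fn(logits, labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```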

Publication Date

7-2-2021

Language

English

License

This work is licensed under a Creative Commons Attribution 4.0 International License.
