A new approach to reproduce human and animal movements in robots

Credit: Bohez et al.

In recent years, developers have created a wide range of sophisticated robots that can operate in specific environments with increasing efficiency. The body structures of many of these systems are inspired by nature, particularly by humans and other animals.

Although many existing robots have bodies resembling those of humans or other animal species, programming them to also move like the animals that inspired them is rarely straightforward. It usually requires developing advanced locomotion controllers, which can demand considerable resources and engineering effort.

DeepMind researchers recently created a new technique that can be used to effectively train robots to mimic human or animal movements. This tool, introduced in a paper pre-published on arXiv, builds on previous work that exploited real-world data representing human and animal movements, collected using motion capture technology.

“We investigate the use of prior knowledge of human and animal movement to learn reusable locomotion skills for real legged robots,” the DeepMind team wrote in their paper. “Our approach builds on previous work on imitating human or dog motion capture (MoCap) data to learn a movement skill module. Once learned, this skill module can be reused for complex downstream tasks.”

Many robot locomotion controllers developed in the past have modular designs, in which the system is divided into distinct parts (i.e., modules) that interact with each other. While some of these controllers have achieved promising results, developing them often requires significant engineering effort. In addition, modular designs are usually task-specific, so they do not generalize well across tasks, situations, and environments.

As an alternative to these controllers, some researchers have proposed a method called “trajectory optimization”, which combines a motion planner with a tracking controller. These approaches require less engineering than modular controllers, but they often have to perform extensive computations and therefore may be too slow to be applied in real time.

In their paper, Steven Bohez and his colleagues at DeepMind presented an alternative approach for training legged robots to move in ways that resemble human and animal locomotion. Their technique abstracts the motor skills of humans and animals from data collected with motion capture technology, then uses those skills to train real-world robots.

In developing their approach, the team went through four main steps. First, they retargeted the motion capture data to the real-world robots. They then trained a policy, in a simulated environment, to imitate the desired motion trajectories in the MoCap data.
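The first of these steps, retargeting, maps recorded joint trajectories from the source skeleton (a human or dog) onto the robot's morphology. A minimal illustrative sketch of the idea follows; every joint name, scale factor, and joint limit below is invented for illustration and is not taken from the paper:

```python
# Illustrative MoCap retargeting: map source-skeleton joint angles onto a robot
# with different joint names and ranges. All names and numbers are hypothetical.
JOINT_MAP = {
    # mocap joint      -> (robot joint, scale, offset in radians)
    "left_hip_pitch":    ("l_hip",  1.0, 0.0),
    "left_knee":         ("l_knee", 0.9, 0.05),
    "right_hip_pitch":   ("r_hip",  1.0, 0.0),
    "right_knee":        ("r_knee", 0.9, 0.05),
}

# Robot joint limits (radians), used to clamp retargeted angles.
ROBOT_LIMITS = {"l_hip": (-1.0, 1.0), "l_knee": (0.0, 2.0),
                "r_hip": (-1.0, 1.0), "r_knee": (0.0, 2.0)}

def retarget_frame(mocap_frame):
    """Map one frame of MoCap joint angles onto the robot, clamping to limits."""
    robot_frame = {}
    for src, angle in mocap_frame.items():
        if src not in JOINT_MAP:
            continue  # skip source joints the robot does not have
        dst, scale, offset = JOINT_MAP[src]
        lo, hi = ROBOT_LIMITS[dst]
        robot_frame[dst] = min(hi, max(lo, angle * scale + offset))
    return robot_frame

# A source frame with one joint ("spine") the robot lacks and one angle (2.5)
# outside the robot's knee range, which gets clamped.
frame = {"left_hip_pitch": 0.3, "left_knee": 2.5,
         "right_hip_pitch": -0.2, "right_knee": 0.4, "spine": 0.1}
print(retarget_frame(frame))
```

In practice retargeting must also handle differing limb proportions and dynamics, but the core idea is the same: translate motion from one body onto another while respecting the target's physical limits.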

“This policy has a hierarchical structure in which a tracking policy encodes the desired reference trajectory into a latent action that then instructs a proprioception-conditioned low-level controller,” the researchers wrote in their paper.
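The two-level structure described in the quote can be sketched as follows. The class names, dimensions, and single-layer networks here are illustrative assumptions, not the paper's actual architecture; they only show how a tracking policy compresses a reference trajectory into a latent action that a proprioception-conditioned controller consumes:

```python
# Hypothetical sketch of the hierarchical policy: a tracking encoder produces a
# latent action z; a low-level controller maps (z, proprioception) to motor targets.
import math
import random

random.seed(0)

def linear(x, w, b):
    """Minimal dense layer: y = Wx + b."""
    return [sum(wi * xi for wi, xi in zip(row, x)) + bi
            for row, bi in zip(w, b)]

class TrackingEncoder:
    """High-level tracking policy: reference-trajectory snippet -> latent action z."""
    def __init__(self, ref_dim, latent_dim):
        self.w = [[random.gauss(0, 0.1) for _ in range(ref_dim)]
                  for _ in range(latent_dim)]
        self.b = [0.0] * latent_dim

    def __call__(self, reference):
        return [math.tanh(v) for v in linear(reference, self.w, self.b)]

class LowLevelController:
    """Low-level controller: (latent z, proprioception) -> joint targets."""
    def __init__(self, latent_dim, proprio_dim, action_dim):
        in_dim = latent_dim + proprio_dim
        self.w = [[random.gauss(0, 0.1) for _ in range(in_dim)]
                  for _ in range(action_dim)]
        self.b = [0.0] * action_dim

    def __call__(self, z, proprio):
        return [math.tanh(v) for v in linear(z + proprio, self.w, self.b)]

# One control step: encode the desired reference, then condition on proprioception.
encoder = TrackingEncoder(ref_dim=6, latent_dim=4)
controller = LowLevelController(latent_dim=4, proprio_dim=8, action_dim=12)

reference = [0.1] * 6   # e.g. upcoming MoCap poses to track
proprio = [0.0] * 8     # e.g. measured joint angles and velocities
z = encoder(reference)
action = controller(z, proprio)
print(len(z), len(action))  # 4 12
```

The key design point is the interface: the low-level controller never sees the reference trajectory directly, only the latent action, which is what later makes the controller reusable for tasks other than trajectory tracking.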

After training this policy to mimic the reference trajectories, the researchers were able to reuse the low-level controller, which has fixed parameters, by training a new task policy to produce latent actions. This allows their controllers to replicate complex human or animal movements in robots, such as dribbling a ball. Finally, Bohez and his colleagues transferred the controllers they developed from simulations to real hardware.
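A toy sketch of this reuse stage follows, assuming a frozen single-layer controller and a task policy improved by random search (a deliberately simple stand-in for the reinforcement learning actually used; all names, dimensions, and the toy objective are invented):

```python
# Hypothetical reuse stage: the pretrained low-level controller is frozen, and
# only a new task policy (which outputs latent actions) is optimized.
import math
import random

random.seed(1)

LATENT_DIM, PROPRIO_DIM, ACTION_DIM = 4, 8, 12

def forward(w, x):
    """Single tanh layer without bias: y = tanh(Wx)."""
    return [math.tanh(sum(wi * xi for wi, xi in zip(row, x))) for row in w]

# "Pretrained" low-level controller weights: frozen after the imitation stage.
frozen_w = [[random.gauss(0, 0.1) for _ in range(LATENT_DIM + PROPRIO_DIM)]
            for _ in range(ACTION_DIM)]

# New task policy: maps task observations (e.g. ball position) to latent actions.
task_w = [[random.gauss(0, 0.1) for _ in range(3)] for _ in range(LATENT_DIM)]

def control_step(task_obs, proprio):
    z = forward(task_w, task_obs)          # trainable task policy
    return forward(frozen_w, z + proprio)  # frozen, reusable skill module

def objective(w):
    """Toy stand-in for a task reward: prefer small latent actions."""
    out = forward(w, [0.5, -0.2, 0.1])
    return -sum(v * v for v in out)

# Toy "training" loop: perturb only the task policy; frozen_w is never touched.
best = objective(task_w)
for _ in range(50):
    candidate = [[wij + random.gauss(0, 0.05) for wij in row] for row in task_w]
    score = objective(candidate)
    if score > best:
        task_w, best = candidate, score

action = control_step([0.5, -0.2, 0.1], [0.0] * PROPRIO_DIM)
print(len(action))  # 12
```

Because only the small task policy is optimized while the skill module stays fixed, the natural-looking low-level motion learned from MoCap data carries over to the new task for free.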

“Importantly, due to the prior imposed by the MoCap data, our approach does not require extensive reward engineering to produce responsive and natural-looking behavior at the time of reuse,” the researchers wrote in their paper. “This facilitates the creation of well-regulated, task-oriented controllers suitable for deployment on real robots.”

So far, the DeepMind team has evaluated their approach in a series of experiments, both in simulation and in the real world. In these tests, they successfully used their technique to train controllers to reproduce two main behaviors, namely walking and ball dribbling. They then evaluated the quality of the resulting movements on two real-world robots: the quadrupedal ANYmal and the humanoid OP3.

The results collected by Bohez and his colleagues are very promising, suggesting that their approach could help develop robots that mimic humans and animals more realistically. In their next studies, they would like to train their policies on new animal and human behaviors, and then try to replicate them in robots.

“We want to extend our datasets with a wider variety of behaviors and further explore the range of downstream tasks that the skills module allows,” the researchers wrote in their paper.


More information:
Steven Bohez et al, Imitate and Repurpose: Learning Reusable Robot Movement Skills from Human and Animal Behaviors, arXiv:2203.17138 [cs.RO]. arxiv.org/abs/2203.17138

Project page: https://sites.google.com/view/robot-npmp

© 2022 Science X Network

Citation: A new approach to reproduce human and animal movements in robots (2022, May 5), retrieved 6 May 2022 from https://techxplore.com/news/2022-05-approach-human-animal-movements-robots.html

This document is subject to copyright. Except for fair use for purposes of private study or research, no part may be reproduced without written permission. The content is provided for information only.
