Robust 3D Bipedal Locomotion using Reinforcement Learning and Control

Abstract

While recent learning-based solutions have shown promising results in achieving dynamic locomotion for bipedal robots, they still suffer from poor sampling efficiency. Most of these approaches rely on end-to-end learning and require prior knowledge of reference trajectories, and only a few of them have been successfully deployed on hardware. In this talk, I will present a reinforcement learning (RL) framework to design cascade feedback control policies for 3D bipedal locomotion. By decoupling the problem of bipedal locomotion into a two-stage process, trajectory planning and feedback regulation, we propose a modular solution that incorporates the physical insights of dynamic locomotion and its hybrid nature into the learning process of the policy. We leverage the exploration potential of RL algorithms to find reference trajectories for dynamic locomotion using a reduced state of the robot. We then improve these reference trajectories using feedback regulation to obtain stable and robust walking gaits. This decoupled structure significantly simplifies the neural network's complexity, enhancing the sampling efficiency and robustness of the learned policy. The proposed framework learns stable and robust walking gaits from scratch and allows the controller to realize omnidirectional walking with accurate tracking of the desired velocity and heading angle. The learned policies also perform robustly against various adversarial forces applied to the torso and when walking blindly on a series of challenging and unstructured terrains.
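The cascade structure described above can be sketched as a small Python example. This is an illustrative assumption of the decoupled design, not the actual implementation: a learned high-level planner maps a reduced robot state to joint reference positions, and a low-level regulator (here a simple PD law with made-up gains) tracks them. All names, dimensions, and gains are hypothetical.

```python
import numpy as np

N_JOINTS = 10  # assumed number of actuated joints on the biped (illustrative)

def planner_policy(reduced_state: np.ndarray) -> np.ndarray:
    """Stand-in for the learned RL policy: maps a reduced state to desired
    joint positions. A trained neural network would replace this zero-weight
    linear placeholder."""
    W = np.zeros((N_JOINTS, reduced_state.size))  # placeholder weights
    return W @ reduced_state

def feedback_regulation(q, dq, q_des, kp=50.0, kd=2.0):
    """PD feedback regulation around the planned reference trajectory."""
    return kp * (q_des - q) - kd * dq

def cascade_controller(reduced_state, q, dq):
    q_des = planner_policy(reduced_state)    # stage 1: trajectory planning
    tau = feedback_regulation(q, dq, q_des)  # stage 2: feedback regulation
    return tau

# One control step with zero state (torques are zero under this placeholder).
tau = cascade_controller(np.zeros(6), np.zeros(N_JOINTS), np.zeros(N_JOINTS))
```

The key design point the talk highlights is that only the high-level planner is learned, over a reduced state, while the regulator supplies stability and robustness, which keeps the network small and sample-efficient.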

Date
Feb 24, 2022 1:00 PM
Event
Invited Talk Wandercraft
Location
Wandercraft
Online
Guillermo Castillo
Ph.D. Candidate in Electrical and Computer Engineering

If you are interested in the work I do, you can contact me at castillomartinez.2@osu.edu