Reinforcement Learning Based Legged Locomotion of a Quadruped in a Complex Environment
In this work, we propose a reinforcement learning based legged-locomotion model for a quadruped that learns to localize itself within a space known to the system in complex, cluttered environments. Legged robots such as quadrupeds are deployed on terrains whose geometry is difficult to model and predict, so they must be able to generalize to unexpected circumstances. We propose a novel method for training neural-network policies toward terrain-aware locomotion that combines a state-of-the-art model-based robot-navigation framework with reinforcement learning. Rather than relying solely on physical simulation, we formulate a Markov decision process (MDP) based on the assessment of complex feasibility parameters. Using both exteroceptive and proprioceptive measurements, we employ policy-gradient methods to learn policies that independently plan and execute footholds and base motions in a variety of environments. We evaluate our method on a challenging set of virtual terrain scenarios, including narrow bridges, holes, and stepping stones, and train policies that locomote successfully in all of them. The simulation runs on a high-performance CPU using OpenAI Gym, and also with TensorFlow on the Rex-gym environment.
Keywords—Reinforcement Learning (RL), Legged Locomotion, MDP, Quadruped, OpenAI Gym