Publication

Autonomy, Artificial Intelligence, Robotics (AAIR)
2021

DEEP REINFORCEMENT LEARNING FOR SIMULTANEOUS PATH PLANNING AND STABILIZATION OF OFFROAD VEHICLES

by Ameya Salvi; Jake Buzhardt; Phanindra Tallapragada; Venkat Krovi; Mark Brudnak; Jonathon M. Smereka

Abstract

Motion planning algorithms for vehicles in an offroad environment have to contend with the significant vertical motion induced by uneven terrain. Beyond the obvious impact on driver comfort, such “bumpy” vertical motion can, for autonomous vehicles, introduce significant mechanical noise into the real-time data acquired from onboard sensors such as cameras, to the point that perception becomes especially challenging. This paper advances a framework to address the problem of vertical motion in offroad autonomous motion control for vehicular systems. The framework is first developed to demonstrate stabilization of the sprung mass of a modified quarter-car model tracking a desired velocity while traversing terrain with changing height. Even for an idealized model such as the quarter-car, the dynamics turn out to be nonlinear, and a model-based controller is not obvious. We therefore formulate this control problem as a Markov decision process and solve it using deep reinforcement learning. The learned control inputs are the torque on the wheel and the stiffness of the active suspension. It is demonstrated that a time-varying velocity can be tracked with reduced chassis oscillations using these control inputs. We anticipate that reducing such oscillations will stabilize the onboard sensors, improving perception and reducing the required frequency of recalibration. The deep reinforcement learning approach advanced in this paper remains useful for offroad motion planning when complex terramechanics and uncertain model parameters are introduced or the vehicle model increases in complexity.
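
To make the Markov decision process formulation described above concrete, the following is a minimal, illustrative sketch in Python of a quarter-car environment whose action is the pair (wheel torque, suspension stiffness) and whose reward trades off velocity tracking against sprung-mass acceleration. This is not the paper's implementation: the linear quarter-car equations, terrain profile, parameter values, reward weights, and the simplified longitudinal dynamics below are assumptions made purely for illustration, and such an environment would be wrapped in a standard deep reinforcement learning library for training.

    # Illustrative sketch only (not the authors' code). All parameter
    # values and the terrain profile below are assumed for illustration.
    import numpy as np

    class QuarterCarEnv:
        """Quarter-car with simplified longitudinal dynamics over uneven terrain."""

        def __init__(self, dt=0.001, v_ref=5.0):
            self.dt = dt                      # integration step [s]
            self.v_ref = v_ref                # desired forward velocity [m/s] (assumed)
            self.m_s, self.m_u = 300.0, 50.0  # sprung / unsprung masses [kg] (assumed)
            self.c_s = 1500.0                 # suspension damping [N s/m] (assumed)
            self.k_t = 200e3                  # tire stiffness [N/m] (assumed)
            self.r_w = 0.3                    # wheel radius [m] (assumed)
            self.reset()

        def terrain(self, x):
            # Simple sinusoidal height profile standing in for terrain data.
            return 0.05 * np.sin(0.5 * x)

        def reset(self):
            # State: [z_s, z_s_dot, z_u, z_u_dot, x, v]
            self.state = np.zeros(6)
            return self.state.copy()

        def step(self, action):
            torque, k_s = action              # wheel torque [N m], suspension stiffness [N/m]
            z_s, zs_d, z_u, zu_d, x, v = self.state
            z_r = self.terrain(x)

            # Suspension and tire forces (linear quarter-car with variable stiffness).
            f_susp = -k_s * (z_s - z_u) - self.c_s * (zs_d - zu_d)
            f_tire = -self.k_t * (z_u - z_r)

            zs_dd = f_susp / self.m_s
            zu_dd = (-f_susp + f_tire) / self.m_u
            # Simplified longitudinal dynamics: traction minus crude rolling resistance.
            v_dot = (torque / self.r_w - 50.0 * np.sign(v)) / (self.m_s + self.m_u)

            # Explicit Euler integration of the state.
            self.state = np.array([
                z_s + zs_d * self.dt,
                zs_d + zs_dd * self.dt,
                z_u + zu_d * self.dt,
                zu_d + zu_dd * self.dt,
                x + v * self.dt,
                v + v_dot * self.dt,
            ])

            # Reward penalizes velocity tracking error and chassis acceleration.
            reward = -((v - self.v_ref) ** 2) - 0.1 * zs_dd ** 2
            return self.state.copy(), reward, False, {}

In use, a deep RL agent would repeatedly call env.step([torque, k_s]) and learn a policy that maps the six-dimensional state to the two control inputs; the relative weight on the chassis-acceleration term (0.1 here, an assumed value) sets the trade-off between velocity tracking and sprung-mass stabilization described in the abstract.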