Document Type

Honors Thesis

Abstract

The path planning problem is central to autonomous navigation. By applying integral reinforcement learning, one can build a system that not only iteratively improves as it steers a vehicle toward a target destination but also achieves optimality with respect to desired criteria such as minimal time and energy expenditure. The challenge of solving the reinforcement learning problem for a nonlinear system is addressed with two neural networks: one for the actor portion of the reinforcement learning problem and one for the critic portion. This project explores the minimum-time-energy path planning problem for a nonlinear multi-robot system subject to collision avoidance, input constraints, and an unknown environmental disturbance. We develop an online, adaptive solution using integral reinforcement learning (IRL) with an actor-critic structure to learn the solution of the Hamilton-Jacobi-Bellman equation for such a system. An experience replay technique updates the critic neural network weights using both past and present data, in contrast to most existing reinforcement learning algorithms, which use current data exclusively. Collision avoidance is achieved with an artificial potential field, and the unknown environmental disturbance is rejected via an H-infinity design. An IRL-based control law is derived, and its convergence is verified through simulation.
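The abstract mentions an artificial potential field for collision avoidance. As a minimal sketch of that idea (not the thesis's actual formulation; the function name, potential shape, and gains `eta`, `d0` here are illustrative assumptions), a robot can descend the gradient of a repulsive potential that grows as it nears an obstacle and vanishes beyond an influence radius:

```python
import numpy as np

def repulsive_gradient(p, p_obs, d0=1.0, eta=1.0):
    """Gradient of the classical repulsive potential
    U(d) = 0.5 * eta * (1/d - 1/d0)**2 for d < d0, and 0 otherwise,
    where d is the distance from robot position p to obstacle p_obs.
    The avoidance term steers the robot opposite this gradient."""
    diff = p - p_obs
    d = np.linalg.norm(diff)
    if d >= d0 or d == 0.0:
        # Outside the influence radius (or degenerate overlap): no repulsion.
        return np.zeros_like(p)
    # dU/dp = -eta * (1/d - 1/d0) * (1/d**2) * (diff / d)
    return -eta * (1.0 / d - 1.0 / d0) * (1.0 / d ** 2) * (diff / d)

# Example: a robot at (0.5, 0) near an obstacle at the origin is pushed
# along +x, directly away from the obstacle.
p = np.array([0.5, 0.0])
p_obs = np.array([0.0, 0.0])
avoidance_step = -repulsive_gradient(p, p_obs)
```

In the thesis's setting this repulsive term would be blended into the IRL-based control law so that optimality and avoidance objectives are pursued together; the sketch above shows only the avoidance ingredient in isolation.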

Publication Date

5-1-2022

Language

English
