Graduation Semester and Year
2021
Language
English
Document Type
Thesis
Degree Name
Master of Science in Electrical Engineering
Department
Electrical Engineering
First Advisor
Yan Wan
Abstract
This work studies a multi-player H-infinity differential game for systems of general linear dynamics. In this game, multiple players design their control inputs to minimize their cost functions in the presence of worst-case disturbances. We first derive the optimal control and disturbance policies using the solutions to Hamilton-Jacobi-Isaacs (HJI) equations. We then prove that the derived optimal policies stabilize the system and constitute a Nash equilibrium solution. Two integral reinforcement learning (IRL)-based algorithms, including the policy iteration IRL and off-policy IRL, are developed to solve the differential game online. We show that the off-policy IRL can solve the multi-player H-infinity differential game online without using any system dynamics information. Simulation studies are conducted to validate the theoretical analysis and demonstrate the effectiveness of the developed learning algorithms.
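For orientation only, the multi-player H-infinity game described above can be sketched in its standard linear-quadratic form; the notation below ($A$, $B_i$, $D$, $Q_i$, $R_{ij}$, $\gamma$, $P_i$) is assumed for illustration and may differ from the thesis. Each player $i$ minimizes, against the worst-case disturbance $w$,

$$ J_i = \int_0^\infty \Big( x^\top Q_i x + \sum_j u_j^\top R_{ij} u_j - \gamma^2 w^\top w \Big)\, dt, \qquad \dot{x} = A x + \sum_j B_j u_j + D w. $$

With quadratic value functions $V_i(x) = x^\top P_i x$, the HJI equations yield policies of the form

$$ u_i^* = -R_{ii}^{-1} B_i^\top P_i x, \qquad w_i^* = \frac{1}{\gamma^2} D^\top P_i x, $$

and policy-iteration IRL evaluates each $V_i$ from measured trajectories through the integral Bellman relation

$$ V_i(x(t)) = \int_t^{t+T} \Big( x^\top Q_i x + \sum_j u_j^\top R_{ij} u_j - \gamma^2 w^\top w \Big)\, d\tau + V_i(x(t+T)), $$

which avoids explicit knowledge of the drift matrix $A$; the off-policy variant referenced in the abstract further removes the need for the input matrices.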
Keywords
Differential game, Worst-case disturbance, Reinforcement learning
Disciplines
Electrical and Computer Engineering | Engineering
License
This work is licensed under a Creative Commons Attribution-NonCommercial-Share Alike 4.0 International License.
Recommended Citation
An, Peiliang, "MULTI-PLAYER H∞ DIFFERENTIAL GAME USING ON-POLICY AND OFF-POLICY REINFORCEMENT LEARNING" (2021). Electrical Engineering Theses. 339.
https://mavmatrix.uta.edu/electricaleng_theses/339
Comments
Degree granted by The University of Texas at Arlington