Author

Onur Daskiran

Graduation Semester and Year

2016

Language

English

Document Type

Dissertation

Degree Name

Doctor of Philosophy in Aerospace Engineering

Department

Mechanical and Aerospace Engineering

First Advisor

Atilla Dogan

Second Advisor

Brian Huff

Abstract

Designing control systems for airships poses unique challenges compared to conventional aircraft. Highly nonlinear dynamics, different mass/inertia relations, large uncertainties in model parameters, and underactuation are the main reasons. Airship dynamics is strongly influenced by variations in environmental (e.g., room temperature) and internal (e.g., helium distribution in the envelope) factors that can completely change the response characteristics of the blimp and render a model-based controller infeasible. On the other hand, a skilled RC pilot can easily fly the vehicle manually under these conditions. This makes learning from demonstration (LfD) and reinforcement learning (RL) techniques suitable candidates to address the issues that model-based control design cannot. In general, LfD covers methods that learn a control policy directly from previously recorded expert demonstrations. In reinforcement learning, the goal is to reach an optimal policy through trial and error, with a reward function continuously indicating whether the action taken in a given state leads to a good or bad outcome. This dissertation develops a three-stage LfD/RL method that uses continuous multi-dimensional states and actions. The stages and subroutines of the method are first explained in detail and then applied to three simple example cases to illustrate the performance and convergence characteristics of exploration in discrete and continuous state-action spaces. The method is then used to learn and execute 1D and 2D waypoint navigation tasks with an unmanned ground vehicle (UGV), in both simulation and hardware. To apply the method to the motion of a low-speed airship, a realistic airship flight simulator is developed from measurements and tests, and pilot demonstrations are recorded with this simulator. Finally, the method is used to learn and execute commanded position and orientation tasks demonstrated by the pilot, similar undemonstrated tasks, and a case in which these tasks are combined into a full mission. It is shown that correct selection of function-approximator parameters is crucial for obtaining a satisfactory response with the LfD/RL method.
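To make the reinforcement-learning idea in the abstract concrete, the sketch below shows Q-learning with a linear function approximator on a toy 1D waypoint task (a point mass steered toward zero position error). This is a minimal, generic illustration and not the dissertation's three-stage algorithm; the dynamics, the RBF feature set, and all parameter values (CENTERS, WIDTH, alpha, gamma, eps) are illustrative assumptions.

```python
import numpy as np

# Minimal, generic sketch of Q-learning with linear function approximation
# on a toy 1D waypoint task: state s = position error, action = thrust.
# All names and parameter values here are illustrative assumptions, not
# taken from the dissertation.

CENTERS = np.linspace(-5.0, 5.0, 11)   # RBF centers covering the state range
WIDTH = 1.0                            # RBF width: a key approximator parameter
ACTIONS = [-1.0, 0.0, 1.0]             # discrete thrust commands

def features(s):
    """Radial-basis-function feature vector for a scalar state."""
    return np.exp(-((s - CENTERS) ** 2) / (2.0 * WIDTH ** 2))

w = np.zeros((len(ACTIONS), CENTERS.size))   # one weight vector per action

def q(s, a):
    return w[a] @ features(s)

alpha, gamma, eps = 0.1, 0.95, 0.1
rng = np.random.default_rng(0)

for episode in range(500):
    s = rng.uniform(-5.0, 5.0)                   # random initial position error
    for step in range(50):
        # Epsilon-greedy exploration: trial and error guided by the reward.
        if rng.random() < eps:
            a = rng.integers(len(ACTIONS))
        else:
            a = int(np.argmax([q(s, i) for i in range(len(ACTIONS))]))
        s_next = np.clip(s + 0.5 * ACTIONS[a], -6.0, 6.0)  # toy point-mass dynamics
        r = -abs(s_next)                                   # closer to the waypoint is better
        # Semi-gradient Q-learning update of the linear weights.
        target = r + gamma * max(q(s_next, i) for i in range(len(ACTIONS)))
        w[a] += alpha * (target - q(s, a)) * features(s)
        s = s_next

# After training, the greedy policy should push the vehicle toward s = 0.
print("greedy action at s = +2:", ACTIONS[int(np.argmax([q(2.0, i) for i in range(3)]))])
```

Even in this toy setting, the number of RBF centers and the width WIDTH determine how well the learned value function generalizes between visited states, which echoes the abstract's conclusion that function-approximator parameters must be chosen carefully to obtain a satisfactory response.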

Keywords

Airship, Reinforcement learning

Disciplines

Aerospace Engineering | Engineering | Mechanical Engineering

Comments

Degree granted by The University of Texas at Arlington
