Bosen Lian

Degree Name

Doctor of Philosophy in Electrical Engineering


Department

Electrical Engineering

First Advisor

Frank L. Lewis

Second Advisor

Michael A. Niestroy


Consensus-based distributed Kalman filters for estimation with multiple targets have attracted considerable attention. Most existing filters use the average-consensus approach, which tends to converge slowly, and they rarely consider the impact of limited sensing range and target mobility on the information-flow topology. The robustness properties, i.e., the gain and phase margins, of distributed Kalman filtering algorithms also remain open problems. In the interactions of controlled dynamical agents, it is often assumed that the agents are "rational" in the sense of acting so as to optimize prescribed performance reward functions. Optimal control and reinforcement learning compute optimal control inputs given a performance index; inverse optimal control and inverse reinforcement learning reconstruct the performance index from demonstrations. However, inverse optimal control requires knowledge of the system dynamics, whereas inverse reinforcement learning can be model-free.

This dissertation first presents new distributed estimation methods for multi-agent systems. A novel distributed Kalman consensus filter (DKCF) with an information-weighted, consensus-based structure is proposed for estimating random mobile targets in continuous time. A new moving-target information-flow topology for target measurement is developed based on the sensors' sensing ranges, the targets' random mobility, and local information-weighted neighbors. This work also studies the robustness margins (i.e., gain and phase margins) of the DKCF and shows that they improve on those of the single-agent Kalman filter. The dissertation then develops new inverse reinforcement learning (RL) algorithms for multi-agent systems. Both model-based and model-free inverse RL algorithms are proposed to solve two-player zero-sum games, in which a learner reconstructs the unknown cost function of an expert from the expert's demonstrated behaviors. These results are then extended to multiplayer non-zero-sum games, where both the expert and the learner have N noncooperative control inputs.
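To give a concrete flavor of the consensus-based filtering idea described above, the following is a minimal discrete-time sketch of a generic Kalman consensus filter, not the dissertation's DKCF: each sensor runs a local Kalman update on its own measurement and adds a consensus term that pulls its estimate toward its neighbors' predictions. The dynamics, noise levels, ring topology, and consensus gain `gamma` are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup (all values assumed, not from the dissertation)
n, N = 2, 4                                   # state dim, number of sensors
A = np.array([[1.0, 0.1], [0.0, 1.0]])        # target dynamics
H = np.eye(n)                                 # each sensor sees the full state
Q, R = 0.01 * np.eye(n), 0.1 * np.eye(n)      # process / measurement noise
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}  # ring topology
gamma = 0.1                                   # consensus gain (assumed)

x = np.array([1.0, 0.5])                      # true target state
xh = [np.zeros(n) for _ in range(N)]          # local estimates
P = [np.eye(n) for _ in range(N)]             # local covariances

for _ in range(50):
    x = A @ x + rng.multivariate_normal(np.zeros(n), Q)
    z = [H @ x + rng.multivariate_normal(np.zeros(n), R) for _ in range(N)]
    new_xh = []
    for i in range(N):
        # local prediction
        xp = A @ xh[i]
        Pp = A @ P[i] @ A.T + Q
        # local Kalman measurement update
        K = Pp @ H.T @ np.linalg.inv(H @ Pp @ H.T + R)
        xu = xp + K @ (z[i] - H @ xp)
        # consensus term: disagreement with neighbors' predictions
        c = sum(A @ xh[j] - xp for j in neighbors[i])
        new_xh.append(xu + gamma * c)
        P[i] = (np.eye(n) - K @ H) @ Pp
    xh = new_xh

err = np.linalg.norm(xh[0] - x)
spread = max(np.linalg.norm(xh[i] - xh[j]) for i in range(N) for j in range(N))
print("estimation error:", err)
print("estimate spread :", spread)
```

The consensus term is what distinguishes this from N independent Kalman filters: it keeps the local estimates close to one another even though each sensor only sees its own noisy measurement.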
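The inverse-RL idea of recovering an expert's cost function from demonstrated behavior can be sketched on the simplest possible case, a scalar LQR problem. This is an illustrative model-based toy, not the dissertation's algorithm: the learner observes expert (state, action) pairs, fits the expert's feedback gain, and then searches for the state weight q whose optimal gain reproduces that behavior. The dynamics (a, b), control weight r, and the grid search are all assumptions for the sketch.

```python
import numpy as np

# Expert controls x[t+1] = a*x[t] + b*u[t], minimizing sum q_e*x^2 + r*u^2.
a, b, r = 0.9, 1.0, 1.0          # dynamics and control weight (assumed known)
q_expert = 2.0                   # the cost weight the learner must recover

def lqr_gain(q, iters=500):
    """Iterate the scalar discrete-time Riccati equation; return gain k,
    so that the optimal policy is u = -k*x."""
    p = q
    for _ in range(iters):
        p = q + a * a * p - (a * b * p) ** 2 / (r + b * b * p)
    return a * b * p / (r + b * b * p)

k_expert = lqr_gain(q_expert)

# Expert demonstrations: noise-free (state, action) pairs u = -k_expert*x
xs = np.linspace(-2.0, 2.0, 21)
us = -k_expert * xs

# Learner step 1: fit the demonstrated feedback gain by least squares
k_hat = -np.sum(us * xs) / np.sum(xs * xs)

# Learner step 2: invert the gain over a grid of candidate cost weights
q_grid = np.linspace(0.1, 5.0, 491)
q_hat = q_grid[np.argmin([(lqr_gain(q) - k_hat) ** 2 for q in q_grid])]
print("recovered cost weight:", q_hat)
```

With noise-free demonstrations the fitted gain matches the expert's exactly, so the grid search lands on the expert's weight; the dissertation's algorithms tackle the much harder settings of zero-sum and N-player non-zero-sum games, with model-free variants that avoid the known-dynamics assumption used here.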


Keywords

Distributed estimation, Inverse reinforcement learning


Disciplines

Electrical and Computer Engineering | Engineering


Degree granted by The University of Texas at Arlington