Author

Rasool Fakoor

Graduation Semester and Year

2012

Language

English

Document Type

Thesis

Degree Name

Master of Science in Computer Science

Department

Computer Science and Engineering

First Advisor

Manfred Huber

Abstract

Markov Decision Processes (MDPs) and Partially Observable Markov Decision Processes (POMDPs) are powerful and general frameworks for modeling decision and decision-learning tasks in a wide range of problem domains. As a result, they are widely used in complex, real-world settings such as robot control tasks. However, the modeling power and generality of these frameworks come at a cost: the complexity of the underlying models and corresponding algorithms grows dramatically as the complexity of the task domain increases. To address this, this work presents an integrated and adaptive approach that attempts to reduce the complexity of the decision-learning problem in Partially Observable Markov Decision Processes by separating the overall model into a decision process and a perceptual process. The goal is to focus decision learning on the aspects of the space that are important for decision making, while the observations and attributes that are important for estimating the state of the decision process are handled separately by the perceptual process. This separation into distinct processes can significantly reduce the complexity of decision learning. In the proposed framework and algorithm, a Monte Carlo based sampling method is used for both the perceptual and the decision process in order to deal efficiently with continuous domains. To illustrate the potential of the approach, we show analytically and experimentally how much the complexity of solving a POMDP can be reduced, increasing the range of decision-learning tasks that can be addressed.
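The abstract mentions a Monte Carlo based sampling method for the perceptual process. A standard instance of this idea is a particle filter that approximates the POMDP belief state with sampled states. The sketch below is illustrative only and not taken from the thesis; the function and parameter names (`transition_sample`, `observation_prob`) are assumptions standing in for whatever models the actual framework uses.

```python
import random

def particle_filter_update(particles, action, observation,
                           transition_sample, observation_prob):
    """One Monte Carlo belief update for a POMDP (illustrative sketch).

    particles: list of sampled states approximating the current belief.
    transition_sample(s, a) -> s': draws a successor state (assumed model).
    observation_prob(o, s') -> float: likelihood of o in s' (assumed model).
    """
    # Propagate each particle through the stochastic transition model.
    propagated = [transition_sample(s, action) for s in particles]
    # Weight each propagated particle by the observation likelihood.
    weights = [observation_prob(observation, s) for s in propagated]
    if sum(weights) == 0:
        # Degenerate case: no particle explains the observation;
        # keep the unweighted propagated set rather than dividing by zero.
        return propagated
    # Resample proportionally to the weights to form the new belief.
    return random.choices(propagated, weights=weights, k=len(particles))
```

Handling perception this way lets the decision process operate on a compact sampled belief rather than the full continuous observation space, which is the kind of complexity reduction the abstract describes.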

Disciplines

Computer Sciences | Physical Sciences and Mathematics

Comments

Degree granted by The University of Texas at Arlington
