Author

Saumya Bhatt

Graduation Semester and Year

2021

Language

English

Document Type

Thesis

Degree Name

Master of Science in Computer Science

Department

Computer Science and Engineering

First Advisor

Manfred Huber

Second Advisor

Gun Deok Park

Abstract

The Vision and Language Navigation task grew out of the idea that we can build a robot or an autonomous system that can be instructed in human language and that will navigate using the instructions given. For example, we tell the agent to “Go down past some room dividers toward a glass top desk and turn into the dining area. Wait next to the large glass dining table,” and not only does it reach the goal state, but it also follows the instructions while navigating. With current developments this no longer seems like a distant problem, and in recent years a number of systems have been developed that attempt to address it. To accomplish this task, an artificial agent must understand the two modalities with which humans perceive the world, vision and language, and then translate them into actions. While significant progress has been made, these systems still fail in a significant number of cases. To investigate the reasons and potential ways to overcome them, this thesis explores several ways in which a navigation task with multiple modalities can be grounded and aligned temporally and visually. It analyzes the failures of the previously used Environment Drop method with back translation and investigates what happens when pre-trained embeddings, as well as auxiliary tasks, are used with it. In particular, it proposes an augmentation to the architecture for the Vision and Language Navigation task that uses pretrained language tokens and a navigator with reasoning to oversee progress and to co-ground vision and language rather than relying only on a temporal attention mechanism. The underlying base architecture on which the modifications were implemented is a highly successful method built on Environment Drop with back translation. While the modified architecture and proposed improvements did not significantly increase the success rate of the chosen base architecture, the analysis of the results provided valuable insights that help determine the direction of potential further research.
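
To make the co-grounding idea mentioned above concrete, the sketch below shows a minimal, hypothetical attention module in which the same agent state attends over both pretrained language token embeddings and panoramic view features, and the two grounded contexts are then fused before action prediction. The module, dimensions, and tensor names are illustrative assumptions, not the thesis's actual implementation.

```python
# Hypothetical co-grounding sketch for a VLN agent: the same decoder state
# attends over language tokens and over panoramic views, and the two
# grounded contexts are fused. Names and sizes are illustrative assumptions.
import torch
import torch.nn as nn


class CoGroundingAttention(nn.Module):
    """Ground the agent state in both the instruction and the visual scene."""

    def __init__(self, hidden_dim=512, lang_dim=768, vis_dim=2048):
        super().__init__()
        self.lang_proj = nn.Linear(lang_dim, hidden_dim)   # project token embeddings
        self.vis_proj = nn.Linear(vis_dim, hidden_dim)     # project view features
        self.fuse = nn.Linear(hidden_dim * 2, hidden_dim)  # fuse the two contexts

    def attend(self, state, keys):
        # Scaled dot-product attention of the agent state over a set of keys.
        scores = torch.bmm(keys, state.unsqueeze(2)).squeeze(2)      # (B, N)
        weights = torch.softmax(scores / keys.size(-1) ** 0.5, dim=1)
        return torch.bmm(weights.unsqueeze(1), keys).squeeze(1)      # (B, H)

    def forward(self, state, lang_tokens, view_feats):
        lang_keys = self.lang_proj(lang_tokens)    # (B, L, H)
        vis_keys = self.vis_proj(view_feats)       # (B, V, H)
        lang_ctx = self.attend(state, lang_keys)   # grounded instruction context
        vis_ctx = self.attend(state, vis_keys)     # grounded visual context
        return torch.tanh(self.fuse(torch.cat([lang_ctx, vis_ctx], dim=1)))


# Usage with random tensors standing in for real features.
model = CoGroundingAttention()
state = torch.randn(2, 512)         # decoder hidden state (e.g., from an LSTM)
tokens = torch.randn(2, 40, 768)    # pretrained language token embeddings
views = torch.randn(2, 36, 2048)    # panoramic image features (36 views)
context = model(state, tokens, views)   # (2, 512), fed to the action predictor
print(context.shape)
```

In contrast to a purely temporal attention mechanism over the instruction, a co-grounding step of this kind lets the navigator weigh where it is in the instruction and what it currently sees with the same state at every step.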

Keywords

Vision and Language Navigation, Multimodal, Natural language processing, Machine translation, Matterport3D, Room2Room, Autonomous systems, LSTM, Vision and language grounding, Robot, Navigation

Disciplines

Computer Sciences | Physical Sciences and Mathematics

Comments

Degree granted by The University of Texas at Arlington
