Graduation Semester and Year

Fall 2024

Language

English

Document Type

Thesis

Degree Name

Master of Science in Computer Science

Department

Computer Science and Engineering

First Advisor

Manfred Huber

Abstract

What visual attributes do cats have in common, and what features set them apart from dogs? How are we able to tell the difference between the two? While we do not fully understand the mechanism humans use for object recognition, one popular theory suggests that it boils down to identifying distinct visual features specific to each object. For example, all cats have vertical slit-shaped pupils when their eyes are constricted, which is something we do not see in dogs. These slit-shaped pupils are a feature ‘prototypical’ to cats. Object classification is a computer vision task that involves identifying and categorizing an object in an image or a video into one or more predefined categories. Fine-grained classification involves distinguishing between visually or conceptually similar categories, for example, species or subspecies classification. Convolutional Neural Networks (CNNs) are exceptionally good at this task, yet their “black-box” nature limits interpretability, making them unsuitable for high-stakes applications where understanding the decision process is critical. This limitation has led to the development of models that incorporate a more transparent and interpretable decision-making process.

This thesis focuses on developing an interpretable neural network architecture that learns and stores prototypical features of each category and makes a classification decision based on how strongly these features are present in a test image. In addition, such models provide insight into the decision-making process by using these standout features to generate explanations for each test image. These features are known as ‘part-prototypes,’ and the models that utilize them for decision-making are called prototype-based models. The primary goal of this thesis is to construct a prototype-based architecture with enhanced interpretability by modifying an existing framework. A major issue with existing prototype-based models is the lack of prototype reliability, measured using prototype consistency and stability. Prototypes with low consistency fail to activate similar features across images, and those with low stability fail to activate the same features in noisy vs. noise-free versions of images. The modification proposed here replaces the standard feed-forward CNN bottleneck for prototype learning with a Variational Autoencoder (VAE) bottleneck. The VAE’s ability to encode representations that are resilient to noise and variation helps the model identify prototypical features that are consistent across different examples. By doing so, the network learns more stable and consistent prototypes.

The results show that incorporating a variational autoencoder in the prototype-learning bottleneck significantly improves the prototypes’ stability and consistency. This enhances the quality of the explanations provided by the network, as the prototypes can be relied upon to activate visually similar regions consistently. Additionally, the findings reveal a positive correlation between the prototypes’ stability and the model’s overall accuracy. The results also show that consistency is key to improving model interpretability by generating more robust and trustworthy explanations.
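To make the architecture described above more concrete, the following is a minimal PyTorch sketch of a ProtoPNet-style prototype layer whose backbone features pass through a variational bottleneck before being compared against learnable part-prototypes. This is an illustrative assumption of how such a head could be wired, not the thesis implementation; all names, shapes, and hyperparameters (VAEPrototypeHead, latent_dim, num_prototypes, and so on) are hypothetical.

import torch
import torch.nn as nn
import torch.nn.functional as F

class VAEPrototypeHead(nn.Module):
    """Sketch of a prototype layer with a variational bottleneck (illustrative only)."""

    def __init__(self, in_channels=512, latent_dim=128, num_prototypes=200, num_classes=20):
        super().__init__()
        # Variational bottleneck: 1x1 convolutions produce a mean and log-variance
        # for every spatial position of the backbone feature map.
        self.mu = nn.Conv2d(in_channels, latent_dim, kernel_size=1)
        self.logvar = nn.Conv2d(in_channels, latent_dim, kernel_size=1)
        # Learnable part-prototypes, each a 1x1 vector in the latent space.
        self.prototypes = nn.Parameter(torch.rand(num_prototypes, latent_dim, 1, 1))
        # Linear layer turns prototype activations into class logits.
        self.classifier = nn.Linear(num_prototypes, num_classes, bias=False)

    def forward(self, features):
        mu, logvar = self.mu(features), self.logvar(features)
        # Reparameterization trick: sample the latent map during training, use the mean at test time.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar) if self.training else mu
        # Squared L2 distance between every prototype and every spatial patch of z.
        dists = (
            F.conv2d(z ** 2, torch.ones_like(self.prototypes))
            - 2 * F.conv2d(z, self.prototypes)
            + (self.prototypes ** 2).sum(dim=(1, 2, 3)).view(1, -1, 1, 1)
        )
        # Each prototype's activation is its closest match anywhere in the image.
        min_dists = -F.max_pool2d(-dists, kernel_size=dists.shape[-2:]).flatten(1)
        similarities = torch.log((min_dists + 1) / (min_dists + 1e-4))  # ProtoPNet-style similarity
        kl = -0.5 * torch.mean(1 + logvar - mu ** 2 - logvar.exp())     # VAE regularizer
        return self.classifier(similarities), kl

In a design of this kind, the KL term would be added to the classification loss, regularizing the latent space so that patches depicting the same part map to nearby codes; this is the intuition behind the claimed gains in prototype consistency and stability.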

Keywords

Artificial Intelligence, Machine Learning, Neural Networks, Interpretability, Explainability

Disciplines

Computer Engineering

License

Creative Commons Attribution 4.0 International License
This work is licensed under a Creative Commons Attribution 4.0 International License.
