Document Type

Honors Thesis

Abstract

Advancements in artificial intelligence (AI) show promise for the technology's widespread use in biomedical applications. As these models grow more complex, however, understanding how they work becomes increasingly difficult. To deploy such systems in healthcare settings, it is imperative to reduce model ambiguity and increase user trust in their decision-making. In this work, explainable AI (XAI) techniques were used to guide the development of a super-resolution convolutional neural network (SRCNN). Image augmentation was performed on the training data, and k-fold cross-validation was used to obtain more reliable performance metrics. Activation maps were generated to show the output of each convolutional layer, and the final network weights were visualized. These techniques revealed that the model attends primarily to the circular lenslet patterns of input LFM images, with the greatest emphasis near the image center. The final trained model outperformed bicubic interpolation by 27% in peak signal-to-noise ratio (PSNR) and 7% in structural similarity index measure (SSIM).
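The PSNR and SSIM figures quoted above follow the standard definitions of these image-quality metrics. As a minimal illustrative sketch (not the thesis's actual evaluation code, which is not reproduced here), the two metrics can be computed in NumPy as follows; note that this `global_ssim` uses a single global window, whereas full SSIM implementations typically average over local sliding windows:

```python
import numpy as np

def psnr(reference, estimate, max_val=1.0):
    """Peak signal-to-noise ratio (dB) between two images of equal shape."""
    mse = np.mean((reference - estimate) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10 * np.log10(max_val ** 2 / mse)

def global_ssim(x, y, max_val=1.0):
    """Simplified single-window structural similarity index.

    Uses the conventional stabilizing constants C1 = (0.01*L)^2 and
    C2 = (0.03*L)^2, where L is the dynamic range of the pixel values.
    """
    c1 = (0.01 * max_val) ** 2
    c2 = (0.03 * max_val) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2)
    )
```

In an evaluation like the one described, both metrics would be computed for the SRCNN output and for a bicubic upsample of the same low-resolution input, each against the ground-truth high-resolution image, and the percentage improvement taken from those scores.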

Publication Date

5-1-2022

Language

English

Faculty Mentor of Honors Project

Juhyun Lee
