Document Type
Honors Thesis
Abstract
Advancements in artificial intelligence (AI) show promise for widespread biomedical applications. As these models grow more complex, understanding how they work becomes increasingly difficult. To use these systems in healthcare settings, it is imperative to reduce model ambiguity and increase user trust in their decision-making. Explainable AI (XAI) techniques were used to optimize the development of a super-resolution convolutional neural network (SRCNN). Image augmentation was performed on the training data, and k-fold cross-validation was used to obtain more reliable metrics. Activation maps were used to show the output of each convolutional layer, and the final neural network (NN) weights were also visualized. These techniques showed that the model focuses primarily on the circular lenslet patterns of the input light field microscopy (LFM) images, with attention concentrated at the center of each image. The final trained model outperformed bicubic interpolation, improving peak signal-to-noise ratio (PSNR) by 27% and structural similarity index (SSIM) by 7%.
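For context on the methods named in the abstract, the sketch below shows a generic three-layer SRCNN, the standard Keras pattern for extracting per-layer activation maps, and the PSNR metric. It is a minimal illustration assuming a Keras/TensorFlow workflow; the layer configuration (64/32 filters, 9-5-5 kernels), the helper names (build_srcnn, activation_maps, psnr), and the toy data are assumptions for illustration and do not reproduce the thesis's actual network or results.

```python
# Minimal sketch only: a generic three-layer SRCNN, activation-map extraction,
# and the PSNR metric. Layer widths/kernels and all names here are illustrative
# assumptions, not the network trained in this thesis.
import numpy as np
import tensorflow as tf


def build_srcnn(channels: int = 1) -> tf.keras.Model:
    """Patch extraction -> non-linear mapping -> reconstruction (classic SRCNN layout)."""
    return tf.keras.Sequential([
        tf.keras.Input(shape=(None, None, channels)),
        tf.keras.layers.Conv2D(64, 9, padding="same", activation="relu"),
        tf.keras.layers.Conv2D(32, 5, padding="same", activation="relu"),
        tf.keras.layers.Conv2D(channels, 5, padding="same"),
    ])


def activation_maps(model: tf.keras.Model, image: np.ndarray) -> list:
    """Return each convolutional layer's output for one image (an XAI visualization aid)."""
    probe = tf.keras.Model(
        inputs=model.inputs,
        outputs=[layer.output for layer in model.layers
                 if isinstance(layer, tf.keras.layers.Conv2D)],
    )
    return probe(image[np.newaxis, ...])


def psnr(reference: np.ndarray, estimate: np.ndarray, max_val: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB, one of the two metrics reported above."""
    mse = float(np.mean((reference - estimate) ** 2))
    return float("inf") if mse == 0.0 else 10.0 * np.log10(max_val ** 2 / mse)


if __name__ == "__main__":
    model = build_srcnn()
    model.compile(optimizer="adam", loss="mse")
    # Toy stand-ins: in practice the inputs would be bicubically upsampled
    # low-resolution LFM patches paired with their high-resolution originals.
    low_res = np.random.rand(8, 32, 32, 1).astype("float32")
    high_res = np.random.rand(8, 32, 32, 1).astype("float32")
    model.fit(low_res, high_res, epochs=1, verbose=0)
    prediction = model.predict(low_res, verbose=0)
    print("PSNR vs. target:", psnr(high_res, prediction))
    print("Per-layer activation shapes:",
          [tuple(a.shape) for a in activation_maps(model, low_res[0])])
```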
Publication Date
5-1-2022
Language
English
Faculty Mentor of Honors Project
Juhyun Lee
License
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
Recommended Citation
Laudermilk, Nicholas, "An Explainable Artificial Intelligence Approach to Convolutional Neural Network Optimization and Understanding" (2022). 2022 Spring Honors Capstone Projects. 9.
https://mavmatrix.uta.edu/honors_spring2022/9