Graduation Semester and Year

Fall 2025

Language

English

Document Type

Dissertation

Degree Name

Doctor of Philosophy in Computer Science

Department

Computer Science and Engineering

First Advisor

Habeeb Olufowobi

Abstract

Perception systems are fundamental to intelligent machines, enabling them to sense, understand, and interpret complex environments. However, as perception increasingly underpins critical applications such as autonomous vehicles, IoT healthcare devices, and smart trading platforms, challenges related to security, scalability, and environmental understanding have become more pressing. This work addresses three core research questions: (1) How can we identify, analyze, and mitigate adversarial vulnerabilities in perception systems to ensure reliable operation under adversarial conditions? (2a) How can autonomous vehicular perception system (AVPS) models be efficiently scaled and fine-tuned across decentralized, resource-constrained environments while preserving privacy and performance? (2b) How can generative models be scaled in decentralized settings without sacrificing performance? (3) How can we develop richer, more structured representations of environments that enhance the perception capabilities of AVPS by combining the strengths of different sensor modalities?

To identify and analyze adversarial threats, we present a comprehensive systematization of knowledge (SoK) of vulnerabilities and defenses in autonomous vehicular perception systems (AVPS), alongside a targeted adversarial analysis of luminescent markers in AVPS settings. For efficient scaling, we propose a federated learning framework for fine-tuning vision-language models (VLMs) with a personalized low-rank adaptation (pLoRA) strategy, enabling decentralized, efficient, and personalized model updates. For generative models, we develop a framework for training mixture-of-experts (MoE) architectures at scale with federated learning. Finally, to advance environmental understanding, we explore multimodal integration of wireless perception and camera images. Together, these approaches lay the groundwork for richer, more structured representations that extend the perception capabilities of autonomous agents beyond conventional modalities.
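As a rough illustration of the federated low-rank adaptation idea, the sketch below averages only the shared LoRA factors across clients; in a personalized scheme such as pLoRA, each client would additionally keep a private adapter on-device. This is a minimal toy sketch under those assumptions, not the dissertation's actual implementation; all function and variable names here are hypothetical.

```python
import numpy as np

def lora_delta(A, B):
    # LoRA update: a frozen base weight W is adapted as W + B @ A,
    # where A (r x d_in) and B (d_out x r) are low-rank factors.
    return B @ A

def fedavg_lora(client_adapters, weights=None):
    """Federated averaging of the *shared* LoRA factors only.

    client_adapters: list of (A, B) pairs, one per client.
    Personalized adapters (the "p" in pLoRA) would stay on-device
    and are deliberately not aggregated here.
    """
    n = len(client_adapters)
    w = weights if weights is not None else [1.0 / n] * n
    A_avg = sum(wi * A for wi, (A, _) in zip(w, client_adapters))
    B_avg = sum(wi * B for wi, (_, B) in zip(w, client_adapters))
    return A_avg, B_avg

# Toy round: 3 clients holding rank-4 adapters for a 16x32 weight matrix
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(4, 32)), rng.normal(size=(16, 4)))
           for _ in range(3)]
A_g, B_g = fedavg_lora(clients)
print(lora_delta(A_g, B_g).shape)  # (16, 32)
```

Communicating only the low-rank factors rather than full model weights is what makes this style of federated fine-tuning tractable on resource-constrained clients.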

Through these contributions, this research moves toward building perception systems that are adversarially robust, scalable for large deployments, and capable of deeper environmental interpretation, laying the foundation for more secure and intelligent autonomous systems.

Keywords

Computer vision, Vision language model, Federated learning, Security, Adversarial analysis

Disciplines

Artificial Intelligence and Robotics | Cybersecurity | Other Computer Sciences

License

Creative Commons Attribution 4.0 International License
