Graduation Semester and Year

Spring 2025

Language

English

Document Type

Thesis

Degree Name

Master of Science in Computer Engineering

Department

Computer Science and Engineering

First Advisor

Diego Patino

Second Advisor

Marnim Galib

Third Advisor

Alex Dillhoff

Abstract

Lane detection systems are a core safety component of autonomous driving technology. The National Highway Traffic Safety Administration attributes roughly one-third of all road accidents to unintentional lane departures, a statistic that underscores the need for robust lane detection methods. This thesis explores the intersection of neuromorphic engineering and computer vision, examining how the distinctive properties of event-based cameras can be harnessed to improve lane detection under challenging environmental conditions. Unlike conventional frame-based imaging sensors that capture entire scenes at fixed intervals, event-based cameras register only pixel-level brightness changes, asynchronously and with microsecond precision. This sensing paradigm offers a high dynamic range exceeding 140 dB, negligible motion blur, and exceptional temporal resolution, properties that are valuable for safety-critical applications in dynamic environments. Our research framework applies Histogram of Oriented Gradients (HOG) feature extraction to event data streams and investigates how these structured representations can be leveraged by contemporary deep learning architectures, including transformer-based models and attention mechanisms. Using the Microsoft AirSim simulation environment and the TuSimple dataset, we compare the performance of several architectural approaches when processing event-based inputs. The central hypothesis is that integrating event-based vision with tailored feature extraction can overcome the limitations of traditional frame-based methods, particularly in scenarios involving poor visibility, dynamic lighting, or rapid motion. By developing and evaluating a complete pipeline for event-based lane detection, this thesis aims to contribute to both the theoretical understanding and practical implementation of more resilient autonomous driving systems.
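The abstract describes computing HOG features over event data before passing them to deep learning models. As a rough illustration of that first step only (not the pipeline developed in the thesis), the sketch below accumulates a batch of events into a 2D frame and runs a standard HOG descriptor over it; the event tuple layout (x, y, timestamp, polarity), the frame resolution, and the use of scikit-image's hog function are assumptions made for this example.

    # Hypothetical sketch, not the thesis implementation: accumulate an event
    # stream into a 2D frame, then extract a HOG descriptor from that frame.
    import numpy as np
    from skimage.feature import hog

    def events_to_frame(events, height=128, width=256):
        """Accumulate signed event polarities into a single 2D frame."""
        frame = np.zeros((height, width), dtype=np.float32)
        for x, y, _t, p in events:          # p is +1 (brightness up) or -1 (down)
            frame[y, x] += 1.0 if p > 0 else -1.0
        lo, hi = frame.min(), frame.max()
        if hi > lo:                          # normalize to [0, 1] for stable gradients
            frame = (frame - lo) / (hi - lo)
        return frame

    def hog_from_events(events):
        """Compute a HOG feature vector from an accumulated event frame."""
        frame = events_to_frame(events)
        return hog(frame,
                   orientations=9,
                   pixels_per_cell=(8, 8),
                   cells_per_block=(2, 2),
                   feature_vector=True)

    # Demo with synthetic events (random pixels, sorted timestamps, random polarity)
    rng = np.random.default_rng(0)
    n = 10_000
    events = list(zip(rng.integers(0, 256, n),      # x
                      rng.integers(0, 128, n),      # y
                      np.sort(rng.random(n)),       # timestamp
                      rng.choice([-1, 1], n)))      # polarity
    print(hog_from_events(events).shape)            # 1D HOG descriptor

In practice the resulting descriptor would be one of several possible structured inputs to a downstream network; the choice of accumulation window and HOG cell size would need to be tuned to the event rate of the sensor.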

Keywords

Lane detection, Event-based cameras, Histogram of Oriented Gradients (HOG), Deep learning architectures, Vision Transformers, Autonomous driving

Disciplines

Computer and Systems Architecture | Other Computer Engineering

License

This work is licensed under a Creative Commons Attribution 4.0 International License.
