ORCID Identifier(s)

0000-0003-4578-6767

Graduation Semester and Year

Summer 2025

Language

English

Document Type

Dissertation

Degree Name

Doctor of Philosophy in Computer Science

Department

Computer Science and Engineering

First Advisor

Dr. William J. Beksi

Second Advisor

Dr. Animesh Chakravarthy

Third Advisor

Dr. Farhad A. Kamangar

Fourth Advisor

Dr. Diego Patino

Abstract

Event cameras offer a fundamentally different sensing paradigm by asynchronously capturing brightness changes at high temporal resolution, directly encoding motion in the scene. However, their sparse and non-traditional data format poses significant challenges for dense motion estimation, particularly in the context of optical flow. Contrast Maximization (CM) has emerged as a powerful model-based framework for estimating optical flow from event data by optimizing the sharpness of motion-compensated event representations. This dissertation builds upon and significantly advances the CM framework through two complementary contributions.

First, we propose Edge-Informed Contrast Maximization (EINCM), a hybrid approach that augments the traditional events-only CM framework with edge information extracted from frames. EINCM employs a correlation-based objective that aligns warped event structures with edges at a reference time, enforcing spatial consistency and improving the robustness of the optimization. We further incorporate multi-scale and multi-reference formulations to enable faster and more accurate convergence and improved optical flow estimation across varying motion dynamics.

Second, we present Orientation-Prior Contrast Maximization (OPCM), a biologically inspired extension that incorporates inertial measurements to guide the CM process. By deriving orientation maps from 3D camera velocities, OPCM injects inertia-informed priors into the CM optimization, effectively constraining the search space and further accelerating convergence. This orientation-guided approach improves stability and accuracy, particularly in scenes with low texture or ambiguous motion.

Together, these contributions establish a robust and extensible framework for event-based optical flow estimation that leverages multi-modal signals to overcome the inherent limitations of sparse event data. Our methods achieve state-of-the-art results on multiple benchmark datasets, and this work opens new avenues for integrating complementary sensing modalities in neuromorphic vision systems.

Keywords

Event-Based Vision, Optical Flow, Contrast Maximization, Multi-Modal, Hybrid Sensing, Model-Based Optimization, Edges, Inertia, Egomotion, Camera Velocity

Disciplines

Other Computer Engineering | Robotics

License

Creative Commons Attribution 4.0 International License

Available for download on Saturday, August 22, 2026

Share

COinS