Author

Zahra Anvari

Graduation Semester and Year

2021

Language

English

Document Type

Dissertation

Degree Name

Doctor of Philosophy in Computer Science

Department

Computer Science and Engineering

First Advisor

Vassilis Athitsos

Abstract

The problem of image reconstruction and restoration refers to recovering clean images from corrupted ones. Corruption or degradation can occur due to atmospheric conditions such as rain, fog, mist, snow, dust, and air pollution, or due to technical limitations of imaging devices such as motion blur, compression noise, and low resolution. Image reconstruction algorithms aim to reduce these artifacts and generate clear images. Scenes captured under bad weather conditions such as rain, fog, mist, and haze suffer from poor visibility and thus pose obstacles for computer vision applications, e.g., object detection, recognition, tracking, and segmentation.

In this dissertation, we focus on the single image dehazing problem: restoring the haze-free image from a hazy image. Most recent image dehazing methods rely on paired datasets, meaning that for each hazy image there is a single clean/haze-free image serving as ground truth. In practice, however, a range of clean images can correspond to a hazy image, due to factors such as contrast or light intensity changes throughout the day. In fact, it is infeasible to capture both the ground truth/clear image and the hazy image of the same scene simultaneously. Thus, there is an emerging need for solutions that do not rely on ground truth images and can operate with unpaired supervision. To address this, we first cast the unpaired image dehazing problem as an image-to-image translation problem and then propose a novel cycle-consistent generative adversarial network, called ECDN, that operates without paired supervision and benefits from (i) a global-local discriminator architecture to handle spatially varying haze, (ii) an encoder-decoder generator architecture with residual blocks to better preserve details, (iii) skip connections in the generator to improve performance and convergence, and (iv) a customized cyclic perceptual loss and a self-regularized color loss to generate more realistic images and mitigate color distortion. Through empirical analysis, we show that the proposed network can effectively remove haze and generate visually pleasing haze-free images.

In addition, most existing methods assume that haze has a uniform/homogeneous distribution and a single color, i.e., a grayish-white color similar to smoke, while in reality haze can be distributed non-uniformly with different patterns and colors. To quantify the challenges and assess the performance of these methods, we introduce a sunlight haze benchmark dataset, Sun-Haze, containing 107 hazy images with different types of haze created by sunlight of varying intensity and color. We evaluate a representative set of state-of-the-art image dehazing methods on this benchmark in terms of standard metrics such as PSNR, SSIM, CIEDE2000, PI, and NIQE. This uncovers the limitations of current methods and questions their underlying assumptions as well as their practicality.
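To make the unpaired, cycle-consistent objective described in the abstract concrete, the sketch below shows one way the generator loss for the hazy-to-clean direction could be composed in PyTorch: an adversarial term, an L1 cycle-consistency term, a cyclic perceptual term on deep features, and a self-regularized color term. This is a minimal illustration under assumed formulations, not the dissertation's implementation; all module names, loss weights, and the stand-in feature network are hypothetical placeholders.

    # Minimal PyTorch sketch of a cycle-consistent dehazing objective (assumed formulation,
    # not the author's code). Names such as G_dehaze, G_rehaze, D_clean are hypothetical.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def color_loss(img):
        # Self-regularized color term (one common formulation, assumed here):
        # penalize divergence between the mean intensities of the R, G, B channels.
        mean_rgb = img.mean(dim=(2, 3))                     # shape: (batch, 3)
        r, g, b = mean_rgb[:, 0], mean_rgb[:, 1], mean_rgb[:, 2]
        return ((r - g) ** 2 + (g - b) ** 2 + (b - r) ** 2).mean()

    def generator_objective(hazy, G_dehaze, G_rehaze, D_clean, feat_net,
                            w_cyc=10.0, w_perc=1.0, w_color=0.5):
        """One direction of the cycle: hazy -> fake clean -> reconstructed hazy."""
        fake_clean = G_dehaze(hazy)
        recon_hazy = G_rehaze(fake_clean)

        # Adversarial term: fool the clean-image discriminator (least-squares GAN form).
        adv = ((D_clean(fake_clean) - 1.0) ** 2).mean()

        # Cycle-consistency term: the reconstruction should match the input hazy image.
        cyc = F.l1_loss(recon_hazy, hazy)

        # Cyclic perceptual term: compare deep features of input and reconstruction.
        perc = F.l1_loss(feat_net(recon_hazy), feat_net(hazy))

        return adv + w_cyc * cyc + w_perc * perc + w_color * color_loss(fake_clean)

    if __name__ == "__main__":
        # Tiny stand-in networks, only to exercise the objective end to end.
        G_dehaze = nn.Conv2d(3, 3, 3, padding=1)
        G_rehaze = nn.Conv2d(3, 3, 3, padding=1)
        D_clean = nn.Conv2d(3, 1, 3, padding=1)             # patch-style discriminator output
        feat_net = nn.Conv2d(3, 8, 3, padding=1)            # stands in for a pretrained VGG
        hazy = torch.rand(2, 3, 64, 64)
        loss = generator_objective(hazy, G_dehaze, G_rehaze, D_clean, feat_net)
        loss.backward()
        print(float(loss))

In a full training loop, the discriminators would be trained with their own adversarial losses, and the mirror direction of the cycle (clean -> hazy -> clean) would contribute symmetric terms.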
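For the benchmark evaluation, the full-reference metrics named above (PSNR, SSIM, CIEDE2000) can be computed with scikit-image as sketched below. The file paths are placeholders; PI and NIQE are no-reference metrics that require dedicated implementations and are not computed here. Note that `channel_axis` requires scikit-image 0.19 or newer (older releases used `multichannel=True`).

    # Sketch of full-reference metric computation for one dehazed/reference pair
    # using scikit-image. Paths are placeholders.
    import numpy as np
    from skimage import io, color
    from skimage.metrics import peak_signal_noise_ratio, structural_similarity

    def evaluate_pair(dehazed_path, reference_path):
        dehazed = io.imread(dehazed_path).astype(np.float64) / 255.0
        reference = io.imread(reference_path).astype(np.float64) / 255.0

        psnr = peak_signal_noise_ratio(reference, dehazed, data_range=1.0)
        ssim = structural_similarity(reference, dehazed, channel_axis=-1, data_range=1.0)

        # CIEDE2000 is a perceptual color difference computed in CIELAB space;
        # report the mean over all pixels.
        ciede = color.deltaE_ciede2000(color.rgb2lab(reference),
                                       color.rgb2lab(dehazed)).mean()
        return {"PSNR": psnr, "SSIM": ssim, "CIEDE2000": ciede}

    if __name__ == "__main__":
        scores = evaluate_pair("results/dehazed_001.png", "sun_haze/gt_001.png")
        print(scores)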

Keywords

Image restoration and reconstruction, GAN, Deep learning, Single image dehazing

Disciplines

Computer Sciences | Physical Sciences and Mathematics

Comments

Degree granted by The University of Texas at Arlington
