ORCID Identifier(s)

0000-0001-9409-4738

Graduation Semester and Year

2019

Language

English

Document Type

Dissertation

Degree Name

Doctor of Philosophy in Electrical Engineering

Department

Electrical Engineering

First Advisor

Michael T. Manry

Abstract

Training methods for both shallow and deep neural networks are dominated by first-order algorithms related to backpropagation and conjugate gradient. However, these methods lack affine invariance, so their performance is degraded by nonzero input means, dependent inputs, dependent hidden units, and the use of only one learning factor. This dissertation reviews affine invariance and shows how MLP training can be made partially affine invariant when Newton's method is used to train small numbers of MLP parameters. Several novel methods are proposed for scalable, partially affine invariant MLP training, and the potential application of the algorithm to deep learning is discussed. Ten-fold testing errors on several datasets show that the proposed algorithm outperforms backpropagation and conjugate gradient, and that it scales far better than Levenberg-Marquardt.
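
The central idea above, applying Newton's method to a small block of MLP parameters, reduces to solving a linear system when that block is the output-layer weights under a mean-squared-error loss: the Gauss-Newton Hessian is then the autocorrelation matrix of the hidden-unit outputs. The following is a minimal NumPy sketch of one such Newton step; the toy data, layer sizes, and variable names are illustrative assumptions and do not reproduce the dissertation's actual algorithm or experiments.

    import numpy as np

    # Illustrative sketch (not the dissertation's algorithm): one Newton
    # step on an MLP's output-layer weights with MSE loss. For a linear
    # output layer, the Gauss-Newton Hessian is the hidden-unit
    # autocorrelation matrix, so the step solves a set of linear equations.

    rng = np.random.default_rng(0)

    # Toy data: N patterns, n_in inputs, one target (all hypothetical)
    N, n_in, n_hid = 200, 5, 8
    X = rng.normal(size=(N, n_in))
    t = np.sin(X @ rng.normal(size=n_in))      # arbitrary smooth target

    # Fixed random hidden layer (assumed already trained)
    W_h = rng.normal(size=(n_in, n_hid))
    O = np.tanh(X @ W_h)                       # hidden-unit outputs
    O = np.hstack([O, np.ones((N, 1))])        # append bias column

    w_out = 0.1 * rng.normal(size=n_hid + 1)   # output weights to update

    y = O @ w_out
    g = O.T @ (y - t) / N                      # gradient of MSE/2
    H = O.T @ O / N                            # Gauss-Newton Hessian

    # Newton step: solve H d = -g (small ridge term guards against a
    # singular Hessian caused by dependent hidden units)
    d = np.linalg.solve(H + 1e-8 * np.eye(H.shape[0]), -g)
    w_out += d

    print("MSE after Newton step:", np.mean((O @ w_out - t) ** 2))

Because the output layer is linear in its weights, this single step lands at the regularized least-squares optimum regardless of input means or scaling, which is the affine-invariance advantage the abstract contrasts with single-learning-factor first-order methods.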

Keywords

Back propagation, Vanishing gradient, Balanced gradient

Disciplines

Electrical and Computer Engineering | Engineering

Comments

Degree granted by The University of Texas at Arlington
