Graduation Semester and Year

2010

Language

English

Document Type

Thesis

Degree Name

Master of Science in Electrical Engineering

Department

Electrical Engineering

First Advisor

Michael T. Manry

Abstract

The effects of transforming the net function vector in the Multilayer Perceptron (MLP) are analyzed. The use of optimal diagonal transformation matrices on the net function vector is proved to be equivalent to training the MLP using multiple optimal learning factors (MOLF). A method for linearly compressing large, ill-conditioned MOLF Hessian matrices into smaller, well-conditioned ones is developed. This compression approach is shown to be equivalent to using several hidden units per learning factor. The technique is extended to large networks. In simulations, the proposed algorithm performs almost as well as the Levenberg-Marquardt (LM) algorithm, with the computational complexity of a first-order training algorithm.
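To make the MOLF idea above concrete, the following is a minimal numerical sketch, not the thesis's implementation: a one-hidden-layer sigmoid MLP in which each hidden unit's share of the backpropagation direction receives its own learning factor z_k, found by minimizing a second-order model of the error. The gradient and Hessian with respect to z are built by finite differences rather than the analytic expressions derived in the thesis, and a small ridge term, not the thesis's hidden-unit grouping, stands in as a generic guard against the ill-conditioning the abstract mentions; all names, sizes, and constants are illustrative.

```python
# Sketch of one MOLF-style training step, assuming a one-hidden-layer MLP
# with sigmoid hidden units and a linear output. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: N patterns, Nin inputs plus a bias column, one target.
N, Nin, Nh = 200, 4, 8
X = np.hstack([rng.standard_normal((N, Nin)), np.ones((N, 1))])
t = np.sin(X[:, :Nin].sum(axis=1, keepdims=True))

W = 0.1 * rng.standard_normal((Nh, Nin + 1))   # input weights, one row per hidden unit
Wo = 0.1 * rng.standard_normal((1, Nh))        # output weights

def error(W, Wo):
    """Mean squared error of the MLP on the toy data."""
    O = 1.0 / (1.0 + np.exp(-X @ W.T))         # hidden activations
    return np.mean((O @ Wo.T - t) ** 2)

def input_weight_gradient(W, Wo):
    """Backprop gradient of the MSE with respect to the input weights W."""
    O = 1.0 / (1.0 + np.exp(-X @ W.T))
    y = O @ Wo.T
    delta = (2.0 / N) * (y - t) @ Wo * O * (1.0 - O)   # N x Nh
    return delta.T @ X                                  # Nh x (Nin+1)

# Instead of one learning factor z for the whole descent direction D,
# give hidden unit k its own factor z_k and minimize E(W + diag(z) D)
# to second order in z.
D = -input_weight_gradient(W, Wo)

def E_of_z(z):
    return error(W + z[:, None] * D, Wo)       # row k of D scaled by z_k

# Gradient g and Hessian H of E with respect to z at z = 0, by central
# finite differences (the thesis derives these analytically).
eps = 1e-3
z0 = np.zeros(Nh)
g = np.array([(E_of_z(z0 + eps * np.eye(Nh)[k]) -
               E_of_z(z0 - eps * np.eye(Nh)[k])) / (2 * eps)
              for k in range(Nh)])
H = np.zeros((Nh, Nh))
for j in range(Nh):
    for k in range(Nh):
        ej, ek = eps * np.eye(Nh)[j], eps * np.eye(Nh)[k]
        H[j, k] = (E_of_z(z0 + ej + ek) - E_of_z(z0 + ej - ek)
                   - E_of_z(z0 - ej + ek) + E_of_z(z0 - ej - ek)) / (4 * eps ** 2)

# The Nh x Nh MOLF Hessian can be ill-conditioned; a ridge term is used here
# as a placeholder for the compression (several hidden units sharing one
# learning factor) that the thesis develops.
z = np.linalg.solve(H + 1e-8 * np.eye(Nh), -g)
print("error before:", error(W, Wo))
print("error after :", error(W + z[:, None] * D, Wo))
```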

Disciplines

Electrical and Computer Engineering | Engineering

Comments

Degree granted by The University of Texas at Arlington
