Graduation Semester and Year




Document Type


Degree Name

Master of Science in Electrical Engineering


Department

Electrical Engineering

First Advisor

Michael T. Manry


A systematic two-step batch approach for constructing a sparse neural network is presented. Unlike other sparse neural networks, the proposed paradigm uses orthogonal least squares (OLS) to train the network, and OLS-based pruning is proposed to induce sparsity. Based on the usefulness of the basis functions in the hidden units, the weights connecting the outputs to the hidden units and the outputs to the inputs are modified to form a sparse neural network. The proposed hybrid training algorithm is compared with a fully connected MLP and with a sparse softmax classifier trained by a second-order algorithm. Simulation results show that the proposed algorithm offers significant improvements in convergence speed, network size, generalization, and ease of training over the fully connected MLP. The proposed training algorithm is analyzed on a variety of linear and nonlinear data files, and its ability is further substantiated by clearly differentiating two separate data sets fed into it. Experimental results are reported using 10-fold cross-validation. Inducing sparsity in a fully connected neural network, pruning of hidden units, Newton's method for optimization, and orthogonal least squares are the subject matter of the present work.
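The OLS-based ranking of hidden-unit basis functions described above can be sketched as a greedy error-reduction-ratio (ERR) selection: candidate columns are orthogonalized against the units already chosen, and the unit explaining the largest fraction of remaining target energy is selected next. This is a minimal illustrative sketch, not the thesis's actual implementation; the function name, the single-output target, and the numerical tolerance are assumptions.

```python
import numpy as np

def ols_rank_hidden_units(H, t):
    """Rank hidden-unit basis functions by error-reduction ratio (ERR).

    H : (N, M) matrix of hidden-unit outputs (one column per unit).
    t : (N,) target vector.
    Returns (selected, errs): unit indices in order of usefulness and
    the fraction of target energy each selected unit explains.
    """
    N, M = H.shape
    remaining = list(range(M))
    selected, errs = [], []
    Q = []                      # orthogonal basis of the units chosen so far
    tt = float(t @ t)
    for _ in range(M):
        best_err, best_j, best_q = -1.0, None, None
        for j in remaining:
            w = H[:, j].astype(float)
            for q in Q:         # Gram-Schmidt against already-selected units
                w = w - ((q @ H[:, j]) / (q @ q)) * q
            denom = float(w @ w)
            if denom < 1e-12:   # numerically dependent column; skip it
                continue
            g = (w @ t) / denom
            err = g * g * denom / tt   # energy of t explained by this unit
            if err > best_err:
                best_err, best_j, best_q = err, j, w
        if best_j is None:
            break
        selected.append(best_j)
        errs.append(best_err)
        Q.append(best_q)
        remaining.remove(best_j)
    return selected, errs

# Pruning then amounts to keeping the leading units whose cumulative ERR
# reaches a chosen threshold and discarding the rest.
```

Units at the tail of the ranking contribute little to the fit, so dropping them (and adjusting the output weights accordingly) yields the sparse network.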


Neural networks, Sparsity, Second-order algorithm, Orthogonal least squares, Hessian matrix


Electrical and Computer Engineering | Engineering


Degree granted by The University of Texas at Arlington