Ce Bian







Degree Name

Doctor of Philosophy in Mathematics



First Advisor

Li Wang

Second Advisor

Ren-Cang Li


Abstract

Over the past few years, the dimensionality of data, that is, the number of collected features, has been increasing across many fields of science and engineering, driven by the rapid pace of data collection and by more advanced storage technology. To handle such high-dimensional data, dimensionality reduction is often essential before classification or regression, both to suppress noisy features and to keep downstream models tractable. Several numerical methods are available for this purpose, such as Canonical Correlation Analysis (CCA), Principal Component Analysis (PCA), and Linear Discriminant Analysis (LDA). While these methods offer valuable approaches to dimensionality reduction, each has limitations. CCA focuses primarily on correlations between two sets of variables, which may not fully capture the intricate relationships within multidimensional data. PCA, although excellent at preserving variance, is unsupervised and can fail to emphasize class separability in classification tasks. LDA, for its part, can produce at most c - 1 discriminant directions for c classes, which limits the dimensionality of its reduced representation.

Motivated by these limitations, this thesis introduces a supervised dimensionality reduction algorithm that addresses dimensionality reduction and classification simultaneously. Unlike conventional two-stage methods, the algorithm jointly learns the projection matrix for dimension reduction and the classifier hyperplane for data classification. The resulting model is both accurate and efficient precisely because the low-dimensional representation and the classifier are learned together. A further strength of the proposed model is its versatility: it handles not only dimensionality reduction and classification of single-view data but also extends naturally to multi-view data.

Numerical simulations demonstrate the effectiveness and computational efficiency of the proposed model in comparison with state-of-the-art methods for dimensionality reduction and classification. A noteworthy feature of the approach is its capacity to generate two classifiers in tandem, which widens its applicability across classification experiments on a variety of data types. This dual-classifier capability makes the method a versatile choice for tackling complex classification problems.
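For context, the classical two-stage pipeline that the proposed joint model departs from (reduce dimensionality first, then classify) can be sketched with a minimal PCA projection step. This sketch uses synthetic data and illustrative dimensions; it is not part of the thesis and does not implement the proposed method.

```python
import numpy as np

def pca_project(X, k):
    """Project an n-by-d data matrix X onto its top-k principal components."""
    Xc = X - X.mean(axis=0)                 # center each feature
    C = Xc.T @ Xc / (X.shape[0] - 1)        # d-by-d sample covariance matrix
    eigvals, eigvecs = np.linalg.eigh(C)    # eigenvalues in ascending order
    W = eigvecs[:, ::-1][:, :k]             # projection matrix: top-k eigenvectors
    return Xc @ W, W                        # reduced data and projection matrix

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))               # 100 synthetic samples, 5 features
Z, W = pca_project(X, 2)                    # 2-D representation fed to a classifier
print(Z.shape)
```

Note that the projection matrix W is computed without any class labels; this is exactly why PCA can preserve variance yet fail to emphasize class separability, and why learning the projection and the classifier jointly, as in the proposed model, can improve accuracy.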


Keywords

Dimensionality reduction, multi-view learning, subspace learning, classification


Disciplines

Mathematics | Physical Sciences and Mathematics


Degree granted by The University of Texas at Arlington
