ORCID Identifier(s)

0000-0002-1982-1787

Graduation Semester and Year

2020

Language

English

Document Type

Thesis

Degree Name

Master of Science in Computer Science

Department

Computer Science and Engineering

First Advisor

Manfred Huber

Abstract

K-Nearest Neighbors (KNN) has remained one of the most popular methods for supervised machine learning tasks. However, its performance often depends on the characteristics of the dataset and on appropriate feature scaling. In this thesis, the characteristics that make a dataset well suited to KNN are explored. As part of this, two new measures of dataset dispersion, called mean neighborhood target variance (MNTV) and mean neighborhood target entropy (MNTE), are developed to help estimate the performance that can be expected from KNN regressors and classifiers, respectively. It is empirically demonstrated that these measures of dispersion can be indicative of the performance of KNN regression and classification. This idea is extended to learn feature weights that improve the accuracy of KNN classification and regression. For this, it is argued that MNTV and MNTE, when used to learn feature weights, cannot be optimized with traditional gradient-based methods, and optimization strategies based on metaheuristic methods, namely genetic algorithms and particle swarm optimization, are developed instead. The feature-weighting method is evaluated in both regression and classification settings on publicly available datasets, and its performance is compared to KNN without feature weighting. The results indicate that KNN with appropriate feature weighting outperforms unweighted KNN. In a separate branch of the work, the ideas behind MNTV and MNTE are used to develop a sample-weighting algorithm that assigns a sampling probability to each instance in a training set. This, too, is evaluated in both regression and classification with subsamples drawn according to these probabilities, and the performance is compared to KNN without subsampling the training set.
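
As a rough illustration of the two dispersion measures described above, the sketch below shows one plausible way to compute MNTV and MNTE using NumPy and scikit-learn. The exact definitions (for example, whether a point's own target is included in its neighborhood, or how neighborhoods are weighted) are given in the thesis; the function names and the choice of k here are assumptions made only for illustration.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors


def mntv(X, y, k=5):
    """Mean neighborhood target variance (illustrative sketch).

    For each training point, take the target values of its k nearest
    neighbors and compute their variance; MNTV is the mean of these
    neighborhood variances. Lower values suggest neighborhoods that are
    homogeneous in target space, i.e. favorable for KNN regression.
    """
    nn = NearestNeighbors(n_neighbors=k).fit(X)
    idx = nn.kneighbors(X, return_distance=False)  # shape (n_samples, k)
    return float(np.mean(np.var(y[idx], axis=1)))


def mnte(X, y, k=5):
    """Mean neighborhood target entropy (illustrative sketch).

    The classification analogue: for each point, compute the Shannon
    entropy of the class labels among its k nearest neighbors, then
    average over all points.
    """
    nn = NearestNeighbors(n_neighbors=k).fit(X)
    idx = nn.kneighbors(X, return_distance=False)
    entropies = []
    for neighborhood in idx:
        _, counts = np.unique(y[neighborhood], return_counts=True)
        p = counts / counts.sum()
        entropies.append(-np.sum(p * np.log2(p)))
    return float(np.mean(entropies))
```

In the feature-weighting part of the work, measures of this kind would be evaluated on feature-rescaled data, so the objective is a function of the weight vector; because neighborhood membership changes discontinuously with the weights, such an objective is not amenable to gradient-based optimization, which motivates the genetic-algorithm and particle-swarm approaches mentioned in the abstract.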

Keywords

Machine learning, Data analysis, Metaheuristic optimization, K-nearest neighbors, Classification, Regression

Disciplines

Computer Sciences | Physical Sciences and Mathematics

Comments

Degree granted by The University of Texas at Arlington
