ORCID Identifier(s)

0009-0007-2603-5833

Document Type

Article

Abstract

Hand gesture recognition plays a vital role in facilitating natural and intuitive human-computer interaction, with applications ranging from sign language translation to touchless control systems. This study presents a comparative evaluation of traditional machine learning models and a deep convolutional neural network (CNN) for static hand gesture classification. The experimental dataset comprises 24,000 training images and 6,000 testing images, spanning 20 gesture classes. Traditional models, including k-Nearest Neighbors (KNN) and Support Vector Machines (SVM), utilize handcrafted features such as convex hull, convexity defects, and Hu moments. In contrast, the deep learning approach fine-tunes a ResNet18 architecture to learn features directly from raw images. Evaluation metrics include classification accuracy and confusion matrix analysis. Results show that while KNN and SVM achieve strong performance (93.12% and 93.22% test accuracy, respectively), ResNet18 significantly outperforms both, attaining 98.98% test accuracy. These findings highlight the trade-off between computational efficiency in traditional models and the superior accuracy achievable through deep CNNs.
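As an illustration of the handcrafted-feature pipeline mentioned above, the sketch below computes Hu's first moment invariant, φ₁ = η₂₀ + η₀₂, one of the shape descriptors fed to the KNN and SVM classifiers. This is a minimal NumPy reconstruction of the standard formula, not the authors' code; the image sizes and blob shapes are made up for demonstration.

```python
import numpy as np

def first_hu_moment(img: np.ndarray) -> float:
    """Hu's first invariant phi_1 = eta_20 + eta_02 for a 2-D intensity image."""
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    m00 = img.sum()                      # raw moment m_00 (total mass)
    x_bar = (xs * img).sum() / m00       # centroid coordinates
    y_bar = (ys * img).sum() / m00
    # Central second-order moments mu_20, mu_02 (translation-invariant)
    mu20 = ((xs - x_bar) ** 2 * img).sum()
    mu02 = ((ys - y_bar) ** 2 * img).sum()
    # Scale normalization: eta_pq = mu_pq / m00 ** (1 + (p + q) / 2)
    eta20 = mu20 / m00 ** 2
    eta02 = mu02 / m00 ** 2
    return eta20 + eta02

# Translation invariance: the same 10x20 blob placed at two different
# positions yields the same phi_1, so the feature depends on hand shape,
# not on where the hand appears in the frame.
a = np.zeros((64, 64)); a[10:20, 10:30] = 1.0
b = np.zeros((64, 64)); b[30:40, 25:45] = 1.0
phi_a, phi_b = first_hu_moment(a), first_hu_moment(b)
```

Because the descriptor is invariant to translation and scale, the classifiers see a compact, position-independent summary of the segmented hand; this is what keeps KNN and SVM cheap relative to a CNN that must learn such invariances from raw pixels.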

Disciplines

Artificial Intelligence and Robotics | Computer Sciences | Data Science

Publication Date

2025

Language

English
