Document Type
Article
Abstract
Hand gesture recognition plays a vital role in facilitating natural and intuitive human-computer interaction, with applications ranging from sign language translation to touchless control systems. This study presents a comparative evaluation of traditional machine learning models and a deep convolutional neural network (CNN) for static hand gesture classification. The experimental dataset comprises 24,000 training images and 6,000 testing images spanning 20 gesture classes. The traditional models, k-Nearest Neighbors (KNN) and Support Vector Machines (SVM), rely on handcrafted features such as the convex hull, convexity defects, and Hu moments. In contrast, the deep learning approach fine-tunes a ResNet18 architecture to learn features directly from raw images. Performance is assessed using classification accuracy and confusion matrix analysis. While KNN and SVM achieve strong results (93.12% and 93.22% test accuracy, respectively), ResNet18 substantially outperforms both, attaining 98.98% test accuracy. These findings highlight the trade-off between the computational efficiency of traditional models and the superior accuracy achievable with deep CNNs.
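To make the handcrafted-feature pipeline concrete, the sketch below shows how the features named in the abstract (convex hull, convexity defects, Hu moments) might be extracted with OpenCV and fed to scikit-learn's KNN and SVM classifiers. The exact feature layout, the defect-depth threshold, and the classifier hyperparameters are illustrative assumptions, not the authors' reported configuration.

```python
# Sketch of the handcrafted-feature pipeline: convex hull, convexity
# defects, and Hu moments -> KNN / SVM. Feature layout and thresholds
# are assumptions for illustration.
import cv2
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

def extract_features(gray):
    """Build a fixed-length feature vector from a grayscale gesture image."""
    # Segment the hand (assumes the hand is the dominant foreground blob).
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contour = max(contours, key=cv2.contourArea)

    # Hu moments: seven shape descriptors, log-scaled for numerical stability.
    hu = cv2.HuMoments(cv2.moments(contour)).flatten()
    hu = -np.sign(hu) * np.log10(np.abs(hu) + 1e-12)

    # Solidity: contour area over convex hull area, a coarse shape cue.
    hull = cv2.convexHull(contour)
    solidity = cv2.contourArea(contour) / (cv2.contourArea(hull) + 1e-12)

    # Count deep convexity defects as a proxy for extended fingers.
    hull_idx = cv2.convexHull(contour, returnPoints=False)
    defects = cv2.convexityDefects(contour, hull_idx)
    n_defects = 0
    if defects is not None:
        # Depth is fixed-point (value * 256); the cutoff is an assumed choice.
        n_defects = int(np.sum(defects[:, 0, 3] > 10000))

    return np.concatenate([hu, [solidity, n_defects]])

# X_train / y_train would be built by applying extract_features per image.
knn = KNeighborsClassifier(n_neighbors=5)  # k is an assumed hyperparameter
svm = SVC(kernel="rbf", C=1.0)             # kernel and C are assumed
```

Likewise, a minimal sketch of the ResNet18 fine-tuning the abstract describes, assuming PyTorch/torchvision; the pretrained weights, optimizer, and learning rate are illustrative choices rather than the paper's reported setup.

```python
# Sketch of fine-tuning ResNet18 for the 20 gesture classes.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 20)  # new head for 20 classes

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # assumed lr

def train_one_epoch(loader, device="cuda"):
    """Run one pass over the training loader, updating all weights."""
    model.to(device).train()
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```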
Disciplines
Artificial Intelligence and Robotics | Computer Sciences | Data Science
Publication Date
2025
Language
English
License
This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.
Recommended Citation
Khadka, Anamol and Desai, Prit, "Comparative Evaluation of Traditional Machine Learning and Deep CNN Models for Static Hand Gesture Recognition" (2025). Computer Science and Engineering Student Research. 2.
https://mavmatrix.uta.edu/cse_studentresearch/2