
Jungpil Shin

Researcher at University of Aizu

Publications -  164
Citations -  1018

Jungpil Shin is an academic researcher at the University of Aizu. The author has contributed to research on topics including Computer science and Gesture, has an h-index of 11, and has co-authored 114 publications receiving 534 citations.

Papers
Journal ArticleDOI

Sensor Fault Classification Based on Support Vector Machine and Statistical Time-Domain Features

TL;DR: This paper addresses fault detection and diagnosis in sensors, considering erratic, drift, hard-over, spike, and stuck faults. The results show that increasing the number of features hardly improves the total accuracy of the classifier, and that using ten features gives the highest fault-classification accuracy with an SVM.
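The pipeline described above can be sketched as statistical time-domain features fed into an SVM. The paper does not list its exact ten features here, so the feature set below (mean, standard deviation, RMS, skewness, kurtosis, peak-to-peak, crest factor, and similar) and the synthetic normal/spike-fault signals are illustrative assumptions, not the authors' actual data or feature selection:

```python
import numpy as np
from scipy.stats import skew, kurtosis
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

def time_domain_features(signal):
    """Ten statistical time-domain features of one sensor window
    (an assumed set; the paper's exact features may differ)."""
    rms = np.sqrt(np.mean(signal ** 2))
    return np.array([
        np.mean(signal), np.std(signal), rms,
        skew(signal), kurtosis(signal),
        np.ptp(signal),                    # peak-to-peak amplitude
        np.max(np.abs(signal)) / rms,      # crest factor
        np.mean(np.abs(signal)),           # mean absolute value
        np.sum(signal ** 2),               # signal energy
        np.median(signal),
    ])

# Synthetic windows: class 0 = normal sensor, class 1 = spike fault
rng = np.random.default_rng(0)
X, y = [], []
for _ in range(100):
    normal = rng.normal(0.0, 1.0, 256)
    spiky = normal.copy()
    spiky[rng.integers(0, 256, 5)] += 10.0   # inject spike fault
    X += [time_domain_features(normal), time_domain_features(spiky)]
    y += [0, 1]

X, y = np.array(X), np.array(y)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X_tr, y_tr)
print(f"test accuracy: {clf.score(X_te, y_te):.2f}")
```

Features such as peak-to-peak, crest factor, and kurtosis respond strongly to spikes, so even this toy classifier separates the two classes well; a real deployment would extract windows from actual sensor streams and include all five fault classes.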
Journal ArticleDOI

Hand Gesture and Character Recognition Based on Kinect Sensor

TL;DR: The purpose of this research was to determine whether the Kinect sensor can recognize numeric and alphabetic characters written by hand in the air; the authors note that it takes some time to master writing both.
Journal ArticleDOI

A Survey of Speaker Recognition: Fundamental Theories, Recognition Methods and Opportunities

TL;DR: In this paper, the authors survey the main aspects of automatic speaker recognition, including speaker identification, verification, and diarization, and review the performance of current speaker recognition systems.
Proceedings ArticleDOI

Hand Gesture Feature Extraction Using Deep Convolutional Neural Network for Recognizing American Sign Language

TL;DR: In the proposed model, a Deep Convolutional Neural Network (DCNN) extracts efficient hand features for recognizing American Sign Language (ASL) from hand gestures, and a Multi-class Support Vector Machine (MCSVM) identifies the hand sign.
Journal ArticleDOI

American Sign Language Alphabet Recognition by Extracting Feature from Hand Pose Estimation

TL;DR: In this paper, the MediaPipe Hands algorithm was used to estimate hand joints from RGB images of hands captured by a web camera, and two types of features were generated from the estimated joint coordinates for classification: the distances between the joint points and the angles between vectors and the 3D axes.
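The two feature types above can be sketched directly from an array of 21 estimated 3D hand landmarks (the layout MediaPipe Hands returns). This is a minimal sketch: the simplified bone chain (consecutive joints) and random stand-in coordinates are assumptions, not the paper's exact feature definitions:

```python
import numpy as np

def joint_distances(joints):
    """Pairwise Euclidean distances between all joint points."""
    diff = joints[:, None, :] - joints[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    iu = np.triu_indices(len(joints), k=1)
    return dist[iu]  # 21 joints -> 210 unique distances

def axis_angles(joints):
    """Angles (degrees) between each bone vector and the x, y, z axes.
    Bones here are simply consecutive joints - a simplifying assumption."""
    bones = np.diff(joints, axis=0)                         # 20 vectors
    bones = bones / np.linalg.norm(bones, axis=1, keepdims=True)
    cos = bones @ np.eye(3).T                               # cosine with each axis
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))).ravel()

rng = np.random.default_rng(1)
joints = rng.random((21, 3))   # stand-in for MediaPipe Hands landmark output
feat = np.concatenate([joint_distances(joints), axis_angles(joints)])
print(feat.shape)              # 210 distances + 60 angles = 270 features
```

The resulting 270-dimensional vector would then be fed to a classifier (the paper classifies ASL alphabet signs); using distances and angles rather than raw coordinates makes the features less sensitive to hand translation.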