Open Access Article

Gradient Feature based Static Sign Language Recognition

M. Mahadeva Prasad

Section: Research Paper, Product Type: Journal Paper
Volume-6, Issue-12, Page no. 531-534, Dec-2018

CrossRef-DOI: https://doi.org/10.26438/ijcse/v6i12.531534

Published online on Dec 31, 2018

Copyright © M. Mahadeva Prasad. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

How to Cite this Paper

IEEE Style Citation: M. Mahadeva Prasad, “Gradient Feature based Static Sign Language Recognition,” International Journal of Computer Sciences and Engineering, Vol.6, Issue.12, pp.531-534, 2018.

MLA Style Citation: M. Mahadeva Prasad. "Gradient Feature based Static Sign Language Recognition." International Journal of Computer Sciences and Engineering 6.12 (2018): 531-534.

APA Style Citation: M. Mahadeva Prasad (2018). Gradient Feature based Static Sign Language Recognition. International Journal of Computer Sciences and Engineering, 6(12), 531-534.

BibTex Style Citation:
@article{Prasad_2018,
author = {M. Mahadeva Prasad},
title = {Gradient Feature based Static Sign Language Recognition},
journal = {International Journal of Computer Sciences and Engineering},
issue_date = {12 2018},
volume = {6},
Issue = {12},
month = {12},
year = {2018},
issn = {2347-2693},
pages = {531-534},
url = {https://www.ijcseonline.org/full_paper_view.php?paper_id=3374},
doi = {https://doi.org/10.26438/ijcse/v6i12.531534},
publisher = {IJCSE, Indore, INDIA},
}

RIS Style Citation:
TY - JOUR
DO - https://doi.org/10.26438/ijcse/v6i12.531534
UR - https://www.ijcseonline.org/full_paper_view.php?paper_id=3374
TI - Gradient Feature based Static Sign Language Recognition
T2 - International Journal of Computer Sciences and Engineering
AU - M. Mahadeva Prasad
PY - 2018
DA - 2018/12/31
PB - IJCSE, Indore, INDIA
SP - 531
EP - 534
IS - 12
VL - 6
SN - 2347-2693
ER -

Abstract

This paper presents the design of a gradient feature based static sign language recognition system. Sign languages are the gestures used by hearing and speech impaired people for communication, and they are classified as static, dynamic, or a combination of both. In static sign languages, still hand postures are used to convey information, whereas dynamic sign languages convey information through sequences of hand postures. In the present work, a computer vision based static sign language recognition system is designed for the American Sign Language alphabet. The images representing the static sign language alphabet are grouped into training and test images. The training images are subjected to preprocessing, and gradient magnitude and gradient direction features are extracted from the preprocessed images; these features are used to train the recognition system. The test images are subjected to the same preprocessing and feature extraction, and the extracted features are used to test the designed recognition system. A nearest neighbor classifier is used to classify the static sign language hand gestures. Independent experiments are carried out to evaluate the performance of the gradient magnitude and gradient direction features, giving average recognition accuracies of 95.4% for the magnitude feature and 80.3% for the direction feature.
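As a rough illustration of the pipeline described above, the following Python sketch extracts gradient magnitude and direction features and classifies them with a nearest neighbor classifier. The paper does not report implementation details such as the image size, the exact preprocessing steps, or the distance metric, so the fixed 64x64 grayscale input, the use of NumPy and scikit-learn, and the flattening of the gradient maps into feature vectors are assumptions made for this sketch, not the author's implementation.

import numpy as np
from sklearn.neighbors import KNeighborsClassifier

IMG_SIZE = (64, 64)  # assumed size of the preprocessed grayscale images

def gradient_features(image):
    """Return flattened gradient magnitude and direction features for one image."""
    img = image.astype(np.float32)
    gy, gx = np.gradient(img)               # gradients along rows and columns
    magnitude = np.sqrt(gx ** 2 + gy ** 2)  # gradient magnitude at each pixel
    direction = np.arctan2(gy, gx)          # gradient direction in radians
    return magnitude.ravel(), direction.ravel()

def build_feature_matrix(images, use_magnitude=True):
    """Stack the chosen gradient feature of each image into one matrix."""
    rows = []
    for img in images:
        mag, ang = gradient_features(img)
        rows.append(mag if use_magnitude else ang)
    return np.vstack(rows)

# Random stand-in data; real experiments would load preprocessed ASL alphabet images.
rng = np.random.default_rng(0)
train_images = rng.random((100, *IMG_SIZE))
train_labels = rng.integers(0, 26, size=100)  # stand-in class labels
test_images = rng.random((20, *IMG_SIZE))

# Train and test with the magnitude feature.
X_train = build_feature_matrix(train_images, use_magnitude=True)
X_test = build_feature_matrix(test_images, use_magnitude=True)

classifier = KNeighborsClassifier(n_neighbors=1)  # nearest neighbor classifier
classifier.fit(X_train, train_labels)
predictions = classifier.predict(X_test)

Calling build_feature_matrix with use_magnitude=False would repeat the experiment with the direction feature, mirroring the two independent evaluations reported in the abstract.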

Key-Words / Index Term

Sign Language Recognition System; American Sign Language; Static Sign Language; Gradient Features
