Indonesian Sign Language Recognition Using Kinect and Dynamic Time Warping

Wijayanti Nurul Khotimah, Nanik Suciati, Tiara Anggita


A Sign Language Recognition System (SLRS) recognises sign language gestures and translates them into text. Such a system can be developed using a sensor-based technique. Previous studies have implemented various feature extraction and classification methods to recognise the sign languages of different countries. However, their systems were user dependent: accuracy was high when the same person performed both the training and the testing gestures, but degraded when the tested user differed from the trained user. In this study, we therefore propose a feature extraction method that is invariant to the user. Instead of the raw skeleton joint positions, we use the distances between pairs of the user's skeleton joints, because these distances are independent of the user's posture. In total, forty-five features are extracted by the proposed method. We then classify the features with a method suited to the characteristic of sign language gestures as time-dependent sequence data: Dynamic Time Warping (DTW). For the experiments, we used twenty Indonesian sign language words drawn from different semantic groups (greetings, questions, pronouns, places, family, and others) and with different gesture characteristics (static and dynamic gestures). The system was then tested by a user different from the one who performed the training. The results are promising: the proposed method achieved an accuracy of 91%, which indicates that it is user independent.
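The abstract does not give implementation details, but the two core steps it describes (posture-invariant pairwise joint-distance features, and DTW matching of the resulting feature sequences) can be sketched as follows. This is a minimal illustration, not the authors' code: the use of 10 joints (giving the 45 pairwise distances mentioned above), the Euclidean frame-to-frame cost, and the 1-nearest-template classifier are all assumptions.

```python
import numpy as np

def joint_distance_features(frame):
    """Posture-invariant features from one frame of skeleton data.

    `frame` is an (n_joints, 3) array of joint coordinates. The features
    are the Euclidean distances between every pair of joints; with 10
    joints (an assumption) this yields 10*9/2 = 45 features, matching
    the feature count reported in the paper.
    """
    n = len(frame)
    return np.array([np.linalg.norm(frame[i] - frame[j])
                     for i in range(n) for j in range(i + 1, n)])

def dtw_distance(a, b):
    """Classic Dynamic Time Warping distance between two feature sequences.

    `a` and `b` are (frames, features) arrays; frames are aligned by
    dynamic programming so sequences of different speeds can be compared.
    """
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])  # frame-to-frame cost
            D[i, j] = cost + min(D[i - 1, j],      # insertion
                                 D[i, j - 1],      # deletion
                                 D[i - 1, j - 1])  # match
    return D[n, m]

def classify(gesture, templates):
    """1-nearest-template classification: label of the closest template."""
    return min(templates, key=lambda label: dtw_distance(gesture, templates[label]))
```

Because DTW warps the time axis, a gesture signed slowly and the same gesture signed quickly produce similar distances, which is why it suits the time-dependent nature of sign gestures better than a fixed-length classifier.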


Indonesian sign language recognition, Dynamic Time Warping, User-independent system






This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
