Deep Convolutional Real Time Model (DCRTM) for American Sign Language (ASL) Recognition

Hadj Ahmed Bouarara, Chaima Bentadj, Mohamed Elhadi Rahmani
DOI: 10.4018/IJSPPC.309079

Abstract

Sign language is a form of communication rich in expression, and it shares the same properties as spoken languages. In this paper, the authors discuss the use of transfer learning techniques to develop an intelligent system that recognizes American Sign Language. The idea was that, rather than creating a new deep convolutional neural network model and spending a great deal of time on experimentation, the authors could reuse already pre-trained models and benefit from their advantages. In this study, they used four different models (YOLOv3, a real-time model, VGG16, and AlexNet). The obtained results were very encouraging: all of the models recognized more than 90% of the images correctly.
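As an illustration of the transfer-learning approach described above, the following minimal PyTorch sketch (a hypothetical example, not the authors' published code) loads an ImageNet-pretrained VGG16 from torchvision, freezes its convolutional backbone, and trains only a new classification head. The 26-letter class count, input size, optimizer, and dummy batch are assumptions made for illustration.

    import torch
    import torch.nn as nn
    from torchvision.models import vgg16, VGG16_Weights

    NUM_CLASSES = 26  # assumption: one class per fingerspelled ASL letter

    # Load VGG16 pretrained on ImageNet and freeze its convolutional backbone.
    model = vgg16(weights=VGG16_Weights.IMAGENET1K_V1)
    for param in model.features.parameters():
        param.requires_grad = False

    # Replace the last fully connected layer (originally 1000 ImageNet classes).
    model.classifier[6] = nn.Linear(4096, NUM_CLASSES)

    # Only the unfrozen parameters are updated during fine-tuning.
    optimizer = torch.optim.Adam(
        (p for p in model.parameters() if p.requires_grad), lr=1e-4
    )
    criterion = nn.CrossEntropyLoss()

    # One hypothetical training step on a dummy batch of 224x224 RGB sign images.
    images = torch.randn(8, 3, 224, 224)          # stand-in for a real data loader
    labels = torch.randint(0, NUM_CLASSES, (8,))
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()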
Article Preview

1 Introduction

According to the World Federation of the Deaf, over 300 sign languages (shown in table 1) were used by 70 million deaf individuals worldwide in 2020 (Rastgoo, 2020). Children often learn language by hearing their parents speak; likewise, if the parents are deaf and use ASL to communicate, the child can learn that language from them. However, statistics indicate that nine out of ten deaf babies are born to hearing parents, who may struggle to teach their deaf children sign language because they lack sign language experience (Tavella, 2022). In practice, these parents introduce their deaf children to sign language through other deaf individuals and enlist the help of additional sign language experts. It is recommended that parents introduce their deaf children to sign language as soon as possible, because the first few years of a person's life are known to be the most important for the development of language abilities. Thanks to technological advancements and newborn screening programmes, parents can now find out whether their newborns are deaf or hard of hearing before they leave the hospital. Parents can then begin their child's language-learning process at this crucial early developmental stage (Matchin, 2022).

Like natural spoken language, sign language is a form of communication rich in expression and has the same properties as spoken languages. American Sign Language (ASL) is a sign language used by deaf or hard-of-hearing people in North America. It is expressed through hand gestures and facial expressions. Like any language, ASL has fundamental features: word formation, word order, and rules of pronunciation (Abdullahi, 2022). For example, English speakers may ask a question by raising the pitch of their voice and adjusting word order; ASL users ask a question by raising their eyebrows, widening their eyes, and tilting their bodies forward. Sociological factors, including age and gender, can affect ASL and the way signs are produced. Differences in the rhythm of signing among different regions of America are also considered dialects, much like spoken English, in which some words are pronounced differently from region to region (Hassan, 2022). The main component of ASL used to spell out English words is the fingerspelling alphabet. Figure 1 shows the ASL fingerspelling alphabet (also referred to as the American manual alphabet).

Due to the growing need for sign language experts, automatic recognition of human signs has become a highly active research field. Recently, a large number of systems based on machine learning have been developed for sign language translation. Like any system for object detection in images, sign language recognition has taken an important place in deep learning research. A wide variety of neural network architectures has been used, especially Convolutional Neural Networks (CNNs). These networks extract abstract features from sign images in their first layers and then group them into more refined features in subsequent layers in order to recognize the signs (Oyedotun, 2017).
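To make this layer-wise feature hierarchy concrete, the following minimal PyTorch sketch (an illustrative assumption, not the DCRTM architecture used in the paper) stacks a few convolutional blocks whose early layers respond to low-level cues and whose deeper layers capture whole hand shapes; the 26-class output and 64x64 input size are assumptions.

    import torch
    import torch.nn as nn

    class SmallSignCNN(nn.Module):
        def __init__(self, num_classes: int = 26):  # assumption: 26 ASL letters
            super().__init__()
            self.features = nn.Sequential(
                # Early layers respond to low-level cues such as edges and contours.
                nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                # Middle layers combine them into textures and finger/palm parts.
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                # Deeper layers capture whole hand shapes that distinguish the signs.
                nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.classifier = nn.Sequential(
                nn.Flatten(),
                nn.Linear(64 * 8 * 8, 128), nn.ReLU(),
                nn.Linear(128, num_classes),
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.classifier(self.features(x))

    # Example: a dummy batch of 64x64 RGB sign images mapped to per-letter scores.
    logits = SmallSignCNN()(torch.randn(4, 3, 64, 64))
    print(logits.shape)  # torch.Size([4, 26])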
