Abstract:
Sign language, a highly visual-spatial, linguistically complete, and natural language, is the main mode of communication among deaf people. In this paper, an American Sign Language (ASL) word recognition system is developed using artificial neural networks (ANN) to translate ASL words into English. The system uses a CyberGlove(TM) sensory glove and a Flock of Birds(R) 3D motion tracker to extract gesture features. The finger joint angle data obtained from strain gauges in the sensory glove define the hand shape, while the data from the tracker describe the trajectory of the hand movement. The hand trajectory is normalized to make recognition independent of the signer's position. The data from these devices are processed by two neural networks: a velocity network and a word recognition network. The velocity network uses hand speed to determine the duration of words. To convey the meaning of a sign, signs are defined by feature vectors such as hand shape, hand location, orientation, movement, bounding box, and distance. The second network serves as a classifier that converts ASL signs into words based on these features. We trained and tested our ANN model on 60 ASL words with different numbers of samples. Our test results show a recognition accuracy of 92%.
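Two of the preprocessing steps summarized above, trajectory normalization for signer-position independence and speed-based segmentation of word boundaries, can be illustrated with a minimal sketch. The function names, the bounding-box scaling scheme, and the speed threshold below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def normalize_trajectory(points):
    """Translate a 3D hand trajectory so it starts at the origin and
    scale it by its bounding-box diagonal. This removes dependence on
    where the signer stands (hypothetical normalization scheme)."""
    pts = np.asarray(points, dtype=float)
    pts = pts - pts[0]  # translate so the trajectory starts at the origin
    extent = pts.max(axis=0) - pts.min(axis=0)
    diag = np.linalg.norm(extent)  # bounding-box diagonal length
    return pts / diag if diag > 0 else pts

def segment_by_speed(trajectory, threshold=0.05):
    """Mark frames where the hand moves faster than `threshold` as
    lying inside a word -- a simple stand-in for the role the paper's
    velocity network plays in finding word durations."""
    speeds = np.linalg.norm(np.diff(trajectory, axis=0), axis=1)
    return speeds > threshold

# Example: a three-frame trajectory from the tracker
traj = normalize_trajectory([[1.0, 2.0, 3.0],
                             [2.0, 2.0, 3.0],
                             [2.0, 3.0, 3.0]])
active = segment_by_speed(traj, threshold=0.1)
```

In a full system, the normalized trajectory and the glove's joint-angle readings would then be assembled into the feature vectors (hand shape, location, orientation, movement, bounding box, distance) fed to the word recognition network.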