UCLA researchers in the Department of Bioengineering have developed a novel machine learning-assisted wearable sensor system that translates sign language directly into voice with high accuracy and low latency.
For the large community of deaf signers around the world who rely on sign language for conversation, communicating with people unfamiliar with sign language is a challenge. A number of sign language translation devices have been developed using surface electromyography (sEMG), wearable sensing mechanisms (e.g., the piezoresistive effect, ionic conduction, the capacitive effect), and camera-based image processing. The production and use of these translators are limited by issues such as sensor placement for sEMG-based translation and lighting conditions for vision-based translation. Moreover, most existing systems convert sign language into text rather than speech, making them inconvenient for practical communication. There is a need for a stable, accurate, and portable sign language translation system that can directly convert sign language into voice for better communication between signers and non-signers.
UCLA researchers in the Department of Bioengineering have developed an integrated stretchable sensor array (ISSA) system for real-time translation of sign language into voice. Sensors are integrated into the fingers of a glove; the analog signals generated by each finger are processed into digital signals, which are then classified and translated into voice. The ISSA system has been successfully prototyped and used to recognize 660 gesture patterns based on American Sign Language, with a recognition rate of 98.63% and a translation time of under one second.
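The classification stage described above can be illustrated with a minimal sketch. The template vectors, feature layout (one normalized reading per finger), and nearest-centroid matching below are illustrative assumptions for clarity, not the researchers' actual machine learning method or data:

```python
# Illustrative sketch of the gesture-classification step: digitized per-finger
# sensor readings are matched against stored gesture templates. The gestures,
# values, and distance-based classifier are hypothetical assumptions.
import math

# Hypothetical templates: each gesture maps to a 5-element vector of
# normalized finger-sensor readings (thumb, index, middle, ring, pinky),
# where higher values indicate greater finger flexion.
GESTURE_TEMPLATES = {
    "A": [0.1, 0.9, 0.9, 0.9, 0.9],   # fist, thumb extended
    "B": [0.8, 0.1, 0.1, 0.1, 0.1],   # flat hand, thumb tucked
    "L": [0.1, 0.1, 0.9, 0.9, 0.9],   # thumb and index extended
}

def classify(reading):
    """Return the gesture whose template is closest (Euclidean) to the reading."""
    return min(
        GESTURE_TEMPLATES,
        key=lambda g: math.dist(GESTURE_TEMPLATES[g], reading),
    )

# A noisy sample close to the "L" template; the recognized label would then
# be passed to a speech-synthesis stage to produce voice output.
sample = [0.15, 0.12, 0.85, 0.92, 0.88]
print(classify(sample))  # prints "L"
```

In the actual system a trained machine learning model replaces this simple template matching, which is what enables the reported 98.63% recognition rate across 660 gesture patterns.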
Prototype tested with 660 sign language gesture patterns at 98.63% recognition accuracy.
gesture, recognition, sign language, textile, wearables, physical therapy