Machine translation of sign languages
Technologies are emerging that aim to translate signed languages into written or spoken language, and written or spoken language into sign language, without the use of a human interpreter. Development is not limited to the United States: inventors around the world have produced, or are producing, products with this capability. One obstacle in developing these technologies is that sign languages have very different phonological features from spoken languages. Developers of automatic sign language translation therefore use computer vision and machine learning to recognize the phonological parameters unique to sign languages, while speech recognition and natural language processing enable interactive communication between hearing and deaf people. Some of these technologies are developed by teams of engineers and specialists that include members of the Deaf community.
History
The history of automatic sign language translation began with hardware such as fingerspelling robotic hands. In 1977, a fingerspelling hand project called Ralph created a robotic hand that could translate the alphabet into fingerspelling.[1] Gloves with motion sensors later became mainstream, giving rise to projects such as the CyberGlove and the VPL Data Glove.[2] This wearable hardware made it possible to capture signers' hand shapes and movements with the help of computer software. With the development of computer vision, however, cameras replaced wearable devices because they are more efficient and place fewer physical restrictions on signers.[2] To process the data collected through these devices, researchers implemented neural networks such as the Stuttgart Neural Network Simulator[3] for pattern recognition in projects such as the CyberGlove. Researchers also use many other approaches to sign recognition: hidden Markov models are used to analyze the data statistically,[2] and GRASP and other machine learning programs use training sets to improve the accuracy of sign recognition.[4]
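As an illustration of the statistical approach, the sketch below trains one hidden Markov model per sign on glove-style feature sequences and classifies a new recording by log-likelihood. It uses the hmmlearn Python library; the feature layout, state count, and the idea of per-sign models are illustrative assumptions, not details of any specific project named above.

```python
# Minimal sketch: one Gaussian HMM per sign, classification by log-likelihood.
# The feature vectors (e.g., finger-flex and motion readings per frame) and
# the sign vocabulary are assumed for illustration.
import numpy as np
from hmmlearn import hmm

def train_sign_models(training_data, n_states=5):
    """training_data maps a sign label to a list of recordings,
    each recording an (n_frames, n_features) array."""
    models = {}
    for sign, recordings in training_data.items():
        X = np.concatenate(recordings)            # stack all frames
        lengths = [len(r) for r in recordings]    # per-recording frame counts
        model = hmm.GaussianHMM(n_components=n_states,
                                covariance_type="diag", n_iter=100)
        model.fit(X, lengths)                     # Baum-Welch training
        models[sign] = model
    return models

def classify(models, recording):
    """Return the sign whose HMM assigns the highest log-likelihood."""
    return max(models, key=lambda sign: models[sign].score(recording))
```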
SignAloud
SignAloud is a technology that incorporates a pair of gloves, made by a group of students at the University of Washington, that transliterate[5] American Sign Language (ASL) into English.[6] In the spring of 2016, Thomas Pryor and Navid Azodi, two hearing students from the University of Washington, created the idea for this device. Azodi has a background in business administration, while Pryor has experience in engineering.[7] In May 2016, the creators told NPR that they were working more closely with people who use ASL so that they could better understand their audience and tailor the product to its actual needs rather than assumed ones.[8] However, no further versions have been released since then. The students' invention was one of seven to win the Lemelson-MIT Student Prize, which seeks to award and applaud young inventors. Their invention fell under the "Use it!" category of the award, which covers technological advances to existing products. They were awarded $10,000.[9][10]
The gloves have sensors that track the user's hand movements and send the data to a computer system via Bluetooth. The computer system analyzes the data and matches it to English words, which are then spoken aloud by a digital voice.[8] The gloves cannot take written English input and render it as glove movement, nor hear spoken language and sign it to a deaf person, so they do not provide reciprocal communication. The device also does not incorporate facial expressions and other nonmanual markers of sign languages, which may alter the actual interpretation from ASL.[11]
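Published coverage does not detail SignAloud's software, but a glove pipeline of this general kind has three stages: read sensor frames over a Bluetooth serial link, classify a window of frames as a word, and speak the word aloud. The sketch below is a hypothetical illustration using the pyserial and pyttsx3 libraries; the port name, frame format, window size, and classify_window function are all assumptions.

```python
# Hypothetical glove-to-speech pipeline: sensor frames arrive over a
# Bluetooth serial link, a classifier maps a gesture window to an English
# word, and a text-to-speech engine speaks it. All names here are assumed.
import serial      # pyserial
import pyttsx3     # offline text-to-speech

def run_pipeline(classify_window, port="/dev/rfcomm0", window_size=30):
    link = serial.Serial(port, 115200)   # Bluetooth serial profile
    tts = pyttsx3.init()
    window = []
    while True:
        line = link.readline().decode().strip()
        window.append([float(v) for v in line.split(",")])  # one sensor frame
        if len(window) == window_size:
            word = classify_window(window)   # e.g., the HMM classifier above
            if word is not None:
                tts.say(word)
                tts.runAndWait()
            window.clear()
```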
ProDeaf[12]
ProDeaf (WebLibras) is computer software that can translate both text and voice into Libras (Brazilian Sign Language) "with the goal of improving communication between the deaf and hearing."[13] A beta edition for American Sign Language is currently in production. The original team began the project in 2010 with a combination of experts, including linguists, designers, programmers, and translators, both hearing and deaf. The team originated at the Federal University of Pernambuco (UFPE), from a group of students involved in a computer science project. The group had a deaf team member who had difficulty communicating with the rest of the group. To complete the project and help the teammate communicate, the group created Proativa Soluções and has been moving forward ever since.[14] The current beta version in American Sign Language is very limited: for example, in the dictionary section the only word under the letter 'j' is 'jump'. If the device has not been programmed with a word, the digital avatar must fingerspell it. The app was last updated in June 2016, but ProDeaf has been featured in over 400 stories across Brazil's most popular media outlets.[15]
The application cannot read sign language and turn it into words or text, so it serves only as one-way communication. Additionally, the user cannot sign to the app and receive an English translation in any form, as the English edition is still in beta.
Kinect Sign Language Translator[16]
Since 2012, researchers from the Chinese Academy of Sciences and specialists in deaf education from Beijing Union University in China have been collaborating with the Microsoft Research Asia team to create the Kinect Sign Language Translator. The translator has two modes: translator mode and communication mode. Translator mode translates single words from sign into written words and vice versa. Communication mode translates full sentences, and the conversation can be automatically translated with the use of a 3D avatar. Translator mode also detects a signer's postures and hand shapes, as well as the movement trajectory, using machine learning, pattern recognition, and computer vision. The device allows reciprocal communication as well: speech recognition lets spoken language be translated into sign language, and the 3D modeling avatar can sign back to deaf users.[17]
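The project's publications describe matching recognized 3D movement trajectories against sign models. One standard technique for comparing trajectories of unequal length is dynamic time warping (DTW); the sketch below is a simplified illustration of that idea rather than the project's actual pipeline, and the template store and single tracked hand joint are assumptions.

```python
# Illustrative trajectory matching: a signer's 3D hand-joint trajectory
# (e.g., from Kinect skeleton tracking) is compared against stored sign
# templates with dynamic time warping (DTW).
import numpy as np

def dtw_distance(a, b):
    """DTW cost between two trajectories of shape (n_frames, 3)."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])   # Euclidean frame cost
            cost[i, j] = d + min(cost[i - 1, j],       # insertion
                                 cost[i, j - 1],       # deletion
                                 cost[i - 1, j - 1])   # match
    return cost[n, m]

def recognize(trajectory, templates):
    """Return the label of the template trajectory closest to the input."""
    return min(templates, key=lambda sign: dtw_distance(trajectory, templates[sign]))
```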
The original project started in China, translating Chinese Sign Language. In 2013, the project was presented at the Microsoft Research Faculty Summit and at a Microsoft company meeting.[18] Researchers in the United States are now also working on the project to implement American Sign Language translation.[19] As of now, the device is still a prototype, and translation accuracy in communication mode is still not perfect.
SignAll[20]
SignAll is an automatic sign language translation system provided by Dolphio Technologies[21] in Hungary. The team is "pioneering the first automated sign language translation solution, based on computer vision and natural language processing (NLP), to enable everyday communication between individuals with hearing who use spoken English and deaf or hard of hearing individuals who use ASL." The SignAll system uses a Microsoft Kinect and other web cameras with depth sensors connected to a computer. The computer vision technology recognizes the signer's handshape and movement, and a natural language processing component converts the data collected by computer vision into a simple English phrase. The developer of the device is deaf, and the rest of the project team consists of engineers and linguistic specialists from deaf and hearing communities. The technology can incorporate all five parameters of ASL, which helps the device accurately interpret the signer. SignAll has been endorsed by many companies, including Deloitte and LT-innovate, and has created partnerships with Microsoft Bizspark and Hungary's Renewal.[22]
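SignAll has not published the internals of its NLP step. As a toy illustration of what converting recognized signs into an English phrase can involve, the sketch below reorders a sequence of ASL glosses into subject-verb-object order and maps each gloss to English words; the lexicon, the gloss notation, and the rules are invented for illustration.

```python
# Toy gloss-to-English rendering. ASL often topicalizes the object first
# (e.g., STORE IX-1 GO), so the glosses are reordered before mapping.
# Lexicon and rules are invented for illustration only.
GLOSS_TO_ENGLISH = {"IX-1": "I", "GO": "go to", "STORE": "the store"}
VERBS = {"GO"}

def render_phrase(glosses):
    """Reorder recognized glosses subject-verb-object, then map to English."""
    subject = [g for g in glosses if g.startswith("IX-")]    # pronoun points
    verb = [g for g in glosses if g in VERBS]
    obj = [g for g in glosses if not g.startswith("IX-") and g not in VERBS]
    words = [GLOSS_TO_ENGLISH.get(g, g.lower()) for g in subject + verb + obj]
    return " ".join(words).capitalize() + "."

# render_phrase(["STORE", "IX-1", "GO"]) -> "I go to the store."
```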
MotionSavvy[23]
MotionSavvy was the first sign-language-to-voice system. The device was created in 2012 by a group from the Rochester Institute of Technology / National Technical Institute for the Deaf and "emerged from the Leap Motion accelerator AXLR8R."[24] The team used a tablet case that leverages the Leap Motion controller. The entire six-person team was made up of Deaf students from the school's deaf-education branch.[25] The device is currently one of only two reciprocal communication devices solely for American Sign Language: it allows deaf individuals to sign to the device and have the signing interpreted into spoken English, and vice versa, taking spoken English and interpreting it into American Sign Language. The device ships for $198. Other features include the ability to interact, live-time feedback, a sign builder, and crowdsign.
The device has been reviewed by everyone from technology magazines to TIME. Wired said, "It wasn't hard to see just how transformative a technology like [UNI] could be" and that "[UNI] struck me as sort of magical." Katy Steinmetz at TIME said, "This technology could change the way deaf people live." Sean Buckley at Engadget mentioned, "UNI could become an incredible communication tool."
References
1. Jaffe, David. "Evolution of mechanical fingerspelling hands for people who are deaf-blind". The Journal of Rehabilitation Research and Development: 236–44.
2. Parton, Becky. "Sign Language Recognition and Translation: A Multidisciplined Approach From the Field of Artificial Intelligence". Journal of Deaf Studies and Deaf Education.
3. Weissmann, J.; Salomon, R. (1999). "Gesture recognition for virtual reality applications using data gloves and neural networks". International Joint Conference on Neural Networks, 1999 (IJCNN '99). 3: 2043–2046. doi:10.1109/IJCNN.1999.832699.
4. Bowden, Richard. "Vision based interpretation of natural sign languages". 3rd International Conference on Computer Vision Systems.
5. "What is the difference between translation and transliteration". english.stackexchange.com. Retrieved 2017-04-06.
6. "SignAloud".
7. "Thomas Pryor and Navid Azodi | Lemelson-MIT Program". lemelson.mit.edu. Retrieved 2017-03-09.
8. "These Gloves Offer A Modern Twist On Sign Language". NPR.org. Retrieved 2017-03-09.
9. "Collegiate Inventors Awarded Lemelson-MIT Student Prize | Lemelson-MIT Program". lemelson.mit.edu. Retrieved 2017-03-09.
10. "UW undergraduate team wins $10,000 Lemelson-MIT Student Prize for gloves that translate sign language | UW Today". www.washington.edu. Retrieved 2017-04-09.
11. "Nonmanual markers in American Sign Language (ASL)". www.lifeprint.com. Retrieved 2017-04-06.
12. "ProDeaf". prodeaf.net. Retrieved 2017-04-09.
13. "ProDeaf". www.prodeaf.net. Retrieved 2017-03-09.
14. "ProDeaf". www.prodeaf.net. Retrieved 2017-03-16.
15. "ProDeaf Tradutor para Libras on the App Store". App Store. Retrieved 2017-03-09.
16. Xilin, Chen (2013). "Kinect Sign Language Translator expands communication possibilities" (PDF). Microsoft Research Connections.
17. Zhou, Ming. "Sign Language Recognition and Translation with Kinect" (PDF). IEEE Conference.
18. "Kinect Sign Language Translator".
19. Zafrulla, Zahoor; Brashear, Helene; Starner, Thad; Hamilton, Harley; Presti, Peter (2011). "American Sign Language Recognition with the Kinect". Proceedings of the 13th International Conference on Multimodal Interfaces. ICMI '11. New York, NY, USA: ACM: 279–286. ISBN 9781450306416. doi:10.1145/2070481.2070532.
20. "SignAll. We translate sign language. Automatically.". www.signall.us. Retrieved 2017-04-09.
21. "Dolphio | Unique IT Technologies". www.dolphio.hu. Retrieved 2017-04-06.
22. "SignAll. We translate sign language. Automatically.". www.signall.us. Retrieved 2017-03-09.
23. "MotionSavvy UNI: 1st sign language to voice system". Indiegogo. Retrieved 2017-03-09.
24. "Rochester Institute of Technology (RIT)". Rochester Institute of Technology (RIT). Retrieved 2017-04-06.
25. Tsotsis, Alexia. "MotionSavvy Is A Tablet App That Understands Sign Language". TechCrunch. Retrieved 2017-04-09.