AMAZON
Amazon's Alexa assistant learns sign language through a modification
When you think of artificial-intelligence assistants, interaction usually happens through voice commands. But what about users who are unable to hear or speak?

With these users in mind, software developer Abhishek Singh has built a modification for the Amazon Echo that, through a connection to a laptop, lets people communicate with the Alexa artificial-intelligence assistant without speech. The system uses a webcam to capture sign-language gestures and relays them to the assistant, and the AI responds both by voice and as text on the screen.

To make this form of communication possible, the developer used machine-learning software to build an algorithm that recognizes the gestures performed and converts them into text and voice. The modification is based on Google's TensorFlow.js, which allows machine-learning applications to be programmed in a JavaScript environment, making them easily compatible with web browsers.

The developer spent much of his time training the model by feeding it the various visual signs used in sign language. According to The Verge, he was unable to find online databases of sign-language gestures during development, forcing him to create the basic gestures for his experiments from scratch.

He also said that feeding new data into the system is easy and that he plans to release the code as open source so that anyone can contribute. If possible, he would like companies such as Amazon to take notice and adopt similar systems in their products, or even his prototype. In the video you can watch a demonstration of Abhishek Singh's technology. Could tomorrow's "voice assistants" also become "gesture assistants"?
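The article does not include Singh's code, but the core idea (matching observed webcam gesture features against trained examples and emitting the recognized sign as text) can be sketched in a few lines. The following is a minimal, hypothetical illustration using a nearest-centroid classifier over made-up landmark vectors; the actual project uses a TensorFlow.js model trained in the browser, and all sign labels and numbers below are invented for the example.

```python
# Hypothetical sketch: recognize a sign from hand-landmark features by
# nearest-centroid matching against a few training examples per sign.
# Labels and feature values are invented; the real project uses TensorFlow.js.
import math

def centroid(vectors):
    """Average a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def euclidean(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def train(examples):
    """Map each sign label to the centroid of its training vectors."""
    return {label: centroid(vecs) for label, vecs in examples.items()}

def classify(model, features):
    """Return the sign label whose centroid is closest to the observed features."""
    return min(model, key=lambda label: euclidean(model[label], features))

# Toy training data: each vector stands in for flattened webcam hand landmarks.
examples = {
    "hello":   [[0.1, 0.9, 0.2], [0.2, 0.8, 0.3]],
    "weather": [[0.9, 0.1, 0.7], [0.8, 0.2, 0.6]],
}
model = train(examples)
text = classify(model, [0.15, 0.85, 0.25])  # features of an observed gesture
print(f"Recognized sign: {text}")  # the text would then be passed to the assistant
```

In the real modification, the recognized text is forwarded to Alexa and the spoken reply is transcribed back to the screen, closing the loop for users who cannot hear or speak.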
Sapo