TECH

Engineers at Cornell University have developed glasses with built-in sonar that recognize a user's silent speech. If a person cannot produce sound but can still move their mouth, they can mouth words by moving their lips.
The EchoSpeech device captures these movements and translates them into voice commands.
The principle behind EchoSpeech is simple: a pair of acoustic emitters sits on one temple of the glasses frame, and a pair of microphones sits on the other. The emitters direct sound waves toward the mouth; as the lips move, the reflected waves form characteristic echo patterns. The result is, in effect, a sonar that reads speech even when no sound is produced.
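The core sonar idea described above, emitting a known signal and timing its reflection, can be illustrated with a minimal sketch. The parameters below (48 kHz sample rate, an 18-21 kHz chirp, a lip distance of about 2 cm) are assumptions for illustration, not values from the article:

```python
import numpy as np

FS = 48_000              # assumed sample rate (Hz), not given in the article
SPEED_OF_SOUND = 343.0   # m/s at room temperature

def make_chirp(duration=0.01, f0=18_000, f1=21_000, fs=FS):
    """Linear chirp in the near-ultrasound band (assumed parameters)."""
    t = np.arange(int(duration * fs)) / fs
    phase = 2 * np.pi * (f0 * t + (f1 - f0) * t**2 / (2 * duration))
    return np.sin(phase)

def echo_delay_samples(tx, rx):
    """Estimate the delay (in samples) of the strongest echo by
    cross-correlating the received signal with the transmitted chirp."""
    corr = np.correlate(rx, tx, mode="valid")
    return int(np.argmax(np.abs(corr)))

# Simulate a reflection from lips ~2 cm away (round trip ~4 cm).
tx = make_chirp()
true_delay = int(round(2 * 0.02 / SPEED_OF_SOUND * FS))
rx = np.zeros(len(tx) + true_delay + 100)
rx[true_delay:true_delay + len(tx)] += 0.3 * tx  # attenuated, delayed echo

est = echo_delay_samples(tx, rx)
distance_m = est / FS * SPEED_OF_SOUND / 2
```

As the lips move, the pattern of such echoes changes over time; it is these changing echo profiles, rather than a single distance, that the device feeds to its recognizer.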
A machine learning system lets the glasses adapt to each new user's anatomy in just a few minutes. Importantly, EchoSpeech needs no Internet connection or access to powerful servers for processing: everything runs locally, so confidentiality is preserved. The device can recognize about 30 different commands with 95% accuracy. Because no power-hungry components are used, EchoSpeech can run for up to 10 hours on a single charge.
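The article does not describe the model itself. As a toy illustration of the calibration step, the sketch below treats each echo profile as a flat feature vector and uses a nearest-centroid classifier as a stand-in for the actual recognizer; the command names and feature dimension are invented for the example:

```python
import numpy as np

class CommandRecognizer:
    """Toy per-user recognizer: stores the mean echo-profile feature
    vector for each command and classifies by nearest centroid.
    (A stand-in for EchoSpeech's actual model, which the article
    does not detail.)"""

    def __init__(self):
        self.centroids = {}  # command name -> mean feature vector

    def calibrate(self, command, examples):
        # A handful of example profiles per command is enough to
        # adapt this toy recognizer to a new user.
        self.centroids[command] = np.mean(examples, axis=0)

    def recognize(self, profile):
        return min(self.centroids,
                   key=lambda c: np.linalg.norm(profile - self.centroids[c]))

# Synthetic calibration data: one prototype profile per command,
# plus small per-user noise.
rng = np.random.default_rng(0)
base = {"play": rng.normal(0, 1, 64), "pause": rng.normal(0, 1, 64)}
rec = CommandRecognizer()
for cmd, proto in base.items():
    rec.calibrate(cmd, proto + rng.normal(0, 0.05, (5, 64)))
result = rec.recognize(base["play"] + rng.normal(0, 0.05, 64))
```

The per-user calibration mirrors the few-minute adaptation the article mentions: the model only needs a small number of labeled examples from the new wearer.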
by: mundophone