Wearable sonar tracks facial expressions


Engineers at Cornell University have developed a wearable device that uses sonar to observe a person's facial expressions and recreate them on a digital avatar. Because it works without cameras, it can alleviate the privacy concerns that camera-based tracking raises. The device, which the team calls EarIO, consists of an earpiece with a microphone and speaker on each side, and can be attached to any standard pair of headphones. Each speaker emits sound pulses outside the range of human hearing towards the wearer's face, and the microphones pick up the echoes. The echo profiles are subtly altered by the way the user's skin moves, stretches and wrinkles when making different facial expressions or talking. Specially trained algorithms recognize these echo profiles, quickly reconstruct the user's facial expression and display it on a digital avatar.

"With the power of artificial intelligence, the algorithm finds complex relationships between muscle movement and facial expressions that the human eye cannot identify. We can use this to infer complex information that is harder to capture across the entire front of the face," said study co-author Ke Li, according to New Atlas.

The research team tested the EarIO system on 16 participants, running the algorithm on a conventional smartphone. The device reconstructed facial expressions about as well as a conventional camera, and background noise such as wind, speech or street sounds did not interfere with tracking.

According to the team, sonar has several advantages over a camera. Acoustic data requires far less power and processing, which also means the device can be smaller and lighter. Cameras can also record a lot of other personal information that the user may not want to share, so sonar can be considerably more private. By letting users reproduce their physical facial expressions on a digital avatar, the technology could find practical uses in games, VR or the metaverse.
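The echo-profiling idea described above can be illustrated with a toy simulation. This is not the team's actual EarIO pipeline; it is a minimal sketch, assuming a near-ultrasonic linear chirp and a single simulated skin echo, with all function names and parameters (make_chirp, echo_profile, the sample rate, the delay) chosen here for illustration. The key point is that cross-correlating the received audio with the transmitted pulse yields a profile whose peaks mark echo delays, and it is the subtle shifts in this profile that a trained model maps to facial expressions.

```python
import numpy as np

def make_chirp(fs=48_000, dur=0.01, f0=18_000, f1=21_000):
    """Linear chirp sweeping a band just above human hearing."""
    t = np.arange(int(fs * dur)) / fs
    return np.sin(2 * np.pi * (f0 * t + (f1 - f0) * t**2 / (2 * dur)))

def echo_profile(tx, rx):
    """Cross-correlate the received signal with the transmitted chirp.
    Peaks in the result mark echo arrival delays; their pattern shifts
    as the reflecting surface (here, the wearer's skin) moves."""
    return np.abs(np.correlate(rx, tx, mode="valid"))

fs = 48_000
tx = make_chirp(fs)

# Simulate one echo arriving after ~1 ms (a surface a few cm away).
delay = int(0.001 * fs)  # 48 samples at 48 kHz
rx = np.zeros(len(tx) + 200)
rx[delay:delay + len(tx)] += 0.5 * tx

profile = echo_profile(tx, rx)
print(int(np.argmax(profile)))  # index of the strongest echo
```

In a real system the profile would be computed continuously and fed to the learned model; here the correlation peak simply recovers the simulated 48-sample delay.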
The team says further work is needed to filter out other movements (such as when the user turns their head) and to streamline the training of the AI algorithm. The research was published in Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies.
