
Published on May 13th, 2018


Voice-based AI services like Siri, Google Assistant may be vulnerable to subsonic commands


At the I/O conference a few days ago, Google's Duplex voice AI impressed with its natural-sounding phone conversations with businesses, demonstrating how it can order meals and book haircut appointments on a user's behalf in fairly complex exchanges. It was even reported that John Hennessy, chairman of Google's parent company Alphabet, said in a later interview that Duplex had passed the Turing test.

AI breakthroughs keep surprising us, and voice assistants are becoming more intelligent and lifelike. The security issues that come with them, however, have also prompted concern and even alarm, ranging from speculation that self-aware AI could change humanity's destiny to the more concrete security risks of AI-powered Internet of Things products. Recently, a new study indicated that Google Assistant and similar voice-command services, such as Amazon Alexa and Apple Siri, may be vulnerable to subsonic command attacks.

According to the study, these subsonic commands cannot be heard by humans, but tools such as Google Assistant, Siri, and Alexa can still detect them, which means commands could easily be issued to your voice assistant without your knowledge.


According to The New York Times, the study was conducted by researchers at the University of California, Berkeley, and Princeton University in the United States, together with Zhejiang University in China. They have created a way to deliver commands to Google Assistant, Siri, and Alexa through audio that the human ear cannot hear. Although people cannot hear this audio, the machine learning software behind the assistants can pick it up and execute it.
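
As a rough illustration of how a command might be hidden above the range of human hearing, the sketch below amplitude-modulates a recorded voice command onto a roughly 20 kHz carrier. This is only a toy demonstration of the general idea, not the technique described in the study, and the input file name `command.wav` is hypothetical.

```python
# Toy sketch: amplitude-modulate a spoken command onto a ~20 kHz carrier so that
# most listeners cannot hear it. This only illustrates the general concept of an
# "inaudible" command; it is not the researchers' actual attack pipeline.
# "command.wav" is a hypothetical local recording of the command.
import numpy as np
from scipy.io import wavfile
from scipy.signal import resample

CARRIER_HZ = 20_000   # near the upper edge of typical human hearing
OUT_RATE = 96_000     # high sample rate needed to represent the carrier

in_rate, cmd = wavfile.read("command.wav")
cmd = cmd.astype(np.float64)
if cmd.ndim > 1:                       # mix multi-channel audio down to mono
    cmd = cmd.mean(axis=1)
peak = np.max(np.abs(cmd))
if peak > 0:
    cmd /= peak                        # normalize to [-1, 1]

# Resample onto the 96 kHz grid, then amplitude-modulate onto the carrier.
cmd = np.clip(resample(cmd, int(len(cmd) * OUT_RATE / in_rate)), -1.0, 1.0)
t = np.arange(len(cmd)) / OUT_RATE
modulated = (0.5 + 0.5 * cmd) * np.sin(2 * np.pi * CARRIER_HZ * t)

wavfile.write("inaudible_command.wav", OUT_RATE, np.int16(modulated * 32767))
```

Whether a given microphone and speech recognizer would actually respond to such a signal depends on the hardware and the assistant, which is what makes the researchers' results notable.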


Researchers claim that cybercriminals could use these subsonic commands to cause various kinds of damage. They could embed the audio in YouTube videos or websites, causing Google Assistant to order products online, open malicious websites, and so on without the user's consent. For devices like Google Home, these hidden commands could switch off security cameras and lights, or unlock doors.

There is no evidence that these subsonic commands have actually been used in an attack. Google, Apple, and Amazon all say they have taken steps to address the problem.

Source: androidauthority


