Open Access · Posted Content

Light Commands: Laser-Based Audio Injection Attacks on Voice-Controllable Systems

TL;DR: A new class of signal injection attacks on microphones that physically converts light to sound is proposed, showing how an attacker can inject arbitrary audio signals into a target microphone by aiming an amplitude-modulated light at the microphone's aperture.
Abstract
We propose a new class of signal injection attacks on microphones by physically converting light to sound. We show how an attacker can inject arbitrary audio signals into a target microphone by aiming an amplitude-modulated light at the microphone's aperture. We then proceed to show how this effect leads to a remote voice-command injection attack on voice-controllable systems. Examining various products that use Amazon's Alexa, Apple's Siri, Facebook's Portal, and Google Assistant, we show how to use light to obtain control over these devices at distances of up to 110 meters and from two separate buildings. Next, we show that user authentication on these devices is often lacking, allowing the attacker to use light-injected voice commands to unlock the target's smartlock-protected front doors, open garage doors, shop on e-commerce websites at the target's expense, or even unlock and start various vehicles connected to the target's Google account (e.g., Tesla and Ford). Finally, we conclude with possible software and hardware defenses against our attacks.
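The core mechanism the abstract describes is intensity (amplitude) modulation of a light source by an audio waveform: the laser's drive current is biased to a DC operating point and swung around it in proportion to the audio signal, so the emitted light intensity tracks the sound. The sketch below illustrates this idea only; the function name and the bias/swing values are illustrative assumptions, not parameters from the paper.

```python
import math

def am_modulate_light(audio, dc_bias=200.0, swing=150.0):
    """Illustrative amplitude modulation of a laser drive current.

    audio: samples in [-1, 1].
    dc_bias: DC operating current (mA, illustrative value).
    swing: peak-to-peak current swing (mA, illustrative value).
    Returns the instantaneous drive current per sample, so the light
    intensity follows the audio waveform around the DC bias point.
    """
    return [dc_bias + (swing / 2.0) * s for s in audio]

# A short 1 kHz test tone sampled at 16 kHz
fs, f = 16000, 1000
tone = [math.sin(2 * math.pi * f * n / fs) for n in range(fs // 100)]
current = am_modulate_light(tone)
```

With a full-scale tone, the current swings between `dc_bias - swing/2` and `dc_bias + swing/2` (here 125 mA to 275 mA); keeping the swing within the laser's linear operating region is what makes the recovered audio intelligible.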


Citations
Posted Content

SoK: The Faults in our ASRs: An Overview of Attacks against Automatic Speech Recognition and Speaker Identification Systems

TL;DR: It is argued that substantial additional work is required to provide adequate mitigations in the speech and speaker recognition space; the paper systematizes existing research and provides a taxonomy through which the community can evaluate future work.
Journal Article

A Survey on Sensor-Based Threats and Attacks to Smart Devices and Applications

TL;DR: This paper presents a detailed survey of existing sensor-based threats and attacks against smart devices, along with the countermeasures that have been developed to secure smart devices from such threats.
Journal Article

When Speakers Are All Ears: Characterizing Misactivations of IoT Smart Speakers

TL;DR: This paper focuses on the privacy risk from smart speaker misactivations, i.e., when the devices activate, transmit, and/or record audio from their environment without the wake word being spoken, and characterizes the sources of these misactivations and their implications for consumers.
Proceedings Article

Symphony: localizing multiple acoustic sources with a single microphone array

TL;DR: Symphony is the first approach to concurrently localize multiple acoustic sources using a smart device with a single microphone array; it includes a geometry-based filtering module to distinguish signals arriving from different sources along different paths and a coherence-based module to identify signals from the same source.
Proceedings Article

EarArray: Defending against DolphinAttack via Acoustic Attenuation

TL;DR: EarArray is a lightweight method that can not only detect inaudible voice commands but also identify the direction of attackers without requiring any extra hardware or hardware modification.
References
Book Chapter

Optical Fault Induction Attacks

TL;DR: A technology to block a new class of attacks on secure microcontrollers and smartcards, whereby a logical 1 or 0 is not encoded by a high or low voltage on a single line, but by (HL) or (LH) on a pair of lines.
Proceedings Article

DolphinAttack: Inaudible Voice Commands

TL;DR: A totally inaudible attack, DolphinAttack, that modulates voice commands on ultrasonic carriers to achieve inaudibility and is validated on popular speech recognition systems, including Siri, Google Now, Samsung S Voice, Huawei HiVoice, Cortana and Alexa.
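DolphinAttack's modulation step is classic amplitude modulation onto an ultrasonic carrier: the baseband voice command forms the envelope of a carrier above the audible range, and nonlinearity in the microphone's circuitry demodulates the envelope back to audible frequencies. A minimal sketch of that modulation step follows; the carrier frequency, sample rate, and modulation depth are illustrative assumptions.

```python
import math

def am_ultrasonic(audio, fs=96000, fc=25000, depth=1.0):
    """Illustrative AM of a baseband audio signal onto an ultrasonic carrier.

    audio: baseband samples in [-1, 1].
    fs: sample rate in Hz; fc: carrier frequency in Hz (above ~20 kHz,
    so the transmitted signal is inaudible); depth: modulation depth.
    All parameter values here are illustrative.
    """
    return [(1.0 + depth * s) * math.cos(2 * math.pi * fc * n / fs)
            for n, s in enumerate(audio)]

samples = [0.0] * 8        # silence: the output is the bare carrier
carrier = am_ultrasonic(samples)
```

Because the carrier sits above the human hearing range, a listener perceives nothing, while a nonlinear receiver recovers the `(1 + depth * audio)` envelope, i.e., the hidden command.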
Proceedings Article

Hidden voice commands

TL;DR: This paper explores how voice interfaces can be attacked with hidden voice commands that are unintelligible to human listeners but are interpreted as commands by devices.
Journal Article

Potential Cyberattacks on Automated Vehicles

TL;DR: This paper investigates potential cyberattacks specific to automated vehicles, with their special needs and vulnerabilities, and analyzes the threats against both autonomous automated vehicles and cooperative automated vehicles.