
Showing papers on "Audio signal processing published in 2014"


Patent
24 Jul 2014
TL;DR: In this paper, a multi-channel parallel scan data signal processor/digitizer processes the analog scan data signals along multiple cascaded multi-stage signal processing channels, to generate digital data signals corresponding to a laser scanned symbol, while a synchronized digital gain control module automatically processes the digital signals in response to start of scan (SOS) signals generated by an SOS detector.
Abstract: A laser scanning symbol reading system includes an analog scan data signal processor for producing digital data signals, wherein during each scanning cycle, a light collection and photo-detection module generates an analog scan data signal corresponding to a laser scanned symbol, a multi-channel parallel scan data signal processor/digitizer processes the analog scan data signal along multiple cascaded multi-stage signal processing channels, to generate digital data signals corresponding thereto, while a synchronized digital gain control module automatically processes the digital data signals in response to start of scan (SOS) signals generated by an SOS detector. Each signal processing channel supports different stages of amplification and filtering using a different set of band-pass filtering and gain parameters in each channel, to produce multiple digital first derivative data signals, and/or multiple digital scan data intensity data signals, having different signal amplitudes and dynamic range characteristics for use in decode processing.

296 citations
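
As a rough illustration of the multi-channel idea (a sketch, not the patent's implementation), the following Python snippet processes one digitized scan signal along several parallel channels, each with its own band-pass and gain parameters, and turns each channel's first-derivative output into a binary data signal. All filter bands, gains, and thresholds here are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, lfilter

fs = 1_000_000  # assumed sample rate of the digitized scan signal, Hz

def channel(x, low_hz, high_hz, gain):
    """One processing channel: band-pass filter, gain, first derivative."""
    b, a = butter(2, [low_hz, high_hz], btype="band", fs=fs)
    y = gain * lfilter(b, a, x)
    return np.diff(y, prepend=y[0])          # digital first-derivative signal

# Simulated scan data: a noisy square-wave "bar/space" pattern.
t = np.arange(20_000) / fs
x = np.sign(np.sin(2 * np.pi * 3_000 * t)) + 0.2 * np.random.randn(t.size)

# Three channels with different band-pass/gain parameters (illustrative values).
params = [(500, 20_000, 1.0), (1_000, 50_000, 2.0), (2_000, 100_000, 4.0)]
derivatives = [channel(x, lo, hi, g) for lo, hi, g in params]

# Threshold step: turn each derivative signal into a binary data signal.
digital = [(d > 0.5 * np.max(np.abs(d))).astype(np.uint8) for d in derivatives]
```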


Journal ArticleDOI
TL;DR: In this article, analog-to-digital and digital-to-analog converters (ADCs and DACs), as well as digital signal processing (DSP) functions for optical coherent modems are examined.
Abstract: We examine analog-to-digital and digital-to-analog converters (ADCs and DACs), as well as digital signal processing (DSP) functions for optical coherent modems.

206 citations


Journal ArticleDOI
TL;DR: Audio is a domain where signal separation has long been considered as a fascinating objective, potentially offering a wide range of new possibilities and experiences in professional and personal contexts, by better taking advantage of audio material and finely analyzing complex acoustic scenes.
Abstract: Audio is a domain where signal separation has long been considered as a fascinating objective, potentially offering a wide range of new possibilities and experiences in professional and personal contexts, by better taking advantage of audio material and finely analyzing complex acoustic scenes. It has thus always been a major area for research in signal separation and an exciting challenge for industrial applications.

156 citations


Patent
Gyeong-Tae Lee, Jong-Bae Kim, Jong-in Jo, Whan-oh Sung, Joo-Yeon Lee, Hwan Shim
12 Sep 2014
TL;DR: In this article, an audio system is presented that consists of a plurality of speaker modules connected to each other, a detection module configured to detect information about the speaker modules and about the user, and a home control module.
Abstract: An audio system, an audio outputting method, and a speaker apparatus are disclosed. The audio system includes a plurality of speaker modules connected to each other, a detection module configured to detect information of the plurality of speaker modules and user information, and a home control module configured to receive an audio signal, process the received audio signal based on the information of the plurality of speaker modules and the user information, and transmit the processed audio signal to the plurality of speaker modules.

147 citations


Patent
08 May 2014
TL;DR: In this paper, digital signal processing is performed on at least two sampled audio signals, one directed into the driver space and one into the passenger space of a vehicle, to determine whether a mobile device was located within the driver space during a predetermined period of time.
Abstract: Systems, methods, and devices for determining the location of one or more mobile devices within a vehicle comprising: (a) a controller located within the vehicle and configured to transmit at least two audio signals, a first audio signal directed generally into a driver space within the vehicle and a second audio signal directed generally into a passenger space within the vehicle, and (b) software code stored in memory of the mobile device and having instructions executable by a processor that performs the steps of: (i) detecting the at least two audio signals, (ii) sampling the at least two audio signals for a predetermined period of time; (iii) performing digital signal processing on the sampled at least two audio signals; and (iv) based on the results of the digital signal processing, determining whether the mobile device was located within the driver space of the vehicle during the predetermined period of time.

127 citations
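
The patent does not specify the signal design, but one simple way to realize this scheme, assuming purely for illustration that the two transmitted signals are tones at distinct frequencies (e.g., 18 kHz toward the driver space, 19 kHz toward the passenger space), is to compare the received energy at the two frequencies on the phone's microphone:

```python
import numpy as np

FS = 44_100          # microphone sample rate, Hz
F_DRIVER = 18_000    # assumed driver-space tone, Hz
F_PASSENGER = 19_000 # assumed passenger-space tone, Hz

def tone_energy(x, freq, fs=FS):
    """Energy of one frequency bin via direct DFT correlation (Goertzel-style)."""
    n = np.arange(x.size)
    ref = np.exp(-2j * np.pi * freq * n / fs)
    return np.abs(np.dot(x, ref)) ** 2 / x.size

def in_driver_space(mic_samples, margin_db=6.0):
    """True if the driver-space tone dominates by a margin (assumed 6 dB)."""
    e_d = tone_energy(mic_samples, F_DRIVER)
    e_p = tone_energy(mic_samples, F_PASSENGER)
    return 10 * np.log10(e_d / max(e_p, 1e-12)) > margin_db
```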


Journal ArticleDOI
TL;DR: A charge-controlled memristor model is derived and the corresponding SPICE model is constructed; the resulting analog memory can provide great storage capacity and high audio quality with a simple, small circuit structure, and its special write and read operations are demonstrated through numerical analysis and circuit simulations.
Abstract: Since the development of the HP memristor, much attention has been paid to studies of memristive devices and applications, particularly memristor-based nonvolatile semiconductor memory. Owing to its unique properties, theoretically, one could restart a memristor-based computer immediately without the need for reloading the data. Further, current memories are mainly binary and can store only ones and zeros, whereas memristors have multilevel states, which means a single memristor unit can replace many binary transistors and realize higher-density memory. It is believed that memristors can also implement analog storage besides binary and multilevel information memory. In this paper, an implementation scheme for analog memristive memory is considered. A charge-controlled memristor model is derived and the corresponding SPICE model is constructed. Special write and read operations are demonstrated through numerical analysis and circuit simulations. In addition, an audio analog record/play system using a memristor crossbar array is designed. This system can provide great storage capacity (long recording time) and high audio quality with a simple small circuit structure. A series of computer simulations and analyses verify the effectiveness of the proposed scheme.

126 citations
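
A minimal numerical sketch of the underlying idea, a charge-controlled memristor used as an analog audio memory cell, is shown below. The linear dopant-drift memristance law follows the classic HP model; the parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

R_ON, R_OFF, Q_MAX = 100.0, 16_000.0, 1e-4    # ohms, ohms, coulombs (assumed)

def memristance(q):
    """Memristance as a function of accumulated charge (linear drift model)."""
    return R_OFF - (R_OFF - R_ON) * np.clip(q / Q_MAX, 0.0, 1.0)

def write(sample):
    """'Write' one audio sample in [0, 1] by driving charge proportional to it."""
    return memristance(sample * Q_MAX)         # resulting resistance state

def read(resistance):
    """'Read' recovers the stored sample from the measured resistance."""
    return (R_OFF - resistance) / (R_OFF - R_ON)

audio = (np.sin(2 * np.pi * np.linspace(0, 4, 64)) + 1) / 2   # toy signal in [0, 1]
recovered = np.array([read(write(s)) for s in audio])         # matches `audio`
```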


Book
21 Apr 2014
TL;DR: In this article, the authors present an introduction to audio analysis, providing theoretical background to many state-of-the-art techniques in the field, including audio feature extraction, audio classification, audio segmentation, and music information retrieval.
Abstract: Introduction to Audio Analysis serves as a standalone introduction to audio analysis, providing theoretical background to many state-of-the-art techniques. It covers the essential theory necessary to develop audio engineering applications, but also uses programming techniques, notably MATLAB, to take a more applied approach to the topic. Basic theory and reproducible experiments are combined to demonstrate theoretical concepts from a practical point of view and provide a solid foundation in the field of audio analysis. Audio feature extraction, audio classification, audio segmentation, and music information retrieval are all addressed in detail, along with material on basic audio processing and frequency domain representations and filtering. Throughout the text, reproducible MATLAB examples are accompanied by theoretical descriptions, illustrating how concepts and equations can be applied to the development of audio analysis systems and components. A blend of reproducible MATLAB code and essential theory enables the reader to delve into the world of audio signals and develop real-world audio applications in various domains.
Practical approach to signal processing: the first book to focus on audio analysis from a signal processing perspective, demonstrating practical implementation alongside theoretical concepts.
Bridging the gap between theory and practice: the authors demonstrate how to apply equations to real-life code examples and resources, giving you the technical skills to develop real-world applications.
Library of MATLAB code: the book is accompanied by a well-documented library of MATLAB functions and reproducible experiments.

117 citations


Patent
12 Mar 2014
TL;DR: In this article, a variation of a method for augmenting a user's listening experience through an audio device includes: detecting the location of the audio device; selecting a set of audio output feedbacks associated with physical sites near that location; identifying a common feature across the feedbacks; and transforming an audio signal into a processed audio signal according to the user's hearing profile and the common feature.
Abstract: One variation of a method for augmenting a listening experience of a user through an audio device includes: detecting a location of the audio device; selecting a set of audio output feedbacks, each audio output feedback in the set of audio output feedback entered by an individual and associated with a physical site proximal to the location; identifying a common feature across audio output feedbacks within the set of audio output feedbacks; transforming an audio signal into a processed audio signal according to a hearing profile of the user and the common feature; and outputting the processed audio signal through the audio device.

114 citations


Patent
07 Feb 2014
TL;DR: In this article, techniques for specifying audio rendering information in a bitstream are described, and a device configured to generate the bitstream may perform various aspects of the techniques, such as identifying an audio renderer used when generating the multi-channel audio content.
Abstract: In general, techniques are described for specifying audio rendering information in a bitstream. A device configured to generate the bitstream may perform various aspects of the techniques. The bitstream generation device may comprise one or more processors configured to specify audio rendering information that includes a signal value identifying an audio renderer used when generating the multi-channel audio content. A device configured to render multi-channel audio content from a bitstream may also perform various aspects of the techniques. The rendering device may comprise one or more processors configured to determine audio rendering information that includes a signal value identifying an audio renderer used when generating the multi-channel audio content, and render a plurality of speaker feeds based on the audio rendering information.

112 citations


Patent
24 Feb 2014
TL;DR: In this paper, techniques for testing a wireless communications device are disclosed, in which an audio test signal is transmitted to the device through a wireless base station simulator via a VoIP application and the resulting output signal, received by a telecoil probe, is compared with the test signal.
Abstract: Techniques for testing a wireless communications device are disclosed. In one particular exemplary embodiment, the techniques may be realized as a system and method for testing a wireless communications device. The method may comprise generating an audio test signal. The audio test signal may be transmitted to a wireless communication device through a wireless base station simulator via a VoIP application. The method may also comprise receiving an output signal, where the output signal may be generated by the wireless communication device and transmitted to a telecoil probe. The method may further comprise processing the output signal by comparing the output signal with the audio test signal.

109 citations


Patent
19 May 2014
TL;DR: In this article, a user input of a variable value is received and, in response, distribution of the audio signals is transitioned from the first mode to the second mode, the extent of the transition being variable based on the value of the user input.
Abstract: Signals in an automobile audio system having at least two near-field speakers located close to an intended position of a listener's head are adjusted such that in a first mode, audio signals are distributed to the near-field speakers according to a first filter that causes the listener to perceive a wide soundstage, and in a second mode, the audio signals are distributed to the near-field speakers according to a second filter that causes the listener to perceive a narrow soundstage. A user input of a variable value is received and, in response, distribution of the audio signals is transitioned from the first mode to the second mode, the extent of the transition being variable based on the value of the user input.
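
A minimal sketch of the variable transition between the two modes: the same audio is passed through a "wide" filter and a "narrow" filter, and a user control in [0, 1] blends between the two near-field speaker feeds. The FIR coefficients below are placeholders, not real soundstage filters.

```python
import numpy as np
from scipy.signal import lfilter

wide_fir = np.array([0.5, 0.3, 0.2])     # placeholder first-mode (wide) filter
narrow_fir = np.array([1.0])             # placeholder second-mode (narrow) filter

def speaker_feed(audio, user_value):
    """user_value = 0 -> fully wide soundstage, 1 -> fully narrow."""
    wide = lfilter(wide_fir, [1.0], audio)
    narrow = lfilter(narrow_fir, [1.0], audio)
    return (1.0 - user_value) * wide + user_value * narrow
```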

Patent
26 Mar 2014
TL;DR: In this article, an audio normalization gain value is applied to an audio signal to produce a normalized signal, and the normalized signal is processed to compute dynamic range control (DRC) gain values in accordance with a selected one of several pre-defined DRC characteristics.
Abstract: An audio normalization gain value is applied to an audio signal to produce a normalized signal. The normalized signal is processed to compute dynamic range control (DRC) gain values in accordance with a selected one of several pre-defined DRC characteristics. The audio signal is encoded, and the DRC gain values are provided as metadata associated with the encoded audio signal. Several other embodiments are also described and claimed.
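
A minimal sketch of the gain pipeline, not the patented encoder: a normalization gain is applied, then per-block DRC gains are computed from a pre-defined compression characteristic, here reduced to a single threshold/ratio pair with illustrative values, and kept as metadata alongside the otherwise untouched signal.

```python
import numpy as np

def drc_gains(audio, norm_gain_db, threshold_db=-20.0, ratio=4.0, block=1024):
    """Return per-block DRC gain values (dB) for the normalized signal."""
    x = audio * 10 ** (norm_gain_db / 20)            # normalized signal
    gains_db = []
    for i in range(0, len(x) - block + 1, block):
        rms = np.sqrt(np.mean(x[i:i + block] ** 2)) + 1e-12
        level_db = 20 * np.log10(rms)
        over = max(0.0, level_db - threshold_db)     # dB above threshold
        gains_db.append(-over * (1.0 - 1.0 / ratio)) # gain reduction, dB
    return np.array(gains_db)                        # DRC metadata per block
```

A decoder can then apply (or ignore) these gains at playback time, which is the point of carrying them as metadata rather than baking the compression into the encoded audio.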

Proceedings ArticleDOI
04 May 2014
TL;DR: A multi-resolution approach based on the discrete wavelet transform and linear prediction filtering is presented that improves the time resolution and performance of onset detection in different musical scenarios and significantly outperforms existing methods in terms of F-measure.
Abstract: A plethora of different onset detection methods have been proposed in recent years. However, few attempts have been made at widely applicable approaches that achieve superior performance over different types of music with considerable temporal precision. In this paper, we present a multi-resolution approach based on the discrete wavelet transform and linear prediction filtering that improves the time resolution and performance of onset detection in different musical scenarios. In our approach, wavelet coefficients and forward prediction errors are combined with auditory spectral features and then processed by a bidirectional Long Short-Term Memory recurrent neural network, which acts as a reduction function. The network is trained with a large database of onset data covering various genres and onset types. We compare results with state-of-the-art methods on a dataset that includes the Bello, Glover and ISMIR 2004 Ballroom sets, and we conclude that our approach significantly outperforms existing methods in terms of F-measure. For pitched non-percussive music, an absolute improvement of 7.5% is reported.
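
A minimal sketch of one ingredient of this approach, forward linear prediction errors as an onset cue: samples that a short-term linear predictor fits badly (e.g., at note attacks) yield large residual energy. The wavelet coefficients, auditory spectral features, and BLSTM reduction network of the full method are omitted; frame size and model order are illustrative.

```python
import numpy as np

def lp_residual_novelty(x, frame=1024, hop=512, order=12):
    """Per-frame forward linear prediction error energy; peaks suggest onsets."""
    novelty = []
    for i in range(0, len(x) - frame, hop):
        f = x[i:i + frame]
        # Least-squares linear predictor: f[n] ~ sum_k a[k] * f[n-1-k]
        rows = np.stack([f[order - k - 1:frame - k - 1] for k in range(order)],
                        axis=1)
        target = f[order:]
        a, *_ = np.linalg.lstsq(rows, target, rcond=None)
        err = target - rows @ a                  # forward prediction error
        novelty.append(np.mean(err ** 2))
    return np.array(novelty)
```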

Journal ArticleDOI
TL;DR: It is shown how more complex speech material should result in a range typically varying between 40 ms audio lead and 200 ms audio lag, and how this natural coordination is reflected in the so-called temporal integration window for audiovisual speech perception.
Abstract: An increasing number of neuroscience papers capitalize on the assumption published in this journal that visual speech would be typically 150 ms ahead of auditory speech. It happens that the estimation of audiovisual asynchrony in the reference paper is valid only in very specific cases, for isolated consonant-vowel syllables or at the beginning of a speech utterance, in what we call "preparatory gestures". However, when syllables are chained in sequences, as they are typically in most parts of a natural speech utterance, asynchrony should be defined in a different way. This is what we call "comodulatory gestures" providing auditory and visual events more or less in synchrony. We provide audiovisual data on sequences of plosive-vowel syllables (pa, ta, ka, ba, da, ga, ma, na) showing that audiovisual synchrony is actually rather precise, varying between 20 ms audio lead and 70 ms audio lag. We show how more complex speech material should result in a range typically varying between 40 ms audio lead and 200 ms audio lag, and we discuss how this natural coordination is reflected in the so-called temporal integration window for audiovisual speech perception. Finally we present a toy model of auditory and audiovisual predictive coding, showing that visual lead is actually not necessary for visual prediction.

Patent
13 May 2014
TL;DR: In this paper, the authors present an apparatus and method of controlling and altering the acoustic output of audio devices used in conjunction with a computing device, including a wireless speaker communication method and a computing device software application that are configured to work together to more easily set up and deliver audio information from an audio source to one or more portable audio speakers.
Abstract: Embodiments of the disclosure may provide an apparatus and method of controlling and altering the acoustic output of audio devices that are used in conjunction with a computing device. In some embodiments, the apparatus and methods include a wireless speaker communication method and a computing device software application that are configured to work together to more easily set up and deliver audio information from an audio source to one or more portable audio speakers.

Patent
06 May 2014
TL;DR: In this paper, a renderer generates audio transducer drive signals for the audio transducers from the audio data, and a clusterer clusters the transducers into a set of clusters.
Abstract: An audio apparatus comprises a receiver (605) for receiving audio data and audio transducer position data for a plurality of audio transducers (603). A renderer (607) renders the audio data by generating audio transducer drive signals for the audio transducers (603) from the audio data. Furthermore, a clusterer (609) clusters the audio transducers into a set of clusters in response to the audio transducer position data and to distances between audio transducers in accordance with a distance metric. A render controller (611) adapts the rendering in response to the clustering. The apparatus may for example select array processing techniques for specific subsets that contain audio transducers that are sufficiently close. The approach may allow automatic adaptation to audio transducer configurations thereby e.g. allowing a user increased flexibility in positioning loudspeakers.
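
A minimal sketch of the clustering step: loudspeakers whose pairwise distance (Euclidean here; the patent allows other distance metrics) falls below a threshold end up in the same cluster, so array processing can be applied to sufficiently dense clusters. The 0.5 m threshold and positions are assumptions for illustration.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

positions = np.array([[0.0, 0.0], [0.3, 0.0], [0.6, 0.1],   # a close trio
                      [3.0, 2.0], [5.0, 0.5]])              # two isolated units

labels = fcluster(linkage(positions, method="single"),
                  t=0.5, criterion="distance")               # cluster by distance
for cluster_id in np.unique(labels):
    members = np.where(labels == cluster_id)[0]
    mode = "array processing" if members.size > 1 else "single-speaker rendering"
    print(f"cluster {cluster_id}: speakers {members.tolist()} -> {mode}")
```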

Journal ArticleDOI
TL;DR: This paper uses datasets of naturalistic affective expressions continuously labeled over time and over different dimensions to analyze the transitions between levels of those dimensions and suggests modeling them as first-order Markov models, which are integrated in a multistage approach.
Abstract: Naturalistic affective expressions change at a rate much slower than the typical rate at which video or audio is recorded. This increases the probability that consecutive recorded instants of expressions represent the same affective content. In this paper, we exploit such a relationship to improve the recognition performance of continuous naturalistic affective expressions. Using datasets of naturalistic affective expressions (AVEC 2011 audio and video dataset, PAINFUL video dataset) continuously labeled over time and over different dimensions, we analyze the transitions between levels of those dimensions (e.g., transitions in pain intensity level). We use an information theory approach to show that the transitions occur very slowly and hence suggest modeling them as first-order Markov models. The dimension levels are considered to be the hidden states in the Hidden Markov Model (HMM) framework. Their discrete transition and emission matrices are trained by using the labels provided with the training set. The recognition problem is converted into a best path-finding problem to obtain the best hidden states sequence in HMMs. This is a key difference from previous use of HMMs as classifiers. Modeling of the transitions between dimension levels is integrated in a multistage approach, where the first level performs a mapping between the affective expression features and a soft decision value (e.g., an affective dimension level), and further classification stages are modeled as HMMs that refine that mapping by taking into account the temporal relationships between the output decision labels. The experimental results for each of the unimodal datasets show overall performance to be significantly above that of a standard classification system that does not take into account temporal relationships. In particular, the results on the AVEC 2011 audio dataset outperform all other systems presented at the international competition.
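
A minimal sketch of the paper's key idea: discrete transition and emission matrices are estimated from labeled training sequences by counting, and recognition becomes Viterbi best-path decoding over the affect-level states (rather than using HMMs as per-class likelihood scorers). Observations here are assumed to be quantized first-stage decision values.

```python
import numpy as np

def estimate_counts(state_seqs, obs_seqs, n_states, n_obs):
    """Count-based transition/emission matrices with add-one smoothing."""
    A = np.ones((n_states, n_states))
    B = np.ones((n_states, n_obs))
    for states, obs in zip(state_seqs, obs_seqs):
        for s0, s1 in zip(states[:-1], states[1:]):
            A[s0, s1] += 1
        for s, o in zip(states, obs):
            B[s, o] += 1
    return A / A.sum(1, keepdims=True), B / B.sum(1, keepdims=True)

def viterbi(obs, A, B, pi):
    """Most likely hidden affect-level sequence for quantized observations."""
    n_states, T = A.shape[0], len(obs)
    logd = np.log(pi) + np.log(B[:, obs[0]])
    back = np.zeros((T, n_states), dtype=int)
    for t in range(1, T):
        scores = logd[:, None] + np.log(A)      # [from_state, to_state]
        back[t] = scores.argmax(0)
        logd = scores.max(0) + np.log(B[:, obs[t]])
    path = [int(logd.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(back[t, path[-1]])
    return path[::-1]

# Example: labels quantized to 5 levels, observations to 10 bins:
# A, B = estimate_counts(train_states, train_obs, 5, 10)
# path = viterbi(test_obs, A, B, pi=np.full(5, 0.2))
```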

Patent
11 Feb 2014
TL;DR: In this paper, the authors proposed a method for augmenting sound using a mobile computing device and a connected audio output device, based on a response to the tone entered by a user, generating a hearing profile for the user and corresponding to the mobile computing devices and the audio output devices.
Abstract: One variation of a method for augmenting sound includes: at a mobile computing device and a connected audio output device, outputting a tone in a hearing test; based on a response to the tone entered by a user, generating a hearing profile for the user and corresponding to the mobile computing device and the audio output device; receiving an audio signal; qualifying the audio signal as a particular audio type from a set of audio types; selecting, from a set of sound profiles, a particular sound profile corresponding to the particular audio type; at the mobile computing device, transforming the audio signal into a processed audio signal according to the particular sound profile and the hearing profile of the user; and outputting the processed audio signal at the connected audio output device.

Patent
17 Sep 2014
TL;DR: In this paper, a beamforming-based automatic speech recognition (ASR) system is presented, in which ASR processing operates on multiple channels of audio received from a beamformer, each channel isolating audio in a particular direction.
Abstract: In an automatic speech recognition (ASR) processing system, ASR processing may be configured to process speech based on multiple channels of audio received from a beamformer. The ASR processing system may include a microphone array and the beamformer to output multiple channels of audio such that each channel isolates audio in a particular direction. The multichannel audio signals may include spoken utterances/speech from one or more speakers as well as undesired audio, such as noise from a household appliance. The ASR device may simultaneously perform speech recognition on the multi-channel audio to provide more accurate speech recognition results.
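
A minimal sketch of the front end only: a delay-and-sum beamformer steers a uniform linear microphone array toward several look directions, producing one channel per direction; each channel would then be fed to the ASR decoder. Array geometry and look angles are assumptions for illustration.

```python
import numpy as np

FS, C, SPACING = 16_000, 343.0, 0.05          # Hz, m/s, meters between mics

def delay_and_sum(mics, angle_deg):
    """mics: (n_mics, n_samples) array; returns one beamformed channel.
    Convention: mic m receives a wavefront from angle_deg delayed by delays[m],
    so each channel is advanced by its arrival delay to align the target."""
    n_mics, n = mics.shape
    delays = np.arange(n_mics) * SPACING * np.sin(np.radians(angle_deg)) / C
    freqs = np.fft.rfftfreq(n, 1 / FS)
    out = np.zeros(n)
    for m in range(n_mics):                   # fractional delay via frequency domain
        spec = np.fft.rfft(mics[m]) * np.exp(2j * np.pi * freqs * delays[m])
        out += np.fft.irfft(spec, n)
    return out / n_mics

# One isolated channel per look direction, e.g. every 60 degrees:
# channels = [delay_and_sum(mic_array_audio, a) for a in (-60, 0, 60)]
```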

Patent
23 Dec 2014
TL;DR: In this paper, a system and method are provided for selecting an audio tuning profile to apply to audio signals, generating a sound field acoustically optimized at least at one listening position in a listening environment, such as a vehicle passenger compartment.
Abstract: A system and method is provided for selecting an audio tuning profile to apply to audio signals to generate a sound field acoustically optimized at least at one listening position in a listening environment, such as a vehicle passenger compartment. Each audio tuning profile may include a number of audio settings to be applied to an audio signal at one or more audio loudspeaker channels. The audio tuning profile may be selected based on the content or the source of the audio data signals. Thus, audio tuning may be based on the context of the audio.

Patent
Pei Xiang
12 Feb 2014
TL;DR: In this paper, the authors describe techniques for capturing multi-channel audio data, in which a device comprising one or more processors analyzes captured audio data to identify audio objects and analyzes video data, captured concurrently with the audio data, to identify video objects.
Abstract: In general, techniques are described for capturing multi-channel audio data. A device comprising one or more processors may be configured to implement the techniques. The processors may analyze captured audio data to identify audio objects, and analyze video data captured concurrent to the capture of the audio data to identify video objects. The processors may then associate at least one of the audio objects with at least one of the video objects, and generate multi-channel audio data from the audio data based on the association of the at least one of audio objects with the at least one of the video objects.

Patent
24 Jun 2014
TL;DR: In this article, an audio listening device is programmed to receive an incoming ambient audio signal indicative of an external ambient sound and to store a plurality of desired audio triggers, which are then compared to the external audio signal.
Abstract: In at least one embodiment, a headphone listening apparatus including an audio listening device is provided. The audio listening device is programmed to receive an incoming ambient audio signal indicative of an external ambient sound and to store a plurality of desired audio triggers. The audio listening device is further programmed to compare the external ambient sound to the plurality of desired audio triggers and to transmit a notification signal to headphones in response to the ambient sound being generally similar to a first desired audio trigger of the plurality of desired audio triggers. The headphones are programmed to provide an audio alert to a user to indicate the presence of the external ambient sound.

Patent
14 Aug 2014
TL;DR: In this paper, a system for phase-noise-mitigated communication is proposed, including a primary transmitter that converts a digital transmit signal to an analog transmit signal, a primary receiver that converts an analog receive signal to a digital receive signal, and analog and digital self-interference cancellers that combine cancellation signals with the analog and digital receive signals, respectively.
Abstract: A system for phase noise mitigated communication including a primary transmitter that converts a digital transmit signal to an analog transmit signal, a primary receiver that receives an analog receive signal and converts the analog receive signal to a digital receive signal, an analog self-interference canceller that samples the analog transmit signal, generates an analog self-interference cancellation signal based on the analog transmit signal, and combines the analog self-interference cancellation signal with the analog receive signal and a digital self-interference canceller that samples the digital transmit signal, generates a digital self-interference cancellation signal based on the digital transmit signal, and combines the digital self-interference cancellation signal with the digital receive signal.
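
A minimal sketch of the digital canceller only (the analog canceller is omitted): a normalized-LMS adaptive FIR filter samples the known digital transmit signal, estimates the self-interference channel, and subtracts its reconstruction from the digital receive signal. Tap count and step size are illustrative.

```python
import numpy as np

def nlms_cancel(tx, rx, taps=32, mu=0.5):
    """Subtract an adaptive FIR estimate of the self-interference from rx."""
    w = np.zeros(taps)
    out = np.copy(rx)
    for n in range(taps, len(rx)):
        x = tx[n - taps:n][::-1]               # most recent transmit samples
        err = rx[n] - w @ x                    # residual after cancellation
        out[n] = err
        w += mu * err * x / (x @ x + 1e-9)     # normalized LMS update
    return out
```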

Journal ArticleDOI
TL;DR: A unified model is presented for sound source localization and separation based on Bayesian nonparametrics that achieves state-of-the-art sound source separation quality and has more robust performance on the source number estimation under reverberant environments.
Abstract: Sound source localization and separation from a mixture of sounds are essential functions for computational auditory scene analysis. The main challenges are designing a unified framework for joint optimization and estimating the sound sources under auditory uncertainties such as reverberation or unknown number of sounds. Since sound source localization and separation are mutually dependent, their simultaneous estimation is required for better and more robust performance. A unified model is presented for sound source localization and separation based on Bayesian nonparametrics. Experiments using simulated and recorded audio mixtures show that a method based on this model achieves state-of-the-art sound source separation quality and has more robust performance on the source number estimation under reverberant environments.

Patent
11 Aug 2014
TL;DR: In this paper, a system and method for non-linear digital self-interference cancellation are presented, including a pre-processor that generates a pre-processed digital transmit signal from the digital transmit signal of a full-duplex radio, a non-linear transformer, a transform adaptor that sets the transform configuration of the non-linear transformer, and a post-processor that combines the non-linear self-interference signal with a digital receive signal of the full-duplex radio.
Abstract: A system and method for non-linear digital self-interference cancellation including a pre-processor that generates a first pre-processed digital transmit signal from a digital transmit signal of a full-duplex radio, a non-linear transformer that transforms the first pre-processed digital transmit signal into a non-linear self-interference signal according to a transform configuration, a transform adaptor that sets the transform configuration of the non-linear transformer, and a post-processor that combines the non-linear self-interference signal with a digital receive signal of the full-duplex radio.

Patent
Boby Iyer
13 Jun 2014
TL;DR: In this paper, a smart audio output volume control is proposed that correlates the volume level of an audio output response to that of the audio input that triggered its generation, outputting the response at a volume level correlated to the input volume level.
Abstract: A method implemented by processing and other audio components of an electronic device provides a smart audio output volume control, which correlates the volume level of an audio output to that of the audio input that triggered generation of the audio output. According to one aspect, the method includes: receiving an audio input that triggers generation of an audio output response from the user device; determining an input volume level corresponding to the received audio input; and outputting the audio output response at an output volume level correlated to the input volume level. The media output volume control level of the device is changed from a preset normal level, including from a mute setting, to the determined output level for outputting the audio output. Afterwards, the media output volume control level is automatically reset to a preset volume level for normal media output.
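
A minimal sketch of the correlation step, under assumed float samples in [-1, 1]: measure the RMS level of the triggering input in dBFS and map it linearly onto a 0..100 output volume scale. The mapping range and limits are assumptions.

```python
import numpy as np

def input_level_dbfs(samples):
    """RMS level of the input in dBFS (samples assumed in [-1, 1])."""
    rms = np.sqrt(np.mean(samples.astype(float) ** 2)) + 1e-12
    return 20 * np.log10(rms)

def correlated_output_volume(samples, min_db=-50.0, max_db=-10.0):
    """Map input level to a 0..100 output volume, clamped to an assumed range."""
    level = np.clip(input_level_dbfs(samples), min_db, max_db)
    return round(100 * (level - min_db) / (max_db - min_db))
```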

Patent
24 Oct 2014
TL;DR: In this paper, a method of analyzing audio signals, such as for a drive monitoring system, includes recording an audio signal from a mobile device, the audio signal including a background audio stream and a residual audio signal.
Abstract: A method of analyzing audio signals, such as for a drive monitoring system, includes recording an audio signal from a mobile device, the audio signal including a background audio stream and a residual audio signal. Communication with an audio database is performed to obtain a reference signal. If a match between the background audio stream and the reference signal is determined, a time alignment between the background audio stream and the reference is computed. At least a portion of the recorded audio signal is aligned with the reference signal using the time alignment. The background audio stream is canceled from the recorded audio signal, to result in the residual audio stream. A computer processor is used to determine a driving behavior factor from the residual audio stream.
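
A minimal sketch of the alignment-and-cancel step: find the lag between the recording and the known reference via cross-correlation, align them, fit a single least-squares gain, and subtract to expose the residual stream (e.g., cabin sounds). A real system would likely use sub-sample alignment and adaptive filtering.

```python
import numpy as np
from scipy.signal import correlate, correlation_lags

def residual_stream(recording, reference):
    """Cancel the background reference from the recording after time alignment."""
    corr = correlate(recording, reference, mode="full")
    lags = correlation_lags(len(recording), len(reference), mode="full")
    lag = int(lags[np.argmax(np.abs(corr))])              # time alignment
    start = max(lag, 0)                                   # offset into recording
    ref = reference[max(-lag, 0):]                        # offset into reference
    n = min(len(recording) - start, len(ref))
    rec, ref = recording[start:start + n], ref[:n]
    gain = np.dot(rec, ref) / (np.dot(ref, ref) + 1e-12)  # least-squares gain
    return rec - gain * ref                               # background canceled
```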

Patent
18 Feb 2014
TL;DR: In this article, a personal audio device including multiple output transducers for reproducing different frequency bands of a source audio signal includes an adaptive noise canceling (ANC) circuit that adaptively generates an anti-noise signal for each of the transducers from at least one microphone signal that measures the ambient audio.
Abstract: A personal audio device including multiple output transducers for reproducing different frequency bands of a source audio signal, includes an adaptive noise canceling (ANC) circuit that adaptively generates an anti-noise signal for each of the transducers from at least one microphone signal that measures the ambient audio to generate anti-noise signals. The anti-noise signals are generated by separate adaptive filters such that the anti-noise signals cause substantial cancelation of the ambient audio at their corresponding transducers. The use of separate adaptive filters provides low-latency operation, since a crossover is not needed to split the anti-noise into the appropriate frequency bands. The adaptive filters can be implemented or biased to generate anti-noise only in the frequency band corresponding to the particular adaptive filter. The anti-noise signals are combined with source audio of the appropriate frequency band to provide outputs for the corresponding transducers.

Journal ArticleDOI
TL;DR: This work presents a novel feature extraction approach called nonuniform scale-frequency map for environmental sound classification in home automation and reveals that the proposed approach is superior to the other time-frequency methods.
Abstract: This work presents a novel feature extraction approach called nonuniform scale-frequency map for environmental sound classification in home automation. For each audio frame, important atoms from the Gabor dictionary are selected by using the Matching Pursuit algorithm. After the system disregards phase and position information, the scale and frequency of the atoms are extracted to construct a scale-frequency map. Principal Component Analysis (PCA) and Linear Discriminate Analysis (LDA) are then applied to the scale-frequency map, subsequently generating the proposed feature. During the classification phase, a segment-level multiclass Support Vector Machine (SVM) is operated. Experiments on a 17-class sound database indicate that the proposed approach can achieve an accuracy rate of 86.21%. Furthermore, a comparison reveals that the proposed approach is superior to the other time-frequency methods.
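
A minimal sketch of the feature idea: a tiny Gabor dictionary, a few matching pursuit iterations per frame, and a scale-frequency histogram built from the selected atoms with phase and position discarded (here atoms are frame-centered, so position is trivially ignored; real matching pursuit also searches positions). Dictionary sizes are illustrative, and the paper's PCA/LDA stages and SVM classifier are omitted.

```python
import numpy as np

FRAME = 256
SCALES = [16, 32, 64, 128]                       # atom lengths (samples)
FREQS = np.linspace(0.01, 0.45, 8)               # normalized frequencies

def gabor_atom(scale, freq, frame=FRAME):
    """Unit-norm Gaussian-windowed cosine atom centered in the frame."""
    n = np.arange(frame)
    env = np.exp(-0.5 * ((n - frame // 2) / (scale / 2)) ** 2)
    atom = env * np.cos(2 * np.pi * freq * n)
    return atom / (np.linalg.norm(atom) + 1e-12)

DICT = [(s_i, f_i, gabor_atom(s, f))
        for s_i, s in enumerate(SCALES) for f_i, f in enumerate(FREQS)]

def scale_freq_map(frame_signal, n_atoms=5):
    """Per-frame scale-frequency histogram from matching pursuit atom picks."""
    residual = frame_signal.astype(float).copy()
    hist = np.zeros((len(SCALES), len(FREQS)))
    for _ in range(n_atoms):                     # matching pursuit iterations
        s_i, f_i, atom, coef = max(
            ((si, fi, a, np.dot(residual, a)) for si, fi, a in DICT),
            key=lambda t: abs(t[3]))
        residual -= coef * atom
        hist[s_i, f_i] += abs(coef)              # position/phase ignored
    return hist.ravel()                          # per-frame feature vector
```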

Patent
16 May 2014
TL;DR: In this article, a system that produces haptic effects from low-frequency effects audio signals is described, in which the converted haptic signal is sent to a haptic output device to produce one or more haptic effects.
Abstract: A system is provided that produces haptic effects. The system receives an audio signal that includes a low-frequency effects audio signal. The system further extracts the low-frequency effects audio signal from the audio signal. The system further converts the low-frequency effects audio signal into a haptic signal by shifting frequencies of the low-frequency effects audio signal to frequencies within a target frequency range of a haptic output device. The system further sends the haptic signal to the haptic output device, where the haptic signal causes the haptic output device to output one or more haptic effects.
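
A minimal sketch of the conversion chain: low-pass the mix to extract the low-frequency effects band, then move it into an assumed actuator band around 200 Hz by modulating a carrier with the LFE envelope, one simple way to realize the frequency shift the patent describes. Cutoff and target frequencies are illustrative.

```python
import numpy as np
from scipy.signal import butter, hilbert, sosfilt

FS = 44_100  # assumed audio sample rate, Hz

def lfe_to_haptic(audio, lfe_cut=120.0, carrier_hz=200.0):
    """Extract the LFE band and shift it into the haptic actuator's band."""
    sos = butter(4, lfe_cut, btype="low", fs=FS, output="sos")
    lfe = sosfilt(sos, audio)                       # extracted LFE band
    env = np.abs(hilbert(lfe))                      # LFE amplitude envelope
    t = np.arange(len(audio)) / FS
    return env * np.sin(2 * np.pi * carrier_hz * t) # drive signal for actuator
```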