Topic

Noise

About: Noise is a research topic. Over the lifetime, 5111 publications have been published within this topic, receiving 69407 citations.


Papers
Journal ArticleDOI
TL;DR: It is shown that under certain conditions the performance of a suboptimal detector may be improved by adding noise to the received data.
Abstract: It is shown that under certain conditions the performance of a suboptimal detector may be improved by adding noise to the received data. The reasons for this counterintuitive result are explained and a computer simulation example given.
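The effect described here is often discussed under the heading of noise-enhanced detection or stochastic resonance, and a toy simulation makes it concrete. The sketch below is not the paper's experiment; the fixed threshold, signal amplitude, and noise levels are illustrative assumptions. A hard-threshold detector whose threshold lies above the signal amplitude never declares the signal present, so its correct-decision rate is stuck at chance; a moderate amount of added independent noise lets the signal occasionally push observations over the threshold and improves that rate, while too much noise degrades it again.

```python
import numpy as np

rng = np.random.default_rng(0)

# Suboptimal detector: decide "signal present" when a single sample exceeds a
# fixed threshold that sits ABOVE the signal amplitude (illustrative values).
A, threshold = 0.5, 1.0
n_trials = 200_000
present = rng.integers(0, 2, n_trials).astype(bool)            # equiprobable hypotheses
received = np.where(present, A, 0.0) + 0.05 * rng.standard_normal(n_trials)

def correct_rate(x, added_noise_std):
    """Fraction of correct decisions after adding independent zero-mean noise."""
    noisy = x + added_noise_std * rng.standard_normal(x.size)
    return np.mean((noisy > threshold) == present)

for sigma in (0.0, 0.2, 0.5, 1.0, 3.0):
    print(f"added-noise std {sigma:3.1f}: correct-decision rate = {correct_rate(received, sigma):.3f}")
```

With these numbers the rate rises from 0.5 (the detector always says "absent") to roughly 0.57 for an added-noise standard deviation around 0.5 to 1, then drifts back toward chance as the noise grows, which is the qualitative behavior the paper describes.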

207 citations

Proceedings ArticleDOI
15 Apr 2018
TL;DR: This is the first audiovisual fusion model which simultaneously learns to extract features directly from the image pixels and audio waveforms and performs within-context word recognition on a large publicly available dataset (LRW).
Abstract: Several end-to-end deep learning approaches have recently been presented which extract either audio or visual features from the input images or audio signals and perform speech recognition. However, research on end-to-end audiovisual models is very limited. In this work, we present an end-to-end audiovisual model based on residual networks and Bidirectional Gated Recurrent Units (BGRUs). To the best of our knowledge, this is the first audiovisual fusion model which simultaneously learns to extract features directly from the image pixels and audio waveforms and performs within-context word recognition on a large publicly available dataset (LRW). The model consists of two streams, one for each modality, which extract features directly from mouth regions and raw waveforms. The temporal dynamics in each stream/modality are modeled by a 2-layer BGRU, and the fusion of multiple streams/modalities takes place via another 2-layer BGRU. A slight improvement in the classification rate over end-to-end audio-only and MFCC-based models is reported in clean audio conditions and at low levels of noise. In the presence of high levels of noise, the end-to-end audiovisual model significantly outperforms both audio-only models.
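A structural sketch of such a two-stream model appears below. It is not the authors' implementation: the ResNet image frontend and raw-waveform frontend are replaced by placeholder linear layers, and the feature width, 29-frame clip length, and 500-class output (the LRW vocabulary size) are assumptions for illustration. Only the overall shape follows the abstract: one 2-layer BGRU per modality, with fusion through another 2-layer BGRU.

```python
import torch
import torch.nn as nn

class TwoStreamAVSketch(nn.Module):
    """Hypothetical two-stream audiovisual word classifier (dimensions illustrative)."""

    def __init__(self, feat_dim=256, hidden=128, num_classes=500):
        super().__init__()
        # Placeholders standing in for the ResNet (mouth ROIs) and the
        # raw-waveform encoder described in the paper.
        self.video_frontend = nn.Linear(96 * 96, feat_dim)   # one flattened mouth crop per frame
        self.audio_frontend = nn.Linear(640, feat_dim)       # one raw-audio chunk per video frame
        self.video_bgru = nn.GRU(feat_dim, hidden, num_layers=2,
                                 bidirectional=True, batch_first=True)
        self.audio_bgru = nn.GRU(feat_dim, hidden, num_layers=2,
                                 bidirectional=True, batch_first=True)
        self.fusion_bgru = nn.GRU(4 * hidden, hidden, num_layers=2,
                                  bidirectional=True, batch_first=True)
        self.classifier = nn.Linear(2 * hidden, num_classes)

    def forward(self, video, audio):
        # video: (batch, T, 96*96), audio: (batch, T, 640)
        v, _ = self.video_bgru(self.video_frontend(video))
        a, _ = self.audio_bgru(self.audio_frontend(audio))
        fused, _ = self.fusion_bgru(torch.cat([v, a], dim=-1))  # concatenate the two streams
        return self.classifier(fused[:, -1])                    # classify from the last time step

model = TwoStreamAVSketch()
logits = model(torch.randn(2, 29, 96 * 96), torch.randn(2, 29, 640))
print(logits.shape)  # torch.Size([2, 500])
```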

206 citations

PatentDOI
Dimitri Kanevsky1, Stephane H. Maes1
TL;DR: A system and method for indexing segments of audio/multimedia files and data streams for storage in a database according to audio information such as speaker identity, the background environment and channel, and/or the transcription of the spoken utterances.
Abstract: A system and method for indexing segments of audio/multimedia files and data streams for storage in a database according to audio information such as speaker identity, the background environment and channel (music, street noise, car noise, telephone, studio noise, speech plus music, speech plus noise, speech over speech), and/or the transcription of the spoken utterances. The content or topic of the transcribed text can also be determined using natural language understanding to index based on the context of the transcription. A user can then retrieve desired segments of the audio file from the database by generating a query having one or more desired parameters based on the indexed information.
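A minimal in-memory sketch of the indexing-and-retrieval idea follows. The segment schema (file, time span, speaker label, background label, transcript) and the query interface are assumptions for illustration, not taken from the patent, which also covers richer channel labels and natural-language topic indexing.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    """One indexed slice of an audio/multimedia file (illustrative schema)."""
    file_id: str
    start_s: float
    end_s: float
    speaker: str
    background: str      # e.g. "music", "street noise", "studio noise", "speech plus music"
    transcript: str

class AudioIndex:
    """Store segments with their audio metadata and transcription, then
    retrieve the ones matching the parameters of a query."""

    def __init__(self):
        self.segments = []

    def add(self, segment):
        self.segments.append(segment)

    def query(self, speaker=None, background=None, text=None):
        hits = []
        for seg in self.segments:
            if speaker is not None and seg.speaker != speaker:
                continue
            if background is not None and seg.background != background:
                continue
            if text is not None and text.lower() not in seg.transcript.lower():
                continue
            hits.append(seg)
        return hits

index = AudioIndex()
index.add(Segment("news_0423.wav", 0.0, 12.5, "anchor_1", "studio noise",
                  "good evening and welcome to the broadcast"))
index.add(Segment("news_0423.wav", 12.5, 40.0, "reporter_2", "street noise",
                  "traffic noise levels rose sharply downtown today"))
print(index.query(background="street noise", text="traffic"))
```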

203 citations

Journal ArticleDOI
Thomas Lecocq1, Stephen Hicks2, Koen Van Noten1, Kasper van Wijk3, Paula Koelemeijer4, Raphael S. M. De Plaen5, Frédérick Massin6, Gregor Hillers7, Robert E. Anthony8, Maria-Theresia Apoloner9, Mario Arroyo-Solórzano10, Jelle Assink11, Pınar Büyükakpınar12, Pınar Büyükakpınar13, Andrea Cannata14, Andrea Cannata15, Flavio Cannavò15, Sebastián Carrasco16, Corentin Caudron17, Esteban J. Chaves, Dave Cornwell18, David Craig19, Olivier F. C. den Ouden20, Olivier F. C. den Ouden11, Jordi Diaz21, Stefanie Donner22, Christos Evangelidis, Läslo Evers11, Läslo Evers20, Benoit Fauville, Gonzalo A. Fernandez, Dimitrios Giannopoulos23, Steven J. Gibbons24, Társilo Girona25, Bogdan Grecu, Marc Grunberg26, György Hetényi27, Anna Horleston28, Adolfo Inza, Jessica C. E. Irving29, Jessica C. E. Irving28, Mohammadreza Jamalreyhani13, Mohammadreza Jamalreyhani30, Alan L. Kafka31, Mathijs Koymans11, Mathijs Koymans20, C. R. Labedz32, Eric Larose17, Nathaniel J. Lindsey33, Mika McKinnon34, Mika McKinnon35, T. Megies36, Meghan S. Miller37, William G. Minarik38, Louis Moresi37, Victor H. Márquez-Ramírez5, Martin Möllhoff19, Ian M. Nesbitt39, Shankho Niyogi40, Javier Ojeda41, Adrien Oth, Simon Richard Proud42, Jay J. Pulli43, Jay J. Pulli31, Lise Retailleau44, Annukka E. Rintamäki7, Claudio Satriano44, Martha K. Savage45, Shahar Shani-Kadmiel20, Reinoud Sleeman11, Efthimios Sokos46, Klaus Stammler22, Alexander E. Stott2, Shiba Subedi27, Mathilde B. Sørensen47, Taka'aki Taira48, Mar Tapia49, Fatih Turhan12, Ben A. van der Pluijm50, Mark Vanstone, Jérôme Vergne26, Tommi Vuorinen7, Tristram Warren42, Joachim Wassermann36, Han Xiao51 
Royal Observatory of Belgium1, Imperial College London2, University of Auckland3, Royal Holloway, University of London4, National Autonomous University of Mexico5, Swiss Seismological Service6, University of Helsinki7, United States Geological Survey8, Central Institution for Meteorology and Geodynamics9, University of Costa Rica10, Royal Netherlands Meteorological Institute11, Kandilli Observatory and Earthquake Research Institute12, University of Potsdam13, University of Catania14, National Institute of Geophysics and Volcanology15, University of Cologne16, University of Savoy17, King's College, Aberdeen18, Dublin Institute for Advanced Studies19, Delft University of Technology20, Spanish National Research Council21, Institute for Geosciences and Natural Resources22, Mediterranean University23, Norwegian Geotechnical Institute24, University of Alaska Fairbanks25, University of Strasbourg26, University of Lausanne27, University of Bristol28, Princeton University29, University of Tehran30, Boston College31, California Institute of Technology32, Stanford University33, University of British Columbia34, Search for extraterrestrial intelligence35, Ludwig Maximilian University of Munich36, Australian National University37, McGill University38, University of Maine39, University of California, Riverside40, University of Chile41, University of Oxford42, BBN Technologies43, Institut de Physique du Globe de Paris44, Victoria University of Wellington45, University of Patras46, University of Bergen47, University of California, Berkeley48, Institut d'Estudis Catalans49, University of Michigan50, University of California, Santa Barbara51
11 Sep 2020 - Science
TL;DR: The 2020 seismic noise quiet period is the longest and most prominent global anthropogenic seismic noise reduction on record and suggests that seismology provides an absolute, real-time estimate of human activities.
Abstract: Human activity causes vibrations that propagate into the ground as high-frequency seismic waves. Measures to mitigate the coronavirus disease 2019 (COVID-19) pandemic caused widespread changes in human activity, leading to a months-long reduction in seismic noise of up to 50%. The 2020 seismic noise quiet period is the longest and most prominent global anthropogenic seismic noise reduction on record. Although the reduction is strongest at surface seismometers in populated areas, this seismic quiescence extends for many kilometers radially and hundreds of meters in depth. This quiet period provides an opportunity to detect subtle signals from subsurface seismic sources that would have been concealed in noisier times and to benchmark sources of anthropogenic noise. A strong correlation between seismic noise and independent measurements of human mobility suggests that seismology provides an absolute, real-time estimate of human activities.
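The measurement idea can be illustrated with a short, entirely synthetic sketch; it is not the paper's processing pipeline. The assumptions are a band-pass to a commonly used anthropogenic-noise band (about 4 to 14 Hz), a per-day RMS amplitude as the noise level, and a hypothetical mobility index, with each "day" shortened to 60 s so the example runs quickly.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

rng = np.random.default_rng(42)
fs, day_len_s, n_days = 100, 60, 40        # 100 Hz seismometer, toy 60 s "days"

# Hypothetical mobility index: normal activity, then a lockdown-style drop.
mobility = 1.0 - 0.6 * (np.arange(n_days) > 20) + 0.05 * rng.standard_normal(n_days)

# Band-pass filter for the anthropogenic band (~4-14 Hz).
sos = butter(4, [4, 14], btype="bandpass", fs=fs, output="sos")

daily_rms = []
for m in mobility:
    n = fs * day_len_s
    cultural = m * sosfiltfilt(sos, rng.standard_normal(n))    # scales with human activity
    natural = 0.3 * rng.standard_normal(n)                     # activity-independent background
    band = sosfiltfilt(sos, cultural + natural)                # noise level in the 4-14 Hz band
    daily_rms.append(np.sqrt(np.mean(band ** 2)))

print("corr(daily band RMS, mobility) =", np.corrcoef(daily_rms, mobility)[0, 1])
```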

202 citations

Proceedings ArticleDOI
S. Boll1
02 Apr 1979
TL;DR: It is shown that spectral subtraction can be implemented in terms of a nonstationary, multiplicative, frequency-domain filter which changes with the time-varying spectral characteristics of the speech.
Abstract: Spectral subtraction has been shown to be an effective approach for reducing ambient acoustic noise in order to improve the intelligibility and quality of digitally compressed speech. This paper presents a set of implementation specifications to improve algorithm performance and minimize algorithm computation and memory requirements. It is shown that spectral subtraction can be implemented in terms of a nonstationary, multiplicative, frequency-domain filter which changes with the time-varying spectral characteristics of the speech. Using this filter, a speech activity detector is defined and used to allow the algorithm to adapt automatically to changing ambient noise environments. The bandwidth information of this filter is also used to further reduce the residual narrowband noise components which remain after spectral subtraction.
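A minimal magnitude-domain sketch of the method follows, expressed, as in the paper, as a multiplicative, time-varying frequency-domain filter applied to the short-time spectrum. It assumes a speech-free lead-in for the noise estimate and uses a simple spectral floor; it is not Boll's full specification (no residual-noise reduction or speech activity detection), and the frame size, floor, and lead-in length are illustrative.

```python
import numpy as np
from scipy.signal import stft, istft

def spectral_subtraction(noisy, fs, noise_seconds=0.5, nperseg=512, floor=0.02):
    """Subtract an estimated noise magnitude spectrum from each STFT frame,
    implemented as a multiplicative frequency-domain gain H(f, t)."""
    f, t, Z = stft(noisy, fs=fs, nperseg=nperseg)
    hop = nperseg // 2
    noise_frames = max(1, int(noise_seconds * fs / hop))        # assumed speech-free lead-in
    noise_mag = np.abs(Z[:, :noise_frames]).mean(axis=1, keepdims=True)
    mag = np.abs(Z)
    clean_mag = np.maximum(mag - noise_mag, floor * noise_mag)  # spectral floor vs. "musical noise"
    H = clean_mag / np.maximum(mag, 1e-12)                      # time-varying multiplicative filter
    _, enhanced = istft(H * Z, fs=fs, nperseg=nperseg)
    return enhanced

# Toy usage: a 440 Hz tone that starts at 0.5 s, buried in white noise.
fs = 16_000
t = np.arange(2 * fs) / fs
noisy = np.sin(2 * np.pi * 440 * t) * (t > 0.5) \
        + 0.5 * np.random.default_rng(0).standard_normal(t.size)
enhanced = spectral_subtraction(noisy, fs)
```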

200 citations


Network Information
Related Topics (5)
Speech processing: 24.2K papers, 637K citations (73% related)
Noise: 110.4K papers, 1.3M citations (72% related)
Signal processing: 73.4K papers, 983.5K citations (69% related)
Piston: 176.1K papers, 825.4K citations (69% related)
Hidden Markov model: 28.3K papers, 725.3K citations (67% related)
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2022    1
2021    125
2020    217
2019    224
2018    243
2017    214