
Showing papers on "Noise" published in 2008


Journal ArticleDOI
TL;DR: First data with cochlear implanted subjects show that both LIST and LINT are feasible and are capable of mapping a large range of hearing disabilities.
Abstract: A Dutch sentence test (LIST) and a Dutch number test (LINT) have been developed and validated for the accurate measurement of speech reception thresholds (SRT) in quiet and in noise with severely hearing-impaired individuals and cochlear implant recipients in Flanders and the Netherlands. The LIST consists of 35 lists of 10 sentences of equal known difficulty uttered by a female speaker, while the LINT consists of 400 numbers (1–100) by two male and two female speakers. Normative values were determined at fixed S/N ratios and using the adaptive method (Plomp & Mimpen, ), yielding identical results for SRT and slope. For the LIST, average fitted SRTs were 27.1 (0.9) dB SPL in quiet and −7.8 dB (0.2) SNR in noise. In addition, the LIST in noise displayed a steep discrimination function (17%/dB) and good reliability (within-subject standard deviation = 1.2 dB). For the LINT, average fitted SRTs in quiet were 20.7 (0.9) dB SPL and about −9.0 dB SNR in noise. Again, the slopes of the performance intensity functio...

195 citations
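To make the adaptive procedure referenced in the abstract concrete, below is a minimal sketch of a one-up/one-down SRT track of the kind used to norm such speech-in-noise tests. It is not the LIST/LINT implementation: the simulated listener, step size, list length and scoring rule are all assumptions for illustration.

```python
"""Minimal sketch of an adaptive speech-reception-threshold (SRT) track.

Illustrates the one-up/one-down adaptive procedure (after Plomp & Mimpen)
used by tests of this kind. The simulated listener (logistic psychometric
function), the 2 dB step size and the scoring rule are assumptions.
"""
import numpy as np

rng = np.random.default_rng(0)

TRUE_SRT_DB = -7.8      # assumed "true" SRT in dB SNR (illustrative)
SLOPE_PER_DB = 0.17     # assumed psychometric slope (~17 %/dB near the SRT)
STEP_DB = 2.0           # assumed adaptive step size
N_SENTENCES = 13        # assumed list length

def p_correct(snr_db):
    """Logistic psychometric function: P(sentence repeated correctly)."""
    return 1.0 / (1.0 + np.exp(-4.0 * SLOPE_PER_DB * (snr_db - TRUE_SRT_DB)))

def run_track(start_snr_db=0.0):
    snr = start_snr_db
    presented = []
    for _ in range(N_SENTENCES):
        presented.append(snr)
        correct = rng.random() < p_correct(snr)
        # one-up/one-down rule: harder after a correct response, easier after an error
        snr += -STEP_DB if correct else STEP_DB
    presented.append(snr)  # SNR that would have been presented next
    # SRT estimate: mean of the levels after the first few presentations
    return float(np.mean(presented[4:]))

estimates = [run_track() for _ in range(50)]
print(f"mean SRT estimate: {np.mean(estimates):+.1f} dB SNR "
      f"(simulated true value {TRUE_SRT_DB:+.1f} dB)")
```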


Journal ArticleDOI
TL;DR: The combination of various deformation- and fault-tolerance mechanisms allows us to employ standard indexing techniques to obtain an efficient, index-based matching procedure, thus providing an important step towards semantically searching large-scale real-world music collections.
Abstract: Given a large audio database of music recordings, the goal of classical audio identification is to identify a particular audio recording by means of a short audio fragment. Even though recent identification algorithms show a significant degree of robustness towards noise, MP3 compression artifacts, and uniform temporal distortions, the notion of similarity is rather close to the identity. In this paper, we address a higher level retrieval problem, which we refer to as audio matching: given a short query audio clip, the goal is to automatically retrieve all excerpts from all recordings within the database that musically correspond to the query. In our matching scenario, as opposed to classical audio identification, we allow semantically motivated variations as they typically occur in different interpretations of a piece of music. To this end, this paper presents an efficient and robust audio matching procedure that works even in the presence of significant variations, such as nonlinear temporal, dynamical, and spectral deviations, where existing algorithms for audio identification would fail. Furthermore, the combination of various deformation- and fault-tolerance mechanisms allows us to employ standard indexing techniques to obtain an efficient, index-based matching procedure, thus providing an important step towards semantically searching large-scale real-world music collections.

125 citations
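The matching task in the entry above can be illustrated with a small sketch: coarse, per-frame-normalised chroma-like features and a brute-force subsequence scan of the query against the database. This is not the paper's algorithm (which uses CENS features and index-based matching); the sample rate, hop size and feature details are assumptions.

```python
"""Toy audio-matching sketch: coarse chroma features + subsequence scan.

Illustrates the idea of matching a short query against all positions of a
database recording using coarse spectral features; robustness to dynamics
comes from per-frame normalisation. Parameters are assumptions.
"""
import numpy as np

SR, N_FFT, HOP = 22050, 4096, 2205   # assumed analysis parameters

def chroma(signal):
    """Very rough 12-bin chroma from an STFT magnitude spectrogram."""
    n_frames = 1 + (len(signal) - N_FFT) // HOP
    freqs = np.fft.rfftfreq(N_FFT, 1.0 / SR)
    valid = freqs > 30.0
    pitch_class = (np.round(12 * np.log2(freqs[valid] / 440.0)) % 12).astype(int)
    feats = np.zeros((n_frames, 12))
    for t in range(n_frames):
        frame = signal[t * HOP: t * HOP + N_FFT] * np.hanning(N_FFT)
        mag = np.abs(np.fft.rfft(frame))[valid]
        np.add.at(feats[t], pitch_class, mag)      # accumulate energy per pitch class
    norms = np.linalg.norm(feats, axis=1, keepdims=True) + 1e-9
    return feats / norms                           # normalise: robust to loudness changes

def match(query, database):
    """Return (best start frame, cost) of the query inside the database."""
    q, d = chroma(query), chroma(database)
    costs = [np.mean(np.linalg.norm(d[s:s + len(q)] - q, axis=1))
             for s in range(len(d) - len(q) + 1)]
    best = int(np.argmin(costs))
    return best, costs[best]

# toy usage: the query is a quieter copy of a 2 s excerpt of the "database"
t = np.arange(SR * 10) / SR
database = np.sin(2 * np.pi * 220 * t) + 0.5 * np.sin(2 * np.pi * 330 * t)
query = 0.3 * database[SR * 4: SR * 6]
start, cost = match(query, database)
print(f"best match at ~{start * HOP / SR:.1f} s (cost {cost:.3f})")
```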


Journal ArticleDOI
Shumeet Baluja, Michele Covell
TL;DR: Waveprint uses a combination of computer-vision techniques and large-scale data-stream processing algorithms to create compact fingerprints of audio data that can be efficiently matched, and is more efficient in terms of memory usage and computation than previous state of the art systems.

117 citations
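In the spirit of the fingerprinting idea summarised above, here is a toy sketch that keeps the signs of the largest Haar-wavelet coefficients of a log-spectrogram patch as a compact fingerprint and compares fingerprints by Hamming distance. The actual Waveprint system additionally uses min-hash signatures and locality-sensitive hashing, which are omitted; all sizes below are assumptions.

```python
"""Toy spectrogram-wavelet fingerprint (min-hash / LSH stages omitted)."""
import numpy as np

SR, N_FFT, HOP, TOP_K = 8000, 256, 128, 50   # assumed parameters

def log_spectrogram(x):
    frames = 1 + (len(x) - N_FFT) // HOP
    spec = np.empty((frames, N_FFT // 2 + 1))
    win = np.hanning(N_FFT)
    for t in range(frames):
        spec[t] = np.log1p(np.abs(np.fft.rfft(x[t * HOP: t * HOP + N_FFT] * win)))
    return spec

def haar2d(patch):
    """One level of a separable 2-D Haar transform (patch dims must be even)."""
    a = (patch[0::2] + patch[1::2]) / 2.0
    d = (patch[0::2] - patch[1::2]) / 2.0
    rows = np.vstack([a, d])
    a2 = (rows[:, 0::2] + rows[:, 1::2]) / 2.0
    d2 = (rows[:, 0::2] - rows[:, 1::2]) / 2.0
    return np.hstack([a2, d2])

def fingerprint(x):
    patch = log_spectrogram(x)[:32, :128]            # fixed-size patch (assumption)
    coeffs = haar2d(patch).ravel()
    top = np.argsort(np.abs(coeffs))[-TOP_K:]        # keep only the strongest coefficients
    bits = np.zeros(coeffs.size, dtype=np.uint8)
    bits[top] = (coeffs[top] > 0).astype(np.uint8) + 1   # 0 = not kept, 1 = negative, 2 = positive
    return bits

def hamming(a, b):
    return int(np.count_nonzero(a != b))

rng = np.random.default_rng(1)
clean = rng.standard_normal(SR * 2)
noisy = clean + 0.1 * rng.standard_normal(SR * 2)    # mild distortion
other = rng.standard_normal(SR * 2)                  # unrelated clip
fp = fingerprint(clean)
print("distance to distorted copy:", hamming(fp, fingerprint(noisy)))
print("distance to unrelated clip:", hamming(fp, fingerprint(other)))
```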


Journal ArticleDOI
TL;DR: Repeated ocean ambient noise measurements at a shallow water (110 m) site near San Clemente Island reveal little increase in noise levels in the absence of local ships.
Abstract: Repeated ocean ambient noise measurements at a shallow water (110 m) site near San Clemente Island reveal little increase in noise levels in the absence of local ships. Navy reports document ambient noise levels at this site in 1958-1959 and 1963-1964 and a seafloor recorder documents noise during 2005-2006. When noise from local ships was excluded from the 2005-2006 recordings, median sound levels were essentially the same as were observed in 1958 and 1963. Local ship noise, however, was present in 31% of the recordings in 1963 but was present in 89% of the recordings in 2005-2006. Median levels including local ships are 6-9 dB higher than median levels chosen from times when local ship noise was absent. Biological sounds and the sound of wind driven waves controlled ambient noise levels in the absence of local ships. The median noise levels at this site are low for an open water site due to the poor acoustic propagation and low average wind speeds. The quiet nature of this site in the absence of local ships allows correlation of wind speed to wave noise across the 10-220 Hz spectral band of this study.

109 citations


PatentDOI
Pei Xiang, Song Wang, Kulkarni Prajakt, Samir Kumar Gupta, Choy Eddie L T
TL;DR: In this article, a multi-microphone active noise cancellation (MMANC) functionality is used to remove background noise from audio information picked up on microphones of the mobile audio device.
Abstract: A mobile audio device (for example, a cellular telephone, personal digital audio player, or MP3 player) performs Audio Dynamic Range Control (ADRC) and Automatic Volume Control (AVC) to increase the volume of sound emitted from a speaker of the mobile audio device so that faint passages of the audio will be more audible. This amplification of faint passages occurs without overly amplifying other louder passages, and without substantial distortion due to clipping. Multi-Microphone Active Noise Cancellation (MMANC) functionality is, for example, used to remove background noise from audio information picked up on microphones of the mobile audio device. The noise-canceled audio may then be communicated from the device. The MMANC functionality generates a noise reference signal as an intermediate signal. The intermediate signal is conditioned and then used as a reference by the AVC process. The gain applied during the AVC process is a function of the noise reference signal.

86 citations
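The abstract above describes an automatic volume control whose gain is a function of a noise-reference signal. Below is a minimal sketch of that gain path only; the multi-microphone noise-cancellation front end that would produce the reference is not implemented, and the gain mapping, limits and smoothing constants are assumptions.

```python
"""Sketch of automatic volume control (AVC) driven by a noise reference."""
import numpy as np

def avc_gain_db(noise_ref_db, floor_db=-50.0, max_gain_db=12.0, slope=0.5):
    """More background noise -> more playback gain, capped at max_gain_db."""
    return float(np.clip(slope * (noise_ref_db - floor_db), 0.0, max_gain_db))

def apply_avc(audio, noise_ref_db_per_block, block=1024, smooth=0.9):
    out = np.empty_like(audio)
    g_prev = 1.0
    for i, nref_db in enumerate(noise_ref_db_per_block):
        g_target = 10 ** (avc_gain_db(nref_db) / 20.0)
        g = smooth * g_prev + (1 - smooth) * g_target      # smooth to avoid gain pumping
        seg = slice(i * block, (i + 1) * block)
        out[seg] = np.clip(audio[seg] * g, -1.0, 1.0)      # hard limit as a crude stand-in for clipping control
        g_prev = g
    return out

# toy usage: quiet music while the noise reference rises from -50 dB to -20 dB
sr = 16000
music = 0.05 * np.sin(2 * np.pi * 440 * np.arange(sr * 2) / sr)
n_blocks = len(music) // 1024
noise_ref = np.linspace(-50.0, -20.0, n_blocks)
louder = apply_avc(music[:n_blocks * 1024], noise_ref)
print(f"peak before {np.max(np.abs(music)):.3f}, after {np.max(np.abs(louder)):.3f}")
```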


Journal ArticleDOI
TL;DR: It is found that nestling tree swallows exposed to playbacks of white noise from days 3 to 15 posthatch had begging calls with higher minimum frequencies and narrower frequency ranges than control nestlings raised in nests without added noise.
Abstract: Much of the research examining the effects of ambient noise on communication has focused on adult birds using acoustic signals in mate attraction and territory defense. Here, we examine the effects of noise exposure on young birds, which use acoustic signals to solicit food from parents. We found that nestling tree swallows (Tachycineta bicolor) exposed to playbacks of white noise, within natural amplitude levels, from days 3 to 15 posthatch had begging calls with higher minimum frequencies and narrower frequency ranges than control nestlings raised in nests without added noise. Differences in begging call structure also persisted in the absence of noise. Two days after the noise was removed, experimental nestlings produced calls that were narrower in frequency range and less complex than control nestlings. We found no difference in growth between experimental and control nestlings. Our results suggest that long-term noise exposure affects the structure of nestling begging calls. These effects persist in the absence of noise, suggesting that noise may affect how calls develop. Key words: ambient noise, begging calls, call structure, nestling birds, parent–offspring communication. [Behav Ecol]

79 citations


Journal ArticleDOI
TL;DR: Examination of children's ability to perform simultaneous tasks in quiet and in noise suggests that children with minimal HL may be unable to respond to a difficult listening task by drawing resources from other tasks to compensate.
Abstract: Purpose The purpose of the present study was to examine the effect of minimal hearing loss (HL) on children’s ability to perform simultaneous tasks in quiet and in noise. Method Ten children with m...

68 citations


01 Sep 2008
TL;DR: IWAENC2008: the 11th International Workshop on Acoustic Echo and Noise Control, September 14-17, 2008, Seattle, Washington USA.
Abstract: IWAENC2008: the 11th International Workshop on Acoustic Echo and Noise Control, September 14-17, 2008, Seattle, Washington USA

66 citations


Journal ArticleDOI
TL;DR: High noise annoyance consistently correlated with frequent interference of activities and reducing noise at night (10 pm-7 am) was more important than during the rest of the day.
Abstract: This study evaluated road traffic noise annoyance in Canada in relation to activity interference, subject concerns about noise and self-reported distance to a major road. Random digit dialing was employed to survey a representative sample of 2565 Canadians 15 years of age and older. Respondents highly annoyed by traffic noise were significantly more likely to perceive annoyance to negatively impact health, live closer to a heavily traveled road and report that traffic noise often interfered with daily activities. Sex, age, education level, community size and province had statistically significant associations with traffic noise annoyance. High noise annoyance consistently correlated with frequent interference of activities. Reducing noise at night (10 pm–7 am) was more important than during the rest of the day.

61 citations


Journal ArticleDOI
TL;DR: Sentence recognition in noise was employed to investigate the development of temporal resolution in school-age children and children under 14 years performed worse than adults and needed greater S/Ns in order to perform as well as adults.
Abstract: Sentence recognition in noise was employed to investigate the development of temporal resolution in school-age children. Eighty children aged 6 to 15 years and 16 young adults participated. Reception thresholds for sentences (RTSs) were determined in quiet and in backgrounds of competing continuous and interrupted noise. In the noise conditions, RTSs were determined with a fixed noise level. RTSs were higher in quiet for six- to seven-year-old children (p = .006). Performance was better in the interrupted noise evidenced by lower RTS signal-to-noise ratios (S/Ns) relative to continuous noise (p < .0001). An effect of age was found in noise (p < .0001) where RTS S/Ns decreased with increasing age. Specifically, children under 14 years performed worse than adults. "Release from masking" was computed by subtracting RTS S/Ns in interrupted noise from continuous noise for each participant. There was no significant difference in RTS S/N difference scores as a function of age (p = .057). Children were more adversely affected by noise and needed greater S/Ns in order to perform as well as adults. Since there was no effect of age on the amount of release from masking, one can suggest that school-age children have inherently poorer processing efficiency rather than temporal resolution.

59 citations
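The "release from masking" measure in the abstract is just a per-participant difference of two thresholds; the tiny snippet below shows the computation with made-up numbers purely for illustration.

```python
"""Release from masking as defined in the abstract above:
RTS S/N (continuous noise) minus RTS S/N (interrupted noise), per listener.
The values below are hypothetical and only show the arithmetic."""
import numpy as np

rts_continuous_db = np.array([-3.0, -4.5, -5.0, -6.5])    # hypothetical listeners
rts_interrupted_db = np.array([-9.0, -11.0, -12.5, -14.0])

release_db = rts_continuous_db - rts_interrupted_db
print("release from masking per listener (dB):", release_db)
print("group mean (dB):", release_db.mean())
```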


Patent
26 Aug 2008
TL;DR: In this paper, a perceptual spectral decoder comprises a noise filler, operating according to the method for perceptual spectral decoding, and the set of reconstructed spectral coefficients of a frequency domain formed by the spectrum filling is converted into an audio signal of a time domain.
Abstract: A method for perceptual spectral decoding comprises decoding of spectral coefficients recovered from a binary flux into decoded spectral coefficients of an initial set of spectral coefficients. The initial set of spectral coefficients are spectrum filled. The spectrum filling comprises noise filling of spectral holes by setting spectral coefficients in the initial set of spectral coefficients not being decoded from the binary flux equal to elements derived from the decoded spectral coefficients. The set of reconstructed spectral coefficients of a frequency domain formed by the spectrum filling is converted into an audio signal of a time domain. A perceptual spectral decoder comprises a noise filler, operating according to the method for perceptual spectral decoding.
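A minimal sketch of the noise-filling step described above: spectral coefficients that were not decoded (spectral holes) are replaced by values derived from the coefficients that were decoded. Deriving the fill values from a band-wise RMS of the decoded bins, and the band size, are assumptions made for illustration, not the patent's exact rule.

```python
"""Minimal noise-filling sketch for a perceptual spectral decoder."""
import numpy as np

def noise_fill(decoded, band=16, seed=0):
    """Fill zero coefficients band by band with scaled pseudo-noise."""
    rng = np.random.default_rng(seed)
    filled = decoded.copy()
    for start in range(0, len(decoded), band):
        b = filled[start:start + band]
        holes = b == 0.0
        if holes.any() and (~holes).any():
            level = np.sqrt(np.mean(b[~holes] ** 2))       # band RMS of the decoded bins
            b[holes] = 0.5 * level * rng.standard_normal(holes.sum())
    return filled

# toy usage: a sparse "decoded" spectrum with holes
rng = np.random.default_rng(1)
spectrum = rng.standard_normal(128)
spectrum[np.abs(spectrum) < 0.8] = 0.0           # pretend these bins were not transmitted
reconstructed = noise_fill(spectrum)
print("holes before:", int(np.sum(spectrum == 0)),
      "after:", int(np.sum(reconstructed == 0)))
# an inverse MDCT/FFT would then convert `reconstructed` back to the time domain
```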

Patent
12 Nov 2008
TL;DR: In this article, a speech signal from a microphone may be improved by identifying and dampening background noise to enhance speech, using stochastic models to determine which portions of the signal are speech and which portions are noise.
Abstract: A system distinguishes a primary audio source and background noise to improve the quality of an audio signal. A speech signal from a microphone may be improved by identifying and dampening background noise to enhance speech. Stochastic models may be used to model speech and to model background noise. The models may determine which portions of the signal are speech and which portions are noise. The distinction may be used to improve the signal's quality, and for speaker identification or verification.
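As a sketch of the "stochastic models for speech and for noise" idea above, the code below fits one diagonal Gaussian per class to simple frame features and labels each frame by the more likely model. The patent does not specify these features or this model form; both are assumptions for illustration.

```python
"""Sketch of frame-wise speech/noise classification with simple stochastic models."""
import numpy as np

FRAME = 256

def features(x):
    n = len(x) // FRAME
    frames = x[:n * FRAME].reshape(n, FRAME)
    log_energy = np.log(np.mean(frames ** 2, axis=1) + 1e-12)
    zcr = np.mean(np.abs(np.diff(np.sign(frames), axis=1)) > 0, axis=1)
    return np.column_stack([log_energy, zcr])

class DiagGaussian:
    def fit(self, X):
        self.mu = X.mean(axis=0)
        self.var = X.var(axis=0) + 1e-6
        return self
    def loglik(self, X):
        return -0.5 * np.sum(np.log(2 * np.pi * self.var)
                             + (X - self.mu) ** 2 / self.var, axis=1)

rng = np.random.default_rng(0)
sr = 8000
noise = 0.05 * rng.standard_normal(sr)                     # "background noise" example
speechish = (np.sin(2 * np.pi * 180 * np.arange(sr) / sr)  # crude voiced-like signal
             * (0.5 + 0.5 * np.sin(2 * np.pi * 3 * np.arange(sr) / sr)))

speech_model = DiagGaussian().fit(features(speechish))
noise_model = DiagGaussian().fit(features(noise))

mixture = np.concatenate([noise[:sr // 2],
                          speechish[:sr // 2] + 0.05 * rng.standard_normal(sr // 2)])
F = features(mixture)
is_speech = speech_model.loglik(F) > noise_model.loglik(F)
print("frames labelled speech:", int(is_speech.sum()), "of", len(is_speech))
# frames labelled as noise could feed a noise estimate used to dampen the background
```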

Patent
30 Oct 2008
TL;DR: In this article, a technique for suppressing non-stationary noise, such as wind noise, in an audio signal is described, in which a series of frames of the audio signal are analyzed to detect whether the audio signals comprises nonstationary noises.
Abstract: A technique for suppressing non-stationary noise, such as wind noise, in an audio signal is described. In accordance with the technique, a series of frames of the audio signal is analyzed to detect whether the audio signal comprises non-stationary noise. If it is detected that the audio signal comprises non-stationary noise, a number of steps are performed. In accordance with these steps, a determination is made as to whether a frame of the audio signal comprises non-stationary noise or speech and non-stationary noise. If it is determined that the frame comprises non-stationary noise, a first filter is applied to the frame and if it is determined that the frame comprises speech and non-stationary noise, a second filter is applied to the frame.
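A sketch of the flow described above: frames judged to contain only non-stationary low-frequency noise get a strong "first filter", while frames judged to contain speech plus noise get a milder "second filter". The energy-ratio detectors and the simple FFT-domain filters below are assumptions, not the patent's method.

```python
"""Sketch of the two-filter wind-noise suppression flow."""
import numpy as np

SR, FRAME = 8000, 256
freqs = np.fft.rfftfreq(FRAME, 1.0 / SR)

def low_freq_ratio(frame):
    mag = np.abs(np.fft.rfft(frame * np.hanning(FRAME))) ** 2
    return mag[freqs < 300].sum() / (mag.sum() + 1e-12)

def filter_frame(frame, cutoff_hz, strength):
    spec = np.fft.rfft(frame)
    gain = np.where(freqs < cutoff_hz, 1.0 - strength, 1.0)
    return np.fft.irfft(spec * gain, n=FRAME)

def suppress(x, wind_thresh=0.7, speech_thresh=0.35):
    out = np.copy(x)
    for start in range(0, len(x) - FRAME + 1, FRAME):
        frame = x[start:start + FRAME]
        r = low_freq_ratio(frame)
        if r > wind_thresh:                    # non-stationary noise only: strong filter
            out[start:start + FRAME] = filter_frame(frame, 400.0, 0.95)
        elif r > speech_thresh:                # speech + non-stationary noise: milder filter
            out[start:start + FRAME] = filter_frame(frame, 400.0, 0.6)
    return out

# toy usage: a speech-band tone plus a bursty low-frequency "wind" drift
rng = np.random.default_rng(0)
t = np.arange(SR) / SR
speech = 0.3 * np.sin(2 * np.pi * 1000 * t)
wind = np.zeros(SR)
wind[:SR // 2] = np.cumsum(rng.standard_normal(SR // 2)) * 0.01
noisy = speech + wind
cleaned = suppress(noisy)
print(f"RMS before {np.sqrt(np.mean(noisy**2)):.3f}, after {np.sqrt(np.mean(cleaned**2)):.3f}")
```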

Journal ArticleDOI
TL;DR: Spectral analysis of equipment and activity noise has shown noise predominantly in the 1–8 kHz spectrum, and these levels warrant immediate implementation of noise reduction protocols as a standard of care in the NICU.
Abstract: Objective To perform spectral analysis of noise generated by equipment and activities in a level III neonatal intensive care unit (NICU) and measure the real-time sequential hourly noise levels over a 15 day period. Methods Noise generated in the NICU by individual equipment and activities was recorded with a digital spectral sound analyzer to perform spectral analysis over 0.5–8 kHz. Sequential hourly noise level measurements in all the rooms of the NICU were done for 15 days using a digital sound pressure level meter. Independent sample t test and one-way ANOVA were used to examine the statistical significance of the results. The study has a 90% power to detect at least 4 dB differences from the recommended maximum of 50 dB with 95% confidence. Results The mean noise levels in the ventilator room and stable room were 19.99 dB (A) sound pressure level (SPL) and 11.81 dB (A) SPL higher than the maximum recommended of 50 dB (A), respectively (p < 0.001). The equipment generated 19.11 dB SPL higher than the recommended norms in the 1–8 kHz spectrum. The activities generated 21.49 dB SPL higher than the recommended norms in the 1–8 kHz spectrum (p < 0.001). The ventilator and nebulisers produced excess noise of 8.5 dB SPL at the 0.5 kHz spectrum. Conclusion Noise level in the NICU is unacceptably high. Spectral analysis of equipment and activity noise has shown noise predominantly in the 1–8 kHz spectrum. These levels warrant immediate implementation of noise reduction protocols as a standard of care in the NICU.


Journal ArticleDOI
TL;DR: This report summarizes the development of the Castilian Spanish HINT over the past 25 years in the 20 countries where Spanish is the official language.
Abstract: Spanish is the third most commonly spoken language in the world, after English and Mandarin (Graddol, 2006). The 400 million Spanish speakers are widely dispersed in the western hemisphere. There a...

Journal ArticleDOI
TL;DR: The genetic algorithm (GA) method is used to achieve a globally optimal solution with a fast convergence rate for the problem of estimating the autoregressive moving average (ARMA) power spectral density when measurements are corrupted by noise and by missed observations.
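The TL;DR above gives only the idea; as a concrete illustration, here is a tiny GA that fits just an AR(2) spectrum to a noisy periodogram, to show the optimisation loop. The GA settings (population size, mutation scale, generations) and the log-spectral fitness are assumptions, and the paper's full ARMA/missing-data treatment is not reproduced.

```python
"""Minimal genetic-algorithm sketch for spectral estimation (AR(2) toy problem)."""
import numpy as np

rng = np.random.default_rng(0)

def ar2_psd(a1, a2, freqs):
    """PSD shape of x[n] = a1*x[n-1] + a2*x[n-2] + e[n] (unit-variance e)."""
    z = np.exp(-2j * np.pi * freqs)
    return 1.0 / np.abs(1.0 - a1 * z - a2 * z ** 2) ** 2

# synthesise data from a known AR(2) process and take its (noisy) periodogram
true_a1, true_a2, N = 1.2, -0.7, 2048
x = np.zeros(N)
for n in range(2, N):
    x[n] = true_a1 * x[n - 1] + true_a2 * x[n - 2] + rng.standard_normal()
freqs = np.fft.rfftfreq(N)[1:]                       # skip DC
data_log = np.log(np.abs(np.fft.rfft(x))[1:] ** 2 / N + 1e-12)
data_log -= data_log.mean()                          # keep only the spectral shape

def fitness(ind):
    a1, a2 = ind
    poles = np.roots([1.0, -a1, -a2])                # roots of the AR recursion
    if np.any(np.abs(poles) >= 1.0):
        return -1e9                                  # reject unstable candidates
    model_log = np.log(ar2_psd(a1, a2, freqs))
    model_log = model_log - model_log.mean()
    return -np.mean((model_log - data_log) ** 2)     # negative log-spectral distance

# GA loop: elitism + tournament selection + uniform crossover + Gaussian mutation
pop = rng.uniform(-1.5, 1.5, size=(40, 2))
for generation in range(60):
    scores = np.array([fitness(ind) for ind in pop])
    new_pop = [pop[np.argmax(scores)]]               # keep the best individual
    while len(new_pop) < len(pop):
        i, j = rng.integers(0, len(pop), 2)
        parent_a = pop[i] if scores[i] > scores[j] else pop[j]
        i, j = rng.integers(0, len(pop), 2)
        parent_b = pop[i] if scores[i] > scores[j] else pop[j]
        child = np.where(rng.random(2) < 0.5, parent_a, parent_b)
        child = child + 0.05 * rng.standard_normal(2)
        new_pop.append(child)
    pop = np.array(new_pop)

best = pop[np.argmax([fitness(ind) for ind in pop])]
print(f"estimated a1={best[0]:+.2f}, a2={best[1]:+.2f} (true {true_a1:+.2f}, {true_a2:+.2f})")
```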

Patent
24 Mar 2008
TL;DR: In this article, a sound reproducing device is provided including a communication unit that transmits/receives signals; at least one sound output unit that outputs sound based upon a signal having been received, a sound pickup unit that picks up sound and generates audio data, an echo canceller unit that stores any echo signal contained in the signal, and a noise reducing unit that generates a cancel signal to be used to cancel noise by using the audio data.
Abstract: A sound reproducing device is provided including a communication unit that transmits/receives signals; at least one sound output unit that outputs sound based upon a signal having been received, a sound pickup unit that picks up sound and generates audio data, an echo canceller unit that stores any echo signal contained in the signal having been received at the communication unit and generates a dummy echo signal by using the stored echo signal, and a noise reducing unit that generates a cancel signal to be used to cancel noise by using the audio data if the sound picked up at the sound pickup unit contains noise originating from a noise source and outputs a composite signal generated by combining the output signal from the echo canceller unit and the cancel signal.

Patent
18 Jun 2008
TL;DR: In this article, the perceived loudness of an audio signal is measured by modifying a spectral representation of the audio signal as a function of a reference spectral shape so that the spectral representation conforms more closely to the reference signal spectral shape.
Abstract: The perceived loudness of an audio signal is measured by modifying a spectral representation of an audio signal as a function of a reference spectral shape so that the spectral representation of the audio signal conforms more closely to the reference spectral shape, and determining the perceived loudness of the modified spectral representation of the audio signal.
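A minimal sketch of the two-step measurement described above: pull the signal's banded spectrum toward a reference spectral shape, then compute a loudness measure on the modified spectrum. The blending rule, the band split and the very crude loudness measure are assumptions made for illustration.

```python
"""Sketch of loudness measurement via a reference spectral shape."""
import numpy as np

def band_spectrum_db(x, n_bands=20):
    mag = np.abs(np.fft.rfft(x * np.hanning(len(x)))) ** 2
    edges = np.linspace(0, len(mag), n_bands + 1, dtype=int)
    power = np.array([mag[a:b].mean() for a, b in zip(edges[:-1], edges[1:])])
    return 10 * np.log10(power + 1e-12)

def conform_to_reference(spec_db, ref_shape_db, amount=0.5):
    """Move each band part of the way toward the reference shape (assumed rule)."""
    ref = ref_shape_db - ref_shape_db.mean() + spec_db.mean()   # match overall level
    return (1 - amount) * spec_db + amount * ref

def loudness_db(spec_db):
    """Crude stand-in for a loudness model: total power of the banded spectrum."""
    return 10 * np.log10(np.sum(10 ** (spec_db / 10)))

sr = 16000
t = np.arange(sr) / sr
signal = np.sin(2 * np.pi * 100 * t) + 0.2 * np.sin(2 * np.pi * 3000 * t)
spec_db = band_spectrum_db(signal)
ref_shape_db = np.linspace(0.0, -20.0, len(spec_db))   # assumed reference shape (gentle tilt)
modified_db = conform_to_reference(spec_db, ref_shape_db)
print(f"measure on raw spectrum: {loudness_db(spec_db):.1f} dB, "
      f"after conforming to the reference: {loudness_db(modified_db):.1f} dB")
```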

Journal ArticleDOI
TL;DR: By comparing these alternative LP models to the conventional LP model in terms of frequency estimation accuracy, residual spectral flatness, and perceptual frequency resolution, several new and promising approaches to LP-based audio modeling are obtained.
Abstract: While linear prediction (LP) has become immensely popular in speech modeling, it does not seem to provide a good approach for modeling audio signals. This is somewhat surprising, since a tonal signal consisting of a number of sinusoids can be perfectly predicted based on an (all-pole) LP model with a model order that is twice the number of sinusoids. We provide an explanation why this result cannot simply be extrapolated to LP of audio signals. If noise is taken into account in the tonal signal model, a low-order all-pole model appears to be only appropriate when the tonal components are uniformly distributed in the Nyquist interval. Based on this observation, different alternatives to the conventional LP model can be suggested. Either the model should be changed to a pole-zero, a high-order all-pole, or a pitch prediction model, or the conventional LP model should be preceded by an appropriate frequency transform, such as a frequency warping or downsampling. By comparing these alternative LP models to the conventional LP model in terms of frequency estimation accuracy, residual spectral flatness, and perceptual frequency resolution, we obtain several new and promising approaches to LP-based audio modeling.
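The abstract's starting point can be checked numerically: K noiseless sinusoids are predicted essentially perfectly by a linear predictor of order 2K, and adding noise breaks this. The sketch below uses least-squares (covariance-method) LP; that choice and the signal parameters are assumptions for the demo, not the paper's experimental setup.

```python
"""Numerical check: K sinusoids are perfectly predicted by order-2K LP; noise breaks it."""
import numpy as np

def lp_residual_energy(x, order):
    """Fit x[n] ~= sum_k a_k x[n-k] by least squares; return relative residual energy."""
    rows = np.array([x[n - order:n][::-1] for n in range(order, len(x))])
    target = x[order:]
    a, *_ = np.linalg.lstsq(rows, target, rcond=None)
    residual = target - rows @ a
    return np.sum(residual ** 2) / np.sum(target ** 2)

rng = np.random.default_rng(0)
n = np.arange(2000)
K = 3
tonal = sum(np.sin(2 * np.pi * f * n + p)
            for f, p in [(0.05, 0.1), (0.11, 1.2), (0.23, 2.3)])   # K = 3 sinusoids
noisy = tonal + 0.1 * rng.standard_normal(len(n))

for name, sig in [("noiseless", tonal), ("noisy", noisy)]:
    e = lp_residual_energy(sig, order=2 * K)
    print(f"{name:9s} signal, order {2*K}: relative residual energy = {e:.2e}")
# the noiseless case is predicted essentially perfectly (residual ~ 0);
# with noise, a low-order all-pole model no longer suffices, as the paper argues
```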

Proceedings ArticleDOI
14 Oct 2008
TL;DR: A new audio secret sharing scheme which is secure and ideal is proposed; the scheme is a (k, n) threshold scheme for k ≥ 2, where the previous schemes were (2, n).
Abstract: In this paper, a new audio secret sharing scheme which is secure and ideal is proposed. This scheme is a (k, n) threshold scheme for k ≥ 2, where the previous schemes were (2, n). It is assumed that both the "shares" and the "secret" are audio files, instead of the bit-string secret proposed in previous works. The audio secret is reconstructed without any computation, that is, only by playing the audio shares simultaneously. Moreover, the simulation results show that the new scheme is not sensitive to audio noise.
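The underlying idea can be shown with a simplified (2, 2) additive scheme: one share is random noise and the other is the secret minus that noise, so summing (i.e., "playing the shares simultaneously") reconstructs the secret while each share alone sounds like noise. The paper's (k, n) threshold construction for k ≥ 2 is more general and is not reproduced here.

```python
"""Simplified (2, 2) additive audio secret sharing (illustration only)."""
import numpy as np

rng = np.random.default_rng(42)

def make_shares(secret):
    share1 = rng.uniform(-1.0, 1.0, size=secret.shape)   # pure-noise share
    share2 = secret - share1                             # second share hides the secret
    return share1, share2

def reconstruct(share1, share2):
    return share1 + share2        # equivalent to playing both shares at once

sr = 8000
t = np.arange(sr) / sr
secret = 0.4 * np.sin(2 * np.pi * 440 * t)               # the "secret" audio
s1, s2 = make_shares(secret)
restored = reconstruct(s1, s2)
print("max reconstruction error:", float(np.max(np.abs(restored - secret))))
print("share/secret correlation:", float(np.corrcoef(s1, secret)[0, 1]))
```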

Journal ArticleDOI
TL;DR: In this paper, the authors conducted a study in 17 selected measurement locations in the northern part of Klaipeda city, where the measurements were taken in May, June, July, August, September, October and November.
Abstract: The problem of noise is topical not only in Lithuania but worldwide. The northern part of Klaipeda city is distinct for its industry and heavy street traffic. Noise research was carried out in 17 selected measurement locations in the northern part of Klaipeda city. Noise measurements were taken in May, June, July, August, September, October and November. The measurements were made three times during the day: in the daytime from 6 a.m. till 6 p.m., in the evening from 6 p.m. till 10 p.m. and at night from 10 p.m. till 6 a.m. The locations of the measurements are marked on the map. To determine whether industry or transport is the dominant noise source, the northern part was divided into two belts. Industry is prevalent in the first belt, whereas motor vehicles are the main noise source in the second belt. The measured noise level is compared with permissible standards in measurement locations where the noise level is usually exceeded, and the analysis of noise levels ...

Patent
Shingo Ikeda
28 Feb 2008
TL;DR: In this paper, a signal processing apparatus includes sound collecting elements, a noise detector for detecting a level of noise in a low-frequency band of audio signals output from the sound collection elements, and a noise reduction unit for reducing the noise in the audio signals in accordance with a signal output from noise detector.
Abstract: A signal processing apparatus includes sound collecting elements, a noise detector for detecting a level of noise in a low-frequency band of audio signals output from the sound collecting elements, a noise reduction unit for reducing the noise in the audio signals in accordance with a signal output from the noise detector, a converter for converting the audio signals output from the noise reduction unit into pieces of audio data corresponding to channels including a low-frequency channel and other channels, a low-frequency channel controller for controlling a level of the audio data corresponding to the low-frequency channel in accordance with the level of the noise detected using the noise detector, and a level controller for controlling the level of the audio data of the low-frequency channel output from the low-frequency channel controller and levels of the pieces of audio data corresponding to the other channels output from the converter.

Journal ArticleDOI
TL;DR: Use of a 50/50 audio-mixing ratio is recommended for optimal performance with an FM system in quiet and noisy listening situations; results suggested that use of a personal FM system yielded significant improvements in speech recognition in quiet at low presentation levels, speech recognition in noise, and perceived benefit in noise.
Abstract: Background Use of personal frequency modulated (FM) systems significantly improves speech recognition in noise for users of cochlear implants (CI). There are, however, a number of adjustable parameters of the cochlear implant and FM receiver that may affect performance and benefit, and there is limited evidence to guide audiologists in optimizing these parameters. Purpose This study examined the effect of two sound processor audio-mixing ratios (30/70 and 50/50) on speech recognition and functional benefit for adults with CIs using the Advanced Bionics Auria sound processors. Research design Fully-repeated repeated measures experimental design. Each subject participated in every speech-recognition condition in the study, and qualitative data was collected with subject questionnaires. Study sample Twelve adults using Advanced Bionics Auria sound processors. Participants had greater than 20% correct speech recognition on consonant-nucleus-consonant (CNC) monosyllabic words in quiet and had used their CIs for at least six months. Intervention Performance was assessed at two audio-mixing ratios (30/70 and 50/50). For the 50/50 mixing ratio, equal emphasis is placed on the signals from the sound processor and the FM system. For the 30/70 mixing ratio, the signal from the microphone of the sound processor is attenuated by 10 dB. Data collection and analysis Speech recognition was assessed at two audio-mixing ratios (30/70 and 50/50) in quiet (35 and 50 dB HL) and in noise (+5 signal-to-noise ratio) with and without the personal FM system. After two weeks of using each audio-mixing ratio, the participants completed subjective questionnaires. Results Study results suggested that use of a personal FM system resulted in significant improvements in speech recognition in quiet at low-presentation levels, speech recognition in noise, and perceived benefit in noise. Use of the 30/70 mixing ratio resulted in significantly poorer speech recognition for low-level speech that was not directed to the FM transmitter. There was no significant difference in speech recognition in noise or functional benefit between the two audio-mixing ratios. Conclusions Use of a 50/50 audio-mixing ratio is recommended for optimal performance with an FM system in quiet and noisy listening situations.
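To make the two mixing ratios concrete: at 50/50 the FM signal and the sound-processor microphone signal get equal emphasis, while at 30/70 the processor-microphone path is attenuated by 10 dB before mixing, as stated in the abstract. The toy signals and the absence of any further processing in the sketch below are placeholders.

```python
"""Illustration of the 50/50 vs 30/70 audio-mixing ratios."""
import numpy as np

def mix(fm_signal, mic_signal, ratio):
    if ratio == "50/50":
        mic_gain_db = 0.0        # equal emphasis on both inputs
    elif ratio == "30/70":
        mic_gain_db = -10.0      # processor microphone attenuated by 10 dB
    else:
        raise ValueError(ratio)
    return fm_signal + mic_signal * 10 ** (mic_gain_db / 20.0)

sr = 16000
t = np.arange(sr) / sr
fm_speech = 0.2 * np.sin(2 * np.pi * 300 * t)       # talker picked up by the FM transmitter
room_sound = 0.2 * np.sin(2 * np.pi * 800 * t)      # low-level sound at the processor microphone

for ratio in ("50/50", "30/70"):
    out = mix(fm_speech, room_sound, ratio)
    mic_part = out - fm_speech                      # what survived of the microphone path
    print(f"{ratio}: microphone-path RMS in the mix = {np.sqrt(np.mean(mic_part ** 2)):.4f}")
```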

Journal ArticleDOI
TL;DR: The aim of the work was to determine whether the different classes (trucks, cars, and motorbikes) could be separated using different time and frequency characteristics: zero-crossing ratios, spectral centroids, spectral rolloff, subband energies and mel-frequency cepstral coefficients.
Abstract: When modeling a city or a secondary road to calculate a noise map, information about the number of heavy/light vehicles and the average speed is not always available. In this paper, a first approach to automatic classification of vehicles is presented. The system is based on classification of the audio signal that a noise source produces. Some basic classifiers have been tested (k-nearest neighbours, FLD (Fisher linear discriminant) and principal components). As a first approach, the aim of the work was to determine whether the different classes (trucks, cars, and motorbikes) could be separated using different time and frequency characteristics: zero-crossing ratios, spectral centroids, spectral rolloff, subband energies and mel-frequency cepstral coefficients. The results show that, for some of the characteristics tested, the signals are separable, so a continuous traffic noise signal could be processed to obtain the number of heavy trucks, cars, and motorbikes that passed by during the recording time. Information from a stereo recording could be used to determine the direction of the vehicle. At this moment, combining three characteristics and FLD, errors below 9% can be reported.
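Several of the features listed above are straightforward to compute; the sketch below extracts zero-crossing rate, spectral centroid, spectral rolloff and subband energies per recording. MFCCs and the k-NN / Fisher-discriminant classifiers are omitted, and the frame length, rolloff fraction and band edges are assumptions.

```python
"""Feature-extraction sketch for audio-based vehicle classification."""
import numpy as np

SR = 16000

def features(x, rolloff_frac=0.85, band_edges=(0, 250, 500, 1000, 2000, 4000, 8000)):
    zcr = np.mean(np.abs(np.diff(np.sign(x))) > 0)                 # zero-crossing rate
    mag = np.abs(np.fft.rfft(x * np.hanning(len(x)))) ** 2
    freqs = np.fft.rfftfreq(len(x), 1.0 / SR)
    centroid = np.sum(freqs * mag) / (np.sum(mag) + 1e-12)         # spectral centroid (Hz)
    cumulative = np.cumsum(mag)
    rolloff = freqs[np.searchsorted(cumulative, rolloff_frac * cumulative[-1])]
    total = np.sum(mag) + 1e-12
    subbands = [np.sum(mag[(freqs >= lo) & (freqs < hi)]) / total  # relative subband energies
                for lo, hi in zip(band_edges[:-1], band_edges[1:])]
    return np.array([zcr, centroid, rolloff, *subbands])

# toy usage: a "truck-like" low-frequency rumble vs. a "motorbike-like" buzz
rng = np.random.default_rng(0)
t = np.arange(SR) / SR
truck = np.sin(2 * np.pi * 60 * t) + 0.3 * rng.standard_normal(SR)
motorbike = np.sin(2 * np.pi * 900 * t) + 0.3 * rng.standard_normal(SR)
print("truck-like     :", np.round(features(truck), 3))
print("motorbike-like :", np.round(features(motorbike), 3))
# these vectors would then feed a classifier such as k-NN or a Fisher discriminant
```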

Journal ArticleDOI
TL;DR: This report summarizes the procedures for developing the hearing in noise test in Malay language.
Abstract: The Malay language is an Austronesian language spoken by the Malay people in Malaysia, southern Thailand, the Philippines, Singapore, central eastern Sumatra, the Riau Islands, and parts of the coa...

Patent
Rongshan Yu
10 Sep 2008
TL;DR: In this paper, the level of estimated noise components is determined at least in part by comparing an estimated noise component level with the audio signal in the subband and increasing the estimation of the noise components level by a predetermined amount.
Abstract: Enhancing speech components of an audio signal composed of speech and noise components includes controlling the gain of the audio signal in ones of its subbands, wherein the gain in a subband is reduced as the level of estimated noise components increases with respect to the level of speech components, wherein the level of estimated noise components is determined at least in part by (1) comparing an estimated noise components level with the level of the audio signal in the subband and increasing the estimated noise components level in the subband by a predetermined amount when the input signal level in the subband exceeds the estimated noise components level in the subband by a limit for more than a defined time, or (2) obtaining and monitoring the signal-to-noise ratio in the subband and increasing the estimated noise components level in the subband by a predetermined amount when the signal-to-noise ratio in the subband exceeds a limit for more than a defined time.
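A sketch of the noise-tracking rule described above: per subband, the noise-level estimate is raised by a fixed step when the input level stays more than a limit above the estimate for longer than a defined time, and the subband gain shrinks as the noise estimate approaches the signal level. This follows option (1) of the claim (the SNR-based variant (2) is analogous), but the step sizes, limits and the gain rule below are assumptions made for illustration.

```python
"""Sketch of subband noise tracking with a level limit and hold time."""
import numpy as np

def track_noise_and_gain(level_db, limit_db=6.0, hold_frames=20,
                         step_db=2.0, drift_db=0.1, min_gain=0.2):
    """level_db: per-frame input level of ONE subband, in dB."""
    noise_db = float(level_db[0])
    counter = 0
    noise_track, gains = [], []
    for lvl in level_db:
        if lvl > noise_db + limit_db:
            counter += 1
            if counter > hold_frames:                 # persistent excess -> treat it as noise
                noise_db += step_db
                counter = 0
        else:
            counter = 0
            noise_db = min(noise_db, lvl) + drift_db  # follow a falling floor, drift up slowly
        snr_db = lvl - noise_db
        gains.append(float(np.clip(snr_db / 20.0, min_gain, 1.0)))  # less gain as SNR drops
        noise_track.append(noise_db)
    return np.array(noise_track), np.array(gains)

# toy usage: a subband whose noise floor jumps from -40 dB to -25 dB halfway through
rng = np.random.default_rng(0)
frames = np.concatenate([np.full(100, -40.0), np.full(200, -25.0)])
frames += rng.normal(0.0, 1.0, len(frames))
noise_est, gain = track_noise_and_gain(frames)
print(f"final noise estimate ~ {noise_est[-1]:.1f} dB, final gain ~ {gain[-1]:.2f}")
```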

Proceedings ArticleDOI
12 Mar 2008
TL;DR: This paper proposes an audio watermarking technique for protecting audio copyrights based on human psychoacoustic model, discrete wavelet transform, neural network, and error correcting code to guarantee that the embedded watermark is inaudible.
Abstract: Audio watermarking is a method that embeds inaudible information into digital audio data. This paper proposes an audio watermarking technique for protecting audio copyrights based on a human psychoacoustic model (HPM), the discrete wavelet transform (DWT), a neural network (NN) and an error correcting code. Our technique exploits frequency perceptual masking studied in the HPM to guarantee that the embedded watermark is inaudible. To assure watermark embedding and extraction, a neural network is used to memorize the relationships between a wavelet central sample and its neighbors. To increase robustness of the scheme, the watermark is refined by the Hamming error correcting code, and the encoded mark is embedded as the new watermark in the transformed audio signal. Our audio watermarking algorithm is robust to common audio signal manipulations such as MP3 compression, noise addition, silence addition, bit-per-sample conversion, noise reduction, dynamic changes and notch filtering. Furthermore, it allows blind retrieval of the embedded watermark, which does not need the original audio, and makes the watermark perceptually inaudible.
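A simplified embed/extract sketch in the same spirit: the watermark bits are protected with a Hamming(7,4) code and embedded by quantising wavelet detail coefficients (quantisation index modulation). The psychoacoustic masking model and the neural-network stage of the paper are omitted, and the wavelet, decomposition mode and quantisation step are assumptions; this is not the paper's scheme. Requires numpy and PyWavelets (pywt).

```python
"""Simplified Hamming(7,4) + DWT-QIM watermark sketch (illustration only)."""
import numpy as np
import pywt

G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])        # systematic Hamming(7,4) generator

def hamming_encode(bits4):
    return (np.array(bits4) @ G) % 2

def hamming_decode(bits7):
    # brute-force nearest codeword (fine for 16 codewords); corrects single-bit errors
    best = min(range(16), key=lambda m: int(np.sum(
        hamming_encode([int(b) for b in f"{m:04b}"]) != bits7)))
    return [int(b) for b in f"{best:04b}"]

def embed(audio, bits, step=0.05, wavelet="db4"):
    cA, cD = pywt.dwt(audio, wavelet, mode="periodization")
    coded = np.concatenate([hamming_encode(bits[i:i + 4]) for i in range(0, len(bits), 4)])
    cD = cD.copy()
    for k, bit in enumerate(coded):          # QIM: force the parity of the quantiser index
        q = np.round(cD[k] / step)
        if int(q) % 2 != bit:
            q += 1
        cD[k] = q * step
    return pywt.idwt(cA, cD, wavelet, mode="periodization"), len(coded)

def extract(audio, n_coded, step=0.05, wavelet="db4"):
    _, cD = pywt.dwt(audio, wavelet, mode="periodization")
    coded = np.array([int(np.round(cD[k] / step)) % 2 for k in range(n_coded)])
    return sum((hamming_decode(coded[i:i + 7]) for i in range(0, n_coded, 7)), [])

rng = np.random.default_rng(0)
host = 0.5 * np.sin(2 * np.pi * 440 * np.arange(8000) / 8000) + 0.05 * rng.standard_normal(8000)
payload = [1, 0, 1, 1, 0, 0, 1, 0]           # two 4-bit blocks
marked, n_coded = embed(host, payload)
recovered = extract(marked + 0.005 * rng.standard_normal(len(marked)), n_coded)
print("payload  :", payload)
print("recovered:", recovered)
```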

Journal ArticleDOI
Hui-Juan Li, Wen-Bo Yu, Jing-qiao Lü, Lin Zeng, Nan Li, Yiming Zhao
TL;DR: The authors aimed to evaluate traffic noise level and noise annoyance in Beijing and the impact of the noise on the quality of life of the residents, and performed a cross-sectional study in a 12-floor college dormitory near the 4th Ring Road in Beijing, China.
Abstract: The authors aimed to evaluate traffic noise level and noise annoyance in Beijing and the impact of the noise on the quality of life of the residents. The authors performed a cross-sectional study in a 12-floor college dormitory near the 4th Ring Road in Beijing, China. The north-side rooms of the building were noisy and had windows facing the road. The authors measured both indoor and outdoor noise. Using both a 5-item verbal scale and a 0-10 numerical scale, they questioned a sample of 1,293 college students living in the dormitory about road-traffic noise annoyance. The results showed that the average outdoor day-to-night noise level was 79.2 dB(A) in the noisy rooms and 64.0 dB(A) in the quiet rooms. Nearly 39% of the respondents living in the noisy rooms indicated that they were highly annoyed by traffic noise according to the response on the verbal scale, and 50% of the respondents living in the noisy rooms were highly annoyed according to the numerical scale.

Patent
Qin Li, Michael L. Seltzer, Chao He
05 Dec 2008
TL;DR: In this paper, an audio signal is received that might include keyboard noise and speech and the transformed audio is analyzed to determine whether there is likelihood that keystroke noise is present, if it is determined there is high likelihood that the audio signal contains keystroke noises, a determination is made as to whether a keyboard event occurred around the time of the likely key-stroke noise.
Abstract: An audio signal is received that might include keyboard noise and speech. The audio signal is digitized and transformed from a time domain to a frequency domain. The transformed audio is analyzed to determine whether there is likelihood that keystroke noise is present. If it is determined that there is a high likelihood that the audio signal contains keystroke noise, a determination is made as to whether a keyboard event occurred around the time of the likely keystroke noise. If it is determined that a keyboard event occurred around the time of the likely keystroke noise, a determination is made as to whether speech is present in the audio signal around the time of the likely keystroke noise. If no speech is present, the keystroke noise is suppressed in the audio signal. If speech is detected in the audio signal or if the keystroke noise abates, the suppression gain is removed from the audio signal.
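A sketch of that decision flow: for each analysis frame, flag a likely keystroke from its spectrum, check whether a keyboard event was reported near that time, check for speech, and only then apply a suppression gain. The simple detectors, the 50 ms event window and the gain value below are assumptions, not the patent's implementation.

```python
"""Sketch of the keystroke-suppression decision flow."""
import numpy as np

SR, FRAME = 16000, 512
EVENT_WINDOW_S = 0.05        # assumed tolerance between audio time and keyboard timestamps

def likely_keystroke(frame):
    """Keystrokes are impulsive and broadband: strong peak, lots of HF energy."""
    mag = np.abs(np.fft.rfft(frame * np.hanning(FRAME))) ** 2
    hf_ratio = mag[len(mag) // 2:].sum() / (mag.sum() + 1e-12)
    peaky = np.max(np.abs(frame)) > 4 * np.sqrt(np.mean(frame ** 2))
    return hf_ratio > 0.4 and peaky

def speech_present(audio, start, threshold=0.01):
    """Crude stand-in for a VAD: speech is sustained, so require elevated
    energy in the neighbouring frames too (a lone burst is not speech)."""
    def rms(seg):
        return np.sqrt(np.mean(seg ** 2)) if len(seg) else 0.0
    prev = audio[max(0, start - FRAME):start]
    nxt = audio[start + FRAME:start + 2 * FRAME]
    return rms(prev) > threshold and rms(nxt) > threshold

def suppress_keystrokes(audio, keyboard_event_times, suppression_gain=0.1):
    out = audio.copy()
    for start in range(0, len(audio) - FRAME + 1, FRAME):
        frame = audio[start:start + FRAME]
        if not likely_keystroke(frame):
            continue
        t = start / SR
        near_event = any(abs(t - te) < EVENT_WINDOW_S for te in keyboard_event_times)
        if near_event and not speech_present(audio, start):
            out[start:start + FRAME] = frame * suppression_gain
        # if speech is detected, or no keyboard event matches, the frame is left alone
    return out

# toy usage: near-silence with two clicks; only the first matches a reported key press
rng = np.random.default_rng(0)
audio = 0.001 * rng.standard_normal(SR)
for click_t in (0.25, 0.75):
    i = int(click_t * SR)
    audio[i:i + 80] += 0.5 * rng.standard_normal(80)     # impulsive broadband burst
cleaned = suppress_keystrokes(audio, keyboard_event_times=[0.25])
print("peak near 0.25 s:", round(float(np.max(np.abs(cleaned[int(0.20*SR):int(0.30*SR)]))), 3))
print("peak near 0.75 s:", round(float(np.max(np.abs(cleaned[int(0.70*SR):int(0.80*SR)]))), 3))
```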