
Showing papers on "Microphone array" published in 1991


Proceedings ArticleDOI
Walter Kellermann
14 Apr 1991
TL;DR: A self-steering microphone array for teleconferencing is presented in which the digitally implemented steering algorithm consists of two parts: a beamformer based on known concepts and a novel voting algorithm that integrates elements of pattern classification and exploits temporal characteristics of speech signals.
Abstract: A self-steering microphone array for teleconferencing in which the digitally implemented steering algorithm consists of two parts is presented. The first part, the beamforming, is based on known concepts. The second part, a novel voting algorithm, integrates elements of pattern classification and exploits temporal characteristics of speech signals. It accounts for perceptual criteria and the acoustic environment. A real-time implementation is outlined, and results are discussed. The results confirm that the proposed concept deals successfully with teleconferencing environments and that it yields substantially better performance than earlier concepts based on analog hardware.

123 citations
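
For context, the first (beamforming) stage the abstract refers to is conventionally a delay-and-sum beamformer. A minimal sketch is given below; the array geometry, sampling rate, and steering angle are assumptions for illustration, not values from the paper, and the novel voting stage is not modelled here.

```python
import numpy as np

def delay_and_sum(signals, mic_positions, steer_angle_deg, fs, c=343.0):
    """Steer a linear microphone array toward steer_angle_deg by applying
    compensating fractional delays (FFT phase shifts) and averaging.
    signals: array of shape (num_mics, num_samples)."""
    num_mics, num_samples = signals.shape
    angle = np.deg2rad(steer_angle_deg)
    arrival = mic_positions * np.sin(angle) / c        # plane-wave arrival delays (s)
    comp = arrival.max() - arrival                     # causal compensating delays
    freqs = np.fft.rfftfreq(num_samples, d=1.0 / fs)
    spectra = np.fft.rfft(signals, axis=1)
    phase = np.exp(-2j * np.pi * freqs[None, :] * comp[:, None])
    aligned = np.fft.irfft(spectra * phase, n=num_samples, axis=1)
    return aligned.mean(axis=0)

# Illustrative use: 4-element array, 5 cm spacing, 8 kHz sampling, steered to 30 degrees.
fs = 8000
mic_positions = np.arange(4) * 0.05                    # metres (assumed geometry)
signals = np.random.randn(4, fs)                       # stand-in for microphone recordings
output = delay_and_sum(signals, mic_positions, 30.0, fs)
```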


Journal ArticleDOI
TL;DR: A new method, stochastic region contraction (SRC), is proposed that achieves a computational speedup of 30-50 when compared to the commonly used simulated-annealing method and is ideally suited for coarse-grain parallel processing.
Abstract: The authors deal with optimal microphone placement and gain for a linear one-dimensional array of ten microphones in a confined environment. A power spectral dispersion function (PSD) is used as a core element for a min-max objective function (PSDX). Derivation of the optimal spacings and gains of the microphones is a hard computational problem since the min-max objective function exhibits multiple local minima (hundreds or thousands). The authors address the computational problem of finding the global optimal solution of the PSDX function. A new method, stochastic region contraction (SRC), is proposed. It achieves a computational speedup of 30-50 when compared to the commonly used simulated-annealing method. SRC is ideally suited for coarse-grain parallel processing.

72 citations
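
The abstract describes SRC only at a high level. As a rough illustration of the idea of repeatedly sampling a search region and contracting it around the best candidates, a generic region-contraction random search on a toy multimodal objective is sketched below; the objective, sample counts, and contraction rule are assumptions, not the authors' algorithm.

```python
import numpy as np

def region_contraction_search(objective, lower, upper, n_samples=200,
                              keep_frac=0.2, n_iters=30, rng=None):
    """Generic stochastic region-contraction search (illustrative only):
    sample the current box uniformly, keep the best fraction of samples,
    and shrink the box to the bounding box of the survivors."""
    rng = np.random.default_rng(rng)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    best_x, best_f = None, np.inf
    for _ in range(n_iters):
        x = rng.uniform(lower, upper, size=(n_samples, lower.size))
        f = np.apply_along_axis(objective, 1, x)
        order = np.argsort(f)
        if f[order[0]] < best_f:
            best_f, best_x = f[order[0]], x[order[0]].copy()
        survivors = x[order[: max(2, int(keep_frac * n_samples))]]
        lower, upper = survivors.min(axis=0), survivors.max(axis=0)
    return best_x, best_f

# Toy multimodal objective standing in for the PSDX min-max criterion.
def toy_objective(v):
    return np.sum(v**2) + 0.5 * np.sum(np.cos(8 * np.pi * v))

x_opt, f_opt = region_contraction_search(toy_objective, [-1, -1, -1], [1, 1, 1])
```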


Journal ArticleDOI
TL;DR: AMNOR, as described in this paper, is a new type of noise-reduction microphone system which uses a microphone array and digital signal processing. It is shown that, under some simple noise conditions, AMNOR forms the same directivity patterns as those formed by conventional directional microphones.
Abstract: AMNOR is a new type of noise-reduction microphone system which uses a microphone array and digital signal processing. In this paper, the theory of AMNOR is described in the frequency domain, and its directivity characteristics are studied. It is then shown that, under some simple noise conditions, AMNOR forms the same directivity patterns as those formed by conventional directional microphones. In contrast to the directivity characteristics of conventional directional microphones, which are assumed to be optimal for some restricted noise environments, AMNOR can realize the optimal directivity characteristics adaptively for a wide variety of existing noise environments. This is accomplished in AMNOR by the following three functions: (1) AMNOR forms M-1 (where M is the number of microphone elements) dead angles with respect to the noise arrival directions, (2) AMNOR optimizes the shape of the directivity lobe, and (3) AMNOR performs the directivity control of (1) and (2) for each frequency band independently.

11 citations
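
Function (1) above, placing M-1 dead angles toward the noise arrival directions in each frequency band, can be illustrated with a small narrowband null-steering computation. The array geometry, frequency, and directions below are assumptions for the sketch, not AMNOR's actual design.

```python
import numpy as np

def steering_vector(theta_deg, n_mics, spacing, freq, c=343.0):
    """Far-field steering vector of a uniform linear array at one frequency."""
    theta = np.deg2rad(theta_deg)
    positions = np.arange(n_mics) * spacing
    return np.exp(-2j * np.pi * freq * positions * np.sin(theta) / c)

def null_steering_weights(desired_deg, noise_degs, n_mics, spacing, freq):
    """Weights with unit gain toward desired_deg and nulls ("dead angles")
    toward each direction in noise_degs (at most n_mics - 1 of them)."""
    directions = [desired_deg] + list(noise_degs)
    A = np.array([steering_vector(d, n_mics, spacing, freq) for d in directions])
    e = np.zeros(len(directions), dtype=complex)
    e[0] = 1.0                      # distortionless response toward the target
    # Least-squares solve also handles fewer constraints than microphones.
    w, *_ = np.linalg.lstsq(A.conj(), e, rcond=None)
    return w

# Illustrative narrowband example: 4 mics, 5 cm spacing, 1 kHz band,
# target at 0 degrees and three noise directions (M - 1 = 3 nulls).
w = null_steering_weights(0.0, [-50.0, 30.0, 70.0], n_mics=4, spacing=0.05, freq=1000.0)
print(np.abs(w.conj() @ steering_vector(30.0, 4, 0.05, 1000.0)))  # ~0 toward a noise direction
```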


Patent
06 Feb 1991
TL;DR: In this article, the authors propose a method to detect a desired sound period under non-steady-state noise by providing a microphone array system having a directivity control function together with a sound receiver of different S/N at the same position, and deciding from the difference in received sound power over a time period.
Abstract: PURPOSE: To detect a desired sound period under non-steady-state noise by providing a microphone array system having a directivity control function and a sound receiver with a different S/N at the same position, and deciding on the power difference of the received sound over a time period. CONSTITUTION: A 1st sound receiver 41, which outputs a signal with a high S/N, consists of a microphone array 51 made up of plural microphone elements and a directivity characteristic control section 52. A 2nd sound receiver, on the other hand, outputs a signal with a low S/N. The two receivers are placed at the same location, and power calculation sections 43, 44 each detect the short-time power of their respective signals. A sound period detection section 45 computes the difference between the two powers, and when the value falls within a prescribed range, it is decided that the object signal is being received. Because the two signals differ in S/N while their noise and sound periods coincide in time, a desired sound period can be detected even under non-steady-state noise.

4 citations
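
A minimal sketch of the decision rule (compare short-time powers of the high-S/N and low-S/N receivers and flag frames whose difference lies within a prescribed range) follows; the frame length and thresholds are illustrative assumptions, not values from the patent.

```python
import numpy as np

def short_time_power_db(x, frame_len):
    """Short-time power in dB over non-overlapping frames."""
    n_frames = len(x) // frame_len
    frames = x[: n_frames * frame_len].reshape(n_frames, frame_len)
    return 10.0 * np.log10(np.mean(frames**2, axis=1) + 1e-12)

def detect_sound_periods(high_snr, low_snr, frame_len=256,
                         min_diff_db=3.0, max_diff_db=30.0):
    """Flag frames whose power difference between the high-S/N receiver
    (e.g. the steered array output) and the low-S/N receiver at the same
    position falls inside a prescribed range (thresholds are assumptions)."""
    diff = (short_time_power_db(high_snr, frame_len)
            - short_time_power_db(low_snr, frame_len))
    return (diff >= min_diff_db) & (diff <= max_diff_db)

# Illustrative use with one second of stand-in signals at 8 kHz.
fs = 8000
array_output = np.random.randn(fs)      # stand-in for the 1st (high-S/N) receiver
reference = np.random.randn(fs)         # stand-in for the 2nd (low-S/N) receiver
active_frames = detect_sound_periods(array_output, reference)
```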


Proceedings ArticleDOI
04 Nov 1991
TL;DR: In this paper, the authors address the problem of cancellation of a desired signal in a head-worn microphone array processor and investigate the effect of reverberation-induced multipath on a constrained minimum variance beamformer.
Abstract: The authors address the problem of cancellation of a desired signal in a head-worn microphone array processor. The effect of reverberation-induced multipath on a constrained minimum variance beamformer is considered. Constrained power minimization algorithms are particularly susceptible to this desired-signal cancellation. The conditions are evaluated under which the cancellation of a desired-speech signal occurs for a head-worn array of microphones in a reverberant room. The effects of acoustic headshadow are incorporated in a simulated array processor. Room reverberation effects are also modeled. The acoustic parameters investigated include room size, average wall absorption, and speaker-to-listener distance. Distortion-based speech intelligibility measures are used to assess the desired-speech cancellation.

4 citations
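
For reference, the constrained minimum-variance (MVDR) beamformer whose cancellation behaviour the paper studies has the standard closed form w = R^{-1}d / (d^H R^{-1}d). A minimal sketch with an assumed steering vector and a sample covariance follows; when reverberation puts correlated copies of the desired signal into R, this weight vector partially cancels the desired signal, which is the effect the paper examines.

```python
import numpy as np

def mvdr_weights(R, d, diagonal_load=1e-3):
    """Minimum-variance distortionless-response weights:
    minimize w^H R w subject to w^H d = 1.
    Diagonal loading is added for numerical robustness."""
    n = R.shape[0]
    R = R + diagonal_load * np.trace(R).real / n * np.eye(n)
    Rinv_d = np.linalg.solve(R, d)
    return Rinv_d / (d.conj() @ Rinv_d)

# Illustrative use: sample covariance from random snapshots X (n_mics x n_snapshots)
# and an assumed steering vector d toward the desired direction.
n_mics, n_snapshots = 6, 400
X = (np.random.randn(n_mics, n_snapshots)
     + 1j * np.random.randn(n_mics, n_snapshots)) / np.sqrt(2)
R = X @ X.conj().T / n_snapshots
d = np.exp(-1j * np.pi * np.arange(n_mics) * np.sin(np.deg2rad(20.0)))
w = mvdr_weights(R, d)
print(np.abs(w.conj() @ d))   # distortionless constraint: ~1 toward the target
```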


Proceedings ArticleDOI
24 Jun 1991
TL;DR: In this article, two different kinds of multidimensional intelligent sensing systems are described, which utilize a two-dimensional sensor array for invisible objects and give the visual moving image and global structure of multi-dimensional phenomena by organizing local information from pointwise multisensors based on a model of the phenomena.
Abstract: Two different kinds of multidimensional intelligent sensing systems are described. Both systems utilize a two-dimensional sensor array for invisible objects. A sensing and visualization system for spatial gas distribution by the use of semiconductor gas sensors is discussed. Gas distribution can be effectively visualized by using a two-dimensional sensor array system. The visualization of a sound wavefront can also be achieved by a wave-propagation model and a 2-D microphone array. The intelligence of these systems gives the visual moving image and global structure of multidimensional phenomena by organizing local information from pointwise multisensors based on a model of the phenomena. These sensing systems may be useful for understanding multidimensional dynamical states.

2 citations
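
As a very small sketch of the visualization step only (not the gas-diffusion or wave-propagation modelling described in the abstract), an evenly spaced 2-D array of sensor readings can be rendered as an image; the grid size and data below are assumptions.

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative 8x8 grid of sensor readings (stand-in for gas-sensor outputs
# or microphone amplitudes sampled at one instant).
readings = np.random.rand(8, 8)

# Bilinear interpolation is used purely for smoother display.
plt.imshow(readings, interpolation="bilinear", origin="lower", cmap="viridis")
plt.colorbar(label="sensor output (arbitrary units)")
plt.title("2-D sensor array snapshot")
plt.show()
```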


Proceedings ArticleDOI
14 Apr 1991
TL;DR: Simulation results are presented for narrowband signals for two array geometries: a linear equispaced array and a semicircular array which incorporates the effects of acoustic headshadow.
Abstract: An approach for beamforming in the presence of coherent signals is developed. The correlation between a desired signal and an interferer is broken down by virtual dithering of a beamspace array. The technique can be applied to arbitrary array geometries. Simulation results are presented for narrowband signals for two array geometries: a linear equispaced array and a semicircular array which incorporates the effects of acoustic headshadow. The linear equispaced array results allow comparison of this method to spatial smoothing, while the headshadow array results demonstrate the usefulness of the approach for an array which is quite dissimilar to the uniform linear array.

1 citation
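
The virtual-dithering technique itself is not detailed in the abstract, but the spatial-smoothing baseline it is compared against is standard: average the sample covariances of overlapping subarrays of a uniform linear array to decorrelate coherent sources. A generic sketch, with assumed array size and snapshot count, follows.

```python
import numpy as np

def spatially_smoothed_covariance(X, subarray_size):
    """Forward spatial smoothing for a uniform linear array.
    X: snapshots of shape (n_mics, n_snapshots). Averages the sample
    covariances of all overlapping subarrays of length subarray_size,
    which restores covariance rank when sources are mutually coherent."""
    n_mics, n_snapshots = X.shape
    n_sub = n_mics - subarray_size + 1
    R = np.zeros((subarray_size, subarray_size), dtype=complex)
    for k in range(n_sub):
        Xk = X[k:k + subarray_size, :]
        R += Xk @ Xk.conj().T / n_snapshots
    return R / n_sub

# Illustrative use on random snapshots from an assumed 8-element array.
X = np.random.randn(8, 200) + 1j * np.random.randn(8, 200)
R_smoothed = spatially_smoothed_covariance(X, subarray_size=5)
```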


Patent
26 Feb 1991
TL;DR: In this article, a noise suppression system for use on internal combustion engines and small enough for use in automotive applications is disclosed. This system, however, is not suitable for military use.
Abstract: A noise suppression system for use on internal combustion engines and small enough for use in automotive applications is disclosed. A cancellation noise generator and actuator speakers (50, 52) produce a noise that combines with and cancels the engine noise carried in the exhaust gases in a mixing chamber pipe (34). The resultant noise leaving the mixing chamber pipe (34) is measured by a circular tubular microphone array (72) to control the noise generator.
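
As a generic illustration of adaptive noise cancellation (not the patent's controller, which drives a loudspeaker through an acoustic path), a basic LMS canceller that adapts an FIR filter on a noise-correlated reference to minimize a measured residual is sketched below; practical active-noise-control systems typically use a filtered-x LMS variant to account for the speaker-to-microphone path.

```python
import numpy as np

def lms_noise_canceller(reference, primary, n_taps=32, mu=0.01):
    """Generic LMS adaptive canceller (illustration only): an FIR filter on a
    reference correlated with the unwanted noise is adapted to minimize the
    residual that an error microphone would measure."""
    w = np.zeros(n_taps)
    buf = np.zeros(n_taps)
    residual = np.zeros(len(primary))
    for n in range(len(primary)):
        buf = np.roll(buf, 1)
        buf[0] = reference[n]
        anti_noise = w @ buf
        residual[n] = primary[n] - anti_noise     # measured residual
        w += mu * residual[n] * buf               # LMS update driven by the residual
    return residual

# Illustrative use: a periodic "engine order" plus a phase-shifted copy as noise.
fs = 8000
t = np.arange(fs) / fs
reference = np.sin(2 * np.pi * 120 * t)
primary = 0.8 * np.sin(2 * np.pi * 120 * t + 0.7) + 0.05 * np.random.randn(fs)
residual = lms_noise_canceller(reference, primary)
```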

Proceedings ArticleDOI
01 Jan 1991
TL;DR: This approach is based on enhancing the Short Time Spectral Amplitude (STSA) of degraded speech using the spectral subtraction algorithm, which requires the noise to be sufficiently stationary for the estimate to be used during the subsequent speech period.
Abstract: Our approach is based on enhancing the Short Time Spectral Amplitude (STSA) of degraded speech using the spectral subtraction algorithm. The use of spectral subtraction to enhance speech has been studied quite extensively in the past [1,2]. These studies have generally shown an increase in speech quality, but the gain in intelligibility has been insignificant. The lack of improvement in intelligibility can be attributed to two main factors. The first is that, since all previous work on the application of the spectral subtraction algorithm has been confined to single-input systems, the noise short-time spectrum can only be estimated during non-speech activity periods. This approach not only requires accurate speech/non-speech activity detection (a difficult task, particularly at low signal-to-noise ratios) but also requires the noise to be sufficiently stationary for the estimate to be used during the subsequent speech period. The second factor is the annoying 'musical' type of residual noise introduced by spectral subtraction processing. This residual noise may distract the listener from the speech.
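
A minimal sketch of the STSA spectral-subtraction step described above follows, with the noise magnitude spectrum estimated from known noise-only frames and half-wave rectification (the source of the 'musical' residual noise); the frame length, hop, and windowing are assumptions, not the authors' exact configuration.

```python
import numpy as np

def spectral_subtraction(noisy, noise_estimate_frames, frame_len=256, hop=128):
    """Basic STSA spectral subtraction: subtract an average noise magnitude
    spectrum (estimated from known noise-only frames) from each frame's
    magnitude, floor at zero, and resynthesize with the noisy phase.
    The zero-flooring is what produces the 'musical noise' artifacts."""
    window = np.hanning(frame_len)
    noise_mag = np.mean(
        [np.abs(np.fft.rfft(window * f)) for f in noise_estimate_frames], axis=0)
    out = np.zeros(len(noisy))
    norm = np.zeros(len(noisy))
    for start in range(0, len(noisy) - frame_len, hop):
        frame = window * noisy[start:start + frame_len]
        spec = np.fft.rfft(frame)
        mag = np.maximum(np.abs(spec) - noise_mag, 0.0)   # half-wave rectification
        clean = np.fft.irfft(mag * np.exp(1j * np.angle(spec)), n=frame_len)
        out[start:start + frame_len] += window * clean     # weighted overlap-add
        norm[start:start + frame_len] += window**2
    return out / np.maximum(norm, 1e-8)

# Illustrative use: noise-only frames taken from the start of the recording.
fs = 8000
noisy = np.random.randn(2 * fs)                     # stand-in for noisy speech
noise_frames = [noisy[i:i + 256] for i in range(0, 2048, 256)]
enhanced = spectral_subtraction(noisy, noise_frames)
```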