
Showing papers on "Microphone array published in 2011"


Journal ArticleDOI
TL;DR: This letter introduces an effective strategy that extends the conventional SRP-PHAT functional with the aim of considering the volume surrounding the discrete locations of the spatial grid, increasing its robustness and allowing for a coarser spatial grid.
Abstract: The Steered Response Power - Phase Transform (SRP-PHAT) algorithm has been shown to be one of the most robust sound source localization approaches operating in noisy and reverberant environments. However, its practical implementation is usually based on a costly fine grid-search procedure, making the computational cost of the method a real issue. In this letter, we introduce an effective strategy that extends the conventional SRP-PHAT functional with the aim of considering the volume surrounding the discrete locations of the spatial grid. As a result, the modified functional performs a full exploration of the sampled space rather than computing the SRP at discrete spatial positions, increasing its robustness and allowing for a coarser spatial grid. To this end, the Generalized Cross-Correlation (GCC) function corresponding to each microphone pair must be properly accumulated according to the defined microphone setup. Experiments carried out under different acoustic conditions confirm the validity of the proposed approach.
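The abstract describes the conventional SRP-PHAT pipeline (GCC-PHAT per microphone pair, accumulated over a spatial grid) and an extension that accumulates the GCC over the volume around each grid point. Below is a minimal sketch of the conventional point-grid variant only; function names, geometry, and FFT sizes are illustrative assumptions, not the letter's implementation.

```python
# Minimal SRP-PHAT sketch (conventional point-grid variant), for offline NumPy
# processing; signal names and geometry are illustrative.
import numpy as np

def gcc_phat(x, y, n_fft):
    """GCC-PHAT between two microphone signals, returned with zero lag centered."""
    X = np.fft.rfft(x, n_fft)
    Y = np.fft.rfft(y, n_fft)
    cross = X * np.conj(Y)
    cross /= np.abs(cross) + 1e-12            # PHAT weighting
    cc = np.fft.irfft(cross, n_fft)
    return np.fft.fftshift(cc)                # index n_fft//2 corresponds to zero lag

def srp_phat(signals, mic_pos, grid, fs, c=343.0):
    """Steered response power over a grid of candidate source positions."""
    n_mics, n_samples = signals.shape
    n_fft = 2 * n_samples
    center = n_fft // 2
    pairs = [(i, j) for i in range(n_mics) for j in range(i + 1, n_mics)]
    ccs = {p: gcc_phat(signals[p[0]], signals[p[1]], n_fft) for p in pairs}
    srp = np.zeros(len(grid))
    for g, pos in enumerate(grid):
        for (i, j) in pairs:
            tdoa = (np.linalg.norm(pos - mic_pos[i]) -
                    np.linalg.norm(pos - mic_pos[j])) / c
            lag = int(np.clip(round(tdoa * fs), -center + 1, center - 1))
            srp[g] += ccs[(i, j)][center + lag]
            # The letter's extension would instead accumulate the GCC over the
            # whole lag interval spanned by the volume around grid point g.
    return srp
```
The source position is then taken as the grid point maximizing the returned SRP values.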

159 citations


Journal ArticleDOI
TL;DR: It is shown that the pure phase-mode spherical microphone array can be viewed as a minimum variance distortionless response (MVDR) beamformer in the spherical harmonics domain for the case of spherically isotropic noise.
Abstract: An approach to optimal array pattern synthesis based on spherical harmonics is presented. The array processing problem in the spherical harmonics domain is expressed with a matrix formulation. The beamformer weight vector design problem is written as a multiply constrained problem, so that the resulting beamformer can provide a suitable trade-off among multiple conflicting performance measures such as directivity index, robustness, array gain, sidelobe level, mainlobe width, and so on. The multiply constrained problem is formulated as a convex form of second-order cone programming which is computationally tractable. We show that the pure phase-mode spherical microphone array can be viewed as a minimum variance distortionless response (MVDR) beamformer in the spherical harmonics domain for the case of spherically isotropic noise. It is shown that our approach includes the delay-and-sum beamformer and a pure phase-mode beamformer as special cases, which leads to very flexible designs. Results of simulations and experimental data processing show good performance of the proposed array pattern synthesis approach. To simplify the analysis, the assumption of equidistant spatial sampling of the wavefield by microphones on a spherical surface is used and the aliasing effects due to noncontinuous spatial sampling are neglected.
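To make the MVDR connection in the abstract concrete, here is a generic minimum variance distortionless response weight computation; it applies in element space or in the spherical harmonics domain after the transformation, and when the noise covariance is proportional to the identity it reduces to delay-and-sum. Variable names and the diagonal-loading value are illustrative assumptions, not the paper's design.

```python
# Generic MVDR weights: w = R^{-1} d / (d^H R^{-1} d), with diagonal loading.
import numpy as np

def mvdr_weights(R, d, loading=1e-3):
    """R: noise (or noise-plus-interference) covariance; d: steering vector."""
    Rl = R + loading * np.trace(R).real / R.shape[0] * np.eye(R.shape[0])
    Rinv_d = np.linalg.solve(Rl, d)
    return Rinv_d / (d.conj() @ Rinv_d)

# When R is (proportional to) the identity, w reduces to a scaled steering
# vector, i.e. the delay-and-sum beamformer mentioned as a special case.
```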

119 citations


Patent
13 Jan 2011
TL;DR: In this article, a mobile platform includes a microphone array and is capable of implementing beamforming to amplify or suppress audio information from a sound source that is indicated through user input, such as pointing the mobile platform in the direction of the sound source or using a touch screen display interface.
Abstract: A mobile platform includes a microphone array and is capable of implementing beamforming to amplify or suppress audio information from a sound source. The sound source is indicated through a user input, such as pointing the mobile platform in the direction of the sound source or through a touch screen display interface. The mobile platform further includes orientation sensors capable of detecting movement of the mobile platform. When the mobile platform moves with respect to the sound source, the beamforming is adjusted based on the data from the orientation sensors so that beamforming is continuously implemented in the direction of the sound source. The audio information from the sound source may be included or suppressed from a telephone or video-telephony conversation. Images or video from a camera may be likewise controlled based on the data from the orientation sensors.
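As a rough illustration of the orientation compensation described above, the sketch below keeps the target azimuth fixed in the world frame and re-steers a far-field delay-and-sum beam in the array frame whenever the device yaw (from its orientation sensors) changes. The linear-array geometry and function names are assumptions, not the patent's implementation.

```python
# Orientation-compensated steering for a linear microphone array (sketch).
import numpy as np

def array_frame_azimuth(world_azimuth_rad, device_yaw_rad):
    """Keep the beam on the source: subtract the device rotation reported by sensors."""
    return world_azimuth_rad - device_yaw_rad

def steering_delays(mic_x, azimuth_rad, c=343.0):
    """Far-field delay-and-sum steering delays (seconds) for mics placed on the x-axis."""
    return mic_x * np.cos(azimuth_rad) / c
```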

114 citations


Journal ArticleDOI
TL;DR: A new method of array synthesis is proposed that allows the design of a robust broadband beamformer with tunable tradeoff between frequency-invariance and directivity, without the need for imposing a desired beam pattern.
Abstract: Frequency-invariant beam patterns are often required by systems using an array of sensors to process broadband signals. If the spatial aperture is shorter than the involved wavelengths, using superdirective beamforming is essential to get an efficient system. In this context, robustness against array imperfections is a crucial feature. In the literature, only a few approaches have been proposed to design a robust, superdirective, frequency-invariant beamformer based on a filter-and-sum architecture; all of them achieve frequency-invariance by imposing a desired beam pattern. However, the choice of a suitable desired beam pattern is critical; an improper choice results in unsatisfactory performance. This paper proposes a new method of array synthesis that allows the design of a robust broadband beamformer with a tunable tradeoff between frequency-invariance and directivity, without the need for imposing a desired beam pattern. Instead, the beam pattern is defined as a set of variables that do not depend on frequency and are included in the vector of variables to be optimized. To this end, a suitable cost function has been devised whose minimum can be found in closed form. Therefore, the method is analytical and computationally inexpensive. In addition, a technique that allows obtaining a beam pattern with a linear phase over frequency is described. The results show the effectiveness of the proposed method in designing robust superdirective beam patterns for linear arrays receiving far-field signals, with special attention to microphone arrays of limited aperture.
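For context, the sketch below is the standard regularized superdirective (maximum-directivity) beamformer that such designs are usually compared against, where the loading term trades directivity against robustness (white-noise gain). It is a related baseline under stated assumptions, not the paper's closed-form cost function.

```python
# Regularized superdirective beamformer for an arbitrary array geometry (sketch).
import numpy as np

def superdirective_weights(mic_pos, freq, look_dir, eps=1e-2, c=343.0):
    """Weights (Gamma + eps*I)^-1 d / (d^H (Gamma + eps*I)^-1 d).
    mic_pos: (M, 3) positions; look_dir: unit vector; eps: robustness loading."""
    k = 2 * np.pi * freq / c
    dists = np.linalg.norm(mic_pos[:, None, :] - mic_pos[None, :, :], axis=-1)
    gamma = np.sinc(k * dists / np.pi)          # diffuse-field coherence sin(kd)/(kd)
    d = np.exp(-1j * k * mic_pos @ look_dir)    # far-field steering vector
    A = gamma + eps * np.eye(len(mic_pos))
    Ainv_d = np.linalg.solve(A, d)
    return Ainv_d / (d.conj() @ Ainv_d)
```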

100 citations


Patent
30 Mar 2011
TL;DR: In this paper, a test signal generator and a controller are used for measuring a plurality of loudspeakers arranged at different positions, together with an evaluator that evaluates the set of sound signals for each loudspeaker to determine at least one loudspeaker characteristic per loudspeaker and to indicate a loudspeaker state.
Abstract: An apparatus for measuring a plurality of loudspeakers arranged at different positions comprises: a test signal generator (10) for generating a test signal for a loudspeaker; a microphone device (12) being configured for receiving a plurality of different sound signals in response to one or more loudspeaker signals emitted by a loudspeaker of the plurality of loudspeakers in response to the test signal; a controller (14) for controlling emissions of the loudspeaker signals by the plurality of loudspeakers and for handling the plurality of different sound signals so that a set of sound signals recorded by the microphone device is associated with each loudspeaker of the plurality of loudspeakers in response to the test signal; and an evaluator (16) for evaluating the set of sound signals for each loudspeaker to determine at least one loudspeaker characteristic for each loudspeaker and for indicating a loudspeaker state using the at least one loudspeaker characteristic for the loudspeaker. This scheme allows an automatic, efficient and accurate measurement of loudspeakers arranged in a three-dimensional configuration.

99 citations


Patent
16 Mar 2011
TL;DR: In this article, a microphone array and a noise elimination method are used to enhance the signal-to-noise ratio (SNR), effectively improving voice communication quality and the voice identification rate.
Abstract: A voice noise elimination method for a microphone array, applicable to voice enhancement and voice identification, is disclosed. A microphone array and a noise elimination method are used to enhance the signal-to-noise ratio (SNR), effectively improving voice communication quality and the voice identification rate. The method includes a voice and non-voice section detection technology and two stages of voice enhancement technology: (1) collecting voice signals in the space with a linear microphone array and processing them to determine voice and non-voice sections; (2) estimating a long-term noise spectrum from the non-voice sections and removing the noise from the voice signal spectrum, which constitutes the first stage of SNR enhancement; and (3) performing the second and final stage of SNR enhancement by estimating and eliminating residual noise according to local characteristics of the voice signal.
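A minimal sketch of stage (2) as described above: estimate a long-term noise magnitude spectrum from frames flagged as non-voice and spectrally subtract it. The framing, the VAD flags, and the spectral floor are assumptions for illustration, not the patent's exact processing.

```python
# First SNR-enhancement stage: long-term noise spectrum estimation + subtraction.
import numpy as np

def spectral_subtract(frames, is_voice, floor=0.05):
    """frames: (n_frames, frame_len) windowed time-domain frames;
    is_voice: boolean array from the voice/non-voice detection stage
    (at least some non-voice frames are assumed to exist)."""
    spec = np.fft.rfft(frames, axis=1)
    noise_mag = np.mean(np.abs(spec[~is_voice]), axis=0)   # long-term noise spectrum
    mag = np.maximum(np.abs(spec) - noise_mag, floor * np.abs(spec))
    return np.fft.irfft(mag * np.exp(1j * np.angle(spec)), frames.shape[1], axis=1)
```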

83 citations


Proceedings ArticleDOI
22 May 2011
TL;DR: This work proposes to use the condition number of the EB-ESPRIT matrix as a robustness measure, and to use Wigner-D weighting to avoid the ill-conditioning issue and improve the robustness, and uses power spectrum testing, frequency smoothing, and manifold vector extension techniques to address the ambiguity, coherent source localization, and large source number problems.
Abstract: Spherical microphone array eigenbeam (EB)-ESPRIT gives an elegant closed-form solution for 3D broadband source localization based on the spherical harmonics (eigenbeam) framework. However, in practical implementations, there are still several issues not being rigorously studied, e.g. how to avoid the ill-conditioning of an EB-ESPRIT matrix, solve the ambiguity problem, handle a large number of sources, and localize coherent broadband sources, etc. In this work, we propose to use the condition number of the EB-ESPRIT matrix as a robustness measure, and to use Wigner-D weighting to avoid the ill-conditioning issue and improve the robustness. In addition, power spectrum testing, frequency smoothing, and manifold vector extension techniques are employed to address the ambiguity, coherent source localization, and large source number problems, respectively. Experimental results based on measurements taken with a real spherical microphone array in a room environment show the effectiveness of the proposed methods.

76 citations


Journal ArticleDOI
TL;DR: This work proposes a framework that simultaneously localizes the mobile robot and multiple sound sources using a microphone array on the robot and an eigenstructure-based generalized cross correlation method for estimating time delays between microphones under multi-source environments.
Abstract: Sound source localization is an important function in robot audition. Most existing works perform sound source localization using static microphone arrays. This work proposes a framework that simultaneously localizes the mobile robot and multiple sound sources using a microphone array on the robot.

73 citations


Journal ArticleDOI
TL;DR: In this paper, a method for estimating the direct-to-reverberant energy ratio (DRR) using a direct and reverberant sound spatial correlation matrix model is presented.
Abstract: We present a method for estimating the direct-to-reverberant energy ratio (DRR) that uses a direct and reverberant sound spatial correlation matrix model (hereafter referred to as the spatial correlation model). This model expresses the spatial correlation matrix of an array input signal as two spatial correlation matrices, one for direct sound and one for reverberation. The direct sound propagates from the direction of the sound source, while the reverberation arrives uniformly from every direction. The DRR is calculated from the power spectra of the direct sound and reverberation, which are estimated from the spatial correlation matrix of the measured signal using the spatial correlation model. The results of experiments and simulations confirm that the proposed method gives mostly correct DRR estimates unless the sound source is far from the microphone array, in which case the direct sound picked up by the microphone array is very small. The method was also evaluated using various scales in simulated and actual acoustical environments, and its limitations were revealed. We also estimated the sound source distance using a small microphone array, as an example application of the proposed DRR estimation method.
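The sketch below fits the measured spatial correlation matrix as a weighted sum of a direct-sound term d dᴴ and a diffuse (isotropic) term Γ, then takes the ratio of the fitted powers. The least-squares fit, the steering vector, and the diffuse coherence matrix are standard assumptions for illustration, not the paper's exact derivation.

```python
# Spatial-correlation-model DRR estimate (sketch).
import numpy as np

def estimate_drr(R, d, gamma):
    """R: measured spatial correlation matrix; d: steering vector toward the source;
    gamma: diffuse-field coherence matrix. Returns the DRR in dB."""
    A = np.stack([np.outer(d, d.conj()).ravel(), gamma.ravel()], axis=1)
    powers, *_ = np.linalg.lstsq(A, R.ravel(), rcond=None)   # fit [P_direct, P_reverb]
    p_direct, p_reverb = np.real(powers)
    return 10 * np.log10(max(p_direct, 1e-12) / max(p_reverb, 1e-12))
```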

66 citations


Proceedings ArticleDOI
07 Jul 2011
TL;DR: In this paper, a maximum likelihood approach using time of arrival measurements of short calibration pulses is proposed to solve the self-localization problem of multiple smartphones spontaneously assembled into an ad hoc microphone array as part of a teleconferencing system.
Abstract: The advent of the smartphone in recent years opened new possibilities for the concept of ubiquitous computing. We propose to use multiple smartphones spontaneously assembled into an ad hoc microphone array as part of a teleconferencing system. The unknown spatial positions, the asynchronous sampling, and the unknown time offsets between the clocks of smartphones in the ad hoc array are the main problems for such an application, as well as for almost all other acoustic signal processing algorithms. A maximum likelihood approach using time of arrival measurements of short calibration pulses is proposed to solve this self-localization problem. The global orientation of each phone, obtained by means of the now-common built-in geomagnetic compasses, in combination with the constant microphone-loudspeaker distance, leads to a nonlinear optimization problem with a reduced dimensionality compared with previous methods. The applicability of the proposed self-localization is shown in simulation and via recordings in a typical reverberant and noisy conference room.
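Under Gaussian measurement noise, the maximum likelihood estimate from time-of-arrival measurements reduces to a nonlinear least-squares problem; the sketch below jointly estimates 2-D phone positions and clock offsets. The TOA definition, the use of scipy, and the parameterization are illustrative assumptions; the paper additionally exploits compass orientations and the fixed microphone-loudspeaker distance.

```python
# Joint position / clock-offset estimation from calibration-pulse TOAs (sketch).
import numpy as np
from scipy.optimize import least_squares

def residuals(theta, toa, n_phones, c=343.0):
    """theta packs (x, y) per phone followed by one clock offset per phone.
    toa[i, j]: arrival time measured at phone i minus phone j's locally
    reported emission time of its calibration pulse."""
    pos = theta[: 2 * n_phones].reshape(n_phones, 2)
    off = theta[2 * n_phones:]
    res = []
    for i in range(n_phones):
        for j in range(n_phones):
            if i == j:
                continue
            pred = np.linalg.norm(pos[i] - pos[j]) / c + off[i] - off[j]
            res.append(toa[i, j] - pred)
    return np.asarray(res)

def calibrate(toa, n_phones):
    # Gauge freedoms (global translation/rotation, common offset) remain in this
    # sketch; fixing one phone's position and offset would remove them.
    theta0 = np.concatenate([np.random.randn(2 * n_phones), np.zeros(n_phones)])
    sol = least_squares(residuals, theta0, args=(toa, n_phones))
    return sol.x[: 2 * n_phones].reshape(n_phones, 2), sol.x[2 * n_phones:]
```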

64 citations


Patent
Erik Visser1, Ernan Liu1
17 Feb 2011
TL;DR: In this paper, a disclosed method selects a plurality of fewer than all of the channels of a multichannel signal, based on information relating to the direction of arrival of at least one frequency component of the multi-channel signal.
Abstract: A disclosed method selects a plurality of fewer than all of the channels of a multichannel signal, based on information relating to the direction of arrival of at least one frequency component of the multichannel signal.

Patent
16 Mar 2011
TL;DR: In this paper, a method and system for enhancing a target sound signal from multiple sound signals is provided. However, the system is limited to a single target sound source and is not designed to enhance multiple sound sources simultaneously.
Abstract: A method and system for enhancing a target sound signal from multiple sound signals is provided. An array of an arbitrary number of sound sensors positioned in an arbitrary configuration receives the sound signals from multiple disparate sources. The sound signals comprise the target sound signal from a target sound source, and ambient noise signals. A sound source localization unit, an adaptive beamforming unit, and a noise reduction unit are in operative communication with the array of sound sensors. The sound source localization unit estimates a spatial location of the target sound signal from the received sound signals. The adaptive beamforming unit performs adaptive beamforming by steering a directivity pattern of the array of sound sensors in a direction of the spatial location of the target sound signal, thereby enhancing the target sound signal and partially suppressing the ambient noise signals, which are further suppressed by the noise reduction unit.

Journal ArticleDOI
TL;DR: Based on the orthogonality of the sensors' locations, a MUSIC algorithm in spherical space, named SH-MUSIC, is proposed in this article, where a spherical harmonics transformation is applied before MUSIC; better performance is obtained because SH-MUSIC exploits the orthogonality of the array configuration.
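A generic sketch of MUSIC in the spherical harmonics domain follows, assuming the SH coefficients have already been mode-strength compensated so that the steering vector is simply the conjugated spherical harmonics up to order N; grids, names, and the compensation step are assumptions, not the paper's formulation.

```python
# MUSIC pseudo-spectrum computed on spherical-harmonics-domain coefficients (sketch).
import numpy as np
from scipy.special import sph_harm

def sh_steering(N, azimuth, polar):
    """Stack conj(Y_nm(direction)) for n = 0..N, m = -n..n."""
    return np.array([np.conj(sph_harm(m, n, azimuth, polar))
                     for n in range(N + 1) for m in range(-n, n + 1)])

def sh_music_spectrum(R_sh, n_sources, N, az_grid, pol_grid):
    """R_sh: covariance of SH-domain coefficients, shape ((N+1)^2, (N+1)^2)."""
    eigval, eigvec = np.linalg.eigh(R_sh)                 # ascending eigenvalues
    En = eigvec[:, : R_sh.shape[0] - n_sources]           # noise subspace
    P = np.zeros((len(az_grid), len(pol_grid)))
    for i, az in enumerate(az_grid):
        for j, pol in enumerate(pol_grid):
            a = sh_steering(N, az, pol)
            P[i, j] = 1.0 / np.real(a.conj() @ En @ En.conj().T @ a)
    return P                                              # peaks indicate source DOAs
```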

Journal ArticleDOI
TL;DR: A theoretical analysis of the spherical harmonics spectrum of spatially translated sources is presented, and four measures for the misalignment of the acoustic center of a radiating source are defined to promote optimal alignment.
Abstract: The radiation patterns of acoustic sources have great significance in a wide range of applications, such as measuring the directivity of loudspeakers and investigating the radiation of musical instruments for auralization. Recently, surrounding spherical microphone arrays have been studied for sound field analysis, facilitating measurement of the pressure around a sphere and the computation of the spherical harmonics spectrum of the sound source. However, the sound radiation pattern may be affected by the location of the source inside the microphone array, which is an undesirable property when aiming to characterize source radiation in a unique manner. This paper presents a theoretical analysis of the spherical harmonics spectrum of spatially translated sources and defines four measures for the misalignment of the acoustic center of a radiating source. Optimization is used to promote optimal alignment based on the proposed measures and the errors caused by numerical and array-order limitations are investigated. This methodology is examined using both simulated and experimental data in order to investigate the performance and limitations of the different alignment methods.

Journal ArticleDOI
TL;DR: Results show radial filtering is practical for improving attenuation of far-field and near-field interfering sources relative to a desired source positioned in the same direction.
Abstract: This paper presents an analysis of spherical microphone array capabilities in the near-field, with an emphasis on radial filtering of sources in a given direction. The near-field of the array is defined in terms of frequency and distance from the array. Directional beamforming is demonstrated given the near-field radial compensation filter, which yields a desired directional beampattern at a chosen distance from the array. This pattern deteriorates as the source draws away from the array. Next, a framework is presented for radial filter design, enabling distance discrimination between sources positioned in the same direction relative to the array. Design examples include Dolph-Chebyshev radial filtering, radial notch filtering, and numerical design. Performance is analyzed in terms of spatial response and robustness to noise. Results show radial filtering is practical for improving attenuation of far-field and near-field interfering sources relative to a desired source positioned in the same direction.

Journal ArticleDOI
TL;DR: A wireless sensor network-based wearable countersniper system prototype is presented that has been tested multiple times at the US Army Aberdeen Test Center and the Nashville Police Academy, achieving close to 100% weapon estimation accuracy for 4 out of the 6 guns tested.

Patent
Erik Visser1
23 Sep 2011
TL;DR: In this article, an apparatus for multichannel signal processing separates signal components from different acoustic sources by initializing a separation filter bank with beams in the estimated source directions, adapting the separation filter banks under specified constraints, and normalizing an adapted solution based on a maximum response with respect to direction.
Abstract: An apparatus for multichannel signal processing separates signal components from different acoustic sources by initializing a separation filter bank with beams in the estimated source directions, adapting the separation filter bank under specified constraints, and normalizing an adapted solution based on a maximum response with respect to direction. Such an apparatus may be used to separate signal components from sources that are close to one another in the far field of the microphone array.

Journal ArticleDOI
01 Jun 2011
TL;DR: The new method employs the source sparseness assumption to handle an underdetermined case and obtained promising experimental results for 2-dimensionally distributed sensors and sources in 3×4 and 3×5 configurations (#sensors × #speech sources), and for a 3-dimensional case with 4×5, in a room with a reverberation time of 120 ms.
Abstract: This paper proposes a method for estimating the direction of arrival (DOA) of multiple source signals for an underdetermined situation, where the number of sources N exceeds the number of sensors M (M < N).
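To illustrate the sparseness idea in the underdetermined case, the sketch below assumes roughly one dominant source per time-frequency bin, estimates a DOA per bin from the inter-microphone phase difference, and reads off multiple sources from histogram peaks. The two-microphone far-field model and the sign convention are illustrative assumptions, not the paper's full clustering method.

```python
# Bin-wise DOA histogram under the source sparseness assumption (sketch).
import numpy as np

def binwise_doa_histogram(X1, X2, freqs, mic_dist, c=343.0, n_bins=180):
    """X1, X2: STFTs (freq x frames) of two microphones; freqs: bin frequencies (Hz).
    Sign convention and aliasing above c/(2*mic_dist) depend on the setup."""
    phase_diff = np.angle(X2 * np.conj(X1))                 # per-bin phase difference
    with np.errstate(divide="ignore", invalid="ignore"):
        cos_theta = c * phase_diff / (2 * np.pi * freqs[:, None] * mic_dist)
    cos_theta = cos_theta[np.isfinite(cos_theta)]
    cos_theta = cos_theta[np.abs(cos_theta) <= 1.0]         # keep physically valid bins
    doas = np.degrees(np.arccos(cos_theta))
    return np.histogram(doas, bins=n_bins, range=(0, 180))  # peaks indicate source DOAs
```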

Patent
16 Feb 2011
TL;DR: In this article, a dual-microphone based speech enhancement device comprising a digital microphone array module and a signal processing integrated chip is proposed; it achieves a high level of integration because a decoding chip, a de-noising chip and the like are integrated into one signal processing chip.
Abstract: The invention provides a dual-microphone based speech enhancement device comprising a digital microphone array module and a signal processing integrated chip. The signal processing integrated chip is electrically connected with the digital microphone array module and internally provides a pulse density modulation decoder module, a directivity forming module, a speech enhancement processing module and an output module. The invention also provides a speech enhancement method based on the dual-microphone based speech enhancement device. Compared with the related technology, the device has a high level of integration because a decoding chip, a de-noising chip and the like are integrated into one signal processing chip. The two digital microphones have sound inlets facing different directions, and directivity forming is adopted so that background noise outside the beam is suppressed. The speech enhancement method is simple and saves development cost.

Journal ArticleDOI
TL;DR: In this article, inverse problem theory is adapted to sound field extrapolation around a microphone array for subsequent spatial sound and sound environment reproduction with multichannel spatial sound systems such as Wave Field Synthesis and Ambisonics.

Patent
21 Jun 2011
TL;DR: In this article, an electronic apparatus is provided that includes a microphone array, a crossover, a beamformer module, and a combiner module, which combines the high band signals and low band beamformed signals to generate modified wideband audio signals.
Abstract: At least two microphones generate wideband electrical audio signals in response to incoming sound waves, and the wideband audio signals are filtered to generate low band signals and high band signals. From the low band signals, low band beamformed signals are generated, and the low band beamformed signals are combined with the high band signals to generate modified wideband audio signals. In one implementation, an electronic apparatus is provided that includes a microphone array, a crossover, a beamformer module, and a combiner module. The microphone array has at least two pressure microphones that generate wideband electrical audio signals in response to incoming sound waves. The crossover generates low band signals and high band signals from the wideband electrical audio signals. The beamformer module generates low band beamformed signals from the low band signals. The combiner module combines the high band signals and the low band beamformed signals to generate modified wideband audio signals.
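A rough sketch of the crossover scheme described above: split each microphone's wideband signal into low and high bands, beamform only the low band (delay-and-sum here), and add back the high band of one reference microphone. The crossover frequency, filter order, and integer-sample delays are illustrative assumptions.

```python
# Low-band beamforming with band splitting and recombination (sketch).
import numpy as np
from scipy.signal import butter, sosfiltfilt

def split_bands(x, fs, fc=1500.0, order=4):
    low = sosfiltfilt(butter(order, fc, btype="low", fs=fs, output="sos"), x)
    high = sosfiltfilt(butter(order, fc, btype="high", fs=fs, output="sos"), x)
    return low, high

def lowband_beamform(signals, delays_samples, fs, fc=1500.0):
    """signals: (n_mics, n_samples); delays_samples: integer steering delays per mic."""
    lows, highs = zip(*(split_bands(s, fs, fc) for s in signals))
    beam_low = np.mean([np.roll(l, -d) for l, d in zip(lows, delays_samples)], axis=0)
    return beam_low + highs[0]          # high band taken from a reference microphone
```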

Proceedings ArticleDOI
07 Jul 2011
TL;DR: A spatial gradient steered response power with phase transform (SRP-PHAT) method capable of localizing competing speakers in overlapping conditions is introduced, together with an integrated framework of multi-source localization and voice activity detection.
Abstract: Two of the major challenges in microphone-array-based adaptive beamforming, speech enhancement, and distant speech recognition are robust and accurate source localization and voice activity detection. This paper introduces a spatial gradient steered response power using the phase transform (SRP-PHAT) method which is capable of localizing competing speakers in overlapping conditions. We further investigate the behavior of the SRP function and characterize theoretically a fixed point in its search space for the diffuse noise field. We call this fixed point the null position in the SRP search space. Building on this evidence, we propose a technique for multichannel voice activity detection (MVAD) based on detection of a maximum power corresponding to the null position. The gradient SRP-PHAT in tandem with the MVAD forms an integrated framework of multi-source localization and voice activity detection. The experiments carried out on real data recordings show that this framework is very effective in practical applications of hands-free communication.

Journal ArticleDOI
TL;DR: This paper demonstrates how the non-regularized DI curves for a given beamforming order clearly define the bandwidth of operation, in other words, the frequency band for which the beamformer has relatively constant and maximum directivity.
Abstract: The design and construction of a circular microphone array (CMA) that has a wide frequency range suitable for human hearing is presented. The design of the CMA was achieved using a technique based on simulated directivity index (DI) curves. The simulated DI curves encapsulate the critical microphone array performance limitations: spatial aliasing, measurement noise, and microphone placement errors. This paper demonstrates how the non-regularized DI curves for a given beamforming order clearly define the bandwidth of operation, in other words, the frequency band for which the beamformer has relatively constant and maximum directivity. Detailed and comprehensive experimental data that characterizes the CMA beamformer are also presented.
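The directivity index (DI) curves referred to above can be traced by evaluating, at each frequency, the ratio of the beamformer's response toward the look direction to its response to a spherically isotropic (diffuse) field. The sketch below computes the DI at one frequency for given weights; geometry, weights, and the diffuse-field model are illustrative assumptions.

```python
# Directivity index of a beamformer at one frequency (sketch).
import numpy as np

def directivity_index(w, mic_pos, freq, look_dir, c=343.0):
    """DI = 10 log10(|w^H d|^2 / (w^H Gamma w)), with Gamma the diffuse-field
    coherence matrix; w: weights, mic_pos: (M, 3), look_dir: unit vector."""
    k = 2 * np.pi * freq / c
    d = np.exp(-1j * k * mic_pos @ look_dir)
    dists = np.linalg.norm(mic_pos[:, None, :] - mic_pos[None, :, :], axis=-1)
    gamma = np.sinc(k * dists / np.pi)
    return 10 * np.log10(np.abs(w.conj() @ d) ** 2 / np.real(w.conj() @ gamma @ w))
```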

Journal ArticleDOI
TL;DR: Results using publicly available data show the capability of the method to successfully detect the DOAs of several sources in real environments.

Proceedings ArticleDOI
05 Dec 2011
TL;DR: The experimental results showed that microphone locations and clock differences were estimated properly with 10–15 sound events (handclaps), and the error of sound source localization with the estimated information was less than the grid size of beamforming, that is, the lowest error was theoretically attained.
Abstract: This paper addresses the online calibration of an asynchronous microphone array for robots. Conventional microphone array technologies require many transfer-function measurements to calibrate microphone locations, and a multi-channel A/D converter for inter-microphone synchronization. We solve these two problems using a framework combining Simultaneous Localization and Mapping (SLAM) and beamforming in an online manner. To do this, we assume that estimations of microphone locations, a sound source location, and microphone clock differences correspond to mapping, self-localization, and observation errors in SLAM, respectively. In our framework, the SLAM process calibrates the locations and clock differences of the microphones every time the microphone array observes a sound such as a human handclap, and a beamforming process works as a cost function to decide the convergence of calibration by localizing the sound with the estimated locations and clock differences. After calibration, beamforming is used for sound source localization. We implemented a prototype system using Extended Kalman Filter (EKF) based SLAM and Delay-and-Sum Beamforming (DS-BF). The experimental results showed that microphone locations and clock differences were estimated properly with 10–15 sound events (handclaps), and the error of sound source localization with the estimated information was less than the grid size of beamforming, that is, the theoretically lowest attainable error was achieved.
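The beamforming step that checks calibration convergence can be illustrated as below: given candidate microphone positions and clock offsets, a clap is localized by delay-and-sum over a grid and the peak is inspected. Grid handling, the sign convention for the clock offsets, and integer-sample shifts are assumptions for illustration, not the paper's EKF-SLAM implementation.

```python
# Delay-and-sum localization with estimated positions and clock offsets (sketch).
import numpy as np

def dsbf_localize(frames, mic_pos, clock_off, grid, fs, c=343.0):
    """frames: (n_mics, n_samples) recordings of one clap; clock_off (seconds) is
    assumed to add to the apparent arrival time at each microphone."""
    n_mics, _ = frames.shape
    powers = np.zeros(len(grid))
    for g in range(len(grid)):
        delays = np.linalg.norm(grid[g] - mic_pos, axis=1) / c + clock_off
        shifts = np.round((delays - delays.min()) * fs).astype(int)
        aligned = np.stack([np.roll(frames[m], -shifts[m]) for m in range(n_mics)])
        powers[g] = np.sum(np.sum(aligned, axis=0) ** 2)
    return grid[np.argmax(powers)], powers
```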

Patent
24 Aug 2011
TL;DR: In this article, an audio input system using a beam-forming microphone array is presented, where the beamforming module directionally enhances sound along a target direction while simultaneously suppressing sound sources from other directions.
Abstract: The invention provides an audio input system used in a home environment based on a beam-forming microphone array. The audio input system receives an audio input from a user by using the microphone array, which is arranged around a television in a living room or embedded in the television. The audio input system specifically comprises the microphone array, a beam-forming module, a target sound detection module, an echo eliminating module and a back filtering module, wherein the microphone array comprises a plurality of microphone array elements used for extracting multichannel audio signals in the home living room environment; the beam-forming module is used for directionally enhancing the sound along a target direction while simultaneously suppressing sound sources from other directions; the target sound detection module is used for detecting the start and end points of a target sound section; the echo eliminating module is used for removing the sound signal of a television loudspeaker; and the back filtering module is used for eliminating irrelevant diffuse background noise. The invention also provides an audio input system based on a blind-separation microphone array. Both systems are used for inputting audio signals in the home network environment based on the microphone array.

Proceedings ArticleDOI
22 May 2011
TL;DR: Methods of 3D direction of arrival (DOA) estimation, coherent source detection and reflective surface localization are studied, based on recordings by a spherical microphone array, and experimental results in a real room validate the proposed method.
Abstract: Methods of 3D direction of arrival (DOA) estimation, coherent source detection and reflective surface localization are studied, based on recordings by a spherical microphone array. First, the spherical harmonics domain minimum variance distortionless response (EB-MVDR) beamformer is employed for the localization of broadband coherent sources, which is characterized by simpler frequency focusing matrices than the corresponding element-space implementation, and by a higher resolution than conventional spherical array beamformers. After the DOA estimation step, the source signals are extracted by EB-MVDRs. Then, by computing the cross-correlation functions between the extracted signals, the coherent sources are detected and their time differences of arrival (TDOA) are estimated. Given the positions of the array and the reference source, and the estimated DOA and TDOA of the coherent sources, the positions of the major reflectors can be inferred. Experimental results in a real room validate the proposed method.

Patent
12 Oct 2011
TL;DR: In this article, a microphone-array-based speech recognition system incorporates a noise cancelling technique that cancels noise in the input speech signals from an array of microphones according to at least one input threshold.
Abstract: A microphone-array-based speech recognition system incorporates a noise cancelling technique that cancels noise in the input speech signals from an array of microphones according to at least one input threshold. The system passes the noise-cancelled speech signals output by a noise masking module through at least one speech model and at least one filler model, computes a confidence measure score with the speech model and filler model for each threshold and each noise-cancelled speech signal, and adjusts the threshold and repeats the noise cancelling to achieve a maximum confidence measure score, thereby outputting a speech recognition result associated with the maximum confidence measure score.

Journal ArticleDOI
TL;DR: In this paper, the effects of shear layer refraction were examined using a pulsed laser system to generate a plasma point source in space and time for several different test section flow speeds and configurations.
Abstract: Microphone array processing algorithms often assume straight-line source-to-observer wave propagation. However, when the microphone array is placed outside an open-jet test section, the presence of the shear layer refracts the acoustic waves and causes the wave propagation times to vary from a free-space model. With a known source location in space, the propagation time delay can be determined using Amiet's theoretical method. In this study, the effects of shear layer refraction are examined using a pulsed laser system to generate a plasma point source in space and time for several different test section flow speeds and configurations. An array of microphones is used to measure the pulse signal, allowing for the use of qualitative beamforming and quantitative timing analysis. Results indicate that Amiet's method properly accounts for planar shear layer refraction time delays within experimental uncertainty. This is true both when the source is in the inviscid core of the open-jet test section, as well as ...

Journal ArticleDOI
TL;DR: This paper presents a system for full-duplex hands-free voice communication integrated with TV technology, which provides comfortable conversation by utilizing a microphone array and advanced voice processing algorithms, even during simultaneous TV usage.
Abstract: This paper presents a system for full-duplex hands-free voice communication integrated with TV technology. The system provides comfortable conversation by utilizing a microphone array and advanced voice processing algorithms, even during simultaneous TV usage. Signal processing includes a superdirective beamformer steered by a direction-finding module, a postprocessing module, an acoustic echo canceller, a stationary noise reduction module, and automatic gain control. All processing is realized in real time on a DSP-based platform. GSM or VoIP can be used as the communication channel.