
Showing papers by "Mats Viberg published in 2010"


Proceedings ArticleDOI
01 Nov 2010
TL;DR: This paper proposes a novel technique for self-interference suppression in full-duplex Multiple-Input Multiple-Output (MIMO) relays that can suppress interference substantially with less impact on the useful signal.
Abstract: Full-duplex relays can provide cost-effective coverage extension and throughput enhancement. However, the main limiting factor is the resulting self-interference signal, which deteriorates the relay performance. In this paper, we propose a novel technique for self-interference suppression in full-duplex Multiple-Input Multiple-Output (MIMO) relays. The relay employs transmit and receive weight filters for suppressing the self-interference signal. Unlike existing techniques that are based on zero forcing of self-interference, we aim at maximizing the ratio of the useful signal power to the self-interference power at the relay reception and transmission. Our simulation results show that the proposed algorithm outperforms the existing schemes, since it can suppress interference substantially with less impact on the useful signal.
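The power-ratio criterion described above amounts to maximizing a generalized Rayleigh quotient, whose maximizer is a generalized eigenvector. Below is a minimal sketch of the receive-side weight computation, assuming simple placeholder channels and covariances; the paper's exact signal model and the corresponding transmit-side filter are not reproduced here.

```python
import numpy as np
from scipy.linalg import eigh

def max_sir_rx_weight(H_useful, H_self, noise_var=1e-3):
    """Receive weight maximizing useful-signal power over self-interference power.

    H_useful : hypothetical channel from the source to the relay RX array
    H_self   : hypothetical self-interference channel (relay TX -> relay RX)
    """
    R_u = H_useful @ H_useful.conj().T                       # useful-signal covariance
    R_i = H_self @ H_self.conj().T + noise_var * np.eye(H_self.shape[0])
    # Generalized eigenproblem R_u w = lambda * R_i w: the eigenvector of the
    # largest eigenvalue maximizes w^H R_u w / w^H R_i w.
    _, eigvecs = eigh(R_u, R_i)                              # eigenvalues in ascending order
    w = eigvecs[:, -1]
    return w / np.linalg.norm(w)

# Example with random 4-antenna channels
rng = np.random.default_rng(0)
H_u = (rng.standard_normal((4, 2)) + 1j * rng.standard_normal((4, 2))) / np.sqrt(2)
H_s = (rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))) / np.sqrt(2)
w_rx = max_sir_rx_weight(H_u, H_s)
```

A transmit-side weight could be formed analogously with covariances defined at the relay output, though the paper's exact formulation may differ.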

152 citations


Journal ArticleDOI
TL;DR: It is found that the proposed approach outperforms the “classical” technique, which ignores the possible resolution failure of the MUSIC algorithm, and provides better tools for determining the necessary antenna calibration accuracy to achieve some targeted specifications on the estimator performance.
Abstract: This paper considers the statistical performance of the MUSIC method under the condition that two closely spaced sources impinging on an array of sensors are effectively resolved, i.e., the spectrum exhibits two peaks in the neighborhood of the true directions-of-arrival (DOA). The MUSIC algorithm is known to have infinite resolution power in theory. However, in the presence of modeling errors, sources cannot be resolved with certainty, even if the array correlation matrix is perfectly known. The focus of this paper is to predict the bias and variance of the DOA estimates while taking into account the possible resolution failure of MUSIC. This performance prediction, based on our recent mathematical investigation, is new to the best of our knowledge. A general mathematical framework to derive closed-form expressions of the bias and variance versus the model mismatch, conditioned on a general statistical resolution test, is proposed. In order to illustrate our mathematical approach, statistical tests with one and two conditions, respectively, are investigated. The accuracy of the performance prediction is illustrated in a simulation study. It is found that the proposed approach outperforms the “classical” technique, which ignores the possible resolution failure of the MUSIC algorithm. Therefore, our results provide better tools for determining the necessary antenna calibration accuracy to achieve some targeted specifications on the estimator performance.
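For reference, the standard MUSIC pseudo-spectrum whose two-peak behaviour the paper conditions on can be computed as follows. This sketch assumes a uniform linear array and a known source count; the paper's actual contribution, the bias/variance prediction conditioned on a statistical resolution test, is not reproduced here.

```python
import numpy as np

def music_spectrum(R, n_sources, angles_deg, d_over_lambda=0.5):
    """MUSIC pseudo-spectrum for an N-element uniform linear array.

    R : (N x N) array covariance matrix, n_sources : assumed number of sources,
    angles_deg : grid of candidate DOAs, d_over_lambda : element spacing in wavelengths.
    """
    N = R.shape[0]
    _, eigvecs = np.linalg.eigh(R)                 # eigenvalues in ascending order
    En = eigvecs[:, : N - n_sources]               # noise-subspace eigenvectors
    spectrum = np.empty(len(angles_deg))
    for i, theta in enumerate(np.deg2rad(angles_deg)):
        a = np.exp(-2j * np.pi * d_over_lambda * np.arange(N) * np.sin(theta))
        spectrum[i] = 1.0 / np.linalg.norm(En.conj().T @ a) ** 2
    return spectrum
```

Two closely spaced sources are "resolved" in the sense of the paper when this spectrum shows two distinct peaks near the true DOAs, which modeling errors can prevent even when R is perfectly known.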

70 citations


Posted Content
TL;DR: A wideband spectrum sensing model is presented that utilizes a sub-Nyquist sampling scheme to bring substantial savings in terms of the sampling rate and shows reliable detection even at low SNR and with a small number of samples.
Abstract: Spectrum sensing is a fundamental component in cognitive radio. A major challenge in this area is the requirement of a high sampling rate when sensing a wideband signal. In this paper, a wideband spectrum sensing model is presented that utilizes a sub-Nyquist sampling scheme to bring substantial savings in terms of the sampling rate. The correlation matrix of a finite number of noisy samples is computed and used by a subspace estimator to detect the occupied and vacant channels of the spectrum. In contrast with common methods, the proposed method does not require knowledge of the signal properties, which mitigates the uncertainty problem. We evaluate the performance of this method by computing the probability of detecting signal occupancy in terms of the number of samples and the SNR of randomly generated signals. The results show reliable detection even at low SNR and with a small number of samples.
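The subspace detection step can be illustrated with a standard eigenvalue-based order estimator applied to the sample correlation matrix. The sketch below uses the classical MDL criterion as a stand-in; it does not reproduce the paper's sub-Nyquist sampling scheme or its particular subspace estimator, and like the paper's method it needs no prior knowledge of the signal properties.

```python
import numpy as np

def mdl_num_occupied(R, n_snapshots):
    """Estimate the number of occupied (signal-bearing) dimensions from the
    eigenvalues of a sample correlation matrix via the MDL criterion."""
    lam = np.sort(np.linalg.eigvalsh(R))[::-1]     # eigenvalues, descending
    M = len(lam)
    mdl = np.empty(M)
    for k in range(M):
        tail = lam[k:]                             # smallest M - k eigenvalues
        geo, arith = np.exp(np.mean(np.log(tail))), np.mean(tail)
        mdl[k] = (-n_snapshots * (M - k) * np.log(geo / arith)
                  + 0.5 * k * (2 * M - k) * np.log(n_snapshots))
    return int(np.argmin(mdl))

# Example: for noisy snapshots X of shape (M, n_snapshots),
# R_hat = X @ X.conj().T / n_snapshots; mdl_num_occupied(R_hat, n_snapshots)
```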

42 citations


Proceedings ArticleDOI
01 Dec 2010
TL;DR: Contrary to common belief, it is shown that coordination strategies with no data sharing and only limited CSI sharing are preferred to those with full data and CSI sharing when the backhaul capacity is relatively low and the edge SNR is high.
Abstract: Base station coordination is an efficient technique to transcend the limits on spectral efficiency imposed by intercell interference. In this paper, we compare the performance of different coordination strategies with different amounts of channel state information (CSI) and data sharing among the coordinating base stations. We focus on the effect of limited backhaul capacity in a two-cell network. Contrary to common belief, we show that coordination strategies with no data sharing and only limited CSI sharing are preferred to those with full data and CSI sharing when the backhaul capacity is relatively low and the edge SNR is high.

21 citations


Proceedings ArticleDOI
14 Mar 2010
TL;DR: It is shown that humans have a distinct acoustic signature and it is proposed to model the echoes from reflecting parts of objects in the scene by a Gaussian-Mixture-Model, which forms the basis for subsequent detection and classification of humans.
Abstract: We address the problem of human detection with mobile platforms such as robots. Instead of using an optical system, we propose to employ an acoustic 2D array to reliably obtain an image of a human as a 3D spatial power spectrum; this approach is independent of lighting conditions and relies on cheap acoustic sensors. We show that humans have a distinct acoustic signature and propose to model the echoes from reflecting parts of objects in the scene by a Gaussian-Mixture-Model. When it is fitted to the acoustic image, we can extract geometric relations between the present echoes and represent the acoustic signatures in a low-dimensional parameter space. We present results based on real data measurements that demonstrate that different objects can be reconstructed from the data and discriminated. The obtained parameter space forms the basis for subsequent detection and classification of humans.
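A minimal sketch of the mixture-fitting step, assuming the acoustic image is available as a 2D power map on a known spatial grid; the array processing that produces the image and the paper's exact fitting procedure are not reproduced, and `power_image`, `x_grid`, and `y_grid` are hypothetical inputs.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_echo_gmm(power_image, x_grid, y_grid, n_components=3, floor_db=-10.0):
    """Fit a Gaussian mixture to the strong regions of a 2D acoustic power image
    and return its parameters as a low-dimensional echo description."""
    p_db = 10.0 * np.log10(power_image / power_image.max())
    rows, cols = np.nonzero(p_db > floor_db)            # cells above the power floor
    points = np.column_stack([x_grid[cols], y_grid[rows]])
    gmm = GaussianMixture(n_components=n_components, covariance_type="full",
                          random_state=0).fit(points)
    return gmm.means_, gmm.covariances_, gmm.weights_
```

The fitted means and covariances encode the geometric relations between echoes that form the low-dimensional parameter space used for discriminating humans from other objects.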

9 citations


Journal ArticleDOI
TL;DR: The max-search approach computes ML estimates only for the maximally hypothesized number of signals and selects relevant components through hypothesis testing; a novelty of this work is the reduction of indistinguishable components caused by overparameterization.

8 citations


Journal ArticleDOI
TL;DR: In this article, a gradient-based optimisation algorithm for the reconstruction of the shape of a metal protrusion on a flat ground plane based on the scattering matrix of an array antenna is presented.
Abstract: The authors present a gradient-based optimisation algorithm for the reconstruction of the shape of a metal protrusion on a flat ground plane based on the scattering matrix of an array antenna. The sensitivities are based on the continuum form of Maxwell's equations, which offers good flexibility with respect to the choice of field solver. The computational cost for the gradient is independent of the number of degrees of freedom that are used to represent the shape of the protrusion. Initial tests show that the sensitivities yield good accuracy in the 3D setting and that relatively good reconstructions of simple surfaces are feasible at a computational cost of about ten evaluations of the scattering matrix.

4 citations


01 Jan 2010
TL;DR: In this paper, a gradient-based optimization algorithm for the solution of an inverse problem is proposed, where the objective function is the misfit between the computed and measured scattering matrix averaged with respect to the waveguide ports and the frequency range used for the reconstruction.
Abstract: We present and test a gradient-based optimization algorithm for the solution of an inverse problem, where we solve Maxwell's equations in a 2D setting. We consider a model problem with six parallel-plate waveguides connected to a circular cavity, where the object under reconstruction resides inside the cavity. The goal function in the optimization problem is the misfit between the computed and measured scattering matrix, averaged with respect to the waveguide ports and the frequency range used for the reconstruction. The inverse algorithm exploits the field solution of an adjoint problem to compute the gradient of the goal function. This approach yields a computational cost that is independent of the number of degrees of freedom used to describe the object under reconstruction. As a consequence of the reciprocity of Maxwell's equations, the value of the goal function and its gradient are relatively inexpensive to compute when the scattering matrix and the underlying field solutions are available. We use this reconstruction algorithm to study (i) the impact of the cell size on the reconstruction error, (ii) the reconstruction error that stems from an insufficient model order, and (iii) the influence of noise on the quality of the reconstructions.
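A structural sketch of the reconstruction loop, with the goal function taken directly from the description above; `forward_solve` and `adjoint_gradient` are hypothetical wrappers around the 2D Maxwell solver and its adjoint, which the paper does not spell out at this level of detail.

```python
import numpy as np

def goal_function(S_comp, S_meas):
    """Misfit between computed and measured scattering matrices, averaged over
    waveguide ports and frequencies. Shapes: (n_freq, n_ports, n_ports)."""
    return np.mean(np.abs(S_comp - S_meas) ** 2)

def reconstruct(params0, S_meas, forward_solve, adjoint_gradient,
                step=1e-2, n_iter=50):
    """Plain gradient descent driven by an adjoint-based gradient. The adjoint
    call returns dJ/dparams at a cost independent of the number of parameters
    describing the object, which is the key property exploited in the paper."""
    p = np.asarray(params0, dtype=float)
    history = []
    for _ in range(n_iter):
        S_comp = forward_solve(p)                   # solve Maxwell's equations for current object
        history.append(goal_function(S_comp, S_meas))
        p = p - step * adjoint_gradient(p, S_meas)  # adjoint-based gradient step
    return p, history
```

The choice of a fixed-step descent is only for illustration; any gradient-based optimizer could consume the same adjoint gradient.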

4 citations


Proceedings ArticleDOI
14 Mar 2010
TL;DR: Simulation results demonstrate that the proposed low-complexity algorithm for selecting users and the corresponding number of data streams to each user provides performance close to dirty paper coding (DPC) with considerably reduced feedback.
Abstract: In this paper, the downlink of a multi-user MIMO (MU-MIMO) system with multi-mode transmission is considered. We propose a low-complexity algorithm for selecting users and the corresponding number of data streams to each user, denoted as the user transmission mode (UTM). The selection is based only on the average received signal-to-noise ratio (SNR) from the base station (BS) for each user. This reduces the overall amount of feedback for scheduling, as opposed to techniques that assume perfect instantaneous channel state information (CSI) from all users. Analytical average throughput approximations are derived for each user at different UTMs. Simulation results demonstrate that the proposed algorithm provides performance close to dirty paper coding (DPC) with considerably reduced feedback.
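A hedged sketch of the selection step: a greedy allocator that assigns streams (user transmission modes) one at a time using only per-user average SNRs. The throughput approximation below is a generic placeholder, not the closed-form approximations derived in the paper, and the greedy rule itself is an assumption about how such a selection could be organized.

```python
import numpy as np

def approx_throughput(avg_snr, n_streams):
    """Placeholder average-throughput approximation for a user receiving
    n_streams data streams at the given average SNR (not the paper's formula)."""
    return n_streams * np.log2(1.0 + avg_snr / n_streams)

def select_utm(avg_snrs, n_bs_antennas, max_streams_per_user):
    """Greedily add the (user, extra stream) choice with the largest estimated
    throughput gain until the BS antenna (stream) budget is exhausted."""
    streams = {u: 0 for u in range(len(avg_snrs))}
    allocated = 0
    while allocated < n_bs_antennas:
        best_user, best_gain = None, 0.0
        for u, snr in enumerate(avg_snrs):
            if streams[u] >= max_streams_per_user:
                continue
            current = approx_throughput(snr, streams[u]) if streams[u] else 0.0
            gain = approx_throughput(snr, streams[u] + 1) - current
            if gain > best_gain:
                best_user, best_gain = u, gain
        if best_user is None:
            break
        streams[best_user] += 1
        allocated += 1
    return {u: s for u, s in streams.items() if s > 0}

# Example: 6 users with linear-scale average SNRs, a 4-antenna BS, at most 2 streams per user
# select_utm([10.0, 3.0, 8.0, 1.0, 15.0, 5.0], n_bs_antennas=4, max_streams_per_user=2)
```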

1 citation


Journal ArticleDOI
TL;DR: This issue is dedicated to Johann F. Böhme, a world-class researcher and distinguished lecturer who has played a key role in the advancement of knowledge in SSAP since its early days.

1 citation