Author

Baltasar Beferull-Lozano

Bio: Baltasar Beferull-Lozano is an academic researcher from the University of Agder. The author has contributed to research in the topics of wireless sensor networks and computer science, has an h-index of 21, and has co-authored 168 publications receiving 2,667 citations. Previous affiliations of Baltasar Beferull-Lozano include École Polytechnique Fédérale de Lausanne and École Normale Supérieure.


Papers
Proceedings ArticleDOI
07 Mar 2004
TL;DR: It is proved that building an optimal data gathering tree is NP-complete for the explicit communication case, and various distributed approximation algorithms are proposed.
Abstract: We consider the problem of correlated data gathering by a network with a sink node and a tree communication structure, where the goal is to minimize the total transmission cost of transporting the information collected by the nodes to the sink node. Two coding strategies are analyzed: a Slepian-Wolf model where optimal coding is complex and transmission optimization is simple, and a joint entropy coding model with explicit communication where coding is simple and transmission optimization is difficult. This problem requires a joint optimization of the rate allocation at the nodes and of the transmission structure. For the Slepian-Wolf setting, we derive a closed-form solution and an efficient distributed approximation algorithm with good performance. For the explicit communication case, we prove that building an optimal data gathering tree is NP-complete and we propose various distributed approximation algorithms.

369 citations
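To make the Slepian-Wolf rate-allocation structure in the abstract above concrete: data is routed over the shortest path tree, and one common way to realize the allocation is to order nodes by their distance to the sink and let each node code conditioned on all nodes closer to the sink, so the largest rates travel the shortest distances. The sketch below is a minimal illustration of that idea, assuming a hypothetical distance-based conditional-entropy model; the helper names (toy_cond_entropy, slepian_wolf_gathering) are illustrative stand-ins, not taken from the paper.

```python
import networkx as nx

def toy_cond_entropy(node, side_info, pos, h0=4.0, c=0.5):
    # Hypothetical spatial correlation model: the closer an already-coded
    # node is, the less new information this node has to send.
    if not side_info:
        return h0
    d = min(abs(pos[node] - pos[u]) for u in side_info)
    return h0 * d / (d + c)

def slepian_wolf_gathering(G, sink, pos):
    # Distances to the sink; data is routed over the shortest path tree.
    dist = nx.shortest_path_length(G, target=sink, weight="weight")
    # Order nodes by distance to the sink; each node codes conditioned on
    # all nodes that are closer, so far-away nodes get the smallest rates.
    order = sorted((v for v in G if v != sink), key=dist.get)
    rates = {v: toy_cond_entropy(v, order[:i], pos) for i, v in enumerate(order)}
    # Total cost: each node's rate travels its shortest-path distance.
    total_cost = sum(rates[v] * dist[v] for v in rates)
    return rates, total_cost

# Example on a small chain network with node 0 as the sink.
G = nx.path_graph(5)
nx.set_edge_attributes(G, 1.0, "weight")
pos = {v: float(v) for v in G}
print(slepian_wolf_gathering(G, sink=0, pos=pos))
```

In the example chain, the node nearest the sink carries a full-rate description while far-away nodes send only small conditional refinements, which is what keeps the distance-weighted cost low.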

Journal ArticleDOI
TL;DR: This work presents a new lattice-based perfect reconstruction and critically sampled anisotropic M-DIR WT, which provides an efficient tool for nonlinear approximation of images, achieving the approximation power O(N^-1.55), which, while slower than the optimal rate O(N^-2), is much better than the O(N^-1) achieved with wavelets, but at similar complexity.
Abstract: In spite of the success of the standard wavelet transform (WT) in image processing in recent years, the efficiency of its representation is limited by the spatial isotropy of its basis functions built in the horizontal and vertical directions. One-dimensional (1-D) discontinuities in images (edges and contours), which are very important elements in visual perception, intersect too many wavelet basis functions and lead to a nonsparse representation. To efficiently capture these anisotropic geometrical structures characterized by many more than the horizontal and vertical directions, a more complex multidirectional (M-DIR) and anisotropic transform is required. We present a new lattice-based perfect reconstruction and critically sampled anisotropic M-DIR WT. The transform retains the separable filtering and subsampling and the simplicity of computations and filter design from the standard two-dimensional WT, unlike in the case of some other directional transform constructions (e.g., curvelets, contourlets, or edgelets). The corresponding anisotropic basis functions (directionlets) have directional vanishing moments along any two directions with rational slopes. Furthermore, we show that this novel transform provides an efficient tool for nonlinear approximation of images, achieving the approximation power O(N^-1.55), which, while slower than the optimal rate O(N^-2), is much better than the O(N^-1) achieved with wavelets, but at similar complexity.

320 citations
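The approximation rates quoted above can be probed numerically. The sketch below runs the standard N-term nonlinear approximation experiment with an ordinary separable 2-D wavelet transform, i.e., the O(N^-1) baseline that directionlets improve on: keep the N largest-magnitude coefficients, reconstruct, and watch the error decay. It uses PyWavelets with a toy image; it is only an illustration of the experiment, not an implementation of the directionlet transform itself.

```python
import numpy as np
import pywt

def n_term_approximation(image, n_terms, wavelet="db4", level=4):
    # Decompose, keep the n_terms largest-magnitude coefficients, reconstruct.
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)
    flat = np.abs(arr).ravel()
    if n_terms < flat.size:
        thresh = np.sort(flat)[-n_terms]          # value of the n_terms-th largest coefficient
        arr = np.where(np.abs(arr) >= thresh, arr, 0.0)
    approx = pywt.waverec2(
        pywt.array_to_coeffs(arr, slices, output_format="wavedec2"), wavelet)
    return approx[:image.shape[0], :image.shape[1]]   # crop possible padding

# Example: watch the mean-squared error shrink as more terms are retained.
img = np.outer(np.hanning(256), np.hanning(256))      # smooth toy image
for n in (100, 400, 1600):
    err = np.mean((img - n_term_approximation(img, n)) ** 2)
    print(n, err)
```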

Journal ArticleDOI
TL;DR: A set of correlated sources located at the nodes of a network, together with a set of sinks that are the destinations for some of the sources, is considered; the cost minimization is studied both for the data-gathering scenario and for general traffic matrices, relevant for general networks.
Abstract: Consider a set of correlated sources located at the nodes of a network, and a set of sinks that are the destinations for some of the sources. The minimization of cost functions which are the product of a function of the rate and a function of the path weight is considered, for both the data-gathering scenario, which is relevant in sensor networks, and general traffic matrices, relevant for general networks. The minimization is achieved by jointly optimizing a) the transmission structure, which is shown to consist in general of a superposition of trees, and b) the rate allocation across the source nodes, which is done by Slepian-Wolf coding. The overall minimization can be achieved in two concatenated steps. First, the optimal transmission structure is found, which in general amounts to finding a Steiner tree, and second, the optimal rate allocation is obtained by solving an optimization problem with cost weights determined by the given optimal transmission structure, and with linear constraints given by the Slepian-Wolf rate region. For the case of data gathering, the optimal transmission structure is fully characterized and a closed-form solution for the optimal rate allocation is provided. For the general case of an arbitrary traffic matrix, the problem of finding the optimal transmission structure is NP-complete. For large networks, in some simplified scenarios, the total costs associated with Slepian-Wolf coding and explicit communication (conditional encoding based on explicitly communicated side information) are compared. Finally, the design of decentralized algorithms for the optimal rate allocation is analyzed.

197 citations
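A minimal sketch of the two concatenated steps described above, on a toy network: first approximate the transmission structure (a Steiner tree spanning the sources and the sink, here via networkx's approximation routine), then collect each source's path cost inside that tree; these path costs become the cost weights of the subsequent Slepian-Wolf rate-allocation problem. The rate-allocation step itself (an optimization over the Slepian-Wolf region) is not solved here, and the graph and weights are illustrative only.

```python
import networkx as nx
from networkx.algorithms.approximation import steiner_tree

def gathering_structure_and_weights(G, sources, sink):
    # Step 1: transmission structure ~ approximate Steiner tree over sources + sink.
    T = steiner_tree(G, list(sources) + [sink], weight="weight")
    # Step 2: each source's rate will be multiplied by the cost of its path to
    # the sink inside the tree; these weights feed the Slepian-Wolf
    # rate-allocation optimization (not solved in this sketch).
    weights = {v: nx.shortest_path_length(T, v, sink, weight="weight")
               for v in sources}
    return T, weights

# Example on a small grid graph with unit edge weights.
G = nx.grid_2d_graph(4, 4)
nx.set_edge_attributes(G, 1.0, "weight")
T, w = gathering_structure_and_weights(G, sources=[(0, 0), (3, 0), (0, 3)], sink=(3, 3))
print(sorted(T.edges()), w)
```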

Journal ArticleDOI
TL;DR: It is proved that even in this simple case the optimization problem is NP-hard; efficient, scalable, and distributed heuristic approximation algorithms are proposed for solving it, and the total transmission cost is shown to improve significantly over direct transmission or the shortest path tree.
Abstract: We consider the problem of correlated data gathering by a network with a sink node and a tree-based communication structure, where the goal is to minimize the total transmission cost of transporting the information collected by the nodes to the sink node. For source coding of correlated data, we consider a joint entropy-based coding model with explicit communication, where coding is simple and the transmission structure optimization is difficult. We first formulate the optimization problem in the general case and then study further a network setting where the entropy conditioning at nodes does not depend on the amount of side information, but only on its availability. We prove that even in this simple case, the optimization problem is NP-hard. We propose some efficient, scalable, and distributed heuristic approximation algorithms for solving this problem and show by numerical simulations that the total transmission cost can be significantly improved over direct transmission or the shortest path tree. We also present an approximation algorithm that provides a tree transmission structure with total cost within a constant factor of the optimal.

187 citations
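The sketch below illustrates the cost model implied by the abstract above, in which conditional coding depends only on the availability of side information: a leaf of the gathering tree codes at the full entropy, any node that receives data from a child codes at the single conditional entropy, and the total cost sums the bits crossing each tree edge times the edge weight. The entropy values (h_full, h_cond) and helper names are hypothetical, and the paper's tree-building heuristics are not reproduced here.

```python
import networkx as nx

def gathering_cost(tree, sink, h_full=4.0, h_cond=1.0, weight="weight"):
    """Cost of a gathering tree under the availability-only coding model."""
    parent = dict(nx.bfs_predecessors(tree, sink))   # orient the tree towards the sink
    children = {v: [] for v in tree}
    for v, p in parent.items():
        children[p].append(v)
    # A node that receives data from at least one child codes conditionally.
    rate = {v: (h_cond if children[v] else h_full) for v in tree if v != sink}

    def subtree_bits(v):
        return rate[v] + sum(subtree_bits(c) for c in children[v])

    # Bits crossing edge (v, parent(v)) = everything originating in v's subtree.
    return sum(subtree_bits(v) * tree[v][parent[v]][weight] for v in parent)

# Example: compare a shortest-path-tree gathering with naive direct transmission.
G = nx.path_graph(6)                       # a chain of sensors, sink at node 0
nx.set_edge_attributes(G, 1.0, "weight")
dist = nx.shortest_path_length(G, target=0, weight="weight")
direct = sum(4.0 * dist[v] for v in G if v != 0)   # everyone sends h_full to the sink
print("tree with conditional coding:", gathering_cost(G, sink=0), " direct:", direct)
```

On the example chain with these toy entropy values, conditional coding halves the total cost relative to direct transmission, which is the kind of improvement the abstract refers to.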

Journal Article
TL;DR: In this article, the authors consider the joint optimization of sensor placement and transmission structure for data gathering, where a given number of nodes need to be placed in a field such that the sensed data can be reconstructed at a sink within specified distortion bounds while minimizing the energy consumed for communication.
Abstract: We consider the joint optimization of sensor placement and transmission structure for data gathering, where a given number of nodes need to be placed in a field such that the sensed data can be reconstructed at a sink within specified distortion bounds while minimizing the energy consumed for communication.

87 citations


Cited by
Christopher M. Bishop
01 Jan 2006
TL;DR: Probability distributions and linear models for regression and classification are covered, along with a discussion of combining models in the context of machine learning.
Abstract (contents): Probability Distributions; Linear Models for Regression; Linear Models for Classification; Neural Networks; Kernel Methods; Sparse Kernel Machines; Graphical Models; Mixture Models and EM; Approximate Inference; Sampling Methods; Continuous Latent Variables; Sequential Data; Combining Models.

10,141 citations

Journal ArticleDOI

6,278 citations

Journal ArticleDOI
22 Apr 2010
TL;DR: This paper surveys the various options dictionary training has to offer, from the MOD, the K-SVD, and the Generalized PCA through to the most recent contributions and structures.
Abstract: Sparse and redundant representation modeling of data assumes an ability to describe signals as linear combinations of a few atoms from a pre-specified dictionary. As such, the choice of the dictionary that sparsifies the signals is crucial for the success of this model. In general, the choice of a proper dictionary can be done using one of two ways: i) building a sparsifying dictionary based on a mathematical model of the data, or ii) learning a dictionary to perform best on a training set. In this paper we describe the evolution of these two paradigms. As manifestations of the first approach, we cover topics such as wavelets, wavelet packets, contourlets, and curvelets, all aiming to exploit 1-D and 2-D mathematical models for constructing effective dictionaries for signals and images. Dictionary learning takes a different route, attaching the dictionary to a set of examples it is supposed to serve. From the seminal work of Field and Olshausen, through the MOD, the K-SVD, the Generalized PCA and others, this paper surveys the various options such training has to offer, up to the most recent contributions and structures.

1,345 citations
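As a concrete toy for the dictionary-learning route mentioned above, the sketch below runs MOD-style iterations: sparse-code the training signals with orthogonal matching pursuit, then update the dictionary by least squares. It uses scikit-learn's orthogonal_mp for the coding step; the data, sizes, and number of iterations are illustrative, and this is a sketch of the general MOD idea rather than the survey's reference implementation.

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

def mod_iteration(Y, D, sparsity):
    """One MOD step: OMP sparse coding, then a least-squares dictionary update."""
    A = orthogonal_mp(D, Y, n_nonzero_coefs=sparsity)      # codes, shape (n_atoms, n_signals)
    D_new = Y @ np.linalg.pinv(A)                          # argmin_D ||Y - D A||_F
    norms = np.linalg.norm(D_new, axis=0, keepdims=True)
    return D_new / np.maximum(norms, 1e-12), A             # keep atoms unit-norm

# Toy run on random data (purely illustrative).
rng = np.random.default_rng(0)
Y = rng.standard_normal((16, 500))                         # 500 training signals of dimension 16
D = rng.standard_normal((16, 32))
D /= np.linalg.norm(D, axis=0)
for _ in range(10):
    D, _ = mod_iteration(Y, D, sparsity=3)
A = orthogonal_mp(D, Y, n_nonzero_coefs=3)                 # re-code with the final dictionary
print("final residual:", np.linalg.norm(Y - D @ A))
```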