
Showing papers in "Communications in Information and Systems" (2005)


Journal ArticleDOI
TL;DR: This paper explains why distrust in adaptive control is warranted, by reviewing a number of adaptive control approaches that have proved deficient for reasons that were not immediately apparent, and examines several instances of this mismatch between theory and practice.
Abstract: Adaptive control is a very appealing technology, at least in principle. Yet its use has been conditioned by an attitude of distrustfulness on the part of some practitioners. In this paper, we explain why such distrustfulness is warranted, by reviewing a number of adaptive control approaches which have proved deficient for some reason that has not been immediately apparent. The explanation of the deficiencies, which normally were reflected in unexpected instabilities, is our main concern. Such explanations, coupled with remedies for avoiding the deficiencies, are necessary to engender confidence in the technology. These include the unpredictable failure of the MIT rule; the bursting phenomenon, and how to prevent it; Rohrs' counterexample, which attempted to disqualify all adaptive control algorithms; the notion that identification of a plant is only valid conceptually for a restricted range of controllers (with the implication that in adaptive control, certain controller changes suggested by adaptive control algorithms may introduce instability); and the concept of multiple model adaptive control. 1. Introduction. Adaptive controllers are a fact of life, and have been for some decades. However, theory and practice have not always tracked one another. In this paper, we examine several instances of such a mismatch. These are:
• The MIT rule, an intuitively based gradient-descent algorithm that gave unpredictable performance; a satisfactory explanation of its performance started to become possible in the 1980s.
• Bursting, a phenomenon of temporary instability in adaptive control algorithm implementations of a type observed in the 1970s; explanation and our understanding of avoidance mechanisms only became possible in the 1980s.
• Rohrs' counterexample, which argued that adaptive control laws existing at the time could not be used with confidence in practical designs, because unmodeled dynamics in the plant could be excited and yield an unstable control system.
• Iterative controller re-design and identification, an intuitively appealing approach to updating controllers that came to prominence in the 1980s and 1990s, and which can lead to unstable performance. Explanation and an understanding of an avoidance mechanism came around 2000.
• Multiple model adaptive control, another intuitively appealing approach to adaptive control with the potential to include non-linear systems. It too can lead to unstable performance; early theoretical development left untouched important issues of the number of controllers to be used, and their location.
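
To make the MIT rule concrete, here is a minimal simulation sketch, assuming a first-order plant with unknown gain and a feedforward controller; the plant, reference model, and all parameter values are illustrative, not taken from the paper. The update theta_dot = -gamma * e * y_m is the gradient-descent law in question, and pushing gamma high enough reproduces the kind of unpredictable behavior the paper explains.

```python
# Minimal MIT-rule sketch (assumed model: plant y' = -y + k_p*u,
# reference model y_m' = -y_m + k_m*r, adjustable feedforward gain theta).
import numpy as np

dt, T = 0.01, 40.0
gamma = 0.5                 # adaptation gain; large values can destabilize the loop
k_p, k_m = 2.0, 1.0         # unknown plant gain, desired model gain
y = y_m = theta = 0.0

for k in range(int(T / dt)):
    t = k * dt
    r = np.sign(np.sin(0.5 * t))      # square-wave reference
    u = theta * r                     # adjustable controller
    y += dt * (-y + k_p * u)          # plant
    y_m += dt * (-y_m + k_m * r)      # reference model
    e = y - y_m                       # tracking error
    theta += dt * (-gamma * e * y_m)  # MIT rule: sensitivity de/dtheta ~ y_m

print(f"adapted gain {theta:.3f}, ideal k_m/k_p = {k_m / k_p:.3f}")
```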

180 citations


Journal ArticleDOI
TL;DR: This paper shows that the compressors used to compute the normalized compression distance are not idempotent in some cases, being strongly skewed by object size and window size, and therefore causing a deviation in the identity property of the distance unless care is taken that the objects to be compressed fit the windows.
Abstract: Using the mathematical background for algorithmic complexity developed by Kolmogorov in the sixties, Cilibrasi and Vitanyi have designed a similarity distance named the normalized compression distance, applicable to the clustering of objects of any kind, such as music, texts or gene sequences. The normalized compression distance is a quasi-universal normalized admissible distance under certain conditions. This paper shows that the compressors used to compute the normalized compression distance are not idempotent in some cases, being strongly skewed by the size of the objects and the window size, and therefore causing a deviation in the identity property of the distance if we do not take care that the objects to be compressed fit the windows. The relationship between the precision of the distance and the size of the objects has been analyzed for several well-known compressors, and especially in depth for three cases, bzip2, gzip and PPMZ, which are examples of the three main types of compressors: block-sorting, Lempel-Ziv, and statistical. 1. Introduction. A natural measure of similarity assumes that two objects x and y are similar if the basic blocks of x are in y and vice versa. If this happens, we can describe object x by making reference to the blocks belonging to y; thus the description of x will be very simple using the description of y. This is partially what a compressor does to code the concatenated xy sequence: a search for information shared by both sequences in order to reduce the redundancy of the whole sequence. If the result is small, it means that a lot of information contained in x can be used to code y, following the similarity conditions described in the previous paragraph. This was formalized by Rudi Cilibrasi and Paul Vitanyi [2], giving rise to the concept of the normalized compression distance (NCD), which is based on the use of compressors to provide a measure of the similarity between objects. This distance may then be used to cluster those objects. This idea is very powerful, because it can be applied in the same way to all kinds of objects, such as music, texts or gene sequences. There is no need to use specific features of the objects to cluster. The only thing needed to compute the distance from one object x to another object y is to measure the ability of x to make the description of y simple, and vice versa. Cilibrasi and Vitanyi have perfected this idea in two ways, by stating the conditions that a compressor must satisfy to be useful in the computation of the NCD, and by giving formal expression to the quality of the distance in comparison with an ideal distance proposed by Vitanyi and others in [3]. In this paper we show that the
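
For concreteness, here is a minimal sketch of the NCD computation described above, using bzip2 (one of the three compressors the paper analyzes); the test byte strings are illustrative. Per the paper's warning, the inputs must stay well inside the compressor's block/window size, and the last line is exactly the identity test that fails when the compressor is not idempotent.

```python
# NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)), with C = compressed size.
import bz2
import os

def C(data: bytes) -> int:
    return len(bz2.compress(data))

def ncd(x: bytes, y: bytes) -> float:
    cx, cy = C(x), C(y)
    return (C(x + y) - min(cx, cy)) / max(cx, cy)

a = b"the quick brown fox jumps over the lazy dog " * 50
b = b"the quick brown fox jumps over the lazy cat " * 50
r = os.urandom(len(a))                 # incompressible, unrelated object
print(ncd(a, b))                       # similar objects: distance near 0
print(ncd(a, r))                       # unrelated objects: distance near 1
print(ncd(a, a))                       # identity property: 0 only if C is idempotent
```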

111 citations


Journal ArticleDOI
TL;DR: Maximum likelihood (ML) estimators of the locations and reflection parameters of the scatterers are developed, along with a likelihood time-reversal imaging technique which is suboptimal but computationally efficient and can be used to initialize the ML estimation.
Abstract: We present a statistical framework for the fixed-frequency computational time-reversal imaging problem assuming point scatterers in a known background medium. Our statistical measurement models are based on the physical models of the multistatic response matrix, the distorted-wave Born approximation, and Foldy-Lax multiple scattering models. We develop maximum likelihood (ML) estimators of the locations and reflection parameters of the scatterers. Using a simplified single-scatterer model, we also propose a likelihood time-reversal imaging technique which is suboptimal but computationally efficient and can be used to initialize the ML estimation. We generalize the fixed-frequency likelihood imaging to multiple frequencies, and demonstrate its effectiveness in resolving the grating lobes of a sparse array. This makes it possible to achieve high resolution by deploying a large-aperture array consisting of a small number of antennas while avoiding spatial ambiguity. Numerical and experimental examples are used to illustrate the applicability of our results.
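
The following sketch illustrates the flavor of single-frequency time-reversal imaging from a multistatic response matrix under the Born approximation, beamforming against the dominant singular vector; all geometry, the 2-D free-space Green's function, and the single-scatterer setup are illustrative assumptions, not the paper's statistical estimators.

```python
# DORT-style time-reversal imaging sketch (assumed: 2-D free space, one scatterer).
import numpy as np

k = 2 * np.pi                                   # wavenumber (unit wavelength)
ax = np.linspace(-5, 5, 16)                     # 16-element linear array on y = 0
ants = np.column_stack([ax, np.zeros_like(ax)])
target, tau = np.array([1.0, 8.0]), 1.0         # point scatterer and reflectivity

def green(points, q):
    r = np.linalg.norm(points - q, axis=1)
    return np.exp(1j * k * r) / np.sqrt(r)      # 2-D Green's function, up to constants

g_t = green(ants, target)
K = tau * np.outer(g_t, g_t)                    # Born multistatic response matrix
u1 = np.linalg.svd(K)[0][:, 0]                  # dominant left singular vector

xs, ys = np.linspace(-5, 5, 81), np.linspace(5, 11, 61)
def pixel(x, yv):                               # normalized steering-vector match
    g = green(ants, np.array([x, yv]))
    return abs(np.vdot(g / np.linalg.norm(g), u1)) ** 2

img = np.array([[pixel(x, yv) for x in xs] for yv in ys])
iy, ix = np.unravel_index(img.argmax(), img.shape)
print("image peak at", (xs[ix], ys[iy]), "true target", tuple(target))
```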

44 citations


Journal ArticleDOI
TL;DR: This document describes the design and construction of the experimental apparatus, the experimental procedures that were conducted, and the animals that were tested.

44 citations


Journal ArticleDOI
TL;DR: This work presents a computationally-efficient matrix-vector expression for the solution of a matrix linear least squares problem that arises in multistatic antenna array processing and relates the vectorization-by-columns operator to the diagonal extraction operator.
Abstract: We present a computationally-efficient matrix-vector expression for the solution of a matrix linear least squares problem that arises in multistatic antenna array processing. Our derivation relies on an explicit new relation between Kronecker, Khatri-Rao and Schur-Hadamard matrix products, which involves a selection matrix (i.e., a subset of the columns of a permutation matrix). Moreover, we show that the same selection matrix also relates the vectorization-by-columns operator to the diagonal extraction operator, which plays a central role in our computationally-efficient solution.
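
The following numerical check illustrates the kind of relation the derivation relies on, under the standard definitions (the exact selection matrix in the paper may differ): the column-wise Khatri-Rao product equals the Kronecker product times a selection matrix S built from columns of the identity, and the transpose of the same S maps vectorization-by-columns to diagonal extraction.

```python
# Verify: (A kron B) @ S == Khatri-Rao(A, B), and S.T @ vec(M) == diag(M).
# Sizes are arbitrary illustrations.
import numpy as np

m, p, n = 4, 3, 5
A, B = np.random.randn(m, n), np.random.randn(p, n)

S = np.zeros((n * n, n))                 # selection matrix: columns j*n + j
for j in range(n):
    S[j * n + j, j] = 1.0

khatri_rao = np.column_stack([np.kron(A[:, j], B[:, j]) for j in range(n)])
assert np.allclose(np.kron(A, B) @ S, khatri_rao)

M = np.random.randn(n, n)
vec_M = M.reshape(-1, order="F")         # column-major (vec-by-columns)
assert np.allclose(S.T @ vec_M, np.diag(M))
print("both identities verified")
```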

36 citations


Journal ArticleDOI
TL;DR: A distance for stochastic models based on the concept of subspace angles within a model and between two models is proposed and used to obtain a clustering over the set of time series.
Abstract: In this paper a methodology to cluster time series based on measurement data is described. In particular, we propose a distance for stochastic models based on the concept of subspace angles within a model and between two models. This distance is used to obtain a clustering over the set of time series. We show how it is related to the mutual information of the past and the future output processes, and to a previously defined cepstral distance. Finally, the methodology is applied to the clustering of time series of power consumption within the Belgian electricity grid.
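
To give a feel for the cepstral distance that the subspace-angle distance is related to, here is a rough sketch of a Martin-type weighted cepstral distance between fitted AR models; the AR(1) test signals, model order, and truncation length are illustrative assumptions, not the paper's power-consumption data or exact definition.

```python
# Cepstral distance sketch: fit AR models, compute model cepstra from the poles,
# then d^2 = sum_n n * (c_n(x) - c_n(y))^2 (Martin-type weighting).
import numpy as np

def ar_fit(x, p=8):
    """Least-squares AR(p) fit: x[t] ~ sum_k a[k] * x[t-1-k]."""
    X = np.column_stack([x[p - k - 1:len(x) - k - 1] for k in range(p)])
    a, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
    return a

def ar_cepstrum(a, n_coef=50):
    poles = np.roots(np.concatenate(([1.0], -a)))          # poles of 1/A(z)
    n = np.arange(1, n_coef + 1)
    return np.real((poles[None, :] ** n[:, None]).sum(axis=1)) / n

def cepstral_distance(x, y, p=8):
    cx, cy = ar_cepstrum(ar_fit(x, p)), ar_cepstrum(ar_fit(y, p))
    n = np.arange(1, len(cx) + 1)
    return np.sqrt(np.sum(n * (cx - cy) ** 2))

rng = np.random.default_rng(0)
def ar1(phi, n=5000):                                      # AR(1) test signal
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.standard_normal()
    return x

print(cepstral_distance(ar1(0.90), ar1(0.88)))   # similar dynamics: small distance
print(cepstral_distance(ar1(0.90), ar1(-0.50)))  # different dynamics: large distance
```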

35 citations


Journal ArticleDOI
TL;DR: A review of time-frequency and time-scale characterizations for wideband time-varying channels is presented, and it is discussed how the interpretation of these models can be seen to arise from processing assumptions on the transmit and receive waveforms.
Abstract: Mobile communication channels are often modeled as linear time-varying filters or, equivalently, as time-frequency integral operators with finite support in time and frequency. Such a characterization inherently assumes the signals are narrowband and may not be appropriate for wideband signals. In this paper time-scale characterizations are examined that are useful in wideband time-varying channels, for which a time-scale integral operator is physically justifiable. A review of these time-frequency and time-scale characterizations is presented. Both the time-frequency and time-scale integral operators have a two-dimensional discrete characterization which motivates the design of time-frequency or time-scale rake receivers. These receivers have taps for both time and frequency (or time and scale) shifts of the transmitted signal. A general theory of these characterizations which generates, as specific cases, the discrete time-frequency and time-scale models is presented here. The interpretation of these models, namely, that they can be seen to arise from processing assumptions on the transmit and receive waveforms, is discussed. Out of this discussion a third model arises: a frequency-scale continuous channel model with an associated discrete frequency-scale characterization.
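
A small sketch of the discrete time-frequency characterization mentioned above, with an assumed illustrative tap set: the received signal is a double sum of delayed and Doppler-shifted copies of the transmitted signal, and these shifted copies are exactly where a time-frequency rake receiver places its taps.

```python
# Discrete delay-Doppler channel sketch: y(t) = sum_{m,n} h[m,n] x(t - m) e^{j2pi nu_n t}.
import numpy as np

fs = 1000.0                              # sample rate (Hz)
t = np.arange(0, 1.0, 1 / fs)
x = np.sinc(50 * (t - 0.5))              # transmitted pulse (illustrative)

taps = [                                 # (delay in samples, Doppler in Hz, gain)
    (0, 0.0, 1.0),
    (40, 5.0, 0.5 - 0.2j),
    (95, -12.0, 0.3j),
]

y = np.zeros_like(t, dtype=complex)      # received signal: double sum of shifts
for m, nu, h in taps:
    y += h * np.roll(x, m) * np.exp(2j * np.pi * nu * t)

# A time-frequency rake receiver correlates y against each shifted copy:
matched = [abs(np.vdot(np.roll(x, m) * np.exp(2j * np.pi * nu * t), y))
           for m, nu, _ in taps]
print(matched)
```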

28 citations


Journal ArticleDOI
TL;DR: In this article, the authors studied the dirty-paper coding problem over a channel with both noise and interference, where the interference is known to the encoder non-causally and unknown to the decoder.
Abstract: “Writing on dirty paper” refers to the communication problem over a channel with both noise and interference, where the interference is known to the encoder non-causally and unknown to the decoder. This problem is regarded as a basic building block in both single-user and multiuser communications, and it has been extensively investigated by Costa and other researchers. However, little is known in the case that the encoder can have access to feedback from the decoder. In this paper, we study the dirty-paper coding problem for feedback Gaussian channels without or with memory. We provide the most power-efficient coding schemes for this problem, i.e., the schemes achieve lossless interference cancelation. These schemes are based on the Kalman filtering algorithm, extend the Schalkwijk-Kailath feedback codes, have low complexity and a doubly exponential reliability function, and reveal the interconnections among information, control, and estimation over dirty-paper channels with feedback. This research may prove useful for, for example, power-constrained sensor network communication.
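
For background, here is a hedged sketch of the classical Schalkwijk-Kailath scheme that the paper's codes extend (memoryless AWGN, no interference, noiseless feedback; all parameters illustrative): the transmitter repeatedly sends the receiver's scaled estimation error, which it knows through the feedback link, and the MSE contracts by the fixed factor sigma^2/(P + sigma^2) per channel use; this geometric MSE decay is what produces the doubly exponential reliability mentioned above.

```python
# Schalkwijk-Kailath feedback sketch for a message point theta in [-1/2, 1/2].
import numpy as np

rng = np.random.default_rng(1)
P, sigma2, n_uses = 1.0, 0.1, 25
theta = 0.3712                                       # message point

alpha = 1.0 / 12.0                                   # initial MSE (uniform message)
theta_hat = 0.0
for _ in range(n_uses):
    x = np.sqrt(P / alpha) * (theta_hat - theta)     # scaled estimation error, power P
    y = x + np.sqrt(sigma2) * rng.standard_normal()  # AWGN channel
    c = np.sqrt(P * alpha) / (P + sigma2)            # linear MMSE correction gain
    theta_hat -= c * y
    alpha *= sigma2 / (P + sigma2)                   # MSE recursion (geometric decay)

print(f"|theta_hat - theta| = {abs(theta_hat - theta):.2e}")
```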

13 citations


Journal ArticleDOI
TL;DR: Experimental results for the binary burst noise channel support the theoretical predictions and show that, in practice, there is much to be gained by taking the channel memory into account.
Abstract: We consider the problem of estimating a discrete signal $X^n = (X_1, \ldots, X_n)$ based on its noise-corrupted observation signal $Z^n = (Z_1, \ldots, Z_n)$. The noise-free, noisy, and reconstruction signals are all assumed to have components taking values in the same finite $M$-ary alphabet $\{0, \ldots, M-1 \}$. For concreteness we focus on the additive noise channel $Z_i = X_i + N_i$, where addition is modulo-$M$, and $\{N_i\}$ is the noise process. The cumulative loss is measured by a given loss function. The distribution of the noise is assumed known; the noise may have memory, restricted only by stationarity and a mild mixing condition. We develop a sequence of denoisers (indexed by the block length $n$) which we show to be asymptotically universal in both a semi-stochastic setting (where the noiseless signal is an individual sequence) and in a fully stochastic setting (where the noiseless signal is emitted from a stationary source). It is detailed how the problem formulation, denoising schemes, and performance guarantees carry over to non-additive channels, as well as to higher-dimensional data arrays. The proposed schemes are shown to be computationally implementable. We also discuss a variation on these schemes that is likely to do well on data of moderate size. We conclude with a report of experimental results for the binary burst noise channel, where the noise is a finite-state hidden Markov process (FS-HMP), and a finite-state hidden Markov random field (FS-HMRF), in the respective cases of one- and two-dimensional data. These support the theoretical predictions and show that, in practice, there is much to be gained by taking the channel memory into account.
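
Denoisers of this family follow a two-pass count-then-estimate recipe. Here is a compact sketch in the style of the discrete universal denoiser (DUDE), specialized to the memoryless binary symmetric channel with Hamming loss and double-sided contexts; the paper's channels may have memory, and the persistent Markov clean source below is only an illustration.

```python
# Two-pass universal denoising sketch: count noisy symbols per context, then
# pick, per position, the reconstruction minimizing estimated expected loss.
import numpy as np
from collections import defaultdict

def dude(z, delta, k=2):
    Pi = np.array([[1 - delta, delta], [delta, 1 - delta]])  # channel P(z|x)
    Pi_inv = np.linalg.inv(Pi)
    loss = np.array([[0.0, 1.0], [1.0, 0.0]])                # Hamming loss[x, xhat]
    ctx = lambda i: (tuple(z[i - k:i]), tuple(z[i + 1:i + k + 1]))
    counts = defaultdict(lambda: np.zeros(2))
    for i in range(k, len(z) - k):                           # pass 1: context counts
        counts[ctx(i)][z[i]] += 1
    x_hat = z.copy()
    for i in range(k, len(z) - k):                           # pass 2: per-symbol rule
        scores = [counts[ctx(i)] @ Pi_inv @ (loss[:, xh] * Pi[:, z[i]])
                  for xh in (0, 1)]
        x_hat[i] = int(np.argmin(scores))
    return x_hat

rng = np.random.default_rng(0)
n, delta = 100_000, 0.1
x = np.zeros(n, dtype=int)                                   # persistent Markov source
for t in range(1, n):
    x[t] = x[t - 1] if rng.random() < 0.95 else 1 - x[t - 1]
z = np.where(rng.random(n) < delta, 1 - x, x)                # BSC(delta) corruption
print("noisy error rate   ", np.mean(z != x))
print("denoised error rate", np.mean(dude(z, delta) != x))
```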

12 citations


Journal ArticleDOI
TL;DR: This paper focuses on the derivation of the optimal two-stage estimator for the square-root covariance implementation of the Kalman filter (TS-SRCKF), which is known to be numerically more robust than the standard covariance implementation of the AKF.
Abstract: This paper considers the problem of estimating an unknown input (bias) by means of the augmented-state Kalman filter (AKF). To reduce the computational complexity of the AKF, Hsieh [12] recently developed an optimal two-stage Kalman filter (TS-AKF) that separates the bias estimation from the state estimation, and showed that this new two-stage estimator is equivalent to the standard AKF but requires fewer computations per iteration. This paper focuses on the derivation of the optimal two-stage estimator for the square-root covariance implementation of the Kalman filter (TS-SRCKF), which is known to be numerically more robust than the standard covariance implementation. The new TS-SRCKF also estimates the state and the bias separately while at the same time remaining equivalent to the standard augmented-state SRCKF. It is experimentally shown in the paper that the new TS-SRCKF may require fewer flops per iteration for some problems than Hsieh's TS-AKF [12]. Furthermore, a second, even faster (single-stage) algorithm is derived in the paper by exploiting the structure of the least-squares problem and the square-root covariance formulation of the AKF. The computational complexities of the two proposed methods are analyzed and compared to those of other existing implementations of the AKF.
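
For reference, here is a minimal sketch of the augmented-state filter whose cost the two-stage estimators reduce, on an assumed scalar model (not the paper's example): the constant bias is appended to the state and estimated jointly with it.

```python
# Augmented-state Kalman filter sketch: state [x, b], bias b assumed constant.
import numpy as np

rng = np.random.default_rng(2)
b_true = 1.5
F, H, Q, R = 0.95, 1.0, 0.01, 0.25        # x_{k+1} = F x_k + b + w_k,  z_k = H x_k + v_k

Fa = np.array([[F, 1.0], [0.0, 1.0]])     # augmented dynamics
Ha = np.array([[H, 0.0]])
Qa = np.diag([Q, 0.0])

x, xa, P = 0.0, np.zeros(2), 10.0 * np.eye(2)
for _ in range(300):
    x = F * x + b_true + np.sqrt(Q) * rng.standard_normal()   # true state
    z = H * x + np.sqrt(R) * rng.standard_normal()            # measurement
    xa, P = Fa @ xa, Fa @ P @ Fa.T + Qa                       # predict
    S = float(Ha @ P @ Ha.T) + R
    K = (P @ Ha.T / S).ravel()                                # Kalman gain
    xa = xa + K * float(z - Ha @ xa)                          # update
    P = (np.eye(2) - np.outer(K, Ha)) @ P

print(f"estimated bias {xa[1]:.3f} (true {b_true})")
```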

10 citations


Journal ArticleDOI
TL;DR: It is proved that any of these DHTs of length $N = 2^t$ can be factorized by means of a divide-and-conquer strategy into a product of sparse, orthogonal matrices, where in this context sparse means at most two nonzero entries per row and column.
Abstract: The discrete Hartley transforms (DHT) of types I-IV and the related matrix algebras are discussed. We prove that any of these DHTs of length $N=2^t$ can be factorized by means of a divide-and-conquer strategy into a product of sparse, orthogonal matrices, where in this context sparse means at most two nonzero entries per row and column. The sparsity, joint with the orthogonality of the matrix factors, is the key for proving that these new algorithms have low arithmetic cost equal to $\frac52 N\log_2 (N)+O(N)$ arithmetic operations and an excellent normwise numerical stability. Further, we consider the optimal Frobenius approximation of a given symmetric Toeplitz matrix generated by an integrable symbol in a Hartley matrix algebra. We give explicit formulas for computing these optimal approximations and discuss the related preconditioned conjugate gradient (PCG) iterations. By using matrix approximation theory, we prove the strong clustering at unity of the preconditioned matrix sequences under the sole assumption of continuity and positivity of the generating function. The multilevel case is also briefly treated. Some numerical experiments concerning DHT preconditioning are included.
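
As a quick illustration of the transform in question, the sketch below checks the type-I DHT against its $O(N^2)$ definition using the standard DHT-DFT relation; the paper's contribution is a direct factorization into sparse orthogonal factors at cost $\frac52 N\log_2(N)+O(N)$, which this FFT detour does not reproduce.

```python
# Type-I DHT: H[k] = sum_n x[n] * cas(2*pi*n*k/N), cas(t) = cos(t) + sin(t).
# Standard relation to the DFT: DHT(x) = Re(FFT(x)) - Im(FFT(x)).
import numpy as np

def dht(x):
    X = np.fft.fft(x)
    return X.real - X.imag

N = 2 ** 8
x = np.random.randn(N)
n = np.arange(N)
cas = (np.cos(2 * np.pi * np.outer(n, n) / N)
       + np.sin(2 * np.pi * np.outer(n, n) / N))
assert np.allclose(cas @ x, dht(x))          # matches the O(N^2) definition
assert np.allclose(dht(dht(x)) / N, x)       # DHT is, up to 1/N, an involution
print("DHT verified")
```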

Journal ArticleDOI
TL;DR: By using the nonlinear internal model approach, the polynomial assumption on the solution of the regulator equations is relaxed, and the semi-global robust output regulation problem for the class of nonlinear affine systems in normal form is studied.
Abstract: This paper studies the semi-global robust output regulation problem for the class of nonlinear affine systems in normal form. The same problem was studied before by Khalil under the assumption that the solution of the regulator equations is polynomial. By using the nonlinear internal model approach, we have relaxed the polynomial assumption on the solution of the regulator equations.

Journal ArticleDOI
TL;DR: The mathematically simple matrix representation approach is used to present an efficient (i.e., polynomial-time) algorithm for consistency checking of spatial relationships in an image; this algorithm can completely answer whether a given set of three-dimensional absolute spatial relationships is consistent.
Abstract: In this paper we investigate the consistency problem for spatial relationships in content-based image database systems. We use the mathematically simple matrix representation approach to present an efficient (i.e., polynomial-time) algorithm for consistency checking of spatial relationships in an image. It is shown that there exists an efficient algorithm to detect whether, given a set SR of absolute spatial relationships, the maximal set of SR under R contains one pair of contradictory spatial relationships. The time required by it is at most a constant multiple of the time to compute the transitive reduction of a graph, or to compute the transitive closure of a graph, or to perform Boolean matrix multiplication, and thus is always bounded by time complexity $O(n^3)$ (and space complexity $O(n^2)$), where n is the number of all involved objects. As a corollary, this detection algorithm can completely answer whether a given set of three-dimensional absolute spatial relationships is consistent.
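
A hedged sketch of the Boolean-matrix flavor of such a consistency check, for a single strict directional relation (illustrative only; the paper's rule system R covers a richer relation set): build the adjacency matrix of the given relationships, close it transitively in $O(n^3)$, and flag a contradiction exactly when some object ends up related to itself.

```python
# Consistency check via Warshall transitive closure on a Boolean matrix.
import numpy as np

def consistent(n, pairs):
    """pairs: list of (i, j) meaning 'object i is left of object j'."""
    R = np.zeros((n, n), dtype=bool)
    for i, j in pairs:
        R[i, j] = True
    for k in range(n):                       # Warshall closure, O(n^3)
        R |= np.outer(R[:, k], R[k, :])
    return not R.diagonal().any()            # a cycle = contradictory pair

print(consistent(3, [(0, 1), (1, 2)]))          # True: a < b < c is consistent
print(consistent(3, [(0, 1), (1, 2), (2, 0)]))  # False: cycle a < b < c < a
```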

Journal ArticleDOI
TL;DR: The mathematically simple matrix representation approach is used to show that the deduction problem and the reduction problem, i.e., eliminating redundant spatial relationships from F, can be solved by efficient (i.e., polynomial-time) algorithms.
Abstract: Spatial reasoning is an important component in pictorial retrieval systems. There are two approaches to handling spatial relationships: the well-known one is to use algorithms, on which most earlier work such as [13, 17, 21] is based, and the recent one [30] is to construct deductive rules that allow spatial relationships to be deduced. Sistla et al. [30] developed a system of rules R for reasoning about basic spatial relationships that are of common interest in pictorial databases. In this paper, we consider the following two problems with that system of rules R: the deduction problem (that is, to deduce new spatial relationships from a given set F of spatial relationships) and the reduction problem (that is, to eliminate redundant spatial relationships from F). We use the mathematically simple matrix representation approach to show that these two problems can be solved by efficient (i.e., polynomial-time) algorithms. The time required by both of them is at most a constant multiple of the time to compute the transitive reduction of a directed graph with n vertices, or to compute the transitive closure of a directed graph with n vertices, or to perform $n \times n$ Boolean matrix multiplication, and thus is always bounded by time complexity $O(n^3)$ (and space complexity $O(n^2)$), where n is the number of all involved objects.
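
As a companion sketch for the reduction problem (again for one strict transitive relation, not the full rule system R): an edge is redundant precisely when the transitive closure contains a path of length at least two between its endpoints, which stays within the same $O(n^3)$ Boolean-matrix budget.

```python
# Redundancy elimination (transitive reduction of a DAG) via Boolean matrices.
import numpy as np

def transitive_reduction(n, pairs):
    R = np.zeros((n, n), dtype=bool)
    for i, j in pairs:
        R[i, j] = True
    closure = R.copy()
    for k in range(n):                                       # Warshall closure
        closure |= np.outer(closure[:, k], closure[k, :])
    # Edge (i, j) is redundant iff some two-step path i -> k -> j exists in the closure.
    two_step = (closure.astype(int) @ closure.astype(int)) > 0
    redundant = two_step & R
    return [(i, j) for i, j in pairs if not redundant[i, j]]

print(transitive_reduction(3, [(0, 1), (1, 2), (0, 2)]))  # (0, 2) is deducible, so dropped
```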

Journal ArticleDOI
TL;DR: This paper presents the two extensions of the Schur-Cohn stability test that derive from these extended Schur coefficients; the Schur-type multidimensional approach provides the stronger condition, which is a necessary and sufficient condition of stability for multidimensional linear systems.
Abstract: In the framework of BIBO stability tests for one-dimensional (1-D) linear systems, the Schur-Cohn stability test has the appealing property of being a recursive algorithm. This is a consequence of the simultaneously algebraic and analytic aspect of the Schur coefficients, which can also be regarded as reflection coefficients. In the multidimensional setting, this dual aspect gives rise to two extensions of the Schur coefficients that are no longer equivalent. This paper presents the two extensions of the Schur-Cohn stability test that derive from these extended Schur coefficients. The reflection-coefficient approach was recently proposed in the 2-D case as a necessary but not sufficient condition of stability. The Schur-type multidimensional approach provides a stronger condition, which is a necessary and sufficient condition of stability for multidimensional linear systems. This extension is based on so-called slice functions associated with n-variable analytic functions. Several examples are given to illustrate this approach.
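
For the 1-D case the recursion is short enough to state in full; below is a sketch for real polynomials (Schur stability: all roots strictly inside the unit circle), where each pass extracts one reflection coefficient and reduces the degree by one. The multidimensional slice-function extension in the paper is not reproduced here.

```python
# 1-D Schur-Cohn test: stable iff every reflection coefficient has |k| < 1.
import numpy as np

def schur_cohn_stable(a):
    """a: real coefficients [a_n, ..., a_0], highest degree first (a_n != 0)."""
    a = np.asarray(a, dtype=float)
    while len(a) > 1:
        k = a[-1] / a[0]                  # reflection coefficient
        if abs(k) >= 1:
            return False
        a = (a - k * a[::-1])[:-1]        # degree-reduction step
    return True

print(schur_cohn_stable(np.poly([0.5, -0.3])))   # roots inside unit circle: True
print(schur_cohn_stable(np.poly([1.2, 0.4])))    # one root outside: False
```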

Journal ArticleDOI
TL;DR: Simulation results are presented confirming that this approach outperforms the more traditional recursive least squares (RLS) adaptive equalizer for this application and rivals the performance of MMSE equalizers requiring channel knowledge.
Abstract: This paper presents a novel adaptive equalization algorithm for time-varying MIMO systems with ISI channel conditions. The algorithm avoids channel estimation before equalization and leads to a direct QR-based procedure for updating the equalizer coefficients to track the time-varying channel characteristics. Our approach does not require precise channel estimation and needs relatively few pilot symbols for satisfactory equalization. The theoretical foundations of the proposed algorithm are rooted in signal recovery results derived from the generalized Bezout identity and the finite alphabet property inherent in digital communication schemes. Concerning the convergence behavior of the algorithm, we address the following three issues: existence of fixed points, uniqueness of fixed points, and robustness under noise disturbance and parameter selection. The equalizer demonstrates promising capability in achieving low symbol error rates for a very broad range of SNRs. Simulation results are presented confirming that this approach outperforms the more traditional recursive least squares (RLS) adaptive equalizer for this application and rivals the performance of MMSE equalizers requiring channel knowledge.
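
The RLS equalizer used as the benchmark is standard; a minimal scalar-channel training sketch is given below for context. The channel, filter length, decision delay, and forgetting factor are illustrative assumptions, and the paper's QR-based MIMO update is not reproduced here.

```python
# Baseline RLS adaptive equalizer sketch (scalar ISI channel, BPSK training).
import numpy as np

rng = np.random.default_rng(3)
n, L, lam, delay = 4000, 8, 0.99, 2
s = rng.choice([-1.0, 1.0], n)                      # BPSK symbols
h = np.array([1.0, 0.5, -0.2])                      # unknown ISI channel
r = np.convolve(s, h)[:n] + 0.05 * rng.standard_normal(n)

w, P = np.zeros(L), 100.0 * np.eye(L)               # RLS state
err = []
for k in range(L - 1, n):
    u = r[k - L + 1:k + 1][::-1]                    # regressor, newest sample first
    e = s[k - delay] - w @ u                        # error vs. delayed training symbol
    g = P @ u / (lam + u @ P @ u)                   # RLS gain
    w += g * e
    P = (P - np.outer(g, u @ P)) / lam
    err.append(e * e)

print("final training MSE ~", np.mean(err[-500:]))
```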

Journal ArticleDOI
TL;DR: It is shown that a direct, DSP-based implementation of the above system using a single FFT symbol is highly susceptible to artifacts induced by symbol timing errors (symbol synchronization jitter, SSJ).
Abstract: In this paper a robust, single-carrier, multi-user, DSP-based implementation of an OFDM differential phase shift keying (DPSK) block demodulator is suggested. It is shown that a direct, DSP-based implementation of the above system using a single FFT symbol is highly susceptible to artifacts induced by symbol timing errors (symbol synchronization jitter, SSJ). A dual-symbol, FFT-based implementation is suggested to alleviate the effect of timing error on the demodulation process and even eliminate some of the SSJ-induced artifacts. The countermeasures used in the implementation do not assume any statistical model of SSJ, nor are they dependent on a symbol synchronization scheme. Also, a dual version of the demodulator is derived for an important, but less known, modulation technique called symmetric DPSK (SDPSK).
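
To fix ideas, here is a tiny sketch of the differential demodulation principle underlying the system above, for DQPSK over a flat channel with an arbitrary carrier-phase offset (all parameters illustrative); the dual-symbol FFT structure that suppresses SSJ artifacts is the paper's contribution and is not reproduced here.

```python
# DQPSK sketch: information rides on the phase difference between consecutive
# symbols, so the receiver needs no absolute carrier phase reference.
import numpy as np

rng = np.random.default_rng(4)
M = 4                                              # DQPSK alphabet size
syms = rng.integers(0, M, 200)
phases = np.cumsum(2 * np.pi * syms / M)           # differential encoding
tx = np.exp(1j * phases)                           # syms[0] rides on an implicit reference
rx = tx * np.exp(1j * 0.7)                         # unknown constant phase offset
rx += 0.05 * (rng.standard_normal(200) + 1j * rng.standard_normal(200))

diff = np.angle(rx[1:] * np.conj(rx[:-1]))         # differential demodulation
demod = np.round(diff / (2 * np.pi / M)).astype(int) % M
print("symbol errors:", np.count_nonzero(demod != syms[1:]))
```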

Journal ArticleDOI
TL;DR: This work gives a complete characterization of a class of discrete dynamical systems with nonlinear feedback that generalize various maps arising in connection with chaotic dynamical systems, topological dynamics, and linear systems theory.
Abstract: A class of discrete dynamical systems with nonlinear feedback is considered. These systems generalize various maps arising in connection with chaotic dynamical systems, topological dynamics, and linear systems theory. We give a complete characterization of this class of systems.

Journal ArticleDOI
TL;DR: It is proved that Willems' conjecture on Markovianity is true if the characteristic variety of the system has dimension zero; for the case when the system is defined by a differential operator, conditions are given under which the conjecture is also valid.
Abstract: In this paper we study the Markovian properties of a system of linear partial differential equations with constant coefficients, as initiated by J.C. Willems. In particular, we prove that his conjecture on Markovianity is true if the characteristic variety of the system has dimension zero. For the case when the system is defined by a differential operator, we give conditions under which the conjecture is also valid. Key words: linear differential systems, Markovianity, characteristic variety.