
Showing papers in "Communications in information and systems in 2007"


Journal ArticleDOI
TL;DR: In this article, the authors examined the asymptotic stabilizability of discrete-time linear systems with delayed input and showed that such systems, even with arbitrarily large input delay, can be asymptotically stabilized by linear state or output feedback as long as the open loop system is not exponentially unstable, and are semi-globally asymptotically stabilizable under actuator saturation.
Abstract: This paper examines the asymptotic stabilizability of discrete-time linear systems with delayed input. By explicit construction of stabilizing feedback laws, it is shown that a stabilizable and detectable linear system with an arbitrarily large delay in the input can be asymptotically stabilized by either linear state or output feedback as long as the open loop system is not exponentially unstable (i.e., all the open loop poles are on or inside the unit circle). It is further shown that such a system, when subject to actuator saturation, is semi-globally asymptotically stabilizable by linear state or output feedback.
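For illustration, here is a minimal simulation of the prediction-based idea behind such feedback laws: the controller forecasts the state d steps ahead through the buffered inputs and applies linear feedback to the prediction. The matrices, gain, and delay below are illustrative stand-ins, not the paper's explicit construction (which also covers output feedback and saturation).

```python
import numpy as np

# Sketch: stabilize x_{k+1} = A x_k + B u_{k-d} by predicting x_{k+d}
# from the current state and the inputs already "in the pipeline",
# then applying u_k = K @ x_pred. All values are illustrative.

A = np.array([[1.0, 1.0], [0.0, 1.0]])   # marginally stable double integrator
B = np.array([[0.0], [1.0]])
d = 5                                     # input delay (steps)
K = np.array([[-0.1, -0.5]])              # gain placing eigs of A+BK inside unit circle

x = np.array([[1.0], [0.0]])
u_buf = [np.zeros((1, 1)) for _ in range(d)]  # inputs not yet applied

for k in range(200):
    # Predict the state d steps ahead using the buffered inputs.
    x_pred = np.linalg.matrix_power(A, d) @ x
    for j, u_j in enumerate(u_buf):
        x_pred += np.linalg.matrix_power(A, d - 1 - j) @ B @ u_j
    u = K @ x_pred
    u_buf.append(u)
    x = A @ x + B @ u_buf.pop(0)          # the delayed input reaches the plant

print("final state norm:", np.linalg.norm(x))   # decays toward 0
```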

42 citations


Journal ArticleDOI
TL;DR: This paper presents the necessary techniques for analyzing the system and shows that random network coding provides the system with both maximum bandwidth efficiency and robustness and points out that the model for random network coding in P2P networks is very different from the one that has been studied extensively in the literature.
Abstract: In this paper, we study the application of random network coding in peer-to-peer (P2P) networks. The system we analyze is based on a prototype called Avalanche proposed in Network Coding for Large Scale Content Distribution (C. Gkantsidis and P. Rodriguez) for large scale content distribution on such networks. We present the necessary techniques for analyzing the system and show that random network coding provides the system with both maximum bandwidth efficiency and robustness. We also point out that the model for random network coding in P2P networks is very different from the one that has been studied extensively in the literature.
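As a toy illustration of the random linear network coding primitive such systems build on, the sketch below encodes source blocks as random linear combinations and decodes by Gaussian elimination once the received combinations reach full rank. It works over GF(2) for brevity (deployed systems such as Avalanche typically use a larger field like GF(2^8)); all names and sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

k, blocklen = 4, 16
blocks = rng.integers(0, 2, size=(k, blocklen), dtype=np.uint8)  # source pieces

def encode():
    """A peer sends a random XOR-combination of the k source blocks."""
    coeffs = rng.integers(0, 2, size=k, dtype=np.uint8)
    return coeffs, (coeffs @ blocks) % 2

def gf2_decode(C, P):
    """Gauss-Jordan elimination over GF(2); returns the k source blocks
    if the received coefficient matrix C has full rank, else None."""
    M = np.concatenate([C, P], axis=1) % 2
    row = 0
    for col in range(k):
        pivot = next((r for r in range(row, len(M)) if M[r, col]), None)
        if pivot is None:
            return None                      # not yet full rank
        M[[row, pivot]] = M[[pivot, row]]
        for r in range(len(M)):
            if r != row and M[r, col]:
                M[r] ^= M[row]               # XOR row reduction
        row += 1
    return M[:k, k:]

# Keep collecting coded packets until decoding succeeds.
C = np.empty((0, k), dtype=np.uint8)
P = np.empty((0, blocklen), dtype=np.uint8)
decoded = None
while decoded is None:
    c, p = encode()
    C, P = np.vstack([C, c]), np.vstack([P, p])
    decoded = gf2_decode(C, P)

assert (decoded == blocks).all()
```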

37 citations


Journal Article
TL;DR: This work presents an introductory survey of factor models for time series, where the factors represent the comovement between the single time series.
Abstract: Factor models are used to condense high dimensional data consisting of many variables into a much smaller number of factors. Here we present an introductory survey of factor models for time series, where the factors represent the comovement between the single time series. Principal component analysis, linear dynamic factor models with idiosyncratic noise and generalized linear dynamic factor models are introduced, and structural properties, such as identifiability, as well as estimation are discussed. 1. Introduction. Factor analysis was developed by psychologists for measurement of intelligence at the beginning of the twentieth century. In particular Burt and Spearman, observing that in tests of mental ability of a person the scores on different items tended to be correlated, developed the hypothesis of a common latent factor, called general intelligence (6), (19). In the 1930s, Thurstone and others proposed a more general model allowing for more than one common factor, representing different mental abilities (22). In general, the motivation for the use of factor models is compression of the information contained in a high dimensional data vector into a small number of factors, and the idea of underlying latent nonobserved variables influencing the observations. Whereas the initial approach to factor analysis was oriented to data originating from independent, identically distributed random variables and consisted in dimension reduction in the cross-sectional dimension (i.e. the number of variables), the idea has been further generalized to modelling of multivariate time series, thus compressing information in the cross-sectional and the time dimension. This idea has been pursued rather independently in a number of areas, such as signal processing (5), or econometrics (15), (17), (10).
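A minimal numerical sketch of the principal-component route to factor extraction described in the survey: simulate series sharing r common factors, then recover the factor space from the leading eigenvectors of the sample covariance matrix. Dimensions and noise level below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)

# N observed series driven by r common factors plus idiosyncratic noise.
T, N, r = 500, 20, 2
factors = rng.standard_normal((T, r))            # the common comovement
loadings = rng.standard_normal((N, r))
X = factors @ loadings.T + 0.3 * rng.standard_normal((T, N))

Xc = X - X.mean(axis=0)                          # center each series
eigval, eigvec = np.linalg.eigh(np.cov(Xc, rowvar=False))
W = eigvec[:, ::-1][:, :r]                       # top-r principal directions
factor_est = Xc @ W                              # estimated factors (up to rotation)

expl = eigval[::-1][:r].sum() / eigval.sum()
print(f"variance explained by {r} factors: {expl:.1%}")
```

Note the estimated factors are only identified up to an invertible transformation, which is the identifiability issue the survey discusses.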

36 citations


Journal ArticleDOI
TL;DR: In this paper, a recursive algorithm for the linear-inequality constrained least squares (LS) problem is developed, and a simple, easily implementable initialization of the RLS algorithm is proposed.
Abstract: Recursive Least Squares (RLS) algorithms have widespread applications in many areas, such as real-time signal processing, control and communications. This paper shows that the unique solutions to the linear-equality constrained and the unconstrained LS problems, respectively, always have exactly the same recursive form. Their only difference lies in the initial values. Based on this, a recursive algorithm for the linear-inequality constrained LS problem is developed. It is shown that these RLS solutions converge to the true parameter that satisfies the constraints as the data size increases. A simple and easily implementable initialization of the RLS algorithm is proposed. Its convergence to the exact LS solution and the true parameter is shown. The RLS algorithm, in a theoretically equivalent form obtained by a simple modification, is shown to be robust in that the constraints are always guaranteed to be satisfied no matter how large the numerical errors are. Numerical examples are provided to demonstrate the validity of the above results.
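The standard unconstrained RLS recursion can be sketched in a few lines. Per the abstract, the linear-equality constrained solution obeys the same recursion and differs only in its initial values; the paper's specific initialization is not reproduced here, and the large-P0 start below is just one common choice.

```python
import numpy as np

def rls_update(theta, P, x, y):
    """One recursive least-squares step for the model y ≈ x.T @ theta."""
    Px = P @ x
    k = Px / (1.0 + x @ Px)              # gain vector
    theta = theta + k * (y - x @ theta)  # correct by the prediction error
    P = P - np.outer(k, Px)              # covariance downdate
    return theta, P

rng = np.random.default_rng(2)
true_theta = np.array([1.0, -2.0, 0.5])
theta = np.zeros(3)
P = 1e3 * np.eye(3)                      # large P0: weak prior, one common choice
for _ in range(2000):
    x = rng.standard_normal(3)
    y = x @ true_theta + 0.1 * rng.standard_normal()
    theta, P = rls_update(theta, P, x, y)
print(theta)                             # ≈ true_theta
```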

36 citations


Journal ArticleDOI
TL;DR: The high sensitivity to brain state changes, ability to operate in real time and small computational requirements make Hurst parameter estimation using any of these three methods well suited for implementation into miniature implantable devices for contingent delivery of anti-seizure therapies.
Abstract: Estimation of the Hurst parameter provides information about the memory range or correlations (long vs. short) of processes (time-series). A new application for the Hurst parameter, real-time event detection, is identified. Hurst estimates using rescaled range, dispersional and bridge-detrended scaled windowed variance analyses of seizure time-series recorded from human subjects reliably detect their onset, termination and intensity. Detection sensitivity is unaltered by signal decimation and window size increases. The high sensitivity to brain state changes, ability to operate in real time and small computational requirements make Hurst parameter estimation using any of these three methods well suited for implementation into miniature implantable devices for contingent delivery of anti-seizure therapies. Epilepsy affects a significant fraction of the industrialized world's population, and up to 10% of people in under-developed countries. As seizures are brief and relatively unpredictable, continuous EEG/ECoG monitoring is needed to implement new therapies, such as contingent electrical stimulation for seizure blockage, via implantable devices, in subjects with pharmaco-resistant epilepsies. Hurst parameter (1) estimation has been applied to many natural (non-biological) (2) and also biological phenomena, such as neuron membrane channel kinetics (10), a fundamental functional operation of the brain. The behavior of membrane channels seems to exhibit long-term correlation (H > 0.78, implying "persistence") and the currents recorded through individual ion channels have self-similar properties, that is, they are fractals and may be best modeled using fractional Brownian motion. The fractal behavior may extend to the whole neuron as measured simultaneously across many channels. This raises the possibility that brain electrical processes may be fractal or self-similar, or that, at a minimum, useful information may be obtained from treating them as such in analyzing data or signals generated by these processes in the brain. The Hurst parameter (H) (1, 2) may provide information about the behavior of continuous and discrete event time series and its estimation in the EEG/ECoG of
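A bare-bones rescaled range (R/S) estimator, one of the three Hurst estimation methods the paper evaluates; the paper's exact windowing, decimation, and detrending details are not reproduced. For white noise the estimate should land near 0.5 (small samples bias it slightly upward).

```python
import numpy as np

def hurst_rs(x, window_sizes=(16, 32, 64, 128, 256)):
    """Rescaled-range Hurst estimate: slope of log E[R/S] vs log n."""
    x = np.asarray(x, dtype=float)
    log_n, log_rs = [], []
    for n in window_sizes:
        rs = []
        for start in range(0, len(x) - n + 1, n):   # non-overlapping windows
            w = x[start:start + n]
            z = np.cumsum(w - w.mean())             # cumulative deviation
            r = z.max() - z.min()                   # range
            s = w.std()
            if s > 0:
                rs.append(r / s)
        log_n.append(np.log(n))
        log_rs.append(np.log(np.mean(rs)))
    return np.polyfit(log_n, log_rs, 1)[0]          # fitted slope ≈ H

rng = np.random.default_rng(3)
print(hurst_rs(rng.standard_normal(4096)))          # ≈ 0.5 for white noise
```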

33 citations


Journal ArticleDOI
TL;DR: Various sampling and population-based numerical algorithms to overcome the computational difficulties of computing an optimal solution in terms of a policy and/or value function are presented.
Abstract: Many problems modeled by Markov decision processes (MDPs) have very large state and/or action spaces, leading to the well-known curse of dimensionality that makes solution of the resulting models intractable. In other cases, the system of interest is complex enough that it is not feasible to explicitly specify some of the MDP model parameters, but simulated sample paths can be readily generated (e.g., for random state transitions and rewards), albeit at a non-trivial computational cost. For these settings, we have developed various sampling and population-based numerical algorithms to overcome the computational difficulties of computing an optimal solution in terms of a policy and/or value function. Specific approaches presented in this survey include multi-stage adaptive sampling, evolutionary policy iteration and evolutionary random policy search.
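As a hedged sketch of the adaptive sampling idea behind the first approach in the survey, the snippet below allocates a simulation budget across actions by a UCB-style optimism rule rather than sampling uniformly. The simulator, constants, and single-stage setting are illustrative simplifications of multi-stage adaptive sampling.

```python
import math, random

def sample_reward(action):
    """Stand-in for one simulated transition/reward from the MDP."""
    means = {0: 0.3, 1: 0.5, 2: 0.45}
    return random.gauss(means[action], 0.2)

def ucb_value_estimate(actions, budget):
    """Estimate the best action value, spending more samples where
    the optimistic (mean + exploration bonus) index is largest."""
    counts = {a: 1 for a in actions}
    sums = {a: sample_reward(a) for a in actions}    # one sample each to start
    for t in range(len(actions), budget):
        ucb = {a: sums[a] / counts[a]
                  + math.sqrt(2 * math.log(t) / counts[a]) for a in actions}
        a = max(ucb, key=ucb.get)                    # optimistic action choice
        sums[a] += sample_reward(a)
        counts[a] += 1
    return max(sums[a] / counts[a] for a in actions)

random.seed(4)
print(ucb_value_estimate([0, 1, 2], budget=2000))    # ≈ 0.5
```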

20 citations


Journal ArticleDOI
TL;DR: This work studies the regularity, in the sense of the Malliavin calculus, of the renormalized self-intersection local time of fractional Brownian motion with Hurst parameter H ∈ (0, 1).
Abstract: Let B^H be a d-dimensional fractional Brownian motion with Hurst parameter H ∈ (0, 1). We study the regularity, in the sense of the Malliavin calculus, of the renormalized self-intersection local time
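For readers who want to experiment, fractional Brownian motion with a given Hurst parameter can be simulated directly from its covariance function; the Cholesky-based sketch below is O(n^3) but short, and a d-dimensional fBm is d independent copies. This is generic simulation code, unrelated to the paper's Malliavin calculus analysis.

```python
import numpy as np

# Sample a one-dimensional fBm path from its covariance
#   Cov(B^H_s, B^H_t) = (s^{2H} + t^{2H} - |t - s|^{2H}) / 2
# via Cholesky factorization (Davies-Harte is preferred for long paths).

def fbm(n, H, T=1.0, seed=5):
    rng = np.random.default_rng(seed)
    t = np.linspace(T / n, T, n)
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (s**(2 * H) + u**(2 * H) - np.abs(s - u)**(2 * H))
    return t, np.linalg.cholesky(cov) @ rng.standard_normal(n)

t, path = fbm(256, H=0.75)   # H > 1/2: positively correlated increments
print(path[:5])
```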

19 citations


Journal ArticleDOI
TL;DR: A probabilistic model based on a Gaussian mixture is introduced to identify clusters embedded in different feature subspaces, where some features can be irrelevant and thus hinder clustering performance.
Abstract: The goal of unsupervised learning, i.e., clustering, is to determine the intrinsic structure of unlabeled data. Feature selection for clustering improves the performance of grouping by removing irrelevant features. Typical feature selection algorithms select a common feature subset for all the clusters. Consequently, clusters embedded in different feature subspaces cannot be identified. In this paper, we introduce a probabilistic model based on a Gaussian mixture to solve this problem. In particular, the feature relevance for an individual cluster is treated as a probability, which is represented by a localized feature saliency and estimated through the Expectation Maximization (EM) algorithm during the clustering process. In addition, the number of clusters is determined simultaneously by integrating a Minimum Message Length (MML) criterion. Experiments carried out on both synthetic and real-world datasets illustrate the performance of the proposed approach in finding clusters embedded in feature subspaces. 1. Introduction. Clustering is unsupervised classification of data objects into different groups (clusters) such that objects in one group are similar to one another and dissimilar from those in other groups. Applications of data clustering are found in many fields, such as information discovery, text mining, web analysis, image grouping, medical diagnosis, and bioinformatics. Many clustering algorithms have been proposed in the literature (8). Basically, they can be categorized into two groups: hierarchical or partitional. A clustering algorithm typically considers all available features of the dataset in an attempt to learn as much as possible from the data. In practice, however, some features can be irrelevant and thus hinder the clustering performance. Feature selection, which chooses the "best" feature subset for clustering, can be applied to solve this problem. Feature selection is extensively studied in the supervised learning scenario (1-3), where class labels are available for judging the performance improvement contributed by a feature selection algorithm. For unsupervised learning, feature selection is a very difficult problem due to the lack of class labels, and it has received extensive attention recently. The algorithm proposed in (4) measures feature similarity by an information compression index. In (5), the relevant features are detected using a distance-based entropy measure. (6) evaluates the cluster quality over different feature subsets by normalizing cluster separability or likelihood using a cross-projection method. In (7), feature saliency is defined as a probability and estimated by the Expectation Maximization (EM) algorithm using Gaussian mixture models. A variational Bayesian approach
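The heart of such a model can be sketched as a likelihood in which each feature is relevant to a cluster with some probability (the localized saliency) and otherwise follows a common background density. The function below computes that per-cluster density; names are illustrative, and the EM and MML updates that estimate the parameters are omitted.

```python
import numpy as np

def gauss(x, mu, sigma):
    """Elementwise univariate Gaussian density."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def component_density(X, mu, sigma, bg_mu, bg_sigma, rho):
    """p(x | cluster j) under localized feature saliency.
    X: (n, D) data; mu, sigma, rho: (D,) for cluster j; bg_*: (D,)
    background (irrelevant-feature) parameters shared by all clusters."""
    rel = rho * gauss(X, mu, sigma)                 # feature relevant to j
    irr = (1.0 - rho) * gauss(X, bg_mu, bg_sigma)   # feature irrelevant
    return np.prod(rel + irr, axis=1)               # features conditionally independent

# E-step responsibilities then take the usual mixture form:
#   r[n, j] ∝ pi[j] * component_density(X, mu[j], sigma[j], bg_mu, bg_sigma, rho[j])
```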

16 citations


Journal ArticleDOI
TL;DR: The goal is to determine a control law, measurable with respect to the history of the space-time process, which minimizes a quadratic cost functional.
Abstract: Maintaining optical alignment between stations of a free-space optical link requires an active pointing mechanism to persistently aim an optical beam toward the receiving station with an acceptable accuracy. This mechanism ensures delivery of maximum optical power to the receiving station in spite of the relative motion of the stations. In the active pointing scheme proposed in the present paper, the receiving station estimates the center of the incident optical beam based on the output of a position-sensitive photodetector. The transmitting station receives this estimate via an independent communication link and uses it to accurately aim at the receiving station. The overall mechanism which implements this scheme can be described in terms of a diffusion process which modulates the rate of a doubly stochastic space-time Poisson process. At the receiving station, observation of the space-time process over a subset of R^2 is provided in order to control the diffusion process. Our goal is to determine a control law, measurable with respect to the history of the space-time process, which minimizes a quadratic cost functional.
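A minimal simulation of the observation model described above: a diffusion (the beam-center error) modulates the rate of a Poisson counting process. The Ornstein-Uhlenbeck dynamics and the Gaussian-shaped intensity are illustrative stand-ins, not the paper's model, and no control is applied here.

```python
import numpy as np

rng = np.random.default_rng(6)

# Beam-center error x follows an illustrative OU diffusion; the photon
# detection rate peaks when the beam is centered on the detector (x = 0).
dt, n = 1e-3, 5000
x = 0.0
total = 0
for _ in range(n):
    x += -0.5 * x * dt + 0.2 * np.sqrt(dt) * rng.standard_normal()
    lam = 100.0 * np.exp(-x * x)        # rate of the doubly stochastic process
    total += rng.poisson(lam * dt)      # counts observed in this time slice
print("photon detections over the run:", total)
```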

13 citations


Journal ArticleDOI
TL;DR: Parameter-dependent linear evolution equations with a fractional noise in the boundary conditions are studied, and ergodic-type theorems for stationary and non-stationary solutions are verified and used to prove the strong consistency of a suitably defined family of estimators.
Abstract: Parameter-dependent linear evolution equations with a fractional noise in the boundary conditions are studied. Ergodic-type theorems for stationary and non-stationary solutions are verified and used to prove the strong consistency of a suitably defined family of estimators.

11 citations


Journal ArticleDOI
TL;DR: A new parameter domain, the two-layered sphere, is designed, and a framework for mapping high genus surfaces onto the sphere is presented, which allows existing applications based on general spherical parameterization to be transferred to high genus surfaces.
Abstract: Surface parameterization establishes bijective maps from a surface onto a topologically equivalent standard domain. It is well known that spherical parameterization is limited to genus-zero surfaces. In this work, we design a new parameter domain, the two-layered sphere, and present a framework for mapping high genus surfaces onto the sphere. This setup allows us to transfer the existing applications based on general spherical parameterization to the field of high genus surfaces, such as remeshing, consistent parameterization, shape analysis, and so on. Our method is based on Riemann surface theory. We construct meromorphic functions on surfaces: for genus one surfaces, we apply Weierstrass P-functions; for high genus surfaces, we compute the quotient between two holomorphic one-forms. Our method of spherical parameterization is theoretically sound and practically efficient. It makes the subsequent applications on high genus surfaces very promising.

Journal ArticleDOI
TL;DR: The GLC ray-tracer is able to create a broad class of multiperspective effects and it provides flexible collineation controls to reduce multiperspective distortions.
Abstract: We present a General Linear Camera (GLC) model that unifies many previous camera models into a single representation. The GLC model is capable of describing all perspective (pinhole), orthographic, and many multiperspective (including pushbroom and two-slit) cameras, as well as epipolar plane images. It also includes three new and previously unexplored multiperspective linear cameras. The GLC model is both general and linear in the sense that, given any vector space where rays are represented as points, it describes all 2D affine subspaces (planes) that can be formed by affine combinations of 3 rays. The incident radiance seen along the rays found on subregions of these 2D linear subspaces is a precise definition of a projected image of a 3D scene. We model the GLC imaging process in terms of two separate stages: the mapping of 3D geometry to rays and the sampling of these rays over an image plane. We derive a closed-form solution to projecting 3D points in a scene to rays in a GLC and a notion of GLC collineation analogous to pinhole cameras. Finally, we develop a GLC ray-tracer for the direct rendering of GLC images. The GLC ray-tracer is able to create a broad class of multiperspective effects and it provides flexible collineation controls to reduce multiperspective distortions.
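A tiny sketch of the core GLC construction: with rays represented as points (u, v, s, t) in a two-plane parameterization, a GLC generates the affine combinations of three generator rays. The generator values below are arbitrary.

```python
import numpy as np

# Each ray (u, v, s, t) is taken here as the line through (u, v, 0)
# and (s, t, 1). Three generator rays define one GLC.
r1 = np.array([0.0, 0.0, 0.0, 0.0])
r2 = np.array([1.0, 0.0, 1.0, 0.0])
r3 = np.array([0.0, 1.0, 0.0, 1.0])

def glc_ray(a, b):
    """Affine combination a*r1 + b*r2 + (1 - a - b)*r3 -> one camera ray."""
    u, v, s, t = a * r1 + b * r2 + (1.0 - a - b) * r3
    origin = np.array([u, v, 0.0])
    direction = np.array([s - u, t - v, 1.0])
    return origin, direction / np.linalg.norm(direction)

print(glc_ray(0.2, 0.3))   # sampling (a, b) over an image plane yields the image
```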

Journal ArticleDOI
TL;DR: Existence of optimal strategies maximizing the average growth rate of the portfolio is proved under both complete and partial observation of the process modelling the economic factors, using a modification of the vanishing discount approach.
Abstract: This paper considers a discrete-time Markovian model of asset prices with economic factors and transaction costs with proportional and fixed terms. Existence of optimal strategies maximizing the average growth rate of the portfolio is proved under both complete and partial observation of the process modelling the economic factors. The proof is based on a modification of the vanishing discount approach. The main difficulty is the discontinuity of the controlled transition operator of the underlying Markov process.

Journal ArticleDOI
TL;DR: A surface matching framework is proposed: first, with curve signatures, the partitioning of two surfaces defined by simple closed curves on them is matched; second, the segmented subregions are matched pairwise and then compared on canonical planar domains.
Abstract: We design signatures for curves defined on genus zero surfaces. The signature classifies curves according to the conformal geometry of the given curves and their embedded surface. Based on Teichmuller theory, our signature describes not only the curve shape but also the intrinsic relationship between the curve and its embedded surface. Furthermore, the signature metric is stable: it is close to the identity between surfaces sharing similar Riemannian geometry metrics. Based on this, we propose a surface matching framework: first, with curve signatures, we match the partitioning of two surfaces defined by simple closed curves on them; second, the segmented subregions are matched pairwise and then compared on canonical planar domains. 1. Introduction. Shape analysis and shape comparison are fundamental problems in computer vision, graphics and modeling fields with many important applications. Many 2D and 3D shape analysis techniques have been developed in the past couple of decades, most of which are based on comparing curvature or spatial positions of the points on the curve. A completely different way is to consider all the closed curves on the surface. The curve space on a surface conveys rich geometric information about the surface itself and is easy to process. The philosophy of analyzing shapes by their associated curve spaces has deep roots in algebraic topology (8), infinite dimensional Morse theory (18) and Teichmuller space theory in complex geometry (31). Suppose M is a surface (a 2-manifold); a closed curve on M is a map

Journal ArticleDOI
TL;DR: This paper considers hidden Markov processes in discrete time with a finite state space X and a general observation or read-out space Y, assumed to be a Polish space, and shows that the random constant and the deterministic positive exponent in the inequality stating exponential stability can be chosen so that, for any prescribed s exceeding 1, the s-th exponential moment of the random constant is finite.
Abstract: We consider hidden Markov processes in discrete time with a finite state space X and a general observation or read-out space Y, which is assumed to be a Polish space. It is well-known that in the statistical analysis of HMMs the so-called predictive filter plays a fundamental role. A useful result establishing the exponential stability of the predictive filter with respect to perturbations of its initial condition was given in the paper of LeGland and Mevel, MCSS, 2000, for the case when the assumed transition probability matrix is primitive. The main technical result of the present paper extends the cited result by showing that the random constant and the deterministic positive exponent appearing in the inequality stating exponential stability can be chosen so that, for any prescribed s exceeding 1, the s-th exponential moment of the random constant is finite. An application of this result to the estimation of HMMs with primitive transition probability densities is also briefly presented.
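The predictive filter whose stability the paper studies admits a very short sketch for a finite-state chain: condition on the current read-out, then propagate through the transition matrix. The two-state chain and Gaussian read-outs below are illustrative; running two copies from different initializations shows the forgetting of the initial condition that exponential stability quantifies.

```python
import numpy as np

Q = np.array([[0.9, 0.1],
              [0.2, 0.8]])                 # primitive transition matrix

def obs_density(y, state):
    """Gaussian read-out densities on Y = R (a Polish space)."""
    return np.exp(-0.5 * (y - [0.0, 1.0][state]) ** 2)

def predictive_step(p, y):
    """p_t(i) = P(X_t = i | y_1..y_{t-1}) -> p_{t+1}."""
    posterior = p * np.array([obs_density(y, i) for i in range(2)])
    posterior /= posterior.sum()           # condition on y_t
    return posterior @ Q                   # predict X_{t+1}

rng = np.random.default_rng(7)
p, q = np.array([0.99, 0.01]), np.array([0.01, 0.99])  # two initializations
for _ in range(50):
    y = rng.standard_normal() + 1.0        # some observation stream
    p, q = predictive_step(p, y), predictive_step(q, y)
print(np.abs(p - q).max())                 # ≈ 0: initial condition forgotten
```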

Journal ArticleDOI
TL;DR: Using a fluid model in which the number of shares is treated as a fluid and liquidation is dictated by the rate of selling over time, this work characterizes the dynamics governing the value function and the associated boundary conditions.
Abstract: A common practice in stock-selling decision making is to liquidate the security in a short duration. This is feasible when a relatively small number of shares of a stock is involved. Selling a large position during a short period of time frequently depresses the market, resulting in poor filling prices. In this work, liquidation strategies are considered that sell much smaller numbers of shares over a longer period of time. We use a fluid model in which the number of shares is treated as a fluid, and the corresponding liquidation is dictated by the rate of selling over time. Our objective is to maximize the expected overall return. The problem is formulated as a stochastic control problem with state constraints. Using the method of constrained viscosity solutions, we characterize the dynamics governing the value function and the associated boundary conditions. Numerical algorithms are also provided along with an illustrative example for demonstration purposes.
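An illustrative discretization of the fluid idea, with holdings sold at a rate that depresses the price; the dynamics, constants, and the naive constant-rate policy below are stand-ins, not the paper's model or its constrained viscosity solution machinery.

```python
import numpy as np

rng = np.random.default_rng(8)
dt, T = 0.01, 1.0
shares, price, revenue = 1.0, 100.0, 0.0
kappa = 0.05                            # price impact per unit selling rate
for _ in range(int(T / dt)):
    u = 1.0                             # naive policy: sell the position over [0, T]
    price += -kappa * u * price * dt + 0.5 * price * np.sqrt(dt) * rng.standard_normal()
    revenue += u * price * dt           # proceeds at the (depressed) price
    shares = max(shares - u * dt, 0.0)  # state constraint: holdings stay nonnegative
print(f"shares left: {shares:.2f}, revenue: {revenue:.2f}")
```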

Journal ArticleDOI
TL;DR: The models and algorithms developed can be extended to model the spread of a disease in a general network of connected zones and give an optimization formulation for obtaining the optimal screening and quarantine policy.
Abstract: In this paper, we propose mathematical models for the spread of HIV in a network of prisons. We study the effect of both screening prisoners and quarantining infectives. Efficient algorithms based on Newton’s method are then developed for computing the equilibrium values of the infectives in each prison. We also give an optimization formulation for obtaining the optimal screening and quarantine policy. The models and algorithms developed can be extended to model the spread of a disease in a general network of connected zones.
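A hedged sketch of the computational core: solve F(x) = 0 for the equilibrium infective levels by Newton's method. The SIS-style balance below (infection, recovery, inter-prison transfers) is an illustrative stand-in for the paper's model; all parameters are made up.

```python
import numpy as np

N = np.array([500.0, 300.0, 200.0])          # prison populations
beta = np.array([0.002, 0.003, 0.004])       # transmission rates
gamma = 0.1                                  # recovery/removal rate
M = np.array([[0.0 , 0.01, 0.0 ],            # transfer rates, entry (i, j): j -> i
              [0.01, 0.0 , 0.02],
              [0.0 , 0.02, 0.0 ]])

def F(x):
    """Equilibrium balance: infection - recovery + net transfers = 0."""
    inflow, outflow = M @ x, M.sum(axis=0) * x
    return beta * x * (N - x) - gamma * x + inflow - outflow

def jacobian(f, x, eps=1e-6):
    """Central finite-difference Jacobian of f at x."""
    J = np.zeros((len(x), len(x)))
    for j in range(len(x)):
        e = np.zeros(len(x)); e[j] = eps
        J[:, j] = (f(x + e) - f(x - e)) / (2 * eps)
    return J

x = 0.5 * N                                  # initial guess
for _ in range(20):
    step = np.linalg.solve(jacobian(F, x), -F(x))
    x += step
    if np.linalg.norm(step) < 1e-10:
        break
print("equilibrium infectives per prison:", x)
```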

Journal ArticleDOI
TL;DR: The goal of nonlinear filtering is to determine the conditional expectation of the form E(φ(x_t) | y_s, 0 ≤ s ≤ t), where φ is any C^∞ function, or, even better, to compute the entire conditional probability density of x_t given the observation history {y_s : 0 ≤ s ≤ t}.
Abstract: 1. Introduction. Ever since the technique of the Kalman-Bucy filter was popularized, there has been an intense interest in developing nonlinear filtering theory. Basically we have a signal or state process x = {x_t} which is usually not observable. What we can observe is a related process y = {y_t}. The goal of nonlinear filtering is to determine the conditional expectation of the form E(φ(x_t) | y_s, 0 ≤ s ≤ t), where φ is any C^∞ function, or, even better, to compute the entire conditional probability density ρ(t, x) of x_t given the observation history {y_s : 0 ≤ s ≤ t}. In practical applications, it is preferable that the computation of the conditional probability density be performed recursively in terms of a statistic η = {η_t}, which can be updated using only the latest observations. In some cases, η_t is computable with a finite system of differential equations driven by y. This leads to the ideal notion of a finite dimensional recursive filter. By definition such a filter is a system: dη_t = α(η_t) dt + Σ_{i=1}^{p} β_i(η_t) dy_t^i
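The classic concrete instance of such a finite dimensional recursive filter is the Kalman-Bucy filter itself. The sketch below Euler-discretizes it for a scalar linear model; all coefficients are illustrative.

```python
import numpy as np

# Scalar model: dx = a x dt + dw (state), dy = x dt + dv (observation).
# The Kalman-Bucy filter is the two-dimensional recursive statistic (m, P):
#   dm = a m dt + P (dy - m dt),    dP/dt = 2 a P + 1 - P^2.

rng = np.random.default_rng(9)
a, dt, n = -0.5, 1e-3, 20000
x = 0.0
m, P = 1.0, 1.0                       # deliberately wrong initial guess
for _ in range(n):
    dw, dv = np.sqrt(dt) * rng.standard_normal(2)
    x += a * x * dt + dw              # hidden state
    dy = x * dt + dv                  # observation increment
    m += a * m * dt + P * (dy - m * dt)   # filter drift + innovation gain
    P += (2 * a * P + 1 - P ** 2) * dt    # Riccati equation for the variance
print(f"x = {x:.3f}, estimate m = {m:.3f}, P = {P:.3f}")
```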

Journal ArticleDOI
TL;DR: This paper designs stochastic optimization algorithms and presents numerical experiments using data derived from Berkeley Options Data Base, and uses a Liapunov function approach to obtain the desired convergence rates.
Abstract: This paper is concerned with using stochastic approximation and optimization methods for stock liquidation decision making and option pricing. For the stock liquidation problem, we present a class of stochastic recursive algorithms and compare the performance of stochastic approximation methods with that of certain commonly used heuristic methods, such as the moving average method and the moving maximum method. Stocks listed in NASDAQ are used for making the comparisons. For option pricing, we design stochastic optimization algorithms and present numerical experiments using data derived from the Berkeley Options Data Base. An important problem in these studies concerns the rate of convergence, taking into consideration bias and noise variance. In an effort to ascertain the convergence rates while incorporating the computational effort, we use a Liapunov function approach to obtain the desired rates. Variants of the algorithms are also suggested.
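Recursions of this class typically follow the Robbins-Monro pattern, sketched below on an illustrative quadratic objective (not the paper's stock-selling rule): step toward a root of an expectation that is observed only through noisy samples, with diminishing step sizes.

```python
import numpy as np

rng = np.random.default_rng(10)

def noisy_gradient(theta):
    """Unbiased sample of f'(theta) for the toy objective f(theta) = (theta - 2)^2."""
    return 2 * (theta - 2.0) + rng.standard_normal()

theta = 0.0
for n in range(1, 20001):
    theta -= (1.0 / n) * noisy_gradient(theta)   # steps sum to infinity,
                                                 # squared steps are summable
print(theta)                                     # converges to 2
```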

Journal ArticleDOI
TL;DR: In this article, a one-parameter family of probability densities (related to the Pareto distribution, which describes many natural phenomena) where the Cramer-Rao inequality provides no information was investigated.
Abstract: We investigate a one-parameter family of probability densities (related to the Pareto distribution, which describes many natural phenomena) where the Cramer-Rao inequality provides no information.
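The abstract does not reproduce the paper's exact family; as a hedged illustration of how a Pareto-type density can defeat the Cramer-Rao machinery, recall the standard situation where the support depends on the parameter:

```latex
% Illustrative only -- not necessarily the paper's exact family: a
% Pareto density whose support depends on the parameter (a > 0 fixed):
%   f(x;\theta) = a\,\theta^{a} / x^{a+1},  for  x \ge \theta .
% Because the lower limit of  \int_{\theta}^{\infty} f(x;\theta)\,dx = 1
% depends on \theta, differentiation under the integral sign is not
% justified, and the usual score identity fails:
\[
  E_\theta\left[\partial_\theta \log f(X;\theta)\right]
    = E_\theta\left[\frac{a}{\theta}\right] = \frac{a}{\theta} \neq 0 .
\]
% With the regularity conditions violated, the Cramer-Rao bound carries
% no information; e.g. \hat{\theta} = \min_i X_i has variance of order
% n^{-2}, faster than the n^{-1} rate the bound would suggest.
```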