
Showing papers on "Convex optimization" published in 2008


Journal ArticleDOI
TL;DR: It is proved that the method for learning sparse representations shared across multiple tasks is equivalent to solving a convex optimization problem, for which an iterative algorithm converging to an optimal solution is given.
Abstract: We present a method for learning sparse representations shared across multiple tasks. This method is a generalization of the well-known single-task 1-norm regularization. It is based on a novel non-convex regularizer which controls the number of learned features common across the tasks. We prove that the method is equivalent to solving a convex optimization problem for which there is an iterative algorithm which converges to an optimal solution. The algorithm has a simple interpretation: it alternately performs a supervised and an unsupervised step, where in the former step it learns task-specific functions and in the latter step it learns common-across-tasks sparse representations for these functions. We also provide an extension of the algorithm which learns sparse nonlinear representations using kernels. We report experiments on simulated and real data sets which demonstrate that the proposed method can both improve the performance relative to learning each task independently and lead to a few learned features common across related tasks. Our algorithm can also be used, as a special case, to simply select--not learn--a few common variables across the tasks.

1,588 citations
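
The alternating structure described in the abstract invites a compact sketch. The following Python fragment is a minimal illustration under an assumed squared loss and synthetic data; gamma, eps, and the iteration counts are illustrative choices, not the paper's settings.

```python
# Minimal sketch of the alternating supervised/unsupervised scheme described
# above; squared loss and all parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
d, T, n, gamma, eps = 20, 5, 50, 0.1, 1e-6
X = [rng.standard_normal((n, d)) for _ in range(T)]
W_true = np.zeros((d, T))
W_true[:3] = rng.standard_normal((3, T))      # tasks share a few common features
y = [X[t] @ W_true[:, t] + 0.01 * rng.standard_normal(n) for t in range(T)]

D = np.eye(d) / d                             # shared feature metric, trace one
for _ in range(50):
    Dinv = np.linalg.inv(D + eps * np.eye(d))
    # Supervised step: per-task regularized regression in the metric D.
    W = np.column_stack([
        np.linalg.solve(X[t].T @ X[t] + gamma * Dinv, X[t].T @ y[t])
        for t in range(T)
    ])
    # Unsupervised step: recompute the shared metric from the task weights.
    vals, vecs = np.linalg.eigh(W @ W.T)
    root = (vecs * np.sqrt(np.clip(vals, 0.0, None))) @ vecs.T   # (W W^T)^(1/2)
    D = root / np.trace(root)
```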


Journal ArticleDOI
TL;DR: This work considers the problem of estimating the parameters of a Gaussian or binary distribution in such a way that the resulting undirected graphical model is sparse, and presents two new algorithms for solving problems with at least a thousand nodes in the Gaussian case.
Abstract: We consider the problem of estimating the parameters of a Gaussian or binary distribution in such a way that the resulting undirected graphical model is sparse. Our approach is to solve a maximum likelihood problem with an added l1-norm penalty term. The problem as formulated is convex but the memory requirements and complexity of existing interior point methods are prohibitive for problems with more than tens of nodes. We present two new algorithms for solving problems with at least a thousand nodes in the Gaussian case. Our first algorithm uses block coordinate descent, and can be interpreted as recursive l1-norm penalized regression. Our second algorithm, based on Nesterov's first order method, yields a complexity estimate with a better dependence on problem size than existing interior point methods. Using a log determinant relaxation of the log partition function (Wainwright and Jordan, 2006), we show that these same algorithms can be used to solve an approximate sparse maximum likelihood problem for the binary case. We test our algorithms on synthetic data, as well as on gene expression and senate voting records data.

1,189 citations
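
For orientation, the l1-penalized maximum-likelihood problem that both algorithms target can be written directly in a modeling tool such as cvxpy. This is a sketch on synthetic data with an assumed penalty weight lam; it relies on a generic solver, which is exactly what the paper's specialized algorithms are designed to avoid at scale.

```python
# Rough cvxpy sketch of l1-penalized sparse maximum likelihood (Gaussian case);
# data and lam are synthetic/illustrative, and the solver is generic.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
n, p, lam = 200, 10, 0.1
C = np.cov(rng.standard_normal((n, p)), rowvar=False)   # empirical covariance

S = cp.Variable((p, p), PSD=True)                        # inverse-covariance estimate
obj = cp.Maximize(cp.log_det(S) - cp.trace(C @ S) - lam * cp.sum(cp.abs(S)))
cp.Problem(obj).solve()
print(np.round(S.value, 2))   # near-zero off-diagonals = missing graph edges
```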


Book
18 Dec 2008
TL;DR: In this article, Cauchy's Functional Equation and Jensen's Inequality are used to show the boundedness and continuuity of Convex Functions and Additive Functions.
Abstract: Preliminaries- Set Theory- Topology- Measure Theory- Algebra- Cauchy's Functional Equation and Jensen's Inequality- Additive Functions and Convex Functions- Elementary Properties of Convex Functions- Continuous Convex Functions- Inequalities- Boundedness and Continuity of Convex Functions and Additive Functions- The Classes A, B, ?- Properties of Hamel Bases- Further Properties of Additive Functions and Convex Functions- Related Topics- Related Equations- Derivations and Automorphisms- Convex Functions of Higher Orders- Subadditive Functions- Nearly Additive Functions and Nearly Convex Functions- Extensions of Homomorphisms

1,026 citations


Journal ArticleDOI
TL;DR: This article introduces compressive sampling and recovery using convex programming, which converts high-resolution images into a relatively small bit streams in effect turning a large digital data set into a substantially smaller one.
Abstract: Image compression algorithms convert high-resolution images into a relatively small bit streams in effect turning a large digital data set into a substantially smaller one. This article introduces compressive sampling and recovery using convex programming.

1,025 citations


Journal ArticleDOI
TL;DR: The structure of optimal solution sets is studied, finite convergence for important quantities is proved, and $q$-linear convergence rates for the fixed-point algorithm applied to problems with $f(x)$ convex, but not necessarily strictly convex are established.
Abstract: We present a framework for solving the large-scale $\ell_1$-regularized convex minimization problem:\[ \min\|x\|_1+\mu f(x). \] Our approach is based on two powerful algorithmic ideas: operator-splitting and continuation. Operator-splitting results in a fixed-point algorithm for any given scalar $\mu$; continuation refers to approximately following the path traced by the optimal value of $x$ as $\mu$ increases. In this paper, we study the structure of optimal solution sets, prove finite convergence for important quantities, and establish $q$-linear convergence rates for the fixed-point algorithm applied to problems with $f(x)$ convex, but not necessarily strictly convex. The continuation framework, motivated by our convergence results, is demonstrated to facilitate the construction of practical algorithms.

912 citations
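
A minimal sketch of the fixed-point iteration with continuation, assuming f(x) = 0.5*||Ax - b||^2 and an illustrative schedule for mu; the step size follows the usual proximal-gradient bound.

```python
# Fixed-point (operator-splitting) iteration with continuation for
# min ||x||_1 + mu * 0.5*||Ax - b||^2; mu schedule and counts are illustrative.
import numpy as np

def shrink(v, t):                              # soft-thresholding: prox of ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(0)
m, n = 60, 200
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, 8, replace=False)] = rng.standard_normal(8)
b = A @ x_true

L = np.linalg.norm(A, 2) ** 2                  # Lipschitz constant of grad f
x = np.zeros(n)
for mu in [1e1, 1e2, 1e3, 1e4]:                # continuation: follow the path in mu
    tau = 1.0 / (mu * L)                       # step size for the current mu
    for _ in range(500):
        x = shrink(x - tau * mu * (A.T @ (A @ x - b)), tau)
print(np.max(np.abs(x - x_true)))              # small once mu is large enough
```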


Proceedings ArticleDOI
19 Mar 2008
TL;DR: This paper reformulates the problem by treating the 1-bit measurements as sign constraints and further constraining the optimization to recover a signal on the unit sphere, and demonstrates that this approach performs significantly better than the classical compressive sensing reconstruction methods, even as the signal becomes less sparse and as the number of measurements increases.
Abstract: Compressive sensing is a new signal acquisition technology with the potential to reduce the number of measurements required to acquire signals that are sparse or compressible in some basis. Rather than uniformly sampling the signal, compressive sensing computes inner products with a randomized dictionary of test functions. The signal is then recovered by a convex optimization that ensures the recovered signal is both consistent with the measurements and sparse. Compressive sensing reconstruction has been shown to be robust to multi-level quantization of the measurements, in which the reconstruction algorithm is modified to recover a sparse signal consistent with the quantized measurements. In this paper we consider the limiting case of 1-bit measurements, which preserve only the sign information of the random measurements. Although it is possible to reconstruct using the classical compressive sensing approach by treating the 1-bit measurements as ±1 measurement values, in this paper we reformulate the problem by treating the 1-bit measurements as sign constraints and further constraining the optimization to recover a signal on the unit sphere. Thus the sparse signal is recovered within a scaling factor. We demonstrate that this approach performs significantly better than the classical compressive sensing reconstruction methods, even as the signal becomes less sparse and as the number of measurements increases.

793 citations
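
The sign-constraint-plus-unit-sphere idea can be illustrated with a simple iteration. The sketch below is not the paper's algorithm: it combines a one-sided penalty for sign violations with a hard-thresholding step (in the spirit of later 1-bit recovery methods) and renormalization onto the unit sphere; the sparsity level k and the step size are assumptions.

```python
# Simplified 1-bit recovery sketch: enforce sign consistency with a one-sided
# penalty, hard-threshold for sparsity, renormalize to the unit sphere.
# Not the paper's exact algorithm; k and the step size are assumptions.
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 500, 100, 5
Phi = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[:k] = rng.standard_normal(k)
x_true /= np.linalg.norm(x_true)            # the scale is lost in sign measurements
y = np.sign(Phi @ x_true)

x = Phi.T @ y / m                           # crude initial guess
for _ in range(300):
    viol = np.minimum(y * (Phi @ x), 0.0)   # negative exactly where a sign is violated
    x = x - 1e-3 * (Phi.T @ (y * viol))     # gradient step on the one-sided penalty
    x[np.argsort(np.abs(x))[:-k]] = 0.0     # keep only the k largest entries
    x /= np.linalg.norm(x)                  # project back onto the unit sphere
print(abs(x @ x_true))                      # correlation with the true direction
```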


Journal ArticleDOI
TL;DR: This work introduces a decentralized scheme for least-squares and best linear unbiased estimation (BLUE) and establishes its convergence in the presence of communication noise and introduces a method of multipliers in conjunction with a block coordinate descent approach to demonstrate how the resultant algorithm can be decomposed into a set of simpler tasks suitable for distributed implementation.
Abstract: We deal with distributed estimation of deterministic vector parameters using ad hoc wireless sensor networks (WSNs). We cast the decentralized estimation problem as the solution of multiple constrained convex optimization subproblems. Using the method of multipliers in conjunction with a block coordinate descent approach we demonstrate how the resultant algorithm can be decomposed into a set of simpler tasks suitable for distributed implementation. Different from existing alternatives, our approach does not require the centralized estimator to be expressible in a separable closed form in terms of averages, thus allowing for decentralized computation even of nonlinear estimators, including maximum likelihood estimators (MLE) in nonlinear and non-Gaussian data models. We prove that these algorithms have guaranteed convergence to the desired estimator when the sensor links are assumed ideal. Furthermore, our decentralized algorithms exhibit resilience in the presence of receiver and/or quantization noise. In particular, we introduce a decentralized scheme for least-squares and best linear unbiased estimation (BLUE) and establish its convergence in the presence of communication noise. Our algorithms also exhibit potential for higher convergence rate with respect to existing schemes. Corroborating simulations demonstrate the merits of the novel distributed estimation algorithms.

740 citations
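
The method-of-multipliers structure can be sketched for the least-squares case. In the simplified fragment below, the agreement step is a plain average standing in for the neighborhood message exchanges a real WSN implementation would use; rho and the data are illustrative.

```python
# Simplified sketch of decentralized least squares via the method of multipliers
# (consensus ADMM form): local refinements, an averaging step, multiplier updates.
import numpy as np

rng = np.random.default_rng(0)
J, n, d, rho = 8, 30, 4, 1.0                 # sensors, samples each, dim, penalty
theta_true = rng.standard_normal(d)
X = [rng.standard_normal((n, d)) for _ in range(J)]
y = [X[j] @ theta_true + 0.1 * rng.standard_normal(n) for j in range(J)]

x = np.zeros((J, d))                         # local estimates
u = np.zeros((J, d))                         # scaled multipliers
z = np.zeros(d)                              # consensus variable
for _ in range(100):
    for j in range(J):                       # local updates (parallel in practice)
        x[j] = np.linalg.solve(X[j].T @ X[j] + rho * np.eye(d),
                               X[j].T @ y[j] + rho * (z - u[j]))
    z = (x + u).mean(axis=0)                 # agreement step
    u += x - z                               # multiplier update
print(np.round(z - theta_true, 3))           # close to zero at convergence
```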


Journal ArticleDOI
TL;DR: In this paper, a regularized variant of the projected subgradient method for nonsmooth, nonstrictly convex minimization in real Hilbert spaces is presented, where only one projection step is needed per iteration and the involved stepsizes are controlled so that the algorithm is of practical interest.
Abstract: In this paper, we establish a strong convergence theorem regarding a regularized variant of the projected subgradient method for nonsmooth, nonstrictly convex minimization in real Hilbert spaces. Only one projection step is needed per iteration and the involved stepsizes are controlled so that the algorithm is of practical interest. To this aim, we develop new techniques of analysis which can be adapted to many other non-Fejerian methods.

591 citations


Journal ArticleDOI
TL;DR: This work begins with the standard design under the assumption of a total power constraint and proves that precoders based on the pseudo-inverse are optimal among the generalized inverses in this setting, and examines individual per-antenna power constraints.
Abstract: We consider the problem of linear zero-forcing precoding design and discuss its relation to the theory of generalized inverses in linear algebra. Special attention is given to a specific generalized inverse known as the pseudo-inverse. We begin with the standard design under the assumption of a total power constraint and prove that precoders based on the pseudo-inverse are optimal among the generalized inverses in this setting. Then, we proceed to examine individual per-antenna power constraints. In this case, the pseudo-inverse is not necessarily the optimal inverse. In fact, finding the optimal matrix is nontrivial and depends on the specific performance measure. We address two common criteria, fairness and throughput, and show that the optimal generalized inverses may be found using standard convex optimization methods. We demonstrate the improved performance offered by our approach using computer simulations.

588 citations
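
Under the total power constraint, the pseudo-inverse design amounts to a few lines; the channel matrix H and power budget P below are synthetic placeholders.

```python
# Minimal sketch of pseudo-inverse zero-forcing precoding under a total power
# constraint; H and P are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)
K, M, P = 4, 6, 10.0                          # users, transmit antennas, total power
H = rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))

T = np.linalg.pinv(H)                         # generalized inverse: H @ T = I
T *= np.sqrt(P) / np.linalg.norm(T, 'fro')    # scale to meet the total power budget
print(np.round(np.abs(H @ T), 3))             # scaled identity: interference removed
```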


Journal ArticleDOI
TL;DR: In this article, the optimal power flow (OPF) problem was reformulated as a semidefinite programming (SDP) model, and an interior point method (IPM) for SDP was developed.

576 citations


Journal ArticleDOI
TL;DR: It is proven that the feasibility of the randomized solutions for all other convex programs can be bounded based on the feasibility for the prototype class of fully-supported problems, which means that all fully-supported problems share the same feasibility properties.
Abstract: Many optimization problems are naturally delivered in an uncertain framework, and one would like to exercise prudence against the uncertainty elements present in the problem. In previous contributions, it has been shown that solutions to uncertain convex programs that bear a high probability to satisfy uncertain constraints can be obtained at low computational cost through constraint randomization. In this paper, we establish new feasibility results for randomized algorithms. Specifically, the exact feasibility for the class of the so-called fully-supported problems is obtained. It turns out that all fully-supported problems share the same feasibility properties, revealing a deep kinship among problems of this class. It is further proven that the feasibility of the randomized solutions for all other convex programs can be bounded based on the feasibility for the prototype class of fully-supported problems. The feasibility result of this paper outperforms previous bounds and is not improvable because it is exact for fully-supported problems.

Book
06 Nov 2008
TL;DR: This comprehensive monograph analyzes Lagrange multiplier theory, shows its impact on the development of numerical algorithms for problems posed in a function space setting, and develops and analyzes efficient algorithms for constrained optimization and convex optimization problems based on the augmented Lagrangian concept.
Abstract: Lagrange multiplier theory provides a tool for the analysis of a general class of nonlinear variational problems and is the basis for developing efficient and powerful iterative methods for solving these problems. This comprehensive monograph analyzes Lagrange multiplier theory and shows its impact on the development of numerical algorithms for problems posed in a function space setting. The book is motivated by the idea that a full treatment of a variational problem in function spaces would not be complete without a discussion of infinite-dimensional analysis, proper discretization, and the relationship between the two. The authors develop and analyze efficient algorithms for constrained optimization and convex optimization problems based on the augmented Lagrangian concept and cover such topics as sensitivity analysis, convex optimization, second order methods, and shape sensitivity calculus. General theory is applied to challenging problems in optimal control of partial differential equations, image analysis, mechanical contact and friction problems, and American options for the Black-Scholes model. Audience: This book is for researchers in optimization and control theory, numerical PDEs, and applied analysis. It will also be of interest to advanced graduate students in applied analysis and PDE optimization.

Journal ArticleDOI
TL;DR: The use of multiple signals with arbitrary cross-correlation matrix R is proposed, and it is shown that R can be chosen to achieve or approximate a desired spatial transmit beampattern.
Abstract: Proposed next-generation radar systems will have multiple transmit apertures with complete flexibility in the choice of the signals transmitted at each aperture. Here we propose the use of multiple signals with arbitrary cross-correlation matrix R, and show that R can be chosen to achieve or approximate a desired spatial transmit beampattern. Two specific problems are addressed. The first is the constrained optimization problem of finding the value of R which causes the true transmit beampattern to be close in some sense to a desired beampattern. This is approached using convex optimization techniques. The second is the problem of designing multiple constant-modulus waveforms with given cross-correlation R. The use of coded binary phase shift keyed (BPSK) waveforms is considered. A method for finding the code sequences based on random signaling with a structured correlation matrix is proposed. It is also shown that by restricting the class of admissible waveforms one reduces the set of possible signal correlation matrices.

Journal ArticleDOI
TL;DR: The problem of distributed beamforming is considered for a wireless network consisting of a transmitter, a receiver, and relay nodes; it is shown that, using semidefinite relaxation, the resulting optimization can be efficiently solved by interior point methods.
Abstract: In this paper, the problem of distributed beamforming is considered for a wireless network which consists of a transmitter, a receiver, and relay nodes. For such a network, assuming that the second-order statistics of the channel coefficients are available, we study two different beamforming design approaches. As the first approach, we design the beamformer through minimization of the total transmit power subject to the receiver quality of service constraint. We show that this approach yields a closed-form solution. In the second approach, the beamforming weights are obtained through maximizing the receiver signal-to-noise ratio (SNR) subject to two different types of power constraints, namely the total transmit power constraint and individual relay power constraints. We show that the total power constraint leads to a closed-form solution while the individual relay power constraints result in a quadratic programming optimization problem. The latter optimization problem does not have a closed-form solution. However, it is shown that using semidefinite relaxation, this problem can be turned into a convex feasibility semidefinite programming (SDP) problem, and therefore, can be efficiently solved using interior point methods. Furthermore, we develop a simplified, thus suboptimal, technique which is computationally more efficient than the SDP approach. In fact, the simplified algorithm provides the beamforming weight vector in closed form. Our numerical examples show that as the uncertainty in the channel state information is increased, satisfying the quality of service constraint becomes harder, i.e., it takes more power to satisfy these constraints. Also our simulation results show that when compared to the SDP-based method, our simplified technique suffers a 2-dB loss in SNR for low to moderate values of transmit power.
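
The semidefinite relaxation step can be sketched in cvxpy: optimize W = w w^H with the rank-one constraint dropped, under per-relay power limits. The signal covariance Rs below is a synthetic stand-in for the second-order channel statistics assumed in the paper, and the closing eigenvector extraction is the usual rank-one heuristic.

```python
# Hedged cvxpy sketch of the semidefinite-relaxation step; Rs and Pmax are
# synthetic stand-ins, not the paper's channel statistics.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
R = 6                                         # number of relays
a = rng.standard_normal(R) + 1j * rng.standard_normal(R)
Rs = np.outer(a, a.conj())                    # desired-signal covariance (rank one)
Pmax = np.full(R, 1.0)                        # individual relay power budgets

W = cp.Variable((R, R), hermitian=True)
cons = [W >> 0] + [cp.real(W[i, i]) <= Pmax[i] for i in range(R)]
cp.Problem(cp.Maximize(cp.real(cp.trace(Rs @ W))), cons).solve()

vals, vecs = np.linalg.eigh(W.value)          # rank-one extraction heuristic
w = vecs[:, -1] * np.sqrt(vals[-1])           # beamforming weights (exact if rank 1)
```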

Journal ArticleDOI
TL;DR: This paper provides a complete study on the training based channel estimation issues for relay networks that employ the amplify-and-forward (AF) transmission scheme and provides a new estimation scheme that directly estimates the overall channels from the source to the destination.
Abstract: In this paper, we provide a complete study of the training based channel estimation issues for relay networks that employ the amplify-and-forward (AF) transmission scheme. We first point out that separately estimating the channel from source to relay and from relay to destination suffers from many drawbacks. Then we provide a new estimation scheme that directly estimates the overall channels from the source to the destination. The proposed channel estimation is well suited to the recently developed AF-based space time coding (STC). There are many differences between the proposed channel estimation and that in traditional single input single output (SISO) and multiple input single output (MISO) systems. For example, a relay must linearly precode its received training sequence by a carefully designed matrix in order to minimize the channel estimation error. Besides, each relay node is individually constrained by a different power requirement because of the non-cooperation among relay nodes. We study both the linear least-square (LS) estimator and the minimum mean-square-error (MMSE) estimator. The corresponding optimal training sequences, as well as the optimal precoding matrices, are derived from an efficient convex optimization process.

Proceedings Article
08 Dec 2008
TL;DR: In this article, the authors assume that tasks are clustered into groups, which are unknown beforehand, and that tasks within a group have similar weight vectors, resulting in a new convex optimization formulation for multi-task learning.
Abstract: In multi-task learning several related tasks are considered simultaneously, with the hope that by an appropriate sharing of information across tasks, each task may benefit from the others. In the context of learning linear functions for supervised classification or regression, this can be achieved by including a priori information about the weight vectors associated with the tasks, and how they are expected to be related to each other. In this paper, we assume that tasks are clustered into groups, which are unknown beforehand, and that tasks within a group have similar weight vectors. We design a new spectral norm that encodes this a priori assumption, without the prior knowledge of the partition of tasks into groups, resulting in a new convex optimization formulation for multi-task learning. We show in simulations, on synthetic examples and on the IEDB MHC-I binding dataset, that our approach outperforms well-known convex methods for multi-task learning, as well as related non-convex methods dedicated to the same problem.


Proceedings ArticleDOI
01 Dec 2008
TL;DR: This paper proposes a subgradient method for solving coupled optimization problems in a distributed way given restrictions on the communication topology and studies convergence properties of the proposed scheme using results from consensus theory and approximate subgradient methods.
Abstract: In this paper we propose a subgradient method for solving coupled optimization problems in a distributed way given restrictions on the communication topology. The iterative procedure maintains local variables at each node and relies on local subgradient updates in combination with a consensus process. The local subgradient steps are applied simultaneously as opposed to the standard sequential or cyclic procedure. We study convergence properties of the proposed scheme using results from consensus theory and approximate subgradient methods. The framework is illustrated on an optimal distributed finite-time rendezvous problem.
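
A toy instance of the scheme, for N agents on a ring minimizing sum_i |x - a_i|: each iteration interleaves a consensus averaging step with simultaneous local subgradient steps. The ring weights and the diminishing step size are illustrative choices, not the paper's.

```python
# Toy consensus + simultaneous-subgradient scheme on a ring; weights and the
# step-size schedule are illustrative assumptions.
import numpy as np

N = 10
a = np.arange(N, dtype=float)                 # agent i privately knows a[i]
W = np.zeros((N, N))                          # doubly stochastic ring weights
for i in range(N):
    W[i, i] = 0.5
    W[i, (i - 1) % N] = W[i, (i + 1) % N] = 0.25

x = np.zeros(N)                               # one local variable per agent
for k in range(1, 2001):
    x = W @ x                                 # consensus step with ring neighbours
    x -= (1.0 / k) * np.sign(x - a)           # simultaneous subgradient steps
print(x)   # entries agree and approach a minimizer (the median interval of a)
```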

Journal ArticleDOI
TL;DR: Sufficient conditions for the existence of a desired filter are established in terms of linear matrix inequalities (LMIs), and the corresponding filter design is cast into a convex optimization problem which can be efficiently solved by using commercially available numerical software.

Journal ArticleDOI
TL;DR: The scenario approach is illustrated at a tutorial level, focusing mainly on algorithmic aspects, and its versatility and virtues are pointed out through a number of examples in model reduction and robust and optimal control.

Journal ArticleDOI
TL;DR: It is shown that when the MAC between sensors and the fusion center is noiseless, the resulting problem has a closed-form solution (which is in sharp contrast to the orthogonal MAC case), while in the noisy MAC case the problem can be efficiently solved by semidefinite programming (SDP).
Abstract: We consider the distributed estimation of an unknown vector signal in a resource constrained sensor network with a fusion center. Due to power and bandwidth limitations, each sensor compresses its data in order to minimize the amount of information that needs to be communicated to the fusion center. In this context, we study the linear decentralized estimation of the source vector, where each sensor linearly encodes its observations and the fusion center also applies a linear mapping to estimate the unknown vector signal based on the received messages. We adopt the mean squared error (MSE) as the performance criterion. When the channels between sensors and the fusion center are orthogonal, it has been shown previously that the complexity of designing the optimal encoding matrices is NP-hard in general. In this paper, we study the optimal linear decentralized estimation when the multiple access channel (MAC) is coherent. For the case when the source and observations are scalars, we derive the optimal power scheduling via convex optimization and show that it admits a simple distributed implementation. Simulations show that the proposed power scheduling improves the MSE performance by a large margin when compared to the uniform power scheduling. We also show that under a finite network power budget, the asymptotic MSE performance (when the total number of sensors is large) critically depends on the multiple access scheme. For the case when the source and observations are vectors, we study the optimal linear decentralized estimation under both bandwidth and power constraints. We show that when the MAC between sensors and the fusion center is noiseless, the resulting problem has a closed-form solution (which is in sharp contrast to the orthogonal MAC case), while in the noisy MAC case, the problem can be efficiently solved by semidefinite programming (SDP).

BookDOI
01 Jan 2008
TL;DR: This volume collects contributions ranging from statistical learning theory (a pack-based strategy for uncertain feasibility and optimization problems) to behaviors described by rational symbols and the parametrization of the stabilizing controllers.
Abstract: Statistical Learning Theory: A Pack-based Strategy for Uncertain Feasibility and Optimization Problems.- UAV Formation Control: Theory and Application.- Electrical and Mechanical Passive Network Synthesis.- Output Synchronization of Nonlinear Systems with Relative Degree One.- On the Computation of Optimal Transport Maps Using Gradient Flows and Multiresolution Analysis.- Realistic Anchor Positioning for Sensor Localization.- Graph Implementations for Nonsmooth Convex Programs.- When Is a Linear Continuous-time System Easy or Hard to Control in Practice?.- Metrics and Morphing of Power Spectra.- A New Type of Neural Computation.- Getting Mobile Autonomous Robots to Form a Prescribed Geometric Arrangement.- Convex Optimization in Infinite Dimensional Spaces.- The Servomechanism Problem for SISO Positive LTI Systems.- Passivity-based Stability of Interconnection Structures.- Identification of Linear Continuous-time Systems Based on Iterative Learning Control.- A Pontryagin Maximum Principle for Systems of Flows.- Safe Operation and Control of Diesel Particulate Filters Using Level Set Methods.- Robust Control of Smart Material-based Actuators.- Behaviors Described by Rational Symbols and the Parametrization of the Stabilizing Controllers.

Journal ArticleDOI
TL;DR: A variety of structure and motion problems, for example, triangulation, camera resectioning, and homography estimation, can be recast as quasi-convex optimization problems within this framework and can be efficiently solved using second-order cone programming (SOCP), which is a standard technique in convex optimization.
Abstract: This paper presents a new framework for solving geometric structure and motion problems based on the L∞-norm. Instead of using the common sum-of-squares cost function, that is, the L2-norm, the model-fitting errors are measured using the L∞-norm. Unlike traditional methods based on L2, our framework allows for the efficient computation of global estimates. We show that a variety of structure and motion problems, for example, triangulation, camera resectioning, and homography estimation, can be recast as quasi-convex optimization problems within this framework. These problems can be efficiently solved using second-order cone programming (SOCP), which is a standard technique in convex optimization. The methods have been implemented in Matlab and the resulting toolbox has been made publicly available. The algorithms have been validated on real data in different settings on problems with small and large dimensions and with excellent performance.
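
The quasi-convex recipe, bisection over SOCP feasibility tests, can be sketched for triangulation. The cameras, noise level, and bisection bounds below are synthetic assumptions, and cvxpy stands in for the publicly released Matlab toolbox.

```python
# Hedged sketch of L-infinity triangulation by bisection over SOCP feasibility;
# cameras, noise, and bounds are synthetic assumptions.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
X_true = np.array([0.0, 0.0, 5.0])
Ps, us = [], []
for t in np.linspace(-1.0, 1.0, 4):                   # four cameras along a line
    P = np.hstack([np.eye(3), [[t], [0.0], [0.0]]])
    proj = P @ np.append(X_true, 1.0)
    Ps.append(P)
    us.append(proj[:2] / proj[2] + 1e-3 * rng.standard_normal(2))

def feasible(gamma):
    """SOCP feasibility test: is there a point with all reprojection errors <= gamma?"""
    X = cp.Variable(3)
    Xh = cp.hstack([X, np.ones(1)])
    cons = []
    for P, u in zip(Ps, us):
        A = P[:2] - np.outer(u, P[2])                 # residual numerator rows
        cons.append(cp.norm(A @ Xh) <= gamma * (P[2] @ Xh))   # SOC, positive depth
    prob = cp.Problem(cp.Minimize(0), cons)
    prob.solve()
    return prob.status == cp.OPTIMAL, X.value

lo, hi, X_best = 0.0, 1.0, None
for _ in range(30):                                   # bisection on the error level
    mid = 0.5 * (lo + hi)
    ok, X = feasible(mid)
    if ok:
        hi, X_best = mid, X
    else:
        lo = mid
print(np.round(X_best, 3))                            # close to X_true
```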

Journal ArticleDOI
TL;DR: This work gives the first polynomial time algorithm for exactly computing an equilibrium for the linear utilities case of the market model defined by Fisher using the primal--dual paradigm in the enhanced setting of KKT conditions and convex programs.
Abstract: We give the first polynomial time algorithm for exactly computing an equilibrium for the linear utilities case of the market model defined by Fisher. Our algorithm uses the primal-dual paradigm in the enhanced setting of KKT conditions and convex programs. We pinpoint the added difficulty raised by this setting and the manner in which our algorithm circumvents it.

Journal ArticleDOI
TL;DR: This work advocates a cross-layer approach to joint multiuser transmit beamforming and admission control, aiming to maximize the number of users that can be served at their desired QoS.
Abstract: Multiuser downlink beamforming under quality of service (QoS) constraints has attracted considerable interest in recent years, because it is particularly appealing from a network operator's perspective (e.g., UMTS, 802.16e). When there are many co-channel users and/or the service constraints are stringent, the problem becomes infeasible and some form of admission control is necessary. We advocate a cross-layer approach to joint multiuser transmit beamforming and admission control, aiming to maximize the number of users that can be served at their desired QoS. It is shown that the core problem is NP-hard, yet amenable to convex approximation tools. Two computationally efficient convex approximation algorithms are proposed: one is based on semidefinite relaxation of an equivalent problem reformulation; the other takes a penalized second-order cone approach. Their performance is assessed in a range of experiments, using both simulated and measured channel data. In all experiments considered, the proposed algorithms work remarkably well in terms of the attained performance-complexity trade-off, consistently exhibiting close to optimal performance at an affordable computational complexity.

Proceedings ArticleDOI
19 Mar 2008
TL;DR: This paper proposes sparse channel estimation methods based on convex/linear programming, with quantitative error bounds derived by adapting recent advances from the theory of compressed sensing, revealing significant advantages of the proposed methods over conventional channel estimation schemes.
Abstract: Reliable wireless communications often requires accurate knowledge of the underlying multipath channel. This typically involves probing of the channel with a known training waveform and linear processing of the input probe and channel output to estimate the impulse response. Many real-world channels of practical interest tend to exhibit impulse responses characterized by a relatively small number of nonzero channel coefficients. Conventional linear channel estimation strategies, such as the least squares, are ill-suited to fully exploiting the inherent low-dimensionality of these sparse channels. In contrast, this paper proposes sparse channel estimation methods based on convex/linear programming. Quantitative error bounds for the proposed schemes are derived by adapting recent advances from the theory of compressed sensing. The bounds come within a logarithmic factor of the performance of an ideal channel estimator and reveal significant advantages of the proposed methods over the conventional channel estimation schemes.
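
A minimal sketch of the convex-programming route: estimate a sparse channel by l1 minimization under a residual bound. The Gaussian training matrix below is a generic stand-in (a real probe induces a structured convolution matrix), and the sparsity level and eps are illustrative.

```python
# Minimal cvxpy sketch of sparse channel estimation via l1 minimization; the
# training matrix, sparsity level, and eps are illustrative assumptions.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
n, L, S = 100, 64, 4                          # measurements, taps, nonzero taps
X = rng.standard_normal((n, L))
h_true = np.zeros(L)
h_true[rng.choice(L, S, replace=False)] = rng.standard_normal(S)
y = X @ h_true + 0.01 * rng.standard_normal(n)

h = cp.Variable(L)
eps = 0.02 * np.sqrt(n)                       # residual bound matched to the noise
cp.Problem(cp.Minimize(cp.norm1(h)), [cp.norm(X @ h - y) <= eps]).solve()
print(np.round(h.value, 3))                   # mass concentrates on the true taps
```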

Posted Content
TL;DR: In this article, the effect of stochastic errors on two constrained incremental subgradient algorithms was studied and the convergence results and error bounds for the Markov randomized incremental subgradient method were established.
Abstract: In this paper we study the effect of stochastic errors on two constrained incremental subgradient algorithms. We view the incremental subgradient algorithms as decentralized network optimization algorithms as applied to minimize a sum of functions, when each component function is known only to a particular agent of a distributed network. We first study the standard cyclic incremental subgradient algorithm in which the agents form a ring structure and pass the iterate in a cycle. We consider the method with stochastic errors in the subgradient evaluations and provide sufficient conditions on the moments of the stochastic errors that guarantee almost sure convergence when a diminishing step-size is used. We also obtain almost sure bounds on the algorithm's performance when a constant step-size is used. We then consider the Markov randomized incremental subgradient method, which is a non-cyclic version of the incremental algorithm where the sequence of computing agents is modeled as a time non-homogeneous Markov chain. Such a model is appropriate for mobile networks, as the network topology changes across time in these networks. We establish the convergence results and error bounds for the Markov randomized method in the presence of stochastic errors for diminishing and constant step-sizes, respectively.

01 Jan 2008
TL;DR: In this paper, Hadamard-type inequalities are given for s-convex functions in both senses and for s-convex functions on the co-ordinates.
Abstract: In this paper, Hadamard-type inequalities are given for s-convex functions in both senses and for s-convex functions on the co-ordinates.

Journal ArticleDOI
TL;DR: Based on a new model and an improved separation lemma, an observer-based controller is developed for the asymptotic stabilization of the NCSs; the design conditions are given in terms of nonlinear matrix inequalities.

Journal ArticleDOI
TL;DR: In this paper, the global asymptotic stability of the equilibrium is considered for continuous bidirectional associative memory (BAM) neural networks of neutral type by using the Lyapunov method, with stability conditions expressed in terms of linear matrix inequalities (LMIs).