Author

Qingshan Liu

Bio: Qingshan Liu is an academic researcher from Southeast University. The author has contributed to research in the topics of artificial neural networks and recurrent neural networks. The author has an h-index of 25 and has co-authored 66 publications receiving 2465 citations. Previous affiliations of Qingshan Liu include The Chinese University of Hong Kong and Huazhong University of Science and Technology.


Papers
Journal ArticleDOI
TL;DR: This technical note presents a second-order multi-agent network for distributed optimization of a sum of convex objective functions subject to bound constraints; the network is capable of solving more general constrained distributed optimization problems than existing multi-agent approaches.
Abstract: This technical note presents a second-order multi-agent network for distributed optimization with a sum of convex objective functions subject to bound constraints. In the multi-agent network, the agents are connected locally as an undirected graph and each agent knows only its own objective and constraints. The multi-agent network is proved to reach consensus at the optimal solution under mild assumptions. Moreover, the consensus of the multi-agent network is converted to the convergence of a dynamical system, which is proved using the Lyapunov method. Compared with existing multi-agent networks for optimization, the second-order multi-agent network herein is capable of solving more general constrained distributed optimization problems. Simulation results on two numerical examples are presented to substantiate the performance and characteristics of the multi-agent network.

292 citations
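
The paper's own method is a second-order continuous-time network, which is not reproduced here. As a rough illustration of the problem setting only, the following is a minimal sketch of a generic discrete-time consensus-plus-projected-gradient iteration over an undirected ring graph; the local costs f_i(x) = 0.5(x − a_i)², the bound constraints, the mixing weights, and the step size are all made up for the example.

```python
import numpy as np

# Minimal sketch of a generic distributed projected-gradient iteration
# (NOT the paper's second-order continuous-time dynamics):
#   minimize  sum_i f_i(x)   subject to  l <= x <= u,
# where f_i(x) = 0.5 * (x - a_i)^2 is known only to agent i.
a = np.array([1.0, 3.0, -2.0, 4.0])   # local data of 4 agents (illustrative)
l, u = -1.0, 2.5                      # shared bound constraints

# Doubly stochastic mixing matrix for an undirected 4-cycle.
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])

x = np.zeros(4)                       # each agent's local estimate
alpha = 0.01                          # small constant step size
for _ in range(5000):
    grad = x - a                      # local gradients f_i'(x_i)
    x = W @ x - alpha * grad          # mix with neighbors, then step
    x = np.clip(x, l, u)              # project onto the box [l, u]

print(x)  # all entries end up close to the constrained optimum, mean(a) = 1.5
```

Each agent mixes only with its graph neighbors and uses only its own gradient, mirroring the locality assumption in the abstract; with a small constant step the agents reach approximate, not exact, consensus at the optimum.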

Journal ArticleDOI
TL;DR: It is proved that all agents with any initial state can reach output consensus at an optimal solution to the given constrained optimization problem, provided that the graph describing the communication links among agents is undirected and connected.
Abstract: This technical note presents a continuous-time multi-agent system for distributed optimization with an additive objective function composed of individual objective functions, subject to bound, equality, and inequality constraints. Each individual objective function is assumed to be convex only on the region defined by its local bound constraints, without needing to be globally convex. All agents in the system communicate their output information, rather than their state information, using a proportional-integral protocol in order to reduce the communication bandwidth. It is proved that all agents with any initial state can reach output consensus at an optimal solution to the given constrained optimization problem, provided that the graph describing the communication links among agents is undirected and connected. It is further proved that the system with only the integral protocol also converges to the unique optimal solution if each individual objective function is strictly convex. Simulation results are presented to substantiate the theoretical results.

236 citations
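
The protocol in the paper exchanges output information and handles bound, equality, and inequality constraints; none of that is reproduced here. The sketch below only illustrates the proportional-integral idea in its simplest unconstrained, state-feedback form (the classical PI consensus-optimization dynamics, not the paper's model), integrated with a forward-Euler step and made-up quadratic local costs.

```python
import numpy as np

# Sketch of a classical proportional-integral distributed optimization dynamics
# (unconstrained, state-feedback; NOT the paper's output-feedback model):
#   x_dot = -grad_f(x) - L x - L z,     z_dot = L x,
# with graph Laplacian L and local costs f_i(x) = 0.5 * (x - a_i)^2.
a = np.array([1.0, 3.0, -2.0, 4.0])            # illustrative local data

A = np.array([[0, 1, 0, 1],                    # undirected 4-cycle
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A                 # graph Laplacian

x = np.zeros(4)                                # proportional (primal) states
z = np.zeros(4)                                # integral states
dt = 0.01                                      # forward-Euler step
for _ in range(20000):
    grad = x - a                               # local gradients
    x_dot = -grad - L @ x - L @ z
    z_dot = L @ x
    x, z = x + dt * x_dot, z + dt * z_dot

print(x)  # all agents converge to the minimizer of sum_i f_i, i.e. mean(a) = 1.5
```

At an equilibrium, L x = 0 forces consensus, and left-multiplying the x-equation by the all-ones vector shows the consensus value must zero the sum of the local gradients, which is exactly the optimality condition for the aggregate cost.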

Journal ArticleDOI
TL;DR: First, the relationship between optimal solutions and the equilibrium points of the multiagent system with time delay is revealed, and delay-dependent and delay-independent sufficient conditions in the form of linear matrix inequalities are derived for ascertaining convergence to optimal solutions in the cases of slow-varying delay and fast-varying delay.
Abstract: In this paper, distributed optimization is addressed based on a continuous-time multiagent system in the presence of time-varying communication delays. First, the relationship between optimal solutions and the equilibrium points of the multiagent system with time delay is revealed. Next, delay-dependent and delay-independent sufficient conditions in the form of linear matrix inequalities are derived for ascertaining convergence to optimal solutions, in the cases of slow-varying delay and fast-varying delay. Furthermore, a set of conditions is also obtained for the delay-free case. In addition, a sampled-data communication scheme is presented based on the conditions for the fast-varying delay case. Simulation results are presented to substantiate the theoretical results. An application to distributed parameter estimation is also given.

227 citations
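
The LMIs derived in the paper are tailored to its multiagent model and delay assumptions and are not reproduced here. As a generic illustration of how such conditions are checked numerically, the sketch below tests a standard delay-independent Lyapunov–Krasovskii LMI for a small linear delay system ẋ(t) = A x(t) + A_d x(t − τ) with CVXPY; the matrices A and A_d are made up for the example.

```python
import numpy as np
import cvxpy as cp

# Generic delay-independent stability test for  x_dot(t) = A x(t) + Ad x(t - tau):
# find P > 0 and Q > 0 such that  [[A'P + PA + Q, P Ad], [Ad'P, -Q]] < 0.
# (Illustrative Lyapunov-Krasovskii condition, not the paper's LMIs.)
n = 2
A  = np.array([[-3.0,  0.0],
               [ 0.0, -2.0]])
Ad = np.array([[ 0.5,  0.2],
               [ 0.1,  0.4]])

P = cp.Variable((n, n), symmetric=True)
Q = cp.Variable((n, n), symmetric=True)
eps = 1e-6
lmi = cp.bmat([[A.T @ P + P @ A + Q, P @ Ad],
               [Ad.T @ P,            -Q    ]])
constraints = [P >> eps * np.eye(n),
               Q >> eps * np.eye(n),
               lmi << -eps * np.eye(2 * n)]
problem = cp.Problem(cp.Minimize(0), constraints)
problem.solve(solver=cp.SCS)
print(problem.status)  # "optimal" (i.e. feasible) certifies stability for any constant delay
```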

Journal ArticleDOI
TL;DR: This neural network is capable of solving a large class of quadratic programming problems and is proven to be globally stable and to be convergent to optimal solutions as long as the objective function is strictly convex on a set defined by the equality constraints.
Abstract: In this paper, a one-layer recurrent neural network with a discontinuous hard-limiting activation function is proposed for quadratic programming. This neural network is capable of solving a large class of quadratic programming problems. The state variables of the neural network are proven to be globally stable and the output variables are proven to be convergent to optimal solutions as long as the objective function is strictly convex on a set defined by the equality constraints. In addition, a sequential quadratic programming approach based on the proposed recurrent neural network is developed for general nonlinear programming. Simulation results on numerical examples and support vector machine (SVM) learning show the effectiveness and performance of the neural network.

219 citations
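
The paper's network uses a discontinuous hard-limiting activation whose exact dynamics are not reproduced here. The sketch below instead Euler-integrates the closely related, well-known projection neural network ẋ = −x + P_Ω(x − α(Qx + c)) on a tiny box-constrained quadratic program, just to show how such a recurrent dynamics "computes" the solution; Q, c, the box, and the gains are illustrative.

```python
import numpy as np

# Sketch: forward-Euler simulation of a projection neural network for the
# box-constrained QP   minimize 0.5 x'Qx + c'x   s.t.  lo <= x <= hi,
# using  x_dot = -x + P_box(x - alpha * (Qx + c)).
# Related to, but NOT identical to, the paper's discontinuous one-layer network.
Q = np.array([[4.0, 1.0],
              [1.0, 3.0]])           # symmetric positive definite (illustrative)
c = np.array([-1.0, -2.0])
lo, hi = 0.2, 1.0                    # box constraints

alpha = 0.2                          # gain inside the projection
dt = 0.05                            # Euler step for the ODE
x = np.zeros(2)                      # network state
for _ in range(2000):
    x_dot = -x + np.clip(x - alpha * (Q @ x + c), lo, hi)
    x = x + dt * x_dot

print(x)  # settles at the KKT point of the QP, approximately [0.2, 0.6]
```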

Journal ArticleDOI
TL;DR: It is proved that the output variables of the proposed neural network are globally convergent to the optimal solutions provided that the objective function is at least pseudoconvex.
Abstract: This paper presents a one-layer projection neural network for solving nonsmooth optimization problems with generalized convex objective functions subject to linear equality and bound constraints. The proposed neural network is designed based on two projection operators: one for the linear equality constraints and one for the bound constraints. The objective function in the optimization problem can be any nonsmooth function that need not be globally convex, but it is required to be convex (or pseudoconvex) on a set defined by the constraints. Compared with existing recurrent neural networks for nonsmooth optimization, the proposed model does not have any design parameter, which makes it more convenient to design and implement. It is proved that the output variables of the proposed neural network are globally convergent to the optimal solutions provided that the objective function is at least pseudoconvex. Simulation results of numerical examples are discussed to demonstrate the effectiveness and characteristics of the proposed neural network.

206 citations
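
Below is a small sketch of the two projection operators the abstract refers to, one onto an affine equality set {x : Ax = b} and one onto box constraints, which are the building blocks of such projection networks; the concrete A, b, and bounds are made up, and the paper's full dynamics are not reproduced.

```python
import numpy as np

# The two projection operators used by one-layer projection networks (illustrative):
#   (1) projection onto the affine set {x : Ax = b}
#   (2) projection onto the box lo <= x <= hi
def project_affine(x, A, b):
    # P(x) = x - A^T (A A^T)^{-1} (A x - b), assuming A has full row rank
    return x - A.T @ np.linalg.solve(A @ A.T, A @ x - b)

def project_box(x, lo, hi):
    return np.clip(x, lo, hi)

# Tiny check: project onto {x1 + x2 = 1}, then onto the box [0, 1]^2.
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
x = np.array([0.8, 0.6])

y = project_affine(x, A, b)
print(y)                         # [0.6, 0.4] -> satisfies x1 + x2 = 1
print(project_box(y, 0.0, 1.0))  # unchanged here, already inside the box
```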


Cited by
Book ChapterDOI
01 Jan 1994
TL;DR: In this chapter, a decision maker (or a group of experts) is imagined who tries to establish or examine fair procedures for combining opinions about alternatives related to different points of view.
Abstract: In this chapter, we imagine a decision maker (or a group of experts) trying to establish or examine fair procedures for combining opinions about alternatives related to different points of view.

1,329 citations

Book ChapterDOI
01 Jan 2003
TL;DR: “Multivalued Analysis” is the theory of set-valued maps (called multifunctions) and has important applications in many different areas; there is no doubt that a modern treatise on “Nonlinear Functional Analysis” cannot afford the luxury of ignoring multivalued analysis.
Abstract: “Multivalued Analysis” is the theory of set-valued maps (called multifunctions) and has important applications in many different areas. Multivalued analysis is a remarkable mixture of many different parts of mathematics, such as point-set topology, measure theory, and nonlinear functional analysis. It is also closely related to “Nonsmooth Analysis” (Chapter 5), and in fact one of the main motivations behind the development of the theory was to provide the necessary analytical tools for the study of problems in nonsmooth analysis. It is no coincidence that the developments of the two fields coincide chronologically and follow parallel paths. Today multivalued analysis is a mature mathematical field with its own methods, techniques, and applications that range from the social and economic sciences to the biological sciences and engineering. There is no doubt that a modern treatise on “Nonlinear Functional Analysis” cannot afford the luxury of ignoring multivalued analysis; the omission of the theory of multifunctions would drastically limit the possible applications.

996 citations

Book ChapterDOI
01 Jan 1997
TL;DR: In this chapter, a nonlinear fractional programming problem is considered; it is assumed that the objective function has a finite optimal value, that g(x) + β > 0 for all x ∈ S, and that S is non-empty.
Abstract: In this chapter we deal with the following nonlinear fractional programming problem: $$P:\mathop{\max }\limits_{x \in S} q(x) = (f(x) + \alpha )/(g(x) + \beta )$$ where f, g: R^n → R, α, β ∈ R, and S ⊆ R^n. To simplify things, and without restricting the generality of the problem, it is usually assumed that g(x) + β > 0 for all x ∈ S, that S is non-empty, and that the objective function has a finite optimal value.

797 citations
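
The chapter studies problem P directly; as a side note (not necessarily the chapter's approach), ratio objectives of this form are often handled through Dinkelbach's parametric reformulation, which replaces the ratio by a family of parametric problems:

$$F(\lambda )=\mathop{\max }\limits_{x \in S}\ \big(f(x)+\alpha \big)-\lambda \big(g(x)+\beta \big),\qquad F(\lambda ^{*})=0\ \Longleftrightarrow\ \lambda ^{*}=\mathop{\max }\limits_{x \in S}\frac{f(x)+\alpha }{g(x)+\beta }.$$

This equivalence relies on the same assumptions stated above, namely g(x) + β > 0 on S and a finite (attained) optimal value.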

Journal ArticleDOI
TL;DR: An overview of recent advances in event-triggered consensus of MASs is provided, and some in-depth analysis is made on several event-triggered schemes, including event-based sampling schemes, model-based event-triggered schemes, sampled-data-based event-triggered schemes, and self-triggered sampling schemes.
Abstract: Event-triggered consensus of multiagent systems (MASs) has attracted tremendous attention from both theoretical and practical perspectives due to the fact that it enables all agents eventually to reach an agreement upon a common quantity of interest while significantly alleviating utilization of communication and computation resources. This paper aims to provide an overview of recent advances in event-triggered consensus of MASs. First, a basic framework of multiagent event-triggered operational mechanisms is established. Second, representative results and methodologies reported in the literature are reviewed and some in-depth analysis is made on several event-triggered schemes, including event-based sampling schemes, model-based event-triggered schemes, sampled-data-based event-triggered schemes, and self-triggered sampling schemes. Third, two examples are outlined to show applicability of event-triggered consensus in power sharing of microgrids and formation control of multirobot systems, respectively. Finally, some challenging issues on event-triggered consensus are proposed for future research.

770 citations
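
Among the schemes the survey reviews, the simplest to illustrate is a static-threshold, event-based sampling rule for single-integrator consensus. In the sketch below (graph, initial states, threshold, and step size are all made up), each agent integrates the consensus dynamics using only the last-broadcast values of its neighbors and re-broadcasts its own state only when it has drifted from its last broadcast by more than a fixed threshold.

```python
import numpy as np

# Static-threshold event-triggered consensus for single integrators
# (one simple instance of the event-based sampling schemes the survey covers):
# each agent runs  x_i_dot = -sum_j a_ij (xhat_i - xhat_j)  on broadcast values
# xhat only, and re-broadcasts when |x_i - xhat_i| exceeds sigma.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)   # undirected 4-cycle
L = np.diag(A.sum(axis=1)) - A              # graph Laplacian

x = np.array([3.0, -1.0, 4.0, 0.0])         # initial states (illustrative)
xhat = x.copy()                             # last broadcast values
sigma, dt, events = 0.05, 0.01, 0           # threshold, Euler step, event counter
for _ in range(3000):
    x = x - dt * (L @ xhat)                 # dynamics driven by broadcasts only
    triggered = np.abs(x - xhat) > sigma    # static-threshold event condition
    xhat[triggered] = x[triggered]          # communicate only at event times
    events += int(triggered.sum())

print(x, events)  # states cluster near the preserved average (1.5); broadcasts are far fewer than 4 * 3000
```

The point of the trigger is exactly the resource saving the abstract emphasizes: communication happens only at event times, at the price of reaching a neighborhood of consensus whose size scales with the threshold.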

Journal ArticleDOI
TL;DR: This survey paper aims to offer a detailed overview of existing distributed optimization algorithms and their applications in power systems, and focuses on the application of distributed optimization in the optimal coordination of distributed energy resources.

468 citations