
Bo Liu

Researcher at Auburn University

Publications: 85
Citations: 1807

Bo Liu is an academic researcher at Auburn University. His research focuses on reinforcement learning and temporal difference learning. He has an h-index of 21 and has co-authored 70 publications that have received 1462 citations. His previous affiliations include Stevens Institute of Technology and Philips.

Papers
Journal Article

Accelerating a Recurrent Neural Network to Finite-Time Convergence for Solving Time-Varying Sylvester Equation by Using a Sign-Bi-power Activation Function

TL;DR: A sign-bi-power activation function is proposed to accelerate the Zhang neural network to finite-time convergence; the proposed strategy is applied to online computation of the matrix pseudoinverse and to nonlinear control of an inverted-pendulum system.
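For context, the core design can be sketched as follows. This is a minimal sketch using the standard Zhang neural network (ZNN) error dynamics and one common form of the sign-bi-power function; the paper's exact coefficients and sign conventions may differ.

```latex
% Time-varying Sylvester equation: find X(t) such that
%   A(t) X(t) - X(t) B(t) = C(t).
% ZNN design: define the error E(t) = A(t)X(t) - X(t)B(t) - C(t)
% and drive it to zero via  dE(t)/dt = -\gamma \Phi(E(t)),  \gamma > 0,
% where \Phi applies an activation function elementwise.
% The sign-bi-power activation (0 < \rho < 1), which yields finite-time
% rather than merely exponential convergence of E(t) to zero:
\[
  \phi(e) = \tfrac{1}{2}\,|e|^{\rho}\,\operatorname{sign}(e)
          + \tfrac{1}{2}\,|e|^{1/\rho}\,\operatorname{sign}(e),
  \qquad 0 < \rho < 1 .
\]
```

Intuitively, the $|e|^{\rho}$ term dominates near zero and speeds up the final approach, while the $|e|^{1/\rho}$ term dominates for large errors; together they turn exponential convergence into convergence in finite time.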
Journal Article

Decentralized kinematic control of a class of collaborative redundant manipulators via recurrent neural networks

TL;DR: The global stability of the proposed neural network and the optimality of the neural solution are proven in theory, and application-oriented simulations demonstrate the effectiveness of the proposed method.
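As background, kinematic control of a redundant manipulator is commonly posed as a quadratic program that a recurrent network solves online. Below is a minimal single-manipulator sketch of that formulation; the paper's decentralized version distributes such a problem across collaborating manipulators, which is not shown here.

```latex
% Resolve redundancy by minimizing joint-velocity effort subject to
% tracking the desired end-effector velocity \dot{r}_d through the
% manipulator Jacobian J(\theta):
\[
  \min_{\dot{\theta}} \ \tfrac{1}{2}\,\dot{\theta}^{\top}\dot{\theta}
  \qquad \text{s.t.} \qquad J(\theta)\,\dot{\theta} = \dot{r}_d .
\]
% A recurrent network evolves the primal-dual pair (\dot{\theta}, \lambda)
% toward the saddle point of the Lagrangian
%   L = \tfrac{1}{2}\|\dot{\theta}\|^2
%     + \lambda^{\top}(\dot{r}_d - J(\theta)\,\dot{\theta}),
% so its equilibrium satisfies the KKT optimality conditions.
```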
Posted Content

Finite-Sample Analysis of Proximal Gradient TD Algorithms

TL;DR: A theoretical analysis of gradient TD (GTD) reinforcement learning methods shows that the GTD family of algorithms is comparable to existing least-squares TD methods for off-policy learning and may indeed be preferred, due to their linear per-step complexity.
Proceedings Article

Finite-sample analysis of proximal gradient TD algorithms

TL;DR: The authors derive primal-dual saddle-point objective functions to obtain finite-sample bounds on the performance of gradient TD (GTD) reinforcement learning methods, and show that the results imply the GTD family of algorithms is comparable to existing least-squares TD methods for off-policy learning and may indeed be preferred, due to their linear complexity.
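To make the linear-complexity claim concrete, here is a hedged Python sketch of one member of the GTD family (the GTD2 update of Sutton et al., one of the algorithms this analysis covers). Each step uses only vector operations, so its cost is O(d) in the feature dimension d, versus O(d^2) per step for least-squares TD.

```python
import numpy as np

def gtd2_step(theta, w, phi, phi_next, reward, rho, gamma, alpha, beta):
    """One GTD2 update with linear function approximation.

    theta    : main weight vector (value estimate is theta @ phi)
    w        : auxiliary weights estimating the expected TD error
    phi      : feature vector of the current state
    phi_next : feature vector of the next state
    rho      : importance-sampling ratio (1.0 if on-policy)
    """
    delta = reward + gamma * theta @ phi_next - theta @ phi  # TD error
    # Auxiliary update: track the expected TD error given the features.
    w = w + beta * rho * (delta - phi @ w) * phi
    # Main update: gradient-correction step, O(d) vector operations only.
    theta = theta + alpha * rho * (phi - gamma * phi_next) * (phi @ w)
    return theta, w
```

The saddle-point view treats theta and w as the primal and dual variables of a convex-concave objective, which is what makes finite-sample analysis of these stochastic updates tractable.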
Journal Article

Selective Positive–Negative Feedback Produces the Winner-Take-All Competition in Recurrent Neural Networks

TL;DR: This paper presents a simple model that produces the WTA competition by exploiting selective positive-negative feedback, with neurons interacting through a p-norm term, and gives an explicit explanation of the competition mechanism.
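To illustrate the mechanism, here is a small Python simulation of a toy model in this spirit (an illustrative sketch, not necessarily the paper's exact dynamics): each neuron receives positive self-feedback proportional to its own input and shared negative feedback through the p-norm of the whole state, so only the most strongly driven neuron remains active.

```python
import numpy as np

# Toy winner-take-all dynamics with selective positive-negative feedback:
# each neuron gets positive self-feedback weighted by its input u[i] and
# shared negative feedback through the p-norm of the state vector.
# (Illustrative model; the paper's exact equations may differ.)

def simulate_wta(u, p=2.0, dt=1e-3, steps=50000):
    x = np.full_like(u, 0.1)           # small positive initial states
    for _ in range(steps):
        norm_p = np.sum(np.abs(x) ** p) ** (1.0 / p)
        x = x + dt * x * (u - norm_p)  # dx_i/dt = x_i * (u_i - ||x||_p)
        x = np.maximum(x, 0.0)         # keep states nonnegative
    return x

u = np.array([0.5, 0.9, 0.7, 0.3])     # neuron 1 has the largest input
print(simulate_wta(u).round(3))        # only the winner stays active
```

In this toy model the ratio between any two states grows exponentially in favor of the neuron with the larger input, so the winner's state converges to its input value while all other states decay to zero.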