
Showing papers by "Choy Heng Lai published in 2019"


Journal ArticleDOI
22 Nov 2019
TL;DR: In this article, the authors extend the models of opinion dynamics to show that, under passive influence, there is little distinction between the effectiveness of influential and susceptible individuals, and develop an excitation model for the mechanism of active influence.
Abstract: The authors extend the models of opinion dynamics to show that, under passive influence, there is little distinction between the effectiveness of influential and susceptible individuals. The paper develops an excitation model for the mechanism of active influence and shows that influential and susceptible individuals play substantially different roles in driving contagion.

4 citations
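The role of seed-node choice in driving contagion can be made concrete with a toy simulation. The sketch below is not the authors' excitation model; it is a minimal independent-cascade process on a directed graph (function and variable names are hypothetical) that merely illustrates how seeding an influential hub versus an ordinary node changes the size of a contagion.

```python
import random

def cascade_size(adj, seed_node, p_active=0.3, rng=None):
    """Independent-cascade contagion: each newly active node gets one
    chance to activate each out-neighbour with probability p_active.
    Returns the total number of nodes eventually activated."""
    rng = rng or random.Random(0)
    active = {seed_node}
    frontier = [seed_node]
    while frontier:
        nxt = []
        for u in frontier:
            for v in adj[u]:
                if v not in active and rng.random() < p_active:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return len(active)

# A star graph: node 0 (an "influential" hub) points at 50 followers,
# who have no outgoing edges of their own.
star = {0: list(range(1, 51)), **{i: [] for i in range(1, 51)}}
hub_reach = cascade_size(star, 0, p_active=0.5, rng=random.Random(42))
leaf_reach = cascade_size(star, 1, p_active=0.5, rng=random.Random(42))
# Seeding the hub activates many followers; seeding a leaf goes nowhere.
```

Under this toy dynamic the distinction is purely structural; the paper's point is that under an active (excitation) mechanism the roles of influential and susceptible individuals diverge further.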


Posted Content
TL;DR: In this article, the authors developed a general theory revealing that the exact edge of chaos is the boundary between the chaotic phase and the (pseudo)periodic phase arising from a Neimark-Sacker bifurcation.
Abstract: It has long been suggested that the biological brain operates at some critical point between two different phases, possibly order and chaos. Despite much indirect empirical evidence from the brain and analytical indications from simple neural networks, the foundation of this hypothesis for generic non-linear systems remains unclear. Here we develop a general theory revealing that the exact edge of chaos is the boundary between the chaotic phase and the (pseudo)periodic phase arising from a Neimark-Sacker bifurcation. This edge is analytically determined by the asymptotic Jacobian norm of the non-linear operator and is influenced by the dimensionality of the system. The optimality at the edge of chaos is associated with the highest information transfer between input and output at this point, similar to that of the logistic map. As empirical validation, our experiments on various deep learning models in computer vision demonstrate the optimality of the models near the edge of chaos, and we observe that state-of-the-art training algorithms push the models towards this edge as they become more accurate. We further establish a theoretical understanding of deep learning model generalization through asymptotic stability.

1 citation
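The logistic-map analogy invoked in the abstract can be reproduced numerically: the edge of chaos there is where the map's Lyapunov exponent crosses zero. The sketch below is a standard textbook computation, not code from the paper, estimating the exponent of the logistic map x → r·x·(1−x) by averaging log|f′(x)| along the orbit.

```python
import math

def lyapunov_logistic(r, n_transient=1_000, n_iter=20_000, x0=0.5):
    """Estimate the Lyapunov exponent of the logistic map x -> r*x*(1-x)
    by averaging log|f'(x)| = log|r*(1 - 2x)| along the attractor."""
    x = x0
    for _ in range(n_transient):          # discard the transient
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n_iter):
        x = r * x * (1 - x)
        total += math.log(abs(r * (1 - 2 * x)) + 1e-15)
    return total / n_iter

# Below the period-doubling accumulation point r ~ 3.5699 the exponent is
# negative (periodic phase); beyond it the exponent turns positive (chaos).
```

Sweeping r across the accumulation point flips the sign of the exponent, a one-dimensional analogue of the boundary between the (pseudo)periodic and chaotic phases discussed above.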


Posted Content
TL;DR: In this paper, a dynamical stability analysis on various computer vision models was conducted, and it was shown that the optimal deep neural network performance occurs near the transition point separating stable and chaotic attractors.
Abstract: It has long been suggested that living systems, in particular the brain, may operate near some critical point. What about machines? Through dynamical stability analysis on various computer vision models, we find direct evidence that optimal deep neural network performance occurs near the transition point separating stable and chaotic attractors. In fact, modern neural network architectures push the model closer to this edge of chaos during the training process. Our dissection of their fully connected layers reveals that they achieve the stability transition by self-adjusting an oscillation-diffusion process embedded in the weights. A further analogy to the logistic map leads us to believe that the optimality near the edge of chaos is a consequence of the maximal diversity of stable states, which maximizes the effective expressivity.
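The stable/chaotic transition described here can be illustrated with a standard mean-field experiment (not the authors' code): track how a tiny input perturbation grows or shrinks as it passes through a deep random tanh network. With i.i.d. weights of variance σ_w²/width and no biases, the known ordered-to-chaotic transition sits at σ_w = 1, so the measured exponent is negative below it and positive above it.

```python
import numpy as np

def perturbation_exponent(sigma_w, depth=200, width=400, eps=1e-6, seed=0):
    """Average per-layer log growth rate of a small perturbation pushed
    through a deep random tanh network (a finite-size Lyapunov estimate).
    Negative => ordered/stable phase; positive => chaotic phase."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(width)
    delta = rng.standard_normal(width)
    delta *= eps / np.linalg.norm(delta)      # perturbation of norm eps
    log_growth = 0.0
    for _ in range(depth):
        W = rng.standard_normal((width, width)) * sigma_w / np.sqrt(width)
        x_next = np.tanh(W @ x)
        y_next = np.tanh(W @ (x + delta))
        d = np.linalg.norm(y_next - x_next)
        log_growth += np.log(d / eps)
        # renormalise so the perturbation stays effectively infinitesimal
        delta = (y_next - x_next) * (eps / d)
        x = x_next
    return log_growth / depth

# sigma_w = 0.5 sits in the ordered phase, sigma_w = 2.0 in the chaotic one.
```

Trained networks are far from this i.i.d. ensemble, but the same perturbation-tracking probe is the kind of dynamical stability analysis the abstract refers to.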