Author

Ivan Tyukin

Bio: Ivan Tyukin is an academic researcher from the University of Leicester. The author has contributed to research in topics: Nonlinear system & Curse of dimensionality. The author has an h-index of 22, has co-authored 129 publications, and has received 1,415 citations. Previous affiliations of Ivan Tyukin include the Norwegian University of Science and Technology & Saint Petersburg State Electrotechnical University.


Papers
Journal ArticleDOI
TL;DR: This work presents sufficient conditions for synchronization in networks of neuronal oscillators interconnected via diffusive coupling, i.e. linearly coupled via gap junctions. Using the theory of semi-passive and passive systems, it demonstrates that when the coupling is strong enough the oscillators become synchronized.

132 citations
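For readers who want to see the effect rather than the proof, here is a minimal numerical sketch (my own illustration, not code from the paper) of two diffusively coupled FitzHugh-Nagumo oscillators; the model constants, coupling gains, and initial conditions are assumptions for the example. With sufficiently strong coupling the mismatch between the two membrane variables decays, as the result above predicts.

```python
import numpy as np

# Illustrative sketch (not the paper's analysis): two FitzHugh-Nagumo
# oscillators with diffusive (gap-junction-like) coupling k*(v_other - v_self).
# Constants a, b, eps, I and the gains k are assumed for illustration.
def simulate(k, T=200.0, dt=0.01, a=0.7, b=0.8, eps=0.08, I=0.5):
    n = int(T / dt)
    v = np.array([1.0, -1.0])              # deliberately different initial states
    w = np.array([0.0, 0.5])
    mismatch = np.empty(n)
    for t in range(n):
        coupling = k * (v[::-1] - v)        # diffusive coupling term
        dv = v - v**3 / 3 - w + I + coupling
        dw = eps * (v + a - b * w)
        v, w = v + dt * dv, w + dt * dw     # explicit Euler step
        mismatch[t] = abs(v[0] - v[1])
    return mismatch

for k in (0.0, 0.05, 0.5):
    tail = simulate(k)[-2000:]              # last 20 time units
    print(f"k = {k:4.2f}: mean |v1 - v2| over the tail = {tail.mean():.4f}")
```

For k = 0 the oscillators keep a persistent phase mismatch, while for the larger gain the tail mismatch is close to zero, which is the synchronization regime the theorem characterizes.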

Journal ArticleDOI
TL;DR: Stochastic separation theorems provide simple and robust classifiers, determine a non-iterative (one-shot) procedure for their construction, and allow legacy artificial intelligence systems to be corrected.
Abstract: The concentration of measure phenomena were discovered as the mathematical background to statistical mechanics at the end of the nineteenth/beginning of the twentieth century and have been explored in mathematics ever since. At the beginning of the twenty-first century, it became clear that the proper utilization of these phenomena in machine learning might transform the curse of dimensionality into the blessing of dimensionality. This paper summarizes recently discovered phenomena of measure concentration which drastically simplify some machine learning problems in high dimension, and allow us to correct legacy artificial intelligence systems. The classical concentration of measure theorems state that i.i.d. random points are concentrated in a thin layer near a surface (a sphere or equators of a sphere, an average or median-level set of energy or another Lipschitz function, etc.). The new stochastic separation theorems describe the fine structure of these thin layers: the random points are not only concentrated in a thin layer but are all linearly separable from the rest of the set, even for exponentially large random sets. The linear functionals for separation of points can be selected in the form of the linear Fisher’s discriminant. All artificial intelligence systems make errors. Non-destructive correction requires separation of the situations (samples) with errors from the samples corresponding to correct behaviour by a simple and robust classifier. The stochastic separation theorems provide us with such classifiers and determine a non-iterative (one-shot) procedure for their construction. This article is part of the theme issue ‘Hilbert’s sixth problem’.

132 citations
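The separability phenomenon described in the abstract can be checked empirically. The sketch below is an illustration under assumed uniformly distributed data (not the paper's experiments): it builds a one-shot, Fisher-type linear functional centred on a single point and measures how often that point is linearly separated from thousands of other random points as the dimension grows.

```python
import numpy as np

rng = np.random.default_rng(0)

def fisher_separability(dim, n_points=10_000, trials=100):
    """Fraction of trials in which one random point is separated from all the
    others by a one-shot, Fisher-type linear functional w = x - mean(rest)."""
    hits = 0
    for _ in range(trials):
        X = rng.uniform(-1.0, 1.0, size=(n_points, dim))
        x, rest = X[0], X[1:]            # x plays the role of a single 'error' sample
        w = x - rest.mean(axis=0)        # Fisher discriminant with identity covariance
        if (rest @ w).max() < x @ w:     # x lies strictly on its own side of a hyperplane
            hits += 1
    return hits / trials

for d in (2, 10, 50, 200):
    print(f"dim = {d:3d}: one-shot separability frequency = {fisher_separability(d):.2f}")
```

In low dimension the single point is almost never separable from such a large set, while in high dimension the frequency approaches one, which is the qualitative content of the stochastic separation theorems.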

Journal ArticleDOI
TL;DR: This work considers and analyzes published procedures, both randomized and deterministic, for selecting elements from families of parameterized elementary functions; such procedures have been shown to ensure a rate of convergence in the L2 norm of order O(1/N), where N is the number of elements.

122 citations
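As a rough illustration of the setting (randomly parameterized elementary functions with linearly fitted coefficients), the sketch below approximates an assumed target function with N random Gaussian bumps and reports how the L2 error behaves as N grows. The target function, basis family, and parameter ranges are illustrative assumptions; this is not a verification of the O(1/N) bound or of the procedures analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 400)
target = np.sin(2 * np.pi * t) * np.exp(-t)   # assumed target function, illustration only

def random_basis_error(N):
    """L2 error of a least-squares fit on N randomly parameterized Gaussian bumps."""
    centers = rng.uniform(0.0, 1.0, N)
    widths = rng.uniform(0.05, 0.3, N)
    Phi = np.exp(-((t[:, None] - centers) / widths) ** 2)  # random elementary functions
    coef, *_ = np.linalg.lstsq(Phi, target, rcond=None)    # fitted linear coefficients
    residual = target - Phi @ coef
    return np.sqrt(np.mean(residual ** 2))

for N in (5, 10, 20, 40, 80):
    errs = [random_basis_error(N) for _ in range(20)]
    print(f"N = {N:3d}: mean L2 error = {np.mean(errs):.4f}")
```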

Journal ArticleDOI
TL;DR: A solution is proposed to the problem of asymptotic reconstruction of the state and parameter values in systems of ordinary differential equations in which the unknowns are allowed to be nonlinearly parameterized functions of state and time.

86 citations
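To make the problem concrete, the sketch below sets up a toy system whose unknown parameter enters the dynamics nonlinearly (through sin(theta*t)) and recovers it by a crude prediction-error search over candidate values. This is only an illustration of the estimation problem; it is not the observer scheme proposed in the paper, and the system, grid, and horizon are assumed for the example.

```python
import numpy as np

# Toy problem: x' = -x + sin(theta * t), with theta unknown and entering
# nonlinearly. We compare measured data against simulated candidate models.
dt, T = 0.001, 20.0
steps = int(T / dt)
time = np.arange(steps) * dt
theta_true = 2.5                       # "unknown" parameter, assumed for the demo

def trajectory(theta, x0=0.0):
    x = np.empty(steps)
    x[0] = x0
    for k in range(steps - 1):         # explicit Euler integration
        x[k + 1] = x[k] + dt * (-x[k] + np.sin(theta * time[k]))
    return x

measured = trajectory(theta_true)

candidates = np.arange(0.5, 5.0, 0.05)
errors = [np.mean((trajectory(th) - measured) ** 2) for th in candidates]
best = candidates[int(np.argmin(errors))]
print(f"best candidate theta = {best:.2f} (true value {theta_true})")
```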

Journal ArticleDOI
TL;DR: In this article, a series of new stochastic separation theorems is proven for fast non-destructive correction of AI systems by means of simple binary classifiers that separate the situations with a high risk of error from the situations where the AI systems work properly.

67 citations


Cited by
Proceedings ArticleDOI
22 Jan 2006
TL;DR: Some of the major results in random graphs and some of the more challenging open problems are reviewed, including those related to the WWW.
Abstract: We will review some of the major results in random graphs and some of the more challenging open problems. We will cover algorithmic and structural questions. We will touch on newer models, including those related to the WWW.

7,116 citations

Journal Article
TL;DR: In this article, the authors explore the effect of dimensionality on the nearest neighbor problem and show that under a broad set of conditions (much broader than independent and identically distributed dimensions), as dimensionality increases, the distance to the nearest data point approaches the distance to the farthest data point.
Abstract: We explore the effect of dimensionality on the nearest neighbor problem. We show that under a broad set of conditions (much broader than independent and identically distributed dimensions), as dimensionality increases, the distance to the nearest data point approaches the distance to the farthest data point. To provide a practical perspective, we present empirical results on both real and synthetic data sets that demonstrate that this effect can occur for as few as 10-15 dimensions. These results should not be interpreted to mean that high-dimensional indexing is never meaningful; we illustrate this point by identifying some high-dimensional workloads for which this effect does not occur. However, our results do emphasize that the methodology used almost universally in the database literature to evaluate high-dimensional indexing techniques is flawed, and should be modified. In particular, most such techniques proposed in the literature are not evaluated versus simple linear scan, and are evaluated over workloads for which nearest neighbor is not meaningful. Often, even the reported experiments, when analyzed carefully, show that linear scan would outperform the techniques being proposed on the workloads studied in high (10-15) dimensionality!

1,992 citations
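The effect is easy to reproduce. Below is a minimal sketch, assuming i.i.d. uniform data and a random query point (not the paper's benchmark workloads), showing how the relative gap between the farthest and nearest neighbour distances shrinks as dimensionality grows.

```python
import numpy as np

rng = np.random.default_rng(0)

def relative_contrast(dim, n_points=1000):
    """Relative gap between farthest and nearest neighbour of a random query."""
    X = rng.random((n_points, dim))          # i.i.d. uniform data, assumed for the demo
    q = rng.random(dim)                      # random query point
    d = np.linalg.norm(X - q, axis=1)
    return (d.max() - d.min()) / d.min()

for dim in (2, 10, 15, 100, 1000):
    print(f"dim = {dim:5d}: (Dmax - Dmin) / Dmin = {relative_contrast(dim):.3f}")
```

The printed contrast drops from orders of magnitude in 2 dimensions to a small fraction in 1000 dimensions, which is the sense in which "nearest" and "farthest" neighbours become nearly indistinguishable.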

Journal ArticleDOI
01 Mar 1970

1,097 citations

Journal ArticleDOI
TL;DR: Experimental results on the Modified National Institute of Standards and Technology database and the NYU NORB object recognition dataset demonstrate the effectiveness of the proposed Broad Learning System compared with existing deep neural networks.
Abstract: A Broad Learning System (BLS) that aims to offer an alternative way of learning in deep structure is proposed in this paper. Deep structures and their learning suffer from a time-consuming training process because of the large number of connecting parameters in filters and layers. Moreover, they require a complete retraining process if the structure is not sufficient to model the system. The BLS is established in the form of a flat network, where the original inputs are transferred and placed as “mapped features” in feature nodes and the structure is expanded in the wide sense in the “enhancement nodes.” Incremental learning algorithms are developed for fast remodeling in broad expansion without a retraining process when the network is deemed to need expansion. Two incremental learning algorithms are given, one for the increment of the feature nodes (or filters in a deep structure) and one for the increment of the enhancement nodes. The designed model and algorithms are very versatile for selecting a model rapidly. In addition, another incremental learning algorithm is developed for the case in which a system that has already been modeled encounters new incoming inputs; specifically, the system can be remodeled incrementally without entire retraining from the beginning. Model reduction using singular value decomposition is conducted to simplify the final structure, with satisfactory results. Compared with existing deep neural networks, experimental results on the Modified National Institute of Standards and Technology database and the NYU NORB object recognition dataset demonstrate the effectiveness of the proposed BLS.

1,061 citations
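Below is a heavily simplified sketch of the flat architecture described above: random "mapped feature" nodes, random "enhancement" nodes built from them, and output weights obtained in closed form by ridge regression. The toy data, node counts, and regularization constant are assumptions for illustration; the incremental-expansion algorithms and the authors' exact construction are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data (assumed for illustration).
X = rng.uniform(-1, 1, size=(500, 2))
y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1] ** 2

def flat_net_fit_predict(X, y, n_feature_nodes=20, n_enhance_nodes=40, reg=1e-3):
    """Simplified flat network in the spirit of BLS: random mapped-feature and
    enhancement nodes, with output weights found by ridge regression."""
    Wf = rng.normal(size=(X.shape[1], n_feature_nodes))
    Z = np.tanh(X @ Wf)                        # mapped feature nodes
    We = rng.normal(size=(n_feature_nodes, n_enhance_nodes))
    H = np.tanh(Z @ We)                        # enhancement nodes built from mapped features
    A = np.hstack([Z, H])                      # flat layer feeding the output
    # closed-form output weights: (A^T A + reg*I)^-1 A^T y
    W_out = np.linalg.solve(A.T @ A + reg * np.eye(A.shape[1]), A.T @ y)
    return A @ W_out

pred = flat_net_fit_predict(X, y)
print(f"training RMSE of the simplified flat network: {np.sqrt(np.mean((pred - y) ** 2)):.4f}")
```

Because only the output weights are trained, fitting reduces to a single linear solve; this is the property the abstract contrasts with the retraining cost of deep structures.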

01 Jan 2013

801 citations