Author

Jun-ichiro Hirayama

Bio: Jun-ichiro Hirayama is an academic researcher at the National Institute of Advanced Industrial Science and Technology. The author has contributed to research on topics including blind signal separation and Bayesian inference. The author has an h-index of 10 and has co-authored 37 publications receiving 290 citations. Previous affiliations of Jun-ichiro Hirayama include Ricoh and the Nara Institute of Science and Technology.

Papers
Journal Article
TL;DR: The results suggest that analyzing multi-subject brain activities on common bases with the proposed method enables information sharing across subjects with low-burden resting calibration, making it effective for practical use of BMIs in variable environments.

98 citations

Proceedings Article
14 Jul 2011
TL;DR: This article showed that the Bregman divergence provides a rich framework to estimate unnormalized statistical models for continuous or discrete random variables, that is, models which do not integrate or sum to one, respectively.
Abstract: We show that the Bregman divergence provides a rich framework to estimate unnormalized statistical models for continuous or discrete random variables, that is, models which do not integrate or sum to one, respectively. We prove that recent estimation methods such as noise-contrastive estimation, ratio matching, and score matching belong to the proposed framework, and explain their interconnection based on supervised learning. Further, we discuss the role of boosting in unsupervised learning.
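
To make the connection to supervised learning concrete, here is a minimal sketch of noise-contrastive estimation, one of the methods the paper places within the Bregman framework: estimating an unnormalized model reduces to logistic regression between data and noise samples. The model, names, and settings below are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def log_model(x, lam, c):
    # Unnormalized Gaussian; c stands in for the unknown log-normalizer,
    # estimated as an ordinary free parameter.
    return -0.5 * lam * x**2 + c

def log_noise(x):
    # Known noise density: standard normal.
    return -0.5 * x**2 - 0.5 * np.log(2.0 * np.pi)

def log_sigmoid(z):
    return -np.logaddexp(0.0, -z)

def nce_loss(params, x_data, x_noise):
    lam, c = params
    g_d = log_model(x_data, lam, c) - log_noise(x_data)
    g_n = log_model(x_noise, lam, c) - log_noise(x_noise)
    # Logistic regression: data samples labeled 1, noise samples 0.
    return -(log_sigmoid(g_d).mean() + log_sigmoid(-g_n).mean())

lam_true = 2.0
x_data = rng.normal(0.0, 1.0 / np.sqrt(lam_true), size=5000)
x_noise = rng.normal(0.0, 1.0, size=5000)

res = minimize(nce_loss, x0=np.array([1.0, 0.0]),
               args=(x_data, x_noise), method="Nelder-Mead")
lam_hat, c_hat = res.x
# lam_hat should be near 2; c_hat near 0.5*ln(lam_true/(2*pi)).
print(lam_hat, c_hat)
```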

45 citations

Journal Article
TL;DR: The proposed switching ICA (SwICA) is based on the noisy ICA formulated as a generative model, and employs a special type of hidden Markov model (HMM) to represent such prior knowledge that the source may abruptly appear or disappear with time.
Abstract: Independent component analysis (ICA) is currently the most widely used approach to blind source separation (BSS), the problem of recovering unknown source signals when their mixtures are observed but the actual mixing process is unknown. Many ICA algorithms assume that a fixed set of source signals exists consistently in the mixtures throughout the time series under examination. However, real-world signals often exhibit a difficult form of nonstationarity in which individual source signals abruptly appear or disappear, so the set of active sources changes dynamically over time. In this paper, we propose switching ICA (SwICA), which focuses on such situations. The proposed approach is based on noisy ICA formulated as a generative model. We employ a special type of hidden Markov model (HMM) to represent the prior knowledge that a source may abruptly appear or disappear over time. This HMM setting then provides an effect of variable selection in a dynamic way. We use the variational Bayes (VB) method to derive an effective approximation of Bayesian inference for this model. In simulation experiments using artificial and realistic source signals, the proposed method outperformed existing methods, especially in the presence of noise. The compared methods include natural-gradient ICA with a nonholonomic constraint and an existing ICA method incorporating an HMM source model, which aims to deal with general nonstationarities that may exist in source signals. In addition, the proposed method successfully recovered the source signals even when the total number of true sources was overestimated or exceeded the number of mixtures. We also propose a modification of the basic Markov model into a semi-Markov model, and show that the semi-Markov version is more effective for robust estimation of source appearance.
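
The generative side of this model is easy to sketch: independent two-state Markov chains gate each source on and off, and observations are noisy linear mixtures of the gated sources. The following is an illustrative simulation of that generative model only; the paper's variational Bayes inference is not reproduced here, and all names and settings are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
T, n_src, n_obs = 1000, 3, 4

# One two-state Markov chain per source: 1 = active, 0 = silent.
p_stay = 0.98                      # probability of keeping the current state
states = np.zeros((T, n_src), dtype=int)
states[0] = 1
for t in range(1, T):
    flip = rng.random(n_src) > p_stay
    states[t] = np.where(flip, 1 - states[t - 1], states[t - 1])

# Super-Gaussian (Laplace) sources, gated on and off by the chains.
sources = rng.laplace(size=(T, n_src)) * states

# Noisy linear mixing, x_t = A s_t + e_t, with A unknown in the BSS setting.
A = rng.normal(size=(n_obs, n_src))
noise = 0.1 * rng.normal(size=(T, n_obs))
X = sources @ A.T + noise          # rows of X are the observed mixtures
```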

21 citations

Journal Article
TL;DR: This paper adopts probabilistic principal component analysis (PPCA) as a functional model of cortical representation learning, and presents an online Bayesian learning method for PPCA, including a heuristic criterion for model selection.
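
The paper's exact Bayesian update and model-selection criterion are not given here, but a generic online-EM sketch of PPCA conveys the kind of incremental learning involved: per-sample posterior moments of the latent variables feed exponentially weighted sufficient statistics. All names and settings below are illustrative, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(2)
d, k, eta = 10, 2, 0.05                   # data dim, latent dim, forgetting rate

W = rng.normal(scale=0.1, size=(d, k))    # loading matrix estimate
sigma2 = 1.0                              # isotropic noise variance estimate
A = np.zeros((d, k))                      # running average of x E[z]^T
B = np.eye(k)                             # running average of E[z z^T]

W_true = rng.normal(size=(d, k))          # ground truth for synthetic data
for t in range(5000):
    x = W_true @ rng.normal(size=k) + 0.1 * rng.normal(size=d)
    # Per-sample E-step: posterior moments of the latent vector z.
    M = W.T @ W + sigma2 * np.eye(k)
    Minv = np.linalg.inv(M)
    ez = Minv @ W.T @ x
    ezz = sigma2 * Minv + np.outer(ez, ez)
    # Online M-step: exponentially weighted sufficient statistics.
    A = (1 - eta) * A + eta * np.outer(x, ez)
    B = (1 - eta) * B + eta * ezz
    W = A @ np.linalg.inv(B)
    resid = (x @ x - 2.0 * ez @ (W.T @ x) + np.trace(ezz @ (W.T @ W))) / d
    sigma2 = max(1e-6, (1 - eta) * sigma2 + eta * resid)
```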

18 citations

Book Chapter
19 Apr 2009
TL;DR: This work proposes a new approach to modeling time-varying relational data such as e-mail transactions based on a dynamic extension of matrix factorization, and applies the sequential Bayesian framework to track the variations of true parameters.
Abstract: We propose a new approach to modeling time-varying relational data, such as e-mail transactions, based on a dynamic extension of matrix factorization. To effectively estimate the true relationships behind a sequence of noise-corrupted relational matrices, their dynamic evolution is modeled in a space of low-rank matrices. The observed matrices are assumed to be sampled from an exponential-family distribution that has the low-rank matrix as its natural parameters. We apply the sequential Bayesian framework to track the variations of the true parameters. In experiments using both artificial and real-world datasets, we demonstrate that our method estimates time-varying true relations from noisy observations more effectively than existing methods.
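
As an illustration, the generative process can be sketched as a low-rank natural-parameter matrix drifting over time, with each binary relational matrix Bernoulli-sampled through a sigmoid; a simple gradient filter stands in for the paper's sequential Bayesian update. Everything below is a hypothetical sketch, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, k, T = 20, 15, 3, 200        # matrix size, rank, number of time steps

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

U = rng.normal(size=(n, k))        # true drifting factors
V = rng.normal(size=(m, k))
Uh = rng.normal(scale=0.1, size=(n, k))   # tracked estimates
Vh = rng.normal(scale=0.1, size=(m, k))

for t in range(T):
    # True low-rank natural parameters follow a random walk.
    U += 0.02 * rng.normal(size=U.shape)
    V += 0.02 * rng.normal(size=V.shape)
    # Binary relational matrix (e.g., who e-mailed whom at time t).
    X = (rng.random((n, m)) < sigmoid(U @ V.T)).astype(float)
    # A few gradient-ascent steps on the Bernoulli log-likelihood per
    # time step, as a crude stand-in for sequential Bayesian tracking.
    for _ in range(5):
        R = X - sigmoid(Uh @ Vh.T)         # observed minus predicted
        Uh, Vh = Uh + 0.05 * R @ Vh, Vh + 0.05 * R.T @ Uh
```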

17 citations


Cited by
Journal Article
TL;DR: This Review looks at some key brain theories in the biological and physical sciences from the free-energy perspective, suggesting that several global brain theories might be unified within a free-energy framework.
Abstract: A free-energy principle has been proposed recently that accounts for action, perception and learning. This Review looks at some key brain theories in the biological (for example, neural Darwinism) and physical (for example, information theory and optimal control theory) sciences from the free-energy perspective. Crucially, one key theme runs through each of these theories — optimization. Furthermore, if we look closely at what is optimized, the same quantity keeps emerging, namely value (expected reward, expected utility) or its complement, surprise (prediction error, expected cost). This is the quantity that is optimized under the free-energy principle, which suggests that several global brain theories might be unified within a free-energy framework.
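
In standard notation (not copied from the review), the quantity in question is the variational free energy, which upper-bounds surprise, i.e. negative log model evidence:

```latex
% Variational free energy as an upper bound on surprise (-ln p(y));
% q is an arbitrary recognition density over hidden causes \vartheta.
F(q, y) = \mathbb{E}_{q(\vartheta)}\left[\ln q(\vartheta) - \ln p(y, \vartheta)\right]
        = -\ln p(y) + D_{\mathrm{KL}}\!\left[q(\vartheta)\,\|\,p(\vartheta \mid y)\right]
        \;\ge\; -\ln p(y).
```

Minimizing F with respect to q therefore minimizes an upper bound on surprise, which is the single optimization the review argues runs through the theories it surveys.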

4,866 citations

Journal Article
TL;DR: This paper reviews a free-energy formulation that advances Helmholtz's agenda to find principles of brain function based on conservation laws and neuronal energy; the formulation rests on advances in statistical physics, theoretical biology and machine learning to explain a remarkable range of facts about brain structure and function.

1,369 citations

Journal Article
TL;DR: A comprehensive overview of the modern classification algorithms used in EEG-based BCIs is provided, the principles of these methods and guidelines on when and how to use them are presented, and a number of challenges to further advance EEG classification in BCI are identified.
Abstract: Objective: Most current electroencephalography (EEG)-based brain-computer interfaces (BCIs) are based on machine learning algorithms. There is a large diversity of classifier types used in this field, as described in our 2007 review paper. Now, approximately 10 years after that review was published, many new algorithms have been developed and tested to classify EEG signals in BCIs. The time is therefore ripe for an updated review of EEG classification algorithms for BCIs. Approach: We surveyed the BCI and machine learning literature from 2007 to 2017 to identify the new classification approaches that have been investigated to design BCIs. We synthesize these studies in order to present such algorithms, to report how they were used for BCIs and what the outcomes were, and to identify their pros and cons. Main results: We found that the recently designed classification algorithms for EEG-based BCIs can be divided into four main categories: adaptive classifiers, matrix and tensor classifiers, transfer learning and deep learning, plus a few other miscellaneous classifiers. Among these, adaptive classifiers were demonstrated to be generally superior to static ones, even with unsupervised adaptation. Transfer learning can also prove useful, although its benefits remain unpredictable. Riemannian geometry-based methods have reached state-of-the-art performance on multiple BCI problems and deserve to be explored more thoroughly, along with tensor-based methods. Shrinkage linear discriminant analysis and random forests also appear particularly useful for small training-sample settings. On the other hand, deep learning methods have not yet shown convincing improvement over state-of-the-art BCI methods. Significance: This paper provides a comprehensive overview of the modern classification algorithms used in EEG-based BCIs, presents the principles of these methods, and gives guidelines on when and how to use them. It also identifies a number of challenges to further advance EEG classification in BCI.
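
One of the findings above, the usefulness of shrinkage LDA in small training-sample settings, is easy to demonstrate with scikit-learn's built-in Ledoit-Wolf shrinkage. The data below are random stand-ins for real EEG features, purely for illustration:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(4)
n_trials, n_features = 40, 64      # few trials, many features: typical BCI regime

# Random stand-ins for EEG feature vectors, two classes, small separation.
X = rng.normal(size=(n_trials, n_features))
y = rng.integers(0, 2, size=n_trials)
X[y == 1] += 0.5

# solver="lsqr" with shrinkage="auto" applies Ledoit-Wolf shrinkage to the
# covariance estimate, which would otherwise be ill-conditioned here.
clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
clf.fit(X[:30], y[:30])
print("held-out accuracy:", clf.score(X[30:], y[30:]))
```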

1,280 citations

Journal Article
TL;DR: This review aims to provide a comprehensive description of the dFC approaches proposed so far, and to point to the directions the authors see as most promising for future developments of the field.

1,032 citations

Journal Article
TL;DR: It is shown that if the precision depends on the states, one can explain many aspects of attention, including attentional bias or gating, competition for attentional resources, attentional capture and associated speed-accuracy trade-offs.
Abstract: We suggested recently that attention can be understood as inferring the level of uncertainty or precision during hierarchical perception. In this paper, we try to substantiate this claim using neuronal simulations of directed spatial attention and biased competition. These simulations assume that neuronal activity encodes a probabilistic representation of the world that optimizes free-energy in a Bayesian fashion. Because free-energy bounds surprise or the (negative) log-evidence for internal models of the world, this optimization can be regarded as evidence accumulation or (generalized) predictive coding. Crucially, both predictions about the state of the world generating sensory data and the precision of those data have to be optimized. Here, we show that if the precision depends on the states, one can explain many aspects of attention. We illustrate this in the context of the Posner paradigm, using the simulations to generate both psychophysical and electrophysiological responses. These simulated responses are consistent with attentional bias or gating, competition for attentional resources, attentional capture and associated speed-accuracy trade-offs. Furthermore, if we present both attended and non-attended stimuli simultaneously, biased competition for neuronal representation emerges as a principled and straightforward property of Bayes-optimal perception.
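
A toy illustration of the core mechanism, precision-weighted prediction error, is sketched below: beliefs are updated by prediction errors multiplied by their estimated precision, so high (attended) precision drives faster belief updating. This is purely schematic and not the paper's generalized predictive-coding scheme; all names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(5)
true_x, lr, steps = 2.0, 0.1, 200

for pi in (0.2, 5.0):              # low vs. high ("attended") precision
    mu = 0.0                       # belief about the hidden cause
    for _ in range(steps):
        y = true_x + rng.normal(scale=1.0 / np.sqrt(pi))
        eps = y - mu               # sensory prediction error
        mu += lr * pi * eps        # error is weighted by its precision
    print(f"precision={pi}: final estimate={mu:.2f}")
```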

1,015 citations