Author

Hiroshi Nagaoka

Bio: Hiroshi Nagaoka is an academic researcher at the University of Electro-Communications. His research topics include quantum channels and classical capacity. He has an h-index of 21 and has co-authored 45 publications receiving 4,466 citations.

Papers
Journal ArticleDOI
TL;DR: A capacity formula and a characterization of the strong converse property are given in parallel with the corresponding classical results of Verdu-Han (1994), which are based on the so-called information-spectrum method.
Abstract: The capacity of a classical-quantum channel (in other words, the classical capacity of a quantum channel) is considered in the most general setting, where no structural assumptions such as the stationary memoryless property are made on the channel. A capacity formula and a characterization of the strong converse property are given in parallel with the corresponding classical results of Verdu-Han (1994), which are based on the so-called information-spectrum method. The general results are applied to the stationary memoryless case with and without a cost constraint on inputs, whereby a deep relation between channel coding theory and hypothesis testing for two quantum states is elucidated.
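For context, a sketch of the shape of the general result, with notation assumed here rather than quoted from the paper: the capacity of a general sequence of classical-quantum channels is expressed through spectral information rates, in direct parallel with the Verdu-Han classical formula.

```latex
% Sketch (notation assumed): for a general channel sequence \mathbf{W}
% and input processes \mathbf{X}, the capacity is the supremum of the
% spectral inf-information rate,
C(\mathbf{W}) \;=\; \sup_{\mathbf{X}} \,\underline{I}(\mathbf{X};\mathbf{W}),
% and the strong converse property holds exactly when this coincides
% with the corresponding spectral sup-information rate,
C^{\dagger}(\mathbf{W}) \;=\; \sup_{\mathbf{X}} \,\overline{I}(\mathbf{X};\mathbf{W}).
```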

398 citations

Journal ArticleDOI
TL;DR: A new inequality is shown between the errors of the first kind and the second kind, which complements the result of Hiai and Petz (1991) to establish the quantum version of Stein's lemma and yields the strong converse in quantum hypothesis testing.
Abstract: The hypothesis testing problem for two quantum states is treated. We show a new inequality between the errors of the first kind and the second kind, which complements the result of Hiai and Petz (1991) to establish the quantum version of Stein's lemma. The inequality is also used to show a bound on the probability of errors of the first kind when the power exponent for the probability of errors of the second kind exceeds the quantum relative entropy, which yields the strong converse in quantum hypothesis testing. Finally, we discuss the relation between the bound and the power exponent derived by Han and Kobayashi (1989) in classical hypothesis testing.
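For context, the quantum Stein's lemma that this inequality completes can be stated in its standard form (notation assumed here):

```latex
% Quantum Stein's lemma (standard formulation): for tests 0 \le A_n \le I
% between \rho^{\otimes n} and \sigma^{\otimes n}, with type-I error
% \alpha_n = \mathrm{Tr}\,\rho^{\otimes n}(I - A_n) and type-II error
% \beta_n = \mathrm{Tr}\,\sigma^{\otimes n} A_n, the optimal type-II
% exponent under \alpha_n \le \epsilon equals the quantum relative entropy:
\lim_{n\to\infty} -\tfrac{1}{n}\log \beta_n^{*}(\epsilon)
  \;=\; D(\rho\,\|\,\sigma)
  \;=\; \mathrm{Tr}\,\rho\,(\log\rho - \log\sigma),
  \qquad 0 < \epsilon < 1.
% Strong converse: if the type-II exponent is forced above D(\rho\|\sigma),
% the type-I error tends to 1.
```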

384 citations

Journal ArticleDOI
TL;DR: A lower bound on the probability of decoding error for a quantum communication channel is presented, from which the strong converse to the quantum channel coding theorem is immediately shown.
Abstract: A lower bound on the probability of decoding error for a quantum communication channel is presented, from which the strong converse to the quantum channel coding theorem is immediately shown. The results and their derivations are mostly straightforward extensions of the classical counterparts which were established by Arimoto (1973), except that more careful treatment is necessary here due to the noncommutativity of operators.
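A sketch of what the strong converse asserts in this setting (standard statement; the exponent notation is assumed here):

```latex
% Strong converse (sketch): for a stationary memoryless classical-quantum
% channel with capacity C, any code sequence at rate R > C has success
% probability decaying exponentially, via an Arimoto-style bound:
R > C \;\Longrightarrow\;
P_{\mathrm{succ}}^{(n)} \;\le\; e^{-nE(R)}
\quad\text{for some exponent } E(R) > 0.
```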

201 citations

Journal ArticleDOI
TL;DR: The information-spectrum analysis developed by Han for classical hypothesis testing of simple hypotheses is extended to a unifying framework covering both classical and quantum hypothesis testing; the results are also applied to fixed-length source coding when the normalization condition on probability distributions and quantum states is loosened.
Abstract: The information-spectrum analysis made by Han for classical hypothesis testing of simple hypotheses is extended to a unifying framework that includes both classical and quantum hypothesis testing. The results are also applied to fixed-length source coding when the normalization condition on probability distributions and quantum states is loosened. We establish general formulas for several quantities relating to the asymptotic optimality of tests/codes in terms of classical and quantum information spectra.
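For orientation, the two spectral divergence rates that anchor this framework can be sketched as follows; the notation is assumed here and may differ in detail from the paper:

```latex
% Spectral divergence rates (sketch): for sequences of states
% \rho_n, \sigma_n, writing \{X \ge 0\} for the projector onto the
% nonnegative eigenspace of X,
\overline{D}(\boldsymbol{\rho}\,\|\,\boldsymbol{\sigma})
  = \inf\Bigl\{ a \;:\;
    \limsup_{n\to\infty} \mathrm{Tr}\,\rho_n
    \{\rho_n - e^{na}\sigma_n \ge 0\} = 0 \Bigr\},
\qquad
\underline{D}(\boldsymbol{\rho}\,\|\,\boldsymbol{\sigma})
  = \sup\Bigl\{ a \;:\;
    \liminf_{n\to\infty} \mathrm{Tr}\,\rho_n
    \{\rho_n - e^{na}\sigma_n \ge 0\} = 1 \Bigr\}.
% General testing and coding theorems in this framework are stated in
% terms of these quantities; classically they reduce to Han's
% information-spectrum rates.
```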

158 citations


Cited by
Book
16 Dec 2008
TL;DR: The variational approach provides a complementary alternative to Markov chain Monte Carlo as a general source of approximation methods for inference in large-scale statistical models.
Abstract: The formalism of probabilistic graphical models provides a unifying framework for capturing complex dependencies among random variables, and building large-scale multivariate statistical models. Graphical models have become a focus of research in many statistical, computational and mathematical fields, including bioinformatics, communication theory, statistical physics, combinatorial optimization, signal and image processing, information retrieval and statistical machine learning. Many problems that arise in specific instances — including the key problems of computing marginals and modes of probability distributions — are best studied in the general setting. Working with exponential family representations, and exploiting the conjugate duality between the cumulant function and the entropy for exponential families, we develop general variational representations of the problems of computing likelihoods, marginal probabilities and most probable configurations. We describe how a wide variety of algorithms — among them sum-product, cluster variational methods, expectation-propagation, mean field methods, max-product and linear programming relaxation, as well as conic programming relaxations — can all be understood in terms of exact or approximate forms of these variational representations. The variational approach provides a complementary alternative to Markov chain Monte Carlo as a general source of approximation methods for inference in large-scale statistical models.
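For readers wanting the central formula, the conjugate duality the abstract refers to can be stated compactly (standard exponential-family notation):

```latex
% For an exponential family with log-partition (cumulant) function
% A(\theta) and mean-parameter space \mathcal{M}, A is the convex
% conjugate of the negative entropy A^{*}, giving the variational
% representation
A(\theta) \;=\; \sup_{\mu \in \mathcal{M}}
  \bigl\{ \langle \theta, \mu \rangle - A^{*}(\mu) \bigr\},
% attained at the mean parameter \mu(\theta) = \mathbb{E}_{\theta}[\phi(X)].
% Mean-field and Bethe/sum-product methods arise by restricting or
% relaxing \mathcal{M} and approximating A^{*}.
```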

4,335 citations

Journal ArticleDOI

2,415 citations

Book
12 Oct 2009
TL;DR: This book provides a broad survey of models and efficient algorithms for Nonnegative Matrix Factorization (NMF), including NMF's various extensions and modifications, especially Nonnegative Tensor Factorizations (NTF) and Nonnegative Tucker Decompositions (NTD).
Abstract: This book provides a broad survey of models and efficient algorithms for Nonnegative Matrix Factorization (NMF), including NMF's various extensions and modifications, especially Nonnegative Tensor Factorizations (NTF) and Nonnegative Tucker Decompositions (NTD). NMF/NTF and their extensions are increasingly used as tools in signal and image processing and data analysis, having garnered interest due to their capability to provide new insights and relevant information about the complex latent relationships in experimental data sets. NMF can provide meaningful components with physical interpretations; for example, in bioinformatics, NMF and its extensions have been successfully applied to gene expression, sequence analysis, the functional characterization of genes, clustering, and text mining. As such, the authors focus on the algorithms that are most useful in practice: the fastest, the most robust, and the most suitable for large-scale models. Key features:
- Acts as a single-source reference guide to NMF, collating information that is widely dispersed in the current literature, including the authors' own recently developed techniques in the subject area.
- Uses generalized cost functions such as Bregman, Alpha, and Beta divergences to present practical implementations of several types of robust algorithms, in particular multiplicative, alternating least squares, projected gradient, and quasi-Newton algorithms.
- Provides a comparative analysis of the different methods in order to identify approximation error and complexity.
- Includes pseudo-code and optimized MATLAB source code for almost all algorithms presented in the book.
The increasing interest in nonnegative matrix and tensor factorizations, as well as decompositions and sparse representation of data, ensures that this book is essential reading for engineers, scientists, researchers, industry practitioners, and graduate students across signal and image processing; neuroscience; data mining and data analysis; computer science; bioinformatics; speech processing; biomedical engineering; and multimedia.
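As a concrete illustration of one algorithm family the book surveys, here is a minimal sketch of the classical Lee-Seung multiplicative updates under the Frobenius-norm cost; the function name and parameters are assumptions of this sketch, not the book's optimized MATLAB implementations:

```python
import numpy as np

def nmf_multiplicative(V, rank, n_iter=200, eps=1e-10, seed=0):
    """Minimal sketch of multiplicative-update NMF for V ~ W @ H
    under the Frobenius-norm cost (illustrative, not the book's code)."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank)) + eps   # nonnegative initialization
    H = rng.random((rank, n)) + eps
    for _ in range(n_iter):
        # Elementwise multiplicative updates preserve nonnegativity.
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Example: factor a random nonnegative 20x30 matrix at rank 5.
V = np.abs(np.random.default_rng(1).random((20, 30)))
W, H = nmf_multiplicative(V, rank=5)
print(np.linalg.norm(V - W @ H))  # reconstruction error after fitting
```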

2,136 citations

Proceedings ArticleDOI
01 Dec 2005
TL;DR: This paper proposes and analyzes parametric hard and soft clustering algorithms based on a large class of distortion functions known as Bregman divergences, and shows that there is a bijection between regular exponential families and a large class of Bregman divergences, called regular Bregman divergences.
Abstract: A wide variety of distortion functions, such as squared Euclidean distance, Mahalanobis distance, Itakura-Saito distance and relative entropy, have been used for clustering. In this paper, we propose and analyze parametric hard and soft clustering algorithms based on a large class of distortion functions known as Bregman divergences. The proposed algorithms unify centroid-based parametric clustering approaches, such as classical k-means, the Linde-Buzo-Gray (LBG) algorithm and information-theoretic clustering, which arise by special choices of the Bregman divergence. The algorithms maintain the simplicity and scalability of the classical k-means algorithm, while generalizing the method to a large class of clustering loss functions. This is achieved by first posing the hard clustering problem in terms of minimizing the loss in Bregman information, a quantity motivated by rate distortion theory, and then deriving an iterative algorithm that monotonically decreases this loss. In addition, we show that there is a bijection between regular exponential families and a large class of Bregman divergences, which we call regular Bregman divergences. This result enables the development of an alternative interpretation of an efficient EM scheme for learning mixtures of exponential family distributions, and leads to a simple soft clustering algorithm for regular Bregman divergences. Finally, we discuss the connection between rate distortion theory and Bregman clustering and present an information-theoretic analysis of Bregman clustering algorithms in terms of a trade-off between compression and loss in Bregman information.
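A minimal sketch of the hard clustering scheme the abstract describes; the `divergence` callable and function names are assumptions of this sketch, not the paper's code. The update step uses a key result of the paper: for any Bregman divergence, the optimal centroid is the plain arithmetic mean.

```python
import numpy as np

def bregman_hard_cluster(X, k, divergence, n_iter=50, seed=0):
    """Sketch of Bregman hard clustering: assign each point to the
    centroid of least Bregman divergence, then update each centroid
    to the mean of its cluster."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Assignment step: d_phi(x, mu) for every point/centroid pair.
        d = np.array([[divergence(x, mu) for mu in centroids] for x in X])
        labels = d.argmin(axis=1)
        # Update step: the optimal Bregman centroid is the mean
        # (keep the old centroid if a cluster goes empty).
        centroids = np.array([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)
        ])
    return labels, centroids

# Squared Euclidean distance recovers classical k-means; swapping in a
# generalized KL divergence gives information-theoretic clustering.
sq_euclid = lambda x, mu: float(np.sum((x - mu) ** 2))
X = np.random.default_rng(2).random((100, 2))
labels, centroids = bregman_hard_cluster(X, k=3, divergence=sq_euclid)
```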

1,723 citations