Author

Shunsuke Ihara

Other affiliations: Ehime University
Bio: Shunsuke Ihara is an academic researcher from Nagoya University. The author has contributed to research in the topics of Gaussian noise and Gaussian random fields. The author has an h-index of 10 and has co-authored 31 publications receiving 711 citations. Previous affiliations of Shunsuke Ihara include Ehime University.

Papers
Book
01 Sep 1993

402 citations

Journal ArticleDOI
Shunsuke Ihara
TL;DR: It is proved that C(X_0) ≤ C(X) ≤ C(X_0) + H_{X_0}(X), where C(X_0) is the capacity of the channel with additive Gaussian noise X_0 having the same covariance as X, and H_{X_0}(X) is the entropy of the measure induced by X in the function space with respect to that induced by X_0.
Abstract: We give upper and lower bounds on the capacity C(X) of a channel with additive noise X under a constraint on input signals in terms of the second-order moments. It is proved that C(X_0) ≤ C(X) ≤ C(X_0) + H_{X_0}(X), where C(X_0) is the capacity of the channel with additive Gaussian noise X_0 with the same covariance as that of X, and H_{X_0}(X) is the entropy of the measure induced by X in the function space with respect to that induced by X_0.

144 citations
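To make Ihara's bound concrete: for a scalar channel with input power P and noise variance N, C(X_0) is the usual AWGN capacity, and the gap to the upper bound is the relative entropy of the noise law from its Gaussian counterpart. A minimal numerical sketch (function names are ours; the 0.0723-nat figure is D(Laplace || N(0,1)) for unit-variance Laplace noise):

```python
import numpy as np

def gaussian_capacity(P, N):
    """Capacity (nats/use) of the scalar AWGN channel with input power P, noise variance N."""
    return 0.5 * np.log(1.0 + P / N)

def ihara_bounds(P, N, divergence):
    """Ihara's bounds C(X0) <= C(X) <= C(X0) + H_{X0}(X): X0 is Gaussian with
    the same covariance as the noise X, and `divergence` stands in for
    H_{X0}(X), the relative entropy (nats) of the law of X w.r.t. that of X0."""
    c0 = gaussian_capacity(P, N)
    return c0, c0 + divergence

# Illustration: unit-variance Laplace noise; D(Laplace || N(0,1)) with matched
# variance is about 0.0723 nats, so the two bounds pin C(X) down tightly.
lo, hi = ihara_bounds(P=1.0, N=1.0, divergence=0.0723)
print(f"{lo:.4f} <= C(X) <= {hi:.4f} nats/use")
```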

Journal Article
TL;DR: The aim of the paper is to derive formulas for the reliability function and the minimum achievable rate for memoryless Gaussian sources.
Abstract: We are interested in the error exponent for source coding with a fidelity criterion. For each fixed distortion level Δ, the maximum attainable error exponent at rate R, as a function of R, is called the reliability function. The minimum rate achieving a given error exponent is called the minimum achievable rate. For memoryless sources with finite alphabet, Marton (1974) gave an expression for the reliability function. The aim of the paper is to derive formulas for the reliability function and the minimum achievable rate for memoryless Gaussian sources.
Key words: error exponent, minimum achievable rate, reliability

39 citations
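For intuition about the Gaussian case, Marton's variational form F(R, Δ) = min{ D(q ‖ p) : R_q(Δ) ≥ R } can be evaluated under the assumption, made here purely for illustration, that the minimizing source q is itself Gaussian; the sketch below does that and should not be read as the paper's exact formula:

```python
import numpy as np

def gaussian_rd(sigma2, delta):
    """Rate-distortion function (nats) of N(0, sigma2) under squared-error distortion."""
    return max(0.0, 0.5 * np.log(sigma2 / delta))

def reliability_sketch(R, delta, sigma2):
    """Evaluate Marton's form F(R, D) = min{ D(q||p) : R_q(D) >= R } for
    p = N(0, sigma2), ASSUMING the minimizer q is Gaussian N(0, s2).
    A Gaussian q attains R_q(delta) = R at s2 = delta * exp(2R), and
    D(N(0, s2) || N(0, sigma2)) = 0.5 * (s2/sigma2 - 1 - ln(s2/sigma2))."""
    s2 = delta * np.exp(2.0 * R)
    r = s2 / sigma2
    return 0.5 * (r - 1.0 - np.log(r))

sigma2, delta = 1.0, 0.25
print("R(delta) =", gaussian_rd(sigma2, delta))  # ~0.693 nats
for R in (0.8, 1.0, 1.5):                        # rates above R(delta)
    print(R, reliability_sketch(R, delta, sigma2))
```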

Journal ArticleDOI
Shunsuke Ihara
TL;DR: The main aim of the paper is to prove a coding theorem for a continuous-time Gaussian channel with feedback, under an average power constraint.
Abstract: The main aim of the paper is to prove a coding theorem for a continuous-time Gaussian channel with feedback, under an average power constraint. In the discrete-time case, the coding theorem for the feedback Gaussian channel was proved by Cover and Pombra (1989).

24 citations
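For context, the discrete-time result that this paper extends is the Cover-Pombra characterization of n-block feedback capacity. Reconstructed from memory (a sketch, not a quotation from either paper): with channel Y = X + Z, Z ~ N(0, K_Z), and feedback input X = BZ + V for strictly lower-triangular B,

```latex
C_{n,\mathrm{FB}}
  = \max_{K_V,\,B}\;
    \frac{1}{2n}\,
    \log \frac{\det\!\big(K_V + (B+I)\,K_Z\,(B+I)^{\top}\big)}{\det K_Z}
  \qquad
  \text{s.t.}\;\; \operatorname{tr}\!\big(K_V + B K_Z B^{\top}\big) \le nP .
```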

Journal ArticleDOI
TL;DR: The optimal coding under average power constraints is constructed for the Gaussian channel Y(t), where the noise X(·) is Gaussian.

22 citations


Cited by
Journal ArticleDOI
TL;DR: This work computes a lower bound on the capacity of a channel that is learned by training, and maximizes the bound as a function of the received signal-to-noise ratio (SNR), fading coherence time, and number of transmitter antennas.
Abstract: Multiple-antenna wireless communication links promise very high data rates with low error probabilities, especially when the wireless channel response is known at the receiver. In practice, knowledge of the channel is often obtained by sending known training symbols to the receiver. We show how training affects the capacity of a fading channel: too little training and the channel is improperly learned; too much training and there is no time left for data transmission before the channel changes. We compute a lower bound on the capacity of a channel that is learned by training, and maximize the bound as a function of the received signal-to-noise ratio (SNR), fading coherence time, and number of transmitter antennas. When the training and data powers are allowed to vary, we show that the optimal number of training symbols is equal to the number of transmit antennas; this number is also the smallest training interval length that guarantees meaningful estimates of the channel matrix. When the training and data powers are instead required to be equal, the optimal number of symbols may be larger than the number of antennas. We show that training-based schemes can be optimal at high SNR, but suboptimal at low SNR.

2,466 citations
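The shape of this tradeoff is easy to reproduce numerically. The sketch below assumes equal pilot and data powers and orthogonal pilots, folds the MMSE channel-estimation error into the effective noise, and discounts the fraction of the coherence interval spent on training; it is a caricature of the paper's bound (all names ours), not a transcription of it.

```python
import numpy as np

rng = np.random.default_rng(0)

def training_capacity_lb(M, N, T, T_tau, rho, trials=2000):
    """Monte Carlo sketch of a training-based capacity lower bound:
    M transmit / N receive antennas, coherence time T, T_tau pilot symbols,
    SNR rho. MMSE error per channel coefficient is 1 / (1 + rho*T_tau/M);
    treating it as extra noise gives the effective SNR below."""
    sigma_e2 = 1.0 / (1.0 + rho * T_tau / M)
    rho_eff = rho * (1.0 - sigma_e2) / (1.0 + rho * sigma_e2)
    rate = 0.0
    for _ in range(trials):
        Hw = (rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))) / np.sqrt(2)
        rate += np.log2(np.linalg.det(np.eye(N) + (rho_eff / M) * (Hw @ Hw.conj().T)).real)
    return (1.0 - T_tau / T) * rate / trials

# Sweeping T_tau shows the tension the abstract describes: with equal powers
# the optimum can exceed M, while optimized powers push it down to T_tau = M.
for T_tau in (2, 4, 8, 16):
    print(T_tau, round(training_capacity_lb(M=4, N=4, T=64, T_tau=T_tau, rho=10.0), 2))
```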

Journal ArticleDOI
TL;DR: Some information-theoretic considerations used to determine upper bounds on the information rates that can be reliably transmitted over a two-ray propagation path mobile radio channel model, operating in a time division multiplex access (TDMA) regime, under given decoding delay constraints are presented.
Abstract: We present some information-theoretic considerations used to determine upper bounds on the information rates that can be reliably transmitted over a two-ray propagation path mobile radio channel model, operating in a time division multiple access (TDMA) regime, under given decoding delay constraints. The sense in which reliability is measured is addressed, and in the interesting cases where the decoding delay constraint plays a significant role, the maximal achievable rate (capacity) is specified in terms of capacity versus outage. In this case, no coding capacity in the strict Shannon sense exists. Simple schemes for time and space diversity are examined, and their potential benefits are illuminated from an information-theoretic standpoint. In our presentation, we chose to specialize to the TDMA protocol for the sake of clarity and convenience. Our main arguments and results extend directly to certain variants of other multiple access protocols such as code division multiple access (CDMA) and frequency division multiple access (FDMA), provided that no fast feedback from the receiver to the transmitter is available.

1,216 citations
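Capacity versus outage is straightforward to illustrate by simulation. The sketch below uses a single-ray flat Rayleigh-fading channel (simpler than the paper's two-ray model) whose gain stays fixed over the delay-constrained codeword, so reliability can only be stated as an outage probability:

```python
import numpy as np

rng = np.random.default_rng(1)

def outage_probability(R, snr, trials=100_000):
    """Pr[ log2(1 + |h|^2 * snr) < R ] for Rayleigh fading: the channel gain
    |h|^2 ~ Exp(1) is drawn once per codeword, so no fixed rate R > 0 is
    reliable in the strict Shannon sense; outage replaces error probability."""
    h2 = rng.exponential(1.0, size=trials)
    return np.mean(np.log2(1.0 + h2 * snr) < R)

snr = 10.0  # 10 dB average SNR
for R in (0.1, 0.5, 1.0, 2.0):
    # closed form for comparison: 1 - exp(-(2^R - 1)/snr)
    print(R, outage_probability(R, snr), 1 - np.exp(-(2**R - 1) / snr))
```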

Journal ArticleDOI
TL;DR: The CDMA channel with randomly and independently chosen spreading sequences accurately models the situation where pseudonoise sequences span many symbol periods and provides a comparison baseline for CDMA channels with deterministic signature waveforms spanning one symbol period.
Abstract: The CDMA channel with randomly and independently chosen spreading sequences accurately models the situation where pseudonoise sequences span many symbol periods. Furthermore, its analysis provides a comparison baseline for CDMA channels with deterministic signature waveforms spanning one symbol period. We analyze the spectral efficiency (total capacity per chip) as a function of the number of users, spreading gain, and signal-to-noise ratio, and we quantify the loss in efficiency relative to an optimally chosen set of signature sequences and relative to multiaccess with no spreading. White Gaussian background noise and equal-power synchronous users are assumed. The following receivers are analyzed: (a) optimal joint processing, (b) single-user matched filtering, (c) decorrelation, and (d) MMSE linear processing.

1,015 citations
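As recalled from the paper's closed forms (the exact expressions below are reconstructed from memory and should be checked against the paper before reuse), the four spectral efficiencies can be computed directly from the load β = users/chip and the SNR:

```python
import numpy as np

def F(x, z):
    """The Verdu-Shamai F-function arising from the Marchenko-Pastur law."""
    return (np.sqrt(x * (1 + np.sqrt(z))**2 + 1)
            - np.sqrt(x * (1 - np.sqrt(z))**2 + 1))**2

def spectral_efficiencies(beta, snr):
    """Spectral efficiency (bits/chip) of random-spreading CDMA with
    equal-power synchronous users, for the four receivers analyzed."""
    f = 0.25 * F(snr, beta)
    mf = beta * np.log2(1 + snr / (1 + beta * snr))                   # matched filter
    deco = beta * np.log2(1 + snr * (1 - beta)) if beta < 1 else 0.0  # decorrelator
    mmse = beta * np.log2(1 + snr - f)                                # linear MMSE
    opt = mmse + np.log2(1 + beta * snr - f) - f / snr * np.log2(np.e)  # joint processing
    return mf, deco, mmse, opt

print(spectral_efficiencies(beta=0.5, snr=10.0))
```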

Journal ArticleDOI
TL;DR: The sum capacity of a class of potentially nondegraded Gaussian vector broadcast channels, where a single transmitter with multiple transmit terminals sends independent information to multiple receivers, is characterized as the saddle point of a Gaussian mutual information game.
Abstract: This paper characterizes the sum capacity of a class of potentially nondegraded Gaussian vector broadcast channels where a single transmitter with multiple transmit terminals sends independent information to multiple receivers. Coordination is allowed among the transmit terminals, but not among the receive terminals. The sum capacity is shown to be a saddle-point of a Gaussian mutual information game, where a signal player chooses a transmit covariance matrix to maximize the mutual information and a fictitious noise player chooses a noise correlation to minimize the mutual information. The sum capacity is achieved using a precoding strategy for Gaussian channels with additive side information noncausally known at the transmitter. The optimal precoding structure is shown to correspond to a decision-feedback equalizer that decomposes the broadcast channel into a series of single-user channels with interference pre-subtracted at the transmitter.

862 citations
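In symbols, the saddle-point characterization described above takes roughly the following minimax form (a paraphrase, not a quotation), where the signal player's covariance S_x is power-constrained and the fictitious noise player's covariance S_z must keep the fixed per-receiver marginal blocks:

```latex
C_{\mathrm{sum}}
  = \min_{S_z}\;\max_{\operatorname{tr}(S_x)\le P}\;
    \frac{1}{2}\,
    \log\frac{\det\!\big(H S_x H^{\top} + S_z\big)}{\det S_z}
```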

Journal ArticleDOI
TL;DR: This paper gives an introduction to Gaussian processes on a fairly elementary level, with special emphasis on characteristics relevant in machine learning, and draws precise connections to other "kernel machines" popular in the community.
Abstract: Gaussian processes (GPs) are natural generalisations of multivariate Gaussian random variables to infinite (countably or continuous) index sets. GPs have been applied in a large number of fields to a diverse range of ends, and very many deep theoretical analyses of various properties are available. This paper gives an introduction to Gaussian processes on a fairly elementary level with special emphasis on characteristics relevant in machine learning. It draws explicit connections to branches such as spline smoothing models and support vector machines in which similar ideas have been investigated. Gaussian process models are routinely used to solve hard machine learning problems. They are attractive because of their flexible non-parametric nature and computational simplicity. Treated within a Bayesian framework, very powerful statistical methods can be implemented which offer valid estimates of uncertainties in our predictions and generic model selection procedures cast as nonlinear optimization problems. Their main drawback of heavy computational scaling has recently been alleviated by the introduction of generic sparse approximations [13, 78, 31]. The mathematical literature on GPs is large and often uses deep concepts which are not required to fully understand most machine learning applications. In this tutorial paper, we aim to present characteristics of GPs relevant to machine learning and to draw precise connections to other "kernel machines" popular in the community. Our focus is on a simple presentation, but references to more detailed sources are provided.

752 citations
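A minimal GP-regression example in the spirit of the tutorial, using the standard Cholesky-based posterior with a squared-exponential kernel (all names ours; see Rasmussen-Williams-style treatments for the canonical algorithm):

```python
import numpy as np

def rbf(a, b, ell=1.0, sf=1.0):
    """Squared-exponential kernel k(x, x') = sf^2 * exp(-(x - x')^2 / (2 ell^2))."""
    d = a[:, None] - b[None, :]
    return sf**2 * np.exp(-0.5 * (d / ell)**2)

def gp_posterior(x_train, y_train, x_test, noise=0.1):
    """Posterior mean and variance of GP regression with Gaussian noise."""
    K = rbf(x_train, x_train) + noise**2 * np.eye(len(x_train))
    Ks = rbf(x_train, x_test)
    L = np.linalg.cholesky(K)                      # K = L L^T
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = rbf(x_test, x_test).diagonal() - np.sum(v**2, axis=0)
    return mean, var

x = np.linspace(0.0, 5.0, 8)
y = np.sin(x) + 0.1 * np.random.default_rng(2).standard_normal(8)
xs = np.linspace(0.0, 5.0, 50)
mu, var = gp_posterior(x, y, xs)
print(mu[:3], var[:3])
```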