Author

I.M. Ryzhik

Bio: I.M. Ryzhik is an academic researcher. The author has contributed to research on tables of integrals and elementary functions, has an h-index of 12, and has co-authored 19 publications receiving 51,170 citations.

Papers
Book
01 Jan 1943
TL;DR: A reference table of integrals, series, and products, organized into chapters covering elementary and special functions, indefinite and definite integrals, inequalities, matrices, and integral transforms.
Abstract: 0 Introduction; 1 Elementary Functions; 2 Indefinite Integrals of Elementary Functions; 3 Definite Integrals of Elementary Functions; 4 Combinations Involving Trigonometric and Hyperbolic Functions and Powers; 5 Indefinite Integrals of Special Functions; 6 Definite Integrals of Special Functions; 7 Associated Legendre Functions; 8 Special Functions; 9 Hypergeometric Functions; 10 Vector Field Theory; 11 Algebraic Inequalities; 12 Integral Inequalities; 13 Matrices and Related Results; 14 Determinants; 15 Norms; 16 Ordinary Differential Equations; 17 Fourier, Laplace, and Mellin Transforms; 18 The z-transform

27,354 citations
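The chapter list above is the table of contents of a reference table of integrals. As a minimal sketch of the kind of closed-form entry such a table collects, a classic definite integral of an elementary function can be verified symbolically; sympy is used here purely as an illustrative tool and is not part of the original work:

```python
import sympy as sp

x = sp.symbols('x', positive=True)

# A classic table entry: the Gaussian integral over the half line,
# integral from 0 to infinity of exp(-x^2) dx = sqrt(pi)/2
result = sp.integrate(sp.exp(-x**2), (x, 0, sp.oo))
print(result)  # sqrt(pi)/2
```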


Cited by
Book
23 Nov 2005
TL;DR: The treatment is comprehensive and self-contained, targeted at researchers and students in machine learning and applied statistics, and deals with the supervised learning problem for both regression and classification.
Abstract: A comprehensive and self-contained introduction to Gaussian processes, which provide a principled, practical, probabilistic approach to learning in kernel machines. Gaussian processes (GPs) provide a principled, practical, probabilistic approach to learning in kernel machines. GPs have received increased attention in the machine-learning community over the past decade, and this book provides a long-needed systematic and unified treatment of theoretical and practical aspects of GPs in machine learning. The treatment is comprehensive and self-contained, targeted at researchers and students in machine learning and applied statistics. The book deals with the supervised-learning problem for both regression and classification, and includes detailed algorithms. A wide variety of covariance (kernel) functions are presented and their properties discussed. Model selection is discussed both from a Bayesian and a classical perspective. Many connections to other well-known techniques from machine learning and statistics are discussed, including support-vector machines, neural networks, splines, regularization networks, relevance vector machines and others. Theoretical issues including learning curves and the PAC-Bayesian framework are treated, and several approximation methods for learning with large datasets are discussed. The book contains illustrative examples and exercises, and code and datasets are available on the Web. Appendixes provide mathematical background and a discussion of Gaussian Markov processes.

11,357 citations
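The supervised-learning setup the book describes reduces, for regression, to linear algebra on a covariance (kernel) matrix. The following is a minimal sketch with an assumed squared-exponential kernel and made-up 1-D data, not code from the book:

```python
import numpy as np

def rbf(a, b, ell=1.0):
    # squared-exponential (RBF) covariance between 1-D point sets a and b
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

X = np.linspace(0.0, 5.0, 20)   # training inputs
y = np.sin(X)                   # noise-free training targets
Xs = np.array([2.5])            # test input

K = rbf(X, X) + 1e-6 * np.eye(len(X))   # small jitter for numerical stability
ks = rbf(X, Xs)

# GP posterior mean and variance at the test input
alpha = np.linalg.solve(K, y)
mean = ks.T @ alpha
var = rbf(Xs, Xs) - ks.T @ np.linalg.solve(K, ks)
```

With enough training points the posterior mean interpolates the underlying function closely; the posterior variance shrinks near the data, which is the probabilistic bookkeeping that distinguishes GPs from plain kernel regression.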

Journal ArticleDOI
TL;DR: In this article, an exponential ARCH model is proposed to study volatility changes and the risk premium on the CRSP Value-Weighted Market Index from 1962 to 1987, which is an improvement over the widely-used GARCH model.
Abstract: This paper introduces an ARCH model (exponential ARCH) that (1) allows correlation between returns and volatility innovations (an important feature of stock market volatility changes), (2) eliminates the need for inequality constraints on parameters, and (3) allows for a straightforward interpretation of the "persistence" of shocks to volatility. In the above respects, it is an improvement over the widely-used GARCH model. The model is applied to study volatility changes and the risk premium on the CRSP Value-Weighted Market Index from 1962 to 1987. Copyright 1991 by The Econometric Society.

10,019 citations
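Exponential ARCH models the logarithm of the conditional variance, so positivity holds for any parameter values (point 2 above), while separate terms in z and |z| allow the return/volatility correlation of point 1. A minimal simulation sketch with illustrative, not estimated, coefficients:

```python
import numpy as np

rng = np.random.default_rng(0)
omega, beta, alpha, gamma = -0.1, 0.95, 0.2, -0.1  # illustrative values
E_abs_z = np.sqrt(2.0 / np.pi)   # E|z| for a standard normal shock

log_var = 0.0
returns = []
for _ in range(1000):
    z = rng.standard_normal()
    returns.append(np.exp(0.5 * log_var) * z)   # r_t = sigma_t * z_t
    # log-variance recursion: no positivity constraints are needed, and the
    # gamma * z term lets negative shocks move volatility differently
    log_var = omega + beta * log_var + alpha * (abs(z) - E_abs_z) + gamma * z

returns = np.asarray(returns)
```

Because the recursion is linear in log-variance, beta directly measures the persistence of volatility shocks (point 3 above).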

Journal ArticleDOI
TL;DR: In this article, a system which utilizes a minimum mean square error (MMSE) estimator is proposed and then compared with other widely used systems which are based on Wiener filtering and the "spectral subtraction" algorithm.
Abstract: This paper focuses on the class of speech enhancement systems which capitalize on the major importance of the short-time spectral amplitude (STSA) of the speech signal in its perception. A system which utilizes a minimum mean-square error (MMSE) STSA estimator is proposed and then compared with other widely used systems which are based on Wiener filtering and the "spectral subtraction" algorithm. In this paper we derive the MMSE STSA estimator, based on modeling speech and noise spectral components as statistically independent Gaussian random variables. We analyze the performance of the proposed STSA estimator and compare it with a STSA estimator derived from the Wiener estimator. We also examine the MMSE STSA estimator under uncertainty of signal presence in the noisy observations. In constructing the enhanced signal, the MMSE STSA estimator is combined with the complex exponential of the noisy phase. It is shown here that the latter is the MMSE estimator of the complex exponential of the original phase, which does not affect the STSA estimation. The proposed approach results in a significant reduction of the noise, and provides enhanced speech with colorless residual noise. The complexity of the proposed algorithm is approximately that of other systems in the discussed class.

3,905 citations
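The construction described above — estimate the short-time spectral amplitude, then reattach the noisy phase — can be sketched per frequency bin. For brevity this sketch substitutes the simpler Wiener gain xi / (1 + xi) for the full MMSE STSA gain (which involves Bessel functions); the a priori SNR xi would in practice be estimated from the noise statistics:

```python
import numpy as np

def enhance_bins(noisy_fft, xi):
    # noisy_fft: complex STFT bins of one frame; xi: a priori SNR per bin.
    # The Wiener gain here stands in for the full MMSE STSA gain.
    gain = xi / (1.0 + xi)
    amplitude = gain * np.abs(noisy_fft)         # enhanced spectral amplitude
    return amplitude * np.exp(1j * np.angle(noisy_fft))   # keep the noisy phase
```

At xi = 1 each bin's amplitude is halved while its phase is untouched, reflecting the paper's result that the complex exponential of the noisy phase is itself the MMSE estimate and need not be modified.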

Journal ArticleDOI
TL;DR: In this article, the basic ideas and the mathematical foundation of the partition of unity finite element method (PUFEM) are presented and a detailed and illustrative analysis is given for a one-dimensional model problem.

3,276 citations
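The defining property behind the partition of unity method is that the basis functions sum to one everywhere on the domain, so local enrichment functions can be patched together conformally. A minimal 1-D check using standard piecewise-linear hat functions, the simplest example of a partition of unity (this toy setup is assumed, not taken from the paper):

```python
import numpy as np

nodes = np.linspace(0.0, 1.0, 6)    # 1-D mesh nodes
x = np.linspace(0.0, 1.0, 101)      # evaluation points

# Hat function at node i: 1 at node i, 0 at every other node, linear between
phi = np.array([np.interp(x, nodes, np.eye(len(nodes))[i])
                for i in range(len(nodes))])

partition = phi.sum(axis=0)         # equals 1 everywhere on [0, 1]
```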

Journal ArticleDOI
TL;DR: In this article, logit models for discrete contingent-valuation responses are formulated to be consistent with the hypothesis of utility maximization; measures of compensating and equivalent surplus are derived from the fitted models, and two distinct types of welfare measures are introduced and estimated from Bishop and Heberlein's data.
Abstract: Since the work of Bishop and Heberlein, a number of contingent valuation experiments have appeared involving discrete responses which are analyzed by logit or similar techniques. This paper addresses the issues of how the logit models should be formulated to be consistent with the hypothesis of utility maximization and how measures of compensating and equivalent surplus should be derived from the fitted models. Two distinct types of welfare measures are introduced and then estimated from Bishop and Heberlein's data.

2,829 citations
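Under a linear utility-difference specification, the probability of a "yes" response to a posted price follows a logit in the price, and the price at which that probability equals one half (the median willingness to pay) is a natural welfare measure. A sketch with illustrative coefficients, not the paper's estimates:

```python
import numpy as np

a, b = 2.0, 0.5   # illustrative logit intercept and price coefficient

def p_yes(price):
    # Pr(respondent accepts the posted price) under a linear utility difference
    return 1.0 / (1.0 + np.exp(-(a - b * price)))

median_wtp = a / b   # price at which p_yes equals 0.5
```

Tying the logit coefficients to a utility difference in this way is what makes the fitted model, and hence the surplus measures computed from it, consistent with utility maximization.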