Author

Khachik Sargsyan

Bio: Khachik Sargsyan is an academic researcher from Sandia National Laboratories. The author has contributed to research in topics: Uncertainty quantification & Polynomial chaos. The author has an h-index of 20 and has co-authored 94 publications receiving 1,647 citations. Previous affiliations of Khachik Sargsyan include the University of Michigan & Oak Ridge National Laboratory.


Papers
Journal ArticleDOI
TL;DR: This article considers extinction times for a class of birth-death processes commonly found in applications, where a control parameter defines a threshold: below it, the population quickly becomes extinct; above it, the population persists for a long time.
Abstract: We consider extinction times for a class of birth-death processes commonly found in applications, where there is a control parameter which defines a threshold. Below the threshold, the population quickly becomes extinct; above, it persists for a long time. We give an exact expression for the mean time to extinction in the discrete case and its asymptotic expansion for large values of the population scale. We have results below the threshold, at the threshold, and above the threshold, and we observe that the Fokker--Planck approximation is valid only quite near the threshold. We compare our asymptotic results to exact numerical evaluations for the susceptible-infected-susceptible epidemic model, which is in the class that we treat. This is an interesting example of the delicate relationship between discrete and continuum treatments of the same problem.
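The exact mean-extinction-time expression for a birth-death chain absorbed at zero can be evaluated directly from the birth and death rates. Below is a minimal sketch assuming the standard SIS parameterisation (infection rate $\beta n(N-n)/N$, recovery rate $n$, threshold at $\beta = 1$); the rate functions, parameter values, and function names are illustrative, not taken from the paper:

```python
import numpy as np

def mean_extinction_time_from_one(birth, death, N):
    """Exact mean time to extinction starting from n = 1 for a birth-death
    chain on {0, ..., N} with absorbing state 0, via the standard formula
        tau_1 = sum_{k=1}^{N} (1/death(k)) * prod_{j=1}^{k-1} birth(j)/death(j).
    """
    tau, prefactor = 0.0, 1.0
    for k in range(1, N + 1):
        tau += prefactor / death(k)
        prefactor *= birth(k) / death(k)
    return tau

# SIS epidemic rates (assumed parameterisation; recovery rate set to 1 per
# individual, so beta plays the role of the threshold parameter R0)
N, beta = 50, 2.0
birth = lambda n: beta * n * (N - n) / N
death = lambda n: float(n)

tau1 = mean_extinction_time_from_one(birth, death, N)
print(tau1)
```

A quick cross-check is to solve the linear system for mean hitting times of state 0 and compare it with the closed form; the two agree to machine precision.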

180 citations

Journal ArticleDOI
TL;DR: It is shown that for a given computational budget, basis selection produces a more accurate PCE than would be obtained if the basis were fixed a priori.

178 citations

Posted Content
TL;DR: An exact expression is given for the mean time to extinction in the discrete case, along with its asymptotic expansion for large values of the population scale, and it is observed that the Fokker--Planck approximation is valid only quite near the threshold.
Abstract: We consider extinction times for a class of birth-death processes commonly found in applications, where there is a control parameter which determines whether the population quickly becomes extinct, or rather persists for a long time. We give an exact expression for the mean time to extinction in the discrete case and its asymptotic expansion for large values of the population scale. We have results below the threshold, at the threshold, and above the threshold (where there is a quasi-stationary state and the extinction time is very long). We show that the Fokker--Planck approximation is valid only quite near the threshold. We compare our analytical results to numerical simulations for the SIS epidemic model, which is in the class that we treat. This is an interesting example of the delicate relationship between discrete and continuum treatments of the same problem.

164 citations

Journal ArticleDOI
TL;DR: This work implements a PC-based surrogate model construction that “learns” and retains only the most relevant basis terms of the PC expansion, using sparse Bayesian learning, which dramatically reduces the dimensionality of the problem, making it more amenable to further analysis such as sensitivity or calibration studies.
Abstract: Uncertainty quantification in complex physical models is often challenged by the computational expense of these models. One often needs to operate under the assumption of sparsely available model simulations. This issue is even more critical when models include a large number of input parameters. This “curse of dimensionality,” in particular, leads to a prohibitively large number of basis terms in spectral methods for uncertainty quantification, such as polynomial chaos (PC) methods. In this work, we implement a PC-based surrogate model construction that “learns” and retains only the most relevant basis terms of the PC expansion, using sparse Bayesian learning. This dramatically reduces the dimensionality of the problem, making it more amenable to further analysis such as sensitivity or calibration studies. The model of interest is the community land model with about 80 input parameters, which also exhibits nonsmooth input-output behavior. We enhance the methodology with a clustering and classification procedure that leads to a piecewise-PC surrogate, thereby dealing with the nonlinearity. We then obtain global sensitivity information for five outputs with respect to all input parameters using less than 10,000 model simulations—a very small number for an 80-dimensional input parameter space.
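The retain-only-relevant-terms idea can be illustrated on a toy problem. This sketch builds a total-degree Legendre PC basis in two dimensions and, as a crude stand-in for the paper's sparse Bayesian learning step, fits by least squares and keeps only coefficients above a threshold; the test function and all names are hypothetical:

```python
import numpy as np
from itertools import product
from numpy.polynomial.legendre import legval

def legendre_basis(X, degree):
    """Tensorized Legendre basis of total degree <= degree on [-1, 1]^d.
    Returns the design matrix and the list of multi-indices."""
    d = X.shape[1]
    multis = [m for m in product(range(degree + 1), repeat=d)
              if sum(m) <= degree]
    cols = []
    for m in multis:
        col = np.ones(len(X))
        for j, mj in enumerate(m):
            c = np.zeros(mj + 1)
            c[mj] = 1.0                   # selects Legendre polynomial P_mj
            col *= legval(X[:, j], c)
        cols.append(col)
    return np.column_stack(cols), multis

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(400, 2))
y = 1.0 + 2.0 * X[:, 0] + 0.5 * X[:, 0] * X[:, 1]   # sparse "true" model

Phi, multis = legendre_basis(X, degree=4)
coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)
# Stand-in sparsification: retain only the basis terms that matter
kept = {m: c for m, c in zip(multis, coef) if abs(c) > 1e-8}
print(kept)
```

Of the 15 candidate basis terms, only the three that actually appear in the true model survive, which is the dimensionality reduction the abstract describes.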

143 citations

Journal ArticleDOI
TL;DR: A novel statistical calibration framework for physical models, relying on probabilistic embedding of model discrepancy error within the model, is introduced, showing that the calibrated model predictions fit the data and that uncertainty in these predictions is consistent in a mean-square sense with the discrepancy from the detailed model data.
Abstract: We introduce a novel statistical calibration framework for physical models, relying on probabilistic embedding of model discrepancy error within the model. For clarity of illustration, we take the measurement errors out of consideration, calibrating a chemical model of interest with respect to a more detailed model, considered as “truth” for the present purpose. We employ Bayesian statistical methods for such model-to-model calibration and demonstrate their capabilities on simple synthetic models, leading to a well-defined parameter estimation problem that employs approximate Bayesian computation. The method is then demonstrated on two case studies for calibration of kinetic rate parameters for methane air chemistry, where ignition time information from a detailed elementary-step kinetic model is used to estimate rate coefficients of a simple chemical mechanism. We show that the calibrated model predictions fit the data and that uncertainty in these predictions is consistent in a mean-square sense with the discrepancy from the detailed model data.
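The model-to-model calibration step can be illustrated with the simplest variant of approximate Bayesian computation, rejection ABC: draw parameters from a prior, simulate the simple model, and keep draws whose output lands close to the detailed-model data. Both models and all parameter choices below are hypothetical stand-ins, not the chemical models of the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in "detailed model", treated as truth for calibration purposes
def detailed_model(x):
    return np.exp(-1.3 * x) + 0.05 * np.sin(5 * x)

# Simple model to calibrate: one rate-like parameter k
def simple_model(k, x):
    return np.exp(-k * x)

x = np.linspace(0.0, 2.0, 20)
data = detailed_model(x)

# Rejection ABC: sample from the prior, keep the draws whose simulated
# output is within tolerance eps of the detailed-model data
prior = rng.uniform(0.5, 3.0, size=20000)
dist = np.array([np.linalg.norm(simple_model(k, x) - data) for k in prior])
eps = np.quantile(dist, 0.01)          # adaptive tolerance: closest 1%
posterior = prior[dist <= eps]

print(posterior.mean(), posterior.std())
```

The retained samples form an approximate posterior for k; its spread reflects the discrepancy between the simple and detailed models, which is the consistency the abstract checks in a mean-square sense.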

103 citations


Cited by
Journal ArticleDOI


08 Dec 2001-BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one: it seemed an odd beast at first, an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

01 May 1993
TL;DR: Comparing the results to the fastest reported vectorized Cray Y-MP and C90 algorithm shows that the current generation of parallel machines is competitive with conventional vector supercomputers even for small problems.
Abstract: Three parallel algorithms for classical molecular dynamics are presented. The first assigns each processor a fixed subset of atoms; the second assigns each a fixed subset of inter-atomic forces to compute; the third assigns each a fixed spatial region. The algorithms are suitable for molecular dynamics models which can be difficult to parallelize efficiently—those with short-range forces where the neighbors of each atom change rapidly. They can be implemented on any distributed-memory parallel machine which allows for message-passing of data between independently executing processors. The algorithms are tested on a standard Lennard-Jones benchmark problem for system sizes ranging from 500 to 100,000,000 atoms on several parallel supercomputers: the nCUBE 2, Intel iPSC/860 and Paragon, and Cray T3D. Comparing the results to the fastest reported vectorized Cray Y-MP and C90 algorithm shows that the current generation of parallel machines is competitive with conventional vector supercomputers even for small problems. For large problems, the spatial algorithm achieves parallel efficiencies of 90% and an 1840-node Intel Paragon performs up to 165 times faster than a single Cray C90 processor. Trade-offs between the three algorithms and guidelines for adapting them to more complex molecular dynamics simulations are also discussed.
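The short-range-force setting described above hinges on finding each atom's rapidly changing neighbours cheaply. The cell-list data structure at the heart of the spatial approach can be sketched serially: bin atoms into cells at least one cutoff wide, so each atom's neighbours lie in its own or adjacent cells. This is a minimal illustration (periodic cubic box, minimum-image convention; all names are illustrative, not the paper's implementation):

```python
import numpy as np
from collections import defaultdict

def build_cell_list(pos, box, rcut):
    """Bin atoms into cubic cells of side >= rcut so that all neighbours
    within rcut of an atom lie in its own cell or the 26 adjacent ones."""
    ncell = max(1, int(box // rcut))
    side = box / ncell
    cells = defaultdict(list)
    for i, p in enumerate(pos):
        cells[tuple((p // side).astype(int) % ncell)].append(i)
    return cells, ncell

def neighbours_within(pos, box, rcut):
    """All pairs (i, j), i < j, with minimum-image distance < rcut."""
    cells, ncell = build_cell_list(pos, box, rcut)
    pairs = set()
    for (cx, cy, cz), members in cells.items():
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dz in (-1, 0, 1):
                    other = ((cx + dx) % ncell, (cy + dy) % ncell,
                             (cz + dz) % ncell)
                    for i in members:
                        for j in cells.get(other, ()):
                            if i < j:
                                d = pos[i] - pos[j]
                                d -= box * np.round(d / box)  # minimum image
                                if np.dot(d, d) < rcut ** 2:
                                    pairs.add((i, j))
    return pairs

rng = np.random.default_rng(2)
box, rcut = 10.0, 3.0
pos = rng.uniform(0.0, box, size=(40, 3))
pairs = neighbours_within(pos, box, rcut)
print(len(pairs))
```

The cell search visits only O(1) candidates per atom instead of all N, which is what makes the per-processor work scale in the spatial-decomposition algorithm.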

29,323 citations

01 Jan 2016
Modern Applied Statistics with S

5,249 citations

Journal ArticleDOI
TL;DR: Van Kampen provides an extensive graduate-level introduction which is clear, cautious, interesting and readable, and the book could be expected to become an essential part of the library of every physical scientist concerned with problems involving fluctuations and stochastic processes.
Abstract: N G van Kampen 1981 Amsterdam: North-Holland xiv + 419 pp price Dfl 180 This is a book which, at a lower price, could be expected to become an essential part of the library of every physical scientist concerned with problems involving fluctuations and stochastic processes, as well as those who just enjoy a beautifully written book. It provides an extensive graduate-level introduction which is clear, cautious, interesting and readable.

3,647 citations

01 Jan 2007
TL;DR: Two algorithms are presented for generating the Gaussian quadrature rule defined by the weight function when: a) the three-term recurrence relation is known for the orthogonal polynomials generated by $\omega$(t), and b) the moments of the weight function are known or can be calculated.
Abstract: Most numerical integration techniques consist of approximating the integrand by a polynomial in a region or regions and then integrating the polynomial exactly. Often a complicated integrand can be factored into a non-negative ''weight'' function and another function better approximated by a polynomial, thus $\int_{a}^{b} g(t)dt = \int_{a}^{b} \omega (t)f(t)dt \approx \sum_{i=1}^{N} w_i f(t_i)$. Hopefully, the quadrature rule ${\{w_j, t_j\}}_{j=1}^{N}$ corresponding to the weight function $\omega$(t) is available in tabulated form, but more likely it is not. We present here two algorithms for generating the Gaussian quadrature rule defined by the weight function when: a) the three term recurrence relation is known for the orthogonal polynomials generated by $\omega$(t), and b) the moments of the weight function are known or can be calculated.
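Case (a), generating the rule from the three-term recurrence, is the Golub–Welsch construction: the recurrence coefficients form a symmetric tridiagonal Jacobi matrix whose eigenvalues are the nodes $t_j$ and whose eigenvectors' first components give the weights $w_j$. A sketch using Gauss–Legendre (weight 1 on [-1, 1]) as the example; the function name is illustrative:

```python
import numpy as np

def golub_welsch(alpha, beta, mu0):
    """Gauss nodes and weights from three-term recurrence coefficients:
    build the symmetric Jacobi matrix with diagonal alpha and off-diagonal
    sqrt(beta); its eigenvalues are the nodes, and mu0 times the squared
    first components of the eigenvectors are the weights."""
    J = np.diag(alpha) + np.diag(np.sqrt(beta), 1) + np.diag(np.sqrt(beta), -1)
    nodes, V = np.linalg.eigh(J)
    weights = mu0 * V[0, :] ** 2
    return nodes, weights

# Gauss-Legendre recurrence coefficients:
#   alpha_k = 0,  beta_k = k^2 / (4k^2 - 1),  mu0 = int_{-1}^{1} 1 dt = 2
n = 5
k = np.arange(1, n)
alpha = np.zeros(n)
beta = k ** 2 / (4.0 * k ** 2 - 1.0)
nodes, weights = golub_welsch(alpha, beta, mu0=2.0)
print(nodes)
print(weights)
```

An n-point rule built this way integrates polynomials up to degree 2n-1 exactly, e.g. the 5-point rule reproduces the integral of t^8 over [-1, 1], which is 2/9.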

1,007 citations