Author

James L. Beck

Bio: James L. Beck is an academic researcher from the California Institute of Technology. The author has contributed to research in topics including Bayesian inference and Bayesian probability. The author has an h-index of 63 and has co-authored 329 publications receiving 15,992 citations. Previous affiliations of James L. Beck include the University of Notre Dame and Kyoto University.


Papers
Journal ArticleDOI
TL;DR: In this article, a subset simulation approach is proposed to compute the small failure probabilities encountered in reliability analysis of engineering systems; by introducing intermediate failure events, a small failure probability can be expressed as a product of larger conditional failure probabilities.

1,890 citations
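
A minimal Python sketch of the subset simulation idea summarized above, assuming a standard normal input space and failure defined as g(X) >= b_fail. The function interface, the level cap, the conditional-level probability p0 = 0.1, and the proposal spread are illustrative choices, not the paper's implementation.

```python
import numpy as np

def subset_simulation(g, dim, b_fail, n_samples=1000, p0=0.1, sigma=1.0, seed=0):
    """Estimate P[g(X) >= b_fail] for X ~ N(0, I) as a product of larger
    conditional probabilities P(F_1) * prod_i P(F_{i+1} | F_i), where the
    intermediate events F_i = {g(X) >= b_i} use adaptively chosen thresholds."""
    rng = np.random.default_rng(seed)
    n_seeds = int(p0 * n_samples)          # seeds carried to the next level
    chain_len = n_samples // n_seeds       # samples generated per seed

    x = rng.standard_normal((n_samples, dim))   # level 0: direct Monte Carlo
    gx = np.array([g(xi) for xi in x])
    p_f = 1.0

    for _ in range(50):                         # cap on the number of levels
        order = np.argsort(gx)[::-1]
        b_i = gx[order[n_seeds - 1]]            # (1 - p0) quantile of g
        if b_i >= b_fail:                       # final level reached
            return float(p_f * np.mean(gx >= b_fail))
        p_f *= p0

        seeds, seeds_g = x[order[:n_seeds]], gx[order[:n_seeds]]
        new_x, new_g = [], []
        for s, sg in zip(seeds, seeds_g):       # one Markov chain per seed,
            cur, cur_g = s.copy(), sg           # conditioned on F_i
            for _ in range(chain_len):
                cand = cur.copy()
                for j in range(dim):            # component-wise Gaussian proposal
                    xi = cur[j] + sigma * rng.standard_normal()
                    # accept component w.p. min(1, phi(xi) / phi(cur[j]))
                    if rng.random() < np.exp(0.5 * (cur[j] ** 2 - xi ** 2)):
                        cand[j] = xi
                cand_g = g(cand)
                if cand_g >= b_i:               # reject moves that leave F_i
                    cur, cur_g = cand, cand_g
                new_x.append(cur.copy())
                new_g.append(cur_g)
        x, gx = np.array(new_x), np.array(new_g)
    return p_f
```

As a rough usage check under these assumptions, subset_simulation(lambda x: x.sum(), dim=10, b_fail=20.0) targets a probability of about 1.3e-10, far below what 1,000 direct Monte Carlo samples could resolve.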

Journal ArticleDOI
TL;DR: The problem of updating a structural model and its associated uncertainties by utilizing dynamic response data is addressed using a Bayesian statistical framework that can handle the inherent ill-conditioning and possible nonuniqueness in model updating applications.
Abstract: The problem of updating a structural model and its associated uncertainties by utilizing dynamic response data is addressed using a Bayesian statistical framework that can handle the inherent ill-conditioning and possible nonuniqueness in model updating applications. The objective is not only to give more accurate response predictions for prescribed dynamic loadings but also to provide a quantitative assessment of this accuracy. In the methodology presented, the updated (optimal) models within a chosen class of structural models are the most probable based on the structural data if all the models are equally plausible a priori. The prediction accuracy of the optimal structural models is given by also updating probability models for the prediction error. The precision of the parameter estimates of the optimal structural models, as well as the precision of the optimal prediction-error parameters, can be examined. A large-sample asymptotic expression is given for the updated predictive probability distribution of the uncertain structural response, which is a weighted average of the prediction probability distributions for each optimal model. This predictive distribution can be used to make model predictions despite possible nonuniqueness in the optimal models.

1,235 citations
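
As a toy illustration of the updating scheme described above, the sketch below fits a hypothetical one-parameter response model r(theta, t) = sin(theta * t) to noisy data under a Gaussian prediction-error model; the MAP (most probable) parameters and a BFGS-based covariance estimate stand in for the paper's optimal models and their precision. The model, prior, and data are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical response model: r(theta, t) = sin(theta * t); measured data
# d_t = r(theta, t) + e_t with Gaussian prediction error e_t ~ N(0, sigma^2).
def neg_log_posterior(params, t, d):
    theta, log_sigma = params
    sigma = np.exp(log_sigma)
    resid = d - np.sin(theta * t)
    neg_log_like = 0.5 * np.sum(resid ** 2) / sigma ** 2 + t.size * np.log(sigma)
    neg_log_prior = 0.5 * (theta - 1.0) ** 2 / 10.0 ** 2   # broad Gaussian prior
    return neg_log_like + neg_log_prior

rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 200)
d = np.sin(1.3 * t) + 0.05 * rng.standard_normal(t.size)   # synthetic "test data"

# Most probable (MAP) model within the chosen model class, given the data.
res = minimize(neg_log_posterior, x0=np.array([1.2, np.log(0.1)]), args=(t, d))
theta_map, sigma_map = res.x[0], np.exp(res.x[1])

# Large-sample (Laplace-style) view: the inverse Hessian of the negative log
# posterior at the MAP point approximates the posterior covariance of
# (theta, log sigma). Here BFGS's built-in approximation is used as a stand-in.
post_cov = res.hess_inv
```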

Journal ArticleDOI
TL;DR: In this article, an adaptive Markov chain Monte Carlo simulation approach is proposed to evaluate the desired integral, based on the Metropolis-Hastings algorithm and a concept similar to simulated annealing.
Abstract: In a full Bayesian probabilistic framework for "robust" system identification, structural response predictions and performance reliability are updated using structural test data D by considering the predictions of a whole set of possible structural models that are weighted by their updated probability. This involves integrating h(θ)p(θ|D) over the whole parameter space, where θ is a parameter vector defining each model within the set of possible models of the structure, h(θ) is a model prediction of a response quantity of interest, and p(θ|D) is the updated probability density for θ, which provides a measure of how plausible each model is given the data D. The evaluation of this integral is difficult because the dimension of the parameter space is usually too large for direct numerical integration and p(θ|D) is concentrated in a small region in the parameter space and only known up to a scaling constant. An adaptive Markov chain Monte Carlo simulation approach is proposed to evaluate the desired integral that is based on the Metropolis-Hastings algorithm and a concept similar to simulated annealing. By carrying out a series of Markov chain simulations with limiting stationary distributions equal to a sequence of intermediate probability densities that converge on p(θ|D), the region of concentration of p(θ|D) is gradually portrayed. The Markov chain samples are used to estimate the desired integral by statistical averaging. The method is illustrated using simulated dynamic test data to update the robust response variance and reliability of a moment-resisting frame for two cases: one where the model is only locally identifiable based on the data and the other where it is unidentifiable.

671 citations
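
To make the annealing-like idea concrete, here is a minimal sketch: Metropolis-Hastings chains are run against a sequence of intermediate densities proportional to p(theta) p(D|theta)^beta with beta increasing toward 1, and the final-level samples are averaged to estimate the integral of h(theta) p(theta|D). The adaptive construction of the proposal from earlier levels, which the paper also employs, is omitted; the interface and tuning constants are illustrative.

```python
import numpy as np

def tempered_mh_estimate(log_prior, log_like, h, theta0,
                         betas=(0.01, 0.1, 0.3, 0.6, 1.0),
                         n_steps=2000, step=0.5, seed=0):
    """Estimate E[h(theta) | D] by Metropolis-Hastings sampling of intermediate
    densities proportional to p(theta) * p(D|theta)**beta, which converge on the
    posterior p(theta|D) as beta -> 1 (an annealing-like sequence)."""
    rng = np.random.default_rng(seed)
    theta = np.atleast_1d(np.asarray(theta0, dtype=float))
    posterior_samples = []
    for level, beta in enumerate(betas):
        lp = log_prior(theta) + beta * log_like(theta)
        for _ in range(n_steps):
            prop = theta + step * rng.standard_normal(theta.shape)
            lp_prop = log_prior(prop) + beta * log_like(prop)
            if np.log(rng.random()) < lp_prop - lp:     # Metropolis accept/reject
                theta, lp = prop, lp_prop
            if level == len(betas) - 1:                 # keep final-level samples
                posterior_samples.append(theta.copy())
    samples = np.array(posterior_samples)
    return np.mean([h(s) for s in samples], axis=0), samples
```

For a quick check under these assumptions, a standard normal log_prior, a one-dimensional Gaussian log_like centred at 2.0, and h(theta) = theta return an estimate close to the posterior mean.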

Journal ArticleDOI
TL;DR: In this paper, a Bayesian probabilistic approach is presented for selecting the most plausible class of models for a structural or mechanical system within some specified set of model classes, based on system response data.
Abstract: A Bayesian probabilistic approach is presented for selecting the most plausible class of models for a structural or mechanical system within some specified set of model classes, based on system response data. The crux of the approach is to rank the classes of models based on their probabilities conditional on the response data which can be calculated based on Bayes’ theorem and an asymptotic expansion for the evidence for each model class. The approach provides a quantitative expression of a principle of model parsimony or of Ockham’s razor which in this context can be stated as "simpler models are to be preferred over unnecessarily complicated ones." Examples are presented to illustrate the method using a single-degree-of-freedom bilinear hysteretic system, a linear two-story frame, and a ten-story shear building, all of which are subjected to seismic excitation.

529 citations
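
The ranking described above follows directly from Bayes' theorem applied at the level of model classes. Writing it out, with the evidence approximated by its standard Laplace (asymptotic) form rather than quoted from the paper:

$$
P(M_j \mid D) = \frac{p(D \mid M_j)\, P(M_j)}{\sum_{i} p(D \mid M_i)\, P(M_i)},
\qquad
p(D \mid M_j) \approx p(D \mid \hat{\theta}_j, M_j)\, p(\hat{\theta}_j \mid M_j)\,
(2\pi)^{N_j/2}\, \bigl|\mathbf{H}_j(\hat{\theta}_j)\bigr|^{-1/2},
$$

where $\hat{\theta}_j$ is the most probable parameter vector of class $M_j$, $N_j$ its number of parameters, and $\mathbf{H}_j$ the Hessian of $-\ln\!\bigl[p(D \mid \theta, M_j)\, p(\theta \mid M_j)\bigr]$ at $\hat{\theta}_j$. The first factor rewards data fit, while the remaining "Ockham factor" penalizes classes that must tune many parameters tightly to achieve that fit, which is the quantitative model-parsimony principle the paper expresses.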

Journal ArticleDOI
TL;DR: This application of Bayes' Theorem automatically applies a quantitative Ockham's razor that penalizes the data‐fit of more complex model classes that extract more information from the data.
Abstract: Probability logic with Bayesian updating provides a rigorous framework to quantify modeling uncertainty and perform system identification. It uses probability as a multi-valued propositional logic for plausible reasoning where the probability of a model is a measure of its relative plausibility within a set of models. System identification is thus viewed as inference about plausible system models and not as a quixotic quest for the true model. Instead of using system data to estimate the model parameters, Bayes' Theorem is used to update the relative plausibility of each model in a model class, which is a set of input–output probability models for the system and a probability distribution over this set that expresses the initial plausibility of each model. Robust predictive analyses informed by the system data use the entire model class with the probabilistic predictions of each model being weighed by its posterior probability. Additional robustness to modeling uncertainty comes from combining the robust predictions of each model class in a set of candidates for the system, where each contribution is weighed by the posterior probability of the model class. This application of Bayes' Theorem automatically applies a quantitative Ockham's razor that penalizes the data-fit of more complex model classes that extract more information from the data. Robust analyses involve integrals over parameter spaces that usually must be evaluated numerically by Laplace's method of asymptotic approximation or by Markov Chain Monte Carlo methods. An illustrative application is given using synthetic data corresponding to a structural health monitoring benchmark structure.

497 citations
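
The posterior-weighted combination across candidate model classes described above can be written in a few lines. The sketch below takes each class's robust prediction and log evidence, however they were computed, and forms the combined prediction; function and argument names are illustrative, not from the paper.

```python
import numpy as np

def hyper_robust_prediction(class_predictions, log_evidences, log_prior_probs=None):
    """Combine the robust predictions of several candidate model classes,
    each weighted by its posterior probability P(M_j | D) from Bayes' theorem."""
    log_ev = np.asarray(log_evidences, dtype=float)
    log_pr = (np.zeros_like(log_ev) if log_prior_probs is None
              else np.asarray(log_prior_probs, dtype=float))
    log_post = log_ev + log_pr
    w = np.exp(log_post - log_post.max())   # numerically stable normalization
    w /= w.sum()                            # posterior probabilities of the classes
    return float(np.dot(w, np.asarray(class_predictions, dtype=float)))
```

For instance, hyper_robust_prediction([0.9, 1.4], [-120.3, -118.1]) weights the second class's prediction far more heavily because its log evidence is higher.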


Cited by
Christopher M. Bishop
01 Jan 2006
TL;DR: Probability distributions and linear models for regression and classification are presented, along with neural networks, kernel methods, graphical models, mixture models and EM, approximate inference, sampling methods, and combining models in the context of machine learning.
Abstract: Probability Distributions.- Linear Models for Regression.- Linear Models for Classification.- Neural Networks.- Kernel Methods.- Sparse Kernel Machines.- Graphical Models.- Mixture Models and EM.- Approximate Inference.- Sampling Methods.- Continuous Latent Variables.- Sequential Data.- Combining Models.

10,141 citations

Reference EntryDOI
31 Oct 2001
TL;DR: The American Society for Testing and Materials (ASTM), as mentioned in this paper, is an independent organization devoted to the development of standards for testing and materials.
Abstract: The American Society for Testing and Materials (ASTM) is an independent organization devoted to the development of standards.

3,792 citations

Journal ArticleDOI
TL;DR: In this article, a subset simulation approach is proposed to compute the small failure probabilities encountered in reliability analysis of engineering systems; by introducing intermediate failure events, a small failure probability can be expressed as a product of larger conditional failure probabilities.

1,890 citations

Journal ArticleDOI
TL;DR: In this paper, the authors provide a concise point of departure for researchers and practitioners alike wishing to assess the current state of the art in the control and monitoring of civil engineering structures, and provide a link between structural control and other fields of control theory.
Abstract: This tutorial/survey paper: (1) provides a concise point of departure for researchers and practitioners alike wishing to assess the current state of the art in the control and monitoring of civil engineering structures; and (2) provides a link between structural control and other fields of control theory, pointing out both differences and similarities, and points out where future research and application efforts are likely to prove fruitful. The paper consists of the following sections: section 1 is an introduction; section 2 deals with passive energy dissipation; section 3 deals with active control; section 4 deals with hybrid and semiactive control systems; section 5 discusses sensors for structural control; section 6 deals with smart material systems; section 7 deals with health monitoring and damage detection; and section 8 deals with research needs. An extensive list of references is provided in the references section.

1,883 citations

Journal ArticleDOI
TL;DR: In this article, the sources and characters of uncertainties in engineering modeling for risk and reliability analyses are discussed; uncertainties are generally categorized as epistemic if the modeler sees a possibility to reduce them by gathering more data or by refining models, and as aleatory otherwise.

1,835 citations