
Showing papers on "Bayes' theorem published in 2022"


Journal ArticleDOI
TL;DR: In this article, the authors study how Bayes factors misbehave under different conditions and use simulation-based calibration to test the accuracy and bias of Bayes factor estimates using bridge sampling.
Abstract: Inferences about hypotheses are ubiquitous in the cognitive sciences. Bayes factors provide one general way to compare different hypotheses by their compatibility with the observed data. Those quantifications can then also be used to choose between hypotheses. While Bayes factors provide an immediate approach to hypothesis testing, they are highly sensitive to details of the data/model assumptions and it's unclear whether the details of the computational implementation (such as bridge sampling) are unbiased for complex analyses. Here, we study how Bayes factors misbehave under different conditions. This includes a study of errors in the estimation of Bayes factors; the first-ever use of simulation-based calibration to test the accuracy and bias of Bayes factor estimates using bridge sampling; a study of the stability of Bayes factors against different MCMC draws and sampling variation in the data; and a look at the variability of decisions based on Bayes factors using a utility function. We outline a Bayes factor workflow that researchers can use to study whether Bayes factors are robust for their individual analysis. (PsycInfo Database Record (c) 2022 APA, all rights reserved).

26 citations
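As a rough illustration of the kind of instability discussed above, the sketch below computes an analytic Bayes factor for a simple binomial test (point null θ = 0.5 versus a uniform prior under H1) across many replicated datasets, showing how much BF01 can vary purely from sampling variation in the data. This is a toy analytic case, not the authors' bridge-sampling workflow; the sample size, true proportion, and prior are arbitrary choices for the example.

```python
import numpy as np
from scipy.special import betaln

def log_bf01_binomial(k, n):
    """Analytic Bayes factor BF01 for H0: theta = 0.5 vs H1: theta ~ Beta(1, 1)."""
    log_m0 = n * np.log(0.5)                          # marginal likelihood under H0 (binomial coefficient cancels)
    log_m1 = betaln(k + 1, n - k + 1) - betaln(1, 1)  # beta-binomial marginal likelihood under H1
    return log_m0 - log_m1

rng = np.random.default_rng(1)
n, theta_true = 100, 0.55
bf01 = np.exp([log_bf01_binomial(rng.binomial(n, theta_true), n) for _ in range(1000)])
print("BF01 over replicated datasets: median %.2f, 5%%-95%% range %.2f to %.2f"
      % (np.median(bf01), np.quantile(bf01, 0.05), np.quantile(bf01, 0.95)))
```

Even in this closed-form setting, the spread of BF01 across replications is wide, which is the sampling-variation part of the stability question the abstract raises.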


Journal ArticleDOI
TL;DR: In this article, the authors explored the potential of using multi-trait GS models for predicting seven different end-use quality traits using cross-validation, independent prediction, and across-location predictions in a wheat breeding program.
Abstract: Soft white wheat is a wheat class used in foreign and domestic markets to make various end products requiring specific quality attributes. Due to associated cost, time, and amount of seed needed, phenotyping for the end-use quality trait is delayed until later generations. Previously, we explored the potential of using genomic selection (GS) for selecting superior genotypes earlier in the breeding program. Breeders typically measure multiple traits across various locations, and it opens up the avenue for exploring multi-trait–based GS models. This study’s main objective was to explore the potential of using multi-trait GS models for predicting seven different end-use quality traits using cross-validation, independent prediction, and across-location predictions in a wheat breeding program. The population used consisted of 666 soft white wheat genotypes planted for 5 years at two locations in Washington, United States. We optimized and compared the performances of four uni-trait– and multi-trait–based GS models, namely, Bayes B, genomic best linear unbiased prediction (GBLUP), multilayer perceptron (MLP), and random forests. The prediction accuracies for multi-trait GS models were 5.5 and 7.9% superior to uni-trait models for the within-environment and across-location predictions. Multi-trait machine and deep learning models performed superior to GBLUP and Bayes B for across-location predictions, but their advantages diminished when the genotype by environment component was included in the model. The highest improvement in prediction accuracy, that is, 35% was obtained for flour protein content with the multi-trait MLP model. This study showed the potential of using multi-trait–based GS models to enhance prediction accuracy by using information from previously phenotyped traits. It would assist in speeding up the breeding cycle time in a cost-friendly manner.

22 citations
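For intuition about uni-trait versus multi-trait prediction, here is a minimal sketch using random forests (one of the models compared above) on synthetic marker data; the marker matrix, trait correlation, and model settings are invented for illustration and are not the wheat dataset or the tuned models of the study.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a genomic-selection dataset: 300 lines x 500 markers, 2 correlated traits.
rng = np.random.default_rng(0)
X = rng.binomial(2, 0.3, size=(300, 500)).astype(float)   # marker dosages
beta = rng.normal(size=500) * (rng.random(500) < 0.05)    # sparse marker effects
g = X @ beta
Y = np.column_stack([g + rng.normal(0, 1.0, 300),         # trait 1
                     0.8 * g + rng.normal(0, 1.0, 300)])  # correlated trait 2
X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, random_state=0)

# Uni-trait: one forest per trait. Multi-trait: a single forest predicting both traits jointly.
uni = [RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, Y_tr[:, j]) for j in range(2)]
multi = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, Y_tr)

for j in range(2):
    r_uni = np.corrcoef(uni[j].predict(X_te), Y_te[:, j])[0, 1]
    r_multi = np.corrcoef(multi.predict(X_te)[:, j], Y_te[:, j])[0, 1]
    print(f"trait {j}: uni-trait r = {r_uni:.2f}, multi-trait r = {r_multi:.2f}")
```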


Book ChapterDOI
28 Feb 2022
TL;DR: This chapter reprints Judea Pearl's "Reverend Bayes on Inference Engines: A Distributed Hierarchical Approach" in the collection Probabilistic and Causal Inference: The Works of Judea Pearl.
Abstract: Chapter: Reverend Bayes on Inference Engines: A Distributed Hierarchical Approach. Author: Judea Pearl. In Probabilistic and Causal Inference: The Works of Judea Pearl, February 2022, pages 129–138. https://doi.org/10.1145/3501714.3501727. Published online: 4 March 2022.

14 citations


Journal ArticleDOI
TL;DR: In this article, the authors proposed model predictive control (MPC) with a risk-parity objective and transaction control, and derived a successive convex algorithm to efficiently solve the risk-parity MPC.

14 citations


MonographDOI
31 Jan 2022
TL;DR: This article contains book reviews of Bayes Rules! by Alicia Johnson, Miles Ott, and Mine Dogucu, and of Amy's Luck by David Hand.
Abstract: This article contains book reviews of Bayes Rules! by Alicia Johnson, Miles Ott, and Mine Dogucu, and Amy’s Luck by David Hand.

13 citations


Journal ArticleDOI
TL;DR: In this paper, Li et al. proposed a framework of active learning and semi-supervised learning for lithology identification based on improved naive Bayes (ALSLINB) for mining logging data.
Abstract: Lithology identification is the basis of energy exploration and reservoir evaluation, and intelligent, accurate identification of underground lithology is a key issue. The establishment of a machine learning lithology identification model using logging data has been a hot research direction in recent years. However, the logging data have a high degree of non-linearity and multi-response characteristics, and there are insufficient numbers of labeled samples in the training data set. These issues ultimately affect the modeling accuracy and may cause over-fitting. Therefore, a framework of active learning and semi-supervised learning for lithology identification based on improved naive Bayes (ALSLINB) is proposed. The contributions are fourfold: (i) The Gaussian mixture model (GMM) based on the EM algorithm is used to estimate the probability density of the log data, which fits the probability distribution of the nonlinear multi-response log data. (ii) A framework combining active learning (AL) and semi-supervised learning is proposed for the expansion of labeled samples in the training data set. (iii) The application of pseudo-labeling detection technology can effectively improve the authenticity of pseudo-label samples. (iv) Different from the general deterministic lithology identification method, the result of the ALSLINB algorithm corresponds to a probability score, which provides an auxiliary basis for the prediction result. Finally, the ALSLINB algorithm is applied to two different data sets for a large number of experiments and compared with related baseline methods to verify its effectiveness and generalization ability. The results prove that the ALSLINB algorithm can complete the lithology recognition task well and has high accuracy and robustness, which provides a new direction for intelligent lithology identification.

12 citations


Journal ArticleDOI
01 Mar 2022-PLOS ONE
TL;DR: It is proposed that even in tumultuous scenarios with limited information like the early months of the COVID-19 pandemic, straightforward approaches like this one with discrete, attainable inputs can improve ABMs to better support stakeholders.
Abstract: Agent-based models (ABMs) have become a common tool for estimating demand for hospital beds during the COVID-19 pandemic. A key parameter in these ABMs is the probability of hospitalization for agents with COVID-19. Many published COVID-19 ABMs use either single point or age-specific estimates of the probability of hospitalization for agents with COVID-19, omitting key factors: comorbidities and testing status (i.e., received vs. did not receive COVID-19 test). These omissions can inhibit interpretability, particularly by stakeholders seeking to use an ABM for transparent decision-making. We introduce a straightforward yet novel application of Bayes’ theorem with inputs from aggregated hospital data to better incorporate these factors in an ABM. We update input parameters for a North Carolina COVID-19 ABM using this approach, demonstrate sensitivity to input data selections, and highlight the enhanced interpretability and accuracy of the method and the predictions. We propose that even in tumultuous scenarios with limited information like the early months of the COVID-19 pandemic, straightforward approaches like this one with discrete, attainable inputs can improve ABMs to better support stakeholders.

12 citations
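The kind of update described above can be written down directly from Bayes' theorem. The snippet below is a hedged illustration with made-up numbers (none are taken from the paper or from North Carolina data): given a baseline hospitalization probability and aggregated hospital data on how often hospitalized cases have a comorbidity, it derives comorbidity-specific hospitalization probabilities for agents.

```python
# Hedged illustration of the Bayes' theorem update described above.
# All numbers are invented for the example, not taken from the paper or NC data.
p_hosp = 0.05                  # baseline P(hospitalization | infected)
p_comorb_given_hosp = 0.60     # share of hospitalized cases with a comorbidity (from aggregated hospital data)
p_comorb = 0.25                # prevalence of the comorbidity among infected agents

# Bayes' theorem: P(hosp | comorbidity) = P(comorbidity | hosp) * P(hosp) / P(comorbidity)
p_hosp_given_comorb = p_comorb_given_hosp * p_hosp / p_comorb
print(f"P(hospitalization | comorbidity)    = {p_hosp_given_comorb:.3f}")      # 0.120

# Same theorem for agents without the comorbidity; the two conditionals average back to p_hosp.
p_hosp_given_no_comorb = (1 - p_comorb_given_hosp) * p_hosp / (1 - p_comorb)
print(f"P(hospitalization | no comorbidity) = {p_hosp_given_no_comorb:.3f}")   # 0.027
```

The same mechanics extend to testing status or any other factor for which aggregated conditional counts are available, which is what makes the inputs "discrete and attainable" in the sense used above.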


Journal ArticleDOI
TL;DR: In this article, a generalized framework for optimal sensor placement design for structural health monitoring (SHM) applications using Bayes risk as the objective function is presented and applied to an example problem concerning the state detection of the boundary of a beam modeled by springs.

12 citations


Posted ContentDOI
31 Jan 2022
TL;DR: The authors revisited JAB01 and introduced a piecewise transformation that clarifies the connection to the frequentist two-sided p-value, and derived simple and accurate approximate Bayes factors for the t-test, the binomial test, the comparison of two proportions, and the correlation test.
Abstract: In 1936, Sir Harold Jeffreys proposed an approximate objective Bayes factor that quantifies the degree to which the point-null hypothesis H0 outpredicts the alternative hypothesis H1. This approximate Bayes factor (henceforth JAB01) depends only on sample size and on how many standard errors the maximum likelihood estimate is away from the point under test. We revisit JAB01 and introduce a piecewise transformation that clarifies the connection to the frequentist two-sided p-value. Specifically, if p ≤ .10 then JAB01 ≈ 3p√n; if .10 < p ≤ .50 then JAB01 ≈ √(pn); and if p > .50 then JAB01 ≈ p^(1/4)√n. These transformation rules present p-value practitioners with a straightforward opportunity to obtain Bayesian benefits such as the ability to monitor evidence as data accumulate without reaching a foregone conclusion. Using the JAB01 framework we derive simple and accurate approximate Bayes factors for the t-test, the binomial test, the comparison of two proportions, and the correlation test.

11 citations
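The piecewise transformation quoted in the abstract is easy to apply directly. The sketch below implements those three rules exactly as stated (it is a reading of the abstract, not the authors' code) and shows how the same p-value maps to different JAB01 values as the sample size grows.

```python
import numpy as np

def jab01_from_p(p, n):
    """Piecewise approximation to Jeffreys's objective Bayes factor JAB01 from a
    two-sided p-value and sample size n, following the rules quoted in the abstract."""
    if p <= 0.10:
        return 3 * p * np.sqrt(n)
    elif p <= 0.50:
        return np.sqrt(p * n)
    else:
        return p ** 0.25 * np.sqrt(n)

# Example: the same p-value carries different evidence for H0 at different sample sizes.
for n in (50, 500, 5000):
    print(f"n = {n:5d}: p = .04 -> JAB01 ~ {jab01_from_p(0.04, n):.2f}")
```

Because JAB01 quantifies support for the point null, a fixed p-value translates into increasingly strong evidence for H0 as n grows, which is exactly the monitoring behaviour the abstract highlights.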


Posted ContentDOI
22 Feb 2022
TL;DR: In this article, the authors describe JUNE, a detailed model of COVID-19 transmission with high spatial and demographic resolution developed as part of the RAMP initiative, and employ the uncertainty quantification approaches of Bayes linear emulation and history matching to mimic JUNE.
Abstract: We analyze JUNE: a detailed model of COVID-19 transmission with high spatial and demographic resolution, developed as part of the RAMP initiative. JUNE requires substantial computational resources to evaluate, making model calibration and general uncertainty analysis extremely challenging. We describe and employ the uncertainty quantification approaches of Bayes linear emulation and history matching to mimic JUNE and to perform a global parameter search, hence identifying regions of parameter space that produce acceptable matches to observed data, and demonstrating the capability of such methods. This article is part of the theme issue ‘Technical challenges of modelling real-life epidemics and examples of overcoming these’.
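To make the emulation-plus-history-matching idea concrete, here is a minimal one-dimensional sketch: a cheap stand-in "simulator", a Bayes linear emulator built from a few runs, and an implausibility measure used to rule out parameter values. The simulator, prior covariance, observed value, and the conventional cutoff of 3 are all illustrative assumptions; this is not the JUNE calibration itself.

```python
import numpy as np

# Toy "expensive" simulator (stand-in for JUNE) with one input and one output.
def simulator(x):
    return np.sin(3 * x) + 0.5 * x

# A handful of training runs, treated as if each were costly to evaluate.
x_design = np.linspace(0, 2, 8)
d = simulator(x_design)

# Bayes linear emulator with a constant prior mean and squared-exponential prior covariance.
prior_mean, sigma2, length = 0.0, 1.0, 0.4
def cov(a, b):
    return sigma2 * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length**2)

V_D_inv = np.linalg.inv(cov(x_design, x_design) + 1e-8 * np.eye(len(x_design)))
def emulate(x):
    c = cov(x, x_design)
    mean = prior_mean + c @ V_D_inv @ (d - prior_mean)          # adjusted expectation
    var = sigma2 - np.einsum("ij,jk,ik->i", c, V_D_inv, c)      # adjusted variance
    return mean, np.maximum(var, 0.0)

# History matching: rule out inputs whose implausibility exceeds the usual cutoff of 3.
z, var_obs, var_md = 1.2, 0.01, 0.01     # observed value, observation and model-discrepancy variances
x_grid = np.linspace(0, 2, 400)
mean, var = emulate(x_grid)
implausibility = np.abs(z - mean) / np.sqrt(var + var_obs + var_md)
not_ruled_out = x_grid[implausibility < 3]
print(f"non-implausible inputs span roughly [{not_ruled_out.min():.2f}, {not_ruled_out.max():.2f}]")
```

In practice this loop is repeated in waves, with new simulator runs placed in the surviving region, which is how the global parameter search described above proceeds.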

Journal ArticleDOI
TL;DR: The study predicts cardiovascular disease with better accuracy by applying ML techniques such as Decision Tree and Naïve Bayes together with known risk factors.
Abstract: Machine Learning is an application of Artificial Intelligence in which the method begins with observations on data. In the medical field, it is very important to make a correct decision in a short time while treating a patient. Here, ML techniques play a major role in predicting disease by considering the vast amount of data produced by the healthcare field. In India, heart disease is the major cause of death. According to the WHO, stroke can be predicted and prevented through timely action. In this paper, cardiovascular disease is predicted with better accuracy by applying ML techniques such as Decision Tree and Naïve Bayes together with known risk factors. The dataset considered is the Heart Failure Dataset, which consists of 13 attributes. To analyze the performance of the techniques, the collected data are first pre-processed, followed by feature selection and reduction.
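A minimal sketch of the workflow described above (pre-processing, feature selection, then Decision Tree and Naïve Bayes classifiers) might look like the following; the file name heart.csv, the 'target' column, and the choice of k = 8 selected features are hypothetical placeholders rather than details from the paper.

```python
import pandas as pd
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

# Assumes a CSV with 13 feature columns and a binary 'target' column (file name is hypothetical).
df = pd.read_csv("heart.csv")
X, y = df.drop(columns="target"), df["target"]

models = {
    "naive_bayes": make_pipeline(StandardScaler(), SelectKBest(f_classif, k=8), GaussianNB()),
    "decision_tree": make_pipeline(SelectKBest(f_classif, k=8), DecisionTreeClassifier(max_depth=4, random_state=0)),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=10, scoring="accuracy").mean()
    print(f"{name}: mean 10-fold accuracy = {acc:.3f}")
```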

Journal ArticleDOI
TL;DR: In this article, a systematic review of machine learning methods to predict chronic diseases is presented; the authors found that machine learning can predict the occurrence, progression, and determinants of individual chronic diseases in many contexts.
Abstract: We aimed to review the literature regarding the use of machine learning to predict chronic diseases. This was a systematic review. The searches included five databases. We included studies that evaluated the prediction of chronic diseases using machine learning models and reported the area under the receiver operating characteristic curve values. The Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis scale was used to assess the quality of studies. In total, 42 studies were selected. The best reported area under the receiver operating characteristic curve value was 1, whereas the worst was 0.74. K-nearest neighbors, Naive Bayes, deep neural networks, and random forest were the machine learning models most frequently used for achieving the best performance. We found that machine learning can predict the occurrence, progression, and determinants of individual chronic diseases in many contexts. The findings are original and relevant to improve clinical decisions and the organization of health care facilities.

Journal ArticleDOI
TL;DR: This paper proposes a framework for real-time classification and evaluation of driving behavior safety levels, validated by a case study of driving simulation experiments in which the optimal number of clusters was found to be three.
Abstract: The road traffic safety situation is severe worldwide and exploring driving behavior is a research hotspot since it is the main factor causing road accidents. However, there are few studies investigating how to evaluate real-time traffic safety of driving behavior and the number of driving behavior safety levels has not yet been thoroughly explored. This paper aims to propose a framework of real-time driving behavior safety level classification and evaluation, which was validated by a case study of driving simulation experiments. The proposed methodology focuses on determining the optimal aggregation time interval, finding the optimal number of safety levels for driving behavior, classifying the safety levels, and evaluating the driving safety levels in real time. An improved cross-validation mean square error model based on driver behavior vectors was proposed to determine the optimal aggregation time interval, which was found to be 1 s. Three clustering techniques were applied, i.e., k-means clustering, hierarchical clustering and model-based clustering. The optimal number of clusters was found to be three. Support vector machines, decision trees and naïve Bayes classifiers were then developed as classification models. The accuracy of the combination of k-means clustering and decision trees proved to be the best with three clusters.
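A compressed sketch of the cluster-then-classify pipeline described above is given below: k-means with three clusters assigns safety levels to aggregated behaviour vectors, and a decision tree is then trained to reproduce those levels for real-time use. The synthetic features and all settings are invented for illustration and are not the simulator data or tuned models of the study.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for 1 s aggregated driver-behaviour vectors (e.g. speed, acceleration, lane offset).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc, 0.6, size=(400, 3)) for loc in ([0, 0, 0], [2, 1, 0], [4, 3, 1])])
X = StandardScaler().fit_transform(X)

# Step 1: unsupervised safety levels via k-means with k = 3 (the optimal number reported above).
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Step 2: supervised real-time classifier trained on the cluster labels.
tree = DecisionTreeClassifier(max_depth=5, random_state=0)
acc = cross_val_score(tree, X, labels, cv=5).mean()
print(f"decision tree reproduces the 3 safety levels with CV accuracy {acc:.3f}")
```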

Journal ArticleDOI
TL;DR: In this article, the authors consider a class of canonical neural networks comprising rate coding models, where neural activity and plasticity minimise a common cost function and plasticity is modulated with a certain delay, and show that such neural networks implicitly perform active inference and learning to minimise the risk associated with future outcomes.
Abstract: This work considers a class of canonical neural networks comprising rate coding models, wherein neural activity and plasticity minimise a common cost function, and plasticity is modulated with a certain delay. We show that such neural networks implicitly perform active inference and learning to minimise the risk associated with future outcomes. Mathematical analyses demonstrate that this biological optimisation can be cast as maximisation of model evidence, or equivalently minimisation of variational free energy, under the well-known form of a partially observed Markov decision process model. This equivalence indicates that the delayed modulation of Hebbian plasticity, accompanied by adaptation of firing thresholds, is a sufficient neuronal substrate to attain Bayes optimal inference and control. We corroborated this proposition using numerical analyses of maze tasks. This theory offers a universal characterisation of canonical neural networks in terms of Bayesian belief updating and provides insight into the neuronal mechanisms underlying planning and adaptive behavioural control.

Journal ArticleDOI
TL;DR: In this article, a Fisher linear discriminant analysis classification algorithm fused with naïve Bayes (B-FLDA) was proposed for the ERP-BCI to simultaneously recognize the subjects' intentions, working states, and idle states.

Journal ArticleDOI
TL;DR: LANTERN, presented in this paper, distills genotype-phenotype landscape (GPL) measurements into a low-dimensional feature space that represents the fundamental biological mechanisms of the system while also enabling straightforward, explainable predictions.
Abstract: Large-scale measurements linking genetic background to biological function have driven a need for models that can incorporate these data for reliable predictions and insight into the underlying biophysical system. Recent modeling efforts, however, prioritize predictive accuracy at the expense of model interpretability. Here, we present LANTERN (landscape interpretable nonparametric model, https://github.com/usnistgov/lantern), a hierarchical Bayesian model that distills genotype-phenotype landscape (GPL) measurements into a low-dimensional feature space that represents the fundamental biological mechanisms of the system while also enabling straightforward, explainable predictions. Across a benchmark of large-scale datasets, LANTERN equals or outperforms all alternative approaches, including deep neural networks. LANTERN furthermore extracts useful insights of the landscape, including its inherent dimensionality, a latent space of additive mutational effects, and metrics of landscape structure. LANTERN facilitates straightforward discovery of fundamental mechanisms in GPLs, while also reliably extrapolating to unexplored regions of genotypic space.

Journal ArticleDOI
TL;DR: It is found that three adaptive Bayesian model averaging methods performed best across all the statistical tasks and that two of these were also among the most computationally efficient.
Abstract: Significance: Choosing a statistical model and accounting for uncertainty about this choice are important parts of the scientific process and are required for common statistical tasks such as parameter estimation, interval estimation, statistical inference, point prediction, and interval prediction. A canonical example is the choice of variables in a linear regression model. Many ways of doing this have been proposed, including Bayesian and penalized regression methods, and it is not clear which are best. We compare 21 popular methods via an extensive simulation study based on a wide range of real datasets. We found that three adaptive Bayesian model averaging methods performed best across all the statistical tasks and that two of these were also among the most computationally efficient.

Journal ArticleDOI
TL;DR: The authors argue that Tendeiro and Kiers are overly pessimistic, and that several of their "issues" with NHBT may in fact be conceived as pronounced advantages, and illustrate their arguments with simple concrete examples.
Abstract: Tendeiro and Kiers (2019) provide a detailed and scholarly critique of Null Hypothesis Bayesian Testing (NHBT) and its central component, the Bayes factor, which allows researchers to update knowledge and quantify statistical evidence. Tendeiro and Kiers conclude that NHBT constitutes an improvement over frequentist p-values, but primarily elaborate on a list of 11 "issues" of NHBT. We believe that several issues identified by Tendeiro and Kiers are of central importance for elucidating the complementary roles of hypothesis testing versus parameter estimation and for appreciating the virtue of statistical thinking over conducting statistical rituals. But although we agree with many of their thoughtful recommendations, we believe that Tendeiro and Kiers are overly pessimistic, and that several of their "issues" with NHBT may in fact be conceived as pronounced advantages. We illustrate our arguments with simple, concrete examples and end with a critical discussion of one of the recommendations by Tendeiro and Kiers, which is that "estimation of the full posterior distribution offers a more complete picture" than a Bayes factor hypothesis test. (PsycInfo Database Record (c) 2022 APA, all rights reserved).

Journal ArticleDOI
TL;DR: In this article, the authors proposed a shift from disease PRS to PGx PRS approaches by simultaneously modeling both the prognostic and predictive effects, and further made this shift possible by developing a series of PRS-PGx methods, including a novel Bayesian regression approach (PRS-PGx-Bayes).
Abstract: Polygenic risk scores (PRS) have been successfully developed for the prediction of human diseases and complex traits in the past years. For drug response prediction in randomized clinical trials, a common practice is to apply PRS built from a disease genome-wide association study (GWAS) directly to a corresponding pharmacogenomics (PGx) setting. Here, we show that such an approach relies on stringent assumptions about the prognostic and predictive effects of the selected genetic variants. We propose a shift from disease PRS to PGx PRS approaches by simultaneously modeling both the prognostic and predictive effects and further make this shift possible by developing a series of PRS-PGx methods, including a novel Bayesian regression approach (PRS-PGx-Bayes). Simulation studies show that PRS-PGx methods generally outperform the disease PRS methods and PRS-PGx-Bayes is superior to all other PRS-PGx methods. We further apply the PRS-PGx methods to PGx GWAS data from a large cardiovascular randomized clinical trial (IMPROVE-IT) to predict treatment related LDL cholesterol reduction. The results demonstrate substantial improvement of PRS-PGx-Bayes in both prediction accuracy and the capability of capturing the treatment-specific predictive effects when compared with the disease PRS approaches.

Journal ArticleDOI
TL;DR: In this paper, a high accident risk prediction model is developed to analyze traffic accident data and identify priority intersections for improvement, and it can also be used as a reference for future intersection design and environmental improvements.
Abstract: In this study, a high accident risk prediction model is developed to analyze traffic accident data and identify priority intersections for improvement. A database of the traffic accidents was organized and analyzed, and an intersection accident risk prediction model based on different machine learning methods was created to estimate the possible high accident risk locations for traffic management departments to use in planning countermeasures to reduce accident risk. Using Bayes' theorem to identify environmental variables at intersections that affect accident risk levels, this study found that road width, speed limit and roadside markings are the significant risk factors for traffic accidents. Meanwhile, Naïve Bayes, Decision tree C4.5, Bayesian Network, Multilayer perceptron (MLP), Deep Neural Networks (DNN), Deep Belief Network (DBN) and Convolutional Neural Network (CNN) were used to develop an accident risk prediction model. This model can also identify the key factors that affect the occurrence of high-risk intersections, and provide traffic management departments with a better basis for decision-making for intersection improvement. The same environmental characteristics as those of high-risk intersections can be used as model inputs to estimate the degree of risk that may arise in the future, which can help prevent future traffic accidents. Moreover, the model can also be used as a reference for future intersection design and environmental improvements.

Posted ContentDOI
22 Mar 2022-bioRxiv
TL;DR: This work presents new approaches, based on reworking Felsenstein’s algorithm, for likelihood-based phylogenetic analysis of epidemiological genomic datasets at unprecedented scales, and exploits near-certainty regarding ancestral genomes, and the similarities between closely related and densely sampled genomes, to greatly reduce computational demands for memory and time.
Abstract: Phylogenetics plays a crucial role in the interpretation of genomic data [1]. Phylogenetic analyses of SARS-CoV-2 genomes have allowed the detailed study of the virus's origins [2], of its international [3,4] and local [4–9] spread, and of the emergence [10] and reproductive success [11] of new variants, among many applications. These analyses have been enabled by the unparalleled volumes of genome sequence data generated and employed to study and help contain the pandemic [12]. However, preferred model-based phylogenetic approaches including maximum likelihood and Bayesian methods, mostly based on Felsenstein's 'pruning' algorithm [13,14], cannot scale to the size of the datasets from the current pandemic [4,15], hampering our understanding of the virus's evolution and transmission [16]. We present new approaches, based on reworking Felsenstein's algorithm, for likelihood-based phylogenetic analysis of epidemiological genomic datasets at unprecedented scales. We exploit near-certainty regarding ancestral genomes, and the similarities between closely related and densely sampled genomes, to greatly reduce computational demands for memory and time. Combined with new methods for searching amongst candidate evolutionary trees, this results in our MAPLE ('MAximum Parsimonious Likelihood Estimation') software giving better results than popular approaches such as FastTree 2 [17], IQ-TREE 2 [18], RAxML-NG [19] and UShER [15]. Our approach therefore allows complex and accurate probabilistic phylogenetic analyses of millions of microbial genomes, extending the reach of genomic epidemiology. Future epidemiological datasets are likely to be even larger than those currently associated with COVID-19, and other disciplines such as metagenomics and biodiversity science are also generating huge numbers of genome sequences [20–22]. Our methods will permit continued use of preferred likelihood-based phylogenetic analyses.

Journal ArticleDOI
TL;DR: In this paper, the problem of estimating unknown parameters of a two-parameter distribution with bathtub shape is studied under the assumption that data are type I progressive hybrid censored; the maximum likelihood estimators are derived and the observed Fisher information matrix is obtained.
Abstract: In this paper we study the problem of estimating unknown parameters of a two-parameter distribution with bathtub shape under the assumption that data are type I progressive hybrid censored. We derive maximum likelihood estimators and then obtain the observed Fisher information matrix. Bayes estimators are also obtained under the squared error loss function and highest posterior density intervals are constructed as well. We perform a simulation study to compare the proposed methods and analyze a real data set for illustration purposes. Finally, we establish optimal plans with respect to cost constraints and provide comments based on a numerical study.

Journal ArticleDOI
TL;DR: In this paper, a Bayesian approach is developed to construct the confidence interval for the ratio of CVs of two normal distributions and is compared with two existing classical approaches: the generalised confidence interval (GCI) and the method of variance estimates recovery (MOVER).
Abstract: The coefficient of variation (CV) is a useful statistical tool for measuring the relative variability between multiple populations, while the ratio of CVs can be used to compare the dispersion. In statistics, the Bayesian approach is fundamentally different from the classical approach. For the Bayesian approach, the parameter is a quantity whose variation is described by a probability distribution. The probability distribution is called the prior distribution, which is based on the experimenter's belief. The prior distribution is updated with sample information. This updating is done with the use of Bayes' rule. For the classical approach, the parameter is an unknown but fixed quantity, and inference about it is based on the observed values in the sample. Herein, we develop a Bayesian approach to construct the confidence interval for the ratio of CVs of two normal distributions. Moreover, the efficacy of the Bayesian approach is compared with two existing classical approaches: the generalised confidence interval (GCI) and the method of variance estimates recovery (MOVER) approaches. A Monte Carlo simulation was used to compute the coverage probability (CP) and average length (AL) of the three confidence intervals. The results of the simulation study indicate that the Bayesian approach performed better in terms of the CP and AL. Finally, the Bayesian and two classical approaches were applied to analyse real data to illustrate their efficacy. In this study, the application of these approaches to classical civil engineering topics is targeted. Two real datasets are used in the present study: the compressive strength data for the investigated mixes at 7 and 28 days, and the PM2.5 air quality data of two stations in Chiang Mai province, Thailand. The Bayesian confidence intervals are better than the other confidence intervals for the ratio of CVs of normal distributions. DOI: 10.28991/CEJ-SP2021-07-010
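One simple way to obtain a Bayesian interval for the ratio of CVs is by Monte Carlo sampling from the posterior of each group's mean and variance. The sketch below uses the standard noninformative prior p(mu, sigma^2) proportional to 1/sigma^2 and made-up data; the paper's prior and data differ, so this is only an illustration of the mechanics.

```python
import numpy as np

def posterior_cv_draws(x, n_draws, rng):
    """Posterior draws of the coefficient of variation sigma/mu for normal data,
    under the standard noninformative prior p(mu, sigma^2) proportional to 1/sigma^2."""
    n, xbar, s2 = len(x), np.mean(x), np.var(x, ddof=1)
    sigma2 = (n - 1) * s2 / rng.chisquare(n - 1, size=n_draws)   # sigma^2 | data
    mu = rng.normal(xbar, np.sqrt(sigma2 / n))                   # mu | sigma^2, data
    return np.sqrt(sigma2) / mu

rng = np.random.default_rng(0)
# Two illustrative samples (e.g. compressive strength at 7 and 28 days); the values are made up.
x1 = rng.normal(30, 4, size=25)
x2 = rng.normal(42, 5, size=25)

ratio = posterior_cv_draws(x1, 100_000, rng) / posterior_cv_draws(x2, 100_000, rng)
lo, hi = np.percentile(ratio, [2.5, 97.5])
print(f"95% credible interval for CV1/CV2: ({lo:.3f}, {hi:.3f})")
```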

Journal ArticleDOI
TL;DR: In this paper, stochastic variational inference (SVI) was used to estimate the pixel-wise uncertainty of the sound-speed reconstruction via a common approximation that is already implicit in other types of iterative reconstruction.
Abstract: Bayesian methods are a popular research direction for inverse problems. There are a variety of techniques available to solve Bayes’ equation, each with their own strengths and limitations. Here, we discuss stochastic variational inference (SVI), which solves Bayes’ equation using gradient-based methods. This is important for applications which are time-limited (e.g. medical tomography) or where solving the forward problem is expensive (e.g. adjoint methods). To evaluate the use of SVI in both these contexts, we apply it to ultrasound tomography of the brain using full-waveform inversion (FWI). FWI is a computationally expensive adjoint method for solving the ultrasound tomography inverse problem, and we demonstrate that SVI can be used to find a no-cost estimate of the pixel-wise variance of the sound-speed distribution using a mean-field Gaussian approximation. In other words, we show experimentally that it is possible to estimate the pixel-wise uncertainty of the sound-speed reconstruction using SVI and a common approximation which is already implicit in other types of iterative reconstruction. Uncertainty estimates have a variety of uses in adjoint methods and tomography. As an illustrative example, we focus on the use of uncertainty for image quality assessment. This application is not limiting; our variance estimator has effectively no computational cost and we expect that it will have applications in fields such as non-destructive testing or aircraft component design where uncertainties may not be routinely estimated.
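As a toy analogue of the mean-field approach described above, the sketch below fits a mean-field Gaussian to the posterior of a small linear-Gaussian inverse problem by maximizing a Monte Carlo estimate of the ELBO (with fixed base samples, so an off-the-shelf optimizer can be used) and compares the variational standard deviations to the exact posterior. The forward model and all settings are invented; this is not the FWI implementation of the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Toy linear-Gaussian inverse problem: y = A x + noise (exact posterior known for comparison).
rng = np.random.default_rng(0)
n_obs, n_param = 20, 2
A = rng.normal(size=(n_obs, n_param))
x_true = np.array([1.0, -0.5])
sigma_obs, prior_std = 0.1, 1.0
y = A @ x_true + sigma_obs * rng.normal(size=n_obs)

def log_joint(x):
    resid = y - A @ x
    return (-0.5 * np.sum(resid**2) / sigma_obs**2
            - 0.5 * np.sum(x**2) / prior_std**2)

# Mean-field Gaussian q(x) = N(m, diag(exp(2 * log_s))); fixed base samples make the
# Monte Carlo ELBO deterministic so a standard optimizer can be applied.
eps = rng.normal(size=(64, n_param))

def neg_elbo(params):
    m, log_s = params[:n_param], params[n_param:]
    xs = m + eps * np.exp(log_s)                 # reparameterization trick
    expected_log_joint = np.mean([log_joint(x) for x in xs])
    entropy = np.sum(log_s) + 0.5 * n_param * np.log(2 * np.pi * np.e)
    return -(expected_log_joint + entropy)

res = minimize(neg_elbo, np.zeros(2 * n_param), method="L-BFGS-B")
m_hat, s_hat = res.x[:n_param], np.exp(res.x[n_param:])

# Exact posterior for the conjugate linear-Gaussian case.
prec = A.T @ A / sigma_obs**2 + np.eye(n_param) / prior_std**2
cov = np.linalg.inv(prec)
print("variational mean:", m_hat, " exact:", cov @ (A.T @ y) / sigma_obs**2)
print("variational std :", s_hat, " exact:", np.sqrt(np.diag(cov)))
```

The mean-field standard deviations are the analogue of the "no-cost" pixel-wise variance estimate described above, with the usual caveat that a diagonal approximation can understate correlated uncertainty.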

Journal ArticleDOI
TL;DR: In this article, the authors argue that a Bayesian approach can support science learners to make sense of uncertainty in science education, and they provide a brief primer on Bayes' theorem and then describe three ways to make Bayesian reasoning practical in K-12 science education contexts.
Abstract: Uncertainty is ubiquitous in science, but scientific knowledge is often represented to the public and in educational contexts as certain and immutable. This contrast can foster distrust when scientific knowledge develops in a way that people perceive as reversals, as we have observed during the ongoing COVID-19 pandemic. Drawing on research in statistics, child development, and several studies in science education, we argue that a Bayesian approach can support science learners to make sense of uncertainty. We provide a brief primer on Bayes' theorem and then describe three ways to make Bayesian reasoning practical in K-12 science education contexts. These are a) using principles informed by Bayes' theorem that relate to the nature of knowing and knowledge, b) interacting with a web-based application (or widget, Confidence Updater) that makes the calculations needed to apply Bayes' theorem more practical, and c) adopting strategies for supporting even young learners to engage in Bayesian reasoning. We conclude with directions for future research and sum up how viewing science and scientific knowledge from a Bayesian perspective can build trust in science.

Journal ArticleDOI
TL;DR: In this paper , the classical and Bayesian estimation procedures for stress-strength reliability parameter (SSRP) have been considered based on two independent adaptive Type II progressive hybrid censored samples from inverted exponentiated Rayleigh distributions with different shape parameters.
Abstract: In this paper, the classical and Bayesian estimation procedures for the stress–strength reliability parameter (SSRP) have been considered based on two independent adaptive Type II progressive hybrid censored samples from inverted exponentiated Rayleigh distributions with different shape parameters. The maximum likelihood estimate of SSRP and its asymptotic confidence interval are attained. The Bayes estimate of SSRP is obtained under two loss functions using Lindley's approximation and the Metropolis–Hastings algorithm. The highest posterior density credible interval is successively constructed. The behavior of the suggested estimators is assessed using a simulation study. Finally, the droplet splashing data under two surface wettabilities are considered to illustrate the application of the stress–strength reliability model to engineering data.
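The stress-strength quantity R = P(X < Y) is easy to explore in a simplified conjugate setting. The sketch below assumes exponential stress and strength with gamma priors on the rates and complete (uncensored) data, which is not the inverted exponentiated Rayleigh model or the adaptive progressive hybrid censoring of the paper; it only illustrates how posterior draws translate into a Bayes estimate and credible interval for R.

```python
import numpy as np

rng = np.random.default_rng(0)
# Simplified analogue: exponential stress X and strength Y with conjugate Gamma priors on the rates.
x = rng.exponential(scale=1 / 1.5, size=40)    # stress observations (true rate 1.5)
y = rng.exponential(scale=1 / 0.8, size=40)    # strength observations (true rate 0.8)

a0, b0 = 0.01, 0.01                            # vague Gamma(shape, rate) prior
lam_x = rng.gamma(a0 + len(x), 1 / (b0 + x.sum()), size=50_000)   # posterior draws of the stress rate
lam_y = rng.gamma(a0 + len(y), 1 / (b0 + y.sum()), size=50_000)   # posterior draws of the strength rate

R = lam_x / (lam_x + lam_y)                    # P(X < Y) for exponential rates, evaluated per draw
lo, hi = np.percentile(R, [2.5, 97.5])
print(f"Posterior mean of R = P(stress < strength): {R.mean():.3f}, 95% CrI ({lo:.3f}, {hi:.3f})")
```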

Journal ArticleDOI
TL;DR: In this article, robust empirical Bayes confidence intervals (EBCIs) are constructed for the normal means problem, and the empirical application considers the effects of U.S. neighborhoods on intergenerational mobility.
Abstract: We construct robust empirical Bayes confidence intervals (EBCIs) in a normal means problem. The intervals are centered at the usual linear empirical Bayes estimator, but use a critical value accounting for shrinkage. Parametric EBCIs that assume a normal distribution for the means (Morris, 1983b) may substantially undercover when this assumption is violated. In contrast, our EBCIs control coverage regardless of the means distribution, while remaining close in length to the parametric EBCIs when the means are indeed Gaussian. If the means are treated as fixed, our EBCIs have an average coverage guarantee: the coverage probability is at least $1 - \alpha$ on average across the $n$ EBCIs for each of the means. Our empirical application considers the effects of U.S. neighborhoods on intergenerational mobility.
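For orientation, here is a sketch of the parametric (Morris-style) baseline that the robust intervals improve upon: linear empirical Bayes shrinkage in a normal-means problem with the usual normal-prior critical value. The robust critical value of the paper, which guarantees coverage without the normality assumption on the means, is not implemented here; the data are simulated and all settings are arbitrary.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
# Toy normal-means data: n noisy estimates Y_i of underlying effects theta_i with known SEs.
n = 200
theta = rng.normal(0.0, 0.5, n)                 # true effects (here genuinely Gaussian)
se = rng.uniform(0.3, 0.8, n)                   # known standard errors
Y = theta + se * rng.normal(size=n)

# Method-of-moments estimates of the prior mean and variance of the effects.
mu_hat = np.average(Y, weights=1 / se**2)
tau2_hat = max(0.0, np.mean((Y - mu_hat) ** 2 - se**2))

# Linear empirical Bayes shrinkage and the parametric 95% EBCIs.
w = tau2_hat / (tau2_hat + se**2)
theta_hat = mu_hat + w * (Y - mu_hat)
half_width = norm.ppf(0.975) * np.sqrt(w * se**2)    # posterior sd under the normal prior
covered = np.abs(theta - theta_hat) <= half_width
print(f"average coverage of parametric EBCIs: {covered.mean():.2f}")
print(f"average EBCI half-width {half_width.mean():.2f} vs unshrunk {1.96 * se.mean():.2f}")
```

When the true effects are Gaussian, as in this simulation, the parametric intervals already achieve roughly nominal average coverage while being shorter than unshrunk intervals; the paper's contribution is a larger, shrinkage-aware critical value that preserves average coverage when that normality assumption fails.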

Journal ArticleDOI
TL;DR: In this paper, Wang et al. proposed the Covid-19 Prudential Expectation Strategy (CPES) as a new strategy for predicting how a person's body will respond if infected with Covid-19; it is composed of three phases, called the outlier rejection phase (ORP), feature selection phase (FSP), and classification phase (CP).

Journal ArticleDOI
TL;DR: In this article, a new re-parameterized form of the Wilson-Hilferty distribution for modelling data with increasing, decreasing and bathtub-shaped hazard rates is considered, taking into account the estimation of the unknown model parameters, reliability function and hazard function based on two frequentist methods and a Bayesian method of estimation using Type II progressively censored data.
Abstract: A new re-parameterized form of the Wilson-Hilferty distribution for modelling data with increasing, decreasing and bathtub-shaped hazard rates has been considered. This paper takes into account the estimation of the unknown model parameters, reliability function and hazard function based on two frequentist methods and a Bayesian method of estimation using Type-II progressively censored data. In the frequentist setting, besides conventional likelihood-based estimation, another competitive method, known as the maximum product of spacings (MPS) method, is proposed to estimate the model parameters, reliability function and hazard function as an alternative approach to the common likelihood method. In the Bayesian paradigm, we have also considered the MPS function as an alternative to the traditional likelihood function, and both are discussed under the Bayesian set-up for the unknown parameters, reliability function and hazard function. Moreover, for all considered unknown quantities, the approximate confidence intervals under the proposed frequentist approaches as well as the Bayes credible intervals are constructed. Extensive Monte Carlo simulation studies are conducted to evaluate the performance of the proposed estimates with respect to various criteria. Furthermore, we discuss an optimal progressive censoring plan among different competing censoring plans using three optimality criteria. Finally, to show the applicability of the proposed methodologies in a real-life scenario, an engineering dataset is analysed.
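The maximum product of spacings idea mentioned above is to choose parameters that make the CDF increments between consecutive order statistics as close to uniform as possible. The sketch below applies it to complete (uncensored) Weibull data as a stand-alone illustration; the Wilson-Hilferty model and the Type-II progressive censoring of the paper are not implemented.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

rng = np.random.default_rng(0)
x = np.sort(weibull_min.rvs(c=1.8, scale=2.0, size=60, random_state=rng))

def neg_log_spacings(params):
    """Negative maximum-product-of-spacings objective for a two-parameter Weibull."""
    shape, scale = np.exp(params)                        # optimize on the log scale to keep parameters positive
    cdf = weibull_min.cdf(x, c=shape, scale=scale)
    spacings = np.diff(np.concatenate(([0.0], cdf, [1.0])))
    return -np.sum(np.log(np.clip(spacings, 1e-300, None)))

res = minimize(neg_log_spacings, x0=np.log([1.0, 1.0]), method="Nelder-Mead")
shape_mps, scale_mps = np.exp(res.x)
print(f"MPS estimates: shape = {shape_mps:.2f}, scale = {scale_mps:.2f} (true 1.80, 2.00)")
```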