
Showing papers on "Random effects model published in 2022"


Journal ArticleDOI
Ramy Abou Ghayda, Keum Hwa Lee, Young Joo Han, Seohyun Ryu, Sung Hwi Hong, So Jin Yoon, Gwang Hun Jeong, Jae Won Yang, Hyo Jeong Lee, Jinhee Lee, Jun Young Lee, Maria Effenberger, Michael Eisenhut, Andreas Kronbichler, Marco Solmi, Han Li, Louis Jacob, Ai Koyanagi, Joaquim Radua, Sevda Aghayeva, Mohamed Lemine Cheikh brahim Ahmed, Abdulwahed Al Serouri, Humaid O. Al-Shamsi, Mehrdad Amir-Behghadami, Oidov Baatarkhuu, Hyam Bashour, Anastasiia Bondarenko, Adrián Camacho-Ortiz, Franz Castro, Horace Cox, Hayk Davtyan, Kirk Osmond Douglas, Elena Dragioti, Shahul H. Ebrahim, Martina Ferioli, Harapan Harapan, Saad I. Mallah, Aamer Ikram, Shigeru Inoue, Slobodan M. Jankovic, Umesh Jayarajah, Milos Jesenak, P. Kakodkar, Yohannes Kebede, Meron Mehari Kifle, David Koh, Visnja Kokic Males, Katarzyna Kotfis, Sulaiman Lakoh, Lowell Ling, Jorge J. Llibre-Guerra, Masaki Machida, Richard Makurumidze, Mohammed A. Mamun, Izet Masic, Hoang Van Minh, Sergey Moiseev, Thomas Nadasdy, Chen Nahshon, Silvio A. Ñamendys-Silva, Blaise Nguendo Yongsi, Henning Bay Nielsen, Zita Aleyo Nodjikouambaye, Ohnmar Ohnmar, Atte Oksanen, Oluwatomi Owopetu, Konstantinos Parperis, Gonzalo Perez, Krit Pongpirul, Marius Rademaker, S. Rosa, Ranjit Sah, Dina E. Sallam, Patrick Schober, Tanu Singhal, Silva Tafaj, Irene Torres, J. Smith Torres-Roman, Dimitrios Tsartsalis, Jadambaa Tsolmon, Laziz Tuychiev, Batric Vukcevic, Guy Ikambo Wanghi, Uwe Wollina, RH Xu, Lin Yang, Zoubida Zaidi, Lee Smith, Jae Il Shin 
TL;DR: In this article, a more accurate representation of COVID-19's case fatality rate (CFR) is provided by performing meta-analyses by continent and income, and by comparing the results with pooled estimates.
Abstract: The aim of this study is to provide a more accurate representation of COVID-19's case fatality rate (CFR) by performing meta-analyses by continent and income, and by comparing the results with pooled estimates. We used multiple worldwide data sources on COVID-19 for every country reporting COVID-19 cases. On the basis of these data, we performed random- and fixed-effects meta-analyses for the CFR of COVID-19 by continent and income for each calendar date. CFR was estimated for the different geographical regions and income levels using three models: pooled estimates, the fixed-effects model, and the random-effects model. In Asia, all three types of CFR initially remained at approximately 2.0% to 3.0%. For the pooled estimates and the fixed-effects model, CFR increased to 4.0% and then gradually decreased, while for the random-effects model, CFR remained under 2.0%. Similarly, in Europe, the former two types of CFR initially peaked at 9.0% and 10.0%, respectively, while the random-effects results showed an increase to near 5.0%. In high-income countries, the pooled estimates and fixed-effects model showed gradually increasing trends, with the final pooled estimate and random-effects estimate reaching about 8.0% and 4.0%, respectively. In middle-income countries, the pooled estimates and fixed-effects CFR gradually increased, reaching up to 4.5%. In low-income countries, CFRs remained similar, between 1.5% and 3.0%. Our study emphasizes that the COVID-19 CFR is not a fixed or static value. Rather, it is a dynamic estimate that changes with time, population, socioeconomic factors, and the mitigatory efforts of individual countries.

44 citations


Journal ArticleDOI
TL;DR: In this article, a three-level random-effects meta-analysis examining the average effect of the COVID-19-related school closures with respect to several moderator variables is presented.
Abstract: COVID-19 led to school closures and the necessity to use remote learning in 2020 and 2021 around the globe. This article provides results for a three-level random-effects meta-analysis examining the average effect of the COVID-19-related school closures with respect to several moderator variables. The results showed a robust average effect of d = −0.175 (SE = 0.063, p = 0.013, 95% CI [−0.308, −0.041]). The moderator analysis was largely insignificant; however, the results tentatively point out that younger students in schools were more negatively affected compared to older students, and that the negative effect reduced with subsequent lockdowns in autumn and winter 2020/2021. The results are discussed with respect to potential explanations.

39 citations


Journal ArticleDOI
20 Jan 2022-PeerJ
TL;DR: This article showed that having few random effects levels does not strongly influence the parameter estimates or uncertainty around those estimates for fixed effects terms, at least in the case presented here, and that the coverage probability of fixed effects estimates is sample size dependent.
Abstract: As linear mixed-effects models (LMMs) have become a widespread tool in ecology, the need to guide the use of such tools is increasingly important. One common guideline is that one needs at least five levels of the grouping variable associated with a random effect. Having so few levels makes the estimation of the variance of random effects terms (such as ecological sites, individuals, or populations) difficult, but it need not muddy one’s ability to estimate fixed effects terms—which are often of primary interest in ecology. Here, I simulate datasets and fit simple models to show that having few random effects levels does not strongly influence the parameter estimates or uncertainty around those estimates for fixed effects terms—at least in the case presented here. Instead, the coverage probability of fixed effects estimates is sample size dependent. LMMs including low-level random effects terms may come at the expense of increased singular fits, but this did not appear to influence coverage probability or RMSE, except in low sample size (N = 30) scenarios. Thus, it may be acceptable to use fewer than five levels of random effects if one is not interested in making inferences about the random effects terms (i.e. when they are ‘nuisance’ parameters used to group non-independent data), but further work is needed to explore alternative scenarios. Given the widespread accessibility of LMMs in ecology and evolution, future simulation studies and further assessments of these statistical methods are necessary to understand the consequences both of violating and of routinely following simple guidelines.

28 citations


Journal ArticleDOI
TL;DR: In this article, a full simulation of random parameters is undertaken for out-of-sample injury-severity predictions, and the prediction accuracy of the estimated models is assessed, showing, not surprisingly, that the random parameters logit model with heterogeneity in the means and variances outperformed the other models in predictive performance.

27 citations



Journal ArticleDOI
TL;DR: In this paper, the authors describe meta-analysis, the statistical procedure for combining data from multiple studies, which can increase statistical power, improve accuracy, and provide a summary of findings with respect to key clinical questions.
Abstract: Meta-analysis is the statistical procedure for combining data from multiple studies. Meta-analyses are being conducted with increasing frequency (Figure 1). Compared to a single clinical study, they can increase statistical power, improve accuracy, and provide a summary of findings with respect to key clinical questions. Understanding the statistical models underlying the analysis is important.

21 citations


Book ChapterDOI
TL;DR: In this paper, both fixed-effect and random-effects meta-analysis models are compared, with the fixed-effect model assuming that all studies share a single common effect, so that all of the variance in observed effect sizes is attributable to sampling error.
Abstract: Deciding whether to use a fixed-effect model or a random-effects model is a primary decision an analyst must make when combining the results from multiple studies through meta-analysis. Both modeling approaches estimate a single effect size of interest. The fixed-effect meta-analysis assumes that all studies share a single common effect and, as a result, all of the variance in observed effect sizes is attributable to sampling error. The random-effects meta-analysis estimates the mean of a distribution of effects, thus assuming that study effect sizes vary from one study to the next. Under this model, variance in observed effect sizes is attributable to both sampling error (within-study variance) and statistical heterogeneity (between-study variance). The most popular meta-analyses involve using a weighted average to combine the study-level effect sizes. Both fixed- and random-effects models use an inverse-variance weight (the reciprocal of the variance of the observed effect size). However, the shared between-study variance in the random-effects model leads to a more balanced distribution of weights than under the fixed-effect model (i.e., small studies are given more relative weight and large studies less). The standard error for these estimators also relates to the inverse-variance weights. As such, the standard errors and confidence intervals for the random-effects model are larger and wider than in the fixed-effect analysis. Indeed, in the presence of statistical heterogeneity, fixed-effect models can lead to overly narrow intervals. In addition to commonly used, generalizable models, there are additional fixed-effect models and random-effects models that can be considered. Additional fixed-effect models that are specific to dichotomous data are more robust to issues that arise from sparse data.
Furthermore, random-effects models can be expanded upon using generalized linear mixed models so that different covariance structures are used to distribute statistical heterogeneity across multiple parameters. Finally, both fixed- and random-effects modeling can be conducted using a Bayesian framework.

20 citations
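The weighting mechanics the chapter describes are compact enough to sketch directly. Below is a minimal numpy illustration (not code from the chapter) of how adding a shared between-study variance tau2 to the inverse-variance weights rebalances them toward small studies and widens the pooled standard error; all effect sizes, variances, and the tau2 value are invented for illustration.

```python
import numpy as np

def pool(effects, variances, tau2=0.0):
    """Inverse-variance pooled effect and its standard error.

    tau2 = 0 gives the fixed-effect analysis; tau2 > 0 adds the shared
    between-study variance of the random-effects model to every weight."""
    w = 1.0 / (np.asarray(variances) + tau2)
    est = np.sum(w * np.asarray(effects)) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    return est, se

# One large study and three small ones (all values invented).
effects = np.array([0.10, 0.50, 0.60, 0.40])
variances = np.array([0.01, 0.09, 0.09, 0.09])

fe_est, fe_se = pool(effects, variances)             # fixed effect
re_est, re_se = pool(effects, variances, tau2=0.05)  # random effects
```

With these numbers the fixed-effect estimate is pulled toward the large study, while the random-effects weights are more balanced, so the pooled estimate moves toward the small studies and the standard error grows; this is why random-effects confidence intervals are wider.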


Journal ArticleDOI
TL;DR: Ag-RDTs detect most of the individuals infected with SARS-CoV-2, and almost all when high viral loads are present; they are especially useful for detecting persons with high viral load, who are most likely to transmit the virus.
Abstract: Background Comprehensive information about the accuracy of antigen rapid diagnostic tests (Ag-RDTs) for Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) is essential to guide public health decision makers in choosing the best tests and testing policies. In August 2021, we published a systematic review and meta-analysis about the accuracy of Ag-RDTs. We now update this work and analyze the factors influencing test sensitivity in further detail. Methods and findings We registered the review on PROSPERO (registration number: CRD42020225140). We systematically searched preprint and peer-reviewed databases for publications evaluating the accuracy of Ag-RDTs for SARS-CoV-2 until August 31, 2021. Descriptive analyses of all studies were performed, and when more than 4 studies were available, a random-effects meta-analysis was used to estimate pooled sensitivity and specificity with reverse transcription polymerase chain reaction (RT-PCR) testing as a reference. To evaluate factors influencing test sensitivity, we performed 3 different analyses using multivariable mixed-effects meta-regression models. We included 194 studies with 221,878 Ag-RDTs performed. Overall, the pooled estimates of Ag-RDT sensitivity and specificity were 72.0% (95% confidence interval [CI] 69.8 to 74.2) and 98.9% (95% CI 98.6 to 99.1). When manufacturer instructions were followed, sensitivity increased to 76.3% (95% CI 73.7 to 78.7). Sensitivity was markedly better on samples with lower RT-PCR cycle threshold (Ct) values (97.9% [95% CI 96.9 to 98.9] and 90.6% [95% CI 88.3 to 93.0] for Ct-values <20 and <25, compared to 54.4% [95% CI 47.3 to 61.5] and 18.7% [95% CI 13.9 to 23.4] for Ct-values ≥25 and ≥30) and was estimated to increase by 2.9 percentage points (95% CI 1.7 to 4.0) for every unit decrease in mean Ct-value when adjusting for testing procedure and patients’ symptom status. 
Concordantly, we found the mean Ct-value to be lower for true positive (22.2 [95% CI 21.5 to 22.8]) compared to false negative (30.4 [95% CI 29.7 to 31.1]) results. Testing in the first week from symptom onset resulted in substantially higher sensitivity (81.9% [95% CI 77.7 to 85.5]) compared to testing after 1 week (51.8%, 95% CI 41.5 to 61.9). Similarly, sensitivity was higher in symptomatic (76.2% [95% CI 73.3 to 78.9]) compared to asymptomatic (56.8% [95% CI 50.9 to 62.4]) persons. However, both effects were mainly driven by the Ct-value of the sample. With regards to sample type, highest sensitivity was found for nasopharyngeal (NP) and combined NP/oropharyngeal samples (70.8% [95% CI 68.3 to 73.2]), as well as in anterior nasal/mid-turbinate samples (77.3% [95% CI 73.0 to 81.0]). Our analysis was limited by the included studies’ heterogeneity in viral load assessment and sample origination. Conclusions Ag-RDTs detect most of the individuals infected with SARS-CoV-2, and almost all (>90%) when high viral loads are present. With viral load, as estimated by Ct-value, being the most influential factor on their sensitivity, they are especially useful to detect persons with high viral load who are most likely to transmit the virus. To further quantify the effects of other factors influencing test sensitivity, standardization of clinical accuracy studies and access to patient level Ct-values and duration of symptoms are needed.

18 citations


Journal ArticleDOI
TL;DR: The findings showed that the use of ACEIs/ARBs did not significantly influence either mortality or severity in COVID-19 patients; no association with mortality or severity was found.
Abstract: Purpose: The primary objective of this systematic review is to assess the association of mortality in COVID-19 patients on Angiotensin-converting-enzyme inhibitors (ACEIs) and Angiotensin-II receptor blockers (ARBs). A secondary objective is to assess associations with higher severity of the disease in COVID-19 patients. Materials and Methods: We searched multiple COVID-19 databases (WHO, CDC, LIT-COVID) for longitudinal studies globally reporting mortality and severity published before January 18th, 2021. Meta-analyses were performed using 53 studies for the mortality outcome and 43 for the severity outcome. Mantel-Haenszel odds ratios were generated to describe the overall effect size using random-effects models. To account for between-study variation in results, multivariate meta-regression was performed with preselected covariates using the maximum likelihood method for both the mortality and severity models. Results: Our findings showed that the use of ACEIs/ARBs did not significantly influence either mortality (OR = 1.16, 95% CI 0.94–1.44, p = 0.15, I2 = 93.2%) or severity (OR = 1.18, 95% CI 0.94–1.48, p = 0.15, I2 = 91.1%) in comparison to not being on ACEIs/ARBs in COVID-19-positive patients. Multivariate meta-regression for the mortality model demonstrated that 36% of between-study variation could be explained by differences in age, gender, and proportion of heart disease in the study samples. Multivariate meta-regression for the severity model demonstrated that 8% of between-study variation could be explained by differences in age, proportion of diabetes, heart disease and study country in the study samples. Conclusion: We found no association between ACEI/ARB use and mortality or severity in COVID-19 patients.

17 citations
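The Mantel-Haenszel pooled odds ratio used in the study above has a simple closed form: each stratum's cross-product contributions are weighted by its sample size. The following is a minimal sketch of that estimator (not the authors' code); the 2x2 tables are invented.

```python
def mantel_haenszel_or(tables):
    """Mantel-Haenszel pooled odds ratio across 2x2 tables.

    Each table is (a, b, c, d): exposed with event, exposed without
    event, unexposed with event, unexposed without event."""
    num = den = 0.0
    for a, b, c, d in tables:
        n = a + b + c + d
        num += a * d / n   # cross-products scaled by stratum size
        den += b * c / n
    return num / den

# Two hypothetical study strata.
tables = [(10, 90, 5, 95), (20, 80, 10, 90)]
pooled = mantel_haenszel_or(tables)
```

For a single stratum this reduces to the ordinary odds ratio ad/bc; with several strata it down-weights sparse tables, which is part of why it is robust in meta-analyses of rare events.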


Book ChapterDOI
TL;DR: The authors consider methods to estimate the heterogeneity variance parameter in a random-effects model, examine in more detail what this parameter represents and how possible explanations for heterogeneity can be explored through statistical methods, and discuss publication bias as an alternative explanation for why observed effect estimates might form some distribution other than what we might expect.
Abstract: The random-effects model allows for the possibility that studies in a meta-analysis have heterogeneous effects. That is, observed study estimates vary not only due to random sampling error but also due to inherent differences in the way studies have been designed and conducted. In this chapter, we consider methods to estimate the heterogeneity variance parameter in a random-effects model, examine in more detail what this parameter represents, and show how possible explanations for heterogeneity can be explored through statistical methods. Toward the end of this chapter, publication bias is discussed as an alternative explanation for why observed effect estimates might form some distribution other than what we might come to expect.

17 citations
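As a concrete companion to the heterogeneity discussion above, here is a minimal sketch (not taken from the chapter) of the method-of-moments DerSimonian-Laird estimator of the between-study variance, together with Cochran's Q and the I2 statistic; the example inputs are invented.

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Method-of-moments (DerSimonian-Laird) heterogeneity estimates.

    Returns Cochran's Q, the between-study variance tau2 (truncated at
    zero), and I2, the percentage of total variation in effect sizes
    attributable to heterogeneity rather than sampling error."""
    y = np.asarray(effects, dtype=float)
    v = np.asarray(variances, dtype=float)
    w = 1.0 / v
    mu_fe = np.sum(w * y) / np.sum(w)      # fixed-effect pooled mean
    q = np.sum(w * (y - mu_fe) ** 2)       # Cochran's Q
    df = len(y) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)          # truncated at zero
    i2 = max(0.0, 100.0 * (q - df) / q) if q > 0 else 0.0
    return q, tau2, i2

# Four hypothetical study effects with equal sampling variances.
q, tau2, i2 = dersimonian_laird([0.2, 0.8, -0.1, 0.5], [0.04] * 4)
```

When Q is no larger than its degrees of freedom, tau2 truncates to zero and the random-effects analysis collapses to the fixed-effect one, which matches the chapter's framing of heterogeneity as variation beyond sampling error.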


Journal ArticleDOI
TL;DR: Weightlifting training (WLT) is commonly used to improve strength, power and speed in athletes, as mentioned in this paper. However, to date, WLT studies have either not compared training effects against those of other training methods or been limited by small sample sizes, issues that can be resolved by pooling studies in a meta-analysis.
Abstract: Weightlifting training (WLT) is commonly used to improve strength, power and speed in athletes. However, to date, WLT studies have either not compared training effects against those of other training methods, or been limited by small sample sizes, which are issues that can be resolved by pooling studies in a meta-analysis. Therefore, the objective of this systematic review with meta-analysis was to evaluate the effects of WLT compared with traditional resistance training (TRT), plyometric training (PLYO) and/or control (CON) on strength, power and speed. The systematic review included peer-reviewed articles that employed a WLT intervention, a comparison group (i.e. TRT, PLYO, CON), and a measure of strength, power and/or speed. Means and standard deviations of outcomes were converted to Hedges' g effect sizes using an inverse variance random-effects model to generate a weighted mean effect size (ES). Sixteen studies were included in the analysis, comprising 427 participants. Data indicated that when compared with TRT, WLT resulted in greater improvements in weightlifting load lifted (4 studies, p = 0.02, g = 1.35; 95% CI 0.20-2.51) and countermovement jump (CMJ) height (9 studies, p = 0.00, g = 0.95; 95% CI 0.04-1.87). There was also a large effect in terms of linear sprint speed (4 studies, p = 0.13, g = 1.04; 95% CI - 0.03 to 2.39) and change of direction speed (CODS) (2 studies, p = 0.36, g = 1.21; 95% CI - 1.41 to 3.83); however, this was not significant. Interpretation of these findings should acknowledge the high heterogeneity across the included studies and potential risk of bias. 
WLT and PLYO resulted in similar improvements in speed, power and strength as demonstrated by negligible to moderate, non-significant effects in favour of WLT for improvements in linear sprint speed (4 studies, p = 0.35, g = 0.20; 95% CI - 0.23 to 0.63), CODS (3 studies, p = 0.52, g = 0.17; 95% CI - 0.35 to 0.68), CMJ (6 studies, p = 0.09, g = 0.31; 95% CI - 0.05 to 0.67), squat jump performance (5 studies, p = 0.08, g = 0.34; 95% CI - 0.04 to 0.73) and strength (4 studies, p = 0.20, g = 0.69; 95% CI - 0.37 to 1.75). Overall, these findings support the notion that if the training goal is to improve strength, power and speed, supplementary weightlifting training may be advantageous for athletic development. Whilst WLT and PLYO may result in similar improvements, WLT can elicit additional benefits above that of TRT, resulting in greater improvements in weightlifting and jumping performance.
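The effect-size conversion described above, from group means and standard deviations to Hedges' g with an inverse-variance weight, follows a standard textbook formula (the small-sample correction J applied to Cohen's d). This sketch is not the review's code, and the group statistics are invented.

```python
import numpy as np

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Hedges' g: Cohen's d from a pooled SD, times the small-sample
    correction J; also returns the sampling variance used for
    inverse-variance weighting in the meta-analysis."""
    sp = np.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp
    j = 1.0 - 3.0 / (4.0 * (n1 + n2) - 9.0)   # approximate correction
    g = j * d
    var_g = j**2 * ((n1 + n2) / (n1 * n2) + d**2 / (2.0 * (n1 + n2)))
    return g, var_g

# Hypothetical WLT vs TRT outcome: 20 athletes per group.
g, var_g = hedges_g(25.0, 5.0, 20, 20.0, 5.0, 20)
```

The reciprocal of var_g is the weight each study receives in the inverse-variance random-effects model the review describes.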

Journal ArticleDOI
TL;DR: In this article, linear mixed effects models (LMMs) are proposed as a simple way to handle missing data under the Missing At Random (MAR) assumption without requiring imputation, an approach that has not been well explored in the CEA context.
Abstract: Trial-based cost-effectiveness analyses (CEAs) are an important source of evidence in the assessment of health interventions. In these studies, cost and effectiveness outcomes are commonly measured at multiple time points, but some observations may be missing. Restricting the analysis to the participants with complete data can lead to biased and inefficient estimates. Methods such as multiple imputation have been recommended, as they make better use of the available data and are valid under the less restrictive Missing At Random (MAR) assumption. Linear mixed effects models (LMMs) offer a simple alternative that handles missing data under MAR without requiring imputation, but they have not been well explored in the CEA context. In this manuscript, we aim to familiarize readers with LMMs and demonstrate their implementation in CEA. We illustrate the approach on a randomized trial of antidepressants, and provide the implementation code in R and Stata. We hope that the more familiar statistical framework associated with LMMs, compared to other missing data approaches, will encourage their implementation and move practitioners away from inadequate methods.

Journal ArticleDOI
01 Sep 2022
TL;DR: In this article, a random-parameter Bayesian hierarchical extreme value model with heterogeneity in means and variances (RPBHEV-HMV) is proposed to better capture unobserved heterogeneity.
Abstract: Using random parameters in combination with extreme value theory (EVT) models has been shown to capture unobserved heterogeneity and improve crash estimation based on traffic conflicts. However, in existing random-parameter EVT models, the predefined distribution means and variances for random parameters are usually constant, which may not capture unobserved heterogeneity well. Therefore, the present study develops a random-parameter Bayesian hierarchical extreme value model with heterogeneity in means and variances (RPBHEV-HMV) to better capture unobserved heterogeneity. The developed model offers two main advantages: (1) it allows random parameters to be normally distributed with varying means and variances; and (2) it incorporates several factors contributing to a heterogeneous distribution of means and variances of random parameters. Application of the developed model to conflict-based rear-end crash prediction was conducted at four signalized intersections in the city of Surrey, British Columbia, Canada. The modified time to collision was employed to fit the generalized extreme value distribution. Three conflict indicators and three traffic parameters were considered as covariates to capture nonstationarity in conflict extremes as well as heterogeneity in means and variances. The results indicated that the RPBHEV-HMV model outperforms existing RPBHEV models in terms of goodness of fit, explanatory power, and crash estimation accuracy and precision.
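The Bayesian hierarchical random-parameter model above is far richer than anything shown here, but the block-maxima idea at the core of EVT-based conflict analysis can be sketched with a method-of-moments fit of the Gumbel distribution (the GEV with zero shape parameter). This is an illustrative numpy sketch with synthetic data, not the authors' model.

```python
import numpy as np

EULER_GAMMA = 0.5772156649  # Euler-Mascheroni constant

def fit_gumbel(block_maxima):
    """Method-of-moments fit of a Gumbel distribution (the GEV with
    shape 0) to block maxima; returns location mu and scale beta."""
    x = np.asarray(block_maxima, dtype=float)
    beta = x.std(ddof=1) * np.sqrt(6.0) / np.pi
    mu = x.mean() - EULER_GAMMA * beta
    return mu, beta

def exceedance_prob(threshold, mu, beta):
    """P(block maximum exceeds threshold) under the fitted Gumbel."""
    return 1.0 - np.exp(-np.exp(-(threshold - mu) / beta))

# Synthetic "conflict severity" block maxima with known parameters.
rng = np.random.default_rng(0)
maxima = rng.gumbel(loc=2.0, scale=0.5, size=5000)
mu, beta = fit_gumbel(maxima)
p_exceed = exceedance_prob(3.5, mu, beta)  # tail (crash) risk proxy
```

In practice the shape parameter is estimated rather than fixed at zero, and the paper goes further by letting the GEV parameters themselves vary with covariates and random effects.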

Journal ArticleDOI
TL;DR: In this paper, the authors explore temporal shifts in the effects of explanatory variables on the injury severity outcomes of crashes involving distracted driving, using data from distracted driving crashes on Kansas State highways over a four-year period.

Posted ContentDOI
15 Feb 2022-medRxiv
TL;DR: With viral load, as estimated by Ct-value, being the most influential factor on their sensitivity, Ag-RDTs are especially useful to detect persons with high viral load who are most likely to transmit the virus.
Abstract: Background Comprehensive information about the accuracy of antigen rapid diagnostic tests (Ag-RDTs) for SARS-CoV-2 is essential to guide public health decision makers in choosing the best tests and testing policies. In August 2021, we published a systematic review and meta-analysis about the accuracy of Ag-RDTs. We now update this work and analyze the factors influencing test sensitivity in further detail. Methods and findings We registered the review on PROSPERO (registration number: CRD42020225140). We systematically searched multiple databases (PubMed, Web of Science Core Collection, medRxiv, bioRxiv, and FIND) for publications evaluating the accuracy of Ag-RDTs for SARS-CoV-2 until August 31, 2021. Descriptive analyses of all studies were performed, and when more than 4 studies were available, a random-effects meta-analysis was used to estimate pooled sensitivity and specificity with reverse transcription polymerase chain reaction (RT-PCR) testing as a reference. To evaluate factors influencing test sensitivity, we performed 3 different analyses using multivariate mixed-effects meta-regression models. We included 194 studies with 221,878 Ag-RDTs performed. Overall, the pooled estimates of Ag-RDT sensitivity and specificity were 72.0% (95% confidence interval [CI] 69.8 to 74.2) and 98.9% (95% CI 98.6 to 99.1), respectively. When manufacturer instructions were followed, sensitivity increased to 76.4% (95% CI 73.8 to 78.8). Sensitivity was markedly better on samples with lower RT-PCR cycle threshold (Ct) values (sensitivity of 97.9% [95% CI 96.9 to 98.9] and 90.6% [95% CI 88.3 to 93.0] for Ct-values <20 and <25, compared to 54.4% [95% CI 47.3 to 61.5] and 18.7% [95% CI 13.9 to 23.4] for Ct-values ≥25 and ≥30) and was estimated to increase by 2.9 percentage points (95% CI 1.7 to 4.0) for every unit decrease in mean Ct-value when adjusting for testing procedure and patients' symptom status. 
Concordantly, we found the mean Ct-value to be lower for true positive (22.2 [95% CI 21.5 to 22.8]) compared to false negative (30.4 [95% CI 29.7 to 31.1]) results. Testing in the first week from symptom onset resulted in substantially higher sensitivity (81.9% [95% CI 77.7 to 85.5]) compared to testing after 1 week (51.8%, 95% CI 41.5 to 61.9). Similarly, sensitivity was higher in symptomatic (76.2% [95% CI 73.3 to 78.9]) compared to asymptomatic (56.8% [95% CI 50.9 to 62.4]) persons. However, both effects were mainly driven by the Ct-value of the sample. With regards to sample type, highest sensitivity was found for nasopharyngeal (NP) and combined NP/oropharyngeal samples (70.8% [95% CI 68.3 to 73.2]), as well as in anterior nasal/mid-turbinate samples (77.3% [95% CI 73.0 to 81.0]). Conclusion Ag-RDTs detect most of the individuals infected with SARS-CoV-2, and almost all when high viral loads are present (>90%). With viral load, as estimated by Ct-value, being the most influential factor on their sensitivity, they are especially useful to detect persons with high viral load who are most likely to transmit the virus. To further quantify the effects of other factors influencing test sensitivity, standardization of clinical accuracy studies and access to patient level Ct-values and duration of symptoms are needed.

Journal ArticleDOI
TL;DR: In this paper, a systematic review and meta-analysis of evidence regarding PM2.5 exposure and term birth weight (TBW) is presented, which shows that TBW was negatively associated with maternal exposure during the entire pregnancy and each trimester.

Journal ArticleDOI
Xingping Wang
TL;DR: In this paper , a systematic review and meta-analysis of evidence regarding PM2.5 exposure in term birth is presented, which shows that TBW was negatively associated with PM 2.5 exposures during the entire pregnancy and each trimester.

Journal ArticleDOI
R. U. Memetov
TL;DR: The authors showed that excluding random slopes can lead to a substantial increase in false-positive conclusions in null-hypothesis tests, and that the same is true for Bayesian hypothesis testing with mixed models, which can yield Bayes factors reflecting very strong evidence for a population-level mean effect even when there is none.
Abstract: Mixed models are gaining popularity in psychology. For frequentist mixed models, previous research showed that excluding random slopes (differences between individuals in the direction and size of an effect) from a model when they are present in the data can lead to a substantial increase in false-positive conclusions in null-hypothesis tests. Here, I demonstrated through five simulations that the same is true for Bayesian hypothesis testing with mixed models, which often yield Bayes factors reflecting very strong evidence for a mean effect on the population level even if there was no such effect. Including random slopes in the model largely eliminates the risk of strong false positives but reduces the chance of obtaining strong evidence for true effects. I recommend starting the analysis by testing the support for random slopes in the data and removing them from the models only if there is clear evidence against them.
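The frequentist result this abstract builds on, that ignoring real random slopes inflates false positives, can be reproduced with a short Monte Carlo simulation. This numpy sketch is illustrative only (not the author's simulations, and frequentist rather than Bayesian); every parameter value is arbitrary.

```python
import numpy as np

def simulate(n_sims=500, n_sub=20, n_trial=50, sd_slope=0.5,
             sd_noise=1.0, seed=1):
    """False-positive rates for a true null mean effect when subjects
    genuinely differ in their effect (random slopes). The naive test
    pools all trials as if independent; the subject-level test uses
    one mean per subject and so respects the grouping."""
    rng = np.random.default_rng(seed)
    z = 1.96  # normal-approximation critical value
    naive_fp = subject_fp = 0
    for _ in range(n_sims):
        slopes = rng.normal(0.0, sd_slope, n_sub)      # per-subject effects
        diffs = slopes[:, None] + rng.normal(0.0, sd_noise,
                                             (n_sub, n_trial))
        flat = diffs.ravel()                           # ignores grouping
        t_naive = flat.mean() / (flat.std(ddof=1) / np.sqrt(flat.size))
        per_sub = diffs.mean(axis=1)                   # respects grouping
        t_sub = per_sub.mean() / (per_sub.std(ddof=1) / np.sqrt(n_sub))
        naive_fp += abs(t_naive) > z
        subject_fp += abs(t_sub) > z
    return naive_fp / n_sims, subject_fp / n_sims

naive_rate, subject_rate = simulate()
```

With these settings the naive trial-level test rejects a true null far more often than the nominal 5%, while the per-subject analysis stays near it, mirroring the random-slopes argument in the abstract.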

Journal ArticleDOI
TL;DR: In this article, the authors used the DerSimonian-Laird random-effects meta-analysis technique to pool estimates of the prevalence of FEDs and of associated risk factors in medical students.
Abstract: Medical students have a higher risk of developing psychological issues, such as feeding and eating disorders (FEDs). In the past few years, a major increase was observed in the number of studies on the topic. The goal of this review was to estimate the prevalence risk of FEDs and its associated risk factors in medical students. Nine electronic databases were used to conduct an electronic search from the inception of the databases until 15th September 2021. The DerSimonian–Laird technique was used to pool the estimates using random-effects meta-analysis. The prevalence of FEDs risk in medical students was the major outcome of interest. Data were analyzed globally, by country, by research measure and by culture. Sex, age, and body mass index were examined as potential confounders using meta-regression analysis. A random-effects meta-analysis evaluating the prevalence of FEDs in medical students (K = 35, N = 21,383) generated a pooled prevalence rate of 17.35% (95% CI 14.15–21.10%), heterogeneity [Q = 1528 (34), P = 0.001], τ2 = 0.51 (95% CI 0.36–1.05), τ = 0.71 (95% CI 0.59–1.02), I2 = 97.8%; H = 6.70 (95% CI 6.19–7.26). Age and sex were not significant predictors. Body mass index, culture and used research tool were significant confounders. The prevalence of FEDs symptoms in medical students was estimated to be 17.35%. Future prospective studies are urgently needed to construct prevention and treatment programs to provide better outcomes for students at risk of or suffering from FEDs. Level I, systematic review and meta-analysis.
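Prevalence pooling, as in the review above, is usually done on a transformed scale so that the normal approximation behaves near 0 and 1. Here is a minimal sketch using the logit transformation with fixed-effect inverse-variance weights; the review itself used DerSimonian-Laird random-effects weights, and the study counts below are invented.

```python
import numpy as np

def pool_prevalence(cases, totals):
    """Fixed-effect inverse-variance pooling of study prevalences on
    the logit scale, back-transformed to a proportion."""
    k = np.asarray(cases, dtype=float)
    n = np.asarray(totals, dtype=float)
    logit = np.log(k / (n - k))          # logit of each study prevalence
    var = 1.0 / k + 1.0 / (n - k)        # approximate variance on that scale
    w = 1.0 / var
    pooled_logit = np.sum(w * logit) / np.sum(w)
    return 1.0 / (1.0 + np.exp(-pooled_logit))

# Three hypothetical studies: FED cases out of students screened.
p = pool_prevalence([15, 30, 10], [100, 150, 80])
```

A random-effects version would simply add a DerSimonian-Laird tau2 to each variance before weighting, exactly as in standard random-effects meta-analysis.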

Journal ArticleDOI
TL;DR: In this article, the authors used a Synthetic Minority Over-Sampling Technique for Panel Data (SMOTE-P) to resolve the excess zero problem of disaggregate analysis of bus-involved crashes.

Journal ArticleDOI
TL;DR: In this article, an empirical real-time clustering of critical and non-critical crashes was conducted on 402 miles of Interstate-80 in Wyoming, where the crash dataset was conflated with real-time traffic-related and environmental contributing factors.
Abstract: Traffic crashes impose tremendous socio-economic losses on societies. To alleviate these concerns, countless traffic safety studies have shed light on observable crash and crash-severity contributing factors. Nonetheless, some influential factors might not be observable or measurable; this is referred to as unobserved heterogeneity and can be accounted for by structuring random intercepts and slopes in hierarchical models. In this respect, although it is known that random slopes can capture more unobserved heterogeneity, most previous studies utilized random intercepts to simplify result interpretation, indicating an inconsistency in the literature regarding the hierarchical modeling specification. This study delves into this confusion through an empirical real-time classification of critical crashes, involving fatal or incapacitating injuries, versus non-critical crashes throughout 402 miles of Interstate-80 in Wyoming. The crash dataset was conflated with real-time traffic-related and environmental contributing factors. Regarding the inclusion of random intercepts and slopes, eleven logistic regressions were conducted. As a data-dependent matter, the results showed that random slopes, compared to random intercepts, do not necessarily enhance models' out-of-sample predictive performance, because they impose much more complexity on the models' structure. Besides, considering the type of unobserved heterogeneity, if random slopes are required, they should be accompanied by random intercepts to allow the data to show their true patterns.

Journal ArticleDOI
TL;DR: This article analyzed the consequences of treating a grouping variable with 2-8 levels as fixed or random effect in correctly specified and alternative models (under- or over-parametrized models).
Abstract: Biological data are often intrinsically hierarchical (e.g., species from different genera, plants within different mountain regions), which has made mixed-effects models a common analysis tool in ecology and evolution because they can account for this non-independence. Many questions around their practical application are solved, but one is still debated: Should we treat a grouping variable with a low number of levels as a random or a fixed effect? In such situations, the variance estimate of the random effect can be imprecise, but it is unknown whether this affects statistical power and type I error rates for the fixed effects of interest. Here, we analyzed the consequences of treating a grouping variable with 2-8 levels as a fixed or random effect in correctly specified and alternative models (under- or overparametrized models). We calculated type I error rates and statistical power for all model specifications and quantified the influence of study design on these quantities. We found no influence of model choice on the type I error rate or power of the population-level effect (slope) for random-intercept-only models. However, when intercepts and slopes varied in the data-generating process, using a random slope and intercept model, and switching to a fixed-effects model in case of a singular fit, avoided overconfidence in the results. Additionally, the number of levels and the differences between them strongly influence power and type I error. We conclude that inferring the correct random-effects structure is of great importance for obtaining correct type I error rates. We encourage starting with a mixed-effects model independent of the number of levels in the grouping variable and switching to a fixed-effects model only in case of a singular fit. With these recommendations, we allow for more informative choices about study design and data analysis and make ecological inference with mixed-effects models more robust for small numbers of levels.
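The fixed-effects fallback the authors recommend amounts to the classic "within" (dummy-variable) estimator: demeaning within each group removes the group intercepts entirely, so the slope can be estimated without modeling a random-effect variance. A minimal sketch with assumed toy data (not the paper's simulation code):

```python
import numpy as np

def within_estimator(x, y, groups):
    """Fixed-effects ('within') slope: demean x and y inside each group,
    then regress -- equivalent to fitting one intercept dummy per group."""
    xd = np.asarray(x, float).copy()
    yd = np.asarray(y, float).copy()
    for g in np.unique(groups):
        m = groups == g
        xd[m] -= xd[m].mean()
        yd[m] -= yd[m].mean()
    return float(xd @ yd / (xd @ xd))

# toy data: four groups with very different intercepts but a common slope of 2
rng = np.random.default_rng(42)
groups = np.repeat(np.arange(4), 50)
x = rng.normal(size=200)
y = np.array([-3.0, 0.0, 1.5, 4.0])[groups] + 2.0 * x + rng.normal(0.0, 0.1, 200)
slope = within_estimator(x, y, groups)
```

With only four levels, the between-group variance is hard to estimate (the singular-fit situation the abstract describes), but the within estimator recovers the common slope regardless.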

Journal ArticleDOI
TL;DR: A transformed exponential dispersion (TED) process degradation model with both age- and state-dependent increments is proposed; it is an extended model that includes the basic ED process model as a special case.

Journal ArticleDOI
TL;DR: In this article , a meta-analysis was performed using random-effects and weighted average models, as well as a combined estimate based on curve fitting, which indicated that about two in five patients with uveal melanoma ultimately succumb to their disease.
Abstract: Background A large proportion of patients with uveal melanoma develop metastases and succumb to their disease. Reports on the size of this proportion vary considerably. Methods PubMed, Web of Science and Embase were searched for articles published after 1980. Studies with ≥ 100 patients reporting ≥ five-year relative survival rates were included. Studies solely reporting Kaplan-Meier estimates and cumulative incidences were not considered, due to the risk of competing-risk bias and classification errors. A meta-analysis was performed using random-effects and weighted average models, as well as a combined estimate based on curve fitting. Results Nine studies and a total of 18 495 patients are included. Overall, the risk of selective reporting bias is low. Relative survival rates vary across the population of studies (I² 48 to 97% and Q-test p < 0.00001 to 0.15), likely due to differences in baseline characteristics and the large number of patients included (τ² < 0.02). The 30-year relative survival rates follow a cubic curve that is well fitted to data from the random-effects inverse-variance and weighted average models (R² = 0.95, p = 7.19 × 10⁻⁷). The estimated five-, ten-, 15-, 20-, 25- and 30-year relative survival rates are 79, 66, 60, 60, 62 and 67%, respectively. Conclusions The findings suggest that about two in five of all patients with uveal melanoma ultimately succumb to their disease. This indicates a slightly better prognosis than is often assumed, and that patients surviving 20 years or longer may have a survival advantage over individuals of the same sex and age in the general population.
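Random-effects inverse-variance pooling of the kind reported here is commonly implemented with the DerSimonian-Laird estimator of the between-study variance τ²; whether the authors used exactly this variant is an assumption. A minimal sketch:

```python
import numpy as np

def random_effects_pool(effects, variances):
    """DerSimonian-Laird random-effects pooling: estimate the between-study
    variance tau^2 from Cochran's Q, then combine study estimates with
    inverse-variance weights 1 / (v_i + tau^2)."""
    effects = np.asarray(effects, float)
    v = np.asarray(variances, float)
    w = 1.0 / v                                # fixed-effect weights
    theta_fe = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - theta_fe) ** 2)  # Cochran's Q
    df = len(effects) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)              # DL between-study variance
    w_re = 1.0 / (v + tau2)
    theta_re = np.sum(w_re * effects) / np.sum(w_re)
    se_re = np.sqrt(1.0 / np.sum(w_re))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0  # I^2 heterogeneity (%)
    return theta_re, se_re, tau2, i2
```

When the study estimates are homogeneous, τ² truncates to zero and the result coincides with the fixed-effect pool; heterogeneous estimates inflate τ² and I², flattening the weights toward equality.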

Journal ArticleDOI
TL;DR: In this paper , a Monte Carlo simulation study was conducted to compare the performance of multilevel models (MLM) and random-intercept cross-lagged panel model (RI-CLPM) for psychotherapy mechanisms of change.
Abstract: OBJECTIVE Modeling cross-lagged effects in psychotherapy mechanisms of change studies is complex and requires careful attention to model selection and interpretation. However, there is a lack of field-specific guidelines. We aimed to (a) describe the estimation and interpretation of cross-lagged effects using multilevel models (MLM) and the random-intercept cross-lagged panel model (RI-CLPM); and (b) compare these models' performance and risk of bias using simulations and an applied research example to formulate recommendations for practice. METHOD Part 1 is a tutorial introducing and describing dynamic effects in the form of autoregression and bidirectionality. In Part 2, we compare the estimation of cross-lagged effects in the RI-CLPM, which takes dynamic effects into account, with three commonly used MLMs that cannot accommodate dynamics. In Part 3, we describe a Monte Carlo simulation study testing the performance of the RI-CLPM and MLMs under realistic conditions for psychotherapy mechanisms of change studies. RESULTS Our findings suggest that all three MLMs produce severely biased estimates of cross-lagged effects when dynamic effects are present in the data, with some experimental conditions generating statistically significant estimates in the wrong direction. MLMs performed comparably well only in conditions that are conceptually unrealistic for psychotherapy mechanisms of change research (i.e., no inertia in variables and no bidirectional effects). DISCUSSION Based on conceptual fit and our simulation results, we strongly recommend using fully dynamic structural equation models, such as the RI-CLPM, rather than static, unidirectional regression models (e.g., MLM) to study cross-lagged effects in mechanisms of change research. (PsycInfo Database Record (c) 2022 APA, all rights reserved).
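The bias the simulation study reports can be reproduced in miniature: simulate a bidirectional process with inertia, then estimate the cross-lagged effect of X on Y both with and without the autoregressive control. Omitting Y's own lag inflates the estimate because lagged X and lagged Y are correlated. A sketch (all parameters hypothetical, not the study's conditions):

```python
import numpy as np

def simulate_cross_lagged(n_persons=500, n_waves=10, a_x=0.5, a_y=0.5,
                          b_xy=0.3, b_yx=0.1, rng=None):
    """Bivariate cross-lagged panel: each variable carries its own inertia
    (autoregression) plus a lagged effect of the other variable."""
    rng = np.random.default_rng(rng)
    X = np.zeros((n_persons, n_waves))
    Y = np.zeros((n_persons, n_waves))
    X[:, 0] = rng.normal(size=n_persons)
    Y[:, 0] = rng.normal(size=n_persons)
    for t in range(1, n_waves):
        X[:, t] = a_x * X[:, t - 1] + b_yx * Y[:, t - 1] + rng.normal(size=n_persons)
        Y[:, t] = a_y * Y[:, t - 1] + b_xy * X[:, t - 1] + rng.normal(size=n_persons)
    return X, Y

# estimate X's cross-lagged effect on Y (true value 0.3)
X, Y = simulate_cross_lagged(rng=0)
yt = Y[:, 1:].ravel()
x_lag, y_lag = X[:, :-1].ravel(), Y[:, :-1].ravel()
ones = np.ones_like(x_lag)
# dynamic model: controls for Y's own previous value (inertia)
b_dyn = np.linalg.lstsq(np.column_stack([ones, x_lag, y_lag]), yt, rcond=None)[0]
# static model: omits the autoregressive term, as a plain regression would
b_static = np.linalg.lstsq(np.column_stack([ones, x_lag]), yt, rcond=None)[0]
```

Here `b_dyn[1]` is close to the true 0.3 while `b_static[1]` is biased upward, which is the qualitative pattern the abstract describes for static MLMs.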

Journal ArticleDOI
30 Dec 2022-PLOS ONE
TL;DR: This article conducted a survey of meta-epidemiological studies to identify additional trial design characteristics that may be associated with significant over- or underestimation of the treatment effect and to use such identified characteristics as a basis for the formulation of new CQS appraisal criteria.
Abstract: Aim To conduct a survey of current meta-epidemiological studies to identify additional trial design characteristics that may be associated with significant over- or underestimation of the treatment effect, and to use such identified characteristics as a basis for the formulation of new CQS appraisal criteria. Materials and methods We retrieved eligible studies from two systematic reviews on this topic (latest search May 2015) and searched the databases PubMed and Embase for further studies from June 2015 to March 2022. All data were extracted by one author and verified by another. Sufficiently homogeneous estimates from single studies were pooled using random-effects meta-analysis. Trial design characteristics associated with statistically significant estimates from single datasets (which could not be pooled) and from meta-analyses were used as a basis for formulating new CQS criteria or amending existing ones. Results A total of 38 meta-epidemiological studies were identified. From these, seven trial design characteristics associated with statistically significant over- or underestimation of the true therapeutic effect were found. Conclusion One new criterion concerning double-blinding was added to the CQS, and the original criteria for concealing the random allocation sequence and for minimum sample size were amended.

Journal ArticleDOI
01 Jun 2022-Energy
TL;DR: In this article , the authors developed heterogeneity-based econometric models that forecast the yearly consumption of electricity and showed that the random parameter methodology is statistically superior to the classical multiple linear regression model.

Journal ArticleDOI
TL;DR: In this article, a generalized random forest (GRF), which can search for complex forms of treatment heterogeneity, was proposed for the estimation of heterogeneous treatment effects (HTEs) in road safety analysis.
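A GRF implementation is beyond a TL;DR, but the underlying goal of estimating heterogeneous treatment effects can be illustrated with a much simpler T-learner. The linear base learners below are a stand-in for the forests a GRF would use; this is not the authors' method:

```python
import numpy as np

def t_learner_cate(X, treated, y):
    """Simplified T-learner: fit one outcome model per treatment arm and
    take the difference of their predictions as the conditional average
    treatment effect (CATE)."""
    Xd = np.column_stack([np.ones(len(X)), X])
    b_t, *_ = np.linalg.lstsq(Xd[treated], y[treated], rcond=None)
    b_c, *_ = np.linalg.lstsq(Xd[~treated], y[~treated], rcond=None)
    return Xd @ b_t - Xd @ b_c          # per-observation effect estimate

# toy data with effect heterogeneity: treatment helps more when x is large
rng = np.random.default_rng(7)
x = rng.normal(size=(2000, 1))
treated = rng.random(2000) < 0.5
tau = 1.0 + 0.5 * x[:, 0]               # true heterogeneous effect
y = x[:, 0] + treated * tau + rng.normal(0.0, 0.5, 2000)
cate = t_learner_cate(x, treated, y)
```

The recovered per-observation effects track the true `tau`; a GRF pursues the same target but lets tree splits discover the heterogeneity structure instead of assuming linearity.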

Journal ArticleDOI
TL;DR: Wang et al. developed a Bayesian modeling approach with three types of spatial effects, i.e., spatial correlation, spatial heterogeneity, and spillover effect, to model speed mean and variance on structured roads.