scispace - formally typeset

Showing papers on "Unit-weighted regression published in 2017"


Journal ArticleDOI
TL;DR: Wang et al. developed a geographically and temporally weighted regression (GTWR) model to account for spatial and temporal variability in the relationship between the non-continuous AQI-derived PM2.5 and satellite-derived aerosol optical depth (AOD).

156 citations


Journal ArticleDOI
Hongyan Zhu1, Bingquan Chu1, Yangyang Fan1, Xiaoya Tao1, Wenxin Yin1, Yong He1 
TL;DR: The results clearly demonstrated that hyperspectral imaging has the potential as a fast and non-invasive method to predict the quality attributes of kiwifruits.
Abstract: We investigated the feasibility and potential of determining firmness, soluble solids content (SSC), and pH in kiwifruits using hyperspectral imaging, combined with variable selection methods and calibration models. The images were acquired by a push-broom hyperspectral reflectance imaging system covering two spectral ranges. Weighted regression coefficients (BW), the successive projections algorithm (SPA), and genetic algorithm–partial least squares (GAPLS) were compared and evaluated for the selection of effective wavelengths. Moreover, multiple linear regression (MLR), partial least squares regression, and least squares support vector machine (LS-SVM) models were developed to predict quality attributes quantitatively using the effective wavelengths. The established models, particularly SPA-MLR, SPA-LS-SVM and GAPLS-LS-SVM, performed well. The SPA-MLR models for firmness (Rpre = 0.9812, RPD = 5.17) and SSC (Rpre = 0.9523, RPD = 3.26) at 380–1023 nm showed excellent performance, whereas GAPLS-LS-SVM was the optimal model at 874–1734 nm for predicting pH (Rpre = 0.9070, RPD = 2.60). Image processing algorithms were developed to apply the predictive model to every pixel to generate prediction maps that visualize the spatial distribution of firmness and SSC. Hence, the results clearly demonstrated that hyperspectral imaging has potential as a fast and non-invasive method to predict the quality attributes of kiwifruits.
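The select-wavelengths-then-fit-MLR pipeline described above can be sketched in miniature with numpy. This is synthetic data, not the kiwifruit spectra, and the selection rule is a crude stand-in for the BW method (ranking wavelengths by the magnitude of a univariate regression coefficient):

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic "spectra": 200 samples x 50 wavelengths, where only a few
# wavelengths actually carry information about the quality attribute (SSC).
n, p = 200, 50
spectra = rng.normal(size=(n, p))
informative = [5, 17, 33]
ssc = spectra[:, informative] @ np.array([1.0, -0.8, 0.6]) + rng.normal(0, 0.1, n)

# Crude effective-wavelength selection in the spirit of the BW idea:
# rank wavelengths by the magnitude of a univariate regression coefficient
# and keep the strongest few.
coefs = np.array([np.polyfit(spectra[:, j], ssc, 1)[0] for j in range(p)])
selected = np.sort(np.argsort(np.abs(coefs))[-3:])

# Multiple linear regression (MLR) on the selected wavelengths only.
X = np.column_stack([np.ones(n), spectra[:, selected]])
beta = np.linalg.solve(X.T @ X, X.T @ ssc)
r = np.corrcoef(X @ beta, ssc)[0, 1]  # prediction-vs-reference correlation
```

With informative wavelengths this simple, the selection step recovers them and the MLR correlation is high; the real pipeline differs mainly in using SPA/GAPLS for selection and cross-validated calibration sets.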

55 citations


Journal ArticleDOI
01 Jan 2017-Geoderma
TL;DR: Wang et al. used PLSR, GWRK, and PLSRK to predict soil organic matter (SOM) based on visible and near-infrared (VNIR) spectra.

40 citations


Journal ArticleDOI
TL;DR: Analysis indicated that non-linear models could give a better representation of the RBE-LET relationship; as differences between the models were observed for the SOBP scenario, both non-linear LET spectrum-based and linear LETd-based models should be further evaluated in clinically realistic scenarios.
Abstract: Purpose The relative biological effectiveness (RBE) of protons varies with the radiation quality, quantified by the linear energy transfer (LET). Most phenomenological models employ a linear dependency of the dose-averaged LET (LETd) to calculate the biological dose. However, several experiments have indicated a possible non-linear trend. Our aim was to investigate if biological dose models including non-linear LET dependencies should be considered, by introducing a LET spectrum based dose model. Method The RBE-LET relationship was investigated by fitting of polynomials from 1st to 5th degree to a database of 85 data points from aerobic in vitro experiments. We included both unweighted and weighted regression, the latter taking into account experimental uncertainties. Statistical testing was performed to decide whether higher degree polynomials provided better fits to the data as compared to lower degrees. The newly developed models were compared to three published LETd based models for a simulated spread out Bragg peak (SOBP) scenario. Results The statistical analysis of the weighted regression analysis favoured a non-linear RBE-LET relationship, with the quartic polynomial found to best represent the experimental data (p=0.010). The results of the unweighted regression analysis were on the borderline of statistical significance for non-linear functions (p=0.053), and with the current database a linear dependency could not be rejected. For the SOBP scenario, the weighted non-linear model estimated a similar mean RBE value (1.14) compared to the three established models (1.13-1.17). The unweighted model calculated a considerably higher RBE value (1.22). Conclusion The analysis indicated that non-linear models could give a better representation of the RBE-LET relationship. However, this is not decisive, as inclusion of the experimental uncertainties in the regression analysis had a significant impact on the determination and ranking of the models. 
As differences between the models were observed for the SOBP scenario, both non-linear LET spectrum-based and linear LETd-based models should be further evaluated in clinically realistic scenarios.
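The core procedure above — fitting polynomials of degree 1 to 5 by weighted least squares, with weights from experimental uncertainties, and ranking the fits — can be sketched as follows. The data are synthetic (not the 85-point RBE database), and the chi-square-plus-penalty AIC form is a standard assumption, not the paper's exact statistic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic RBE-vs-LET points with per-point uncertainties, standing in for
# the 85 aerobic in vitro data points described in the abstract.
let = np.linspace(1.0, 30.0, 40)
sigma = 0.05 + 0.01 * rng.random(40)              # experimental uncertainties
rbe = 1.0 + 0.02 * let + 0.0005 * let**2 + rng.normal(0.0, sigma)

def weighted_polyfit_aic(x, y, sigma, degree):
    """Weighted least-squares polynomial fit (weights = 1/sigma) plus a
    simple chi-square-based AIC for ranking the candidate degrees."""
    coeffs = np.polyfit(x, y, degree, w=1.0 / sigma)
    chi2 = np.sum(((y - np.polyval(coeffs, x)) / sigma) ** 2)
    return coeffs, chi2 + 2 * (degree + 1)

# Compare polynomial degrees 1-5, as in the paper's model-selection step.
aics = {d: weighted_polyfit_aic(let, rbe, sigma, d)[1] for d in range(1, 6)}
best_degree = min(aics, key=aics.get)
```

The paper's actual ranking used statistical testing between nested polynomials rather than a bare AIC, but the weighting-by-uncertainty mechanics are the same.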

34 citations


Journal ArticleDOI
TL;DR: In this article, an iterated spatially weighted regression procedure was proposed to verify the presence of economic growth heterogeneities in EU regions in the period 1981-2009, and the results highlight the presence in the EU regions of five different clubs with different growth paths within each subgroup.
Abstract: This paper proposes a new technique based on an iterated spatially weighted regression procedure to verify the presence of economic growth heterogeneities in EU regions in the period 1981–2009. The approach extends a procedure originally proposed in the field of image analysis based on the assumption of local homogeneity of the signal. The presence of the heterogeneity is a criterion to divide the sample of observations (i.e. regions) into smaller homogeneous groups. Our results highlight the presence in the EU regions of five different clubs with different growth paths within each subgroup. Spatial dependence is also considered in the definition of the economic convergence model.

25 citations


Journal ArticleDOI
26 Jan 2017-Entropy
TL;DR: The experiments proved the existence of both global stationarity and spatiotemporal non-stationarity, as well as the practical ability of the proposed method; compared to the MGWR and GTWR models, the MGTWR model obtained the lowest AIC value and mean square error (MSE) and the highest coefficient of determination (R2) and adjusted coefficient of determination (R2adj).
Abstract: To capture both global stationarity and spatiotemporal non-stationarity, a novel mixed geographically and temporally weighted regression (MGTWR) model accounting for global and local effects in both space and time is presented. Since the constant and spatial-temporal varying coefficients could not be estimated in one step, a two-stage least squares estimation is introduced to calibrate the model. Both simulations and real-world datasets are used to test and verify the performance of the proposed MGTWR model. Additionally, an Akaike Information Criterion (AIC) is adopted as a key model fitting diagnostic. The experiments demonstrate that the MGTWR model yields more accurate results than do traditional spatially weighted regression models. For instance, the MGTWR model decreased AIC value by 2.7066, 36.368 and 112.812 with respect to those of the mixed geographically weighted regression (MGWR) model and by 45.5628, −38.774 and 35.656 with respect to those of the geographical and temporal weighted regression (GTWR) model for the three simulation datasets. Moreover, compared to the MGWR and GTWR models, the MGTWR model obtained the lowest AIC value and mean square error (MSE) and the highest coefficient of determination (R2) and adjusted coefficient of determination (R2adj). In addition, our experiments proved the existence of both global stationarity and spatiotemporal non-stationarity, as well as the practical ability of the proposed method.
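The local-fitting idea underlying this family of models can be illustrated with a bare-bones geographically weighted regression: each location gets its own least-squares fit, with observations down-weighted by a Gaussian kernel on spatial distance. This is a sketch on synthetic data, not the MGTWR two-stage calibration itself:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: the slope of y on x drifts with the east-west coordinate,
# i.e. the regression relationship is spatially non-stationary.
coords = rng.uniform(0.0, 10.0, size=(100, 2))
x = rng.normal(size=100)
beta_true = 1.0 + 0.2 * coords[:, 0]              # spatially varying slope
y = beta_true * x + rng.normal(0.0, 0.1, 100)

def gwr_at(point, coords, x, y, bandwidth=2.0):
    """Weighted least squares at one location, with a Gaussian kernel on
    spatial distance so that nearby observations dominate the local fit."""
    d = np.linalg.norm(coords - point, axis=1)
    w = np.exp(-((d / bandwidth) ** 2))
    X = np.column_stack([np.ones_like(x), x])
    Xw = X * w[:, None]
    return np.linalg.solve(Xw.T @ X, Xw.T @ y)    # [intercept, local slope]

# Local slope estimates at opposite ends of the study area differ, which a
# single global regression would average away.
slope_west = gwr_at(np.array([1.0, 5.0]), coords, x, y)[1]
slope_east = gwr_at(np.array([9.0, 5.0]), coords, x, y)[1]
```

GTWR extends the kernel to a spatio-temporal distance, and MGTWR additionally holds some coefficients globally constant, which is why its calibration needs the two-stage least squares described above.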

24 citations


Journal ArticleDOI
TL;DR: This article examined the effect of using bootstrap weights to account for complex sample design in analyses of Survey of Consumer Finances (SCF) datasets and found no substantial differences between the unweighted and the weighted analyses.
Abstract: We examined the effects of using bootstrap weights to account for the complex sample design in analyses of Survey of Consumer Finances (SCF) datasets. No article published in this journal that has used the SCF has mentioned the issue of complex sample designs. We compared results obtained without weights and with application of population and bootstrap weights in a logistic regression, and found no substantial differences between the unweighted and the weighted analyses. We also compared results for an ordinary least squares regression, and found few differences between unweighted and weighted models. Unweighted regressions produced more conservative significance tests than their weighted counterparts, and some econometricians have suggested that unweighted analyses are better for hypothesis testing. If estimation of the magnitudes of effects is important, weighted regression may be better because it produces consistent estimators. Researchers should be cautious in drawing conclusions when weighted and unweighted effects are substantially different. Lindamood, Hanna, and Bi (2007) reviewed articles that used the Survey of Consumer Finances (SCF) datasets and appeared in the Journal of Consumer Affairs. They examined the papers for lack of transparency with respect to a number of methodological issues, including weighting of multivariate analyses and the use of the Repeated Implicate Inference (RII) method. However, no articles in this journal that have used SCF datasets have included a discussion of the issue of complex sample designs (Nielsen and Seay 2014; Nielsen et al. 2009). What is the effect on standard error estimates and hypothesis testing when complex sample designs are ignored? Would a consideration of complex sample designs have changed the major conclusions of analyses of SCF datasets?
For our comparisons, we used the logit model of Yuh and Hanna (2010), who analyzed a combination of the 1995-2004 SCF datasets, and employed a normative economic framework to create their hypotheses. We also used an ordinary least squares (OLS) model similar to the Yuh and Hanna logit analyses. We used the 2010 SCF dataset, and slightly modified some of their specifications. We focused on providing comparisons between an unweighted multivariate analysis of a SCF dataset, one that applies population weights only, and one that uses both population and bootstrap replicate weights. We suggest guidelines for how to choose between unweighted and weighted models. BRIEF LITERATURE REVIEW The issue of weighting of multivariate analyses has been controversial. Winship and Radbill (1994) stated that the decision to use sampling weights in regression analysis is complicated, and unweighted regression is preferred if the sampling weights are a function of the independent variables. Deaton (1997, 66) mentioned "the old and still controversial issue of whether survey weights should be used in regression." He stated that the answer should be based on the purpose of the regression, and that "the strongest argument for weighted regression comes from those who regard regression as descriptive, not structural" (Deaton 1997, 71). He suggested that researchers adopt the approach taken by DuMouchel and Duncan (1983) to compare weighted and unweighted estimates (Deaton 1997, 72). Lindamood et al. (2007) compared unweighted versus population-weighted analyses of an SCF dataset for three different logistic regressions. They reported that of 99 coefficient estimates for the three logits, nine had a difference between unweighted and weighted in terms of whether the statistical significance level was less than .05 (nine were significant in the weighted model but not significant in the unweighted model). 
Only one of the 99 coefficients had a statistically significant effect in the unweighted estimate and a nonsignificant effect in the weighted estimate. They cited Deaton's (1997, 66-73) discussion that weighted regression analysis is suspect for hypothesis testing on datasets with endogenous weights, and recommended that if hypothesis testing is the main research focus, unweighted regression analysis should be used for SCF datasets. …
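The unweighted-versus-weighted comparison at the heart of this debate is easy to reproduce in miniature. A sketch with made-up survey-style data (not the SCF, and OLS in place of the logit, to keep it self-contained):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic survey-style data with hypothetical population weights
# (e.g. inverse sampling probabilities), unrelated to the model here.
n = 500
x = rng.normal(size=n)
y = 2.0 + 0.5 * x + rng.normal(0.0, 1.0, n)
weights = rng.uniform(0.5, 2.0, n)

X = np.column_stack([np.ones(n), x])

def least_squares(X, y, w=None):
    """Ordinary or weighted least squares via the normal equations."""
    if w is None:
        w = np.ones(len(y))
    Xw = X * w[:, None]
    return np.linalg.solve(Xw.T @ X, Xw.T @ y)

beta_unweighted = least_squares(X, y)
beta_weighted = least_squares(X, y, weights)

# When the weights are unrelated to the regressors, the two slope estimates
# agree closely -- the situation in which Winship and Radbill (1994) argue
# unweighted regression is preferable.
diff = abs(beta_weighted[1] - beta_unweighted[1])
```

The interesting cases in the paper are exactly those where the weights correlate with the covariates or the outcome, so the two estimates diverge and the DuMouchel-Duncan style comparison becomes informative.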

18 citations


Journal ArticleDOI
TL;DR: This paper built a GDP estimation model based on the NTL data for each year with each method, then applied each model to the other 12 years to evaluate time-series transferability, revealing that the performances of the models differ greatly across years and methods.
Abstract: Despite the fact that economic data are of great significance in the assessment of human socioeconomic development, the application of this data has been hindered partly due to the unreliable and inefficient economic censuses conducted in developing countries. The night-time light (NTL) imagery from the Defense Meteorological Satellite Program’s Operational Linescan System (DMSP/OLS) provides one of the most important ways to evaluate an economy with low cost and high efficiency. However, little research has addressed the transferability of the estimation across years. Based on the entire DN series from 0 to 63 of NTL data and GDP data in 31 provinces of mainland China from 2000 to 2012, this paper aims to study the transferability of economy estimation across years, with four linear and non-linear data mining methods, including the Multiple Linear Regression (MLR), Local Weighted Regression (LWR), Partial Least Squares Regression (PLSR), and Support Vector Machine Regression (SVMR). We firstly built up the GDP estimation model based on the NTL data in each year with each method respectively, then applied each model to the other 12 years for the evaluation of the time series transferability. Results revealed that the performances of models differ greatly across years and methods: PLSR (mean of ) and SVMR (mean of ) are superior to MLR (mean of ) and LWR (mean of ) for model calibration; only PLSR (mean of , mean of ) holds a strong transferability among different years; the frequency of three DN sections of (0–1), (4–16), and (57–63) are especially important for economy estimation. Such results are expected to provide a more comprehensive understanding of the NTL, which can be used for economy estimation across years.

14 citations


Journal ArticleDOI
01 Dec 2017-Test
TL;DR: In this paper, the authors used a Bayesian version of forward plots to exhibit the presence of multiple outliers in a data set from banking with 1903 observations and nine explanatory variables and showed the clear advantages from including prior information in the forward search.
Abstract: The frequentist forward search yields a flexible and informative form of robust regression. The device of fictitious observations provides a natural way to include prior information in the search. However, this extension is not straightforward, requiring weighted regression. Bayesian versions of forward plots are used to exhibit the presence of multiple outliers in a data set from banking with 1903 observations and nine explanatory variables which shows, in this case, the clear advantages from including prior information in the forward search. Use of observation weights from frequentist robust regression is shown to provide a simple general method for robust Bayesian regression.

14 citations


Journal ArticleDOI
TL;DR: A comparison of geographically weighted regression-based models for modeling fire risk at the city scale shows that road density and the spatial distribution of enterprises have the strongest influences on fire risk, implying that governments should focus on areas where roads and enterprises are densely clustered.
Abstract: An increasing number of fires are occurring with the rapid development of cities, resulting in increased risk for human beings and the environment. This study compares geographically weighted regression-based models, including geographically weighted regression (GWR) and geographically and temporally weighted regression (GTWR), which integrates spatial and temporal effects and global linear regression models (LM) for modeling fire risk at the city scale. The results show that the road density and the spatial distribution of enterprises have the strongest influences on fire risk, which implies that we should focus on areas where roads and enterprises are densely clustered. In addition, locations with a large number of enterprises have fewer fire ignition records, probably because of strict management and prevention measures. A changing number of significant variables across space indicate that heterogeneity mainly exists in the northern and eastern rural and suburban areas of Hefei city, where human-related facilities or road construction are only clustered in the city sub-centers. GTWR can capture small changes in the spatiotemporal heterogeneity of the variables while GWR and LM cannot. An approach that integrates space and time enables us to better understand the dynamic changes in fire risk. Thus governments can use the results to manage fire safety at the city scale.

14 citations


Journal ArticleDOI
TL;DR: In this paper, the authors investigated the spatiotemporal characteristics and the dominating factors of China's province-level carbon intensity in the construction industry from 2005 to 2014, which is aimed at providing a scientific basis for government while implementing a regionaloriented carbon emissions reduction strategy.
Abstract: Climate change continuously threatens sustainable development. As the largest energy consumer and carbon emitter in the world, China is facing increasing pressure to cut carbon emissions. Based on Moran's I index and geographically weighted regression, this paper investigates the spatiotemporal characteristics and the dominating factors of China's province-level carbon intensity in the construction industry from 2005 to 2014, aiming to provide a scientific basis for government in implementing a region-oriented carbon emissions reduction strategy. The empirical results are as follows. Firstly, carbon intensity in the construction industry of each province decreased over the past 10 years. Secondly, provincial carbon intensity in this sector shows significant positive spatial autocorrelation, and the degree of spatial clustering of carbon intensity tended to weaken in this period. Thirdly, according to the analysis of the geographically weighted regression (GWR) model, carbon intensity is positively affected by energy intensity, while labor input and production efficiency both have negative effects; in particular, the regression coefficient of labor input is almost twice as large as those of the other two factors. The results reveal a significant spatial disparity of these three factors across provinces.

Journal ArticleDOI
TL;DR: This paper improves BVSPLS with a novel selection mechanism that sorts the weighted regression coefficients; the importance of each variable in the sorted list is then evaluated using the root mean square error of prediction (RMSEP) criterion at each iteration step.

Journal ArticleDOI
TL;DR: In this article, a so-called GWGlasso method was proposed for structure identification and variable selection in GWR models. The method penalizes the loss function of the local-linear estimation of the GWR model through the coefficients and their partial derivatives, in the manner of the adaptive group lasso, and can simultaneously identify spatially varying coefficients, nonzero constant coefficients, and zero coefficients.
Abstract: Geographically weighted regression (GWR) is an important tool for exploring spatial non-stationarity of a regression relationship, in which whether a regression coefficient really varies over space is especially important in drawing valid conclusions on the spatial variation characteristics of the regression relationship. This paper proposes a so-called GWGlasso method for structure identification and variable selection in GWR models. This method penalizes the loss function of the local-linear estimation of the GWR model by the coefficients and their partial derivatives in the way of the adaptive group lasso and can simultaneously identify spatially varying coefficients, nonzero constant coefficients and zero coefficients. Simulation experiments are further conducted to assess the performance of the proposed method and the Dublin voter turnout data set is analysed to demonstrate its application.

Posted Content
TL;DR: Bayesian versions of forward plots are used to exhibit the presence of multiple outliers in a data set from banking with 1903 observations and nine explanatory variables which shows the clear advantages from including prior information in the forward search.
Abstract: The frequentist forward search yields a flexible and informative form of robust regression. The device of fictitious observations provides a natural way to include prior information in the search. However, this extension is not straightforward, requiring weighted regression. Bayesian versions of forward plots are used to exhibit the presence of multiple outliers in a data set from banking with 1903 observations and nine explanatory variables which shows, in this case, the clear advantages from including prior information in the forward search. Use of observation weights from frequentist robust regression is shown to provide a simple general method for robust Bayesian regression.

Journal ArticleDOI
30 Nov 2017-Cauchy
TL;DR: Geographically Weighted Regression (GWR) modeling with a Fixed Gaussian Kernel weighting showed that all predictor variables affect the number of dengue fever patients, whereas with the Queen Contiguity weighting not all predictor variables affect the number of dengue fever patients.
Abstract: Regression analysis is a method for determining the effect of predictor variables on a response, yet simple regression does not consider the different properties of each location. Geographically Weighted Regression (GWR) is a pointwise technique that turns a simple regression model into a weighted regression model. The purpose of this study is to build GWR models for cases of dengue fever patients using Fixed Gaussian Kernel and Queen Contiguity weightings, and to determine the better weighting, between the Euclidean-distance-based kernel and Queen Contiguity, based on the value of R2. Results showed that GWR with the Fixed Gaussian Kernel weighting found all predictor variables to affect the number of dengue fever patients, whereas with the Queen Contiguity weighting not all predictor variables had an effect. Based on the R2 values, the Fixed Gaussian Kernel weighting is the better choice.
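The two weighting schemes being compared can be written down directly: a smooth distance-decay kernel versus a binary neighbour indicator. A minimal sketch (the bandwidth and the 4-region contiguity matrix are made up for illustration):

```python
import numpy as np

def fixed_gaussian_kernel(d, bandwidth):
    """GWR weight from spatial (e.g. Euclidean) distance: smooth decay,
    never exactly zero, same bandwidth at every location ("fixed")."""
    return np.exp(-0.5 * (d / bandwidth) ** 2)

def queen_contiguity_weights(adjacency, i):
    """GWR weight from a 0/1 Queen contiguity matrix: regions sharing an
    edge or a corner with region i get weight 1, all others weight 0."""
    return adjacency[i].astype(float)

distances = np.array([0.0, 1.0, 2.0, 5.0])
w_gauss = fixed_gaussian_kernel(distances, bandwidth=2.0)

# Hypothetical 4-region chain: each region touches only its neighbours.
adjacency = np.array([[0, 1, 0, 0],
                      [1, 0, 1, 0],
                      [0, 1, 0, 1],
                      [0, 0, 1, 0]])
w_queen = queen_contiguity_weights(adjacency, 0)
```

The contrast explains the study's finding: the Gaussian kernel lets every observation contribute (graded by distance), while Queen Contiguity discards all non-adjacent regions, which can leave too little local information for some predictors to register as significant.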

DOI
01 Jan 2017
TL;DR: A unified view of high-dimensional bridge regression is presented.
Abstract: A unified view of high-dimensional bridge regression

Journal ArticleDOI
TL;DR: Exact expressions for the variances of the probability-plotting residuals are derived and used to perform weighted regression; in large samples maximum likelihood (ML) estimation performs best, but in smaller samples the weighted regression outperforms ML with respect to bias and mean square error.
Abstract: Estimation for the log-logistic and Weibull distributions can be performed using the equations for probability plotting, a technique that often outperforms maximum likelihood (ML) estimation in small samples. This leads to a highly heteroskedastic regression problem. Exact expressions for the variances of the residuals are derived, which can be used to perform weighted regression. In large samples the ML performs best, but it is shown that in smaller samples the weighted regression outperforms ML with respect to bias and mean square error.
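The probability-plotting estimator for the Weibull case works by linearising the CDF, F(x) = 1 - exp(-(x/λ)^k), to ln(-ln(1-F)) = k·ln x - k·ln λ, then regressing. A sketch with simulated data; note the weights here are a simple heuristic that down-weights the extreme plotting positions, not the exact residual variances the paper derives:

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated Weibull sample (shape k=2, scale lam=10), sorted for plotting.
n = 30
sample = np.sort(rng.weibull(2.0, n) * 10.0)

# Probability-plotting positions (median ranks, Benard's approximation).
i = np.arange(1, n + 1)
F = (i - 0.3) / (n + 0.4)

# Linearised Weibull CDF: ln(-ln(1-F)) = k*ln(x) - k*ln(lam)
yv = np.log(-np.log(1.0 - F))
xv = np.log(sample)

# Heuristic weights down-weighting the tails, where the plotting residuals
# have the largest variance (an assumption standing in for the paper's
# exact variance expressions).
w = F * (1.0 - F)

slope, intercept = np.polyfit(xv, yv, 1, w=w)
k_hat = slope                        # shape estimate
lam_hat = np.exp(-intercept / slope) # scale estimate
```

Swapping these heuristic weights for the exact residual variances is precisely the refinement the paper contributes; the regression mechanics stay the same.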

Book ChapterDOI
06 Sep 2017
TL;DR: The use of the PAELLA algorithm in the context of weighted regression is reported as a natural extension of previous work, with an experiment comparing this new approach against probabilistic macro sampling.
Abstract: This paper reports the use of the PAELLA algorithm in the context of weighted regression. First, an experiment comparing this new approach versus probabilistic macro sampling is reported, as a natural extension of previous work. Then another different experiment is reported where this approach is tested against a state of the art regression technique. Both experiments provide satisfactory results.

Journal ArticleDOI
TL;DR: Zhou et al. proposed a local attribute-similarity weighted regression (LASWR) algorithm, which characterizes the similarity among spatial points based on non-spatial attributes better than on spatial distance.
Abstract: Existing spatial interpolation methods estimate the property values of an unmeasured point with observations of its closest points based on spatial distance (SD). However, considering that properties of the neighbors spatially close to the unmeasured point may not be similar, the estimation of properties at the unmeasured one may not be accurate. The present study proposed a local attribute-similarity weighted regression (LASWR) algorithm, which characterized the similarity among spatial points based on non-spatial attributes (NSA) better than on SD. The real soil datasets were used in the validation. Mean absolute error (MAE) and root mean square error (RMSE) were used to compare the performance of LASWR with inverse distance weighting (IDW), ordinary kriging (OK) and geographically weighted regression (GWR). Cross-validation showed that LASWR generally resulted in more accurate predictions than IDW and OK and produced a finer-grained characterization of the spatial relationships between SOC and environmental variables relative to GWR. The present research results suggest that LASWR can play a vital role in improving prediction accuracy and characterizing the influence patterns of environmental variables on response variable. Keywords: attribute similarity, geographically weighted regression, local regression, spatial interpolation DOI: 10.25165/j.ijabe.20171005.2209 Citation: Zhou J G, Dong D M, Li Y Y. Local attribute-similarity weighting regression algorithm for interpolating soil property values. Int J Agric & Biol Eng, 2017; 10(5): 95–103.
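The contrast between distance-based and attribute-similarity weighting can be sketched as two weight schemes for the same set of neighbours. Synthetic points, not the authors' LASWR implementation; the Gaussian kernel on attribute distance is an assumed stand-in for their similarity measure:

```python
import numpy as np

def idw_weights(d, power=2.0, eps=1e-12):
    """Inverse distance weighting: spatially nearer points dominate."""
    w = 1.0 / (d ** power + eps)
    return w / w.sum()

def attribute_similarity_weights(attrs, target_attrs):
    """Weight neighbours by similarity of non-spatial attributes (here a
    Gaussian kernel on attribute-space distance) rather than map distance."""
    d_attr = np.linalg.norm(attrs - target_attrs, axis=1)
    w = np.exp(-0.5 * d_attr ** 2)
    return w / w.sum()

# Three observed points: the third is far away in space but has attributes
# (e.g. elevation, land use indices) very similar to the target location.
d_spatial = np.array([1.0, 2.0, 5.0])
attrs = np.array([[3.0, 1.0], [2.5, 0.5], [1.1, 0.9]])
target = np.array([1.0, 1.0])

w_idw = idw_weights(d_spatial)
w_sim = attribute_similarity_weights(attrs, target)
```

IDW puts most weight on the spatially closest point, while the similarity scheme favours the attribute-wise closest one — exactly the situation the abstract argues makes distance-only interpolation inaccurate.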


Patent
07 Sep 2017
TL;DR: In this paper, a local regression model was proposed to predict a modeling error of a state of a product such as a cooling stop temperature of a steel sheet with high accuracy while preventing dispersion of regression calculation and keeping significance of local weighted regression.
Abstract: PROBLEM TO BE SOLVED: To predict a modeling error of a state of a product, such as the cooling stop temperature of a steel sheet, with high accuracy while preventing dispersion of the regression calculation and keeping the significance of local weighted regression. SOLUTION: A local regression model generation section 10 extracts achievement data having a manufacture condition similar to that of a prediction object material as nearby teacher data from a database 9, and uses the nearby teacher data to generate a local weighted regression model for calculating the error of a prediction value of the cooling stop temperature of the prediction object material (called the modeling error prediction value) to be calculated by a temperature prediction model; the modeling error prediction value is then calculated by the local weighted regression model. In doing so, a weight coefficient w_i (i is a subscript indicating the steel sheet) of the local weighted regression model is set by an expression (101) using a distance d_i based on a distance function of the nearby teacher data, and an average value μ and a standard deviation σ of the distance. SELECTED DRAWING: Figure 1

Posted Content
TL;DR: In this paper, the authors explore ways to fit mixed effects models to tall data, including predictors of interest and confounding factors as covariates, and including random intercepts to allow for heterogeneity in outcome among practices.
Abstract: Motivated by two case studies using primary care records from the Clinical Practice Research Datalink, we describe statistical methods that facilitate the analysis of tall data, with very large numbers of observations. Our focus is on investigating the association between patient characteristics and an outcome of interest, while allowing for variation among general practices. We explore ways to fit mixed effects models to tall data, including predictors of interest and confounding factors as covariates, and including random intercepts to allow for heterogeneity in outcome among practices. We introduce: (1) weighted regression and (2) meta-analysis of estimated regression coefficients from each practice. Both methods reduce the size of the dataset, thus decreasing the time required for statistical analysis. We compare the methods to an existing subsampling approach. All methods give similar point estimates, and weighted regression and meta-analysis give similar standard errors for point estimates to analysis of the entire dataset, but the subsampling method gives larger standard errors. Where all data are discrete, weighted regression is equivalent to fitting the mixed model to the entire dataset. In the presence of a continuous covariate, meta-analysis is useful. Both methods are easy to implement in standard statistical software.
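The weighted-regression device for tall data is easy to demonstrate: when all covariates are discrete, collapsing duplicate rows into (unique pattern, count, mean outcome) triples and fitting with the counts as weights reproduces the full-data least-squares fit exactly, as the abstract states. A sketch with made-up data (OLS in place of the mixed model, to keep it self-contained):

```python
import numpy as np

rng = np.random.default_rng(4)

# A "tall" dataset whose covariates are all discrete, so rows repeat.
n = 10000
x1 = rng.integers(0, 2, n)                        # binary predictor
x2 = rng.integers(0, 3, n)                        # three-level predictor
y = 1.0 + 0.5 * x1 - 0.2 * x2 + rng.normal(0.0, 1.0, n)

X = np.column_stack([np.ones(n), x1, x2])

def wls(X, y, w):
    """Weighted least squares via the normal equations."""
    Xw = X * w[:, None]
    return np.linalg.solve(Xw.T @ X, Xw.T @ y)

beta_full = wls(X, y, np.ones(n))                 # fit on all 10000 rows

# Collapse to unique covariate patterns: one row per pattern, carrying the
# pattern count as a weight and the within-pattern mean of y as the outcome.
patterns, inverse, counts = np.unique(
    X, axis=0, return_inverse=True, return_counts=True)
inverse = inverse.ravel()
y_mean = np.bincount(inverse, weights=y) / counts
beta_collapsed = wls(patterns, y_mean, counts.astype(float))
# beta_collapsed reproduces beta_full exactly, from only 2*3 = 6 rows.
```

The equivalence is algebraic: grouping identical rows only regroups the sums in the normal equations. A continuous covariate breaks the grouping, which is why the abstract turns to meta-analysis for that case.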

Journal ArticleDOI
11 Dec 2017
TL;DR: In this article, the authors used spatial regression models for variable selection, estimation, and prediction, compared them to conventional regression models, and found that the spatial regressions performed relatively better based on AIC value and R-squared.
Abstract: Trachoma is a neglected tropical disease and a leading infectious cause of blindness; in Kenya it accounts for 19% of blindness. Past research on associated risk factors in Kenya has relied on traditional impact survey data only; however, the non-uniform distribution of prevalence in suspected endemic areas, despite similar intervention measures, calls for including environmental and climatic potential risk factors in modeling trachoma transmission. Our study therefore aims at determining the prevalence of trachoma and its associated risk factors by use of spatial regression models for variable selection, estimation, and prediction, compared to conventional regression models. Using data from trachoma surveys and remotely sensed environmental and climatic data, spatial and non-spatial regression models were implemented. Regression results were then utilized in spatial interpolation using kriging and geographically weighted regression. Rainfall, presence of flies on children's faces, dirty faces of children, and aridity were found to be the significant variables contributing to trachoma transmission. The spatial lag model had the lowest Akaike information criterion value of 385.08 and hence performed relatively better than the rest of the regression models. In estimating prevalence in places where data were not collected, multivariate regression kriging did slightly better than geographically weighted regression. The study shows that spatial regression models perform better than conventional regression models both in variable selection and in spatial prediction of trachoma prevalence. Among the spatial regressions, the significant variables obtained were similar, though the spatial lag model performed relatively better in variable selection based on AIC value and R-squared. There was minimal variation between the two spatial interpolation methods.


Book ChapterDOI
17 Jun 2017
TL;DR: In this paper, an improved response surface method based on weighted regression for reliability analysis is presented; the new weight function is built on parallel circuit theory, because the closer sample points are to the failure curve (the smaller the branch resistance), the higher the weights given (the greater the branch current).
Abstract: For structural reliability analysis, the response surface method is popularly used to reduce the computational effort of numerical analysis. The usual approach in the response surface method is least squares regression. To give higher weight to the points closer to the failure curve, an improved response surface method based on weighted regression for reliability analysis is presented. The new weight function is built on parallel circuit theory: the closer sample points are to the failure curve (the smaller the branch resistance), the higher the weights given (the greater the branch current). Numerical applications are provided to indicate the significance of the presented method.
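The parallel-circuit analogy above maps directly to code: treat each sample point's distance from the failure surface, |g(x)|, as a branch resistance, so its weight — the branch current at unit voltage — is the reciprocal. A minimal sketch; the paper's exact weight expression may differ:

```python
import numpy as np

def parallel_circuit_weights(g_values, eps=1e-8):
    """Weights for weighted-regression response surface fitting, by analogy
    with a parallel circuit: |g(x)| (the limit-state function's distance
    from the failure surface g = 0) plays the role of a branch resistance,
    so the weight is the corresponding conductance 1/|g|, normalised."""
    conductance = 1.0 / (np.abs(g_values) + eps)  # eps guards g = 0 exactly
    return conductance / conductance.sum()

# Limit-state values at four sample points; the second point lies closest
# to the failure curve and therefore receives the largest weight.
g = np.array([2.0, 0.1, 1.0, 3.0])
w = parallel_circuit_weights(g)
```

These weights would then enter a standard weighted least-squares fit of the response surface, concentrating the fit's accuracy near g = 0, where the reliability estimate is actually decided.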

Journal ArticleDOI
TL;DR: This paper explores the application of ordinal logistic and Poisson regression as alternatives to ordinary least squares estimation for modeling operational performance in a military testing environment.
Abstract: Historically, the application of logistic and Poisson regression has been focused in the social science and medical fields where the response variable typically has only a few possible outcomes. These techniques are not commonly applied to characterize military operations even though response variables that measure success or failure are commonly encountered in this field. This paper explores the application of ordinal logistic and Poisson regression as alternatives to ordinary least squares estimation for modeling operational performance in a military testing environment. The operational test planners chose a nested face-centered experimental design, which was executed to collect test data. Three modeling techniques were employed in the analysis: multiple linear regression, ordinal logistic regression, and Poisson regression. The purpose of the study was to determine which regression technique best fits the test data. Cross validation and model goodness comparison were accomplished by assessing that the model fits for each model type in combination with a comparison of significant main effects and interactions. Finally, contrasts are provided relative to the ease of implementing each technique. Copyright © 2016 John Wiley & Sons, Ltd.