
Showing papers in "Statistics, Optimization and Information Computing in 2015"


Journal ArticleDOI
TL;DR: In this article, an overview of optimal estimation of linear functionals which depend on the unknown values of a stochastic sequence is provided, based on observations of the sequence without noise as well as observations of the sequence with a stationary noise.
Abstract: This survey provides an overview of optimal estimation of linear functionals which depend on the unknown values of a stationary stochastic sequence. Estimates can be obtained based on observations of the sequence without noise as well as observations of the sequence with a stationary noise. Formulas for calculating the spectral characteristics and the mean-square errors of the optimal estimates of functionals are derived in the case of spectral certainty, where the spectral densities of the sequences are exactly known. In the case of spectral uncertainty, where the spectral densities of the sequences are not known exactly but sets of admissible spectral densities are given, the minimax-robust method of estimation is applied. Formulas that determine the least favourable spectral densities and the minimax spectral characteristics of estimates are presented for some special classes of admissible spectral densities.

40 citations


Journal ArticleDOI
TL;DR: In this article, the authors proposed an inertial proximal point method (iPPM) with alternating inertial steps to solve the maximal monotone operator inclusion problem and showed that the even subsequence generated by this method is contractive with the set of solutions.
Abstract: The proximal point method (PPM) for solving maximal monotone operator inclusion problem is a highly powerful tool for algorithm design, analysis and interpretation. To accelerate convergence of the PPM, inertial PPM (iPPM) was proposed in the literature. In this note, we point out that some of the attractive properties of the PPM, e.g., the generated sequence is contractive with the set of solutions, do not hold anymore for iPPM. To partially inherit the advantages of the PPM and meanwhile incorporate inertial extrapolation steps, we propose an iPPM with alternating inertial steps. Our analyses show that the even subsequence generated by the proposed iPPM is contractive with the set of solutions. Moreover, we establish global convergence result under much relaxed conditions on the inertial extrapolation stepsizes, e.g., monotonicity is no longer needed and the stepsizes are significantly enlarged compared to existing methods. Furthermore, we establish certain $o(1/k)$ convergence rate results, where $k$ denotes the iteration counter. These features are new to inertial type PPMs.

23 citations
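The alternating-inertia idea above can be sketched numerically. Below, a minimal sketch (not the authors' exact scheme) applies a proximal point iteration, with inertial extrapolation only on even steps, to the monotone operator T(x) = Ax - c; the operator, step size lam and inertia weight alpha are illustrative choices.

```python
import numpy as np

# Proximal point method with ALTERNATING inertial steps, applied to the
# monotone operator T(x) = A x - c (A symmetric positive definite), whose
# zero solves A x = c.  Inertia is applied only on even iterations.
def alternating_inertial_ppm(A, c, x0, lam=1.0, alpha=0.5, iters=200):
    n = len(c)
    J = np.linalg.inv(np.eye(n) + lam * A)   # resolvent (I + lam*T)^{-1}
    x_prev, x = x0.copy(), x0.copy()
    for k in range(iters):
        if k % 2 == 0:                       # inertial extrapolation on even steps
            y = x + alpha * (x - x_prev)
        else:                                # plain PPM step on odd steps
            y = x
        x_prev, x = x, J @ (y + lam * c)     # x_{k+1} = (I + lam*A)^{-1}(y + lam*c)
    return x

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = M @ M.T + np.eye(4)                      # SPD, so T is strongly monotone
c = rng.standard_normal(4)
x = alternating_inertial_ppm(A, c, np.zeros(4))
```

For a strongly monotone operator the resolvent is a contraction, so the even subsequence (and here the whole sequence) converges to the zero of T.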


Journal ArticleDOI
TL;DR: Experimental results indicate that the proposed hybrid algorithm (TLBMO), which is established by combining the advantages of Teaching-learning-based optimization (TLBO) and Bird Mating Optimizer, performs better than other existing algorithms for global numerical optimization.
Abstract: Bird Mating Optimizer (BMO) is a novel meta-heuristic optimization algorithm inspired by the intelligent mating behavior of birds. However, it is still insufficient in speed of convergence and quality of solution. To overcome these drawbacks, this paper proposes a hybrid algorithm (TLBMO), which is established by combining the advantages of Teaching-learning-based optimization (TLBO) and Bird Mating Optimizer (BMO). The performance of TLBMO is evaluated on 23 benchmark functions and compared with seven state-of-the-art approaches, namely BMO, TLBO, Artificial Bee Colony (ABC), Particle Swarm Optimization (PSO), Fast Evolutionary Programming (FEP), Differential Evolution (DE), and Group Search Optimization (GSO). Experimental results indicate that the proposed method performs better than the other existing algorithms for global numerical optimization.

22 citations
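To make the hybrid's TLBO component concrete, here is a minimal sketch of plain TLBO (teacher and learner phases only; the BMO mating operators of the hybrid are omitted) on the sphere benchmark. Population size, bounds and iteration count are arbitrary illustration values, not the paper's settings.

```python
import numpy as np

def sphere(x):
    # classic benchmark f(x) = sum(x_i^2), minimum 0 at the origin
    return np.sum(x * x, axis=-1)

def tlbo(f, dim=5, pop=20, iters=200, lo=-5.0, hi=5.0, seed=1):
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, (pop, dim))
    for _ in range(iters):
        # Teacher phase: move the class toward the best learner
        teacher = X[np.argmin(f(X))]
        TF = rng.integers(1, 3)                      # teaching factor in {1, 2}
        Xnew = np.clip(X + rng.random((pop, dim)) * (teacher - TF * X.mean(axis=0)),
                       lo, hi)
        better = f(Xnew) < f(X)
        X[better] = Xnew[better]                     # greedy selection
        # Learner phase: learn pairwise from a random partner
        j = rng.permutation(pop)
        step = np.where((f(X) < f(X[j]))[:, None], X - X[j], X[j] - X)
        Xnew = np.clip(X + rng.random((pop, dim)) * step, lo, hi)
        better = f(Xnew) < f(X)
        X[better] = Xnew[better]
    return X[np.argmin(f(X))]

best = tlbo(sphere)
```

On a unimodal benchmark like the sphere function this simple loop converges quickly; the hybrid's claimed advantage shows up on harder multimodal test functions.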


Journal ArticleDOI
TL;DR: In this article, the problem of mean-square optimal estimation of a linear functional which depends on the unknown values of a continuous-time random process with stationary nth increments, from observations of the process at time points t ∈ R \ [0;T], is investigated under the conditions of spectral certainty and spectral uncertainty.
Abstract: The problem of mean-square optimal estimation of the linear functional $A_T\xi = \int_0^T a(t)\xi(t)\,dt$, which depends on the unknown values of a continuous-time random process ξ(t), t ∈ R, with stationary nth increments, from observations of the process ξ(t) at time points t ∈ R \ [0;T], is investigated under the condition of spectral certainty as well as under the condition of spectral uncertainty. Formulas for calculating the mean-square error and the spectral characteristic of the optimal linear estimate of the functional are derived under the condition of spectral certainty, where the spectral density of the process is exactly known. In the case of spectral uncertainty, where the spectral density of the process is not exactly known but a class of admissible spectral densities is given, relations that determine the least favourable spectral density and the minimax spectral characteristic are specified.

20 citations


Journal ArticleDOI
TL;DR: A revised Optimal LDF by Integer Programming (Revised IP-OLDF) based on the minimum number of misclassifications (minimum NM) criterion resolves three problems entirely, and a k-fold cross-validation method is proposed.
Abstract: Fisher proposed a linear discriminant function (Fisher's LDF). From 1971, we analysed electrocardiogram (ECG) data in order to develop diagnostic logic for distinguishing normal from abnormal symptoms using Fisher's LDF and a quadratic discriminant function (QDF). Our four years of research proved inferior to the decision-tree logic developed by the medical doctor. After this experience, we discriminated many data sets and identified four problems with discriminant analysis. A revised Optimal LDF by Integer Programming (Revised IP-OLDF) based on the minimum number of misclassifications (minimum NM) criterion resolves three of these problems entirely [13, 18]. In this research, we discuss the fourth problem of discriminant analysis: there are no standard errors (SEs) of the error rate and discriminant coefficients. We propose a k-fold cross-validation method. This method offers a model selection technique and 95% confidence intervals (CIs) of error rates and discriminant coefficients.

16 citations


Journal ArticleDOI
TL;DR: In this article, a new model selection procedure for discriminant analysis was proposed to obtain the mean error rate in the validation samples (M2) and the 95% confidence interval (CI) of the discriminant coefficients.
Abstract: In this paper, we focus on a new model selection procedure for discriminant analysis. Combining a re-sampling technique with k-fold cross-validation, we develop a k-fold cross-validation method for small samples. By this breakthrough, we obtain the mean error rate in the validation samples (M2) and the 95% confidence interval (CI) of the discriminant coefficients. Moreover, we propose a model selection procedure in which the model with the minimum M2 is chosen as the best model. We apply this new method and procedure to the pass/fail determination of exam scores. In this case, we fix the constant to 1 for seven linear discriminant functions (LDFs), and several good results were obtained, as follows: 1) M2 of Fisher's LDF is over 4.6% worse than that of Revised IP-OLDF. 2) A soft-margin SVM with penalty c=1 (SVM1) is worse than the other mathematical programming (MP) based LDFs and logistic regression. 3) The 95% CI of the best discriminant coefficients was obtained. The seven LDFs except for Fisher's LDF are almost the same as a trivial LDF for the linearly separable model. Furthermore, if we choose the median of the coefficients of the seven LDFs except for Fisher's LDF, these are almost the same as the trivial LDF for the linearly separable model.

12 citations
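The cross-validation machinery behind these two discriminant-analysis papers can be sketched generically: per-fold validation error rates give a mean error rate (an "M2"-style quantity), and per-fold coefficients give a crude 95% interval. The simulated data, fold count and least-squares LDF below are illustrative stand-ins, not the authors' Revised IP-OLDF.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
X = np.vstack([rng.normal(0, 1, (n // 2, 2)),     # class -1
               rng.normal(2, 1, (n // 2, 2))])    # class +1
y = np.r_[-np.ones(n // 2), np.ones(n // 2)]

def fit_ldf(X, y):
    # least-squares linear discriminant: sign(w0 + w1*x1 + w2*x2)
    Z = np.c_[np.ones(len(X)), X]
    return np.linalg.lstsq(Z, y, rcond=None)[0]

k = 10
idx = rng.permutation(n)
errs, coefs = [], []
for fold in np.array_split(idx, k):
    train = np.setdiff1d(idx, fold)
    w = fit_ldf(X[train], y[train])
    pred = np.sign(np.c_[np.ones(len(fold)), X[fold]] @ w)
    errs.append(np.mean(pred != y[fold]))          # validation error of this fold
    coefs.append(w)
mean_err = np.mean(errs)                           # mean validation error rate
lo, hi = np.percentile(coefs, [2.5, 97.5], axis=0) # rough 95% interval per coefficient
```

The model-selection step then simply picks the candidate model whose mean validation error is smallest.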


Journal ArticleDOI
TL;DR: A method to estimate parameters and states of a dynamic system is proposed inspired by the parallelized ensemble Kalman filter and the polynomial chaos theory of Wiener-Askey in order to treat problems which have complex nonlinear dynamics in high dimensions.
Abstract: This article develops a methodology combining methods of numerical analysis and stochastic differential equations with computational algorithms to treat problems that have complex nonlinear dynamics in high dimensions. A method to estimate the parameters and states of a dynamic system is proposed, inspired by the parallelized ensemble Kalman filter (PEnKF) and the Wiener-Askey polynomial chaos theory. The main advantage of the proposal is that it provides a precise, efficient algorithm with low computational cost. For the analysed data, the methods provide good predictions, spatially and temporally, of the unknown precipitation states for the first 24 hours. Two goodness-of-fit measures provide confidence in the quality of the model predictions. The performance of the parallel algorithm, measured by the acceleration and efficiency factors, shows an increase of 7% in speed with respect to the sequential version and is most efficient for P = 2 threads.

10 citations
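The ensemble Kalman filter at the core of the method can be illustrated with a minimal analysis step on a toy problem: a scalar random walk observed with noise. Ensemble size, noise levels and the trajectory below are made-up illustration values; the paper's parallelization and polynomial-chaos components are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(42)
Ne, T = 200, 50
q, r = 0.1, 0.5                       # process / observation noise std devs
truth = np.cumsum(rng.normal(0, q, T))
obs = truth + rng.normal(0, r, T)

ens = rng.normal(0, 1, Ne)            # initial ensemble
est = []
for t in range(T):
    ens = ens + rng.normal(0, q, Ne)              # forecast step
    P = np.var(ens, ddof=1)                       # ensemble forecast variance
    K = P / (P + r**2)                            # Kalman gain (observation H = 1)
    perturbed = obs[t] + rng.normal(0, r, Ne)     # perturbed-observation EnKF
    ens = ens + K * (perturbed - ens)             # analysis update
    est.append(ens.mean())

rmse = np.sqrt(np.mean((np.array(est) - truth) ** 2))
```

The filtered estimate should track the truth more closely than the raw observations, whose noise level is r = 0.5.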


Journal ArticleDOI
TL;DR: In this paper, the authors choose and estimate the parameters of the Cobb-Douglas function with additive errors and multiplicative errors for some selected manufacturing industries of Bangladesh over the period 1978-79 to 2011-2012.
Abstract: In developing countries, the efficiency of economic development is determined by the analysis of industrial production. An examination of the characteristics of the industrial sector is an essential aspect of growth studies. Most developed countries are highly industrialized, in line with the maxim "the more industrialization, the more development". For proper industrialization and industrial development, we have to study the industrial input-output relationship, which leads to production analysis. For a number of reasons, econometricians believe that industrial production is the most important component of economic development: if domestic industrial production increases, GDP will increase; if the elasticity of labor is higher, employment rates will increase; and investment will increase if the elasticity of capital is higher. In this regard, this paper chooses and estimates the parameters of the Cobb-Douglas function with additive errors and with multiplicative errors for some selected manufacturing industries of Bangladesh over the period 1978-79 to 2011-2012, which should be helpful in suggesting the most suitable Cobb-Douglas production function for forecasting the production process for selected manufacturing industries of developing countries like Bangladesh. This paper also investigates the efficiency of the capital and labor elasticities of the two mentioned forms of the Cobb-Douglas production function. The estimated results show that the estimates of both capital and labor elasticities of the Cobb-Douglas production function with additive errors are more efficient than those of the Cobb-Douglas production function with multiplicative errors.

8 citations
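The distinction between the two error forms can be sketched on simulated data: under multiplicative errors, Q = A·K^a·L^b·exp(ε), the model is linear in logs and ordinary least squares recovers the elasticities; under additive errors, Q = A·K^a·L^b + ε, a nonlinear least-squares fit would be needed instead. All values below are simulated for illustration, not the paper's Bangladesh data.

```python
import numpy as np

rng = np.random.default_rng(3)
n, A, a, b = 500, 2.0, 0.4, 0.6        # true scale and elasticities
K = rng.uniform(1, 10, n)               # capital input
L = rng.uniform(1, 10, n)               # labor input
Q = A * K**a * L**b * np.exp(rng.normal(0, 0.05, n))   # multiplicative errors

# log Q = log A + a*log K + b*log L + eps  ->  OLS recovers the elasticities
Z = np.c_[np.ones(n), np.log(K), np.log(L)]
logA, ahat, bhat = np.linalg.lstsq(Z, np.log(Q), rcond=None)[0]
```

The paper's comparison then asks which error specification yields more efficient (lower-variance) elasticity estimates on real industry data.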


Journal ArticleDOI
TL;DR: In this paper, a data-based recurrence relation is used to compute a sequence of response times, and the sample means of those response times are used to estimate the true mean response times.
Abstract: In the analysis of queueing network models, the response time plays an important role in studying the various characteristics. In this paper, a data-based recurrence relation is used to compute a sequence of response times. The sample means of those response times, denoted by $\hat{r}_1$ and $\hat{r}_2$, are used to estimate the true mean response times $r_1$ and $r_2$. Further, we construct confidence intervals for the mean response times $r_1$ and $r_2$ of a two-stage open queueing network model. A numerical simulation study is conducted in order to demonstrate the performance of the proposed estimators $\hat{r}_1$ and $\hat{r}_2$ and the bootstrap confidence intervals of $r_1$ and $r_2$. We also investigate the accuracy of the different confidence intervals by calculating the coverage percentage, average length, relative coverage and relative average length.

8 citations
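The recurrence-plus-bootstrap idea can be sketched for a single stage: response times of an M/M/1-type queue are generated by the Lindley recurrence, and a percentile bootstrap gives an interval for the mean response time. Rates and sample sizes are illustrative, and the naive i.i.d. bootstrap below ignores the autocorrelation of queueing data, so it is only a sketch of the idea.

```python
import numpy as np

rng = np.random.default_rng(7)
lam, mu, n = 0.5, 1.0, 2000
inter = rng.exponential(1 / lam, n)            # inter-arrival times
serv = rng.exponential(1 / mu, n)              # service times

w = np.empty(n)                                # waiting times (Lindley recurrence)
w[0] = 0.0
for i in range(1, n):
    w[i] = max(0.0, w[i - 1] + serv[i - 1] - inter[i])
resp = w + serv                                # response time = wait + service
r_hat = resp.mean()

B = 1000                                       # percentile bootstrap for the mean
boot = np.array([rng.choice(resp, n).mean() for _ in range(B)])
ci_lo, ci_hi = np.percentile(boot, [2.5, 97.5])
```

For this M/M/1 example the true mean response time is 1/(mu - lam) = 2.0, which the sample mean approximates.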


Journal ArticleDOI
TL;DR: In this article, two new bivariate zero-inflated generalized Poisson (ZIGP) distributions were proposed by incorporating a multiplicative factor (or dependency parameter) λ, named as Type I and Type II bivariate ZIGP distributions, respectively.
Abstract: To model correlated bivariate count data with extra zero observations, this paper proposes two new bivariate zero-inflated generalized Poisson (ZIGP) distributions by incorporating a multiplicative factor (or dependency parameter) λ, named the Type I and Type II bivariate ZIGP distributions, respectively. The proposed distributions possess a flexible correlation structure and can be used to fit either positively or negatively correlated and either over- or under-dispersed count data, in contrast to existing models that can only fit positively correlated count data with over-dispersion. The two marginal distributions of the Type I bivariate ZIGP share a common zero-inflation parameter, while the two marginal distributions of the Type II bivariate ZIGP have their own zero-inflation parameters, resulting in a much wider range of applications. The important distributional properties are explored, and some useful statistical inference methods, including maximum likelihood estimation of parameters, standard error estimation, bootstrap confidence intervals and related hypothesis tests, are developed for the two distributions. A real data set is thoroughly analyzed using the proposed distributions and statistical methods. Several simulation studies are conducted to evaluate the performance of the proposed methods.

7 citations
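The shared-zero-inflation idea of the Type I construction can be illustrated by simulation. Below, plain Poisson marginals with a common shock replace the generalized Poisson of the paper, purely to keep the sketch short; φ and the intensities are arbitrary illustration values.

```python
import numpy as np

rng = np.random.default_rng(11)
n, phi = 50_000, 0.2                   # phi = shared zero-inflation probability
lam1, lam2, lam0 = 2.0, 3.0, 1.0       # lam0 = common-shock intensity

common = rng.poisson(lam0, n)
x = rng.poisson(lam1, n) + common      # positively correlated marginals
y = rng.poisson(lam2, n) + common
zero = rng.random(n) < phi             # shared inflation indicator: with
x[zero] = 0                            # probability phi BOTH counts are zero
y[zero] = 0

p0_emp = np.mean(x == 0)               # empirical zero proportion
p0_pois = np.exp(-(lam1 + lam0))       # zero probability of the Poisson base
rho = np.corrcoef(x, y)[0, 1]
```

The inflated marginal shows many more zeros than its Poisson base, and the shared mechanism induces positive correlation, matching the qualitative behaviour the paper describes.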


Journal ArticleDOI
TL;DR: The Two-Step-SDP algorithm is a new algorithm for clustering objects and dimensionality reduction, based on Semidefinite Programming models, and it is faster than the K-means algorithm and the ALS algorithm.
Abstract: Inspired by the recently proposed statistical technique called clustering and disjoint principal component analysis (CDPCA), we present in this paper a new algorithm for clustering objects and dimensionality reduction, based on Semidefinite Programming (SDP) models. The Two-Step-SDP algorithm is based on SDP relaxations of two clustering problems and on a K-means step in a reduced space. The Two-Step-SDP algorithm was implemented and tested in R, a widely used open-source software environment. Besides returning clusters of both objects and attributes, the Two-Step-SDP algorithm returns the variance explained by each component and the component loadings. Numerical experiments on different data sets show that the algorithm is quite efficient and fast. Compared to other known iterative algorithms for clustering, namely the K-means and ALS algorithms, the computational time of the Two-Step-SDP algorithm is comparable to that of the K-means algorithm, and it is faster than the ALS algorithm.
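Only the final stage of such a pipeline is sketched below: project the data onto leading principal components and run a plain K-means step in the reduced space. The SDP relaxations themselves are not reproduced, and the data, dimensions and farthest-point initialization are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(5)
centers = np.array([[0, 0, 0, 0], [5, 5, 0, 0]])
X = np.vstack([rng.normal(c, 1.0, (100, 4)) for c in centers])

Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:2].T                       # scores on the 2 leading components
expl = s[:2] ** 2 / np.sum(s ** 2)      # variance explained by each component

def kmeans(Z, k=2, iters=50):
    # farthest-point initialization keeps this sketch deterministic
    C = np.array([Z[0], Z[np.argmax(((Z - Z[0]) ** 2).sum(axis=1))]])
    for _ in range(iters):
        lab = np.argmin(((Z[:, None, :] - C) ** 2).sum(axis=-1), axis=1)
        C = np.array([Z[lab == j].mean(axis=0) for j in range(k)])
    return lab

lab = kmeans(Z)
```

With well-separated simulated groups, the K-means step in the 2-D reduced space recovers the two clusters essentially perfectly.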

Journal ArticleDOI
TL;DR: In this article, a new method of estimation based on optimal search is proposed for estimating the parameters using the marginal distributions and the concepts of maximum likelihood, spacings and least squares.
Abstract: Multivariate gamma distribution finds abundant applications in stochastic modelling, hydrology and reliability. Parameter estimation in this distribution is a challenging one as it involves many parameters to be estimated simultaneously. In this paper, the form of multivariate gamma distribution proposed by Mathai and Moschopoulos [10] is considered. This form has nice properties in terms of marginal and conditional densities. A new method of estimation based on optimal search is proposed for estimating the parameters using the marginal distributions and the concepts of maximum likelihood, spacings and least squares. The proposed methodology is easy to implement and is free from calculus. It optimizes the objective function by searching over a wide range of values and determines the estimate of the parameters. The consistency of the estimates is demonstrated in terms of mean, standard deviation and mean square error through simulation studies for different choices of parameters.
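The calculus-free "optimal search" idea can be sketched for a single gamma marginal: maximize the log-likelihood by brute-force search over a parameter grid, with no derivatives. The grid, sample size and true parameters below are illustrative, and the full multivariate setting of the paper is not reproduced.

```python
import numpy as np
from math import lgamma

rng = np.random.default_rng(9)
shape_true, scale_true = 3.0, 2.0
x = rng.gamma(shape_true, scale_true, 5000)

def loglik(k, th):
    # gamma(k, th) log-likelihood of the sample x
    return np.sum((k - 1) * np.log(x) - x / th) - len(x) * (lgamma(k) + k * np.log(th))

# derivative-free search: evaluate the objective over a grid and take the best
grid_k = np.arange(1.0, 6.01, 0.05)
grid_t = np.arange(0.5, 4.01, 0.05)
ll = np.array([[loglik(k, t) for t in grid_t] for k in grid_k])
i, j = np.unravel_index(np.argmax(ll), ll.shape)
k_hat, t_hat = grid_k[i], grid_t[j]
```

The same search skeleton works with maximum-spacing or least-squares objectives in place of the likelihood, which is the flexibility the paper exploits.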

Journal ArticleDOI
TL;DR: A class of comprehensive higher order exponential type generalized $B$-($b,$ $\rho,$ $\eta,$ $\omega,$ $\theta,$ $\tilde{p},$ $\tilde{r},$ $\tilde{s}$)-invexities is introduced, encompassing most of the existing generalized invexity concepts in the literature, including the Antczak type first order $B$-($b,$ $\eta,$ $\tilde{p},$ $\tilde{r}$)-invexities and the Zalmai type $(\alpha,$ $\beta,$ $\gamma,$ $\eta,$ $\rho,$ $\theta$)-invexities, and a wide range of parametrically sufficient optimality conditions leading to the solvability of discrete minimax fractional programming problems is established.
Abstract: First, a class of comprehensive higher order exponential type generalized $B$-($b,$ $\rho,$ $\eta,$ $\omega,$ $\theta,$ $\tilde{p},$ $\tilde{r},$ $\tilde{s}$)-invexities is introduced, which encompasses most of the existing generalized invexity concepts in the literature, including the Antczak type first order $B$-($b,$ $\eta,$ $\tilde{p},$ $\tilde{r}$)-invexities as well as the Zalmai type $(\alpha,$ $\beta,$ $\gamma,$ $\eta,$ $\rho,$ $\theta$)-invexities, and then a wide range of parametrically sufficient optimality conditions leading to the solvability for discrete minimax fractional programming problems are established with some other related results. To the best of our knowledge, the obtained results are new and general in nature relating the investigations on generalized higher order exponential type invexities.

Journal ArticleDOI
TL;DR: In this article, the general form of the equation for a thermal explosion in a vessel with boundary values is presented; the central difference method and Newton iteration are used to solve the relevant partial differential equations in one-dimensional and two-dimensional forms; and the order of convergence of the numerical scheme is verified by numerical experiments.
Abstract: In this paper, the general form of the equation for a thermal explosion in a vessel with boundary values is first presented. The central difference method and the Newton iteration method are then used to solve the relevant partial differential equations in one-dimensional and two-dimensional forms. Finally, the order of convergence of the numerical scheme is verified by numerical experiments, and the experimental results are provided.
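The central-difference-plus-Newton approach can be sketched in 1D on a Frank-Kamenetskii-type steady thermal-explosion equation, u'' + δ·e^u = 0 with u(0) = u(1) = 0. The value of δ (below the critical value, so a solution exists) and the grid are illustrative, not taken from the paper.

```python
import numpy as np

n, delta = 100, 1.0
h = 1.0 / n
u = np.zeros(n - 1)                   # interior unknowns u_1..u_{n-1}

def residual(u):
    # central differences: (u_{i-1} - 2u_i + u_{i+1})/h^2 + delta*exp(u_i)
    up = np.r_[0.0, u, 0.0]           # append boundary values u(0)=u(1)=0
    return (up[:-2] - 2 * u + up[2:]) / h**2 + delta * np.exp(u)

for _ in range(20):                   # Newton iteration on the nonlinear system
    # tridiagonal Jacobian of the discrete operator
    J = (np.diag(-2.0 / h**2 + delta * np.exp(u))
         + np.diag(np.full(n - 2, 1.0 / h**2), 1)
         + np.diag(np.full(n - 2, 1.0 / h**2), -1))
    u = u - np.linalg.solve(J, residual(u))
```

Newton converges in a handful of iterations here; grid-refinement runs of the same code are what verify the second-order convergence of the central-difference scheme.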

Journal ArticleDOI
TL;DR: In this paper, a mathematical model for the dengue disease transmission and finding the most effective ways of controlling the disease was presented, where multiobjective optimization was applied to find the optimal control strategies, considering the simultaneous minimization of infected humans and costs due to insecticide application.
Abstract: During the last decades, the global prevalence of dengue progressed dramatically. It is a disease which is now endemic in more than one hundred countries of Africa, America, Asia and the Western Pacific. This study addresses a mathematical model for the dengue disease transmission and finding the most effective ways of controlling the disease. The model is described by a system of ordinary differential equations representing human and vector dynamics. Multiobjective optimization is applied to find the optimal control strategies, considering the simultaneous minimization of infected humans and costs due to insecticide application. The obtained results show that multiobjective optimization is an effective tool for finding the optimal control. The set of trade-off solutions encompasses a whole range of optimal scenarios, providing valuable information about the dynamics of infection transmissions. The results are discussed for different values of model parameters.

Journal ArticleDOI
TL;DR: In this paper, an image is reconstructed as a minimizer of an energy function that sums a TV term for image regularity and a least squares term for data fitting; the resulting algorithm is called RecPK.
Abstract: Variational models with Total Variation (TV) regularization have long been known to preserve image edges and produce high quality reconstruction. On the other hand, recent theory on compressive sensing has shown that it is feasible to accurately reconstruct images from a few linear measurements via TV regularization. However, in general TV models are difficult to solve due to the nondifferentiability and the universal coupling of variables. In this paper, we propose the use of alternating direction method for image reconstruction from highly incomplete convolution data, where an image is reconstructed as a minimizer of an energy function that sums a TV term for image regularity and a least squares term for data fitting. Our algorithm, called RecPK, takes advantage of problem structures and has an extremely low per-iteration cost. To demonstrate the efficiency of RecPK, we compare it with TwIST, a state-of-the-art algorithm for minimizing TV models. Moreover, we also demonstrate the usefulness of RecPK in image zooming.
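A 1-D analogue of the alternating direction idea can be sketched as follows: denoise a piecewise-constant signal by minimizing μ/2·||x - f||² + ||Dx||₁ with ADMM, where D is the forward-difference operator; the x-subproblem is a linear solve and the auxiliary subproblem is soft-thresholding. The signal, μ and ρ are illustrative, and the paper's convolution operator is replaced by the identity to keep the sketch short.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
truth = np.r_[np.zeros(n // 2), np.ones(n // 2)]      # piecewise-constant signal
f = truth + rng.normal(0, 0.1, n)                     # noisy observation

D = np.diff(np.eye(n), axis=0)                        # (n-1) x n difference matrix
mu, rho = 20.0, 5.0
x, z, lam = f.copy(), D @ f, np.zeros(n - 1)
Ainv = np.linalg.inv(mu * np.eye(n) + rho * D.T @ D)  # x-update system, factored once

for _ in range(200):
    x = Ainv @ (mu * f + D.T @ (rho * z - lam))       # x-subproblem: linear solve
    v = D @ x + lam / rho
    z = np.sign(v) * np.maximum(np.abs(v) - 1 / rho, 0)   # z-subproblem: shrinkage
    lam = lam + rho * (D @ x - z)                     # multiplier update
```

The per-iteration cost is dominated by cheap matrix-vector products, which is the structural advantage the abstract emphasizes.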

Journal ArticleDOI
TL;DR: In this paper, it was shown that under certain conditions the excursion set volumes of stationary positively associated random fields converge, after rescaling, to the normal distribution as the excursion level and the size of the observation window grow.
Abstract: We prove that under certain conditions the excursion set volumes of stationary positively associated random fields converge after rescaling to the normal distribution as the excursion level and the size of the observation window grow. In addition, we provide a number of examples.

Journal ArticleDOI
TL;DR: In this paper, a class of adaptive proximal point algorithms (APPA) with a contraction strategy is presented for total variation image restoration, in which an inner extrapolation in the prediction step is followed by a correction step for contraction.
Abstract: Image restoration is a fundamental problem in various areas of the imaging sciences. This paper presents a class of adaptive proximal point algorithms (APPA) with a contraction strategy for total variation image restoration. In each iteration, the proposed methods choose an adaptive proximal parameter matrix which is not necessarily symmetric. There is an inner extrapolation in the prediction step, which is followed by a correction step for contraction, and the inner extrapolation is implemented by an adaptive scheme. Using the framework of contraction methods, a global convergence result and a convergence rate of O(1/N) are established for the proposed methods. Numerical results are reported to illustrate the efficiency of the APPA methods for solving total variation image restoration problems. Comparisons with state-of-the-art algorithms demonstrate that the proposed methods are comparable and promising.

Journal ArticleDOI
TL;DR: A dynamic quantum is well suited to accommodating low-priority processes that take a long time to complete their requests, i.e. it avoids the starvation of CPU-bound processes.
Abstract: The new multilevel feedback queue design depends on a new technique for computing the quantum, producing an auto-detected quantum (ADQ) based on the bursts of the processes enrolled in the system. By summing the burst times of the arrived processes and dividing by the number of available processes, we obtain a dynamic quantum at each scheduling level. Processes are scheduled and shifted down from queue to queue according to their remaining burst times, which are updated periodically. Every queue has a unique auto-detected quantum, which gradually increases or decreases from the top-level to the bottom-level queues according to the arriving processes. Based on the results of a graphical simulation of the algorithm on case studies, we find that a dynamic quantum is well suited to accommodating low-priority processes that take a long time to complete their requests, i.e. it avoids the starvation of CPU-bound processes, while remaining compatible with high-priority (I/O-bound) processes and providing fair interactivity for them. Compared to the traditional MLFQ, the performance of the new scheduling technique is better and more practical according to the applied results. Additionally, we developed software to simulate the new design and tested it on different cases.
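The auto-detect-quantum rule can be sketched for a single round-robin level: the quantum is the mean remaining burst of the processes currently enqueued, recomputed each round. The multi-level shifting between queues is omitted for brevity, and the burst values are illustrative.

```python
# Single-level round robin with an auto-detected quantum (ADQ):
# quantum = mean remaining burst of the still-active processes.
def rr_with_adq(bursts):
    remaining = list(bursts)
    finish, t = {}, 0.0
    while any(r > 0 for r in remaining):
        active = [r for r in remaining if r > 0]
        q = sum(active) / len(active)             # ADQ: mean remaining burst
        for i, r in enumerate(remaining):
            if r > 0:
                run = min(r, q)                   # run for at most one quantum
                t += run
                remaining[i] = r - run
                if remaining[i] <= 1e-12:         # process completed
                    remaining[i] = 0
                    finish[i] = t
    return finish

finish = rr_with_adq([3.0, 6.0, 9.0])
```

With bursts 3, 6 and 9, the first round uses quantum (3+6+9)/3 = 6, so the short process finishes in its first turn while the longest one continues into a later round with a recomputed quantum, which is the adaptivity the design relies on.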

Journal ArticleDOI
TL;DR: In this article, general results about the GLR tests, for testing simple hypothesis versus two-sided hypothesis, in the family with support dependent on the parameter, are obtained, and it is shown that such GLRs tests are equivalent to the UMP tests in the same problems.
Abstract: Some general results about the GLR tests, for testing simple hypothesis versus two-sided hypothesis, in the family with support dependent on the parameter, are obtained. In addition, we show that such GLR tests are equivalent to the UMP tests in the same problems. Moreover, we derive the general form of the UMP tests for testing an interval hypothesis versus two-sided alternative.

Journal ArticleDOI
TL;DR: An extended form of a statistical model for the composite fading channels is derived from the maximum entropy principle by replacing the conditional density by entropy-maximizing distribution (Mathai's pathway model), which is versatile enough to represent short-term fading as well as the shadowing.
Abstract: Wireless communication systems are subject to short- and long-term fading channels. In this paper, an extended form of a statistical model for composite fading channels is derived from the maximum entropy principle. Subsequently, the composite fading channel is derived by replacing the conditional density with an entropy-maximizing distribution (Mathai's pathway model). This pathway model is versatile enough to represent short-term fading as well as shadowing. The new wireless channel model generalizes the commonly used models for multipath fading and shadowing. In particular, using the G-function, we derive the density function, distribution function and moments of the new model in closed form. These results provide a suitable tool for analyzing the performance of composite fading systems, such as the density function of the Signal-to-Noise Ratio (SNR), the Amount of Fading (AF), and the Outage Probability (OP). The results are shown graphically for different signal and fading parameter values.

Journal ArticleDOI
TL;DR: In this article, the exact distribution of the ratio of two independent Hyper-Erlang random variables is derived, and closed-form expressions of the probability density, cumulative distribution, reliability function, hazard function, moment generating function and the rth moment are found for this ratio distribution and shown to be linear combinations of the generalized-F distribution.
Abstract: The distribution of the ratio of two random variables has been studied by several authors, especially when the two random variables are independent and come from the same family. In this paper, the exact distribution of the ratio of two independent Hyper-Erlang random variables is derived. Closed-form expressions of the probability density function, cumulative distribution function, reliability function, hazard function, moment generating function and the rth moment are found for this ratio distribution and shown to be linear combinations of the Generalized-F distribution. Finally, we apply our results to a real-life application in analyzing the performance of wireless communication systems.

Journal ArticleDOI
TL;DR: In this article, the generalized (G'/G)-expansion and generalized tanh-coth methods are used to construct solitary wave solutions of nonlinear evolution equations, and it is shown that the generalized (G'/G)-expansion method, with the help of symbolic computation, provides a straightforward and powerful mathematical tool for solving nonlinear wave equations in mathematical physics.
Abstract: In this work, we establish exact solutions to the modified forms of the Degasperis–Procesi (DP) and Camassa–Holm (CH) equations. The generalized (G'/G)-expansion and generalized tanh-coth methods are used to construct solitary wave solutions of these nonlinear evolution equations. The generalized (G'/G)-expansion method offers wide applicability for handling nonlinear wave equations. It is shown that the (G'/G)-expansion method, with the help of symbolic computation, provides a straightforward and powerful mathematical tool for solving nonlinear evolution equations in mathematical physics.