
Showing papers in "Computational Economics in 1997"


Journal ArticleDOI
TL;DR: Constrained Maximum Likelihood (CML), described in this paper, uses the sequential quadratic programming (SQP) method to generate maximum likelihood estimates under general parametric constraints, and computes two classes of confidence intervals, by inversion of the Wald and likelihood ratio statistics and by simulation.
Abstract: Constrained Maximum Likelihood (CML), developed at Aptech Systems, generates maximum likelihood estimates with general parametric constraints (linear or nonlinear, equality or inequality), using the sequential quadratic programming method. CML computes two classes of confidence intervals, by inversion of the Wald and likelihood ratio statistics, and by simulation. The inversion techniques can produce misleading test sizes, but Monte Carlo evidence suggests this problem can be corrected under certain circumstances.
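
To make the two inversion ideas concrete, here is a minimal Python sketch (not Aptech's CML, which is a GAUSS product) computing a Wald interval and a likelihood-ratio-inversion interval for the rate of an exponential model; the model, sample size, and confidence level are illustrative assumptions.

import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2, norm

rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=200)     # true rate = 0.5
n, lam_hat = len(x), 1.0 / x.mean()          # MLE of the exponential rate

def loglik(lam):
    return n * np.log(lam) - lam * x.sum()

# Wald interval: invert the Wald statistic using the asymptotic standard error
se = lam_hat / np.sqrt(n)
z = norm.ppf(0.975)
wald = (lam_hat - z * se, lam_hat + z * se)

# Likelihood-ratio interval: invert 2*(l(lam_hat) - l(lam)) = chi2(1, 0.95)
crit = chi2.ppf(0.95, df=1)
g = lambda lam: 2.0 * (loglik(lam_hat) - loglik(lam)) - crit
lr = (brentq(g, 1e-8, lam_hat), brentq(g, lam_hat, 10.0 * lam_hat))

print("Wald:", wald, "LR:", lr)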

62 citations


Journal ArticleDOI
TL;DR: An algorithm is presented that searches for dependencies among sets of data (univariate or multivariate time-series, or cross-sectional observations), using parts of equations as building blocks to breed ever better formulas.
Abstract: This paper presents an algorithm that permits the search for dependencies among sets of data (univariate or multivariate time-series, or cross-sectional observations). The procedure is modeled after genetic theories and Darwinian concepts, such as natural selection and survival of the fittest. It permits the discovery of equations of the data-generating process in symbolic form. The genetic algorithm that is described here uses parts of equations as building blocks to breed ever better formulas. Apart from furnishing a deeper understanding of the dynamics of a process, the method also permits global predictions and forecasts. The algorithm is successfully tested with artificial and with economic time-series and also with cross-sectional data on the performance and salaries of NBA players during the 1994–95 season.
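
As a rough illustration of the breeding idea (the authors' operator set, selection scheme, and parameters are not reproduced here), the following toy Python sketch evolves symbolic formulas from building blocks to fit data generated by x^2 + 1:

import math, random, operator

OPS = {'+': operator.add, '-': operator.sub, '*': operator.mul}
TERMS = ['x', -2.0, -1.0, 1.0, 2.0]

def rand_tree(depth=3):
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMS)
    return (random.choice(list(OPS)), rand_tree(depth - 1), rand_tree(depth - 1))

def evaluate(t, x):
    if t == 'x':
        return x
    if isinstance(t, float):
        return t
    op, a, b = t
    return OPS[op](evaluate(a, x), evaluate(b, x))

def fitness(t, data):                 # sum of squared errors; worse is larger
    try:
        err = sum((evaluate(t, x) - y) ** 2 for x, y in data)
    except OverflowError:
        return float('inf')
    return err if math.isfinite(err) else float('inf')

def subtree(t):                       # pick a random subtree as genetic material
    while isinstance(t, tuple) and random.random() < 0.7:
        t = random.choice(t[1:])
    return t

def crossover(a, b):                  # splice a random piece of b into a
    if not isinstance(a, tuple) or random.random() < 0.3:
        return subtree(b)
    op, l, r = a
    if random.random() < 0.5:
        return (op, crossover(l, b), r)
    return (op, l, crossover(r, b))

data = [(i / 4.0, (i / 4.0) ** 2 + 1.0) for i in range(-8, 9)]  # target: x^2 + 1
pop = [rand_tree() for _ in range(200)]
for gen in range(40):
    pop.sort(key=lambda t: fitness(t, data))
    pop = pop[:50] + [crossover(random.choice(pop[:50]), random.choice(pop[:50]))
                      for _ in range(150)]
print(pop[0], "SSE:", fitness(pop[0], data))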

44 citations


Journal ArticleDOI
TL;DR: In this paper, interval arithmetic and automatic differentiation of the gradient and Hessian are combined to provide an efficient and convenient global optimization process for nonlinear estimation problems with an unknown number of stationary points, eliminating all but the global optimum.
Abstract: Nonlinear estimation problems have an unknown number of stationary points. Interval arithmetic is a promising method that eliminates all but the global optimum. Automatic differentiation provides users with a convenient method of computing the gradient and Hessian of nonlinear functions. These two can be combined to provide an efficient and convenient global optimization process.
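
A bare-bones Python sketch of the interval branch-and-bound idea follows; automatic differentiation is omitted, and the test function, search box, and tolerance are illustrative assumptions:

def iadd(a, b):
    return (a[0] + b[0], a[1] + b[1])

def imul(a, b):
    p = [a[0]*b[0], a[0]*b[1], a[1]*b[0], a[1]*b[1]]
    return (min(p), max(p))

def f(t):                      # objective: two local minima, one global
    return t**4 - 3.0*t**2 + t

def f_interval(x):             # naive interval extension of f
    x2 = imul(x, x)
    return iadd(iadd(imul(x2, x2), imul((-3.0, -3.0), x2)), x)

best = float('inf')            # best verified upper bound on the minimum
boxes = [(-3.0, 3.0)]
while boxes:
    lo, hi = boxes.pop()
    if f_interval((lo, hi))[0] > best:
        continue               # box provably cannot contain the global minimum
    mid = 0.5 * (lo + hi)
    best = min(best, f(mid))   # any point value is a valid upper bound
    if hi - lo > 1e-6:
        boxes += [(lo, mid), (mid, hi)]
print("global minimum is approximately", best)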

22 citations


Journal ArticleDOI
TL;DR: Three methods are described for solving SEMs subject to separable linear equality constraints: the first treats the constraints as additional precise observations, while the other two reparameterize the constraints to solve reduced unconstrained SEMs.
Abstract: Algorithms for computing the three-stage least squares (3SLS) estimator usually require the disturbance covariance matrix to be non-singular. However, the solution of a reformulated simultaneous equation model (SEM) renders this condition redundant. With the QR decomposition as the basic tool, the 3SLS estimator, its dispersion matrix, and methods for estimating the singular disturbance covariance matrix are derived. Expressions revealing linear combinations of the observations that become redundant are also presented. Algorithms for computing the 3SLS estimator after the SEM has been modified by deleting or adding observations or variables are found to be inefficient, owing to the need to remove the endogeneity of the new data or to re-estimate the disturbance covariance matrix. Three methods are described for solving SEMs subject to separable linear equality constraints. The first method treats the constraints as additional precise observations, while the other two reparameterize the constraints to solve reduced unconstrained SEMs. Methods for computing the main matrix factorizations illustrate the basic principles to be adopted for solving SEMs on serial or parallel computers.
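
For orientation, here is a compact numpy sketch of textbook 3SLS (equation-by-equation 2SLS residuals, then feasible GLS on the stacked system). The paper's QR-based algorithms for the singular-covariance case are more elaborate; this sketch assumes a non-singular disturbance covariance matrix.

import numpy as np

def three_sls(ys, Zs, X):
    """ys: list of (n,) outcome vectors; Zs: list of (n, k_i) regressor
    matrices; X: (n, m) matrix of all exogenous instruments."""
    g = len(ys)
    P = X @ np.linalg.solve(X.T @ X, X.T)            # projection onto instruments
    # Stages 1-2: equation-by-equation 2SLS to get residuals
    resid = []
    for y, Z in zip(ys, Zs):
        d = np.linalg.solve(Z.T @ P @ Z, Z.T @ P @ y)
        resid.append(y - Z @ d)
    S = np.cov(np.vstack(resid), bias=True)          # disturbance covariance
    Sinv = np.linalg.inv(S)                          # assumes non-singularity
    # Stage 3: GLS on the stacked system with instrumented regressors
    k = sum(Z.shape[1] for Z in Zs)
    A, b = np.zeros((k, k)), np.zeros(k)
    offs = np.cumsum([0] + [Z.shape[1] for Z in Zs])
    for i in range(g):
        for j in range(g):
            A[offs[i]:offs[i+1], offs[j]:offs[j+1]] += Sinv[i, j] * (Zs[i].T @ P @ Zs[j])
            b[offs[i]:offs[i+1]] += Sinv[i, j] * (Zs[i].T @ P @ ys[j])
    return np.linalg.solve(A, b)                     # stacked 3SLS coefficients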

21 citations


Journal ArticleDOI
David A. Belsley1
TL;DR: In this article, Monte Carlo experiments show that the usual t-statistic for testing first-order serial correlation with artificial regressions is badly biased in both mean and variance and produces grossly misleading tests of hypotheses when treated as a Student's t.
Abstract: Monte Carlo experiments establish that the usual 't-statistic' used for testing for first-order serial correlation with artificial regressions is far from being distributed as a Student's t in small samples. Rather, it is badly biased in both mean and variance and results in grossly misleading tests of hypotheses when treated as a Student's t. (Similar distortions plague the familiar Durbin–Watson statistic.) Simply computed corrections for the mean and variance are derived, however, which are shown to lead to a transformed statistic producing acceptable tests. The test procedure is detailed and exemplar code provided.
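
The starting point is easy to reproduce. The following Python Monte Carlo (an illustrative design using one common artificial-regression form, not the authors' exact setup, and without their corrections) tabulates the t-statistic on the lagged residual under the null of no serial correlation; its mean and variance depart visibly from a Student's t:

import numpy as np

rng = np.random.default_rng(1)
n, reps, tstats = 20, 5000, []
X = np.column_stack([np.ones(n), np.arange(n, dtype=float)])
for _ in range(reps):
    y = X @ np.array([1.0, 0.5]) + rng.standard_normal(n)
    e = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]    # OLS residuals
    # artificial regression: e_t on X_t and e_{t-1}
    W = np.column_stack([X[1:], e[:-1]])
    coef = np.linalg.lstsq(W, e[1:], rcond=None)[0]
    u = e[1:] - W @ coef
    s2 = (u @ u) / (len(u) - W.shape[1])
    se = np.sqrt(s2 * np.linalg.inv(W.T @ W)[-1, -1])
    tstats.append(coef[-1] / se)                        # t on lagged residual
t = np.array(tstats)
print("mean %.3f (t mean: 0), var %.3f (t_16 var: %.3f)" % (t.mean(), t.var(), 16 / 14))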

6 citations


Journal ArticleDOI
TL;DR: A parallel implementation of the Genetic Algorithm for non-linear optimization is described, and performance comparisons are made for neural network optimization of a limited dependent variable problem and of a chaotic time series.
Abstract: A description of a parallel implementation of the Genetic Algorithm for non-linear optimization is presented. The system is implemented with four Intel i860 RISC processors using INMOS transputers for communications. The performance is compared to the Cray Y-MP, the Cray J916, an SGI Power Challenge L R8000 75 MHz workstation, an SGI Challenge L R4400 100 MHz workstation, an SGI Indy SC R4400 200 MHz workstation, and both Parastation and Paramid parallel i860 machines, each with one to four participating i860s. Comparisons were made for neural network optimization of a limited dependent variable problem and a chaotic time series. A complex rational expectations model was also estimated. The i860-based machines were able to achieve performance comparable to the supercomputers.
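
The parallelism being exploited is the independence of fitness evaluations. A minimal Python sketch of the master-worker pattern follows (standard-library multiprocessing stands in for the transputer-linked i860s; the objective and GA parameters are illustrative assumptions, not the authors'):

import numpy as np
from multiprocessing import Pool

def fitness(params):
    # stand-in objective; the paper optimized neural-network weights
    return float(np.sum((params - 0.5) ** 2))

def evolve(pop, rng, keep=20):
    with Pool() as pool:                       # scatter fitness evaluations
        scores = pool.map(fitness, list(pop))
    elite = pop[np.argsort(scores)[:keep]]
    children = elite[rng.integers(0, keep, len(pop) - keep)]
    children = children + 0.1 * rng.standard_normal(children.shape)  # mutate
    return np.vstack([elite, children])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pop = rng.standard_normal((200, 10))
    for _ in range(30):
        pop = evolve(pop, rng)
    print("best fitness:", fitness(pop[0]))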

5 citations


Posted Content
TL;DR: The authors show that additive outliers that alter the level of output for only one period reliably trigger false rejections of the unit root hypothesis when it is true and signal the presence of permanent shifts in trend that did not occur.
Abstract: Several recent papers conclude that U.S. real GDP is trend stationary, implying that all shocks are transitory and the long-run path is deterministic. These inferences fail to take into account two problems: the distortion of test size in finite samples due to data-based model selection, and the fragility of unit root tests in the face of plausible departures from the maintained hypothesis of temporal homogeneity. Indeed, additive outliers that alter the level of output for only one period reliably trigger false rejections of the unit root hypothesis when it is true and signal the presence of permanent shifts in trend that did not occur. Trend stationarity is not supported by the more homogeneous post-war data, and if imposed it would imply business cycles of implausible duration and pattern: the economy was 8% below the trend line in 1994.
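
A small simulation in the spirit of this point (an illustrative ADF exercise using statsmodels, not the authors' exact design) shows how a single additive outlier in a true unit-root series inflates rejections:

import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(2)
n, reps = 120, 500
rej_clean = rej_outlier = 0
for _ in range(reps):
    y = np.cumsum(0.02 + 0.1 * rng.standard_normal(n))   # random walk with drift
    rej_clean += adfuller(y, regression='ct', autolag='AIC')[1] < 0.05
    z = y.copy()
    z[n // 2] -= 1.0                                     # one additive outlier
    rej_outlier += adfuller(z, regression='ct', autolag='AIC')[1] < 0.05
print("rejection rate at 5%%: clean %.3f, with outlier %.3f"
      % (rej_clean / reps, rej_outlier / reps))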

4 citations


Posted Content
TL;DR: In this paper, a suite of Matlab functions is described that implements a previously developed method of approximating the solution to a given continuous stochastic optimal control problem.
Abstract: Computing the solution to a stochastic optimal control problem is difficult. A method of approximating a solution to a given stochastic optimal control problem was developed in [1]. This paper describes a suite of Matlab functions implementing this method of approximating a solution to a given continuous stochastic optimal control problem.

2 citations


Posted Content
TL;DR: In this article, the optimal sample size and the optimal reservation price for variable sample size (VSS) search are considered, both without recall and with full recall, and a VB30 program that calculates the full-recall case is presented.
Abstract: Morgan (1983) proved that VSS dominates both FSS and SSR. But it is difficult to calculate the optimal sample size and the optimal reservation price, both without recall and with full recall. As VSS without recall is a simplification of VSS with full recall, we present in an appendix a VB30 program that calculates only the full-recall case. In VSS, the search is sequential and the sample size is variable in each period. As usual, sellers' prices are drawn from a distribution F(x) that is common knowledge, the time horizon is T, goods are homogeneous, there is no discounting, and the consumer buys just one unit of the good, once.
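
For the full-recall case with one draw per period, the reservation price can be found by backward induction on the best price seen so far. The following Python sketch (uniform F on [0, 1], search cost c, horizon T, all illustrative parameters; the paper's appendix program is in VB30) recovers the known one-search-left closed form r = sqrt(2c):

import numpy as np

T, c = 5, 0.05
m = np.linspace(0.0, 1.0, 2001)        # best (lowest) price found so far
h = m[1] - m[0]
V = m.copy()                           # V_0(m) = m: horizon reached, buy now
for t in range(1, T + 1):
    # E[V_{t-1}(min(m, X))] for X ~ U(0, 1): draws below m improve the state
    integ = np.concatenate([[0.0], np.cumsum(0.5 * (V[1:] + V[:-1]) * h)])
    EV = integ + V * (1.0 - m)
    buy, search = m, c + EV
    print("t=%d reservation price ~ %.4f" % (t, m[buy <= search].max()))
    V = np.minimum(buy, search)
# with one search left the closed form is r = sqrt(2c) ~ 0.3162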

1 citation


Posted Content
TL;DR: In this paper, strategies for constructing a Markov decision chain approximating a continuous-time finite-horizon optimal control problem are investigated, and some simple, analytically soluble, examples are treated and low computational complexity is reported.
Abstract: Strategies for constructing a Markov decision chain approximating a continuous-time finite-horizon optimal control problem are investigated. Some simple, analytically soluble examples are treated, and low computational complexity is reported. Extensions to the method and implementation are discussed. In particular, the relevance of the approximated solution to a stochastic renewable resource valuation problem is examined.
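
To illustrate the construction, here is a small Python sketch of a locally consistent chain (Kushner–Dupuis-style up/down probabilities matching the diffusion's local mean and variance) solved by backward induction. The linear-quadratic problem dx = u dt + sigma dW with running cost x^2 + u^2 and terminal cost x^2, and all grids and coefficients, are illustrative assumptions, not the paper's examples:

import numpy as np

sigma, h, dt, T = 0.5, 0.1, 0.02, 1.0
x = np.arange(-2.0, 2.0 + h / 2, h)              # state grid
us = np.linspace(-1.0, 1.0, 9)                   # control grid
V = x ** 2                                       # terminal cost
for _ in range(int(T / dt)):
    Q = np.empty((len(us), len(x)))
    for i, u in enumerate(us):
        # locally consistent up/down transition probabilities
        pu = (sigma ** 2 / 2) * dt / h ** 2 + max(u, 0.0) * dt / h
        pd = (sigma ** 2 / 2) * dt / h ** 2 + max(-u, 0.0) * dt / h
        Vu = np.concatenate([V[1:], V[-1:]])     # reflect at the boundary
        Vd = np.concatenate([V[:1], V[:-1]])
        Q[i] = (x ** 2 + u ** 2) * dt + pu * Vu + pd * Vd + (1 - pu - pd) * V
    V = Q.min(axis=0)                            # minimize over controls
print("approximate value at x=0:", V[len(x) // 2])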

1 citation