Q2. What are the future works in "Estimation of ergodic agent-based models by simulated minimum distance" ?
Of course, the analysis and recommendations contained in this paper must be regarded as a mere introduction to the problem of estimating AB models, and the authors can think of many open issues and avenues for research. Among them, the possibility of applying Bayesian methods should be investigated, not only in conjunction with simulated maximum likelihood but also with the other estimation procedures: including priors allows for a more general estimation procedure, and leaves open the possibility of using weakly informative priors. In their companion paper (Grazzini and Richiardi, 2014), the authors extend the analysis to the estimation of non-ergodic models.
Q3. What is the importance of knowing the corresponding estimators?
In particular, since the simulated moments (or other summary measures) are used as an estimate of the theoretical moments, it is crucial to know whether the corresponding estimators are consistent.
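This consistency requirement can be illustrated with a small sketch (a hypothetical AR(1) example, not taken from the paper): the empirical variance computed on simulated data converges to the known theoretical variance as the simulated sample grows.

```python
import random
import statistics

def simulate_ar1(phi, sigma, T, seed):
    """Simulate a stationary AR(1) process y_t = phi*y_{t-1} + sigma*e_t."""
    rng = random.Random(seed)
    y, path = 0.0, []
    for _ in range(T):
        y = phi * y + sigma * rng.gauss(0.0, 1.0)
        path.append(y)
    return path

# Theoretical variance of a stationary AR(1): sigma^2 / (1 - phi^2).
phi, sigma = 0.5, 1.0
true_var = sigma**2 / (1 - phi**2)

# The simulated moment (empirical variance) is a consistent estimator
# of the theoretical moment: its error shrinks as T grows.
short_err = abs(statistics.pvariance(simulate_ar1(phi, sigma, 200, 1)) - true_var)
long_err = abs(statistics.pvariance(simulate_ar1(phi, sigma, 200_000, 1)) - true_var)
```

With a long enough run the simulated variance is close to the theoretical value of 4/3, which is what makes it usable as an estimate of the theoretical moment.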
Q4. What is the main problem of DSGE models?
In particular, DSGE models not only feature a large number of parameters, but share with AB models an important aspect of complex systems: they include many nonlinear feedback effects.
Q5. What are the conditions for consistency in minimum distance estimators?
The minimum distance estimator belongs to the class of extremum estimators, which also includes maximum likelihood, nonlinear least square and generalized method of moments.
Q6. What are the tests used to assess whether the statistics of interest are constant in time and across runs?
The tests are used to assess whether the statistics of interest are constant in time and across runs: the stationarity test uses samples from a given simulation run, while the ergodicity test uses samples from different runs.
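A simplified sketch of the idea behind the two tests (not the paper's actual test statistics): for a stationary, ergodic toy process, moments computed on consecutive windows of a single run and on independent runs should all agree.

```python
import random
import statistics

def simulate_run(T, seed):
    """A simple stationary, ergodic process: AR(1) with phi = 0.8."""
    rng = random.Random(seed)
    y, path = 0.0, []
    for _ in range(T):
        y = 0.8 * y + rng.gauss(0.0, 1.0)
        path.append(y)
    return path

# Stationarity check (within one run): compare moments of consecutive
# windows taken from the same simulation run.
run = simulate_run(50_000, seed=0)
window = 10_000
window_means = [statistics.fmean(run[i:i + window]) for i in range(0, len(run), window)]

# Ergodicity check (across runs): long-run time averages computed on
# independent runs should agree with each other.
run_means = [statistics.fmean(simulate_run(50_000, seed=s)) for s in range(1, 6)]

stationary_spread = max(window_means) - min(window_means)
ergodic_spread = max(run_means) - min(run_means)
```

For a stationary, ergodic model both spreads are small; a trend within a run, or runs that settle on different averages, would instead signal a failure of stationarity or ergodicity.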
Q7. What is the reason why the assumption of correct specification must be invoked?
If the real data cannot be tested for stationarity due to small sample size, the assumption of correct specification must again be invoked.
Q8. What is the definition of a vector of aggregate variables Y?
A vector of aggregate variables Y_t is defined as a (vectorial) function over the state of the system, that is, as a projection from X to Y: Y_t = G(X_t, κ_t).
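A minimal sketch of this definition on a hypothetical toy model (the micro rule, the choice of aggregates, and κ_t = 0 are all illustrative assumptions, not the paper's model): the micro state X_t is a vector of agent wealths, and G projects it onto the aggregates of interest.

```python
import random

rng = random.Random(42)

def step(X, rng):
    """One period of a hypothetical micro rule: a random pairwise exchange."""
    i, j = rng.randrange(len(X)), rng.randrange(len(X))
    transfer = 0.1 * X[i]
    X[i] -= transfer
    X[j] += transfer
    return X

def G(X, kappa):
    """Projection from the micro state X to the aggregates Y_t = G(X_t, kappa_t)."""
    return (sum(X) + kappa, sum(X) / len(X))

X = [1.0] * 100          # initial micro state X_0: 100 agents with unit wealth
Y_path = []
for t in range(50):
    X = step(X, rng)
    Y_path.append(G(X, kappa=0.0))   # kappa_t = 0 here for simplicity

total, mean = Y_path[-1]
```

Since pairwise exchanges conserve wealth, the aggregate total stays at 100 while the micro state X_t keeps changing, which is exactly the sense in which Y_t is a lower-dimensional projection of the system's state.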
Q9. What is the problem with the deterministic linear combination of observable variables?
The first one is stochastic singularity, which arises when a small number of structural shocks is used to generate predictions about a large number of observable variables: the model then predicts a deterministic linear combination of the observable variables, which causes the likelihood to be 0 with probability 1. Solutions to this problem involve reducing the number of observable variables on which inference is made (or using a projection from the set of observables to a smaller set of composite indicators), increasing the number of shocks, or adding measurement errors.
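The singularity can be seen in a small numerical sketch (a hypothetical one-shock, two-observable example): the model-implied covariance matrix of the observables is singular, and adding measurement error, one of the standard fixes mentioned above, restores full rank.

```python
import random

rng = random.Random(0)

# One structural shock e_t drives two observables: y1 = 1.0*e, y2 = 0.5*e.
# Then 0.5*y1 - 1.0*y2 = 0 holds exactly: a deterministic linear
# combination, so the covariance matrix of (y1, y2) is singular.
e = [rng.gauss(0.0, 1.0) for _ in range(1000)]
y1 = [1.0 * s for s in e]
y2 = [0.5 * s for s in e]

def cov_det(a, b):
    """Determinant of the 2x2 sample covariance matrix of (a, b)."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    va = sum((x - ma) ** 2 for x in a) / n
    vb = sum((x - mb) ** 2 for x in b) / n
    cab = sum((x - ma) * (y - mb) for x, y in zip(a, b)) / n
    return va * vb - cab ** 2

det_singular = cov_det(y1, y2)

# Adding measurement error to each observable breaks the exact
# linear dependence and makes the covariance matrix nonsingular.
y1m = [y + 0.1 * rng.gauss(0.0, 1.0) for y in y1]
y2m = [y + 0.1 * rng.gauss(0.0, 1.0) for y in y2]
det_fixed = cov_det(y1m, y2m)
```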
Q10. Why do the authors use the term 'stationarity' to describe the theoretical moments?
Because the theoretical moments cannot be analytically derived, the authors proceed by simulating them: stationarity (and ergodicity) ensure that the empirical means computed on the artificial data converge to the theoretical ones.
Q11. Why is the convergence of a smooth function so slow?
This is because of the curse of dimensionality, the fact that the convergence of any estimator to the true value of a smooth function defined on a space of high dimension (the parameter space) is very slow (De Marchi, 2005; Weeks, 1995).
Q12. What is the way to estimate a nonlinear state space?
If the state space is nonlinear, an extended Kalman filter (XKF) can be implemented on a system linearized around the steady state; this provides sub-optimal inference, but still relies on the Gaussian assumption.
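A minimal one-dimensional sketch of the idea on a hypothetical nonlinear state space (not the paper's model, and linearizing around the current estimate rather than a steady state): at each step the filter replaces the nonlinear transition with its local linear approximation, which is exactly the source of the sub-optimality mentioned above.

```python
import math
import random

# Toy nonlinear state space:
#   state:       x_t = sin(x_{t-1}) + w_t,   w_t ~ N(0, q)
#   observation: y_t = x_t + v_t,            v_t ~ N(0, r)
rng = random.Random(3)
q, r = 0.01, 0.04

# Simulate true states and noisy observations.
x_true, xs, ys = 0.5, [], []
for _ in range(200):
    x_true = math.sin(x_true) + rng.gauss(0.0, math.sqrt(q))
    xs.append(x_true)
    ys.append(x_true + rng.gauss(0.0, math.sqrt(r)))

# Extended Kalman filter recursion.
x_hat, P = 0.0, 1.0
estimates = []
for y in ys:
    # Predict: propagate the estimate through f(x) = sin(x) and
    # linearize with the Jacobian F = cos(x).
    x_pred = math.sin(x_hat)
    F = math.cos(x_hat)
    P_pred = F * P * F + q
    # Update: scalar Kalman gain (observation matrix H = 1).
    K = P_pred / (P_pred + r)
    x_hat = x_pred + K * (y - x_pred)
    P = (1.0 - K) * P_pred
    estimates.append(x_hat)

# Even sub-optimal filtering should beat the raw observations.
mse_filter = sum((e - x) ** 2 for e, x in zip(estimates, xs)) / len(xs)
mse_obs = sum((y - x) ** 2 for y, x in zip(ys, xs)) / len(xs)
```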
Q13. What are the consistency conditions for minimum distance estimators?
The consistency conditions for extremum estimators, including the minimum distance estimator defined in equation (7), are given in the following theorem (Newey and McFadden, 1994, p. 2121): Theorem. If there is a function Q_0(θ) such that (i) Q_0(θ) is uniquely maximized at θ_0; (ii) Θ is compact; (iii) Q_0(θ) is continuous; (iv) Q̂_n(θ) converges uniformly in probability to Q_0(θ), then θ̂_n converges in probability to θ_0.
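The theorem can be illustrated with a toy simulated minimum distance estimator (a hypothetical model where the single moment is the sample mean; everything here is illustrative, not the paper's application): a compact grid plays the role of Θ, the quadratic objective is continuous and uniquely minimized at θ_0, and long simulation runs give the uniform convergence of the sample objective.

```python
import random
import statistics

def simulate_moment(theta, T, seed):
    """Simulated moment for the toy model y_t = theta + e_t: the sample mean."""
    rng = random.Random(seed)
    return statistics.fmean(theta + rng.gauss(0.0, 1.0) for _ in range(T))

# "Observed" data generated at the true parameter theta_0 = 1.5.
theta_0 = 1.5
observed_moment = simulate_moment(theta_0, 5000, seed=99)

# Compact parameter space Theta = [0, 3] (condition (ii)), discretized.
grid = [i / 100 for i in range(0, 301)]

def Q_n(theta):
    """Squared distance between observed and simulated moments."""
    return (observed_moment - simulate_moment(theta, 5000, seed=7)) ** 2

# The minimizer of the sample objective is close to theta_0.
theta_hat = min(grid, key=Q_n)
```

Note the fixed simulation seed inside `Q_n`: holding the simulated shocks constant across candidate values of θ keeps the objective smooth in θ, a standard device in simulation-based estimation.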
Q14. When can direct estimation techniques be used for the distribution of relevant statistics?
For some of them direct estimation techniques can be used, as the models are simple enough to derive a closed form solution for the distribution of relevant statistics.
Q15. Why is the AB model under- or weakly identified?
As stated by Canova and Sala (2009, p. 448), when models are under- or weakly identified, “reasonable estimates are obtained not because the model and the data are informative but because auxiliary restrictions make the likelihood of the data (or a portion of it) informative.”
Q16. Why is ML limited by the number of linearly independent variables?
This is because “ML estimation is limited by the number of linearly independent variables while moment-based estimation is limited by the number of linearly independent moments.”
Q17. How do the authors analyze the mapping of (X_0, θ) into Y_t?
The only way to analyze the mapping of (X_0, θ) into Y_t is by means of Monte Carlo analysis, by simulating the model for different initial states and values of the parameters, and repeating each simulation experiment many times to obtain a distribution of Y_t.
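A sketch of such a Monte Carlo exercise on a hypothetical AR(1)-style toy model: each (X_0, θ) pair is simulated over many independent seeds to build a distribution of Y_t, whose shape can then be compared across initial states and parameter values.

```python
import random
import statistics

def simulate_Y(x0, theta, T, seed):
    """Simulate the toy model x_t = theta*x_{t-1} + e_t and return Y_T = x_T."""
    rng = random.Random(seed)
    x = x0
    for _ in range(T):
        x = theta * x + rng.gauss(0.0, 1.0)
    return x

def monte_carlo(x0, theta, T=200, reps=500):
    """Distribution of Y_T over many replications of the same experiment."""
    draws = [simulate_Y(x0, theta, T, seed) for seed in range(reps)]
    return statistics.fmean(draws), statistics.pstdev(draws)

# Repeat the experiment for different parameters and initial states.
m_low, s_low = monte_carlo(x0=0.0, theta=0.3)
m_high, s_high = monte_carlo(x0=0.0, theta=0.9)
m_init, s_init = monte_carlo(x0=50.0, theta=0.3)
```

Here the dispersion of Y_T rises sharply with θ, while the effect of the initial state X_0 washes out over a long enough horizon: exactly the kind of information about the mapping (X_0, θ) → Y_t that repeated simulation reveals.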
Q18. What is the intermediate approach to estimating the parameters?
An intermediate approach is to calibrate some parameters, and then estimate the others conditional on the values of the calibrated set.
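A minimal sketch of this intermediate approach on a hypothetical two-parameter toy model: σ is calibrated and held fixed, while μ is estimated conditional on the calibrated value by matching simulated and observed means.

```python
import random
import statistics

def simulate(mu, sigma, T, seed):
    """Toy model: y_t = mu + sigma*e_t."""
    rng = random.Random(seed)
    return [mu + sigma * rng.gauss(0.0, 1.0) for _ in range(T)]

# Pseudo-observed data generated at (mu, sigma) = (2.0, 0.5).
observed = simulate(mu=2.0, sigma=0.5, T=4000, seed=1)
obs_mean = statistics.fmean(observed)

# Calibrated parameter: fixed from outside evidence, not estimated.
sigma_calibrated = 0.5

def distance(mu):
    """Moment distance, conditional on the calibrated sigma."""
    sim = simulate(mu, sigma_calibrated, T=4000, seed=2)
    return (statistics.fmean(sim) - obs_mean) ** 2

# Estimate mu conditional on sigma_calibrated via a grid search.
grid = [i / 100 for i in range(0, 401)]    # mu in [0, 4]
mu_hat = min(grid, key=distance)
```

The estimate of μ is conditional on the calibrated value: if σ were calibrated incorrectly, the conditional estimate of μ would absorb some of that misspecification, which is the well-known cost of this approach.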