Multivariate Student-t regression models: Pitfalls and inference
Frequently Asked Questions (11)
Q2. What are the future works in this paper?
Once a posterior is found to exist, this does not by itself guarantee inference on the basis of an extended sample. Under set observations, however, the analysis is fully coherent, in that new observations can never destroy the possibility of conducting inference.
Q3. What is the rank condition for a design matrix X and a sample y?
For a design matrix X and a sample y ∈ ℝ^{n×p}, s_j, j = 1, …, p, is the largest number of observations such that the rank of the corresponding submatrix of X is k while the rank of the corresponding submatrix of (X : y) is k + p − j. Clearly, since r(X : y) = k + p, the authors obtain that k ≤ s_p < s_{p−1} < … < s_1 < n.
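The rank condition r(X : y) = k + p can be checked numerically. The following is a minimal sketch (hypothetical data and function names, not from the paper) using numpy's matrix rank on the stacked matrix (X : y):

```python
import numpy as np

# Hypothetical illustration of the rank condition r(X : y) = k + p for a
# multivariate regression with n observations, k regressors and p responses.
rng = np.random.default_rng(0)
n, k, p = 10, 2, 3
X = rng.standard_normal((n, k))          # design matrix, rank k
Y = rng.standard_normal((n, p))          # sample y in R^(n x p)

def rank_condition_holds(X, Y):
    """Check r(X : Y) = k + p, the condition for a finite marginal p(y)."""
    k, p = X.shape[1], Y.shape[1]
    return np.linalg.matrix_rank(np.hstack([X, Y])) == k + p

print(rank_condition_holds(X, Y))        # generic data: condition holds

# The condition fails when the responses lie in the column space of X,
# e.g. when Y is an exact linear transformation of the regressors.
Y_degenerate = X @ rng.standard_normal((k, p))
print(rank_condition_holds(X, Y_degenerate))
```

For continuously distributed data the condition holds with probability one, which is why failures arise only from degenerate samples such as the second case above.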
Q4. What is the simplest way to implement Bayesian analysis with set observations?
A simple Gibbs sampling strategy is proposed to implement Bayesian analysis with set observations, and a number of examples are considered, all under Student sampling with an Exponential prior on ν.
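A Gibbs strategy of this kind can be sketched for the univariate location-scale case. The code below is a hypothetical implementation (not the paper's): it writes y_i ~ t_ν(μ, σ²) as the scale mixture y_i | λ_i ~ N(μ, σ²/λ_i) with λ_i ~ Gamma(ν/2, ν/2), uses the improper prior p(μ, σ²) ∝ 1/σ² and ν ~ Exponential(1), and handles the nonstandard conditional of ν with a random-walk Metropolis step on log ν:

```python
import math
import numpy as np

rng = np.random.default_rng(1)

def log_p_nu(nu, lam):
    """Log full conditional of nu (up to a constant): Exponential(1) prior
    times the Gamma(nu/2, rate nu/2) likelihood of the mixing variables."""
    n = lam.size
    return (-nu + n * (0.5 * nu * math.log(0.5 * nu) - math.lgamma(0.5 * nu))
            + 0.5 * nu * np.sum(np.log(lam) - lam))

def gibbs(y, n_iter=2000):
    n = y.size
    mu, sig2, nu = y.mean(), y.var(), 5.0
    draws = []
    for _ in range(n_iter):
        # lam_i | rest ~ Gamma((nu+1)/2, rate (nu + (y_i - mu)^2 / sig2)/2)
        rate = 0.5 * (nu + (y - mu) ** 2 / sig2)
        lam = rng.gamma(0.5 * (nu + 1.0), 1.0 / rate)
        # mu | rest ~ N(weighted mean, sig2 / sum(lam))
        mu = rng.normal(np.sum(lam * y) / np.sum(lam),
                        math.sqrt(sig2 / np.sum(lam)))
        # sig2 | rest ~ InvGamma(n/2, sum(lam * (y - mu)^2) / 2)
        sig2 = 0.5 * np.sum(lam * (y - mu) ** 2) / rng.gamma(0.5 * n, 1.0)
        # nu | lam: random-walk Metropolis on log(nu), with Jacobian term
        prop = nu * math.exp(0.1 * rng.standard_normal())
        if np.log(rng.uniform()) < (log_p_nu(prop, lam) - log_p_nu(nu, lam)
                                    + math.log(prop) - math.log(nu)):
            nu = prop
        draws.append((mu, sig2, nu))
    return np.array(draws)

y = rng.standard_t(df=4, size=200) * 2.0 + 10.0   # synthetic Student-t data
draws = gibbs(y)
mu_hat = draws[500:, 0].mean()                    # posterior mean after burn-in
```

All conditionals except that of ν are standard, which is what makes the augmentation with the mixing variables λ_i attractive.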
Q5. What is the minimum requirement for a scale mixture of Normals?
It is immediate from Lemma 1 that, for finite mixtures of Normals, p(y) < ∞ if and only if r(X : y) = k + p [equation (3.1)], which is the minimal possible requirement for any scale mixture of Normals.
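The scale-mixture-of-Normals representation underlying this result can be verified by simulation. A minimal sketch (hypothetical, Monte Carlo only): if λ ~ Gamma(ν/2, rate ν/2) and y | λ ~ N(0, 1/λ), then marginally y ~ t_ν, so the mixture draws should match direct Student-t draws in distribution:

```python
import numpy as np

rng = np.random.default_rng(42)
nu, n = 7.0, 200_000

lam = rng.gamma(nu / 2.0, 2.0 / nu, size=n)     # Gamma(nu/2, rate nu/2)
y_mix = rng.standard_normal(n) / np.sqrt(lam)   # scale-mixture draws
y_t = rng.standard_t(df=nu, size=n)             # direct Student-t draws

# Both variances should be close to nu / (nu - 2) = 1.4 for nu = 7.
print(y_mix.var(), y_t.var())
```

Note that numpy's gamma sampler is parameterized by shape and scale, so the rate ν/2 enters as scale 2/ν.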
Q6. How can the authors use the Gibbs sampler to perform Bayesian inference?
In general, Bayesian inference using set observations can easily be implemented through a Gibbs sampler on the parameters augmented with y = (y_1, …, y_n)′.
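The augmentation step amounts to replacing each set observation by a latent point draw from the sampling conditional restricted to that set. A minimal sketch (hypothetical function, interval sets S_i = [a_i, b_i], Normal conditional) using inverse-CDF truncation:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)

def draw_latent(a, b, mean, sd):
    """Draw y_i ~ N(mean, sd^2) truncated to [a_i, b_i], elementwise."""
    lo, hi = norm.cdf((a - mean) / sd), norm.cdf((b - mean) / sd)
    u = rng.uniform(lo, hi)
    return mean + sd * norm.ppf(u)

a = np.array([-1.0, 0.0, 2.0])    # lower endpoints of the sets
b = np.array([0.0, 1.0, 3.0])     # upper endpoints
y = draw_latent(a, b, mean=0.0, sd=1.0)
print(y)                          # each draw lies inside its set
```

Within the full sampler this step alternates with the usual parameter draws, the latent y playing the role of a complete sample.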
Q7. What is the simplest way to interpret the assumption of Theorem 4?
As explained in the discussion of Theorem 3, the assumption of Theorem 4 (ii) is most easily interpreted in the location-scale case (k = 1 and x_i = 1), where r(X : y) < 1 + p means that there exists a (p − 1)-dimensional affine space that intersects all of the sets S_1, …, S_n.
Q8. How many sets of observations are based on the analysis?
As expected, the analysis based on all sixteen set observations identifies the four extra observations [i.e. the last four in (4.6)] as outliers through small values of the mixing variables λ_i associated with these observations.
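The outlier-flagging mechanism is easy to see in the univariate case, where the conditional mean of the mixing variable is (ν + 1)/(ν + e_i²/σ²) for residual e_i: large residuals force small λ_i. A minimal sketch with hypothetical numbers:

```python
import numpy as np

# Conditional means of the mixing variables for four residuals; the last
# residual is a gross outlier and should receive a small mixing weight.
nu, sigma2 = 5.0, 1.0
residuals = np.array([0.1, -0.3, 0.2, 8.0])

lam_mean = (nu + 1.0) / (nu + residuals ** 2 / sigma2)
print(lam_mean)
```

Since λ_i downweights observation i in the conditional draws of the regression coefficients, small λ_i is precisely how the Student model accommodates outliers.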
Q9. What is the way to explain the Student-t model?
Classical inference could be based on efficient likelihood estimation [Lehmann (1983, chap. 6)], grouped likelihoods [see Giesbrecht and Kempthorne (1976) for a lognormal model and Beckman and Johnson (1987) for the Student-t case], sample percentiles [Resek (1976)], modified likelihood [as in Cheng and Iles (1987)], or spacings methods [as in Cheng and Amin (1979)].
Q10. What is the probability of a sample having zero probability?
Even though Theorem 1 indicates that Bayesian inference is possible for almost all samples (i.e. except for a set of zero probability under the sampling model), problems can occur since any sample of point observations formally has probability zero of being observed.
Q11. What is the correct prior for the Bayesian model?
The Bayesian model will be completed with a commonly used improper prior on the regression coefficients and scatter matrix, and some proper prior on the degrees of freedom.