scispace - formally typeset
Author

Songsak Sriboonchitta

Bio: Songsak Sriboonchitta is an academic researcher from Chiang Mai University. The author has contributed to research in topics: Copula (probability theory) & Quantile. The author has an h-index of 22 and has co-authored 464 publications receiving 2,551 citations. Previous affiliations of Songsak Sriboonchitta include New Mexico State University & Asian Development Bank Institute.


Papers
Book
19 Oct 2009
TL;DR: In this book, the authors provide a characterization theorem for a class of coherent risk measures based on Choquet integrals, examine their consistency with stochastic dominance, and cover statistical tests for whether observed prospects satisfy dominance orderings such as the mean-variance rule.
Abstract (table of contents):
Utility in Decision Theory: Choice under certainty; Basic probability background; Choice under uncertainty; Utilities and risk attitudes
Foundations of Stochastic Dominance: Some preliminary mathematics; Deriving representations of preferences; Stochastic dominance (SD)
Issues in Stochastic Dominance: A closer look at the mean-variance rule; Multivariate SD; Stochastic dominance via quantile functions
Financial Risk Measures: The problem of risk modeling; Some popular risk measures; Desirable properties of risk measures
Choquet Integrals as Risk Measures: Extended theory of measures; Capacities; The Choquet integral; Basic properties of the Choquet integral; Comonotonicity; Notes on copulas; A characterization theorem; A class of coherent risk measures; Consistency with SD
Foundational Statistics for Stochastic Dominance: From theory to applications; Structure of statistical inference; Generalities on statistical estimation; Nonparametric estimation; Basics of hypothesis testing
Models and Data in Econometrics: Justifications of models; Coarse data; Modeling dependence structure; Some additional statistical tools
Applications to Finance: Diversification; Diversification on convex combinations; Prospect and Markowitz SD; Market rationality and efficiency; SD and rationality of momentum effect
Applications to Risk Management: Measures of profit/loss for risk analysis; REITs and stocks and fixed-income assets; Evaluating hedge funds performance; Evaluating iShare performance
Applications to Economics: Indifference curves/location-scale (LS) family; LS family for n random seed sources; Elasticity of risk aversion and trade; Income inequality
Appendix: Stochastic Dominance Tests; Bibliography; Index
Exercises appear at the end of each chapter.
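Empirically, the first-order stochastic dominance concept at the heart of the book reduces to comparing empirical CDFs pointwise: X dominates Y iff F_X(t) ≤ F_Y(t) everywhere. A minimal sketch (the function name and samples are illustrative, not from the book):

```python
def first_order_sd(x, y, grid):
    """Toy first-order stochastic dominance check on empirical CDFs:
    X dominates Y iff F_X(t) <= F_Y(t) at every evaluation point t,
    i.e. X puts no more probability mass on low outcomes than Y."""
    def ecdf(sample, t):
        return sum(1 for v in sample if v <= t) / len(sample)
    return all(ecdf(x, t) <= ecdf(y, t) for t in grid)

# X is Y shifted up by 1, so X first-order dominates Y but not vice versa
x, y = [2, 3, 4], [1, 2, 3]
grid = sorted(set(x + y))
dominates = first_order_sd(x, y, grid)  # True
reverse = first_order_sd(y, x, grid)    # False
```

Real tests (as in the book's appendix) must also account for sampling error in the empirical CDFs; this sketch only checks the deterministic ordering.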

153 citations

Journal ArticleDOI
TL;DR: In this article, the determinants of switching to Jasmine rice and its productivity are jointly evaluated in sample-selection stochastic frontier models, allowing for production inefficiency at the level of individual producers.
Abstract: The paper jointly evaluates the determinants of switching to Jasmine rice and its productivity, while allowing for production inefficiency at the level of individual producers. Model diagnostics reveal that serious selection bias exists, justifying the use of a sample selection framework in stochastic frontier models. Results from the probit variety selection equation reveal that gross return (mainly driven by the significantly higher Jasmine rice price), access to irrigation and education are the important determinants of choosing Jasmine rice. Results from the stochastic production frontier reveal that land, irrigation and fertilisers are the significant determinants of Jasmine rice productivity. Significantly lower productivity in Phitsanulok and Tung Gula Rong Hai provinces demonstrates the influence of biophysical and environmental factors on productivity performance. The mean level of technical efficiency is estimated at 0.63, suggesting that 59% [(100 − 63)/63] of realised productivity is lost to technical inefficiency. Policy implications include measures to keep the Jasmine rice price high, increase access to irrigation and fertiliser availability, and invest in education targeted at farm households, which will synergistically increase both adoption of Jasmine rice and farm productivity.
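The bracketed calculation reads a mean technical efficiency of 0.63 as realised output being 63% of frontier output, so the shortfall relative to what is actually produced is (100 − 63)/63 ≈ 59%. In code:

```python
# Mean technical efficiency: actual output / frontier (potential) output.
te = 0.63

# Shortfall expressed relative to realised output, matching the
# abstract's (100 - 63) / 63 calculation.
loss_share = (1 - te) / te
print(round(loss_share * 100))  # 59 (per cent)
```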

78 citations

Journal ArticleDOI
TL;DR: A new clustering algorithm based on the evidential K nearest-neighbor (EK-NN) rule iteratively reassigns objects to clusters until a stable partition is obtained; it generally performs better than density-based and model-based procedures at finding a partition with an unknown number of clusters.
Abstract: We propose a new clustering algorithm based on the evidential K nearest-neighbor (EK-NN) rule. Starting from an initial partition, the algorithm, called EK-NNclus, iteratively reassigns objects to clusters using the EK-NN rule, until a stable partition is obtained. After convergence, the cluster membership of each object is described by a Dempster-Shafer mass function assigning a mass to each cluster and to the whole set of clusters. The mass assigned to the set of clusters can be used to identify outliers. The method can be implemented in a competitive Hopfield neural network, whose energy function is related to the plausibility of the partition. The procedure can thus be seen as searching for the most plausible partition of the data. The EK-NNclus algorithm can be set up to depend on two parameters, the number K of neighbors and a scale parameter, which can be fixed using simple heuristics. The number of clusters does not need to be determined in advance. Numerical experiments with a variety of datasets show that the method generally performs better than density-based and model-based procedures for finding a partition with an unknown number of clusters.
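The Dempster-Shafer machinery of EK-NNclus is not reproduced here, but the core relabelling loop can be sketched as plain K-nearest-neighbour majority voting on 1-D data (the function name and data are illustrative; the real algorithm weights votes with mass functions derived from distances):

```python
def knn_relabel_cluster(points, k=2, max_iter=50):
    """Toy sketch of the EK-NNclus relabelling loop: start with every
    object in its own cluster and repeatedly reassign each object to the
    majority label among its K nearest neighbours until the partition is
    stable. The published algorithm uses Dempster-Shafer mass functions;
    this sketch keeps only plain majority voting."""
    labels = list(range(len(points)))  # initial partition: one cluster per object
    for _ in range(max_iter):
        changed = False
        for i, p in enumerate(points):
            # K nearest neighbours of point i (1-D Euclidean distance)
            neigh = sorted((abs(p - q), j) for j, q in enumerate(points) if j != i)[:k]
            votes = {}
            for _, j in neigh:
                votes[labels[j]] = votes.get(labels[j], 0) + 1
            best = max(votes, key=votes.get)  # ties go to the nearer neighbour's label
            if best != labels[i]:
                labels[i] = best
                changed = True
        if not changed:
            break
    return labels

# two well-separated 1-D groups; the number of clusters is never specified
data = [0.0, 0.1, 0.2, 10.0, 10.1, 10.2]
labels = knn_relabel_cluster(data, k=2)
# the six singleton clusters merge into one label per group
```

As in the paper, the number of clusters emerges from the data: labels that attract no votes simply die out during the iterations.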

76 citations

Journal ArticleDOI
TL;DR: In this article, the authors improve the EVCLUS algorithm, which constructs a credal partition in such a way that larger dissimilarities between objects correspond to higher degrees of conflict between the associated mass functions, making it applicable to very large dissimilarity data.
Abstract: In evidential clustering, the membership of objects to clusters is considered to be uncertain and is represented by Dempster-Shafer mass functions, forming a credal partition. The EVCLUS algorithm constructs a credal partition in such a way that larger dissimilarities between objects correspond to higher degrees of conflict between the associated mass functions. In this paper, we present several improvements to EVCLUS, making it applicable to very large dissimilarity data. First, the gradient-based optimization procedure in the original EVCLUS algorithm is replaced by a much faster iterative row-wise quadratic programming method. Secondly, we show that EVCLUS can be provided with only a random sample of the dissimilarities, reducing the time and space complexity from quadratic to roughly linear. Finally, we introduce a two-step approach to construct credal partitions assigning masses to selected pairs of clusters, making the algorithm outputs more informative than those of the original EVCLUS, while remaining manageable for large numbers of clusters.
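The notion of conflict that EVCLUS matches against dissimilarities can be made concrete: it is the total mass two Dempster-Shafer mass functions assign to pairs of disjoint focal sets. A minimal sketch with hypothetical mass functions over two clusters:

```python
from itertools import product

def degree_of_conflict(m1, m2):
    """Degree of conflict between two Dempster-Shafer mass functions:
    the total mass assigned to pairs of disjoint focal sets (the usual
    conflict term in Dempster's rule of combination). Focal sets are
    represented as frozensets of cluster labels."""
    return sum(p * q
               for (A, p), (B, q) in product(m1.items(), m2.items())
               if not (A & B))

# Mass functions over two clusters {1, 2}; masses on {1, 2} itself
# express ignorance about the cluster membership.
m_a = {frozenset({1}): 0.8, frozenset({1, 2}): 0.2}  # strongly cluster 1
m_b = {frozenset({1}): 0.7, frozenset({1, 2}): 0.3}  # also cluster 1
m_c = {frozenset({2}): 0.9, frozenset({1, 2}): 0.1}  # strongly cluster 2

low = degree_of_conflict(m_a, m_b)   # 0: every pair of focal sets intersects
high = degree_of_conflict(m_a, m_c)  # 0.8 * 0.9, from the disjoint pair ({1}, {2})
```

EVCLUS searches for mass functions so that these conflict values reproduce the ordering of the observed dissimilarities: similar objects get low-conflict masses, dissimilar ones high-conflict masses.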

68 citations

Journal ArticleDOI
TL;DR: This paper estimates the dependence between the percentage changes of the agricultural price and agricultural production indices of Thailand, together with their conditional volatilities, using copula-based GARCH models, which provide less restrictive specifications of both the dependence structure and the conditional volatilities.
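The copula-GARCH estimation itself is not reproduced here, but one standard ingredient can be sketched: for a Gaussian copula, Spearman's rank correlation ρ_s maps to the copula correlation parameter via ρ = 2·sin(π·ρ_s/6). A toy sketch (helper names are illustrative; ties in the rank computation are ignored):

```python
import math

def ranks(x):
    """Simple 1..n ranks; ties are not averaged in this sketch."""
    order = sorted(range(len(x)), key=lambda i: x[i])
    r = [0.0] * len(x)
    for rank, i in enumerate(order, start=1):
        r[i] = float(rank)
    return r

def spearman_rho(x, y):
    """Spearman's rank correlation: Pearson correlation of the ranks."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mean = (n + 1) / 2
    cov = sum((a - mean) * (b - mean) for a, b in zip(rx, ry))
    var = sum((a - mean) ** 2 for a in rx)
    return cov / var

def gaussian_copula_param(rho_s):
    """Map Spearman's rho to the Gaussian copula correlation parameter
    via rho = 2 * sin(pi * rho_s / 6)."""
    return 2.0 * math.sin(math.pi * rho_s / 6.0)
```

In a copula-GARCH model this kind of copula parameter governs the dependence between the (uniform-transformed) standardized GARCH residuals of the two series, while each marginal GARCH equation governs its own conditional volatility.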

62 citations


Cited by
Journal ArticleDOI

08 Dec 2001-BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one: an odd beast, an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Christopher M. Bishop
01 Jan 2006
TL;DR: This book covers probability distributions, linear models for regression and classification, neural networks, kernel methods, sparse kernel machines, graphical models, mixture models and EM, approximate inference, sampling methods, continuous latent variables, sequential data, and combining models.
Abstract: Probability Distributions.- Linear Models for Regression.- Linear Models for Classification.- Neural Networks.- Kernel Methods.- Sparse Kernel Machines.- Graphical Models.- Mixture Models and EM.- Approximate Inference.- Sampling Methods.- Continuous Latent Variables.- Sequential Data.- Combining Models.

10,141 citations

Book
01 Jan 2009

8,216 citations

Journal ArticleDOI

6,278 citations