
Showing papers by "Oliver Linton published in 2017"


Journal ArticleDOI
TL;DR: In this paper, the authors investigate a longitudinal data model with nonparametric regression functions that may vary across the observed individuals and develop a statistical procedure to estimate the unknown group structure from the data.
Abstract: We investigate a longitudinal data model with non-parametric regression functions that may vary across the observed individuals. In a variety of applications, it is natural to impose a group structure on the regression curves. Specifically, we may suppose that the observed individuals can be grouped into a number of classes whose members all share the same regression function. We develop a statistical procedure to estimate the unknown group structure from the data. Moreover, we derive the asymptotic properties of the procedure and investigate its finite sample performance by means of a simulation study and a real data example.

37 citations




ReportDOI
TL;DR: In this paper, the authors studied the effect of funding costs on the conditional probability of issuing a corporate bond and found that for non-financial firms yields are negatively related to bond issuance but that the effect is larger in the pre-crisis period.
Abstract: What is the effect of funding costs on the conditional probability of issuing a corporate bond? We study this question in a novel dataset covering 5610 issuances by US firms over the period from 1990 to 2014. Identification of this effect is complicated because of unobserved, common shocks such as the global financial crisis. To account for these shocks, we extend the common correlated effects estimator to settings where outcomes are discrete. Both the asymptotic properties and the small-sample behavior of this estimator are documented. We find that for non-financial firms yields are negatively related to bond issuance but that the effect is larger in the pre-crisis period.

26 citations


ReportDOI
TL;DR: In this paper, the authors consider nonparametric additive models with a deterministic time trend and both stationary and integrated variables as components, and propose an estimation strategy based on orthogonal series expansion that takes account of the different type of stationarity/nonstationarity possessed by each covariate.

20 citations



Journal ArticleDOI
20 Jul 2017
TL;DR: The authors studied the differences between the income distributions of males and females drawn from Metis, Inuit, North American Indian and Non-Aboriginal constituencies in Canada in the first decade of the twenty-first century.
Abstract: Following the work of Gini, Dagum and Tukey, this paper extends Gini’s Transvariation measure for comparing two distributions to the simultaneous comparison of many distributions. In so doing, it develops measures of absolute and relative similarity, dissimilarity and exceptionality together with techniques for assessing particular aspects of variations across those distributions. These techniques are exemplified in a study of differences between the income distributions of males and females drawn from Metis, Inuit, North American Indian and Non-Aboriginal constituencies in Canada in the first decade of the twenty-first century. While the distributions were becoming increasingly similar (interpreted as improving equality of opportunity), this was occurring primarily at the center of the distribution. At the extremes, the distributions were diverging, suggesting that such improvements in equality of opportunity were not for all.

10 citations


ReportDOI
TL;DR: This article proposed a semi-parametric coupled component exponential GARCH model for intraday and overnight returns that allows the two series to have different dynamical properties and adopted a dynamic conditional score model with t-distributed innovations that captures the very heavy tails of overnight returns.

9 citations


Journal ArticleDOI
TL;DR: In this article, the authors provide new identification results for the bid-ask spread and the nonparametric distribution of the latent fundamental price increments (e_t) from the observed transaction prices alone. The results allow for discrete or continuous e_t, and the observed price increments need not have any finite moments.

8 citations


Book
17 Mar 2017

8 citations


Journal ArticleDOI
TL;DR: In this article, an alternative estimator for the EGARCH model is presented, which is available in a simple closed form and which can be used as starting values for MLE.
Abstract: The EGARCH is a popular model for discrete time volatility since it allows for asymmetric effects and naturally ensures positivity even when including exogenous variables. Estimation and inference are usually done via maximum likelihood. Although some progress has been made recently, a complete distribution theory of MLE for EGARCH models is still missing. Furthermore, the estimation procedure itself may be highly sensitive to starting values, the choice of numerical optimization algorithm, etc. We present an alternative estimator that is available in a simple closed form and which could be used, for example, as starting values for MLE. The estimator of the dynamic parameter is independent of the innovation distribution. For the other parameters we assume that the innovation distribution belongs to the class of Generalized Error Distributions (GED), profiling out its parameter in the estimation procedure. We discuss the properties of the proposed estimator and illustrate its performance in a simulation study.

8 citations
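The closed-form estimator itself is not spelled out in the abstract, but the model it targets is standard. A minimal sketch of an EGARCH(1,1) simulation (illustrative parameter values; this is not the authors' estimator) shows how the log-variance recursion guarantees positivity while allowing asymmetric effects:

```python
import math
import random

def simulate_egarch(n, omega=-0.1, beta=0.95, gamma=-0.08, delta=0.2, seed=1):
    """Simulate an EGARCH(1,1):
        log sig2_t = omega + beta*log sig2_{t-1} + gamma*z_{t-1} + delta*(|z_{t-1}| - E|z|),
    with gamma < 0 producing the leverage effect (negative shocks raise volatility more)."""
    rng = random.Random(seed)
    e_abs = math.sqrt(2.0 / math.pi)      # E|z| for a standard normal z
    log_sig2 = omega / (1.0 - beta)       # start at the unconditional level
    returns, variances = [], []
    z_prev = 0.0
    for _ in range(n):
        log_sig2 = omega + beta * log_sig2 + gamma * z_prev + delta * (abs(z_prev) - e_abs)
        sig = math.exp(0.5 * log_sig2)    # positivity holds by construction
        z_prev = rng.gauss(0.0, 1.0)
        returns.append(sig * z_prev)
        variances.append(sig * sig)
    return returns, variances

r, v = simulate_egarch(1000)
```

Because the variance enters through its logarithm, no parameter restrictions are needed for positivity, which is the feature the abstract highlights.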


Journal ArticleDOI
TL;DR: The limit theory of the quantilogram and cross-quantilogram under long memory was studied in this paper; a moving block bootstrap (MBB) procedure was proposed and its consistency proved, enabling consistent confidence interval construction for the quantilograms.
Abstract: This paper studies the limit theory of the quantilogram and cross-quantilogram under long memory. We establish the sub-root-n central limit theorems for quantilograms that depend on nuisance parameters. A moving block bootstrap (MBB) procedure is proposed and its consistency is proved, thereby enabling a consistent confidence interval construction for the quantilograms. The newly developed uniform reduction principles (URPs) for the quantilograms serve as the main technical devices used to derive asymptotics and MBB validity. Confirmatory simulation results are reported. Some empirical practices on quantile predictive relations between financial returns and long memory predictors are performed using the new methods.
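The quantilogram underlying the paper has a simple sample form: the autocorrelation of the quantile-hit process psi_t = alpha - 1{x_t < q_alpha}. A minimal sketch (nearest-rank quantile, no long-memory or bootstrap machinery):

```python
import random

def quantile(xs, alpha):
    """Simple nearest-rank empirical quantile; enough for a sketch."""
    s = sorted(xs)
    return s[min(len(s) - 1, int(alpha * len(s)))]

def quantilogram(x, alpha, k):
    """Sample quantilogram at lag k: the correlation of the quantile-hit
    process psi_t = alpha - 1{x_t < q_alpha} with its own lag."""
    q = quantile(x, alpha)
    psi = [alpha - (1.0 if xt < q else 0.0) for xt in x]
    num = sum(psi[t] * psi[t - k] for t in range(k, len(x)))
    den = sum(p * p for p in psi)
    return num / den

rng = random.Random(0)
x = [rng.gauss(0, 1) for _ in range(2000)]
rho = quantilogram(x, 0.1, 1)   # near zero for i.i.d. data
```

For i.i.d. data the quantilogram is close to zero at all lags; persistent directional predictability at a given quantile shows up as slowly decaying values, which is where the long-memory limit theory of the paper becomes relevant.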

Posted Content
TL;DR: In this article, the authors provide an in-depth analysis of the evolution of liquidity during the flash episode in sterling during the early hours of 7 October 2016. And they examine a number of estimates both of the cost of trading, and the price impact of executed transactions.
Abstract: This paper provides an in-depth analysis of the evolution of liquidity during the flash episode in sterling during the early hours of 7 October 2016. It examines a number of estimates both of the cost of trading, and the price impact of executed transactions. These include a variant of the ‘volatility over volume’ measure of liquidity based on transaction data, which provides a better proxy of illiquidity — as given by measures based on high-frequency limit order book data — than other summary measures of price impact. The paper also shows that the fall in the value of sterling during the initial part of the flash episode was consistent with the estimated impact on prices of a large number of individually small — but in aggregate large — volume of orders to sell sterling during a normally quiet period of the trading day. However, the subsequent change in price was larger than that consistent with the estimated impact on prices of observed orders to sell sterling. This might support the suggestion, which was included in the report on the episode provided by the Bank for International Settlements, that the move in sterling may have been amplified by the pause in trading on the CME futures exchange.

Journal ArticleDOI
TL;DR: In this article, the authors investigated the behavior of the Betfair betting market and the sterling/dollar exchange rate (futures price) during 24 June 2016, the night of the EU referendum.
Abstract: We study the behaviour of the Betfair betting market and the sterling/dollar exchange rate (futures price) during 24 June 2016, the night of the EU referendum. We investigate how the two markets responded to the announcement of the voting results. We employ a Bayesian updating methodology to update prior opinion about the likelihood of the final outcome of the vote. We then relate the voting model to the real time evolution of the market determined prices. We find that although both markets appear to be inefficient in absorbing the new information contained in vote outcomes, the betting market is apparently less inefficient than the FX market. The different rates of convergence to fundamental value between the two markets lead to highly profitable arbitrage opportunities.
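The paper's voting model is not reproduced here, but the flavour of Bayesian updating on incoming vote counts can be sketched with a conjugate Beta-Binomial posterior. The declaration counts below are made up for illustration, and real reporting areas are not exchangeable binomial draws:

```python
import math

def update_beta(a, b, leave, remain):
    """Conjugate Beta-Binomial update: add the observed vote counts to the prior."""
    return a + leave, b + remain

def prob_leave_wins(a, b):
    """P(p > 0.5) under a Beta(a, b) posterior, via a normal approximation."""
    mean = a / (a + b)
    var = a * b / ((a + b) ** 2 * (a + b + 1))
    z = (0.5 - mean) / math.sqrt(var)
    return 0.5 * (1.0 - math.erf(z / math.sqrt(2.0)))

# Flat prior, then two hypothetical early declarations (counts are invented):
a, b = 1.0, 1.0
for leave, remain in [(82000, 73000), (96000, 67000)]:
    a, b = update_beta(a, b, leave, remain)
p_win = prob_leave_wins(a, b)
```

The point of the exercise is the one the paper makes: the posterior probability of the final outcome can move decisively long before market prices fully adjust.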

Posted Content
TL;DR: In this paper, an estimation methodology for a semiparametric quantile factor panel model is proposed, which is robust to the existence of moments and to the form of weak cross-sectional dependence in the idiosyncratic error term.
Abstract: We propose an estimation methodology for a semiparametric quantile factor panel model. We provide tools for inference that are robust to the existence of moments and to the form of weak cross-sectional dependence in the idiosyncratic error term. We apply our method to daily stock return data.



Book ChapterDOI
01 Jan 2017
TL;DR: In this chapter, the classical framework of hypothesis testing is discussed: the hypothesis and the test procedure. Both the exact (finite sample) and approximate (large sample) approaches are considered, together with the power function and the Neyman–Pearson theory of optimal testing.
Abstract: We discuss the classical framework of hypothesis testing: the hypothesis and the test procedure. We consider both the exact or finite sample approach and the approximate or large sample approach. We consider the power function and give the Neyman–Pearson theory of optimal testing.
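The power function can be made concrete for the one-sided z-test; a sketch using the normal CDF (1.6449 is the approximate 95th percentile of N(0,1)):

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def power_one_sided_z(mu, mu0=0.0, sigma=1.0, n=25, alpha=0.05):
    """Power of the one-sided z-test of H0: mu = mu0 against H1: mu > mu0.
    Reject when sqrt(n)*(xbar - mu0)/sigma exceeds the 1-alpha normal quantile."""
    z_crit = 1.6449                              # approx 95th percentile of N(0,1)
    shift = math.sqrt(n) * (mu - mu0) / sigma
    return 1.0 - norm_cdf(z_crit - shift)

size = power_one_sided_z(0.0)   # at mu = mu0 the power equals the size, 0.05
pw = power_one_sided_z(0.5)     # power grows with the true effect and with n
```

Evaluating the power function at the null gives the size of the test; the Neyman–Pearson lemma says that, for simple hypotheses, the likelihood-ratio test maximizes this power at any given size.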


Book ChapterDOI
01 Jan 2017
TL;DR: In this paper, the authors define conditional probability and give Bayes Rule for calculating conditional probability in reverse, and give several applications of Bayes rule to economic problems, including the concept of independence and conditional independence.
Abstract: We define conditional probability and give Bayes Rule for calculating conditional probability in reverse. We give several applications of Bayes Rule to economic problems. We define the concept of independence and conditional independence and give examples illustrating the difference.
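Bayes Rule "in reverse" can be illustrated with a classic base-rate example (the numbers are hypothetical):

```python
def bayes_rule(prior, p_e_given_h, p_e_given_not_h):
    """P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]."""
    num = p_e_given_h * prior
    return num / (num + p_e_given_not_h * (1.0 - prior))

# A 1% base rate with a 95%-sensitive, 90%-specific signal:
posterior = bayes_rule(prior=0.01, p_e_given_h=0.95, p_e_given_not_h=0.10)
# posterior ≈ 0.0876: even after a positive signal, H remains unlikely
```

The example shows why conditioning "in reverse" matters: the posterior is dominated by the low prior, not by the high sensitivity of the signal.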

Book ChapterDOI
01 Jan 2017
TL;DR: In this article, confidence sets and intervals are defined based on exact and approximate distribution theory, and Bayesian intervals are considered for the binomial special case.
Abstract: We define confidence sets and intervals based on exact and approximate distribution theory. We consider also Bayesian intervals for the binomial special case.
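Exact versus approximate intervals can be illustrated for the binomial case; a sketch comparing the Wald interval with the Wilson score interval (chosen here as a standard improvement; the chapter's own examples may differ):

```python
import math

def wald_ci(k, n, z=1.96):
    """Approximate (Wald) interval: phat ± z*sqrt(phat(1-phat)/n)."""
    p = k / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p - half, p + half

def wilson_ci(k, n, z=1.96):
    """Wilson score interval; better coverage for small n or extreme phat."""
    p = k / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return center - half, center + half

lo_w, hi_w = wald_ci(2, 20)      # lower endpoint falls below 0
lo_s, hi_s = wilson_ci(2, 20)    # stays inside [0, 1]
```

With 2 successes in 20 trials the Wald interval spills below zero, while the score interval respects the parameter space: a concrete reason the chapter distinguishes exact from approximate theory.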

Book ChapterDOI
01 Jan 2017
TL;DR: In this article, the formal definitions of probability are introduced, some implications of the axioms of probability are established, and techniques for counting the elements of sets that are useful in probability determination are defined.
Abstract: In this chapter we introduce the formal definitions of probability. We establish some of the implications of the axioms of probability. We also define some techniques for counting the elements of sets that are useful in probability determination.
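The counting techniques referred to can be sketched with Python's standard-library combinatorics (math.perm and math.comb require Python 3.8+):

```python
import math

# Multiplication rule, permutations, combinations:
n_pins = 10 ** 4                      # 4-digit codes, digits may repeat
n_orders = math.perm(5, 3)            # ordered draws of 3 from 5 = 60
n_hands = math.comb(52, 5)            # unordered 5-card hands = 2,598,960

# Classical probability from counting:
# P(at least two of 23 people share a birthday)
p_all_distinct = math.perm(365, 23) / 365 ** 23
p_shared = 1 - p_all_distinct         # just over one half
```

The birthday calculation is the standard illustration of how counting equally likely outcomes turns the axioms into concrete probabilities.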

Book ChapterDOI
01 Jan 2017
TL;DR: In this chapter, some of the issues and theory around the estimation of parameters are discussed, including the method of moments and the maximum likelihood estimator, and the performance of estimation methods is evaluated through mean squared error and large-sample mean squared error.
Abstract: In this chapter we discuss some of the issues and theory around the estimation of parameters. We discuss the method of moments and the maximum likelihood estimator. We cover the performance evaluation of estimation methods through Mean Squared Error and large sample mean squared error.
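Performance evaluation via mean squared error can be sketched by simulation; here the sample mean and sample median compete as estimators of a normal location parameter (an illustrative setup, not the chapter's):

```python
import random
import statistics

def mse(estimates, true_value):
    """Simulated mean squared error = variance + squared bias."""
    return sum((e - true_value) ** 2 for e in estimates) / len(estimates)

rng = random.Random(42)
means, medians = [], []
for _ in range(2000):
    sample = [rng.gauss(0.0, 1.0) for _ in range(50)]
    means.append(statistics.fmean(sample))
    medians.append(statistics.median(sample))

mse_mean = mse(means, 0.0)      # close to 1/50 = 0.02 in theory
mse_median = mse(medians, 0.0)  # close to pi/(2*50), larger under normality
```

Under normality the mean dominates the median in MSE (asymptotic relative efficiency pi/2), but under heavy-tailed distributions the ranking can reverse, which is why MSE comparisons depend on the model.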

Book ChapterDOI
01 Jan 2017
TL;DR: In this paper, the generalized method of moments estimator (GMOMA) is introduced and the consistency and asymptotic normality of a class of estimators defined through minimizing an objective function is investigated.
Abstract: We introduce the Generalized Method of Moments estimator. We provide results regarding the consistency and asymptotic normality of a class of estimators defined through minimizing an objective function.
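A GMM estimator defined through minimizing an objective function can be sketched in one dimension; here two moment conditions for an exponential distribution are combined with an identity weight matrix (an illustrative example, not from the chapter):

```python
import random

def gmm_objective(theta, xs):
    """Identity-weighted GMM objective with two moment conditions for an
    exponential distribution with mean theta: E[X] = theta, E[X^2] = 2*theta^2."""
    m1 = sum(xs) / len(xs) - theta
    m2 = sum(x * x for x in xs) / len(xs) - 2.0 * theta * theta
    return m1 * m1 + m2 * m2

def minimize_scalar(f, a, b, tol=1e-8):
    """Golden-section search; enough for this one-dimensional sketch."""
    phi = (5 ** 0.5 - 1) / 2
    while b - a > tol:
        c = b - phi * (b - a)
        d = a + phi * (b - a)
        if f(c) < f(d):
            b = d
        else:
            a = c
    return (a + b) / 2

rng = random.Random(7)
xs = [rng.expovariate(1.0 / 2.0) for _ in range(5000)]   # true mean 2.0
theta_hat = minimize_scalar(lambda t: gmm_objective(t, xs), 0.1, 10.0)
```

With more moment conditions than parameters the model is over-identified, and an efficient choice of weight matrix (rather than the identity used here) delivers the smallest asymptotic variance in the class.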

Book ChapterDOI
01 Jan 2017
TL;DR: In this article, the authors consider simulation methods for estimating distributions, including the bootstrap and subsampling methods for estimating distributions or critical values, and give some analytical calculations.
Abstract: We consider simulation methods for estimating distributions. We consider the bootstrap and subsampling methods for estimating distributions or critical values and give some analytical calculations.
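The bootstrap for estimating distributions or critical values can be sketched with a percentile confidence interval for the median (illustrative; the chapter's analytical calculations are not reproduced):

```python
import random
import statistics

def bootstrap_ci(data, stat, n_boot=2000, alpha=0.05, seed=3):
    """Percentile bootstrap confidence interval for stat(data):
    resample with replacement, recompute the statistic, take quantiles."""
    rng = random.Random(seed)
    n = len(data)
    reps = sorted(stat([rng.choice(data) for _ in range(n)]) for _ in range(n_boot))
    lo = reps[int((alpha / 2) * n_boot)]
    hi = reps[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

rng = random.Random(1)
data = [rng.gauss(5.0, 2.0) for _ in range(100)]
lo, hi = bootstrap_ci(data, statistics.median)
```

The resampling distribution stands in for the unknown sampling distribution of the statistic, which is precisely the estimation-of-distributions idea the chapter develops.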

Book ChapterDOI
01 Jan 2017
TL;DR: In this article, the authors consider an example of recent work on nonparametric identification, which goes beyond the parametric framework we have mostly considered, and they consider continuous random variables Y 1,Y 2,X Y 1, Y 2, X, e 1, e 2 ) where e 1 e 1 and e 2 e 2 are unobserved shocks, and h 1 h 1 and h 2 h 2 are unknown structural functions of interest.
Abstract: We consider an example of recent work on nonparametric identification, which goes beyond the parametric framework we have mostly considered. Suppose that we observe continuous random variables Y 1 ,Y 2 ,X Y 1 , Y 2 , X , where Y 1 = h 1 ( Y 2 , X , e 1 , e 2 ) Y 2 = h 2 ( Y 1 , X , e 1 , e 2 ) , where e 1 e 1 and e 2 e 2 are unobserved shocks, and h 1 h 1 and h 2 h 2 are unknown structural functions of interest.

Book ChapterDOI
01 Jan 2017
TL;DR: This work introduces matrices and their main properties (linear independence, rank, and dimension), defines eigenvalues and eigenvectors, and derives their main properties.
Abstract: We introduce matrices and their main properties: linear independence, rank, and dimension. We define eigenvalues and eigenvectors and derive their main properties. We consider systems of linear equations. We define linear spaces and projections.

Book ChapterDOI
01 Jan 2017
TL;DR: This work defines the main concept of posterior distribution used in Bayesian inference and provides some of the main concepts used in frequentist inference: Likelihood, Identification, Sufficient statistics, and Ancillarity.
Abstract: We discuss data, sampling, and descriptive methods. We provide some of the main concepts used in frequentist inference: Likelihood, Identification, Sufficient statistics, and Ancillarity. We define the main concept of posterior distribution used in Bayesian inference.

Book ChapterDOI
01 Jan 2017
TL;DR: The concept of real valued random variables is introduced, the cumulative distribution function and quantile function are defined, and their main properties are given.
Abstract: We introduce the concept of real valued random variables. We define the cumulative distribution function and quantile function, and give their main properties. We define probability density function and probability mass function for continuously distributed and discretely distributed random variables respectively.
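The cumulative distribution function and quantile function have empirical counterparts that make their defining properties concrete; a sketch, in which the quantile function is the left-continuous generalized inverse of the CDF:

```python
import math

def ecdf(xs):
    """Empirical CDF: F(t) = fraction of observations <= t."""
    s = sorted(xs)
    def F(t):
        lo, hi = 0, len(s)          # binary search for the count of s[i] <= t
        while lo < hi:
            mid = (lo + hi) // 2
            if s[mid] <= t:
                lo = mid + 1
            else:
                hi = mid
        return lo / len(s)
    return F

def empirical_quantile(xs, u):
    """Generalized inverse: Q(u) = inf{t : F(t) >= u}, for 0 < u <= 1."""
    s = sorted(xs)
    k = max(0, math.ceil(u * len(s)) - 1)
    return s[k]

data = [3, 1, 4, 1, 5, 9, 2, 6]
F = ecdf(data)
# F(4) = 5/8, since five of the eight observations are <= 4
# empirical_quantile(data, 0.5) = 3, the smallest t with F(t) >= 1/2
```

The pair illustrates the main properties listed in the chapter: F is non-decreasing and right-continuous, and Q(u) <= t exactly when u <= F(t).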

Book ChapterDOI
01 Jan 2017
TL;DR: In this article, the effects of including too many variables and too few variables in a regression model are considered, assuming that there is a true model, which of course we may or may not know.
Abstract: We now work towards a consideration of which variables, or how many variables, to include in a regression. We shall assume that there is a true model, which of course we may or may not know. We then consider the effects of including too many variables and of including too few variables.
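The cost of including too few variables is omitted-variable bias; a simulation sketch with two correlated regressors (the coefficients are chosen for illustration):

```python
import random

def ols_slope(y, x):
    """Simple-regression slope: cov(x, y) / var(x)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sxy / sxx

rng = random.Random(11)
n = 20000
x2 = [rng.gauss(0, 1) for _ in range(n)]
x1 = [0.8 * b + rng.gauss(0, 0.6) for b in x2]          # x1 correlated with x2
y = [2.0 * a + 1.5 * b + rng.gauss(0, 1) for a, b in zip(x1, x2)]

# Omitting x2 biases the slope on x1 towards
#   2.0 + 1.5 * cov(x1, x2) / var(x1) = 2.0 + 1.5 * 0.8 = 3.2
b_short = ols_slope(y, x1)
```

Including too many (irrelevant) variables, by contrast, leaves the estimator unbiased but inflates its variance, which is the trade-off the chapter examines.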

Book ChapterDOI
01 Jan 2017
TL;DR: In this paper, the authors established some results about the large sample properties of the least squares estimator for both the i.i.d. case and some cases which are not.
Abstract: We establish some results about the large sample properties of the least squares estimator. We consider both the i.i.d. case and some cases which are not i.i.d.
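Consistency of the least squares estimator in the i.i.d. case can be illustrated by watching the estimation error shrink with the sample size (averaged over replications to smooth simulation noise):

```python
import random

def ols_slope(y, x):
    """Least squares slope: cov(x, y) / var(x)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))

def slope_error(n, beta=1.0, seed=0):
    """Absolute error of OLS in y = beta*x + e on one i.i.d. sample of size n."""
    rng = random.Random(seed)
    x = [rng.gauss(0, 1) for _ in range(n)]
    y = [beta * xi + rng.gauss(0, 1) for xi in x]
    return abs(ols_slope(y, x) - beta)

def avg_error(n, reps=50):
    """Average error over independent replications."""
    return sum(slope_error(n, seed=s) for s in range(reps)) / reps

e_small, e_large = avg_error(100), avg_error(10000)   # error shrinks with n
```

The error falls roughly like 1/sqrt(n), in line with root-n consistency; the non-i.i.d. cases treated in the chapter change the normalization and limit distribution rather than the basic consistency argument.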