
Showing papers in "Journal of Forecasting in 2003"


Journal ArticleDOI
Kerstin Cuhls
Abstract: The definitions of forecasting vary to a certain extent, but they all have the view into the future in common. The future is unknown, but the broad, general directions can be guessed at and reasonably dealt with. Foresight goes further than forecasting, including aspects of networking and the preparation of decisions concerning the future. This is one reason why, in the 1990s, when foresight focused attention on a national scale in many countries, the wording also changed from forecasting to foresight. Foresight not only looks into the future by using all instruments of futures research, but includes utilizing implementations for the present. What does a result of a futures study mean for the present? Foresight is not planning, but foresight results provide ‘information’ about the future and are therefore one step in the planning and preparation of decisions. In this paper, some of the differences are described in a straightforward manner and demonstrated in the light of the German foresight process ‘Futur’. Copyright © 2003 John Wiley & Sons, Ltd.

324 citations


Journal ArticleDOI
TL;DR: In this paper, a number of statistical models for predicting the daily volatility of several key UK financial time series are explored, including linear and GARCH-type models of volatility, compared with forecasts derived from a multivariate approach.
Abstract: Recent research has suggested that forecast evaluation on the basis of standard statistical loss functions could prefer models which are sub-optimal when used in a practical setting. This paper explores a number of statistical models for predicting the daily volatility of several key UK financial time series. The out-of-sample forecasting performance of various linear and GARCH-type models of volatility are compared with forecasts derived from a multivariate approach. The forecasts are evaluated using traditional metrics, such as mean squared error, and also by how adequately they perform in a modern risk management setting. We find that the relative accuracies of the various methods are highly sensitive to the measure used to evaluate them. Such results have implications for any econometric time series forecasts which are subsequently employed in financial decision-making.
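As a rough illustration of the kind of evaluation described above, the sketch below simulates a return series, produces one-step-ahead GARCH(1,1) variance forecasts, and scores them against a squared-return proxy with mean squared error. The data-generating process and parameter values are illustrative assumptions, not the UK series or models estimated in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a daily return series with time-varying volatility (illustrative DGP,
# not the UK series used in the paper).
n = 1000
omega, alpha, beta = 0.05, 0.10, 0.85   # illustrative GARCH(1,1) parameters
sigma2 = np.empty(n)
r = np.empty(n)
sigma2[0] = omega / (1 - alpha - beta)
r[0] = np.sqrt(sigma2[0]) * rng.standard_normal()
for t in range(1, n):
    sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
    r[t] = np.sqrt(sigma2[t]) * rng.standard_normal()

# One-step-ahead GARCH(1,1) variance forecasts (true parameters assumed known here;
# in practice they would be estimated on an in-sample window).
forecast = omega + alpha * r[:-1] ** 2 + beta * sigma2[:-1]

# A naive benchmark: yesterday's squared return.
naive = r[:-1] ** 2

# Evaluate against the squared-return proxy for realized variance.
proxy = r[1:] ** 2
mse_garch = np.mean((forecast - proxy) ** 2)
mse_naive = np.mean((naive - proxy) ** 2)
print(f"MSE GARCH forecast: {mse_garch:.4f}, MSE naive forecast: {mse_naive:.4f}")
```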

197 citations


Journal ArticleDOI
TL;DR: In this article, the authors employ a two-stage model selection procedure for the S&P 500 index and India's NSE-50 index at the 95% and 99% levels.
Abstract: Value-at-Risk (VaR) is widely used as a tool for measuring the market risk of asset portfolios. However, alternative VaR implementations are known to yield fairly different VaR forecasts. Hence, every use of VaR requires choosing among alternative forecasting models. This paper undertakes two case studies in model selection, for the S&P 500 index and India's NSE-50 index, at the 95% and 99% levels. We employ a two-stage model selection procedure. In the first stage we test a class of models for statistical accuracy. If multiple models survive rejection with the tests, we perform a second stage filtering of the surviving models using subjective loss functions. This two-stage model selection procedure does prove to be useful in choosing a VaR model, while only incompletely addressing the problem. These case studies give us some evidence about the strengths and limitations of present knowledge on estimation and testing for VaR. Copyright © 2003 John Wiley & Sons, Ltd.
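The first stage of such a model selection procedure tests VaR forecasts for statistical accuracy. Below is a minimal sketch assuming a historical-simulation 99% VaR and Kupiec's proportion-of-failures coverage test, a common accuracy test; the paper's own battery of tests and data are not reproduced here.

```python
import numpy as np
from scipy.stats import chi2

def kupiec_pof(violations: np.ndarray, p: float) -> float:
    """Kupiec proportion-of-failures LR test p-value for a VaR with exceedance probability p."""
    n = len(violations)
    x = int(violations.sum())
    pi_hat = min(max(x / n, 1e-8), 1 - 1e-8)   # guard against degenerate 0 or n violations
    lr = -2 * ((n - x) * np.log(1 - p) + x * np.log(p)
               - (n - x) * np.log(1 - pi_hat) - x * np.log(pi_hat))
    return chi2.sf(lr, df=1)

# Illustrative data: simulated returns and a rolling historical-simulation 99% VaR.
rng = np.random.default_rng(1)
returns = rng.standard_t(df=5, size=750) * 0.01
window = 250
var99 = np.array([-np.quantile(returns[t - window:t], 0.01)
                  for t in range(window, len(returns))])
violations = returns[window:] < -var99

print(f"violation rate: {violations.mean():.3%}, "
      f"Kupiec p-value: {kupiec_pof(violations, 0.01):.3f}")
```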

181 citations


Journal ArticleDOI
TL;DR: The rough sets models developed in this research did not provide any significant comparative advantage in prediction accuracy over the actual auditors' methodologies; given the study's design, these results should be fairly robust.
Abstract: Both international and US auditing standards require auditors to evaluate the risk of bankruptcy when planning an audit and to modify their audit report if the bankruptcy risk remains high at the conclusion of the audit. Bankruptcy prediction is a problematic issue for auditors as the development of a cause–effect relationship between attributes that may cause or be related to bankruptcy and the actual occurrence of bankruptcy is difficult. Recent research indicates that auditors only signal bankruptcy in about 50% of the cases where companies subsequently declare bankruptcy. Rough sets theory is a new approach for dealing with the problem of apparent indiscernibility between objects in a set that has had a reported bankruptcy prediction accuracy ranging from 76% to 88% in two recent studies. These accuracy levels appear to be superior to auditor signalling rates, however, the two prior rough sets studies made no direct comparisons to auditor signalling rates and either employed small sample sizes or non-current data. This study advances research in this area by comparing rough set prediction capability with actual auditor signalling rates for a large sample of United States companies from the 1991 to 1997 time period. Prior bankruptcy prediction research was carefully reviewed to identify 11 possible predictive factors which had both significant theoretical support and were present in multiple studies. These factors were expressed as variables and data for 11 variables was then obtained for 146 bankrupt United States public companies during the years 1991–1997. This sample was then matched in terms of size and industry to 145 non-bankrupt companies from the same time period. The overall sample of 291 companies was divided into development and validation subsamples. Rough sets theory was then used to develop two different bankruptcy prediction models, each containing four variables from the 11 possible predictive variables. The rough sets theory based models achieved 61% and 68% classification accuracy on the validation sample using a progressive classification procedure involving three classification strategies. By comparison, auditors directly signalled going concern problems via opinion modifications for only 54% of the bankrupt companies. However, the auditor signalling rate for bankrupt companies increased to 66% when other opinion modifications related to going concern issues were included. In contrast with prior rough sets theory research which suggested that rough sets theory offered significant bankruptcy predictive improvements for auditors, the rough sets models developed in this research did not provide any significant comparative advantage with regard to prediction accuracy over the actual auditors' methodologies. The current research results should be fairly robust since this rough sets theory based research employed (1) a comparison of the rough sets model results to actual auditor decisions for the same companies, (2) recent data, (3) a relatively large sample size, (4) real world bankruptcy/non-bankruptcy frequencies to develop the variable classifications, and (5) a wide range of industries and company sizes. Copyright © 2003 John Wiley & Sons, Ltd.
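For readers unfamiliar with rough sets, the sketch below computes lower and upper approximations of a 'bankrupt' decision class for a toy discretized decision table. The attributes and objects are invented for illustration and are unrelated to the 11 predictive variables or the matched sample used in the study.

```python
from collections import defaultdict

# Toy decision table: each row is (discretized attribute values, bankrupt?).
# Attribute names and cut-offs are purely illustrative.
records = [
    (("low_liquidity", "high_leverage"), True),    # 0
    (("low_liquidity", "high_leverage"), True),    # 1
    (("low_liquidity", "low_leverage"),  True),    # 2
    (("low_liquidity", "low_leverage"),  False),   # 3  indiscernible from 2, different outcome
    (("high_liquidity", "high_leverage"), False),  # 4
    (("high_liquidity", "low_leverage"),  False),  # 5
]

# Indiscernibility classes: group objects with identical attribute values.
classes = defaultdict(list)
for i, (attrs, _) in enumerate(records):
    classes[attrs].append(i)

bankrupt = {i for i, (_, d) in enumerate(records) if d}

# Lower approximation: indiscernibility classes entirely contained in the bankrupt set
# (objects that can be classified as bankrupt with certainty).
lower = {i for members in classes.values() if set(members) <= bankrupt for i in members}
# Upper approximation: classes that intersect the bankrupt set at all.
upper = {i for members in classes.values() if set(members) & bankrupt for i in members}

print("lower approximation:", sorted(lower))   # certain bankrupt cases
print("upper approximation:", sorted(upper))   # possible bankrupt cases
```

Objects in the lower approximation are classified as bankrupt with certainty under the chosen attributes, while the upper approximation also contains indiscernible borderline cases; handling that boundary region is what the rough-sets classification rules are built for.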

109 citations


Journal ArticleDOI
Henrik Amilon
TL;DR: In this paper, the authors examined whether a neural network (MLP) can be used to find a call option pricing formula better corresponding to market prices and the properties of the underlying asset than the Black-Scholes formula.
Abstract: The Black-Scholes formula is a well-known model for pricing and hedging derivative securities. It relies, however, on several highly questionable assumptions. This paper examines whether a neural network (MLP) can be used to find a call option pricing formula better corresponding to market prices and the properties of the underlying asset than the Black-Scholes formula. The neural network method is applied to the out-of-sample pricing and delta-hedging of daily Swedish stock index call options from 1997 to 1999. The relevance of a hedge-analysis is stressed further in this paper. As benchmarks, the Black-Scholes model with historical and implied volatility estimates are used. Comparisons reveal that the neural network models outperform the benchmarks both in pricing and hedging performances. A moving block bootstrap is used to test the statistical significance of the results. Although the neural networks are superior, the results are sometimes insignificant at the 5% level. Copyright © 2003 John Wiley & Sons, Ltd.
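The Black-Scholes benchmark used in the paper has a closed form; below is a minimal sketch of the call price and delta (the hedge ratio) with illustrative parameter values, not the Swedish index options data.

```python
import numpy as np
from scipy.stats import norm

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price and delta of a European call (no dividends)."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    price = S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)
    delta = norm.cdf(d1)            # hedge ratio used for delta-hedging
    return price, delta

# Illustrative index option (not the Swedish data of the paper).
price, delta = bs_call(S=100.0, K=105.0, T=30 / 365, r=0.04, sigma=0.20)
print(f"call price: {price:.3f}, delta: {delta:.3f}")
```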

95 citations


Journal ArticleDOI
TL;DR: It is shown that several difficult choices have to be made, often requiring an assessment of opposing and synergistic tendencies, in the process of obtaining a list of prioritized generic 'themes' for the UK Technology Foresight Programme.
Abstract: A main reason for the popularity of national technology foresight exercises over the last decade has been their promise of allowing emerging generic technology areas to be identified and prioritized for resource-allocation purposes. Yet descriptions of the conduct of such exercises tend to be superficial, providing few clues to those wanting to undertake similar exercises. Taking the UK Technology Foresight Programme as an example, this paper sets out to describe the processes used to obtain a list of prioritized generic ‘themes’. We show that several difficult choices have to be made, often requiring an assessment of opposing and synergistic tendencies. In the case of the UK Programme, a number of decisions seemed to be taken without adequate regard to some of the consequences. This resulted in the identification of generic themes that were, for the most part, subsequently ignored. This paper sets out to explain how this state of affairs came about, and points to possible lessons for those intending to embark upon similar exercises. Copyright © 2003 John Wiley & Sons, Ltd.

93 citations


Journal ArticleDOI
TL;DR: In this paper, the authors consider the potential of multicriteria decision-making methods in this kind of priority-determination and examine the limitations of these methods in the foresight context.
Abstract: In recent years, many countries have carried out foresight exercises to better exploit scientific and technological opportunities. Often, these exercises have sought to identify ‘critical’ or ‘key’ technologies or, more broadly, to establish research priorities. In this paper, we consider the potential of multicriteria decision-making methods in this kind of priority-determination and examine the limitations of these methods in the foresight context. We also provide results from a combined evaluation and foresight study where multicriteria methods were deployed to support the shaping of research and technology development activities in the Finnish forestry and forest industry. Copyright © 2003 John Wiley & Sons, Ltd.

87 citations



Journal ArticleDOI
TL;DR: In this paper, the power of five statistics (including the Kolmogorov-Smirnov (KS) statistic) to reject uniformity of the pits in the presence of misspecification in the mean, variance, skewness or kurtosis of the forecast errors was investigated.
Abstract: One popular method for testing the validity of a model's forecasts is to use the probability integral transforms (pits) of the forecasts and to test for departures from the dual hypotheses of independence and uniformity, with departures from uniformity tested using the Kolmogorov-Smirnov (KS) statistic. This paper investigates the power of five statistics (including the KS statistic) to reject uniformity of the pits in the presence of misspecification in the mean, variance, skewness or kurtosis of the forecast errors. The KS statistic has the lowest power of the five statistics considered and is always dominated by the Anderson-Darling statistic. Copyright © 2003 John Wiley & Sons, Ltd.
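A minimal sketch of the testing idea: transform the outcomes into pits under the forecast CDF and test the pits for uniformity, here with the Kolmogorov-Smirnov test and a directly computed Anderson-Darling statistic. The forecast distribution, the variance misspecification and the quoted critical value are illustrative assumptions, not the paper's design.

```python
import numpy as np
from scipy.stats import norm, kstest

rng = np.random.default_rng(2)

# Suppose the forecast density is N(0, 1) but the true errors have variance 1.2:
# a mild misspecification of the kind the paper studies.
y = rng.normal(0.0, np.sqrt(1.2), size=500)
pits = norm.cdf(y)            # probability integral transforms under the forecast CDF

# Kolmogorov-Smirnov test of uniformity on (0, 1).
ks_stat, ks_pvalue = kstest(pits, "uniform")

# Anderson-Darling statistic for a fully specified U(0,1) null, computed directly.
u = np.sort(pits)
n = len(u)
i = np.arange(1, n + 1)
a2 = -n - np.mean((2 * i - 1) * (np.log(u) + np.log(1 - u[::-1])))
# For a fully specified null the 5% critical value is roughly 2.49.

print(f"KS stat {ks_stat:.3f} (p = {ks_pvalue:.3f}), Anderson-Darling A^2 = {a2:.3f}")
```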

74 citations


Journal ArticleDOI
TL;DR: The authors compare linear autoregressive (AR) and self-exciting threshold auto-regression (SETAR) models in terms of their point forecast performance, and their ability to characterize the uncertainty surrounding those forecasts, i.e. interval or density forecasts.
Abstract: We compare linear autoregressive (AR) models and self-exciting threshold autoregressive (SETAR) models in terms of their point forecast performance, and their ability to characterize the uncertainty surrounding those forecasts, i.e. interval or density forecasts. A two-regime SETAR process is used as the data-generating process in an extensive set of Monte Carlo simulations, and we consider the discriminatory power of recently developed methods of forecast evaluation for different degrees of non-linearity. We find that the interval and density evaluation methods are unlikely to show the linear model to be deficient on samples of the size typical for macroeconomic data. Copyright © 2003 John Wiley & Sons, Ltd.
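A minimal sketch of this simulation design, assuming illustrative regime coefficients: generate data from a two-regime SETAR(1) process, fit a (misspecified) linear AR(1), and check the empirical coverage of its nominal 90% one-step-ahead intervals.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_setar(n, burn=200):
    """Two-regime SETAR(1) data-generating process (illustrative coefficients)."""
    y = np.zeros(n + burn)
    e = rng.standard_normal(n + burn)
    for t in range(1, n + burn):
        if y[t - 1] <= 0.0:                    # lower regime
            y[t] = -0.5 + 0.6 * y[t - 1] + e[t]
        else:                                  # upper regime
            y[t] = 0.5 - 0.4 * y[t - 1] + e[t]
    return y[burn:]

y = simulate_setar(400)
train, test = y[:300], y[300:]

# Fit a misspecified linear AR(1) on the training sample by OLS.
X = np.column_stack([np.ones(len(train) - 1), train[:-1]])
beta, *_ = np.linalg.lstsq(X, train[1:], rcond=None)
resid_sd = np.std(train[1:] - X @ beta, ddof=2)

# One-step-ahead 90% forecast intervals over the test sample.
prev = np.concatenate([[train[-1]], test[:-1]])
point = beta[0] + beta[1] * prev
lower, upper = point - 1.645 * resid_sd, point + 1.645 * resid_sd
coverage = np.mean((test >= lower) & (test <= upper))
print(f"empirical coverage of nominal 90% AR(1) intervals: {coverage:.2%}")
```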

65 citations


Journal ArticleDOI
TL;DR: In this article, the authors examined the out-of-sample forecast performance of two parametric and two non-parametric nonlinear models of stock returns, including the standard regime switching and the Markov regime switching.
Abstract: Following recent non-linear extensions of the present-value model, this paper examines the out-of-sample forecast performance of two parametric and two non-parametric non-linear models of stock returns. The parametric models include the standard regime switching and the Markov regime switching, whereas the non-parametric models are the nearest-neighbour and the artificial neural network models. We focused on the US stock market using annual observations spanning the period 1872-1999. Evaluation of forecasts was based on two criteria, namely forecast accuracy and forecast encompassing. In terms of accuracy, the Markov and the artificial neural network models produce at least as accurate forecasts as the other models. In terms of encompassing, the Markov model outperforms all the others. Overall, both criteria suggest that the Markov regime switching model is the most preferable non-linear empirical extension of the present-value model for out-of-sample stock return forecasting. Copyright © 2003 John Wiley & Sons, Ltd.
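A standard way to run the forecast encompassing comparison mentioned above is to regress one model's forecast error on the difference between the rival forecast and its own; whether the paper uses exactly this regression form is not stated here, and the data below are simulated purely for illustration.

```python
import numpy as np

def encompassing_test(actual, f1, f2):
    """Regress (actual - f1) on (f2 - f1); a significant slope means f1 does not encompass f2."""
    x = f2 - f1
    y = actual - f1
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - 2)
    cov = sigma2 * np.linalg.inv(X.T @ X)
    t_slope = beta[1] / np.sqrt(cov[1, 1])
    return beta[1], t_slope

# Illustrative data: two competing return forecasts, the second adding only noise.
rng = np.random.default_rng(4)
actual = rng.normal(0.06, 0.15, size=120)             # "annual returns"
f1 = actual + rng.normal(0, 0.05, size=120)           # informative forecast
f2 = rng.normal(0.06, 0.05, size=120)                 # uninformative rival forecast
slope, t = encompassing_test(actual, f1, f2)
print(f"slope on rival forecast: {slope:.3f}, t-statistic: {t:.2f}")
```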

Journal ArticleDOI
TL;DR: The authors examined the macro-dynamic behavior of 15 OECD countries in terms of a small set of familiar, widely-used core economic variables, omitting country-specific shocks, and found that a simple VAR "common model" strongly supports the hypothesis that many industrialized nations have similar macroeconomic dynamics.
Abstract: Is there a common model inherent in macroeconomic data? Macroeconomic theory suggests that market economies of various nations should share many similar dynamic patterns; as a result, individual-country empirical models, for a wide variety of countries, often include the same variables. Yet, empirical studies often find important roles for idiosyncratic shocks in the differing macroeconomic performance of countries. We use forecasting criteria to examine the macro-dynamic behavior of 15 OECD countries in terms of a small set of familiar, widely-used core economic variables, omitting country-specific shocks. We find that this small set of variables and a simple VAR "common model" strongly support the hypothesis that many industrialized nations have similar macroeconomic dynamics.

Journal ArticleDOI
TL;DR: Hungary's first Technology Foresight Programme (TEP), launched in 1997, was a holistic foresight programme based on panel activities and a large-scale Delphi survey, with a strong emphasis on socio-economic needs; the paper draws lessons from running such an exercise in a transition country.
Abstract: Hungary launched its first Technology Foresight Programme (TEP) in 1997. This was a holistic foresight programme, based on panel activities and a large-scale Delphi survey, with a strong emphasis on socio-economic needs. The paper discusses why a foresight exercise is relevant to a transition country, then describes what was done (organization, methods and results), and how the process evolved in Hungary. Policy conclusions, methodological lessons and questions for further research are also offered. Copyright © 2003 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: In this paper, the authors studied whether the people or companies which make forecasts behave strategically, with the aim of maximizing aspects such as publicity, salary or prestige or, more generally, of minimizing some loss function, or whether, on the contrary, they make forecasts which resemble consensus forecasts (herding behaviour).
Abstract: Professional forecasters can have other objectives as well as minimizing expected squared forecast errors. This paper studies whether the people or companies which make forecasts behave strategically, with the aim of maximizing aspects such as publicity, salary or prestige or, more generally, of minimizing some loss function, or whether, on the contrary, they make forecasts which resemble consensus forecasts (herding behaviour). This study also analyses whether, as forecasters gain more reputation and experience, they make more radical forecasts, that is, they deviate further from the consensus. For this the Livingston Survey is used, a panel of experts who make forecasts on the future evolution of the United States economy. Copyright © 2003 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: In this paper, the authors examined the impact of marketing covariates on a model's forecasting performance and explored whether their presence enables us to reduce the length of the model calibration period (i.e., shorten the duration of the test market).
Abstract: A number of researchers have developed models that use test market data to generate forecasts of a new product’s performance. However, most of these models have ignored the effects of marketing covariates. In this paper we examine what impact these covariates have on a model’s forecasting performance and explore whether their presence enables us to reduce the length of the model calibration period (i.e. shorten the duration of the test market). We develop from first principles a set of models that enable us to systematically explore the impact of various model ‘components’ on forecasting performance. Furthermore, we also explore the impact of the length of the test market on forecasting performance. We find that it is critically important to capture consumer heterogeneity, and that the inclusion of covariate effects can improve forecast accuracy, especially for models calibrated on fewer than 20 weeks of data. Copyright © 2003 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: In this paper, the authors reviewed the 1995-1999 foresight program of the Dutch National Council for Agricultural Research, and evaluated some key dimensions of the foresight process, including the selection and range of participants, the immediate impact of interactive tools such as workshops and the ultimate effect on the strategic thinking in the agricultural sector.
Abstract: Science and Technology Foresight (STF) is an interactive and systematic exploration of future dynamics of science, technology, the economy and society with the aim of identifying and supporting viable strategies and actions for stakeholders. In comparison to futures studies and forecasting, the literature on foresight has paid little attention to its actual strategic value. In this paper we review the 1995-1999 foresight programme of the Dutch National Council for Agricultural Research, and evaluate some key dimensions of the foresight process, including the selection and range of participants, the immediate impact of interactive tools such as workshops and the ultimate effect on the strategic thinking in the agricultural sector. The evaluation indicates that strategic thinking in the Dutch agricultural sector has improved. The paper concludes with suggestions for monitoring and evaluation of foresight that may increase the understanding of foresight's strategic value.

Journal ArticleDOI
TL;DR: In this paper, the authors consider the application of diffusion index forecasting models to the problem of forecasting the growth rates of real output and real investment, and find gains in forecast accuracy at short horizons from the diffusion index models.
Abstract: The growth rates of real output and real investment are two macroeconomic time series which are particularly difficult to forecast. This paper considers the application of diffusion index forecasting models to this problem. We begin by characterizing the performance of standard forecasts, via recently-introduced measures of predictability and the forecast content, noting the maximum horizon at which the forecasts have value. We then compare diffusion index forecasts with a variety of alternatives, including the forecasts made by the OECD. We find gains in forecast accuracy at short horizons from the diffusion index models, but do not find evidence that the maximum horizon for forecasts can be extended in this way. Copyright © 2003 John Wiley & Sons, Ltd.
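A minimal sketch of a diffusion index forecast in the Stock-Watson spirit: extract the first principal component from a standardized panel of predictors and use it in a one-step-ahead forecasting regression. The panel, the factor structure and the target series below are simulated assumptions, not the output or investment data of the paper.

```python
import numpy as np

rng = np.random.default_rng(5)

# Illustrative panel: T periods of N predictors driven by one common factor.
T, N = 160, 40
factor = np.cumsum(rng.normal(0, 0.5, T)) * 0.1 + rng.standard_normal(T)
panel = np.outer(factor, rng.uniform(0.5, 1.5, N)) + rng.standard_normal((T, N))
target = np.empty(T)                       # "growth rate" led by the common factor
target[0] = rng.standard_normal()
target[1:] = 0.8 * factor[:-1] + rng.standard_normal(T - 1)

# Standardize the predictors, then take the first principal component via SVD.
panel = (panel - panel.mean(0)) / panel.std(0)
u, s, vt = np.linalg.svd(panel, full_matrices=False)
diffusion_index = u[:, 0] * s[0]

# h = 1 forecasting regression: target_{t+1} on the index at time t.
X = np.column_stack([np.ones(T - 1), diffusion_index[:-1]])
beta, *_ = np.linalg.lstsq(X, target[1:], rcond=None)
forecast_next = beta[0] + beta[1] * diffusion_index[-1]
print(f"one-step-ahead diffusion index forecast: {forecast_next:.3f}")
```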

Journal ArticleDOI
TL;DR: In this paper, the authors draw lessons from the French technology foresight exercise "Key Technologies 2005" and present a specific tool which was developed to describe each technology (a characterization grid relating functional market needs and technological solutions to fulfil the generic need).
Abstract: The paper draws lessons from the French technology foresight exercise ‘Key Technologies 2005’. It first describes the exercise as it took place: its context and objectives as well as the methodology that was adopted to identify, select and characterize 120 key technologies. Specifically, the paper describes the criteria used to select among the candidate key technologies, and then presents a specific tool which was developed to describe each technology (a characterization grid relating functional market needs and technological solutions to fulfil the generic need). Finally, twelve lessons are discussed. These deal with both the content of the foresight results and the methodology of running a technology foresight at national level. Copyright © 2003 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: In this paper, the authors presented two composite leading indicators of economic activity in Germany estimated using a dynamic factor model with and without regime switching; the obtained optimal inferences of business cycle turning points indicate that the two-state regime-switching procedure leads to a successful representation of the sample data and provides an appropriate tool for forecasting business conditions.
Abstract: In this paper we present two new composite leading indicators of economic activity in Germany estimated using a dynamic factor model with and without regime switching. The obtained optimal inferences of business cycle turning points indicate that the two-state regime switching procedure leads to a successful representation of the sample data and provides an appropriate tool for forecasting business conditions. Copyright © 2003 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: In this article, trends are extracted from the central England temperature (CET) data available from 1723, using both annual and seasonal averages; attention is focused on fitting non-parametric trends, and it is found that, while there is no compelling evidence of a trend increase in the CET, there have been three periods of cooling, stability and warming, roughly associated with the beginning and the end of the Industrial Revolution.
Abstract: Trends are extracted from the central England temperature (CET) data available from 1723, using both annual and seasonal averages. Attention is focused on fitting non-parametric trends and it is found that, while there is no compelling evidence of a trend increase in the CET, there have been three periods of cooling, stability, and warming, roughly associated with the beginning and the end of the Industrial Revolution. There does appear to have been an upward shift in trend spring temperatures, but forecasting of current trends is hazardous because of the statistical uncertainty surrounding them. Copyright © 2003 John Wiley & Sons, Ltd.
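One common non-parametric smoother that could produce the kind of trend described above is a Nadaraya-Watson kernel regression; the paper's exact estimator is not specified here, and the temperature series below is simulated rather than the real CET data.

```python
import numpy as np

def kernel_trend(years, values, bandwidth=15.0):
    """Nadaraya-Watson (Gaussian kernel) trend estimate at each observation year."""
    years = np.asarray(years, dtype=float)
    trend = np.empty_like(values, dtype=float)
    for j, y0 in enumerate(years):
        w = np.exp(-0.5 * ((years - y0) / bandwidth) ** 2)
        trend[j] = np.sum(w * values) / np.sum(w)
    return trend

# Illustrative series standing in for annual CET averages (not the real data).
rng = np.random.default_rng(6)
years = np.arange(1723, 2000)
temps = 9.2 + 0.3 * np.sin((years - 1723) / 60.0) + rng.normal(0, 0.6, len(years))
trend = kernel_trend(years, temps)
print(f"estimated trend in 1750: {trend[years == 1750][0]:.2f} C, "
      f"in 1990: {trend[years == 1990][0]:.2f} C")
```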

Journal ArticleDOI
TL;DR: In this paper, the issues of non-stationarity and long memory of real interest rates are examined; autoregressive models allowing short-term mean reversion are compared with fractional integration models in terms of their ability to explain the behaviour of the data and to forecast out-of-sample.
Abstract: The issues of non-stationarity and long memory of real interest rates are examined here. Autoregressive models allowing short-term mean reversion are compared with fractional integration models in terms of their ability to explain the behaviour of the data and to forecast out-of-sample. The data used are weekly observations of 3-month Eurodeposit rates for 10 countries, adjusted for inflation, for 14 years. Following Brenner, Harjes and Kroner, the volatility of these rates is shown to both exhibit GARCH effects and depend on the level of interest rates. Although relatively little support is found for the hypothesis of mean reversion, evidence of long memory in interest rate changes is found for seven countries. The out-of-sample forecasting performance of the fractionally integrated models a year ahead was significantly better than that of a no-change forecast. Copyright © 2003 John Wiley & Sons, Ltd.
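The long-memory behaviour discussed above comes from the fractional differencing operator (1 - L)^d of ARFIMA models. The sketch below computes its weights, which decay hyperbolically rather than geometrically for 0 < d < 0.5, and applies the filter to a simulated series; the Eurodeposit data are not reproduced.

```python
import numpy as np

def frac_diff_weights(d: float, n_lags: int) -> np.ndarray:
    """Coefficients of (1 - L)^d, the fractional differencing filter used in ARFIMA models."""
    w = np.empty(n_lags + 1)
    w[0] = 1.0
    for k in range(1, n_lags + 1):
        w[k] = w[k - 1] * (k - 1 - d) / k
    return w

# For long memory, 0 < d < 0.5: the weights decay hyperbolically.
w = frac_diff_weights(d=0.3, n_lags=10)
print(np.round(w, 4))

# Applying the (truncated) filter to a series y gives the fractionally differenced series
# x_t = sum_k w_k * y_{t-k}.
rng = np.random.default_rng(7)
y = rng.standard_normal(200)
x = np.convolve(y, w, mode="valid")
print(f"filtered series length: {len(x)}")
```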

Journal ArticleDOI
TL;DR: In this paper, the authors adopt the backtesting criteria of the Basle Committee to compare the performance of a number of simple value-at-risk (VaR) models, and find that the use of ARCH and GARCH-based models to forecast their VaRs is not a reliable way to manage a bank's market risk.
Abstract: This paper adopts the backtesting criteria of the Basle Committee to compare the performance of a number of simple Value-at-Risk (VaR) models. These criteria provide a new standard on forecasting accuracy. Currently central banks in major money centres, under the auspices of the Basle Committee at the Bank for International Settlements, adopt the VaR system to evaluate the market risk of their supervised banks. Banks are required to report VaRs to bank regulators with their internal models. These models must comply with Basle's backtesting criteria. If a bank fails the VaR backtesting, higher capital requirements will be imposed. VaR is a function of volatility forecasts. Past studies mostly conclude that ARCH and GARCH models provide better volatility forecasts. However, this paper finds that ARCH- and GARCH-based VaR models consistently fail to meet Basle's backtesting criteria. These findings suggest that the use of ARCH- and GARCH-based models to forecast their VaRs is not a reliable way to manage a bank's market risk. Copyright © 2002 John Wiley & Sons, Ltd.
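The Basle backtesting criteria referred to above compare the number of VaR exceptions over roughly 250 trading days against traffic-light zones. The sketch below counts exceptions for a simulated P&L series against an assumed reported 99% VaR, using the commonly cited green/yellow/red thresholds of 4 and 9 exceptions; the data and the reported VaR level are illustrative.

```python
import numpy as np

def basle_zone(n_exceptions: int) -> str:
    """Traffic-light zone for a 99% VaR backtest over 250 trading days."""
    if n_exceptions <= 4:
        return "green"
    if n_exceptions <= 9:
        return "yellow (capital multiplier increased)"
    return "red (model deemed inaccurate)"

# Illustrative backtest: count days on which the loss exceeded the reported VaR.
rng = np.random.default_rng(8)
returns = rng.standard_t(df=4, size=250) * 0.01        # fat-tailed daily P&L, illustrative
var99 = np.full(250, 0.021)                            # a bank's reported 99% VaR (assumed)
exceptions = int((returns < -var99).sum())
print(f"{exceptions} exceptions in 250 days -> {basle_zone(exceptions)} zone")
```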

Journal ArticleDOI
TL;DR: A sampling-based Bayesian approach for fractionally integrated autoregressive moving average (ARFIMA) processes is presented, in which a particular type of ARMA process is used as an approximation for the ARFIMA in a Metropolis-Hastings algorithm, and importance sampling is then used to adjust for the approximation error.
Abstract: A new sampling-based Bayesian approach for fractionally integrated autoregressive moving average (ARFIMA) processes is presented. A particular type of ARMA process is used as an approximation for the ARFIMA in a Metropolis–Hastings algorithm, and then importance sampling is used to adjust for the approximation error. This algorithm is relatively time-efficient because of fast convergence in the sampling procedures and fewer computations than competitors. Its frequentist properties are investigated through a simulation study. The performance of the posterior means is quite comparable to that of the maximum likelihood estimators for small samples, but the algorithm can be extended easily to a variety of related processes, including ARFIMA plus short-memory noise. The methodology is illustrated using the Nile River data. Copyright © 2003 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: The authors develop a backcasting approach that reduces the uncertainty surrounding alternative Automated Vehicle Guidance (AVG) implementations by eliminating non-plausible, non-promising and non-accepted AVG concepts, narrowing the scope of policy making to the most viable ones.
Abstract: In various countries, transport policy makers are increasingly interested in the automation of vehicle driving tasks. Current policy developments regarding Automated Vehicle Guidance (AVG) are complicated by several uncertainties about the development of AVG technologies, whether their implementation will contribute to or conflict with transport policy goals, and the basic societal conditions that are required for AVG implementation. In this article, we present an approach to reduce the uncertainty regarding alternative AVG implementations. In particular, we develop a backcasting approach to limit the scope of policy development and research by eliminating parts of the large variety in possible AVG developments. This approach consists of the following steps: (1) the specification of plausible AVG concepts; (2) the analysis of the conditions for the implementation of these concepts, resulting in a set of promising AVG concepts; and (3) the analysis of whether stakeholders' decisions and actions related to the implementation of plausible concepts will be fulfilled in time. This approach helps to eliminate non-plausible, non-promising and non-accepted AVG concepts and reduce the scope of policy making to the most viable ones. Copyright © 2003 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: An efficient way to select the best subset threshold autoregressive model from a very large number of possible models, and at the same time estimate the unknown parameters, is developed.
Abstract: We develop in this paper an efficient way to select the best subset threshold autoregressive model. The proposed method uses a stochastic search idea. Differing from most conventional approaches, our method does not require us to fix the delay or the threshold parameters in advance. By adopting the Markov chain Monte Carlo techniques, we can identify the best subset model from a very large number of possible models, and at the same time estimate the unknown parameters. A simulation experiment shows that the method is very effective. In its application to the US unemployment rate, the stochastic search method successfully selects lag one as the time delay and the five best models from more than 4000 choices. Copyright © 2003 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: The United States government has not sponsored technology foresight as it has been defined and practised by governments in Europe, Japan and elsewhere in the world; instead, between 1989 and 1999 it sponsored a parallel effort called 'critical technologies identification'.
Abstract: The United States government has not sponsored technology foresight as it has been defined and practised by governments in Europe, Japan and elsewhere in the world. [Foresight has been described in many places, but the original concept, as far as the authors are aware, was proposed by Martin and Irvine (1989).] [Different approaches to identifying important technologies are summarized in Wagner (1997).] Instead, the US government sponsored a parallel effort called ‘critical technologies identification’ between 1989 and 1999. This paper describes the critical technologies movement in the United States and explores why critical technologies identification was limited in its ability to capture the attention of US government officials and other decision-makers. The authors suggest possible alternative futures for foresight in the United States. Copyright © 2003 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: A multivariate dynamic linear model is proposed that allows one to carry out a dynamic principal components analysis in a set of multivariate time series and to analyse the similarity in their evolution once the influence of non-stationarity in each of them has been removed.
Abstract: In this paper, we propose a multivariate dynamic linear model (MDLM) that allows us to carry out a dynamic principal components analysis in a set of multivariate time series and to analyse the similarity in their evolution once the influence of non-stationarity in each of them has been removed. In order to illustrate the methodology, we consider the distribution of value added of the firms operating in the Spanish Transport Material Manufacturing sector. Copyright © 2003 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: In this paper, the authors provide an empirical study of some multivariate ARCH and GARCH models that already exist in the literature and have attracted a lot of practical interest and provide implementation details and illustrations using daily exchange rates of the Athens exchange market.
Abstract: Multivariate time-varying volatility models have attracted a lot of attention in modern finance theory. We provide an empirical study of some multivariate ARCH and GARCH models that already exist in the literature and have attracted a lot of practical interest. Bayesian and classical techniques are used for the estimation of the parameters of the models and model comparisons are addressed via predictive distributions. We provide implementation details and illustrations using daily exchange rates of the Athens exchange market. Copyright © 2003 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: In this article, the authors examined the intra-day behavior of asset prices shortly before and after large price changes and found that prices do overreact and a correction takes place after the large price movements, especially those on the downside.
Abstract: This paper examines the intra-day behaviour of asset prices shortly before and after large price changes. Whereas similar studies so far have been based on daily closing prices, I use three years of high frequency data of 120 stocks listed on the French stock exchange. Various systematic patterns, in addition to those often reported in the literature, emerge from this data. Evidence is found that prices do overreact and that a correction takes place after large price movements, especially those on the downside. The correction does not take place immediately after the large price change. Prior to this, some very significant and sometimes economically important patterns can be observed. When the bid‐ask spread is taken into account, I still find some ex post profitable trading strategies that are, however, too small in magnitude to suggest market inefficiency. Copyright © 2003 John Wiley & Sons, Ltd.
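As an illustration of the measurement only (not of the paper's finding), the sketch below locates large negative moves in a simulated intra-day return series and averages the cumulative return over the following window; with independent returns, as assumed here, no reversal should appear.

```python
import numpy as np

rng = np.random.default_rng(9)

# Illustrative intra-day return series (the paper uses high-frequency French stock data).
returns = rng.normal(0, 0.001, size=20000)
threshold = np.quantile(returns, 0.001)          # define "large" downside moves

# Average cumulative return over the window following each large negative move.
window = 60
events = np.where(returns < threshold)[0]
events = events[(events + window) < len(returns)]
post_event = np.array([returns[e + 1 : e + 1 + window].cumsum() for e in events])
avg_path = post_event.mean(axis=0)
print(f"average cumulative return {window} periods after a large drop: {avg_path[-1]:+.4%}")
```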

Journal ArticleDOI
TL;DR: In this article, the authors make a critical analysis of two foresight exercises organized by a research institution and make proposals to facilitate the development of strategic intelligence and improve the linkages between foresight, evaluation and programme formulation.
Abstract: Foresight is a powerful tool for imagining possible futures, for raising public awareness, for helping decision-making and addressing questions related to the relationship between science and society. In this article we make a critical analysis of two foresight exercises organized by a research institution. A precise description of the exercises conducted on the futures of the cocoa commodity chain and the hevea commodity chain helps to understand the possible processes of a foresight exercise, what foresight can achieve and what kinds of difficulties can occur. From these experiences, proposals are made to facilitate the development of strategic intelligence in a research institution and improve the linkages between foresight, evaluation and programme formulation. The role of foresight in transforming knowledge and in pushing a closed system into a political arena is also shown.