
Showing papers on "Sharpe ratio published in 2001"


Journal ArticleDOI
TL;DR: In this paper, two modifications are introduced into the standard real-business-cycle model: habit preferences and a two-sector technology with limited inter-sectoral factor mobility, which is consistent with the observed mean risk-free rate, equity premium, and Sharpe ratio on equity.
Abstract: Two modifications are introduced into the standard real-business-cycle model: habit preferences and a two-sector technology with limited intersectoral factor mobility. The model is consistent with the observed mean risk-free rate, equity premium, and Sharpe ratio on equity. In addition, its business-cycle implications represent a substantial improvement over the standard model. It accounts for persistence in output, comovement of employment across different sectors over the business cycle, the evidence of "excess sensitivity" of consumption growth to output growth, and the "inverted leading-indicator property of interest rates," that interest rates are negatively correlated with future output.

1,108 citations


Journal ArticleDOI
TL;DR: It is demonstrated how direct reinforcement can be used to optimize risk-adjusted investment returns (including the differential Sharpe ratio), while accounting for the effects of transaction costs.
Abstract: We present methods for optimizing portfolios, asset allocations, and trading systems based on direct reinforcement (DR). In this approach, investment decision-making is viewed as a stochastic control problem, and strategies are discovered directly. We present an adaptive algorithm called recurrent reinforcement learning (RRL) for discovering investment policies. The need to build forecasting models is eliminated, and better trading performance is obtained. The direct reinforcement approach differs from dynamic programming and reinforcement algorithms such as TD-learning and Q-learning, which attempt to estimate a value function for the control problem. We find that the RRL direct reinforcement framework enables a simpler problem representation, avoids Bellman's curse of dimensionality and offers compelling advantages in efficiency. We demonstrate how direct reinforcement can be used to optimize risk-adjusted investment returns (including the differential Sharpe ratio), while accounting for the effects of transaction costs. In extensive simulation work using real financial data, we find that our approach based on RRL produces better trading strategies than systems utilizing Q-learning (a value function method). Real-world applications include an intra-daily currency trader and a monthly asset allocation system for the S&P 500 Stock Index and T-Bills.
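The "differential Sharpe ratio" named above has a standard incremental form in Moody and Saffell's work, built from exponential moving estimates of the first and second moments of returns. A minimal sketch (the decay rate eta and the simulated return stream are illustrative choices, not values from the paper):

```python
import numpy as np

def differential_sharpe(returns, eta=0.01):
    """Differential Sharpe ratio D_t for a stream of returns.

    A and B are exponential moving estimates of the first and second
    moments; D_t measures how the most recent return changes the
    exponentially weighted Sharpe ratio, making it usable as an
    online reward signal.
    """
    A, B = 0.0, 0.0
    D = []
    for R in returns:
        dA = eta * (R - A)
        dB = eta * (R**2 - B)
        var = max(B - A * A, 0.0)       # EMA variance estimate, clipped at 0
        denom = var ** 1.5
        D.append((B * dA - 0.5 * A * dB) / denom if denom > 1e-12 else 0.0)
        A += dA
        B += dB
    return np.array(D)

rng = np.random.default_rng(0)
rets = rng.normal(0.001, 0.01, size=5000)   # simulated per-period returns
D = differential_sharpe(rets)
```

Because D_t depends only on the running moments and the latest return, it can be fed to a reinforcement learner at every step instead of waiting for a full-sample Sharpe ratio.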

396 citations


Journal ArticleDOI
TL;DR: In this paper, a portfolio selection model was developed to allocate financial assets by maximising expected return subject to the constraint that the expected maximum loss should meet the Value-at-Risk limits set by the risk manager.
Abstract: In this paper, we develop a portfolio selection model which allocates financial assets by maximising expected return subject to the constraint that the expected maximum loss should meet the Value-at-Risk limits set by the risk manager. Similar to the mean–variance approach a performance index like the Sharpe index is constructed. Furthermore when expected returns are assumed to be normally distributed we show that the model provides almost identical results to the mean–variance approach. We provide an empirical analysis using two risky assets: US stocks and bonds. The results highlight the influence of both non-normal characteristics of the expected return distribution and the length of investment time horizon on the optimal portfolio selection.
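The selection rule described above (maximize expected return subject to a VaR limit) can be sketched for two assets under the normality assumption the paper also examines. All numbers below (means, volatilities, correlation, the 10% VaR limit) are illustrative, not the paper's data:

```python
import numpy as np

# Two assets as stand-ins for US stocks and bonds (illustrative values)
mu = np.array([0.10, 0.05])            # expected annual returns
sigma = np.array([0.15, 0.06])         # annual volatilities
rho = 0.2
cov = np.array([[sigma[0]**2, rho * sigma[0] * sigma[1]],
                [rho * sigma[0] * sigma[1], sigma[1]**2]])

z = 1.645          # 95% one-sided normal quantile
var_limit = 0.10   # maximum acceptable VaR, as a fraction of portfolio value

# Maximize expected return subject to the normal-VaR constraint
best_w, best_ret = None, -np.inf
for w1 in np.linspace(0, 1, 1001):
    w = np.array([w1, 1 - w1])
    ret = w @ mu
    vol = np.sqrt(w @ cov @ w)
    if z * vol - ret <= var_limit and ret > best_ret:
        best_w, best_ret = w, ret
```

With these inputs the VaR limit binds, so the optimizer stops short of the all-stock portfolio even though stocks have the higher expected return.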

292 citations


Posted Content
TL;DR: In this paper, the authors use a proxy for the log consumption-aggregate wealth ratio as a predictor of both the mean and volatility of stock market returns, and show that variation in the equity risk-premium is strongly negatively linked to variation in market volatility.
Abstract: Are excess stock market returns predictable over time and, if so, at what horizons and with which economic indicators? Can stock return predictability be explained by changes in stock market volatility? How does the mean return per unit risk change over time? This chapter reviews what is known about the time-series evolution of the risk-return tradeoff for stock market investment, and presents some new empirical evidence using a proxy for the log consumption-aggregate wealth ratio as a predictor of both the mean and volatility of excess stock market returns. We characterize the risk-return tradeoff as the conditional expected excess return on a broad stock market index divided by its conditional standard deviation, a quantity commonly known as the Sharpe ratio. Our own investigation suggests that variation in the equity risk-premium is strongly negatively linked to variation in market volatility, at odds with leading asset pricing models. Since the conditional volatility and conditional mean move in opposite directions, the degree of countercyclicality in the Sharpe ratio that we document here is far more dramatic than that produced by existing equilibrium models of financial market behaviour, which completely miss the sheer magnitude of variation in the price of stock market risk.
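The Sharpe ratio as defined here (expected excess return divided by standard deviation) reduces, in its unconditional sample form, to a one-line calculation. A sketch with hypothetical monthly returns:

```python
import numpy as np

def sharpe_ratio(returns, rf=0.0, periods_per_year=12):
    """Annualized Sharpe ratio from periodic returns.

    Sample mean excess return divided by sample standard deviation,
    scaled by sqrt(periods per year) under an i.i.d. assumption.
    """
    excess = np.asarray(returns) - rf
    return np.sqrt(periods_per_year) * excess.mean() / excess.std(ddof=1)

# Hypothetical monthly returns and a 0.3% monthly risk-free rate
monthly = np.array([0.02, -0.01, 0.015, 0.03, -0.005, 0.01,
                    0.00, 0.02, -0.02, 0.025, 0.01, 0.005])
sr = sharpe_ratio(monthly, rf=0.003)
```

The conditional version studied in the chapter replaces the sample mean and standard deviation with forecasts from conditioning variables, but the ratio itself is formed the same way.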

281 citations


Journal ArticleDOI
TL;DR: This article showed that hedge fund indices are highly attractive in mean-variance terms, but this is much less the case when skewness, kurtosis and autocorrelation are taken into account.
Abstract: The monthly return distributions of many hedge fund indices exhibit highly unusual skewness and kurtosis properties as well as first-order serial correlation. This has important consequences for investors. We demonstrate that although hedge fund indices are highly attractive in mean-variance terms, this is much less the case when skewness, kurtosis and autocorrelation are taken into account. Sharpe Ratios will substantially overestimate the true risk-return performance of (portfolios containing) hedge funds. Similarly, mean-variance portfolio analysis will over-allocate to hedge funds and overestimate the attainable benefits from including hedge funds in an investment portfolio. We also find substantial differences between indices that aim to cover the same type of strategy. Investors' perceptions of hedge fund performance and value added will therefore strongly depend on the indices used.

157 citations


Journal ArticleDOI
TL;DR: In this paper, the authors evaluate the performance of 16 hedge funds and commodity funds during rising and falling stock markets between 1990 and 1998, both as stand-alone assets and as portfolio assets, and conclude that commodity funds generally provide more downside protection than hedge funds.
Abstract: A primary motivation for investing in hedge funds and commodity funds is to diversify against falling stock prices. The authors evaluate the performance of 16 different such funds during rising and falling stock markets between 1990 and 1998 both as stand-alone assets and as portfolio assets. They use the Sharpe ratio and alternative safety-first criteria to evaluate performance. The conclusion is that commodity funds generally provide more downside protection than hedge funds. Commodity funds have higher returns in bear markets than hedge funds, and generally have an inverse correlation with stock returns in bear markets. Hedge funds typically exhibit a higher positive correlation with stock returns in bear markets than in bull markets. Three hedge fund styles—market-neutral, event-driven, and global macro—provide fairly good downside protection with more attractive returns over all markets than commodity funds.

108 citations


Journal ArticleDOI
TL;DR: In this article, a derivative structure that can induce an upward bias in the measurement of the Sharpe ratio is described, which smoothes observed returns and lowers observed volatility without significantly altering the annual return.
Abstract: The Sharpe ratio is a commonly used measure of return/risk performance. However, the Sharpe ratio is susceptible to gaming by managers. This article describes a derivative structure that can induce an upward bias in the measurement of the Sharpe ratio. The structure accomplishes this by shifting returns from the highest monthly return each year to the lowest one. This smooths observed returns, and lowers observed volatility, without significantly altering the annual return. The objective of the article is to demonstrate how adding derivatives can appear to improve risk-adjusted return without actually doing so.
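The smoothing mechanism described above is easy to reproduce in a toy simulation (illustrative numbers, not the article's derivative structure): shifting part of each year's best monthly return to its worst month leaves the sum of returns unchanged while lowering volatility, so the measured Sharpe ratio rises.

```python
import numpy as np

def sharpe(r, periods=12):
    return np.sqrt(periods) * r.mean() / r.std(ddof=1)

rng = np.random.default_rng(42)
monthly = rng.normal(0.01, 0.04, size=(10, 12))  # 10 years of monthly returns

smoothed = monthly.copy()
for year in smoothed:
    hi, lo = year.argmax(), year.argmin()
    shift = 0.5 * (year[hi] - year[lo])   # move return from best to worst month
    year[hi] -= shift
    year[lo] += shift

# Arithmetic annual returns are identical; volatility is not.
print(sharpe(monthly.ravel()), sharpe(smoothed.ravel()))
```

Averaging the two extreme months strictly reduces the sum of squared deviations while leaving every yearly total unchanged, which is exactly the bias the article warns about.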

89 citations


Journal ArticleDOI
TL;DR: In this article, the authors examined the investment performance of Singapore real estate and property stocks over the past 25 years and found that real estate outperformed property stocks on a risk-adjusted basis.
Abstract: Examines the investment performance of Singapore real estate and property stocks over the past 25 years. Evaluations using coefficient of variation (CV), Sharpe index (SI) and time‐varying Jensen abnormal return index (JI) suggest that real estate outperformed property stocks on a risk‐adjusted basis. Results also indicate that risk‐adjusted investment performance for residential properties remained superior to performance for other real estate types and property stocks. Further analysis using time‐varying JI reveals that the excess return performance of property stocks could differ significantly from that of direct properties, and performance of property stock led real estate market performance. Finally, the performance implications arising from the study are evaluated.

45 citations




Journal ArticleDOI
TL;DR: In this paper, the authors construct optimal portfolios of equity funds by combining historical returns on funds and passive indexes with prior views about asset pricing and skill, and distinguish pricing-model inaccuracy from managerial skill.
Abstract: We construct optimal portfolios of equity funds by combining historical returns on funds and passive indexes with prior views about asset pricing and skill. By including both benchmark and nonbenchmark indexes, we distinguish pricing-model inaccuracy from managerial skill. Even modest confidence in a pricing model helps construct portfolios with high Sharpe ratios. Investing in active mutual funds can be optimal even for investors who believe active managers cannot outperform passive indexes. Optimal portfolios exclude hot-hand funds even for investors who believe momentum is priced. Our large universe of funds offers no close substitutes for the Fama-French and momentum benchmarks.

42 citations


Journal ArticleDOI
TL;DR: In this paper, the authors derived two risk-adjusted performance measures for investors with risk averse preferences, Xe and Re, for real-time trading models for foreign exchange rates and their properties are compared to those of more traditional measures like the annualized return, the Sharpe Ratio and the maximum drawdown.
Abstract: We derive two risk-adjusted performance measures for investors with risk-averse preferences. Maximizing these measures is equivalent to maximizing the expected utility of an investor. The first measure, Xe, is derived assuming a constant risk aversion, while the second measure, Re, is based on a stronger risk aversion to clustering of losses than of gains. The clustering of returns is captured through a multi-horizon framework. The empirical properties of Xe and Re are studied within the context of real-time trading models for foreign exchange rates, and their properties are compared to those of more traditional measures such as the annualized return, the Sharpe ratio and the maximum drawdown. Our measures are shown to be more robust against clustering of losses and have the ability to fully characterize the dynamic behaviour of investment strategies.

Journal ArticleDOI
TL;DR: In this article, the profitability of non-linear trading rules based on nearest neighbor predictors was investigated for the New York Stock Exchange, and the results suggest that, taking into account transaction costs, the nonlinear trading rule is superior to a risk-adjusted buy-and-hold strategy.
Abstract: In this paper we investigate the profitability of non-linear trading rules based on nearest neighbor predictors. Applying this investment strategy to the New York Stock Exchange, our results suggest that, taking into account transaction costs, the non-linear trading rule is superior to a risk-adjusted buy-and-hold strategy (both in terms of returns and of Sharpe ratios) for the 1998 and 1999 periods of upward trend. In contrast, for the relatively "stable" market period of 2000, we found that both strategies generate equal returns, although the risk-adjusted buy-and-hold strategy yields a higher Sharpe ratio.
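A nearest-neighbor return predictor of the kind the paper describes can be sketched as follows; the embedding length, neighbor count, and simulated series are illustrative assumptions, not the paper's specification:

```python
import numpy as np

def knn_forecast(returns, m=5, k=10):
    """One-step-ahead return forecast from the k nearest delay-embedded
    neighbors: find the k past m-length return histories closest to the
    most recent one, and average the returns that followed them.
    """
    r = np.asarray(returns)
    target = r[-m:]                                  # most recent m-history
    # All earlier m-histories and the return that followed each of them
    hists = np.array([r[i:i + m] for i in range(len(r) - m)])
    nexts = r[m:]
    d = np.linalg.norm(hists - target, axis=1)
    nearest = np.argsort(d)[:k]
    return nexts[nearest].mean()

rng = np.random.default_rng(3)
series = rng.normal(0, 0.01, size=500)   # simulated daily returns
forecast = knn_forecast(series)
```

A trading rule of the paper's type would then go long when the forecast exceeds transaction costs and flat (or short) otherwise.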

Journal ArticleDOI
TL;DR: In this paper, the authors examined the out-of-sample performance of using resampled portfolio efficiency, an approach proposed in 1998, in international asset allocation strategies for the period January 1983 to May 2000.
Abstract: We examined the out-of-sample performance of using resampled portfolio efficiency, an approach proposed in 1998, in international asset allocation strategies for the period January 1983 to May 2000. For most models we used to estimate expected returns, using strategies based on resampled portfolio efficiency provided some benefits, in terms of improved Sharpe ratios and abnormal returns, over using traditional mean–variance strategies. We found little evidence, however, that active mean–variance strategies or resampled efficiency strategies would have generated significantly positive abnormal returns for the time period we considered.

Journal ArticleDOI
TL;DR: In this paper, the authors propose a methodology to estimate portfolio risks in both normal and stressed times using confidence-weighted forecast correlations, arguing that historical risk factor correlations are unstable and tend to break down precisely in periods of market stress.
Abstract: The instability of historical risk factor correlations renders their use in estimating portfolio risk extremely questionable. In periods of market stress, correlations of risk factors have a tendency to quickly go well beyond estimated values. For instance, in times of severe market stress one would expect with near certainty to see the correlation of yield levels and credit spreads go to -1, even though historical estimates will miss this region of correlation. Such an event can lead to a realized portfolio risk profile substantially different from what was initially estimated. The purpose of this paper is to explore the effects of correlations on fixed income portfolio risks. To achieve this, we propose a methodology to estimate portfolio risks in both normal and stressed times using confidence-weighted forecast correlations.

1 Background

Most calculations of portfolio risk require an estimate of the volatilities and correlations of the assets in the portfolio. The traditional methodology assumes that correlations and volatilities obtained from historical data are a fair estimate of future correlations and volatilities. Some sophisticated techniques, as summarized in Litterman and Winkelmann (1998), weight recent data more heavily, but still suffer from the basic problem of depending too heavily on history. Most standard techniques for estimating Value at Risk (VaR; see Jorion (2001), and, for a discussion of sources of bias in VaR estimates, Xiongwei and Pearson (1999)) assume that the distribution of returns of asset prices in the past will hold into the future, hence an estimate of the tail of the return distribution would yield a good estimate of the loss in value of the portfolio in times of stress. Obtaining implied correlations from traded option prices is possible in foreign exchange and interest rates to some degree, but far from satisfactory when multiple asset classes are involved (see, for example, Bhansali (1997)).

At some firms, reliance on historical estimates of correlation and volatility is treated with skepticism, because of the simple fact that these historical estimates fail miserably in times of market stress, and even in normal times are at best quite inaccurate. Correlation and volatility forecasts that are arrived at as part of a qualitative process involving discussions and meetings are then considered more trustworthy. The ratio of expected excess return to expected volatility is the Sharpe ratio of a portfolio (see, for example, Wilmott (2000)). As an active bond manager, one cannot consistently obtain an estimate of the ex-ante Sharpe ratio by forecasting excess returns (the numerator) and using historical data for measuring risk (the denominator). What one would like to do is also forecast the correlations and volatilities expected to be realized in different market scenarios (with levels of certainty for different correlations and volatilities based on our confidence in the forecasts) and then estimate the Sharpe ratio for the future. If there are many different risk factors, making up a correlation matrix is not simple. An example will show the inherent problem. Assume that we have three risk factors: the level of yields, the slope of the yield curve, and the spreads of the non-government sector. Then, as soon as the correlation between level and slope and the correlation between level and spread are specified, the correlation between slope and spread is automatically restricted to a region whose range depends on the specified correlations. This range is determined by the requirement that the correlation matrix be positive semi-definite. For a large set of factors this problem is compounded, i.e. selecting a set of correlations by hand restricts the ranges of all the other correlations in a very complicated way.
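For the three-factor example above, the feasible range of the third correlation follows in closed form from the positive semi-definiteness requirement. A sketch (the two specified correlations are illustrative):

```python
import numpy as np

def third_corr_bounds(r12, r13):
    """Feasible range of rho_23 given rho_12 and rho_13.

    Positive semi-definiteness of the 3x3 correlation matrix requires
    det(C) >= 0, which gives
        r12*r13 - s <= r23 <= r12*r13 + s,
    where s = sqrt((1 - r12**2) * (1 - r13**2)).
    """
    s = np.sqrt((1 - r12**2) * (1 - r13**2))
    return r12 * r13 - s, r12 * r13 + s

# Illustrative level/slope and level/spread correlations
lo, hi = third_corr_bounds(0.8, -0.7)
for r23 in (lo, hi):
    C = np.array([[1.0, 0.8, -0.7],
                  [0.8, 1.0, r23],
                  [-0.7, r23, 1.0]])
    assert np.linalg.eigvalsh(C).min() >= -1e-12   # PSD at the boundary
```

With level/slope correlation 0.8 and level/spread correlation -0.7, the slope/spread correlation is already confined to roughly [-0.99, -0.13]; hand-picking a value outside this band produces a matrix that is not a valid correlation matrix.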
Typically a forecasted correlation matrix obtained by some qualitative process will not be mathematically consistent, and a methodology must be introduced to obtain a consistent correlation matrix from the forecasted one. A group of portfolio managers is likely to have a high degree of confidence in forecasting specific elements of the correlation matrix, e.g. the correlation between level and slope, and a slightly lower degree of confidence in forecasting other correlations, e.g. the correlation between level and spreads. The methodology introduced by Rebonato and Jackel (2000) provides a convenient way to obtain a consistent correlation matrix that is close to the original forecast. Elements that we have a lot of confidence about are forced to be close to the desired values using a systematic weighting procedure, and more freedom is allowed in the selection of the elements about which we are not so certain. The confidence weighting is particularly important for large correlation matrices, where forecasting all of the elements is not practical. Taking most of the elements from historical data and forecasting a few of the key elements, the weighting procedure can be arranged to get from this hybrid historical/forecasted matrix a consistent correlation matrix with elements that are very close to the forecasted ones (however, some of the elements taken from historical data may change significantly). The purpose of this paper is to illustrate with some simple examples the importance (particularly in times of market stress) of using forecast correlation matrices, and also to elaborate on the method of Rebonato and Jackel (2000) for obtaining consistent confidence-weighted correlation matrices that are as close as possible to the forecasted one.

One small technical improvement on the work of Rebonato and Jackel is the use of a parameterization for the most general N × N correlation matrix that involves only N(N − 1)/2 angles. In the next section we illustrate, with two examples, how forecast correlation matrices change between normal and stressed economic environments. The problem of mathematical consistency of forecast correlation matrices is also discussed. Section 3 elaborates on the method of Rebonato and Jackel for obtaining a consistent confidence-weighted correlation matrix from a forecasted matrix. In section 4 we illustrate the importance of forecasting correlation matrices by examining how correlations affect the total level duration of portfolios in normal and stressed economic environments. Finally, some concluding remarks are given in section 5.

2 Factors and Correlations

Let us begin by identifying some of the main sources of risk for typical fixed income portfolios. This list is by no means exhaustive, and is used solely for illustration.

• Duration: Risk due to the change in yield level factor (Level).
• 2-10 Duration: Risk due to the change in slope factor between the 2 and 10 year points (Slope2-10).
• 10-30 Duration: Risk due to the change in slope factor between the 10 and 30 year points (Slope10-30).
• Mortgage Spread Duration: Risk due to the change in the mortgage spread factor (Mortgage), measured against the benchmark treasury curve.
• Corporate Spread Duration: Risk due to the change in the corporate spread factor (Corporate), measured against the benchmark treasury curve.

This list can be expanded to include risk factors such as currency, implied tax rates (for municipals), convertibles, implied inflation rates (for TIPS), EMBI spread (for emerging market bonds), etc. (In many risk-management systems the risk of yield-curve reshapings is measured by "key-rate" durations in place of the 2-10 and 10-30 durations.)
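The N(N − 1)/2-angle parameterization mentioned above can be sketched with the standard trigonometric construction: each row of a lower-triangular matrix B is a unit vector built from the angles, so C = BB' is automatically a valid correlation matrix. The specific angles below are arbitrary:

```python
import numpy as np

def corr_from_angles(theta):
    """Valid N x N correlation matrix from N*(N-1)/2 angles.

    Row i of the lower-triangular matrix B is a point on the unit
    sphere parameterized by angles theta, so C = B @ B.T is positive
    semi-definite with unit diagonal by construction.
    """
    # Recover N from the angle count N*(N-1)/2
    n = int((1 + np.sqrt(1 + 8 * len(theta))) / 2)
    B = np.zeros((n, n))
    B[0, 0] = 1.0
    idx = 0
    for i in range(1, n):
        prod = 1.0
        for j in range(i):
            B[i, j] = np.cos(theta[idx]) * prod
            prod *= np.sin(theta[idx])
            idx += 1
        B[i, i] = prod          # remaining sines make the row a unit vector
    return B @ B.T

theta = np.array([0.3, 1.2, -0.4])   # 3 angles -> a 3x3 correlation matrix
C = corr_from_angles(theta)
```

Optimizing over the unconstrained angles (rather than over the matrix entries) is what lets the confidence-weighted fit stay inside the set of consistent correlation matrices at every step.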
For the purpose of illustrating our approach in complete detail, we will work only with the five factors listed above. One possible expectation for the signs of the elements of the correlation matrix is

Journal ArticleDOI
TL;DR: In this paper, data envelopment analysis is shown to be a powerful tool in the evaluation of investment performance when the investor's expected utility function contains multiple attributes, such as expenses, capital constraints, and even risk and return calculated over different time periods.
Abstract: This work investigates mutual fund performance through data envelopment analysis. Data envelopment analysis has been shown to be a powerful tool in the evaluation of investment performance when the investor's expected utility function contains multiple attributes. Besides the traditional attributes of risk and return calculated over a certain time interval, data envelopment analysis can incorporate other attributes of interest to the investor, such as expenses, capital constraints, and even risk and return calculated over different time periods. We analyzed 106 funds in the period of December 1997 to November 1999. The results identified seven dominant funds, which were confronted with the less efficient ones in terms of attributes and weighted differences. We also made a comparison with the results using the Sharpe ratio. It is worthwhile mentioning that the main objective of this paper is not to prescribe an alternative technique for mean-variance optimization, but to describe a new application of data envelopment analysis.

Posted Content
TL;DR: The authors used a stationary overlapping-generations model to show that life-cycle effects can either mitigate or accentuate the equity premium, the critical ingredient being whether agents accumulate or deccumulate risky assets as they age.
Abstract: Constantinides and Duffie (1996) show that for idiosyncratic risk to matter for asset pricing the shocks must (i) be highly persistent and (ii) become more volatile during economic contractions. We show that data from the Panel Study of Income Dynamics (PSID) are consistent with these requirements. Our results are based on econometric methods that incorporate macroeconomic information going beyond the time horizon of the PSID, dating back to 1910. We go on to argue that life-cycle effects are fundamental for how idiosyncratic risk affects asset pricing. We use a stationary overlapping-generations model to show that life-cycle effects can either mitigate or accentuate the equity premium, the critical ingredient being whether agents accumulate or decumulate risky assets as they age. Our model predicts the latter and is able to account for both the average equity premium and the Sharpe ratio observed on the US stock market.

Posted Content
TL;DR: In this article, the relation between performance measures and preference functions is studied; the first three measures examined correspond to the preferences of investors with a low degree of risk aversion, whereas the latter three correspond to investors with intermediate and high degrees of risk aversion.
Abstract: In this article we study the relation between performance measures and preference functions. In particular, we examine to what extent performance measures can be used as alternatives for preference functions. We study the Sharpe ratio, Sharpe's alpha, the expected return measure, the Sortino ratio, the Fouse index, and the upside potential ratio. We find that the first three measures correspond to the preferences of investors with a low degree of risk aversion, whereas the latter three measures correspond to the preferences of investors with intermediate and high degrees of risk aversion.


Journal ArticleDOI
TL;DR: In this article, several criteria that produce rankings of risk management strategies are evaluated, including expected return, value at risk, the Sharpe ratio, the necessary condition for first-degree stochastic dominance with a risk-free asset, and the necessary condition for second-degree stochastic dominance with a risk-free asset.
Abstract: Several criteria that produce rankings of risk management strategies are evaluated. The criteria considered are expected return, value at risk, the Sharpe ratio, the necessary condition for first‐degree stochastic dominance with a risk‐free asset, and the necessary condition for second‐degree stochastic dominance with a risk‐free asset. The criteria performed relatively well in that the most desirable strategy under each criterion was always at least a member of the second‐degree stochastic dominance efficient set. There was also a relatively high degree of consistency between the highest ranked strategies under the various criteria. The effectiveness of the criteria increases as decision makers are assumed to be more risk averse and have greater access to financial leverage.

Posted Content
TL;DR: In this paper, the authors investigated the properties of mean-variance efficient portfolios when the number of assets is large and showed that the proportion of assets held short converges to 50% as the number grows, and the investment proportions are extreme with several assets held in large positions.
Abstract: We investigate the properties of mean-variance efficient portfolios when the number of assets is large. We show analytically and empirically that the proportion of assets held short converges to 50% as the number of assets grows, and the investment proportions are extreme, with several assets held in large positions. The cost of the no-shortselling constraint increases dramatically with the number of assets. For about 100 assets the Sharpe ratio can be more than doubled with the removal of this constraint. These results have profound implications for the theoretical validity of the CAPM, and for policy regarding short-selling limitations.

Journal ArticleDOI
TL;DR: In this article, the authors examined the relationship between the risk preferences of the investor and the risk-adjusted performance measure and concluded that it is difficult to interpret differences in the outcomes of risk adjusted performance measures exclusively as differences in forecasting skills of portfolio managers.
Abstract: Many performance measures, such as the classic Sharpe ratio, have difficulty in evaluating the performance of mutual funds with skewed return distributions. Common causes for skewness are the use of options in the portfolio or superior market timing skills of the portfolio manager. In this article we examine to what extent downside risk and the upside potential ratio can be used to evaluate skewed return distributions. In order to accomplish this goal, we first show the relation between the risk preferences of the investor and the risk-adjusted performance measure. We conclude that it is difficult to interpret differences in the outcomes of risk-adjusted performance measures exclusively as differences in the forecasting skills of portfolio managers. We illustrate this with a simulation study of a protective put strategy. We show that the Sharpe ratio leads to incorrect conclusions in the case of protective put strategies, whereas the upside potential ratio leads to correct conclusions. Finally, we apply downside risk and the upside potential ratio in the process of selecting a mutual fund from a sample of mutual funds in the Euronext stock markets. The rankings appear similar, which can be attributed to the absence of significant skewness in the sample. However, we find that the remaining differences can be quite significant for individual fund managers, and that these differences can be attributed to skewness. Therefore, we prefer to use the upside potential ratio (UPR) as an alternative to the Sharpe ratio, as it accounts better for the use of options and forecasting skills.
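The downside-risk measures discussed here are straightforward to compute. A sketch of the upside potential ratio, with a crudely floored return series standing in for a protective-put payoff (all values illustrative):

```python
import numpy as np

def upside_potential_ratio(returns, mar=0.0):
    """Upside potential ratio: mean return above the minimal acceptable
    return (MAR), divided by the downside deviation below it.
    """
    r = np.asarray(returns) - mar
    upside = np.maximum(r, 0.0).mean()
    downside_dev = np.sqrt((np.minimum(r, 0.0) ** 2).mean())
    return upside / downside_dev

rng = np.random.default_rng(1)
raw = rng.normal(0.01, 0.05, size=2000)
protected = np.maximum(raw, -0.03)   # crude stand-in for a protective put
upr_raw = upside_potential_ratio(raw)
upr_prot = upside_potential_ratio(protected)
```

Flooring the losses leaves the upside term untouched and shrinks only the downside deviation, so the UPR rewards the put protection; a volatility-based measure like the Sharpe ratio mixes the truncated left tail into a single dispersion number.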

Posted Content
TL;DR: In this paper, Neely et al. used genetic programming and genetic algorithms to find technical trading patterns in the stock market, and found that these trading rules lead to positive excess returns which are statistically and economically significant.
Abstract: Background This paper continues our investigation of the paradox of technical analysis in the stock market (Fyfe, Marney and Tarbert (1999), Marney et al. (2000)). The Efficient Markets Hypothesis (hereafter the EMH) holds that there should be no discernible pattern in share price data or the prices of other frequently traded financial instruments, as financial markets are efficient. Prices therefore should follow an information-free random walk. Nevertheless, technical analysis is a common and presumably profitable practice among investment professionals. Our concern is the application of genetic programming (GP) and genetic algorithms (GAs) to the extraction of technical trading patterns from financial data. The subset of technical trading research concerned with the application of GAs, GPs and neural networks is very new and underdeveloped, and therefore of considerable potential. The most notable empirical work in this area is that of Neely, Dittmar and Weller (1996, 1997), Neely and Weller (2001) and Neely (2001). We have also done some work in this area ourselves (Fyfe et al. 1999, Marney et al. 2000). The theoretical underpinning for this kind of approach to finding technical trading patterns is provided by the work of Arthur et al. (1997). Using the six main trading currencies, Neely et al. (1996, 1997) find strong evidence of economically significant out-of-sample excess returns to technical trading rules identified by their genetic program. In Allen and Karjalainen (1999) a genetic algorithm is used to find technical trading rules for the S&P index. Compared to a simple buy-and-hold strategy, these trading rules lead to positive excess returns which are statistically and economically significant. In Fyfe et al. (1999), a GP is used to discover a successful 'buy' rule. This discovery, as such, was not really a refutation of the EMH, as it was really a form of timing-specific buy and hold, triggered only once. 
Nevertheless, the return is superior to buy and hold. Using the S&P 500 index, Neely (2001) finds no evidence that technical trading rules identified by a GP significantly outperform buy-and-hold on a risk-adjusted basis. For the case of intraday trading on the forex market, Neely and Weller (2001) find no evidence of excess returns to trading rules derived from a GP and an optimised linear forecasting model. Indeed, Neely (2001) observes that a number of studies 'have generally evaluated raw excess returns rather than explicitly risk-adjusted returns, leaving unclear the implications of their work for the efficient markets hypothesis' (2001, p. 1). On the other hand, Neely et al. (1996, 1997) did calculate betas associated with foreign currency portfolio holdings, and did not find evidence of excessive risk bearing. Brown, Goetzmann and Kumar (1998) and Bessembinder and Chan (1998) can also be cited in favour of the hypothesis of superior risk-adjusted returns from technical trading signals. Marney et al. (2000) revisited their 1999 findings by, amongst other things, adjusting for risk. Although some rules apparently performed well by being very active in the market, their impressive returns turned out on closer inspection to be illusory, as risk-adjusted returns did not compare well with simple buy and hold. Nevertheless, paradoxically, we did find a useful role for technical trading: it is possible to improve substantially on buy and hold by timing it right. Hence our argument is that it is worth analysing the market to find a good intervention point. Purpose and method of the investigation Given that very little work has been done on generating technical trading rules which produce excess risk-adjusted profits, and given that the empirical evidence is somewhat ambiguous, there is clearly considerable scope for additional work in this area. 
What we propose, then, is to re-examine our previous findings within a more rigorous framework that uses a wider data set, more extensive techniques of risk adjustment, and a more demanding assessment of the robustness of the result with respect to GP representation. 1. Hypotheses - Can the GP generate technical trading rules which produce risk-adjusted excess returns out of sample? Secondly, is there any further evidence for 'timing-specific' buy and hold? Thirdly, are there any technical trading rules which generalise across data sets or time periods? 2. Data set - Our data set is drawn from long time series for 5 US shares from a disparate set of industrial sectors, and also the S&P 500. 3. Risk adjustment - In this study we look at a variety of risk measures, including betas, Sharpe ratios and the X* statistic. 4. The GP - As in Marney et al. (2000), we consider how robust our conclusions are with respect to the GP method used.
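The risk-adjusted comparison of a trading rule against buy and hold that this abstract describes can be sketched as follows. This is a hypothetical illustration: a simple moving-average rule stands in for a GP-evolved rule, and the Sharpe ratio is annualized assuming 252 trading days:

```python
import numpy as np

def strategy_returns(prices, window=20):
    """Long when the price is above its moving average, flat otherwise.
    A stand-in for an evolved technical rule (hypothetical, for illustration)."""
    prices = np.asarray(prices, dtype=float)
    rets = np.diff(prices) / prices[:-1]              # simple daily returns
    ma = np.convolve(prices, np.ones(window) / window, mode="valid")
    # signal at day t uses the MA through day t; position is held over day t+1
    signal = prices[window - 1:-1] > ma[:-1]
    return np.where(signal, rets[window - 1:], 0.0)

def sharpe(returns, periods_per_year=252):
    """Annualized Sharpe ratio of a per-period return series (rf assumed 0)."""
    r = np.asarray(returns)
    return np.sqrt(periods_per_year) * r.mean() / r.std(ddof=1)

# Compare rule vs. buy-and-hold over the same span on a simulated price path
prices = np.cumprod(1 + np.random.default_rng(3).normal(0.0004, 0.01, 1000))
rule = strategy_returns(prices)
buy_hold = np.diff(prices) / prices[:-1]
print(sharpe(rule), sharpe(buy_hold[19:]))
```

The point of the comparison is exactly the one the abstract makes: a rule can show impressive raw returns yet fail to beat buy and hold once returns are put on a common risk-adjusted footing.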

Journal ArticleDOI
TL;DR: In this article, the authors used minimum-variance kernels to estimate risk premia associated with economic risk variables and to test multi-beta models in terms of restricted MV kernels, and found that the MV kernel implied by the intertemporal capital asset pricing model consistently outperforms a pricing kernel based on the size and book-to-market factors of Fama and French.
Abstract: This paper uses minimum-variance (MV) admissible kernels to estimate risk premia associated with economic risk variables and to test multi-beta models. Estimating risk premia using MV kernels is appealing because it avoids the need to 1) identify all relevant sources of risk and 2) assume a linear factor model for asset returns. Testing multi-beta models in terms of restricted MV kernels has the advantage that 1) the candidate kernel has the smallest volatility and 2) test statistics are easy to interpret in terms of Sharpe ratios. The authors find that several economic variables command significant risk premia and that the signs of the premia mostly correspond to the effect that these variables have on the risk-return trade-off, consistent with the implications of the intertemporal capital asset pricing model (I-CAPM). They also find that the MV kernel implied by the I-CAPM, while formally rejected by the data, consistently outperforms a pricing kernel based on the size and book-to-market factors of Fama and French (1993).
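The minimum-variance admissible kernel idea can be illustrated numerically. The sketch below follows the generic Hansen-Jagannathan construction for a chosen kernel mean, not the authors' exact estimator, and the return moments are made up: for kernel mean v, the MV kernel pricing gross returns R (each with price 1) is m = v + (1 - v*mu)' Sigma^{-1} (R - mu).

```python
import numpy as np

# Simulate gross returns on three assets (assumed moments, for illustration)
rng = np.random.default_rng(42)
T = 5000
R = 1.0 + rng.multivariate_normal(
    mean=[0.05, 0.08, 0.03],
    cov=[[0.04, 0.01, 0.0], [0.01, 0.09, 0.0], [0.0, 0.0, 0.01]],
    size=T,
)

mu = R.mean(axis=0)
Sigma = np.cov(R, rowvar=False)
v = 0.97  # candidate kernel mean (roughly 1 / gross risk-free rate)

# Minimum-variance admissible kernel with mean v
beta = np.linalg.solve(Sigma, 1.0 - v * mu)
m = v + (R - mu) @ beta

# The kernel prices every asset: sample E[m * R_i] is ~1 for each column
print((m[:, None] * R).mean(axis=0))
```

This shows why the approach is appealing: no factor structure is assumed, and the kernel's volatility (hence any implied Sharpe-ratio bound) comes entirely from the asset moments.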

Journal ArticleDOI
TL;DR: In this article, the authors collected quarterly track records, covering 495 mutual funds, institutional commingled funds and separate accounts, endowments, and other asset pools, and ranked them by their Sharpe ratios over the long period from January 1980 to March 2000.
Abstract: If one wants to compare track records of managers across widely different asset classes and investment styles, the Sharpe ratio (which is both benchmark-independent and scalable to different levels of risk) is perhaps the best measure of risk-adjusted return. The authors collected quarterly track records, covering 495 mutual funds, institutional commingled funds and separate accounts, endowments, and other asset pools, and ranked them by their Sharpe ratios over the long period from January 1980 to March 2000. Some of the high-ranking funds are well known, such as Berkshire Hathaway and Fidelity's Magellan Fund; but a number of surprises emerge. The number-one Sharpe ratio fund over that period was a tactical asset allocation product managed by Barclays Global Investors. Most managers underperformed their benchmarks, but the number of truly exceptional track records should give pause to those who assume that active management is fruitless because of the efficiency of markets.
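The cross-style ranking exercise described here reduces to a small computation. A sketch with simulated quarterly returns; the fund names, moments, and risk-free rate are all made up for illustration:

```python
import numpy as np

def sharpe(returns, rf_per_period=0.0):
    """Per-period Sharpe ratio: mean excess return over its std deviation.
    Benchmark-independent, so it compares funds across asset classes."""
    excess = np.asarray(returns) - rf_per_period
    return excess.mean() / excess.std(ddof=1)

# Hypothetical quarterly return series for three funds (illustrative only)
rng = np.random.default_rng(1)
funds = {
    "Fund A": rng.normal(0.030, 0.08, 81),  # 81 quarters ~ 1980Q1-2000Q1
    "Fund B": rng.normal(0.020, 0.04, 81),
    "Fund C": rng.normal(0.025, 0.10, 81),
}
rf = 0.015  # assumed constant quarterly T-bill rate

ranking = sorted(funds, key=lambda f: sharpe(funds[f], rf), reverse=True)
for name in ranking:
    print(f"{name}: {sharpe(funds[name], rf):.3f}")
```

Note that a lower-return fund can outrank a higher-return one, which is exactly what makes the measure suitable for comparing, say, a tactical asset allocation product against an equity fund.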

Book ChapterDOI
17 Dec 2001
TL;DR: In this article, a rank measure that takes into account a large number of securities and grades them according to the relative returns is introduced, and a learning decision support system for stock picking based on the rank predictions is constructed.
Abstract: Most models for prediction of the stock market focus on individual securities. In this paper we introduce a rank measure that takes into account a large number of securities and grades them according to their relative returns. This rank measure, besides being closer to a real trading situation, turns out to be more predictable than the individual returns. The ranks are predicted with perceptrons, using a step function to generate trading signals, and a learning decision support system for stock picking is built on the rank predictions. An algorithm that maximizes the Sharpe ratio of a simulated trader computes the optimal decision parameters. The trading simulation is executed in ASTA, a general-purpose trading simulator. The trading results on the Swedish stock market show significantly higher returns and Sharpe ratios relative to the benchmark.
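A cross-sectional rank measure of this kind can be sketched as follows. The exact normalization used in the paper is not specified here, so the mapping of each period's relative returns onto [0, 1] is an assumption:

```python
import numpy as np

def cross_sectional_ranks(returns):
    """Grade securities each period by relative return: rank in [0, 1],
    0 = worst performer of the period, 1 = best. Double argsort turns
    raw returns into within-period ordinal ranks."""
    returns = np.asarray(returns)           # shape: (periods, securities)
    order = returns.argsort(axis=1).argsort(axis=1)
    return order / (returns.shape[1] - 1)

# One period, five securities:
r = np.array([[0.02, -0.01, 0.05, 0.00, -0.03]])
print(cross_sectional_ranks(r))  # [[0.75 0.25 1.   0.5  0.  ]]
```

The appeal of the transformation is visible even in this tiny example: the ranks discard the (noisy) magnitude of returns and keep only the cross-sectional ordering, which is the quantity a stock-picking system actually acts on.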

Journal ArticleDOI
TL;DR: In this paper, the authors evaluated the 1995-1998 performance of ten domestic balanced mutual funds in the Greek financial market using daily net asset value per unit using daily average return, total risk, coefficient of variation, systematic risk, Treynor's index, Sharpe's index and Jensen's alpha.
Abstract: Assesses the 1995‐1998 performance of ten domestic balanced mutual funds in the Greek financial market using daily net asset value per unit. Ranks them on the basis of daily average return, total risk, coefficient of variation, systematic risk, Treynor’s index, Sharpe’s index and Jensen’s alpha. Shows that their risks and returns were lower than the Athens Stock Exchange index, that they followed defensive investment policies, that some achieved high returns with low risk and that there was some variation of ranking according to the techniques used.
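The ranking statistics named in this abstract can be computed directly from return series. A minimal sketch; the per-period risk-free rate `rf` is assumed constant, and the inputs are hypothetical:

```python
import numpy as np

def performance_measures(fund, market, rf):
    """Treynor's index, Sharpe's index, and Jensen's alpha for one fund,
    from per-period fund returns, market returns, and a risk-free rate."""
    fund, market = np.asarray(fund), np.asarray(market)
    ex_f, ex_m = fund - rf, market - rf
    beta = np.cov(ex_f, ex_m, ddof=1)[0, 1] / ex_m.var(ddof=1)  # systematic risk
    return {
        "treynor": ex_f.mean() / beta,             # excess return per unit of beta
        "sharpe": ex_f.mean() / ex_f.std(ddof=1),  # excess return per unit of total risk
        "jensen_alpha": ex_f.mean() - beta * ex_m.mean(),
    }
```

The variation in rankings the authors report follows naturally: Treynor's index scales by systematic risk only, Sharpe's by total risk, so funds with large idiosyncratic risk can rank well on one measure and poorly on the other.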

Journal ArticleDOI
TL;DR: In this paper, the authors show that when the price impact of trades is time stationary, only linear price-impact functions rule out quasi-arbitrage and thus support viable market prices.
Abstract: In an environment where trading volume affects security prices and where prices are uncertain when trades are submitted, quasi-arbitrage is the availability of a series of trades which generate infinite expected profits with an infinite Sharpe ratio. We show that when the price impact of trades is time stationary, only linear price-impact functions rule out quasi-arbitrage and thus support viable market prices. This holds whether a single asset or a portfolio of assets is traded. When the temporary and permanent effects of trades on prices are independent, only the permanent price impact must be linear while the temporary one can be of a more general form. We also extend the analysis to a nonstationary framework.

Posted Content
TL;DR: The authors analyzed a general equilibrium exchange economy with a continuum of agents who have 'catching up with the Joneses' preferences and differ only with respect to the curvature of their utility functions.
Abstract: We analyze a general equilibrium exchange economy with a continuum of agents who have 'catching up with the Joneses' preferences and differ only with respect to the curvature of their utility functions. While individual risk aversion does not change over time, dynamic redistribution of wealth among the agents leads to countercyclical time variation in the Sharpe ratio of stock returns. We show that both the conditional risk premium and the return volatility are negatively related to the level of stock prices, as observed empirically. Therefore, our model exhibits many of the empirically observed properties of aggregate stock returns, e.g., patterns of autocorrelation in returns, the 'leverage effect' in return volatility and long-horizon return predictability. For comparison, otherwise similar representative agent economies with the same type of preferences exhibit counter-factual behavior, e.g., a constant Sharpe ratio of returns and procyclical risk premium and return volatility.

Journal Article
TL;DR: In this article, the authors propose approximating the conditional expectation that defines the VaR gradient by polynomial or, more generally, finite-dimensional regression of Y versus X; in the case of variables obeying an elliptical joint distribution, the resulting formulae coincide with the exact standard-deviation-based ones.
Abstract: An evergreen debate in finance concerns the rules for making portfolio hedge decisions. A traditional tool proposed in the literature is the well-known standard-deviation-based Sharpe ratio, which has recently been generalized to involve other popular risk measures ρ, such as VaR (Value-at-Risk) or CVaR (Conditional Value-at-Risk). This approach gives the correct choice of portfolio selection in a mean-ρ world as long as ρ is homogeneous of order 1. But, unfortunately, in important cases calculating the exact incremental Sharpe ratio for ranking profitable portfolios turns out to be computationally too costly. Therefore, easier-to-use rules for rapid portfolio selection are needed. Research in this direction for VaR is the aim of this paper. Approximation formulae are derived which are based on certain derivatives of VaR and involve quantities similar to the skewness and kurtosis of the random variables under consideration. The starting point for the approximations is the observation that the partial derivatives of portfolio VaR with respect to the portfolio weights are just the conditional expectations of the asset returns given that the portfolio return equals VaR. Since the conditional expectation of a random variable Y given another random variable X can be considered the best possible regression of Y versus X in the least-squares sense, the idea is to replace the conditional expectation by polynomial regression or, more generally, by finite-dimensional regression of Y versus X. In the case of variables obeying an elliptical joint distribution, the resulting approximation formulae coincide with the exact formula for the standard deviation taken as risk measure. By means of a number of numerical examples and counter-examples the properties of the formulae are discussed.
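The paper's starting observation, that the partial derivative of portfolio VaR with respect to a weight is the conditional expectation of the asset's loss given that the portfolio loss equals VaR, can be sketched numerically. This is a hypothetical two-asset normal (hence elliptical) example, in which linear regression of asset losses on the portfolio loss recovers the conditional expectation exactly:

```python
import numpy as np

rng = np.random.default_rng(7)
T = 100_000
# Two correlated asset losses (elliptical case: multivariate normal)
L = rng.multivariate_normal([0.0, 0.0], [[0.04, 0.015], [0.015, 0.09]], size=T)
w = np.array([0.6, 0.4])
port = L @ w                                    # portfolio loss

alpha = 0.99
var = np.quantile(port, alpha)                  # portfolio VaR
cvar = port[port >= var].mean()                 # CVaR / expected shortfall

# Marginal VaR: dVaR/dw_i = E[L_i | L_p = VaR].  Approximate the conditional
# expectation by least-squares regression of each L_i on L_p; a linear
# regression suffices here because the joint distribution is elliptical.
X = np.column_stack([np.ones(T), port])
coef, *_ = np.linalg.lstsq(X, L, rcond=None)    # one regression per asset
marginal_var = coef[0] + coef[1] * var

# Euler decomposition check: weighted marginals sum to total VaR
print(w @ marginal_var, var)
```

For skewed distributions the same recipe applies with polynomial regressors appended to `X`, which is where the skewness- and kurtosis-like quantities in the paper's formulae enter.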