
The Fallacy of 1/N and Static Weight Allocation

In the last few years there has been an increasing tendency to ignore the value of a disciplined quantitative approach to the portfolio allocation process in favor of simple and static weighting schemes such as equal weighting or some type of adjusted volatility weighting. The former simply ignores the underlying security dynamics, assuming equal risk-return across securities, whilst the latter equates risk with variance, thereby ignoring the covariance between securities and other pertinent features of observed security dynamics. It is true that a simplified approach is easier to explain to clients (and easier for most practitioners to understand), and may be justified in cases where there is little data available for modelling the underlying dynamics.

In a much cited paper, DeMiguel, Garlappi, and Uppal (2009) argued that it is difficult to beat the equal weighting strategy, considering a number of mostly mean-variance related models on a limited history of monthly data. The premise of their argument was the uncertainty in the estimation of quantities such as the covariance. While it is certainly true that uncertainty about such quantities is related to the size of the dataset, using a very short history of monthly data to make that point does not appear to me to provide a scholarly answer to the paper’s title (“Optimal Versus Naive Diversification: How Inefficient is the 1/N Portfolio Strategy?”). Additionally, they do not even test the significance of the difference in Sharpe ratios, which makes it rather difficult to draw any conclusions.

In defense of optimization, and as a direct reply to that study, which was criticized for using very short histories for the estimation of the covariance, Kritzman, Page, and Turkington (2010) provide a very thorough empirical application of the mean-variance criterion across a number of portfolios, including one comprising daily returns of the point-in-time constituents of the S&P 500 from 1998 to 2008, and show substantially large differences (though again not statistically tested) in Sharpe ratios against the equal weighting strategy. The arguments against 1/N are readily generalized to any static weight allocation scheme, such as that employed by most benchmark indices, with clear implications for benchmark policy choice and investment manager reward schemes. The question of the efficiency of benchmark indices was addressed some time ago by Grinold (1992), who asked whether index benchmarks are efficient, using the test of Gibbons, Ross, and Shanken (1989) on the country indices of Australia, Germany, Japan, the U.K. and the U.S. for the period ending 1991 (with start dates as early as 1973 for the U.S.). He found that 4 out of the 5 indices were not efficient ex-post. More recently, Demey, Maillard, and Roncalli (2010) also argued that neither capitalization nor price weighted indices are efficient, exposing investors to what should be diversifiable risk.

In this blog article, I will try to provide one empirical application, using a popular Global Tactical Asset Allocation model based on an approach proposed by Faber (2007), to show the value of a quantitative approach to asset allocation over the naive 1/N strategy. I make use of the rmgarch and parma packages, together with the blotter package for realistic portfolio accounting. Because of the complexity of the setup, the code to generate the tables and figures will be provided separately as a link (see bottom of page) rather than displayed inline, and there are a number of custom functions written on top of blotter. While the quantstrat package provides one excellent avenue for implementing such strategies, I have not used it in this case, opting for a more custom solution which provides more transparency into what is actually happening under the hood. Feel free to drop me a note if you find any errors or have any suggestions.

Data Universe Setup

One of the issues in many of the backtests one reads about is a failure to take into account companies which have delisted. This leads to the so-called survivorship bias problem when backtesting equities, and is even more pronounced in hedge fund databases where reporting is voluntary (e.g. funds which do badly in one year may decide not to report). A realistic test of the efficiency of a benchmark index would require access to the point-in-time constituents of that index, which would probably mean access to a proprietary database. In this application, I will instead consider a set of market aggregates, which alleviates this problem and also provides two additional benefits. First, the implementation of the strategy becomes scalable and there are no issues of liquidity (for most of the indices), and second, the use of aggregates eliminates some of the idiosyncratic noise found in individual securities, which makes the identification of trends using a moving average filter much easier. Because ETFs usually have a limited history, going back only to the early 2000s, I have opted to use, where appropriate, a Vanguard fund as the instrument of choice for testing. Nevertheless, almost half of the securities do not trade for most of the 1990s and early 2000s, making it a little difficult to draw firm conclusions for the whole period, since portfolio optimization benefits from a large and diversified group of securities.

The universe is comprised of the following assets:

  • VFINX (US Equity s:1987), VEURX (EU Equity s:1990), VPACX (Asia Pacific Equity s:1990), VEIEX (Emerging Market Equity s:1995)
  • SH (ProShares Short S&P 500 s:2006)
  • VGSIX (REIT s:1996)
  • IXC (Global Energy s:2001), GLD (Gold s:2004)
  • VWEHX (High Yield Corporate s:1989), VCLT (Long Term Corporate s:2009)
  • DBV (Carry Trade G10 s:2006)

This represents a select exposure to global Equities, US Corporates, Commodities, US Real Estate and Rates/Currencies. The use of the Short S&P 500 index is to obtain some additional diversification in times of global crises, since the strategy is long only.
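For reference, a universe of adjusted daily closes along these lines can be assembled with the quantmod package. This is a minimal sketch only: the tickers follow the list above, but the data source and any cleaning steps in the linked code may differ.

```r
# Sketch: download adjusted closes for the universe and compute daily log returns.
# Assumes Yahoo Finance still serves these tickers from their respective start dates.
library(quantmod)

universe <- c("VFINX", "VEURX", "VPACX", "VEIEX", "SH", "VGSIX",
              "IXC", "GLD", "VWEHX", "VCLT", "DBV")

prices <- lapply(universe, function(tk)
  Ad(getSymbols(tk, src = "yahoo", from = "1987-01-01", auto.assign = FALSE)))
prices <- do.call(merge, prices)
colnames(prices) <- universe

# Daily log returns; securities with later start dates simply carry leading NAs
R <- diff(log(prices))[-1, ]
```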

Data Universe Characteristics

A first look at some of the statistics of this dataset reveals that it exhibits the typical set of stylized facts.

| Ticker | Mean | Median | Min | Max | Stdev | Skewness | Kurtosis | LB(1) | LB(2) | ARCH-LM(1) | JB |
|--------|--------|--------|---------|--------|--------|----------|----------|--------|--------|------------|----|
| VFINX  | 0.0003 | 0.0007 | -0.2286 | 0.1096 | 0.012  | -1.2924  | 30.34    | 0.0001 | 0      | 0          | 0  |
| VEURX  | 0.0003 | 0.0008 | -0.1091 | 0.1174 | 0.0132 | -0.3212  | 11.367   | 0.0137 | 0.0011 | 0          | 0  |
| VPACX  | 0.0001 | 0      | -0.0856 | 0.1277 | 0.0134 | 0.2265   | 9.119    | 0.3344 | 0.0758 | 0          | 0  |
| VEIEX  | 0.0003 | 0.0011 | -0.126  | 0.1367 | 0.0146 | -0.3943  | 13.32    | 0      | 0      | 0          | 0  |
| SH     | -0.0004 | -0.0008 | -0.1168 | 0.09  | 0.0146 | -0.1571  | 12.154   | 0      | 0      | 0          | 0  |
| VGSIX  | 0.0004 | 0      | -0.2149 | 0.169  | 0.0184 | -0.2011  | 23.696   | 0      | 0      | 0          | 0  |
| IXC    | 0.0004 | 0.001  | -0.1412 | 0.1434 | 0.0173 | -0.3857  | 12.174   | 0.0021 | 0      | 0          | 0  |
| GLD    | 0.0005 | 0.0008 | -0.0919 | 0.107  | 0.0131 | -0.3751  | 8.728    | 0.4471 | 0.7443 | 0          | 0  |
| VWEHX  | 0.0003 | 0      | -0.0521 | 0.0231 | 0.0032 | -0.6722  | 21.772   | 0      | 0      | 0          | 0  |
| VCLT   | 0.0003 | 0.0006 | -0.0352 | 0.0224 | 0.0064 | -0.2832  | 4.768    | 0.0067 | 0.0218 | 0.0003     | 0  |
| DBV    | 0      | 0.0006 | -0.085  | 0.0642 | 0.0097 | -0.7927  | 14.815   | 0.0001 | 0.0003 | 0          | 0  |

There is serial correlation (Ljung-Box test) and there are ARCH effects (ARCH-LM test), in addition to a clear rejection of normality (Jarque-Bera test). This is of course not surprising for daily returns, though the question of such effects at lower frequencies is debatable (there is usually not enough data to form adequate tests, whilst aggregation methods require a host of additional assumptions, e.g. weak GARCH). Given the presence of such dynamics in the dataset, with varying degrees of skewness and kurtosis across the securities, it is certainly not reasonable to opt for a 1/N approach to the allocation of capital.
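As an aside, statistics and test p-values of this kind can be reproduced along the following lines. This is a sketch only: R is assumed to be the daily return matrix from the data setup, and the specific test implementations here (from the moments, tseries and FinTS packages) may not be the ones used to generate the table.

```r
# Sketch: per-security summary statistics and p-values of the Ljung-Box,
# ARCH-LM and Jarque-Bera tests, as reported in the table above.
library(moments)   # skewness, kurtosis
library(tseries)   # jarque.bera.test
library(FinTS)     # ArchTest

stats_fun <- function(x) {
  x <- as.numeric(na.omit(x))
  c(Mean = mean(x), Median = median(x), Min = min(x), Max = max(x),
    Stdev = sd(x),
    Skewness = skewness(x),
    Kurtosis = kurtosis(x),  # raw (non-excess) kurtosis; conventions may differ
    "LB(1)" = Box.test(x, lag = 1, type = "Ljung-Box")$p.value,
    "LB(2)" = Box.test(x, lag = 2, type = "Ljung-Box")$p.value,
    "ARCH-LM(1)" = ArchTest(x, lags = 1)$p.value,
    JB = jarque.bera.test(x)$p.value)
}

round(t(apply(R, 2, stats_fun)), 4)
```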

[Figure: fallacy_1 — pairwise complete correlations of the daily returns]

The figure shows the pairwise complete correlations of the securities used in this application, which range from highly correlated (Equities, Energy, REITs and Carry Trade) to low correlation (Fixed Income and Precious Metals) and negatively correlated (the Equity Short).
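The matrix behind the figure is simply the pairwise-complete correlation of the daily returns, for example as below (the corrplot call is just one way to produce such a heatmap and is not necessarily what was used here):

```r
# Pairwise-complete correlations (NAs from the shorter histories are handled pairwise)
C <- round(cor(R, use = "pairwise.complete.obs"), 2)
# library(corrplot); corrplot(C, method = "color", order = "hclust")
```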

The Modelling and Optimization Process

There are two competing considerations to take into account when modelling security dynamics and generating forecasts. Obviously, more data is preferred to less when using models such as GARCH in order to obtain good estimates of the persistence. On the other hand, the higher the frequency of the data, the more noise enters any signal generating mechanism such as a moving average system. In addition, we want to optimize and rebalance at a lower (monthly) frequency so as to limit transaction costs and turnover.

The signal generating mechanism I apply is based on the crossover of a slow and a fast exponential moving average on monthly prices, the lengths of which are determined dynamically at each point in time. This filter is applied to all securities with a minimum of 600 daily data points, and those with a positive signal are then considered part of the modelling universe. If there are 3 or fewer securities in the universe, the optimization is not undertaken and instead a fixed weight is assigned to each security (which may add up to less than 100% investment).

The modelling of the universe of positive signal prospects is undertaken using an AR(1)-GARCH(1,1)-Student model for the margins, with the joint dynamics captured by a static Student copula (using Kendall’s tau for the correlation) and a parametric transformation (effectively a CCC-Student model). In order to generate the monthly forecast scenario from the higher frequency daily modelling process, 6000 simulated 25-day-ahead paths are generated and collapsed by summation to approximate the monthly forecast return distribution. This scenario matrix of size 6000 x n.securities is then optimized using a fractional mean-LPM(4) model with short-sale constraints and position bounds. The LPM(4) measure was chosen in order to capture extreme losses below the portfolio mean (the LPM threshold is the portfolio mean), and a minimum bound of 2.5% was imposed so that all securities with a positive signal belong to the final trade set. An upper bound of 50% was also imposed on each security as a concentration limit (this is high because of the small universe of securities used and the effective universe once filtered for positive trends). The fractional optimization model and the LPM measure are described in detail in the vignette of the parma package, which also provides some insights into the scaling property of this measure when the threshold is the portfolio mean. The steps are detailed below, with a code sketch of a single pass following the list:

  • Convert prices to monthly.
  • Determine, for each security, and each time point, the optimal slow and fast window lengths of the EMA filter using a Total Return to Volatility metric, and generate the signals at the end of each month.
  • Model those securities with positive signals and generate the month ahead simulated scenario density.
  • Optimize the scenario using a fractional mean-LPM(4) model and obtain the allocation weights.
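To make these steps concrete, the sketch below walks through a single rebalancing pass, using rugarch/rmgarch for the AR(1)-GARCH(1,1) Copula-Student model and parma for the fractional mean-LPM(4) optimization. It is a simplification rather than the linked code: the EMA windows are fixed instead of dynamically selected, the parmaspec arguments reflect my reading of the parma documentation, and the extraction of the simulated paths from the cgarchsim object (here via the msim slot) may differ across rmgarch versions.

```r
# Sketch of one rebalancing step: signal -> scenario -> optimization.
# Assumes 'prices' (daily adjusted closes, xts) and 'R' (daily returns) as above.
library(TTR)      # EMA
library(rugarch)  # univariate GARCH specification
library(rmgarch)  # copula-GARCH
library(parma)    # scenario-based portfolio optimization

# 1. EMA crossover signal on month-end prices. Fixed 3/10 windows for illustration
#    only; the article selects the windows dynamically per security using a total
#    return to volatility metric.
Pm <- prices[endpoints(prices, on = "months"), ]
signal_fun <- function(p, fast = 3, slow = 10) {
  p <- as.numeric(na.omit(p))
  if (length(p) < slow) return(0)
  as.numeric(tail(EMA(p, fast) > EMA(p, slow), 1))
}
sig <- apply(Pm, 2, signal_fun)
active <- names(sig)[which(sig == 1 & colSums(!is.na(R)) >= 600)]

if (length(active) > 3) {
  # 2. AR(1)-GARCH(1,1)-Student margins joined by a static Student copula
  #    (Kendall's tau, parametric transformation), fitted on daily returns.
  uspec <- ugarchspec(mean.model = list(armaOrder = c(1, 0)),
                      variance.model = list(model = "sGARCH", garchOrder = c(1, 1)),
                      distribution.model = "std")
  cspec <- cgarchspec(multispec(replicate(length(active), uspec)),
                      distribution.model = list(copula = "mvt", method = "Kendall",
                                                time.varying = FALSE,
                                                transformation = "parametric"))
  cfit <- cgarchfit(cspec, data = na.omit(R[, active]))

  # 3. 6000 simulated 25-day-ahead paths, collapsed by summation to a
  #    6000 x length(active) monthly scenario matrix.
  csim <- cgarchsim(cfit, n.sim = 25, m.sim = 6000, startMethod = "sample")
  S <- t(sapply(csim@msim$simX, colSums))   # one monthly return per asset, per path

  # 4. Fractional mean-LPM(4) optimization: long only, bounds [2.5%, 50%],
  #    LPM threshold equal to the portfolio mean (flagged by 999 in parma).
  pspec <- parmaspec(scenario = S, forecast = colMeans(S),
                     risk = "LPM", riskType = "optimal",
                     options = list(moment = 4, threshold = 999),
                     LB = rep(0.025, length(active)),
                     UB = rep(0.5, length(active)), budget = 1)
  w <- weights(parmasolve(pspec))
} else {
  # Fewer than 4 positive signals: a fixed weight per active security
  # (illustrative 25% each; the total may then be below full investment)
  w <- rep(0.25, length(active))
}
```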

Backtesting

Once the weights are dynamically determined, the model is ready to backtest. The allocation/rebalancing is performed the day after the weights are determined, which in this case is the first tradeable day of the month. In order to provide for a realistic backtest, a $20 round-trip commission is charged per trade and a 1% yearly management fee is deducted monthly on a pro-rata basis. The blotter package is used for the order generation and tracking of the portfolio equity, with a number of custom wrapper functions. The same approach is applied to the equal weighted portfolio so that the strategies are directly comparable and individually provide a realistic representation of their respective performance.

In addition to the optimal (OP) and equal weight (EQ) models, an alternative (OPalt) model is also tested. It is similar to the OP model but takes into account the forecasted portfolio reward/risk measure, generated from the model scenario and the optimal weights, in order to ‘deleverage’ from a full investment constraint to a 2/3 investment in times when the reward/risk ratio is below ½. I have included this portfolio in order to gain some additional insight into the timing value of the information in the econometric model, as opposed to that of the EMA filter which is known to lag turning points (the compromise made for eliminating costly noise in terms of whipsaw patterns).
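To give a flavour of the accounting mechanics, a heavily simplified sketch of the blotter-based rebalancing is shown below. The portfolio name, the per-side split of the $20 round-trip commission and the rebalancing function are all illustrative, not the actual wrappers, and the management fee handling in the linked code is done via a custom account-level wrapper that is omitted here.

```r
# Sketch: rebalance to target weights on the first tradeable day of each month,
# with blotter doing the transaction and equity accounting.
# Assumes 'prices' (daily closes, xts) and 'target_w', a list of weight vectors
# keyed by rebalance date (as character), e.g. target_w[["2005-02-01"]].
library(blotter)   # also loads FinancialInstrument and xts

currency("USD")
for (tk in colnames(prices)) {
  stock(tk, currency = "USD")
  x <- prices[, tk]; colnames(x) <- "Close"
  assign(tk, x)    # updatePortf() looks up market data by symbol name
}

initPortf("GTAA", symbols = colnames(prices), initDate = start(prices) - 1)
initAcct("GTAA", portfolios = "GTAA", initDate = start(prices) - 1, initEq = 1e6)
side_fee <- -10    # half of the $20 round-trip commission, charged per side

for (dt in names(target_w)) {
  w   <- target_w[[dt]]
  eq  <- getEndEq("GTAA", Date = dt)
  px  <- as.numeric(prices[dt, names(w)])
  cur <- sapply(names(w), function(tk) getPosQty("GTAA", Symbol = tk, Date = dt))
  qty <- floor(eq * w / px) - cur       # trade from current to target holding
  for (i in seq_along(w)) {
    if (qty[i] != 0)
      addTxn("GTAA", Symbol = names(w)[i], TxnDate = dt, TxnQty = qty[i],
             TxnPrice = px[i], TxnFees = side_fee, verbose = FALSE)
  }
  # The 1% p.a. management fee is deducted monthly pro-rata at the account
  # level in the linked code (custom wrapper, omitted here).
  updatePortf("GTAA")
  updateAcct("GTAA")
  updateEndEq("GTAA")
}
```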

Figure 1 shows the total number of securities at any given time in the OP portfolio. The red bars represent those times when there were fewer than 4 securities in the portfolio, in which case the optimization was not run and equal weighting was given to each security.
[Figure 1: fallacy_2 — number of securities in the OP portfolio through time]

Results and Discussion


| 1994-2013 | OPalt | OP | EQ | S&P500 |
|-----------|-------|------|------|--------|
| CAGR | 8.94 | 11.16 | 10.34 | 6.52 |
| Vol | 8.43 | 10.92 | 11.65 | 19.5 |
| Sharpe | 0.71 | 0.76 | 0.65 | 0.27 |
| LW p-value | 0.113 | 0.154 | | |
| BestM | 6.63 | 9.82 | 9.82 | 10.5 |
| WorstM | -6.91 | -10.54 | -10.54 | -19.14 |
| UpM | 69.4 | 68.53 | 63.36 | 63.09 |
| DnM | 30.6 | 31.47 | 36.64 | 36.91 |
| MaxDraw | 14.5 | 19.04 | 19.04 | 59.65 |

| 2000-2013 | OPalt | OP | EQ | S&P500 |
|-----------|-------|------|------|--------|
| CAGR | 8.45 | 9.89 | 8.21 | 0.24 |
| Vol | 9.19 | 11.88 | 12.82 | 21.18 |
| Sharpe | 0.7 | 0.68 | 0.52 | 0.02 |
| LW p-value | 0.03 | 0.078 | | |
| BestM | 6.3 | 9.04 | 6.96 | 10.5 |
| WorstM | -5.89 | -8.84 | -7.01 | -19.14 |
| UpM | 67.5 | 67.5 | 62.5 | 59.01 |
| DnM | 32.5 | 32.5 | 37.5 | 40.99 |
| MaxDraw | 14.5 | 18.65 | 18.24 | 59.65 |

| 2003-2013 | OPalt | OP | EQ | S&P500 |
|-----------|-------|------|------|--------|
| CAGR | 11.49 | 13.93 | 12.21 | 5.79 |
| Vol | 9.83 | 12.49 | 13.68 | 20.52 |
| Sharpe | 0.99 | 0.98 | 0.79 | 0.3 |
| LW p-value | 0.065 | 0.1 | | |
| BestM | 6.3 | 9.04 | 6.96 | 10.5 |
| WorstM | -5.89 | -8.84 | -7.01 | -19.14 |
| UpM | 70.97 | 70.97 | 64.52 | 64.8 |
| DnM | 29.03 | 29.03 | 35.48 | 35.2 |
| MaxDraw | 14.5 | 19.04 | 19.04 | 59.65 |

| 2010-2013 | OPalt | OP | EQ | S&P500 |
|-----------|-------|------|------|--------|
| CAGR | 7.36 | 8.23 | 4.19 | 11.77 |
| Vol | 7.39 | 8.92 | 11.93 | 17.81 |
| Sharpe | 0.99 | 0.92 | 0.4 | 0.71 |
| LW p-value | 0.054 | 0.098 | | |
| BestM | 3.88 | 5.76 | 6.1 | 10.5 |
| WorstM | -5.89 | -8.84 | -7.01 | -8.37 |
| UpM | 67.5 | 67.5 | 57.5 | 63.41 |
| DnM | 32.5 | 32.5 | 42.5 | 36.59 |
| MaxDraw | 10.13 | 14.83 | 18.24 | 20.03 |

Tables 1 and 2 present the results for various subperiods for the 2 optimal portfolios, the equal weighted portfolio and the S&P 500 index (as captured by the VFINX fund). I use subperiod analysis since half of the securities in the universe only start in the early to mid 2000s, which means that the 1990-2000 period is constrained by the small number of assets and hence more likely to have equal weights (when the number of signals is less than 4 the model allocates equally to each).

The results provide a very clear picture of the incremental value of each portfolio. The OP and EQ portfolios appear to have the same drawdowns over the whole sample. This is likely related to the fact that both portfolios use the same signal generating mechanism, which somewhat lags the turning points and is based on monthly data (so that daily P&L accounting is likely to reveal larger intra-month fluctuations). The OPalt portfolio, on the other hand, which makes use of information from the DGP and the optimal weight vector, has lower drawdowns, which indicates that there is value in using information from this model beyond what the signals provide for timing purposes. The Sharpe ratios of the optimal portfolios are higher than that of the equal weighted portfolio in all subperiods, but significantly different at the 10% level only in the subperiods starting in 2000 or later (the ‘LW p-value’ row contains the p-values from the test of Ledoit and Wolf (2008) of the optimal portfolio against the equal weighted portfolio, under the null hypothesis that the difference in their Sharpe ratios is zero), again for the reasons hypothesized earlier. Overall, one may cautiously conclude from this small example that there is value in adopting a disciplined quantitative approach to the allocation of capital beyond the naive 1/N approach.
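As a side note, for readers without access to the original Ledoit and Wolf code, a crude stand-in for the comparison can be obtained from a circular block bootstrap of the Sharpe ratio difference, along the lines below. This is a simplified approximation rather than the authors' studentized HAC test; r_op and r_eq denote the two portfolios' return series over a given subperiod.

```r
# Sketch: bootstrap p-value for H0: SR(op) - SR(eq) = 0, using a circular
# block bootstrap of the paired returns (a simplification of Ledoit & Wolf, 2008).
sharpe_diff_test <- function(r_op, r_eq, B = 5000, block = 21) {
  sr    <- function(x) mean(x) / sd(x)
  n     <- length(r_op)
  d_hat <- sr(r_op) - sr(r_eq)
  boot  <- replicate(B, {
    starts <- sample.int(n, ceiling(n / block), replace = TRUE)
    idx    <- as.vector(t(outer(starts, 0:(block - 1), "+")))  # consecutive blocks
    idx    <- ((idx - 1) %% n) + 1                             # wrap around (circular)
    idx    <- idx[seq_len(n)]
    sr(r_op[idx]) - sr(r_eq[idx])                              # resample both jointly
  })
  # two-sided p-value against the centred bootstrap distribution
  mean(abs(boot - mean(boot)) >= abs(d_hat))
}
# e.g. sharpe_diff_test(as.numeric(r_op), as.numeric(r_eq))
```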

Figure 2a gives the contribution to P&L of each security over 2 subperiods, whilst Figure 2b gives the active contribution (over the equal weighted portfolio) of each security to P&L, which provides some additional insight into where the bets were concentrated.

[Figure 2a: fallacy_3 — contribution to P&L by security]

[Figure 2b: fallacy_4 — active contribution to P&L by security (over the equal weighted portfolio)]

Figure 3a shows the average yearly allocation per security of the optimal portfolio, whilst Figure 3b shows the active allocation (over the equal weighted portfolio). The figures show a clear difference between the optimal and equal allocation methods, with the former having made positive active bets on Gold and High Yield Corporates over the entire period, and having underweighted Asia Pacific Equities in the 1990s.

[Figure 3a: fallacy_5 — average yearly allocation per security (optimal portfolio)]

[Figure 3b: fallacy_6 — average yearly active allocation per security (over the equal weighted portfolio)]

Finally, Figure 4 provides the evolution of Terminal Wealth for the 3 portfolios. This is always somewhat less informative than the subperiod analysis, since it depends on the starting point and will not usually be a good indicator of the average risk-return profile, as can be deduced from the OPalt portfolio, which has the lowest Terminal Wealth but the highest Sharpe ratio and lowest drawdowns.

[Figure 4: fallacy_37 — evolution of Terminal Wealth for the 3 portfolios]

Conclusion

There is really no substitute for a disciplined approach to the modelling of security dynamics and the use of stochastic programming to obtain the optimal weights. Even when there is not enough data, it is certainly possible to map the available data to either a set of higher frequency factors or to closely related prospects with more history. This article provided a very simple example of what is possible. Interesting extensions to the approach adopted might include:

  • A more sophisticated signal generating mechanism, taking into account explanatory variables (not just looking at the price trend) for turning point or directional analysis.
  • A more advanced money management overlay to limit intra-month losses. The blotter wrapper function allows for this, and a simple example is included (though not used) in the code. To my knowledge it remains an open question whether stop losses add or subtract value, and in my experience including them within a strategy for intra-month risk reduction should be done with caution.
  • Removal of the short sale constraint. Technically, there is no reason not to remove the constraint, but modelling this realistically is more complicated because of the need to track margin etc. The blotter wrapper is not yet set up to deal with this case.

Code

There are 5 files in the zip (unstarched_code). Place them in an empty directory which you will designate as your working directory in R (‘setwd’). The main file which you will need to run is the ‘unstarched_script_run.R’ file. Be patient, it takes some time to complete, mainly because of the portfolio trades/accounting, the progress of which will be displayed on screen. I estimate about 60 minutes to complete.
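That is, something along the lines of the following (the extraction path is purely illustrative):

```r
setwd("~/unstarched")              # directory containing the 5 extracted files
source("unstarched_script_run.R")  # progress is printed; allow roughly an hour
```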

References

Demey, P., Maillard, S., & Roncalli, T. (2010). Risk-based indexation. Available at SSRN 1582998.

DeMiguel, V., Garlappi, L., & Uppal, R. (2009). Optimal versus naive diversification: How inefficient is the 1/N portfolio strategy?. Review of Financial Studies, 22(5), 1915-1953.

Faber, M. T. (2007). A quantitative approach to tactical asset allocation. Journal of Wealth Management.

Gibbons, M. R., Ross, S. A., & Shanken, J. (1989). A test of the efficiency of a given portfolio. Econometrica: Journal of the Econometric Society, 1121-1152.

Grinold, R. C. (1992). Are Benchmark Portfolios Efficient?. The Journal of Portfolio Management, 19(1), 34-40.

Kritzman, M., Page, S., & Turkington, D. (2010). In defense of optimization: the fallacy of 1/N. Financial Analysts Journal, 66(2), 31.

Ledoit, O., & Wolf, M. (2008). Robust performance hypothesis testing with the Sharpe ratio. Journal of Empirical Finance, 15(5), 850-859.