
A Framework for Assessing Factors and Implementing Smart Beta Strategies

By Jason Hsu, Vitali Kalesnik, and Vivek Viswanathan

JUNE 2015

Executive Summary
In the thickly planted field of factors, practitioners sorely need help separating the wheat from the chaff; that is, separating the equity factors that can deliver on the promise of long-term outperformance from those that cannot. With over 250 purported factors from which investors can make hay, determining which are actually premium bearing could be a daunting task. But practitioners now have a three-step process to help them identify factor robustness. Jason Hsu, Vitali Kalesnik, and Vivek Viswanathan outline the heuristic in their article, “A Framework for Assessing Factors and Implementing Smart Beta Strategies,” published in the Summer 2015 issue of the Journal of Index Investing

The first step in validating a factor begins with screening out all factors that have not been debated and vetted in the top-tier journals over a lengthy period. Key is the ability to replicate the published results. Although a substantial literature on a factor does not guarantee its persistence, a dearth of literature is a good indicator that it may lack a sound theoretical foundation. 

The second step in the framework is eliminating factors whose effect 1) lacks persistence across subperiods within a longer-horizon dataset, or 2) lacks statistical significance in most countries. Because most factors are identified using U.S. data, verifying their existence in a broad range of country markets beyond the United States is a critical test of robustness. When a factor earns a positive premium in the United States but fails to do so elsewhere, it raises a data-mining red flag. Regardless of the explanation for the factor—investor behavior or risk—it should be observable across markets. If not, practitioners should cull the factor.
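The subperiod half of this screen can be sketched in a few lines. The code below is not from the paper; it is a minimal illustration of the idea, using simulated returns, of splitting a long-short factor return series into subperiods and checking whether the premium's t-statistic holds up in each one. The premium and volatility assumptions are hypothetical.

```python
import numpy as np

def subperiod_tstats(monthly_factor_returns, n_subperiods=3):
    """Split a long-short factor return series into subperiods and report
    the mean premium and its t-statistic in each one. A robust factor
    should show a positive, reasonably significant premium in most
    subperiods, not just in the full sample."""
    chunks = np.array_split(np.asarray(monthly_factor_returns), n_subperiods)
    stats = []
    for chunk in chunks:
        mean = chunk.mean()
        t = mean / (chunk.std(ddof=1) / np.sqrt(len(chunk)))
        stats.append((mean, t))
    return stats

# Illustration on simulated data: a hypothetical 4% annual premium with
# 10% annual volatility over a 1967-2013-length sample (564 months).
rng = np.random.default_rng(0)
simulated = rng.normal(0.04 / 12, 0.10 / np.sqrt(12), size=564)
for mean, t in subperiod_tstats(simulated):
    print(f"subperiod mean={mean:.4%}, t={t:.2f}")
```

The country-by-country half of the screen is the same computation run on each regional return series in turn.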

The third step in determining the robustness of a factor is to test alternative definitions of the factor. Typically, a new factor is announced with the trumpeting of an optimal backtest; that is, one with a large adjusted alpha and t-stat, and the practitioner would be wise to assume the published results inflate the existence and size of the premium. Slightly varying the definition of the factor and using the variation(s) to test for the existence of the premium is a good way to alleviate any suspicions about cherry picking the data. Excellent and well-known examples of this type of validation are found in the research of Fama and French (1992), who discovered that earnings yield and dividend yield each produce results similar to book-to-market ratio in constructing the value factor, and Jegadeesh and Titman (1993), who showed that momentum is robust to different look-back formation and holding periods. 
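The perturbation test above can also be sketched simply. The snippet below is not the authors' code; it is a hypothetical illustration on simulated data of computing a long-short premium under several alternative value definitions (book-to-market, earnings yield, dividend yield), in the spirit of the Fama-French example. All data-generating assumptions are invented for illustration.

```python
import numpy as np

def long_short_premium(signal, forward_returns, quantile=0.3):
    """Average forward return of the top-signal group minus the bottom
    group -- a simple one-period long-short premium estimate."""
    signal = np.asarray(signal, dtype=float)
    forward_returns = np.asarray(forward_returns, dtype=float)
    lo, hi = np.quantile(signal, [quantile, 1 - quantile])
    return forward_returns[signal >= hi].mean() - forward_returns[signal <= lo].mean()

# Hypothetical cross-section of 500 stocks with three correlated value
# signals and forward returns loosely tied to the value characteristic.
rng = np.random.default_rng(1)
n = 500
book_to_market = rng.lognormal(0, 0.5, n)
earnings_yield = 0.10 * book_to_market + rng.normal(0, 0.05, n)
dividend_yield = 0.05 * book_to_market + rng.normal(0, 0.02, n)
fwd = 0.02 * (book_to_market - book_to_market.mean()) + rng.normal(0, 0.1, n)

for name, sig in [("B/M", book_to_market), ("E/P", earnings_yield), ("D/P", dividend_yield)]:
    print(f"{name}: premium = {long_short_premium(sig, fwd):.2%}")
```

If the premium survives under each alternative definition, cherry-picking of the original signal becomes a less plausible explanation.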

Using the three steps just described, the authors test six factors: value, momentum, low beta (low volatility), quality, illiquidity, and size. Four factors—value, momentum, low beta, and illiquidity—are robust across the United States, United Kingdom, and Europe ex U.K., and two factors—value and low beta—are also robust in Japan. The U.S. data sample period is 1967–2013 and the non-U.S. sample period is 1987–2013 (with the exception, for the illiquidity factor, of the U.K. and Japan data samples, which begin in 1992). Value, momentum, low beta, and illiquidity are robust to each of the four perturbations in definition that the authors test. These four factors successfully pass the three-step screen.

Having identified four robust factors, the goal of smart beta is to implement them in a passive portfolio. Robustness alone is not enough. A factor must be investable in a smart beta strategy, which means it should be acquirable through liquid and high-capacity equity issuers. Equally essential is the ability to capture the factor premium with low turnover and minimal transactions costs.

A factor will typically lend itself to implementation through either an active or a passive strategy. Novy-Marx and Velikov (2014) analyze factors in relation to turnover and transactions costs. They find that factors associated with low turnover, such as market, low volatility, and value, which are not heavily impacted by transactions costs, are well suited to the transparent index-based approach of a smart beta strategy. In contrast, the momentum and illiquidity factors, which are characterized by higher turnover, experience estimated transactions costs of 20 to 57 bps a month. Implementing these two factor exposures through a transparent index-based approach would invite front running and impose a substantial burden on the strategy’s ability to outperform. As a result, these two factor exposures may be better implemented by a highly skilled active manager.

Because most researchers measure factor premiums from backtested paper portfolios rather than out-of-sample, extant portfolios, the practical consideration of transactions costs can meaningfully lower or eliminate the value added from capturing the premium. Thus, whether the implementation is active or passive, managing transactions costs is critical. A strategy’s turnover should be calibrated to earn the factor premium without undue cost. For example, to fulfill its potential for outperformance, a momentum strategy might require monthly turnover, but if a value strategy were to pursue such frequent turnover, a significant chunk of its returns would be consumed by transactions costs.
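The turnover-cost tradeoff is simple arithmetic, which can be made concrete with a back-of-the-envelope sketch. The numbers below (gross premiums, turnover rates, a flat 25 bps per-trade cost) are hypothetical assumptions, not figures from the paper; the point is only that the same per-trade cost produces a much larger drag on a high-turnover sleeve.

```python
def net_annual_premium(gross_annual_premium, annual_one_way_turnover, cost_bps_per_trade):
    """Net factor premium after trading costs, assuming a flat per-trade
    cost applied to both the sells and the offsetting buys (hence the
    factor of 2 on one-way turnover)."""
    drag = 2 * annual_one_way_turnover * cost_bps_per_trade / 10_000
    return gross_annual_premium - drag

# Hypothetical: a low-turnover value sleeve vs a high-turnover momentum
# sleeve, both paying 25 bps per trade.
print(net_annual_premium(0.04, 0.20, 25))  # value: light cost drag
print(net_annual_premium(0.06, 1.50, 25))  # momentum: much heavier drag
```

Under these assumptions the value sleeve gives up 10 bps a year to trading while the momentum sleeve gives up 75 bps, which is why turnover calibration matters more for momentum than for value.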

Lastly, the practitioner must decide on how the factors are allocated in the portfolio. In many respects, factor allocation can be implemented in the same way as traditional asset allocation—and has many of the same caveats. The factor exposure decision is based in large part on whether the investor is sensitive to absolute risk or to risk relative to a benchmark (i.e., tracking error). An investor who is sensitive to absolute risk may gravitate toward a low-volatility portfolio’s high Sharpe ratio, whereas an investor who is sensitive to tracking error may be strongly averse to a low-volatility portfolio’s high tracking error relative to a capitalization-weighted benchmark.
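The two risk lenses above correspond to two different statistics on the same return series. The sketch below is a hypothetical illustration on simulated data (the beta, volatility, and return assumptions are invented): a low-volatility sleeve can look attractive on Sharpe ratio while simultaneously carrying substantial tracking error against a cap-weighted benchmark.

```python
import numpy as np

def sharpe_ratio(monthly_returns, monthly_rf=0.0):
    """Annualized Sharpe ratio from monthly returns -- the metric an
    absolute-risk investor cares about."""
    excess = np.asarray(monthly_returns) - monthly_rf
    return excess.mean() / excess.std(ddof=1) * np.sqrt(12)

def tracking_error(monthly_returns, benchmark_returns):
    """Annualized tracking error vs a benchmark -- the metric a
    benchmark-relative investor cares about."""
    active = np.asarray(monthly_returns) - np.asarray(benchmark_returns)
    return active.std(ddof=1) * np.sqrt(12)

# Simulated 20-year illustration: a low-volatility sleeve with a 0.6
# beta to a cap-weighted benchmark. All parameters are hypothetical.
rng = np.random.default_rng(2)
benchmark = rng.normal(0.008, 0.045, 240)
low_vol = 0.6 * benchmark + rng.normal(0.004, 0.020, 240)
print(f"Sharpe (benchmark): {sharpe_ratio(benchmark):.2f}")
print(f"Sharpe (low vol):   {sharpe_ratio(low_vol):.2f}")
print(f"Tracking error:     {tracking_error(low_vol, benchmark):.2%}")
```

The same portfolio thus scores well on one lens and poorly on the other, which is the crux of the allocation decision described above.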

And although correlations, volatilities, and expected return metrics inform factor allocation, research has shown that their time-varying nature and mean reversion can make historical sample estimates misleading, essentially sabotaging the attempted factor optimization process. But the industry is making progress on this front. Financial researchers are finding promising new conditioning variables to use in estimating forward return and risk parameters for factors. This knowledge will allow investors to move beyond an equal allocation or a suboptimal allocation based on time series–based estimates. 

In summary, the good news is that Hsu, Kalesnik, and Viswanathan have clearly articulated a straightforward three-step process to help practitioners isolate robustness in a crowded field of factors. The not-so-good news (for practitioners hoping to harvest newly discovered sources of excess return in their portfolios) is that not many factors will make the cut.

Summarized by Kay Jaitly, CFA.
