Investment professionals can overdo performance measurement, simply because technology makes it so easy and because constantly gauging whether clients are on the path to meeting their long-term financial goals seems like a worthwhile task.
Too often, however, “doing something” based on short-term performance measurement can degrade the long-term performance potential of a portfolio by chasing recent winners.
If we must regularly assess performance, let’s focus on performance relative to expectation distributions, such as a strategy’s expected tracking error of returns relative to its benchmark.
In the last installment of the advisor series, our colleagues discussed how financial advisors can better serve their clients by shifting their due diligence efforts to identifying more reliable product designs in smart beta. In this article, we discuss why the investment industry is so obsessed with short-term performance evaluation and how investors can steer clear of the pitfalls arising from measuring and chasing returns.
If you’ve been a regular reader of our article series dedicated to the concerns of advisors, we hope you’ve found our ideas helpful in finding new ways to maximize your clients’ chances of achieving their financial goals. We started with a model of return and risk expectations (check!), added adequate diversification (check!), and reviewed product design and implementation considerations (check!). Next, it seems natural to closely monitor how investments perform over time, right? Well, not so fast. Before we engage in the common practice of performance assessment, let’s take a deep breath and recognize that, in many cases, we’d be better off not engaging in regularly scheduled performance measurement. Of course, that’s much easier said than done. And since we almost certainly must measure performance, we recommend doing so within a framework that acknowledges short-term noise and encourages investors to stay the course as they pursue long-term investment returns that are less random and more predictable.
Most investment professionals—financial advisors included—seem to devote nearly as much time to performance measurement on the job as we humans spend on social media off the job these days. Sadly, we appear to be doing far too much of both. In the case of performance measurement, technology simply makes it so easy to run the numbers.1 Keeping close tabs on portfolio performance must be “proof” that we are acting as responsible fiduciaries and investors, and a guide to making superior decisions. Unfortunately, the evidence suggests otherwise. Just because we can do something, and have been doing it often in the past, doesn’t make it a worthwhile activity.2 Instead, we might all be better served by taking a step back from the endless noise, stepping off the treadmill of “doing,” and asking ourselves a few questions: Why do we spend so much time as a profession measuring and comparing the recent performance of securities, managers, and investment styles? Does this practice add value over the long run?
I’ve taken the unconventional step of deleting my Facebook and Twitter accounts over the last year. I can’t say that I miss either. Now, I’m not advocating you should fully unplug and stop looking at performance. I’m simply reflecting what we believe at Research Affiliates: that investors need to recognize a) the large sunk costs that come from obsessively monitoring performance and b) the natural (and dangerous) consequences that emerge from this activity. Namely, costly mistakes arise when we react to short-term performance assessments by piling into recent star performers and selling the laggards. In short, traditional performance measurement can be far worse than a neutral time sink akin to playing Candy Crush. It can actually detract from a client’s realized returns when the activity of measuring performance turns into chasing performance. The natural human instinct of “Don’t just sit there, do something!” encourages us to favor investments that have done well recently and to pull away from recent poor performers—just when predictable long-run mean reversion lies ahead.
So, why do we spend so much time measuring and comparing the past performance of securities, managers, and investment styles? One likely reason is a common belief that looking at historical returns leads to better portfolio management and investment decisions for our clients. And to a certain extent, that belief is obviously backed by our own experience.
As we have noted earlier in the series, several sources of a long-term robust return premium exist, but let’s use value as an example here. Those who “believe in value” have the added comfort of the self-evident logic that prices matter in investing as they do in everyday life, aligning with the existence of very long return series that “confirm” this logic with data. From this perspective, looking at realized performance is helpful and, in fact, is the basis for applying a scientific approach to investing.
But problems arise when we outsource our lack of conviction in an investment, such as a factor or a style, to its time series. In the lingo of finance academics, this is the difference between having a Bayesian prior (and informing it further with available data) and not having a prior, whatsoever, on which to base educated investment decisions.3
Nonprofessional investors can be forgiven for taking this unfortunate shortcut. They do not have the knowledge base advisors do, which can lead them, understandably, to believe that the “proof is in the pudding” in terms of performance, as it is with many things in life. (To be clear, this observation is not a slight to nonprofessional investors—even the most skilled of experts can fall prey to behavioral fallacies and knowledge gaps.) Obviously, not everyone can, or should, be a professional investor. Job specialization is very necessary, as you and I likely have little desire to build our own car or perform our own surgery. But those of us who do make our living as well-trained and well-intentioned overseers of capital, including financial advisors, can and must recognize the dangers of short-term performance measurement.4 The most evident way to combat this peril is to develop investment beliefs informed by, but not solely derived from, data.5
The reality, however, is that performance measurement often leads us to now-cast—a combination of “now” and “forecast”—which presumes that the near-term future will look an awful lot like the near-term past. The human brain loves a good way to make sense of an uncertain future, and where better to look than in the readily available past. Blindly relying on recent performance to infer a lasting future trend is fraught with danger.
Let’s illustrate this with an example and assume our client, Chaser Chad, has reasonable expectations of long-term return and risk along with a solid understanding of the benefits of diversification. He believes that value is a robust source of excess return, and thus a portion of his portfolio includes several different value-oriented equity strategies. Actively keeping tabs on his investments, Chaser Chad notices that over the last three years, one of his value strategies has outperformed another one in his portfolio by nearly 3.5% a year. Hence, he wishes to sell the underperforming strategy and move those funds to the brilliantly performing one. Chaser Chad is not alone: this thinking is pervasive in our industry. Granted, over a long time span, a wide performance differential between two seemingly similar strategies may signal differences in execution, fees, or costs. But over shorter periods, “noise”6 can be staggeringly attention grabbing, tantalizing investors to create stories and “reasons” to justify action.
Consider eight hypothetical long-only value strategies, all using the same construction method with just one variation, the value signal definition: book to price, earnings to price, cash-flow to price, dividends to price, sales to price, assets to price, operating profits to price, and dividends-plus-buybacks to price. Because they are all value strategies based on a priori equally valid metrics, our prior would be to expect them to deliver reasonably similar return outcomes over the long run. And yes, all generate an annualized return within a modestly narrow band of 11.4% to 12.8% over a 50-year period ending December 31, 2017.
When we begin to assess performance over shorter horizons, such as 3 years rather than 50, these strategies—nearly equivalent from a 50-year perspective—now appear to be extraordinarily different: their shorter-term returns deviate significantly. These performance differences are reflected in surprisingly high levels of three-year pairwise tracking errors to one another. From 1968 through 2017, the average rolling three-year pairwise tracking error was nearly 3%. In nearly one-fifth of the instances, the pairwise tracking error across these eight value strategies exceeded 4%. In the most extreme scenario, in the three-year period ending January 2001, two of the strategies—operating profits to price and dividends-plus-buybacks to price—exhibited a tracking error of returns to one another of nearly 9%. This wide variability in short-term performance is essentially noise and easily gives the illusion that an underlying reason explains the differential and that investors—especially the attentive, skilled, and opportunistic ones—have a chance to profit by gearing their long-term investments to the near-term evidence of superior returns.
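The rolling pairwise tracking-error calculation described above can be sketched in a few lines of Python. The return series below are simulated placeholders, not the actual historical strategy data, and the noise level (0.6% monthly strategy-specific dispersion around a shared value component) is an assumption chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated monthly returns for 8 hypothetical value strategies:
# a shared "value" component plus strategy-specific noise.
# These are illustrative placeholders, not the historical data in the text.
n_months, n_strategies = 600, 8                      # 50 years of monthly data
common = rng.normal(0.009, 0.04, n_months)           # shared value-factor return
noise = rng.normal(0.0, 0.006, (n_months, n_strategies))
returns = common[:, None] + noise                    # one column per strategy

def pairwise_tracking_error(window):
    """Annualized tracking error for every strategy pair in a (months x strategies) window."""
    tes = []
    for i in range(window.shape[1]):
        for j in range(i + 1, window.shape[1]):
            diff = window[:, i] - window[:, j]        # monthly return differences
            tes.append(diff.std(ddof=1) * np.sqrt(12))  # annualize the monthly std
    return np.array(tes)

# Average pairwise tracking error over each rolling 3-year (36-month) window
window_len = 36
avg_te = [pairwise_tracking_error(returns[t:t + window_len]).mean()
          for t in range(n_months - window_len + 1)]
print(f"Average rolling 3-year pairwise tracking error: {np.mean(avg_te):.2%}")
```

Note that the shared value component cancels out of every pairwise difference, so the measured tracking error reflects only the strategy-specific noise—exactly the short-term "noise" the text describes.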
A ranking of the eight strategies over each of the five-year horizons between 1968 and 2017 shows that none persistently remained as leader or laggard over subsequent five-year horizons. For example, a high-ranked strategy in the five-year window 1988–1992, the sales-to-price strategy, falls to bottom-ranked in the next five-year window, 1993–1997. The strategy then reverts to a top-ranked placement over the next two five-year windows. We see this pattern throughout; dividends-plus-buybacks to price is another example. My colleague John West is fond of saying that mean reversion is unreliably reliable, and we see here quite clearly that the process of mean reversion turns return-chasing behavior into a drag on investment outcomes.
The notion that everyone will delete their social media accounts is unrealistic—although I wonder what would happen if everyone did. Similarly, the notion that the investment industry will abandon short-term performance assessment is impractical. If we must regularly assess performance, let’s focus on performance relative to expectation distributions, such as a strategy’s tracking error of returns relative to its benchmark. By doing so, we are better able to short-circuit our behavioral tendencies and recognize that near-term performance is well within the range of reasonably expected deviations the vast majority of the time.7 From this perspective, we can conclude that investors are better off doing nothing much of the time, as opposed to doing something, assuming that we have positive priors for the strategies we are evaluating.8
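One way to operationalize "performance relative to expectation distributions" is to ask whether a strategy's realized excess return over the measurement horizon sits inside a band implied by its expected tracking error. The sketch below is a minimal illustration of that idea, assuming a 3% annual tracking-error expectation and a two-standard-deviation threshold; both numbers, and the simulated excess returns, are assumptions for illustration only:

```python
import numpy as np

def performance_flag(excess_returns_monthly, expected_te_annual, z=2.0):
    """
    Compare a strategy's realized annualized excess return over the sample
    to a +/- z * expected-tracking-error band, scaled to the horizon length.
    Inputs and the z threshold are illustrative assumptions.
    """
    years = len(excess_returns_monthly) / 12
    realized = np.mean(excess_returns_monthly) * 12       # annualized excess return
    band = z * expected_te_annual / np.sqrt(years)        # band narrows as horizon grows
    return ("within expectations" if abs(realized) <= band
            else "outside expectations -- investigate")

# 3 years of simulated monthly excess returns, consistent with a 3% annual TE
rng = np.random.default_rng(1)
excess = rng.normal(0.0, 0.03 / np.sqrt(12), 36)
print(performance_flag(excess, expected_te_annual=0.03))
```

Because the band shrinks with the square root of the horizon, a deviation that looks alarming over three years is often unremarkable once judged against the expected distribution—reinforcing the article's point that, most of the time, the right response is to do nothing.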
The choice of “doing less” rather than more also has the distinct advantage of being a trading-cost-reducing strategy. As we de-emphasize the importance of short-term performance, we also gain an appreciation for the fact that long-term performance expectations have less noise built into them. Therefore, we can approach the task of assessing long-term performance with more confidence, though not certainty. Over longer horizons, such as a 10-year window, excess returns fall within a smaller range of outcomes, making performance measurement and managing to these measurements—thus bypassing the follies of return chasing (Arnott, Kalesnik, and Wu, 2017)—a more rewarding activity for advisors and their clients.
Arnott, Robert, Vitali Kalesnik, and Lillian Wu. 2017. “The Folly of Hiring Winners and Firing Losers.” Research Affiliates Publications (September).
Bernstein, Peter. 1995. “Risk as a History of Ideas.” Financial Analysts Journal, vol. 51, no. 1 (January/February):7–11.
Black, Fischer. 1986. “Noise.” Journal of Finance, vol. 41, no. 3 (July):528–543.
Brightman, Chris, James Masturzo, and Jonathan Treussard. 2014. “Our Investment Beliefs.” Research Affiliates Publications (October).
Harvey, Campbell. 2017. “The Scientific Outlook in Financial Economics: Transcript of the Presidential Address and Presentation Slides.” Duke I&E Research Paper No. 2017-06 (January 7). Available at SSRN.
Silver, Nate. 2012. The Signal and the Noise. New York, NY: Penguin Books:194–195.
Treussard, Jonathan. 2011. “An Options Approach to the Valuation of Active Management.” Unpublished white paper.
Treussard, Jonathan, and Robert Arnott. 2017. “I Was Blind, But Now I See: Bubbles in Academe.” Research Affiliates Publications (June).