John Lim, 1:12 pm ET

**IT'S A COMMONLY HELD** belief that market risk is a function of time in the market. Simply put, risk falls as our holding period lengthens. This is the notion behind time diversification—the idea that more time allows us to diversify across different investment periods, resulting in reduced risk.

For example, the S&P 500 has historically generated positive returns in nearly every 20-year holding period, even after adjusting for inflation. Armed with this data, one of the first things financial advisors ask clients is about their time horizon. The longer clients have to invest, the more risk they can take—or so it’s assumed.

Unfortunately, this assumption is incorrect, at least according to financial theory. It’s certainly true that, as our investment time horizon lengthens, the probability that the *average* return will be positive increases. But if we’re concerned with the range of potential total returns, risk paradoxically rises with longer holding periods.

How can that be? In the world of finance, risk is often measured using standard deviation. Standard deviation is a measure of how far actual returns deviate from the average historical return. The higher the standard deviation, the greater the risk.

Consider the annual returns of the U.S. stock market. We’ll assume that returns are independent from year to year and have a normal, or bell-shaped, distribution. Let’s also assume an average return of 5% and a standard deviation of 30%.

Under those assumptions, annual returns would fall between -25% and +35% about 68% of the time. A higher standard deviation implies greater risk because it means bigger swings up or down from the average.

For nerdier readers, here’s how this works mathematically. (Those less nerdy can skip to the next paragraph.) If the standard deviation of returns over one year is 30%, or 0.3, the standard deviation of the *cumulative* return over *n* years—the sum of *n* independent annual returns—equals 0.3 * √*n*. For a 30-year time horizon (*n* = 30), 30-year returns have a standard deviation of 0.3 * √30, or 1.64. In other words, standard deviation, or risk, has ballooned from 30% over one year to 164% over 30 years. The upshot: Risk is higher, not lower, over longer time periods.
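The √n scaling can be checked in a few lines of Python. This is a minimal sketch under the same assumptions—independent, identically distributed annual returns—with names of my own choosing:

```python
import math

def horizon_sd(annual_sd: float, years: int) -> float:
    """Standard deviation of the cumulative return over `years`,
    assuming independent, identically distributed annual returns.
    Variances of independent returns add, so the standard deviation
    grows with the square root of the number of years."""
    return annual_sd * math.sqrt(years)

print(horizon_sd(0.30, 1))   # 0.3   -> 30% over one year
print(horizon_sd(0.30, 30))  # ~1.64 -> 164% over 30 years
```

Note that nothing about the math is specific to stocks; the same scaling applies to any sequence of independent draws.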

But what about the notion that the longer we remain invested in the market, the lower the probability of losing money? The probability of losing money in stocks does indeed fall with longer holding periods. Since 1871, there’s been just one 20-year period when the return of the S&P 500 was negative. From June 1901 to June 1921, its inflation-adjusted return was -4.3%.

Probabilities, however, don’t tell the whole story. Standard deviation also reflects the *magnitude* of deviations from the average. As the time horizon lengthens, the magnitude of worst-case outcomes grows.

For instance, an investor might ask the following question: “How much does my portfolio stand to lose if stock returns are in the worst 1% of possible outcomes?” In other words, what is my 1% “value at risk” in an awful market scenario?

In our example, the worst 1% one-year scenario would be a 47.7% loss. That’s painful. But the worst 1% 30-year scenario would be a bone-crushing loss of 90.2%. In short, risk—at least as measured by standard deviation—actually increases with time. Why? Just as gains compound, losses can, too. In a worst-case scenario, multiple losing years can result in a devastating cumulative loss.
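The article’s loss figures can be reproduced if we read the 5% and 30% as the mean and standard deviation of annual *log* (continuously compounded) returns—an assumption on my part, made because it matches the numbers above. A sketch:

```python
import math
from statistics import NormalDist

def worst_case_return(mu: float, sigma: float, years: int,
                      pct: float = 0.01) -> float:
    """Total return at the `pct` quantile after `years`, assuming
    annual log returns are i.i.d. normal with mean `mu` and
    standard deviation `sigma`."""
    z = NormalDist().inv_cdf(pct)  # about -2.33 for the worst 1%
    log_return = mu * years + z * sigma * math.sqrt(years)
    return math.exp(log_return) - 1

print(round(worst_case_return(0.05, 0.30, 1), 3))   # -0.477 -> a 47.7% loss
print(round(worst_case_return(0.05, 0.30, 30), 3))  # -0.902 -> a 90.2% loss
```

The one-year shortfall is unpleasant; compounded over 30 years of bad luck, it becomes the 90% wipeout described above.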

I think John Lim was addressing the concept that, if there is some probability of a loss in any one year, then “the growing improbability of a loss is offset by the increasing magnitude of potential losses” (Mark Kritzman, Financial Analysts Journal, 2015, 71(1), referenced in Jonathan Clements’s comment above) over longer time horizons. Although I am not a statistician, my experience from a few statistics classes left me with a nagging feeling after reading John Lim’s article. After going back to my statistics book, I finally figured out what concerned me.

The statement “If the standard deviation of returns over one year is 30%, or 0.3, the standard deviation of the return over n years equals 0.3 * √n” is incorrect. Here, n is the number of annual return data points used to estimate the mean and standard deviation of one-year returns. An analysis of yearly returns can only tell us the probability distribution of the return in any single year—that is, that the standard deviation of the return in any given year is 30%. It says nothing about returns over a 30-year period. A completely different analysis is required to address the probability of returns over 30 years. A Monte Carlo analysis could be used: compound numerous (N observations) combinations of 30 one-year returns drawn from the one-year distribution given above. This method was probably used to generate Figure A in the Kritzman article, which shows risk decreasing with time horizon for their specific one-year return distribution. The distribution of 30-year returns could also be estimated directly from the stock market data, using random 30-year periods, with each 30-year period counting as one observation (N).
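For what it’s worth, the Monte Carlo experiment described here is easy to sketch. The version below is my own, using the article’s 5% and 30% figures as the mean and standard deviation of independent annual log returns, and compounding 30 annual draws per trial:

```python
import random
import statistics

random.seed(42)
MU, SIGMA, YEARS, TRIALS = 0.05, 0.30, 30, 20_000

# Each trial sums 30 independent annual log returns,
# which is equivalent to compounding 30 annual returns.
cumulative = [sum(random.gauss(MU, SIGMA) for _ in range(YEARS))
              for _ in range(TRIALS)]

sd_30yr = statistics.stdev(cumulative)
print(sd_30yr)  # close to 0.30 * sqrt(30), i.e. about 1.64
```

Under the independence assumption, the simulation lands near the article’s √n figure; whether annual stock returns really are independent is, of course, exactly what the debate over time diversification turns on.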

My analysis does not address the discussion of time diversification using the utility function presented later in the Kritzman article.

Please provide a source for the mathematics in your article.

If you Google “time diversification,” you’ll find ample reading material. At the height of the debate over time diversification during the 1990s, this was an often-cited article:

https://www.acsu.buffalo.edu/~keechung/Lecture%20Notes%20and%20Syllabus%20(MGF633)/What%20Practitioners%20Need%20to%20Know%20About%20Time%20Diversification.pdf

Thanks.

I’m not surprised that this is a controversial issue.

A Fundamental Misunderstanding of Risk: The Bias Associated with the Annualized Calculation of Standard Deviation

Quantifiable, measurable risk is of critical importance when making data-driven decisions in finance and investment management, but what if the generally accepted practice of the investment industry for calculating risk possessed incorrect mathematical assumptions and embedded biases? This piece revisits the discussion surrounding the methodology used to calculate annualized standard deviation statistics commonly used when reporting the performance of investment products. It goes on to present a new example illustrating the bias when applied to an efficient frontier.

https://www.tandfonline.com/doi/full/10.1080/23322039.2020.1857005

Good question, Rick. I don’t have formal financial training, but it’s my understanding that annual returns are often assumed to take on a normal distribution and that year-to-year returns are assumed to be uncorrelated or independent, as you put it. This assumption is reasonable most of the time but as I’ll address in an upcoming post, it does have a major weakness. (I don’t want to spill the beans just yet…)

I’m sure the variance or standard deviation would differ depending on the time period you examine. It would make the most sense to me that you should examine as large a time period as possible.

John, thanks for the insightful article. This topic is a tough one for many of us. The concept of using a deviation, or variance, to represent portfolio risk isn’t intuitive. To most of my friends and family, risk is “how much money can I lose?”

Do you have a feel for how independent yearly returns are? Does it matter much what period you choose? I may have to go back to my CFP texts to review this. Thanks.