This notebook features the Quantopian-based research presented in the paper "Momentum with Volatility Timing". Specifically, the paper proposes two extensions to the conventional momentum factor strategy by introducing the volatility-timed winners approach. First, to resolve factor underperformance in the 2011-2018 post-crisis period, the conventional winners-minus-losers momentum is replaced with its winners-only component. Second, the proposed approach substitutes constant volatility scaling with a threshold function and uses past volatilities as a timing predictor for changing momentum strategies. The approach was confirmed with Spearman rank correlation and demonstrated in relation to different strategies, including momentum volatility scaling, risk-based asset allocation, time series momentum, and MSCI momentum indexes.

The notebook is structured as follows:

In [1]:

```
import datetime
import numpy as np
import pandas as pd
from pandas.tseries.offsets import BDay
import matplotlib.pyplot as plt
from quantopian.pipeline import Pipeline
from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.pipeline.factors import CustomFactor, Returns, DailyReturns
from quantopian.pipeline.filters import QTradableStocksUS
from quantopian.research import run_pipeline
import alphalens as al
import math
```

In [2]:

```
import pyfolio as pf
import empyrical as ep
```

This section is based on the Quantopian and Alphalens framework that is comprehensively described in Lecture 39: Factor Analysis with Alphalens. In comparison with the lecture, however, the paper applies the Quantopian framework to the conventional winners-minus-losers momentum strategy documented by Jegadeesh and Titman (1993). The corresponding factor is computed by ranking stocks according to their prior returns over an 11-month formation period with a 1-month lag:

In [3]:

```
class Momentum(CustomFactor):
    """ Conventional momentum factor """
    inputs = [USEquityPricing.close]
    window_length = 252

    def compute(self, today, assets, out, prices):
        # 11-month prior return, skipping the most recent month (21 trading days)
        out[:] = (prices[-21] - prices[-252]) / prices[-252]
```

Defining a time interval covering the 2008-2009 market downturn

In [4]:

```
start_date, end_date = '2004-07-01', '2010-12-31'
```

Running the pipeline with the Momentum class to calculate momentum prior scores

In [5]:

```
universe = QTradableStocksUS()
def make_pipeline():
    pipe = Pipeline()
    pipe.add(Momentum(mask=universe), "momentum")
    pipe.set_screen(universe)
    return pipe
pipe = make_pipeline()
results = run_pipeline(pipe, start_date, end_date)
results.dropna(inplace=True)
mom_scores = results['momentum']
```

Retrieving prices for pipeline assets

In [6]:

```
asset_list = results.index.levels[1]
prices = get_pricing(asset_list, start_date=start_date, end_date=end_date, fields='open_price')
```

Calculating forward returns and quantile ranks with Alphalens

In [7]:

```
mom_data = al.utils.get_clean_factor_and_forward_returns(factor=mom_scores,
                                                         prices=prices,
                                                         quantiles=10,
                                                         periods=(1, 21))
```

Computing returns for factor quantiles

In [8]:

```
mom_returns, mom_stds = al.performance.mean_return_by_quantile(mom_data,
                                                               by_date=True,
                                                               by_group=False,
                                                               demeaned=False,
                                                               group_adjust=False)
mom_returns_1D = mom_returns['1D'].unstack('factor_quantile')
```

The section compares the performance of the conventional winners-minus-losers (WML) momentum strategy with the market. As shown in **Figure 1**, during the recession the market's cumulative returns fell steeply and reached a low in 2009. Afterwards, the economy began to recover and returns started to climb. The WML momentum strategy, in contrast, fared well through the 2008-2009 downturn, with returns soaring for its duration, but the factor then suffered an abrupt, substantial momentum crash the moment excess market returns began to recover. Momentum's performance subsequently fell below its 2005 level and finished 2010 below the market.

WML is calculated by grouping the top and bottom 10% of assets into equally-weighted winners and losers portfolios, respectively, and then taking the difference of these components.

Note: The momentum factors in the Carhart factor model and the Kenneth R. French Data Library are computed using the top and bottom 30% of assets.
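For reference, a 30% breakpoint analog can be approximated by averaging the three top and three bottom decile portfolios. This is an illustrative sketch on synthetic data; the French-library factors additionally use NYSE breakpoints and value weighting, so this is only a rough equal-weighted analog:

```python
import numpy as np
import pandas as pd

# synthetic stand-in for the Alphalens decile return table
# (columns = factor quantiles 1..10, rows = dates)
rng = np.random.RandomState(0)
deciles = pd.DataFrame(rng.normal(0.0005, 0.01, (250, 10)),
                       columns=range(1, 11))

# 30% breakpoints: average the three highest and three lowest deciles
winners30 = deciles[[8, 9, 10]].mean(axis=1)
losers30 = deciles[[1, 2, 3]].mean(axis=1)
wml30 = winners30 - losers30
```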

In [9]:

```
wml10 = mom_returns_1D[10] - mom_returns_1D[1]
```

Retrieving market prices and returns using the SPDR S&P 500 ETF Trust (SPY)

Note: In the paper, the risk-free rate is subtracted from the market, winners, and losers returns. The BIL risk-free proxy available through get_pricing, however, goes back only to May 2007. As an alternative, the risk-free rates from the Kenneth R. French Data Library can be uploaded to the notebook and applied. For simplicity, this notebook ignores the risk-free rate.
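Subtracting an externally uploaded risk-free series could be sketched as follows. Here `rf` is a hypothetical daily rate series (e.g., the RF column of the French daily factor file divided by 100), not data loaded in this notebook:

```python
import pandas as pd

def excess_returns(returns, rf):
    """Subtract a daily risk-free rate, aligned on the return index."""
    rf_aligned = rf.reindex(returns.index).fillna(0.0)
    return returns - rf_aligned

# toy example with a constant 1% annual rate spread over 252 trading days
idx = pd.date_range('2008-01-01', periods=5, freq='B')
returns = pd.Series([0.01, -0.02, 0.005, 0.0, 0.015], index=idx)
rf = pd.Series(0.01 / 252, index=idx)
excess = excess_returns(returns, rf)
```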

In [10]:

```
mkt_start_date = datetime.datetime.strptime(start_date, '%Y-%m-%d') - BDay(1)
mkt_prices = get_pricing('SPY', start_date=mkt_start_date, end_date=end_date, fields='price')
mkt_returns = mkt_prices.pct_change(periods=1)[1:-21]
```

Calculating cumulative returns and comparing performance

In [11]:

```
days = wml10.index.values
mkt_cum = np.cumprod(mkt_returns + 1)
wml10_cum = np.cumprod(wml10 + 1)
plt.figure(figsize=(10,6))
plt.plot(days, mkt_cum/mkt_cum[0], color='y')
plt.plot(days, wml10_cum/wml10_cum[0], color='blue')
plt.title('Figure 1', fontsize=18)
plt.ylabel('Cumulative Returns')
plt.legend(['MKT','WML10'],loc='upper left')
plt.show()
```

To gain insight into the behavior of WML10, we extend the analysis to the factor's winning and losing components, W10 and L10. According to **Figure 2**, the spike in WML10 cumulative returns in mid-2008 was driven by L10 shifting downwards, widening the gap between winners and losers. Then, after the market began to regain strength and momentum crashed, the top and bottom 10% portfolios switched places: losers outperformed winners. The winners-minus-losers strategy thus became unprofitable, going long the underperforming past winners while shorting the outperforming past losers.

In [12]:

```
w10 = mom_returns_1D[10]
w10_cum = np.cumprod(w10 + 1)
l10 = mom_returns_1D[1]
l10_cum = np.cumprod(l10 + 1)
plt.figure(figsize=(10,6))
plt.plot(days, mkt_cum/mkt_cum[0], color='y')
plt.plot(days, w10_cum/w10_cum[0], color='green')
plt.plot(days, l10_cum/l10_cum[0], color='blue')
plt.title('Figure 2', fontsize=18)
plt.ylabel('Cumulative Returns')
plt.legend(['MKT','W10','L10'],loc='upper left')
plt.show()
```

Momentum underperformance during market downturns is addressed by different volatility-scaling approaches (e.g., Moskowitz, Ooi, and Pedersen, 2012; Barroso and Santa-Clara, 2015). As will be shown, the market downturn broke the expected correlation between priors and forward returns. The paper therefore proposes to bypass this interval by using past volatility as a timing predictor and replacing constant scaling with a threshold function.
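For contrast, the constant volatility-scaling baseline (in the spirit of Barroso and Santa-Clara, 2015) can be sketched as follows. The 12% target volatility and 126-day window are illustrative assumptions, not values taken from the paper, and the example runs on synthetic returns:

```python
import numpy as np
import pandas as pd

def scale_by_volatility(returns, target_vol=0.12, window=126):
    """Scale daily strategy returns so realized volatility tracks a target.

    Constant scaling weights each day by target_vol / realized_vol, where
    realized_vol is annualized from the trailing squared daily returns.
    """
    # annualized realized variance over the trailing window
    realized_var = (returns ** 2).rolling(window).sum() * (252.0 / window)
    realized_vol = np.sqrt(realized_var)
    # lag the weight by one day so it uses only past information
    weight = (target_vol / realized_vol).shift(1)
    return returns * weight

# toy example with synthetic daily returns
rng = np.random.RandomState(0)
r = pd.Series(rng.normal(0.0005, 0.01, 500))
scaled = scale_by_volatility(r)
```

The threshold function used below replaces this continuous weight with a 0/1 switch, holding the winners portfolio only while realized volatility stays under the threshold.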

Defining the time window for calculating realized volatility

In [13]:

```
vw = 126
```

Calculating realized volatility and volatility-timed winners

In [14]:

```
# squared daily winners returns
w10_r2 = w10**2
# annualized realized volatility: 126-day sum of squared returns, scaled by 252/126 = 2
w10_rv = np.sqrt(w10_r2.rolling(vw).sum()*2)
# threshold function: hold the winners only while RV stays below 27%
w10_scale = 0.27/w10_rv
w10_thrd = w10_scale.apply(lambda x: np.where(x < 1, 0, 1))
w10_timed = w10*w10_thrd
```

Comparing performance of the volatility-timed winners approach and the conventional winners-minus-losers momentum

In [15]:

```
w10_timed_cum = np.cumprod(w10_timed[vw:]+1)
f, (ax1, ax2) = plt.subplots(2, sharex=True, sharey=False, gridspec_kw={'height_ratios': [3, 1]}, figsize=(10,6))
#Cumulative returns
ax1.plot(days[vw:], mkt_cum[vw:]/mkt_cum[vw], color='y', label="MKT")
ax1.plot(days[vw:], wml10_cum[vw:]/wml10_cum[vw], color="blue", label="WML10")
ax1.plot(days[vw:], w10_timed_cum/w10_timed_cum[0], color="green", label="W10-Timed")
ax1.set_ylabel('Cumulative Returns')
ax1.set_title('Figure 4', fontsize=18)
ax1.legend(loc='upper left')
#Annualized realized volatility
ax2.plot(days[vw:], w10_rv[vw:], color="red", label="W10 RV")
ax2.set_ylabel('Annual RV')
ax2.axhline(y=0.27, color='red', linestyle='--')
ax2.legend(loc='upper left')
f.subplots_adjust(hspace=.09)
```

The volatility-timed approach highlighted the relationship between the performance of the momentum factor and the market downturn. Therefore, the momentum strategy was further investigated with the Spearman rank correlation described in Lecture 23. The coefficients were computed between the 11-month momentum factor prior and 21-day forward returns. According to **Table 1**, over the 2005-2010 interval this correlation had a p-value of 0.6 and failed to reject, at the 1% level, the null hypothesis of no monotonic relationship between the ranked variables. Then, as shown in **Table 2**, the volatility-timed winners approach restored and enhanced the rank correlation between the momentum prior and forward returns by capturing and excluding the interval with the negative oscillations.
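The underlying hypothesis test can be illustrated directly with `scipy.stats.spearmanr` (a generic sketch on synthetic data, not the Alphalens information-coefficient implementation used below):

```python
import numpy as np
from scipy import stats

rng = np.random.RandomState(42)

# priors with a weak monotonic link to forward returns
priors = rng.normal(size=500)
forward = 0.3 * priors + rng.normal(size=500)

# rho is the rank correlation; a small p-value rejects the null
# hypothesis of no monotonic relationship between the ranked variables
rho, p_value = stats.spearmanr(priors, forward)
```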

Table 1: Conventional Momentum Information Coefficient

In [16]:

```
ic = al.performance.factor_information_coefficient(mom_data)
al.plotting.plot_information_table(ic[vw:])
```

Removing the high-volatility intervals flagged by the threshold function from the momentum data

In [17]:

```
# keep only the dates where realized volatility stays below the 27% threshold
w10_rv_timed = w10_rv[vw:][w10_rv < 0.27]
ts = [pd.Timestamp(x).tz_localize('UTC') for x in w10_rv_timed.index.values]
mom_data_timed = mom_data.loc[ts]
```

Table 2: Volatility-Based Information Coefficient

In [18]:

```
ic_timed = al.performance.factor_information_coefficient(mom_data_timed)
al.plotting.plot_information_table(ic_timed)
```

The section shows a pipeline adapted from the IDE-based variant suggested by @Vladimir in this thread. In brief, the pipeline from Section 1 was extended with DownsideVolatility and PercentileFilter, which replaced the Alphalens-based factor processing.

In [19]:

```
class DownsideVolatility(CustomFactor):
    inputs = [DailyReturns()]
    window_length = 126

    def compute(self, today, assets, out, returns):
        # keep only the negative daily returns
        returns[returns > 0] = np.nan
        down_vol = np.nanstd(returns, axis=0)
        ann_down_vol = down_vol * math.sqrt(252)
        out[:] = ann_down_vol

class RV(CustomFactor):
    inputs = [DailyReturns()]
    window_length = 1
    # class-level buffer of squared mean returns, persisted across daily compute calls
    r2s = np.zeros(126)

    def compute(self, today, assets, out, returns):
        r = np.nanmean(returns)
        r2 = r**2
        # shift the buffer and append today's squared mean return
        self.r2s[:-1] = self.r2s[1:]
        self.r2s[-1] = r2
        # annualized realized volatility: 126-day sum scaled by 252/126 = 2
        rv = np.sqrt(np.sum(self.r2s)*2)
        out[:] = np.ones(returns.shape[1])*rv

def make_pipeline():
    pipe = Pipeline()
    mom = Momentum(mask=universe).downsample('month_start')
    mom_w = mom.winsorize(0.01, 1. - 0.01)
    w10_filter = mom_w.percentile_between(90, 100)
    dv_factor = DownsideVolatility(mask=w10_filter)
    rv_factor = RV(mask=w10_filter)
    pipe.add(dv_factor, 'dv')
    pipe.add(rv_factor, 'rv')
    pipe.set_screen(universe)
    return pipe

w10_pipeline = make_pipeline()
results = run_pipeline(w10_pipeline, start_date, end_date)
results.dropna(inplace=True)
```

Comparing downside volatility with the momentum realized volatility from Section 4. The former is related to the asset-oriented time series momentum approach described in the Comparison with Time Series Momentum and Risk-Based Allocation section (see Eq. 4 and Eq. 9). According to **Figure 5** (and Exhibit 9), the corresponding volatilities require different thresholds.

In [20]:

```
w10_dv = results['dv'].mean(level=0)
plt.figure(figsize=(10,6))
plt.plot(days[vw:], w10_rv[vw:], color = 'g')
plt.plot(days[vw:], w10_dv[vw:-21], color = 'b')
plt.title('Figure 5', fontsize=18)
plt.ylabel('Volatility')
plt.legend(['W10 RV','W10 DV'], loc = 'upper left')
plt.show()
```
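The difference in scale between the two measures can be reproduced on synthetic data (a generic sketch, not the pipeline output): for roughly symmetric daily returns, downside volatility computed from the negative returns alone sits well below the full realized volatility, which is why the two measures call for different thresholds:

```python
import numpy as np
import pandas as pd

rng = np.random.RandomState(1)
r = pd.Series(rng.normal(0.0005, 0.01, 252))

# annualized realized volatility from all squared daily returns
rv = np.sqrt((r ** 2).sum() * 252.0 / len(r))

# annualized downside volatility from the negative returns only
dv = r[r < 0].std() * np.sqrt(252)
```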

The section presents the summary statistics and drawdown periods of the pyfolio tear sheet (see Lecture 33: Portfolio Analysis).

In [21]:

```
pf.plotting.show_perf_stats(w10_timed[vw:], mkt_returns[vw:])
```

In [22]:

```
pf.plotting.show_worst_drawdown_periods(w10_timed[vw:], top=5)
```