Researching & Developing a Market Neutral Strategy¶

The process involves the following steps:

  • Researching partner data.
  • Designing a pipeline.
  • Analyzing an alpha factor with Alphalens.
  • Implementing our factor in the IDE (see backtest in next comment).
  • Evaluating the backtest using Pyfolio.

Part 1 - Investigate the Data with Blaze¶

One way to get a leg up when researching a trading strategy is to look for alpha in datasets that are used less often than pricing data. The datasets that receive the most attention are the least likely to have much signal left in them. We believe that incorporating non-pricing datasets into your models is one of the biggest improvements you can make toward finding trading signals. Rather than feeding the raw signals into a model, the best approach is to develop a hypothesis of how the data might be used to forecast returns. Toward that end, let's walk through an example workflow that uses Blaze on a partner dataset.

To start out, let's investigate a partner dataset using Blaze. Blaze allows you to define expressions for selecting and transforming data without loading all of the data into memory. This makes it a nice tool for interacting with large amounts of data in research.
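Blaze expressions describe a selection or transformation without executing it; nothing is materialized until you explicitly compute the expression. Outside the Quantopian research environment, the eager pandas equivalent of the kind of selection used below looks like this (the DataFrame here is a small hypothetical stand-in for a partner dataset):

```python
import pandas as pd

# Hypothetical stand-in for a partner dataset.
data = pd.DataFrame({
    'sid': [24, 24, 5061, 5061],
    'asof_date': pd.to_datetime(['2016-01-04', '2016-01-05',
                                 '2016-01-04', '2016-01-05']),
    'predicted_five_day_log_return': [-0.017, -0.013, 0.004, 0.009],
})

# Eager pandas equivalent of the Blaze expression
# dataset[(dataset.sid == 24) & (dataset.asof_date >= '2016-01-01')]
subset = data[(data.sid == 24) & (data.asof_date >= '2016-01-01')]
print(len(subset))  # 2 rows for sid 24
```

With Blaze, the same bracket-and-mask syntax builds a deferred expression instead, which is what makes it practical on datasets too large to hold in memory.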

In [1]:
import matplotlib.pyplot as plt
import pandas as pd

import blaze as bz

from zipline.utils.tradingcalendar import get_trading_days

from quantopian.interactive.data.alpha_vertex import precog_top_500 as dataset

Interactive datasets are Blaze expressions. Blaze expressions have a similar API to pandas, with some differences.

In [2]:
<class 'blaze.expr.expressions.Field'>

Let's start by looking at a sample of data from the Alpha Vertex PreCog dataset for AAPL. PreCog is a machine learning model that incorporates hundreds of data points covering the economy, company financial performance, market data, and investor sentiment to generate its outlooks.

In [3]:
aapl_sid = symbols('AAPL').sid

# Look at a sample of AAPL PreCog data starting from 2016-01-01.
dataset[(dataset.sid == aapl_sid) & (dataset.asof_date >= '2016-01-01')].peek()
symbol name sid predicted_five_day_log_return asof_date timestamp
0 AAPL APPLE INC 24 -0.017 2016-01-04 2016-01-05
1 AAPL APPLE INC 24 -0.013 2016-01-05 2016-01-06
2 AAPL APPLE INC 24 -0.018 2016-01-06 2016-01-07
3 AAPL APPLE INC 24 -0.027 2016-01-07 2016-01-08
4 AAPL APPLE INC 24 -0.025 2016-01-08 2016-01-09
5 AAPL APPLE INC 24 -0.014 2016-01-11 2016-01-12
6 AAPL APPLE INC 24 -0.022 2016-01-12 2016-01-13
7 AAPL APPLE INC 24 0.000 2016-01-13 2016-01-14
8 AAPL APPLE INC 24 0.027 2016-01-14 2016-01-15
9 AAPL APPLE INC 24 0.018 2016-01-15 2016-01-16
10 AAPL APPLE INC 24 0.013 2016-01-19 2016-01-20

Let's see how many securities are covered by this dataset.

In [4]:
num_sids = bz.compute(dataset.sid.distinct().count())
print 'Number of sids in the data: %d' % num_sids
Number of sids in the data: 619

Let's go back to AAPL and look at the signal on each trading day. To do this, we can create a Blaze expression that masks for the AAPL sid (24) and sorts the filtered rows by date.

In [5]:
# Mask for AAPL.
stock_mask = (dataset.sid == aapl_sid)

# Blaze expression for the AAPL PreCog signal since January 2016, sorted by date.
av_expr = dataset[stock_mask & (dataset.asof_date >= '2016-01-01')].sort('asof_date')

Compute the expression. This returns the result in a pandas DataFrame.

In [6]:
av_df = bz.compute(av_expr)

Plot the PreCog signal for AAPL.

In [7]:
av_df.plot(x='asof_date', y='predicted_five_day_log_return')

Great! Now let's use this data in a pipeline.

Part 2 - Define Our Hypothesis¶

Now that we have a dataset that we want to use, let's use it in a pipeline. In addition to the PreCog dataset, we will also use the EventVestor Earnings Calendar dataset to avoid trading around earnings announcements, and the EventVestor Mergers & Acquisitions dataset to avoid trading acquisition targets. We will work with the free versions of these datasets.

Specifically, let's build a pipeline that ranks stocks by the prediction they received from the PreCog model. Let's also add a filter where we only consider stocks in the Q1500US that have had the daily direction of their prediction (positive or negative) correct at least 8 out of the last 15 trading days (above 50%).

This should leave us with a large basket of stocks to trade, which is good for both risk management and capacity considerations.
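The sign-agreement screen described above can be sketched outside of Pipeline with NumPy. This is a minimal sketch on synthetic data; the `predicted` and `realized` arrays are hypothetical stand-ins for the PreCog predictions and the realized 5-day log returns they targeted.

```python
import numpy as np

rng = np.random.RandomState(0)

# Hypothetical inputs: 15 daily predictions and the realized
# 5-day log returns they forecast (noisy but correlated).
predicted = rng.randn(15)
realized = predicted + 0.5 * rng.randn(15)

# Day-by-day sign agreement between prediction and outcome.
hits = np.sign(predicted) == np.sign(realized)
hit_rate = hits.mean()

# The screen described above: at least 8 correct signs out of 15.
passes_screen = hits.sum() >= 8
print(hit_rate, passes_screen)
```

The actual factor below refines this idea with an exponentially weighted average rather than a simple count, so recent misses weigh more heavily.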

In [8]:
from quantopian.pipeline import Pipeline, CustomFactor
from quantopian.research import run_pipeline

from quantopian.pipeline.factors import SimpleMovingAverage, RollingLinearRegressionOfReturns
from quantopian.pipeline.filters.morningstar import Q1500US

from quantopian.pipeline.classifiers.morningstar import Sector

# Sentdex Sentiment free from 15 Oct 2012 to 1 month ago.
from quantopian.pipeline.data.sentdex import sentiment
from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.pipeline.data.alpha_vertex import precog_top_500 as precog

# EventVestor Earnings Calendar free from 01 Feb 2007 to 1 year ago.
from quantopian.pipeline.factors.eventvestor import (
    BusinessDaysUntilNextEarnings,
    BusinessDaysSincePreviousEarnings,
)

# EventVestor Mergers & Acquisitions free from 01 Feb 2007 to 1 year ago.
from quantopian.pipeline.filters.eventvestor import IsAnnouncedAcqTarget

from quantopian.pipeline.factors import BusinessDaysSincePreviousEvent

import numpy as np
import pandas as pd
In [9]:
class PredictionQuality(CustomFactor):
    """
    Custom factor that measures prediction quality for each stock
    in the universe: the exponentially weighted fraction of predictions
    with the correct sign over a rolling window (3 weeks).
    """

    # data used to compute the custom factor
    inputs = [precog.predicted_five_day_log_return, USEquityPricing.close]

    # rolling window of 15 trading days (3 weeks)
    window_length = 15

    def compute(self, today, assets, out, pred_ret, px_close):

        # realized 5-day log returns over the lookback window
        log_ret5 = np.log(px_close) - np.log(np.roll(px_close, 5, axis=0))

        # drop the first 5 rows, which wrapped around in np.roll
        log_ret5 = log_ret5[5:]
        n = len(log_ret5)
        # align predictions with the realized returns they targeted
        pred_ret = pred_ret[:n]

        # 1.0 where the predicted sign was wrong, 0.0 where it was right
        err = np.absolute(np.sign(log_ret5) - np.sign(pred_ret)) / 2.0

        # custom quality measure
        pred_quality = (1 - pd.DataFrame(err).ewm(min_periods=n, com=n).mean()).iloc[-1].values
        out[:] = pred_quality
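The heart of this factor, an exponentially weighted average of sign errors, can be reproduced outside of a CustomFactor. A minimal sketch on synthetic arrays (all names and values here are hypothetical):

```python
import numpy as np
import pandas as pd

rng = np.random.RandomState(1)
n = 10  # overlapping 5-day windows inside a 15-day lookback

# Hypothetical realized 5-day log returns and aligned predictions
# for 3 assets (columns).
realized = rng.randn(n, 3) * 0.02
predicted = realized + rng.randn(n, 3) * 0.02

# 1.0 where the predicted sign was wrong, 0.0 where it was right,
# as in the factor above.
err = np.abs(np.sign(realized) - np.sign(predicted)) / 2.0

# Exponentially weighted mean of errors; quality = 1 - error rate.
quality = (1 - pd.DataFrame(err).ewm(min_periods=n, com=n).mean()).iloc[-1].values
print(quality)  # one score per asset, each in [0, 1]
```

Using `ewm` rather than a simple mean weights recent sign errors more heavily, so a model that has recently gone cold is penalized faster.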
In [10]:
def make_pipeline():
    """
    Select candidate stocks from the PreCog universe using the
    custom factor defined above.
    """
    pred_quality_thresh = 0.5

    # Filter for stocks that are not within 2 days of an earnings announcement.
    not_near_earnings_announcement = ~((BusinessDaysUntilNextEarnings() <= 2)
                                       | (BusinessDaysSincePreviousEarnings() <= 2))

    # Filter out stocks that are announced acquisition targets.
    not_announced_acq_target = ~IsAnnouncedAcqTarget()

    # Our universe is made up of stocks that have a non-null sentiment & precog
    # signal that was updated in the last day, are not within 2 days of an
    # earnings announcement, are not announced acquisition targets, and are
    # in the Q1500US.
    universe = (
        Q1500US()
        & sentiment.sentiment_signal.latest.notnull()
        & precog.predicted_five_day_log_return.latest.notnull()
        & not_near_earnings_announcement
        & not_announced_acq_target
    )

    # Prediction quality factor.
    prediction_quality = PredictionQuality(mask=universe)

    # Filter for stocks above the threshold quality.
    quality = prediction_quality > pred_quality_thresh

    latest_prediction = precog.predicted_five_day_log_return.latest
    non_outliers = latest_prediction.percentile_between(1, 99, mask=quality)
    normalized_return = latest_prediction.zscore(mask=non_outliers)
    normalized_prediction_rank = normalized_return.rank()

    # Create the pipeline.
    columns = {
        'av_rank': normalized_prediction_rank,
    }
    pipe = Pipeline(columns=columns, screen=universe)
    return pipe
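For intuition, the `percentile_between` / `zscore` / `rank` chain at the end of the pipeline corresponds to the following pandas operations on a single day's cross-section. This is a sketch on synthetic data; the values are hypothetical.

```python
import numpy as np
import pandas as pd

rng = np.random.RandomState(2)

# Hypothetical one-day cross-section of latest predictions.
latest = pd.Series(rng.randn(500), name='predicted_five_day_log_return')

# percentile_between(1, 99): drop the extreme 1% tails.
lo, hi = latest.quantile([0.01, 0.99])
trimmed = latest[(latest >= lo) & (latest <= hi)]

# zscore: standardize the remaining cross-section.
zscored = (trimmed - trimmed.mean()) / trimmed.std()

# rank: 1 = smallest normalized prediction, as in the 'av_rank' column.
av_rank = zscored.rank()
print(av_rank.min(), av_rank.max())
```

Trimming outliers before z-scoring keeps a handful of extreme predictions from dominating the standardization.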
In [11]:
result = run_pipeline(make_pipeline(), start_date='2015-02-01', end_date='2017-03-01')
In [12]:
                                               av_rank
2015-02-02 00:00:00+00:00  Equity(2 [ARNC])      91.0
                           Equity(24 [AAPL])    200.0
                           Equity(67 [ADSK])    124.0
                           Equity(76 [TAP])      46.0
                           Equity(114 [ADBE])     NaN

Part 3 - Test Our Hypothesis Using Alphalens¶

Now we can analyze our av_rank factor with Alphalens. To do this, we need to get pricing data using get_pricing.

In [13]:
# All assets that were returned in the pipeline result.
assets = result.index.levels[1].unique()

# We need to get a little more pricing data than the length of our factor so we 
# can compare forward returns. We'll tack on another month in this example.
pricing = get_pricing(assets, start_date='2015-02-01', end_date='2017-04-01', fields='open_price')

Then we run a factor tearsheet on our factor. We will analyze 5 quantiles, looking at 1-, 5-, and 10-day lookahead periods.

If you are interested in learning more about factor tearsheets and how to analyze them, check out the Factor Analysis lecture in the Quantopian lecture series.
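For intuition on the Information Analysis section of the tearsheet: the information coefficient (IC) is the Spearman rank correlation between the factor and subsequent forward returns, computed cross-sectionally each day. A minimal sketch with synthetic, hypothetical data, using pandas rank correlation as the Spearman calculation:

```python
import numpy as np
import pandas as pd

rng = np.random.RandomState(3)

# Hypothetical one-day cross-section: factor values and 5-day forward returns
# with a weak positive relationship.
factor = pd.Series(rng.randn(200))
forward_returns = 0.1 * factor + pd.Series(rng.randn(200))

# Spearman rank correlation = Pearson correlation of the ranks.
# Alphalens computes this per day before reporting IC Mean, IC Std., etc.
ic = factor.rank().corr(forward_returns.rank())
print(round(ic, 3))
```

A daily IC mean of 0.02-0.03, as in the table below, is small in absolute terms but can be meaningful when it is consistent across many days and many stocks.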

In [14]:
import alphalens

factor_data = alphalens.utils.get_clean_factor_and_forward_returns(
    factor=result['av_rank'],
    prices=pricing,
    quantiles=5,
    periods=(1, 5, 10))

alphalens.tears.create_full_tear_sheet(factor_data)
Quantiles Statistics
Quantile    min     max     mean         std         count    count %
1           1.0     74.0    24.396894    15.176296   23631    20.175880
2          16.0    148.0    71.951556    21.596404   23326    19.915475
3          30.0    222.0   119.179928    30.777963   23326    19.915475
4          44.0    296.0   166.434708    40.868666   23326    19.915475
5          58.0    370.0   213.762672    51.353281   23516    20.077695
Returns Analysis
Forward period (days)                              1       5       10
Ann. alpha                                     0.098   0.023   -0.012
beta                                          -0.024  -0.037    0.050
Mean Period Wise Return Top Quantile (bps)     5.819   9.146   -4.390
Mean Period Wise Return Bottom Quantile (bps) -3.998  -3.201    4.904
Mean Period Wise Spread (bps)                  8.924   2.496   -0.160
Information Analysis
Forward period (days)       1       5       10
IC Mean                 0.026   0.017   -0.005
IC Std.                 0.123   0.139    0.139
t-stat(IC)              4.750   2.739   -0.863
p-value(IC)             0.000   0.006    0.388
IC Skew                -0.104   0.052    0.134
IC Kurtosis             0.186   0.567    0.357
Ann. IR                 3.294   1.899   -0.599
Turnover Analysis
Forward period (days)          1       5       10
Quantile 1 Mean Turnover   0.481   0.830    0.907
Quantile 2 Mean Turnover   0.664   0.868    0.911
Quantile 3 Mean Turnover   0.696   0.865    0.899
Quantile 4 Mean Turnover   0.667   0.871    0.911
Quantile 5 Mean Turnover   0.483   0.820    0.902

Forward period (days)                  1       5       10
Mean Factor Rank Autocorrelation     0.7   0.148   -0.063
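The Mean Factor Rank Autocorrelation row measures how stable the factor's cross-sectional ranks are from one period to the next; higher autocorrelation implies lower turnover and cheaper trading. A minimal sketch with synthetic, hypothetical data:

```python
import numpy as np
import pandas as pd

rng = np.random.RandomState(4)

# Hypothetical factor values for 100 assets on two consecutive days,
# where day 2 is a noisy copy of day 1 (sticky ranks).
day1 = pd.Series(rng.randn(100))
day2 = day1 + 0.3 * pd.Series(rng.randn(100))

# Rank autocorrelation: correlation between the two days' ranks.
rank_autocorr = day1.rank().corr(day2.rank())
print(round(rank_autocorr, 2))
```

This matches the pattern in the table: at a 1-day horizon the ranks are sticky (0.7), while at 10 days they have essentially reshuffled.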