
# The Quest for Capacity: Optimizing Sharpe Ratio under Varying Capital Levels

With interest rates at all-time lows, institutional and private investors face mounting pressure to find new harbors for vast sums of capital. Quantitative hedge funds are thus racing to increase the capacity of their portfolios of trading algorithms. Despite these market forces, surprisingly little public information is available on estimating and maximizing capacity.

In this blog post we will take a look at the problem of maximizing the Sharpe Ratio of a portfolio of uncorrelated trading algorithms under different capital bases.

## What are the limiting factors of capacity of an algorithm?

Fixed transaction costs are usually not a concern at institutional trading levels, as brokers offer very competitive transaction fees. The main headwind when trading larger amounts comes instead from slippage: large orders drive up the price and erode returns (for more information on the impact of slippage on trading algorithms, see also my other blog post on liquidity).
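To build intuition for why slippage caps capacity, here is a minimal sketch of a square-root market-impact model, a common heuristic in the literature. The function name, the impact coefficient `eta`, and the parameter values are illustrative assumptions, not the model used in the simulations below.

```python
import numpy as np

def sqrt_impact_cost(order_value, adv, vol=0.02, eta=0.1):
    """Illustrative square-root impact heuristic (an assumption, not the
    model used in this post): per-dollar cost scales with daily volatility
    and the square root of order size relative to average daily volume."""
    participation = order_value / adv
    return eta * vol * np.sqrt(participation)
```

Because per-dollar cost grows with the square root of order size, total impact cost grows super-linearly with size, which is exactly what bends the Sharpe Decay curves downward as more capital is deployed.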

In [3]:
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
sns.set_style('white')
sns.set_context('talk')
plt.set_cmap('viridis')


## Simulate Sharpe Decay curves

Consider, for example, the following three strategies with carefully chosen Sharpe Decay curves. As we deploy more capital to them, their Sharpe Ratio decays because of the increase in slippage cost. Strategy 'A', for example, has a very high Sharpe Ratio at low capital deployments but decays rapidly and even becomes negative if more than $50MM is invested.

In [4]:
x = np.linspace(1, 200, 100)
er_funcs = (
    lambda x: np.exp(-x/50)*0.5 - 0.2,
    lambda x: np.exp(-x/40)*0.2 - 0.02,
    lambda x: np.exp(-x/70)*0.15
)
vols = (0.06, 0.06, 0.07)
sharpe_decay = pd.DataFrame(columns=['A', 'B', 'C'], index=x,
                            data=np.asarray([f(x)/v for f, v in zip(er_funcs, vols)]).T)
ax = sharpe_decay.plot()
ax.set(title='Sharpe decay of 3 strategies (simulations)',
       xlabel='Algorithm Gross Market Value ($)',
       ylabel='Sharpe Ratio');
sns.despine()


## Changes of optimal allocation

We can compute the portfolio Sharpe Ratio at a given capital level from the weighted expected returns and volatilities of the individual algorithms.
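Under the zero-correlation assumption, the quantity computed by the function below can be written as

$$SR_p(C) = \frac{\sum_i w_i \, \mathrm{ER}_i(w_i C)}{\sqrt{\sum_i (w_i \sigma_i)^2}}$$

where $C$ is the total capital, $w_i$ the weight, $\mathrm{ER}_i(\cdot)$ the capital-dependent expected-return curve, and $\sigma_i$ the volatility of algorithm $i$.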

In [5]:
def portfolio_sharpe(weight, total_capital=10):
    weight = np.array(weight)
    if np.any(weight < 0):
        return np.nan
    capital = total_capital * weight
    numerator = np.sum([f(c)*w for f, c, w in zip(er_funcs, capital, weight)])
    denominator = np.sqrt(np.sum([(v*w)**2 for v, w in zip(vols, weight)]))
    return numerator / denominator

weight = (1., 0., 0.)
print("Portfolio Sharpe Ratio under $10MM GMV with weight vector {} = {}".format(weight, portfolio_sharpe(weight)))
weight = (0., 1., 0.)
print("Portfolio Sharpe Ratio under $10MM GMV with weight vector {} = {}".format(weight, portfolio_sharpe(weight)))
weight = (0., 0., 1.)
print("Portfolio Sharpe Ratio under $10MM GMV with weight vector {} = {}".format(weight, portfolio_sharpe(weight)))

Portfolio Sharpe Ratio under $10MM GMV with weight vector (1.0, 0.0, 0.0) = 3.48942294232
Portfolio Sharpe Ratio under $10MM GMV with weight vector (0.0, 1.0, 0.0) = 2.2626692769
Portfolio Sharpe Ratio under $10MM GMV with weight vector (0.0, 0.0, 1.0) = 1.85759549946


## Optimization

At the core, this is a non-linear, constrained optimization problem. scipy provides many different optimization routines and Sequential Least SQuares Programming (SLSQP) suits us well for this specific type of problem (thanks to James Christopher for alerting me to this method).

In [6]:
from scipy import optimize

num_algos = sharpe_decay.shape[1]
guess = np.ones(num_algos) / num_algos

def objective(weight, total_capital):
    return -portfolio_sharpe(weight, total_capital=total_capital)

# Set up equality constraint
cons = {'type': 'eq',
        'fun': lambda x: np.sum(np.abs(x)) - 1}

# Set up bounds for individual weights
bnds = [(0, 1)] * num_algos

results = optimize.minimize(objective, guess, args=10,
                            constraints=cons, bounds=bnds, method='SLSQP',
                            options={'disp': True})
results

Optimization terminated successfully.    (Exit mode 0)
Current function value: -5.4760079301
Iterations: 5
Function evaluations: 28

Out[6]:
  status: 0
success: True
njev: 5
nfev: 28
fun: -5.4760079300965874
x: array([ 0.44902442,  0.32466861,  0.22630698])
message: 'Optimization terminated successfully.'
jac: array([ 0.66530937,  0.65833038,  0.65885687,  0.        ])
nit: 5

Looping over different capital levels yields optimal weights and portfolio Sharpe Ratio for each allocation size.

In [7]:
weights = pd.DataFrame(index=range(1, 200), columns=['A', 'B', 'C'], dtype=np.float32)
sharpe = pd.Series(index=range(1, 200), dtype=np.float32)
for i in weights.index:
    val = optimize.minimize(objective, guess, args=i, constraints=cons,
                            bounds=bnds, method='SLSQP').x
    weights.loc[i, :] = val
    sharpe.loc[i] = portfolio_sharpe(weights.loc[i, :].values, i)


This provides us with the Portfolio Sharpe Decay curve.

In [8]:
ax = sharpe.plot()
ax.set(xlabel='Capital in $MM', ylabel='Portfolio Sharpe Ratio',
       title='Performance under different capital levels')
sns.despine()


At each level, we achieve the maximum Sharpe Ratio possible by finding the weighting that optimizes Sharpe under the external capacity constraints.

In [9]:
ax = weights.plot()
ax.set(xlabel='Capital in $MM', ylabel='weight',
       title='Allocations under different capital levels')
sns.despine()


It is also instructive to look at the total capital deployed to each strategy as we increase the total portfolio capital:

In [10]:
ax = weights.multiply(pd.Series(weights.index), axis='index').plot()
ax.set(xlabel='Portfolio Gross Market Value $MM', ylabel='Algorithm Gross Market Value $MM',
       title='Allocations under different capital levels')
sns.despine()


## Conclusions

In this blog post we presented a general framework for thinking about capacity at the algorithm and portfolio level. From here, it is straightforward to make the optimization more realistic. For example, we would probably want to consider portfolio volatility when computing our allocations, and we could relax the zero-correlation assumption.
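As a taste of the first extension, capping portfolio volatility only requires one extra inequality constraint passed to SLSQP. A minimal, self-contained sketch follows; the 5% volatility cap is an illustrative assumption, and the decay curves and volatilities are reproduced from the simulation above so the snippet runs on its own.

```python
import numpy as np
from scipy import optimize

# Same simulated decay curves and volatilities as in the post,
# reproduced here so the sketch is self-contained.
er_funcs = (lambda x: np.exp(-x / 50) * 0.5 - 0.2,
            lambda x: np.exp(-x / 40) * 0.2 - 0.02,
            lambda x: np.exp(-x / 70) * 0.15)
vols = (0.06, 0.06, 0.07)

def portfolio_vol(weight):
    # Portfolio volatility under the zero-correlation assumption.
    return np.sqrt(np.sum([(v * w) ** 2 for v, w in zip(vols, weight)]))

def neg_sharpe(weight, total_capital):
    capital = total_capital * np.asarray(weight)
    er = np.sum([f(c) * w for f, c, w in zip(er_funcs, capital, weight)])
    return -er / portfolio_vol(weight)

cons = ({'type': 'eq', 'fun': lambda w: np.sum(w) - 1},
        # Hypothetical extra constraint: cap portfolio volatility at 5%.
        {'type': 'ineq', 'fun': lambda w: 0.05 - portfolio_vol(w)})

res = optimize.minimize(neg_sharpe, np.ones(3) / 3, args=(10,),
                        bounds=[(0, 1)] * 3, constraints=cons,
                        method='SLSQP')
```

SLSQP treats each `'ineq'` entry as "this function must stay non-negative", so any risk limit expressible that way (volatility, per-strategy capital, sector exposure) slots into the same framework.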