With interest rates at all-time lows, the pressure on institutional and private investors to find new harbors for vast sums of capital is high. Quantitative hedge funds are thus racing to increase the capacity of their portfolios of trading algorithms. Despite these market forces, surprisingly little public information is available on estimating and maximizing capacity.

In this blog post we will take a look at the problem of maximizing the Sharpe Ratio of a portfolio of uncorrelated trading algorithms under different capital bases.

Fixed transaction costs are usually not a concern at institutional trading levels, as brokers offer very competitive transaction fees. The main headwind when trading larger amounts instead comes from slippage: large orders drive up the price against you (for more information on the impact of slippage on trading algorithms, see also my other blog post on liquidity).
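To make the slippage effect concrete, a common rough heuristic is a square-root market-impact model, where per-dollar cost grows with the square root of order size relative to liquidity. The sketch below is purely illustrative: the functional form, the `impact_coef` value, and the `slippage_bps` helper are assumptions for exposition, not figures from any broker.

```python
import numpy as np

def slippage_bps(order_value_mm, adv_mm, impact_coef=10.0):
    """Rough square-root market-impact estimate in basis points.

    order_value_mm : order size in $MM
    adv_mm         : average daily traded value in $MM
    impact_coef    : illustrative scaling constant (an assumption)
    """
    return impact_coef * np.sqrt(order_value_mm / adv_mm)

# Quadrupling the order size only doubles the per-dollar cost in bps,
# but total dollar cost still grows super-linearly with capital deployed.
print(slippage_bps(1, 100))
print(slippage_bps(50, 100))
```

Under this kind of model, total cost scales like order size to the power 1.5, which is exactly why the Sharpe Ratio of a strategy decays as more capital is deployed to it.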

In [3]:

```
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
sns.set_style('white')
sns.set_context('talk')
plt.set_cmap('viridis')
```

Consider, for example, the following three strategies with carefully chosen Sharpe Decay curves. As we deploy more capital to them, their Sharpe Ratio decays because of the increase in slippage cost. Strategy 'A' for example has a very high Sharpe Ratio at low capital deployments but decays rapidly and even becomes negative if more than $50MM is invested.

In [4]:

```
x = np.linspace(1, 200, 100)

# Expected-return curves that decay exponentially with deployed capital
er_funcs = (
    lambda x: np.exp(-x / 50) * 0.5 - 0.2,
    lambda x: np.exp(-x / 40) * 0.2 - 0.02,
    lambda x: np.exp(-x / 70) * 0.15,
)

# Annualized volatility of each strategy
vols = (0.06, 0.06, 0.07)

sharpe_decay = pd.DataFrame(
    columns=['A', 'B', 'C'],
    index=x,
    data=np.asarray([f(x) / v for f, v in zip(er_funcs, vols)]).T)

ax = sharpe_decay.plot()
ax.set(title='Sharpe decay of 3 strategies (simulations)',
       xlabel='Algorithm Gross Market Value ($MM)',
       ylabel='Sharpe Ratio');
sns.despine()
```

We can compute the portfolio Sharpe Ratio at a given capital level as a simple weighted sum (assuming zero correlation between the strategies).
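Concretely, with expected-return curves $\mu_i(\cdot)$, volatilities $\sigma_i$, weight vector $w$, and total capital $K$, the zero-correlation portfolio Sharpe Ratio is

$$
S(w) = \frac{\sum_i w_i \, \mu_i(w_i K)}{\sqrt{\sum_i (w_i \sigma_i)^2}}
$$

Note that the weights enter the numerator twice: once as the allocation and once inside each strategy's expected-return curve, which is what makes the problem nonlinear.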

In [5]:

```
def portfolio_sharpe(weight, total_capital=10):
    weight = np.array(weight)
    if np.any(weight < 0):
        return np.nan
    # Capital allocated to each algorithm
    capital = total_capital * weight
    numerator = np.sum([f(c) * w for f, c, w in zip(er_funcs, capital, weight)])
    denominator = np.sqrt(np.sum([(v * w)**2 for v, w in zip(vols, weight)]))
    return numerator / denominator

for weight in [(1., 0., 0.), (0., 1., 0.), (0., 0., 1.)]:
    print("Portfolio Sharpe Ratio under $10MM GMV with weight vector {} = {}"
          .format(weight, portfolio_sharpe(weight)))
```

`scipy` provides many different optimization routines, and Sequential Least SQuares Programming (SLSQP) suits this specific type of problem well (thanks to James Christopher for alerting me to this method).

In [6]:

```
from scipy import optimize

num_algos = sharpe_decay.shape[1]
guess = np.ones(num_algos) / num_algos

def objective(weight, total_capital):
    return -portfolio_sharpe(weight, total_capital=total_capital)

# Set up equality constraint: weights must sum to one
cons = {'type': 'eq',
        'fun': lambda x: np.sum(np.abs(x)) - 1}
# Set up bounds for individual weights
bnds = [(0, 1)] * num_algos

results = optimize.minimize(objective, guess, args=10,
                            constraints=cons, bounds=bnds, method='SLSQP',
                            options={'disp': True})
results
```


In [7]:

```
weights = pd.DataFrame(index=range(1, 200), columns=['A', 'B', 'C'],
                       dtype=np.float32)
sharpe = pd.Series(index=range(1, 200), dtype=np.float32)

# Re-run the optimization at every capital level from $1MM to $199MM
for i in weights.index:
    val = optimize.minimize(objective, guess, args=i, constraints=cons,
                            bounds=bnds, method='SLSQP').x
    weights.loc[i, :] = val
    sharpe.loc[i] = portfolio_sharpe(weights.loc[i, :].values, i)
```

This provides us with the Portfolio Sharpe Decay curve.

In [8]:

```
ax = sharpe.plot()
ax.set(xlabel='Capital in $MM', ylabel='Portfolio Sharpe Ratio',
title='Performance under different capital levels')
sns.despine()
```

In [9]:

```
ax = weights.plot()
ax.set(xlabel='Capital in $MM', ylabel='weight', title='Allocations under different capital levels')
sns.despine()
```

In [10]:

```
ax = weights.multiply(pd.Series(weights.index), axis='index').plot()
ax.set(xlabel='Portfolio Gross Market Value $MM', ylabel='Algorithm Gross Market Value $MM',
title='Allocations under different capital levels')
sns.despine()
```

In this blog post we presented a general framework for thinking about capacity at the algorithm and portfolio level. From here it is straightforward to make the optimization more complex. For example, we would probably want to take portfolio volatility into account when computing allocations, and we could relax the zero-correlation assumption.
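As one sketch of such an extension, we can add an inequality constraint that caps portfolio volatility at a target level; SLSQP handles mixed equality and inequality constraints directly. The `vol_target` value here is hypothetical, and the strategy curves are restated so the snippet runs on its own; this is an illustration under the same zero-correlation assumption, not a production allocator.

```python
import numpy as np
from scipy import optimize

# Illustrative stand-ins for the strategy curves used above
er_funcs = (
    lambda c: np.exp(-c / 50) * 0.5 - 0.2,
    lambda c: np.exp(-c / 40) * 0.2 - 0.02,
    lambda c: np.exp(-c / 70) * 0.15,
)
vols = (0.06, 0.06, 0.07)

def neg_sharpe(w, total_capital):
    # Same objective as before: negative portfolio Sharpe Ratio
    capital = total_capital * w
    num = sum(f(c) * wi for f, c, wi in zip(er_funcs, capital, w))
    den = np.sqrt(sum((v * wi) ** 2 for v, wi in zip(vols, w)))
    return -num / den

vol_target = 0.05  # hypothetical cap on annualized portfolio volatility

cons = (
    # Weights must sum to one, as before
    {'type': 'eq', 'fun': lambda w: np.sum(w) - 1},
    # New: zero-correlation portfolio volatility must stay under the target
    {'type': 'ineq',
     'fun': lambda w: vol_target - np.sqrt(sum((v * wi) ** 2
                                               for v, wi in zip(vols, w)))},
)

res = optimize.minimize(neg_sharpe, np.ones(3) / 3, args=(10,),
                        bounds=[(0, 1)] * 3, constraints=cons,
                        method='SLSQP')
print(res.x)
```

Relaxing the zero-correlation assumption would work the same way: replace the diagonal volatility term with a full covariance matrix `w @ cov @ w` in both the objective and the constraint.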