<h1>Forecasting Market Kurtosis with the Volatility Smile (Poorly)</h1>
<p><em>2020-09-02</em></p>

<p>Note: I want to thank Algoseek for providing historical SPY option-chain
data at a very reasonable price. Algoseek provides all the market data
you could possibly need, including equity, future, option, forex, crypto
data, and more! If you're interested in high-quality market data at low
prices, check them out <a href="https://www.algoseek.com/">here</a>!</p>
<p>Before the Black Monday market crash of 1987, options of different
strike prices for the same expiration had roughly the same implied
volatility (IV), in line with the popular Black-Scholes model. After the
crash, however, sellers of options realized that perhaps they were taking
on too much long-tail risk, since the underlying equities exhibit excess
kurtosis relative to the normal distribution assumed by Black-Scholes.
Sellers started charging a premium for options
that were either far in-the-money (ITM) or out-of-the-money (OTM).
Let's look at different SPY calls on 2020-05-01 all expiring on
2020-06-01:</p>
<p><img src="/assets/smile.svg" alt="Volatility Smile" /></p>
<p>While Black-Scholes would imply a flat IV curve (every option
would have the same IV), it's clear from the graph that the far ITM and
OTM options are overpriced relative to Black-Scholes: this
phenomenon is dubbed the "<a href="https://en.wikipedia.org/wiki/Volatility_smile">volatility
smile</a>." But if the
volatility smile is due to excess kurtosis of the underlying, can we
forecast future market kurtosis using the dispersion of option IVs? In
this short post we'll look at SPY call options between 2012-01-03 and
2020-07-16 in order to see if a higher standard deviation of option IVs
is a predictor of SPY excess kurtosis.</p>
<h2 id="methodology">Methodology</h2>
<p>The methodology is simple: we will run a linear regression of the
excess kurtosis of SPY between the current date and the maturity date on
the standard deviation of the IVs of calls sharing that maturity
date:</p>
\[\text{Kurt}[r_{SPY}] \sim \sigma_{IV}\]
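<p>As a rough sketch of this regression, the code below builds the two ingredients (the cross-strike standard deviation of IVs and the forward realized excess kurtosis) and fits the line. Since the option data isn't public, the inputs are synthetic stand-ins; the fitted slope and R-squared here illustrate the mechanics only and are not the numbers reported later in the post.</p>

```python
import numpy as np

def excess_kurtosis(r):
    """Sample excess kurtosis of a return series (exactly normal data => ~0)."""
    r = np.asarray(r, dtype=float)
    m, s = r.mean(), r.std()
    return np.mean((r - m) ** 4) / s ** 4 - 3.0

def ols_r2(x, y):
    """Slope, intercept, and R^2 of a simple linear regression y ~ x."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    r2 = 1.0 - resid.var() / y.var()
    return slope, intercept, r2

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the real inputs: for each observation date,
# the std-dev of IVs across strikes, and the realized excess kurtosis of
# SPY returns from that date to the options' shared maturity.
sigma_iv = rng.uniform(0.01, 0.10, size=500)
kurt = 5.0 * sigma_iv + rng.normal(0.0, 0.5, size=500)  # weak positive relation

slope, intercept, r2 = ols_r2(sigma_iv, kurt)
```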
<p>We will restrict the data set to calls that expire before today's
date so that the kurtosis of the market between the observation date and
maturity can be fully quantified. If the IV of each option at each
strike is the same, in accordance with Black-Scholes, the standard
deviation will be zero; if the market returns fit a log-normal
distribution, the excess kurtosis will be zero as well. Let's look at
the output of the linear regression:</p>
<p><img src="/assets/linreg.png" alt="Linear Regression" /></p>
<p>Not a particularly great fit, but there does seem to be a positive
relationship between the standard deviation of IVs and future market
kurtosis. The R-squared is 4.12%, meaning the linear model explains only
4.12% of the variance in the future kurtosis of the market.
Even so, the model does explain some of the future kurtosis of the
market, and the coefficient is positive, meaning that a greater
dispersion of option IVs is indeed positively correlated with future
excess kurtosis.</p>
<h2 id="conclusion">Conclusion</h2>
<p>While not a particularly powerful or predictive model on its own, this
result does lend some credence to the kurtosis-based explanation for the
volatility smile.</p>
<p>Thanks for reading, hope you liked this short post! Like last time, I
won't be sharing the code for this one as the option data isn't
publicly available and without the data, the code doesn't have much
value. If, however, you are interested in accessing the data, check out
<a href="https://algoseek.com">Algoseek</a> for all of your option data needs (and
more!).</p>

<h1>Portfolio Construction with Risk Parity</h1>
<p><em>2020-08-01</em></p>

<p>Just like how there's more than one way to skin a cat, there's more
than one way to construct a portfolio. The first systematic method of
construction was <em>Modern Portfolio Theory</em>, put forth by Harry
Markowitz. MPT's approach is simple: choose weights for each asset that
maximize the amount of return received for the amount of risk or
volatility taken. However, trying to maximize the risk-adjusted return
of a portfolio leads to very unstable allocations, as ex-ante estimates
of return are notoriously difficult to make and past returns aren't a
good predictor of future performance.</p>
<p>Another approach that came later is the minimum variance approach which
concerns itself with solely minimizing volatility, irrespective of
return. While this method might converge on sub-optimal portfolios
according to MPT, they tend to be a lot more stable as volatility is
easier to forecast than returns. Minimum variance optimization too
suffers from some problems: without any constraints, the minimum
variance portfolio often heavily weights only a few different assets,
suggesting minimal allocations in the rest. For example, given two
assets, a stock and a bond, the minimum variance method would almost
always return a portfolio in which the bond was allocated the vast
majority of the capital. Even when it produces reasonable results,
those results are highly unstable: the change in portfolio weights year to
year or even month to month can be massive.</p>
<p>For more background on both MPT and minimum variance optimization, check
out one of my previous posts: <a href="https://cryptm.org/posts/2020/07/09/alt.html">The Value of Alternative
Investments</a>. In the
mid-90s, a third approach was developed by the hedge fund
<em>Bridgewater</em>, founded by Ray Dalio, that tries to remedy some of the
problems with both of the aforementioned portfolio construction methods:
risk parity.</p>
<h2 id="math-of-risk-parity">Math of Risk Parity</h2>
<p>Risk parity is a conceptually simple approach to portfolio
construction: we build a portfolio in which each asset
contributes a commensurate level of risk. In contrast
to a traditional 60/40 portfolio, in which the equity component
contributes almost all of the risk, risk parity equalizes the
risk between the assets. In the two-asset case of equities and bonds,
the risk parity approach would spit out a set of weights so that each
asset contributes equally to the volatility of the overall portfolio. To
do this, it would most likely heavily weight the bond compared to the
equity. After the weights are determined, the portfolio would be
leveraged up in order to match an investor-defined volatility budget.
With this approach, we often end up with a better risk-adjusted return
than a 60/40 portfolio and also get the benefits of diversification,
unlike with a minimum variance portfolio.</p>
<p>We're going to use the example of a two asset portfolio with allocation
weights $x_a$ and $x_b$ and standard deviations $\sigma_a$ and
$\sigma_b$. To keep things explicit, we're going to eschew matrix
notation. As we showed in the blog post linked above, the variance of
the entire portfolio (which, with a slight abuse of notation, we'll
denote $\sigma_P$) is:</p>
\[\sigma_P = x_a^2\sigma_a^2 + x_b^2\sigma_b^2 + 2 x_a x_b \sigma_a \sigma_b \rho\]
<p>Where $\rho$ is the correlation between the returns of assets $a$ and
$b$. So now we want to divide up the risk so that the two contributions
equal each other. But how do we do that?</p>
<p>To start off, we can use Euler's homogeneous function theorem.
Euler's theorem states that if a function is homogeneous of degree $n$,
i.e.:</p>
\[f(tx, ty) = t^n f(x, y)\]
<p>Then:</p>
\[n t^{n-1} f(x, y) = x \frac{\partial f}{\partial (tx)} + y \frac{\partial f}{\partial (ty)}\]
<p>In the case of $t=1$:</p>
\[n f(x, y) =x\frac{\partial f}{\partial x} + y\frac{\partial f}{\partial y}\]
<p>Well, it turns out that our equation for $\sigma_P(x_a, x_b)$ is
homogeneous. By using Euler's theorem, we can figure out how to split
up the risk for each asset. First, let's calculate the partial
derivatives with respect to each variable:</p>
\[\frac{\partial \sigma_P}{\partial x_a} = 2(\rho\sigma_a\sigma_b x_b + \sigma_a^2x_a)\]
\[\frac{\partial \sigma_P}{\partial x_b}= 2(\rho \sigma_{a} \sigma_{b} x_{a} +\sigma_{b}^{2} x_{b})\]
\[n \sigma_P = 2x_a(\rho\sigma_a\sigma_b x_b + \sigma_a^2x_a) + 2x_b(\rho \sigma_{a} \sigma_{b} x_{a} +\sigma_{b}^{2} x_{b})\]
<p>Simplifying and solving for $n$ we get:</p>
\[n = \frac{2x_a(\rho\sigma_a\sigma_b x_b + \sigma_a^2x_a) + 2x_b(\rho \sigma_{a} \sigma_{b} x_{a} +\sigma_{b}^{2} x_{b})}{\sigma_P} = 2\]
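<p>We can sanity-check both the homogeneity claim and Euler's identity numerically. The sketch below uses made-up volatilities, correlation, and weights; the gradient matches the partial derivatives written out above.</p>

```python
import numpy as np

# Made-up two-asset parameters, purely for illustration.
sa, sb, rho = 0.15, 0.05, 0.3

def f(x):
    """Portfolio variance as a function of the weight vector x = (x_a, x_b)."""
    xa, xb = x
    return xa**2 * sa**2 + xb**2 * sb**2 + 2 * xa * xb * sa * sb * rho

def grad(x):
    """Analytic gradient of f, matching the partial derivatives in the text."""
    xa, xb = x
    return np.array([
        2 * (rho * sa * sb * xb + sa**2 * xa),
        2 * (rho * sa * sb * xa + sb**2 * xb),
    ])

x = np.array([0.4, 0.6])
t = 3.0

# Homogeneity of degree 2: f(t x) = t^2 f(x)
lhs_hom, rhs_hom = f(t * x), t**2 * f(x)

# Euler's identity with n = 2: x . grad f(x) = 2 f(x)
lhs_euler, rhs_euler = x @ grad(x), 2 * f(x)
```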
<p>We want each component to be equal:</p>
\[x_a \frac{\partial \sigma_P}{\partial x_a}=x_b \frac{\partial \sigma_P}{\partial x_b}\]
\[\rho\sigma_{a}\sigma_{b}x_{a}x_{b}+\sigma_{a}^{2}x_{a}^{2} =\rho \sigma_{a} \sigma_{b} x_{a} x_{b} + \sigma_{b}^{2} x_{b}^{2}\]
<p>Assuming:</p>
\[x_a + x_b = 1\]
<p>We substitute $x_b = 1 - x_a$ and then solve for $x_a$. Discarding the
negative solution, we get:</p>
\[x_a = \frac{\sigma_b}{\sigma_a + \sigma_b}\]
<p>And solving for $x_b$:</p>
\[x_b = \frac{\sigma_a}{\sigma_a+\sigma_b}\]
<p>Note that the solution does not depend on the correlation $\rho$, which
some might find counter-intuitive. While a closed-form solution exists
for the case of two assets, a numerical solution is generally required
when there are more than two.</p>
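<p>Both routes can be sketched in a few lines: the closed-form two-asset weights, and a numerical equal-risk-contribution solver for larger portfolios. The covariances below are made up, and the multiplicative fixed-point update is one simple scheme among several, not a production solver.</p>

```python
import numpy as np

def two_asset_rp(sigma_a, sigma_b):
    """Closed-form two-asset risk-parity weights (independent of rho)."""
    xa = sigma_b / (sigma_a + sigma_b)
    return np.array([xa, 1.0 - xa])

def risk_contributions(x, cov):
    """Each asset's contribution x_i * (cov @ x)_i to portfolio variance."""
    return x * (cov @ x)

def numerical_rp(cov, iters=500):
    """Equal-risk-contribution weights via a simple multiplicative
    fixed-point update: shrink weights whose contribution is too high."""
    n = cov.shape[0]
    x = np.full(n, 1.0 / n)
    for _ in range(iters):
        rc = risk_contributions(x, cov)
        target = rc.sum() / n
        x *= np.sqrt(target / rc)   # push each contribution toward the mean
        x /= x.sum()                # stay fully invested: weights sum to 1
    return x

# Two-asset check with made-up vols and correlation.
sa, sb, rho = 0.15, 0.05, 0.2
cov2 = np.array([[sa**2, rho * sa * sb],
                 [rho * sa * sb, sb**2]])
w_closed = two_asset_rp(sa, sb)   # [0.25, 0.75] regardless of rho
w_num = numerical_rp(cov2)

# More than two assets requires the numerical route.
vols = np.array([0.20, 0.10, 0.05])
corr = np.array([[1.0, 0.3, 0.1],
                 [0.3, 1.0, 0.2],
                 [0.1, 0.2, 1.0]])
cov3 = np.outer(vols, vols) * corr
w3 = numerical_rp(cov3)
```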
<h2 id="risk-parity-in-practice">Risk Parity in Practice</h2>
<p>We're going to construct a risk-parity portfolio from two assets: <em>SPY</em>
(S&P 500) and <em>LQD</em>, an investment-grade bond ETF. Our
weights will be calculated as described above, using a 252-day rolling
window for the volatility calculations. Below is a graph of the weights
over time:</p>
<p><img src="/assets/rpw.png" alt="Two asset risk parity weights" /></p>
<p>As expected, bonds make up the majority of the portfolio in order to
match the risk of the equity component. Let's look at the cumulative
returns of our risk parity portfolio compared to a traditional 60/40
portfolio:</p>
<p><img src="/assets/rp64.png" alt="60/40 vs risk parity" /></p>
<p>Due to the majority of our risk parity portfolio being bonds, the
portfolio underperforms a classic 60/40 portfolio that takes on
significantly more equity risk. A relatively low unlevered return
is expected of a risk parity portfolio, which is why leverage is
applied after portfolio construction. But what leverage ratio should we use? In
order to have an apples-to-apples comparison between a risk parity
portfolio and a 60/40 one, we will leverage up in order to try and match
the volatility of a 60/40 portfolio:</p>
\[l = \frac{\sigma_{60/40}}{\sigma_{RP}}\]
<p>For each $\sigma$, we use a 252-day rolling window. We calculate the
leverage ratio needed on each day and leverage up accordingly. Ideally,
this will ensure that the amount of risk taken by the risk parity
portfolio is always in line with the amount of risk a 60/40 portfolio is
taking. Let's look at a graph of our leveraged risk parity portfolio, a
classic 60/40 portfolio, and the leverage ratio:</p>
<p><img src="/assets/rpr.png" alt="risk parity vs 60/40 vs leverage ratio" /></p>
<p>Not bad! The amount of leverage taken is reasonable and the performance
of the risk parity portfolio looks pretty good. Let's look at a table
of metrics:</p>
<table>
<thead>
<tr>
<th>Metric</th>
<th>Risk Parity</th>
<th>60/40</th>
</tr>
</thead>
<tbody>
<tr>
<td>Beta</td>
<td>0.51</td>
<td>0.68</td>
</tr>
<tr>
<td>Ann. Ret</td>
<td>9.9%</td>
<td>7.8%</td>
</tr>
<tr>
<td>Ann. Vol</td>
<td>12.8%</td>
<td>12.8%</td>
</tr>
<tr>
<td>Ann. Sharpe</td>
<td>0.77</td>
<td>0.6</td>
</tr>
</tbody>
</table>
<p>Our simple dynamic leverage strategy works very well, with the
volatility of the two portfolios being identical. Even with the same
volatility, the risk parity portfolio significantly outperforms while
also having less beta exposure and a better Sharpe ratio.
Another plus is that the amount of leverage taken is very reasonable,
and well within the ability of even retail investors to obtain.</p>
<h2 id="conclusion">Conclusion</h2>
<p>Risk parity is an exciting and effective technique that is a viable
alternative to minimum variance portfolios as well as fixed weight
portfolios such as 60/40. While risk parity portfolios might not reduce
volatility as much as minimum variance portfolios, they tend to be more
stable over time, incur less turnover, and provide greater
diversification. Compared to a 60/40 portfolio, they are usually
superior unless borrowing costs are too high. This sensitivity to
borrowing costs represents a distinct risk for risk parity strategies,
one that investors should be cognizant of.</p>
<p>Thanks for reading and hope you liked this post! You can check out the
Quantopian notebook
<a href="https://www.quantopian.com/posts/risk-parity">here</a>. Feel free to
change the time periods and assets used to construct the portfolio.</p>

<h1>Risk Imparity: A Simple Strategy for Alpha Generation</h1>
<p><em>2020-07-29</em></p>

<p>Risk parity is a popular method of investing that aims to outperform the
traditional 60/40 equity/bond portfolio by equalizing the risk of the
bond portion with that of the equity part of the portfolio and then
leveraging up the portfolio to hit the desired risk target. Using the
covariances and volatilities of the components, we can generate a
portfolio and leverage ratio for a given level of desired risk. This
works for two reasons: bonds have a low or inverse correlation to
equities and generally have superior risk-adjusted returns, and
combining low-correlation assets lowers the volatility
of the portfolio. Bridgewater and Ray Dalio pioneered this approach in
the 90s with the introduction of the now famous <em>All Weather</em> fund.</p>
<p>The appeal of this approach is obvious compared to the 60/40 portfolio:
by setting a volatility target, we stabilize our risk throughout
different market regimes. This is an attractive property for investors
who might otherwise choose a 60/40 or even all equity portfolio; they
can sleep well knowing an appropriate level of risk will be taken,
regardless of the market. Well, in this post, we're going to take a
less sensible approach and instead of equalizing the risk in our
portfolio, we're going to leverage up the equity component
(representing almost all of the risk of the portfolio) and use the bonds
to generate yield.</p>
<h2 id="risk-imparity">Risk Imparity</h2>
<p>The intuition behind this approach is pretty straightforward: we want to
take on leverage with equities and cancel out the leverage by choosing
an appropriate weight. For example, if we purchased a 2x bull leveraged
S&P 500 ETF such as <em>SSO</em>, we would choose an appropriate weight to
cancel out the leverage, 0.5 in this case. If we kept the rest of the
money in cash, the return of the portfolio would only be slightly worse
than that of the S&P 500, due to the 0.90% expense ratio we pay. If we
didn't have to pay an expense ratio, the return streams would look
identical except for perhaps some marginal tracking error. But if we
instead invested the rest of the money in a low risk investment, such as
investment grade bonds, we would earn yield on the money and diversify
our portfolio. As long as the bonds contribute more profit than the cost
of the expense ratio, the strategy should beat the vanilla index. Below
is the strategy in full:</p>
<ol>
<li>
<p>Our portfolio will consist of two assets, <em>UPRO</em>, a 3x leveraged S&P
500 ETF, and <em>LQD</em>, an investment grade bond ETF.</p>
</li>
<li>
<p>Each day, we will rebalance so that 1/3 of our portfolio is
allocated towards <em>UPRO</em> and 2/3s towards <em>LQD</em>.</p>
</li>
</ol>
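<p>The two-step strategy above takes only a few lines to express: with daily rebalancing, each day's portfolio return is just the fixed 1/3 and 2/3 combination of the two assets' daily returns. The return series below are hypothetical stand-ins, not actual UPRO/LQD data.</p>

```python
import numpy as np

def imparity_returns(r_upro, r_lqd, w_upro=1/3):
    """Daily-rebalanced portfolio return: w_upro in UPRO, the rest in LQD.
    Rebalancing every day keeps the weights fixed, so each day's return
    is a constant convex combination of the two assets' returns."""
    return w_upro * np.asarray(r_upro, float) + (1 - w_upro) * np.asarray(r_lqd, float)

def cumulative_return(r):
    """Total compounded return of a daily return series."""
    return np.prod(1.0 + np.asarray(r, float)) - 1.0

# Hypothetical daily return series standing in for UPRO and LQD.
rng = np.random.default_rng(2)
r_upro = rng.normal(0.0015, 0.033, 1000)   # roughly 3x equity-like vol
r_lqd = rng.normal(0.0002, 0.004, 1000)    # bond-like vol

r_port = imparity_returns(r_upro, r_lqd)
total = cumulative_return(r_port)
```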
<p>Let's look at the returns against the S&P 500 between 2009-06-26 and
2020-07-28 (we choose this beginning date because <em>UPRO</em> launched at the
beginning of 2009 and its data only became available at the start
date):</p>
<p><img src="/assets/iret.png" alt="Imparity Portfolio Returns" /></p>
<p>Not bad, let's look at a table of metrics:</p>
<table>
<thead>
<tr>
<th>Metric</th>
<th>Portfolio</th>
<th>S&P 500</th>
</tr>
</thead>
<tbody>
<tr>
<td>Ann. Rets</td>
<td>18.1%</td>
<td>14.1%</td>
</tr>
<tr>
<td>Ann. Vol</td>
<td>18.4%</td>
<td>17.4%</td>
</tr>
<tr>
<td>Ann. Sharpe</td>
<td>0.98</td>
<td>0.81</td>
</tr>
<tr>
<td>Beta</td>
<td>1.02</td>
<td>1.0</td>
</tr>
</tbody>
</table>
<p>The results are pretty impressive for such a simple strategy: better
Sharpe ratio, comparable beta, and a 4% better annualized return, with
only slightly higher volatility. One problem, however, is apparent: if
the traditional negative correlation between bonds and equities starts
to break down like it did during the start of Coronavirus, the portfolio
takes a major hit, performing worse than the S&P 500. To
confirm this, let's look at <em>LQD</em> alone:</p>
<p><img src="/assets/lqd.png" alt="LQD Rets" /></p>
<p>And indeed, even investment grade bonds collapsed during the start of
Coronavirus. But what about the Great Recession, how did the strategy
perform then? We don't have real data from that time period, but we can
synthetically generate a comparable return stream by multiplying
<em>SPY</em>'s daily returns by 3. This of course won't account for the expense
ratio of <em>UPRO</em> or its tracking error, but the results should still be
pretty similar. Let's take a look:</p>
<p><img src="/assets/siret.png" alt="Synthetic Imparity Portfolio Returns" /></p>
<p>Unlike during Coronavirus, the strategy held up pretty well during the
crash of '07-08, returning approximately the same as a pure S&P 500
portfolio. The Coronavirus episode might therefore represent either a
regime change in bond/equity correlation or a temporary blip that isn't
indicative of future correlation.</p>
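<p>For reference, the synthetic stream is just each daily SPY return tripled, ignoring fees and tracking error as noted. A small consequence worth seeing: with the 1/3 weight, the daily portfolio return simplifies to the SPY return plus two-thirds of the LQD return.</p>

```python
import numpy as np

def synthetic_3x(r_spy):
    """Approximate a 3x daily-leveraged ETF by tripling each daily return
    (ignores the expense ratio, borrow costs, and tracking error)."""
    return 3.0 * np.asarray(r_spy, float)

# With a 1/3 weight on the synthetic 3x stream and 2/3 on bonds, the
# daily portfolio return collapses to r_spy + (2/3) * r_lqd.
rng = np.random.default_rng(3)
r_spy = rng.normal(0.0, 0.015, 750)   # hypothetical SPY daily returns
r_lqd = rng.normal(0.0, 0.004, 750)   # hypothetical LQD daily returns
r_port = (1/3) * synthetic_3x(r_spy) + (2/3) * r_lqd
```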
<h2 id="conclusion">Conclusion</h2>
<p>Though there are risks associated with this strategy not present in a
pure S&P 500 portfolio, in most market environments, this approach
delivers a better risk-adjusted return without taking on any additional
market risk. Another benefit of the strategy is its simplicity: by using
only two different ETFs, a retail investor can manage the strategy with
minimal effort. An investor could also adjust the rebalancing period to
weekly or monthly in order to reduce complexity and transaction
costs. The strategy can also be modified by adjusting the type of bond
in the portfolio: more risk-averse investors could stick with short- to
medium-term Treasury notes while higher-octane ones could substitute in
junk bonds or other alternative investments.</p>
<p>Thanks for reading, hope you enjoyed this post. Click
<a href="https://www.quantopian.com/posts/risk-imparity">here</a> for the
Quantopian notebook used. Seeing that this strategy is pretty trivial,
if anyone could help me in locating prior literature about it, I would
be most appreciative. Email me at <a href="mailto:sturm@cryptm.org">sturm@cryptm.org</a> if so.</p>

<h1>Beta Stability: A New Persistent Factor?</h1>
<p><em>2020-07-23</em></p>

<p>The <a href="https://en.wikipedia.org/wiki/Beta_(finance)">beta</a> exposure of a
stock is one of the first and most important statistics any investor
looks at. Though perhaps simplistic, it distills a myriad of various
properties of a stock into a single, easily digestible number. Using
beta, it becomes easy to roughly forecast how an investor's portfolio
would perform under different market conditions. Maybe our portfolio has
a beta of 1.3 and we expect the market to return 11% next year: great,
our portfolio should return around 14.3% (1.3*11%). While there are a
lot of problems with this simplistic model, in this post we're going
to focus on just one: the beta of a stock changes significantly over
time. Consider this graph of Apple's rolling, 252-day beta:</p>
<p><img src="/assets/aaplbeta.png" alt="AAPL Beta" /></p>
<p>As we can see, the dispersion is quite high, and the rolling beta rarely
stays the same for any reasonable period of time. There could be any
number of reasons why the rolling beta changes: shifts in the drivers
of market movements, the evolution of Apple's business model,
geographic changes in sales, etc.</p>
<p>Clearly, some companies are going to have higher beta stability than
others. One might imagine that a mature utility company with stable
recurring revenue would have greater beta stability than a recently
IPO'd tech company whose revenue model and business units are still in
flux. So that brings up the question: when constructing a portfolio,
which kind of companies would we prefer: the relatively unstable beta
company, or the more stable one? In this post, we're going to form and
test the hypothesis that, in the long run, a portfolio that consists of
stocks with high beta stability generates excess returns.</p>
<h2 id="the-beta-stable-factor">The Beta Stable Factor</h2>
<p>Our goal is to construct a beta neutral long/short portfolio that tries
to capture the abnormal return that we hypothesized might be associated
with a beta stable portfolio. To do this, we're going to use the
<em>Q500US</em> universe provided by <a href="https://www.quantopian.com/">Quantopian</a>.
This universe consists of the top 500 most liquid (and therefore,
probably largest market cap) equities on any given day. We construct our
alpha factor as follows:</p>
\[f = \text{Z}[\text{rank}(-|\beta_{21}-\beta_{63}|)]\]
<p>Where $\beta_{21}$ is the 21-day (one month) rolling beta, and
$\beta_{63}$ the 63-day (three month) rolling beta. We then take the
absolute value, negate, rank, and Z-score. This factor gives us a
portfolio that goes long stocks that have lower changes in beta and
shorts those that have greater changes in beta.</p>
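<p>A sketch of the factor computation on one day's hypothetical cross-section of stocks; the rolling-beta helper and all of the inputs below are illustrative stand-ins for what the actual Quantopian pipeline would provide.</p>

```python
import numpy as np

def rolling_beta(r_asset, r_mkt, window):
    """Trailing beta: Cov(asset, market) / Var(market) over a rolling window."""
    n = len(r_asset)
    out = np.full(n, np.nan)
    for t in range(window - 1, n):
        a = r_asset[t - window + 1 : t + 1]
        m = r_mkt[t - window + 1 : t + 1]
        out[t] = np.cov(a, m)[0, 1] / np.var(m, ddof=1)
    return out

def beta_stability_factor(betas_21, betas_63):
    """Cross-sectional factor: z-scored ranks of -|beta_21 - beta_63|.
    High scores = stable beta (long side); low scores = unstable (short side)."""
    raw = -np.abs(betas_21 - betas_63)
    ranks = raw.argsort().argsort().astype(float)  # rank 0 = most unstable
    return (ranks - ranks.mean()) / ranks.std()

# Hypothetical cross-section of 500 stocks on a single day.
rng = np.random.default_rng(4)
b21 = rng.normal(1.0, 0.4, 500)
b63 = b21 + rng.normal(0.0, 0.2, 500)   # betas drift between horizons
scores = beta_stability_factor(b21, b63)
```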
<p>Because we suspect that higher beta stocks tend to have more
instability, we will zero out the beta exposure by dynamically adjusting
our net long exposure. We use the same method that we did in <a href="https://cryptm.org/posts/2020/06/28/beta.html">The Low
Beta Anomaly</a> post, so go
check it out if you want the details. Backtesting over a 10 year period,
2010-01-01 to 2020-01-01, let's first look at our net long exposure:</p>
<p><img src="/assets/nl.png" alt="Net long exposure" /></p>
<p>The reason for our net long exposure being positive is that there is
some correlation between beta instability and high beta stocks. Beta
generally has a right-hand skew, so our unadjusted portfolio would have
a higher average beta for the short-side than the long-side. Overall,
this would result in a portfolio that had significant negative beta
exposure, an undesirable property to have in a world where the market
usually goes up. By increasing the long-side of our portfolio, we can
offset the high beta short-side and maintain beta neutrality. Even so,
the net long exposure is relatively small compared to what we observed
for our low beta portfolio (again, to compare, check out <a href="https://cryptm.org/posts/2020/06/28/beta.html">The Low Beta
Anomaly</a> post): around
10% vs 45%, respectively. This is good, because it means we are
capturing an independent effect with our beta stability hypothesis, not
just another reformulation of the <a href="https://en.wikipedia.org/wiki/Low-volatility_anomaly">low beta/volatility
anomaly</a>.</p>
<p>Now for the good part, the cumulative return over the same time period:</p>
<p><img src="/assets/cr.png" alt="Cumulative Return" /></p>
<p>Wow, that looks pretty good for a beta neutral strategy! Let's
decompose the returns by quantile:</p>
<p><img src="/assets/qcr.png" alt="Cumulative Return by Quantile" /></p>
<p>This graph shows the cumulative return by quantile if each quantile was
held long (remember that the lower quantiles are held short in our
portfolio). The profit from our strategy comes from the difference
between the upper quantiles and the lower ones. Let's look at a table
of metrics to better understand the strategy:</p>
<table>
<thead>
<tr>
<th>Metric</th>
<th>Value</th>
</tr>
</thead>
<tbody>
<tr>
<td>Beta</td>
<td>-.018</td>
</tr>
<tr>
<td>Ann. Ret</td>
<td>3.56%</td>
</tr>
<tr>
<td>Ann. Vol</td>
<td>2.7%</td>
</tr>
<tr>
<td>Ann. Sharpe</td>
<td>1.28</td>
</tr>
</tbody>
</table>
<p>Not bad at all! Our beta minimization worked well, the return is pretty
significant, and our Sharpe ratio is very respectable, at 1.28.</p>
<p>Even so, in an ideal beta neutral strategy, we would want to be making
money off the short-side of the portfolio, not merely hedging our market
risk with it: i.e., we would want to see the lower quantiles lose money,
not merely make less than the higher quantiles. In the next section,
we'll consider a long-only variation of the same beta stable strategy.</p>
<h2 id="long-only-beta-stable">Long-only Beta Stable</h2>
<p>For the long-only variation, we'll use the same factor as before, but
instead of investing in the entire <em>Q500US</em> universe, only invest in the
top 100 stocks. This will give us a portfolio with higher allocations to
stocks with greater beta stability and one with 100% net long exposure.
Using the same time period, consider the quantile graph below:</p>
<p><img src="/assets/qcr2.png" alt="Cumulative Return by Quantile" /></p>
<p>As expected, we see only a small difference in the returns by quantile.
This is because we are only investing in the 100 most stable equities,
significantly narrowing the gap between the lowest and highest quantiles.
Still, there is some effect (the first quantile realizes the lowest
returns), which bodes well for the robustness of our factor. Now, for
the moment of truth, our factor portfolio compared to the S&P 500:</p>
<p><img src="/assets/bsvs.png" alt="Beta Stable vs S&P 500" /></p>
<p>Wow! This is an amazing result: over a 10 year period, our factor
portfolio earns a little under 50% more cumulative return than the
S&P 500. Here's a table of metrics for comparison:</p>
<table>
<thead>
<tr>
<th>Metric</th>
<th>Portfolio</th>
<th>S&P 500</th>
</tr>
</thead>
<tbody>
<tr>
<td>Beta</td>
<td>0.99</td>
<td>1.0</td>
</tr>
<tr>
<td>Ann. Ret</td>
<td>14.7%</td>
<td>13.37%</td>
</tr>
<tr>
<td>Ann. Vol</td>
<td>15.5%</td>
<td>14.6%</td>
</tr>
<tr>
<td>Ann. Sharpe</td>
<td>0.94</td>
<td>0.91</td>
</tr>
</tbody>
</table>
<p>With over 1% of annualized excess return, a higher Sharpe ratio, and an
equivalent beta, the long-only beta stable strategy appears to be a huge
success!</p>
<h2 id="conclusion">Conclusion</h2>
<p>Based on the results, perhaps we should consider beta stability a new
persistent factor, alongside the classics like low-volatility, value,
and size? This is a bold claim of course, and much more research needs
to be done; but the preliminary results are very positive, especially
for the long-only version. The factor would be very easy to overlay onto an
existing beta-exposed portfolio, generating a moderate amount of excess
return without taking on any more risk. The beta neutral version also
has promise, though ideally it would be combined with other factors
(such as value or low-beta) in order to boost the Sharpe ratio and
unleveraged return.</p>
<p>Thanks for reading, I hope you enjoyed this post! For the source code,
check out the Quantopian notebook
<a href="https://www.quantopian.com/posts/beta-stability">here</a>. Feel free to
play around with the start and end dates, universe, and anything else.
If you have any comments or feedback, contact me at <a href="mailto:sturm@cryptm.org">sturm@cryptm.org</a>.</p>

<h1>The Value of Alternative Investments</h1>
<p><em>2020-07-09</em></p>

<p>A retail investor would be forgiven for assuming that there are really
only two assets classes one should consider for personal investments:
equities (stocks) and fixed income (bonds). Wherever you look, these two
asset classes dominate the financial landscape. All robo-advisors and
most personal advisors implement a mix of bonds and equities for the
portfolios of their clients. The idea behind this mix is simple: bonds
are low risk and will earn you a marginal return, while equities are
riskier and will be the real engine of returns in good times. In bad
times, bonds will rise, mitigating some of the losses of your equity
portfolio: it's been long known and accepted that stocks and bonds have
a negative correlation. Tying this strategy together is periodic
rebalancing: taking money off the table during bull markets (and moving
it into safer bonds), and deploying money to the equity portion
during bear markets. This in effect overlays a mean-reversion strategy
onto the portfolio, boosting the returns of an otherwise static
portfolio. The archetypal allocation is 60% equities, and 40% bonds,
though each investor's allocation is going to differ based on their risk
tolerance, age, and personal goals.</p>
<p>Though there's nothing wrong with this two asset class mix, the
historically negative correlation between stocks and bonds means that in
most market environments, bonds are going to drag down the portfolio.
Ideally, we would want to mix in uncorrelated assets and investments
instead of negatively correlated ones, in order to reduce the volatility
of our portfolio without dragging down our returns. Alternative
investments such as hedge funds, venture capital, metals, or real estate
all serve this purpose. These alternative investments are not only
popular for their occasionally spectacular returns, but also for their
low correlation to the broader equity market. For this post, we'll use
the oldest and possibly most maligned investment, gold, and explore ways
to mix it into a pure equity, S&P 500 portfolio.</p>
<h2 id="gold">Gold</h2>
<p>Gold and other metals are an interesting and, unfortunately,
underappreciated asset class. The much derided "gold-bug" is made out
to be paranoid, crazy, and irrational. Despite being so rare and
valuable, the percentage of the world's gold supply that is put to
industrial use is comparatively small, creating a phenomenon that
untethers the price of gold from other asset classes. Let's look at the
returns of gold versus the S&P 500 between 2010-01-01 and 2020-07-08:</p>
<p><img src="/assets/gold.png" alt="S&P 500 vs Gold" /></p>
<p>It's evident that the returns of gold are unspectacular, to say the
least. It doesn't seem to consistently lose or gain value as much as it
meanders around, seemingly uncorrelated to the S&P 500. And indeed, it
is almost entirely uncorrelated, with a correlation to the S&P 500 of
only 1% over this time period. Is it even worth including in our
portfolio? And if it is, how would we determine the allocation size?</p>
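That 1% figure is just the Pearson correlation of the two daily return series. As a minimal sketch in plain NumPy (the function names are mine, not the notebook's), computing it from raw price series might look like:

```python
import numpy as np

def daily_returns(prices):
    """Simple daily returns from a price series."""
    prices = np.asarray(prices, dtype=float)
    return prices[1:] / prices[:-1] - 1

def return_correlation(prices_a, prices_b):
    """Pearson correlation between the daily returns of two price series."""
    return np.corrcoef(daily_returns(prices_a), daily_returns(prices_b))[0, 1]
```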
<h2 id="monte-carlo">Monte-Carlo</h2>
<p>To clarify, we're trying to determine the weights for the S&P 500 and
gold parts of our portfolio, taking on no leverage:</p>
\[|x_a| + |x_b| = 1\]
<p>And maximizing the risk-adjusted return. For every unit of risk we take,
we would like to maximize the amount of return we receive. We'll first
start off by doing the most straightforward thing, and randomly choosing
different weights for our two asset portfolio and then graphing them:</p>
<p><img src="/assets/ef.png" alt="Return vs Volatility" /></p>
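A minimal version of such a simulation might look like the following sketch. The annualized statistics are illustrative stand-ins in the spirit of the post's numbers, not its actual estimates, and for simplicity only long-only weight pairs are sampled:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative annualized statistics (assumptions, not the post's data)
mu = np.array([0.124, 0.043])      # expected returns: S&P 500, gold
sigma = np.array([0.174, 0.158])   # volatilities
rho = 0.01                         # correlation
cov = np.array([[sigma[0] ** 2, rho * sigma[0] * sigma[1]],
                [rho * sigma[0] * sigma[1], sigma[1] ** 2]])

# Sample random long-only weight pairs that sum to one (no leverage)
w = rng.random((5000, 1))
weights = np.hstack([w, 1 - w])

port_ret = weights @ mu                                          # E[P]
port_vol = np.sqrt(np.einsum("ij,jk,ik->i", weights, cov, weights))
# Each (port_vol, port_ret) pair is one point on the Markowitz bullet
```

Scattering `port_ret` against `port_vol` traces out the bullet shape in the figure.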
<p>The hyperbola formed is called the <em>Markowitz Bullet</em>, after Harry
Markowitz, winner of the Nobel Memorial Prize in Economic Sciences for his work on
<em>Modern Portfolio Theory</em>. The portfolio with the best return vs
volatility profile is called the <em>frontier</em> or <em>efficient</em> portfolio.
There exists only one such portfolio and it is the portfolio every
rational investor should prefer, ignoring borrowing costs and
<a href="https://en.wikipedia.org/wiki/Volatility_tax">volatility drag</a> (for
more information about volatility drag, check out my other post
<a href="https://cryptm.org/posts/2019/11/28/div.html">here</a>, or the beginning
of my three part series
<a href="https://cryptm.org/posts/2019/10/04/vol.html">here</a>).</p>
<p>While it's relatively easy to find what would have been the frontier
portfolio looking backwards, it is much more difficult to estimate the
frontier portfolio over some future time period. We can forecast and
then reduce future volatility with modest accuracy, but forecasting
expected return is a notoriously difficult problem. So difficult, in
fact, that few quantitative investors even try, instead resigning
themselves to solely minimize volatility. Likewise, we'll resign
ourselves to the same fate.</p>
<p>In the general case of a many-asset portfolio, no closed-form solution
can be found for the minimization of volatility; we must instead use an
optimizer or run a Monte-Carlo simulation. In the two-asset case, however,
we can find a symbolic solution. So that's what we'll do next!</p>
<h2 id="symbolic-solution">Symbolic Solution</h2>
<p>Recall that our portfolio return is going to be a function of our two
assets and their weights:</p>
\[P = x_a X_a+ x_b X_b\]
<p>Likewise, our expected portfolio mean would look like:</p>
\[\begin{align}
E[P] &= \mu_P\\
&= x_a E[X_a] + x_b E[X_b]\\
&= x_a \mu_a + x_b \mu_b\\
\end{align}\]
<p>And the equation for variance:</p>
\[\text{Var}[X] = \sigma^2 = E[(X-\mu)^2]\]
<p>Covariance is similar, except instead of squaring, we multiply each
variable after demeaning:</p>
\[\text{Cov}[X,Y] = E[(X-\mu_X)(Y-\mu_Y)]\]
<p>Now we just need to derive the portfolio variance:</p>
\[\text{Var}[P] = \text{Var}[x_aX_a + x_b X_b]\]
<p>First we substitute for variance and rearrange:</p>
\[\begin{align}
\text{Var}[x_aX_a + x_b X_b] =& E[(x_aX_a + x_b X_b - E[x_a X_a + x_b X_b])^2]\\
=& E[(x_a X_a - E[x_a X_a] + x_b X_b - E[x_b X_b])^2]
\end{align}\]
<p>Now we can pull the constants out of the expectations, substitute, and
expand:</p>
\[\begin{align}
\text{Var}[x_a X_a + x_b X_b] =& E[(x_a (X_a - \mu_a) + x_b (X_b - \mu_b))^2] \\
=& E[x_a^2(X_a - \mu_a)^2 + x_b^2(X_b - \mu_b)^2 + 2 x_a x_b (X_a - \mu_a)(X_b - \mu_b)]
\end{align}\]
<p>Finally, we break up the expectations, and replace:</p>
\[\begin{align}
\text{Var}[x_a X_a + x_b X_b] =& x^2_a E[(X_a - \mu_a)^2] + x^2_b E[(X_b - \mu_b)^2] + 2x_a x_b E[(X_a-\mu_a)(X_b-\mu_b)]\\
=& x^2_a \sigma_a^2 + x^2_b \sigma_b^2 + 2 x_a x_b \text{Cov}[X_a,X_b]\\
=& x^2_a \sigma_a^2 + x^2_b \sigma_b^2 + 2 x_a x_b \rho_{ab} \sigma_a \sigma_b
\end{align}\]
<p>Where $\rho_{ab}$ is the correlation between $X_a$ and $X_b$:</p>
\[\rho_{ab} = \frac{\text{Cov}[X_a,X_b]}{\sigma_a\sigma_b}\]
<p>Now that we have an equation for variance in terms of the weights, mean,
volatility, and correlation between the two assets, we can formulate our
objective:</p>
\[\min\left( \text{Var}[x_aX_a + x_b X_b]\right)\]
<p>Where:</p>
\[x_a + x_b = 1\]
<p>We perform the substitution $x_b = 1 - x_a$ and then take the derivative
with respect to $x_a$:</p>
\[\frac{\mathrm{d} \sigma_P^2}{\mathrm{d}x_a}=2 \, \rho_{\mathit{ab}} \sigma_{a} \sigma_{b} - 2 \, \sigma_{b}^{2} - 2 \, {\left(2 \, \rho_{\mathit{ab}} \sigma_{a} \sigma_{b} - \sigma_{a}^{2} - \sigma_{b}^{2}\right)} x_{a}\]
<p>Now we set to 0 and solve for $x_a$:</p>
\[x_{a} = \frac{\rho_{\mathit{ab}} \sigma_{a} \sigma_{b} - \sigma_{b}^{2}}{2 \, \rho_{\mathit{ab}} \sigma_{a} \sigma_{b} - \sigma_{a}^{2} - \sigma_{b}^{2}}\]
<p>And $x_b$:</p>
\[x_b = 1 - x_a\]
<p>Note that sometimes this equation will give us values for $x_a$ and
$x_b$ that imply leverage, i.e:</p>
\[|x_a| + |x_b| > 1\]
<p>But we can always rescale them by dividing each one by $|x_a| + |x_b|$.
In the next section, we'll put this formula to work.</p>
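As a quick sanity check, the closed-form solution and the rescaling step translate directly into a few lines of Python. This is only a sketch of the formula above; the function name is my own:

```python
def min_variance_weights(sigma_a, sigma_b, rho):
    """Closed-form minimum-variance weights for a two-asset portfolio,
    rescaled so that |x_a| + |x_b| = 1 (no leverage)."""
    cov_ab = rho * sigma_a * sigma_b
    x_a = (cov_ab - sigma_b ** 2) / (2 * cov_ab - sigma_a ** 2 - sigma_b ** 2)
    x_b = 1 - x_a
    gross = abs(x_a) + abs(x_b)
    return x_a / gross, x_b / gross
```

For two assets with equal volatility and zero correlation, the formula splits the portfolio 50/50, as intuition suggests.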
<h2 id="portfolio-weights">Portfolio Weights</h2>
<p>We're going to use a one year or 252 day rolling window to generate the
weights. Every day, we slide our window over one and generate the new
weights for that day. Below is the generated graph:</p>
<p><img src="/assets/gw.png" alt="S&P 500 Weight vs Gold weight" /></p>
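In code, the rolling computation might look something like the following sketch in pandas. The function and column names are assumptions, not the notebook's actual code:

```python
import numpy as np
import pandas as pd

def rolling_weights(returns, window=252):
    """Daily minimum-variance weights for a two-column DataFrame of
    returns, using the closed-form two-asset solution on a rolling window."""
    sig = returns.rolling(window).std()
    rho = returns.iloc[:, 0].rolling(window).corr(returns.iloc[:, 1])
    cov = rho * sig.iloc[:, 0] * sig.iloc[:, 1]
    x_a = (cov - sig.iloc[:, 1] ** 2) / (
        2 * cov - sig.iloc[:, 0] ** 2 - sig.iloc[:, 1] ** 2)
    w = pd.DataFrame({returns.columns[0]: x_a, returns.columns[1]: 1 - x_a})
    return w.div(w.abs().sum(axis=1), axis=0)  # rescale away implied leverage
```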
<p>Well, that's interesting: the weights change considerably over time. In
a real portfolio, not only would we be using an optimizer and a
multitude of risk factors to determine the weights, but we would also
have constraints on position size, turnover, and the magnitude of
deviation from the "ideal" portfolio, etc. But even in this simple
and contrived example, we see that our formula is doing its job: as the
volatility of the S&P 500 increases, like at the end of 2018 and during
Coronavirus, we start cutting back our equity exposure.</p>
<p>Let's now look at the returns of our portfolio compared to both gold
and the S&P 500:</p>
<p><img src="/assets/altr.png" alt="Portfolio vs S&P 500 vs Gold" /></p>
<p>A lot of gains from the market are given up, as we might have expected,
but the return stream becomes a lot less volatile. Just looking at a
graph of cumulative returns isn't super informative, so here's a table
of various metrics:</p>
<table>
<thead>
<tr>
<th>Metric</th>
<th>Gold</th>
<th>S&P 500</th>
<th>Portfolio</th>
</tr>
</thead>
<tbody>
<tr>
<td>Ann. Vol</td>
<td>15.8%</td>
<td>17.4%</td>
<td>10.8%</td>
</tr>
<tr>
<td>Ann. Ret</td>
<td>4.3%</td>
<td>12.4%</td>
<td>8.4%</td>
</tr>
<tr>
<td>Beta</td>
<td>0.01</td>
<td>1</td>
<td>0.41</td>
</tr>
<tr>
<td>Ann. Sharpe</td>
<td>0.27</td>
<td>0.71</td>
<td>0.77</td>
</tr>
</tbody>
</table>
<p>Wow! Our volatility is lower than either gold or the S&P 500 alone, our
beta has been cut by more than half compared to the S&P 500, and our
Sharpe ratio is higher than both! Even after mixing in an asset that
had poor returns and high volatility, we've managed to construct a
portfolio that, on a risk-adjusted basis, is superior to the S&P 500.
And, if we so desired, it could be levered up to beat the return
of the S&P 500 while maintaining comparatively low volatility.</p>
<h2 id="conclusion">Conclusion</h2>
<p>Even with a bad Sharpe ratio and less than stellar returns, gold
enhanced our pure equity portfolio. The same also can be said about
other, more popular, alternative investments. Hedge funds specifically
are often derided for their low returns and frequent meltdowns (such as
Long-Term Capital Management); but like with gold, looks can be
deceiving. Returns and volatility aren't the whole story. When
designing a portfolio, each individual investment is irrelevant, and
instead the return stream of the portfolio as a whole is what matters.
Considered in isolation, a lot of alternative investments look
sub-optimal and irrational. It is only when you zoom out and think about
the needs of the investor (their existing allocations, investment goals,
monetary needs, etc) does everything come into perspective. When it
comes to investing, the whole is certainly greater than the sum of its
parts!</p>
<p>I hope you liked the post and if you did, let me know! You can also
check out the notebook, developed on Quantopian,
<a href="https://www.quantopian.com/posts/the-value-of-alternative-investments">here</a>.
Possible things you could mess around with are the start and end dates
and the two assets to construct a portfolio from.</p>The Low Beta Anomaly2020-06-28T00:00:00+00:002020-06-28T00:00:00+00:00https://smabie.github.io/posts/2020/06/28/beta<p>Perhaps more than any other development, finance was ushered into the
modern era with the development of the <em>Capital Asset Pricing Model</em>
(CAPM) by William Sharpe in the early 60s. Though commonly criticized as
too simple and reductionist, the model is still used today as an easy
way to determine a stock's exposure to the market:</p>
\[r_i = r_f + \beta_i (r_m - r_f)\]
<p>$\beta_i$ was originally formulated as
$\frac{\text{Cov}(r_m,r_i)}{\text{Var}(r_m)}$, but it is more commonly
calculated by taking the slope of a linear regression between the asset
and the market. $r_m$ is the expected market return, $r_f$ is the
expected risk-free rate, and $r_i$ is the expected return of the asset.</p>
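The two formulations are equivalent; as a sketch (the function name is mine), the covariance version looks like this in NumPy:

```python
import numpy as np

def beta(asset_returns, market_returns):
    """Beta as Cov(r_m, r_i) / Var(r_m), which equals the slope of an
    OLS regression of asset returns on market returns."""
    ri = np.asarray(asset_returns, dtype=float)
    rm = np.asarray(market_returns, dtype=float)
    return np.cov(ri, rm, ddof=1)[0, 1] / np.var(rm, ddof=1)
```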
<p>Perhaps the model is so popular because the interpretation of $\beta$ is
so concrete and easy to understand: a stock's $\beta$ is simply a
multiplier on the market's return. Stocks with high betas are more
volatile than the market, and those with smaller betas less so. In
tumultuous times, investors try and cut their volatility by rotating
into low beta stocks; while in bull markets, investors clamor to those
with the highest betas.</p>
<p>There's been a problem with CAPM, and indeed, the very concept of beta,
for a long time now: it's called the low beta anomaly. Academics
noticed systematic discrepancies between high and low
beta stocks: it seemed like high beta stocks were underperforming and
low beta stocks were outperforming. Consider the <em>Security Market Line</em>
(SML):</p>
<p><img src="/assets/sml.png" alt="Security Market Line" /></p>
<p>According to CAPM, the returns of low beta stocks are supposed to be
linearly scaled by their beta exposure, likewise with higher beta
stocks. Instead, it was noticed that there was an unexpected curve:</p>
<p><img src="/assets/lba.png" alt="Low beta anomaly" /></p>
<p>As we can see, high volatility (and thus, high beta) portfolios are
returning significantly less than what is expected, thus throwing a
wrench into the very concept of beta. After seeing the chart, a natural
question is: can we exploit this mispricing while not being exposed to
the market? In this post, we'll look into whether it's possible to
profit off of this effect and also discuss the potential structural and
behavioral reasons for this anomaly.</p>
<h2 id="a-simple-approach">A Simple Approach</h2>
<p>Of course, the simplest way to try and capture the excess return of the
anomaly is to construct a portfolio of low beta equities and call it a
day. But what if we don't want to be exposed to the market, even to the
degree that low beta equities are? As with so many questions in finance, the
answer is a market neutral long/short portfolio!</p>
<p>For this strategy, we'll be using the Quantopian <em>Q1500US</em> universe,
which consists of the 1500 most liquid US equities on any given day. We
then construct our low beta factor as such:</p>
\[f = \text{Z}[\text{rank}(-\beta_{252})]\]
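In code, this factor construction might be sketched as follows (`low_beta_factor` is a name I made up, not Quantopian's API):

```python
import numpy as np
import pandas as pd

def low_beta_factor(betas):
    """f = Z[rank(-beta)]: z-scored ranks of negated betas; the most
    positive values (largest long weights) go to the lowest-beta names."""
    ranks = pd.Series(-np.asarray(betas, dtype=float)).rank()
    return ((ranks - ranks.mean()) / ranks.std()).to_numpy()
```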
<p>This will give us large positive values for stocks that have low beta,
and small negative values for those that have high beta. Before we rank
and z-score, in order to calculate the beta for each equity, we find the
slope of a simple linear regression over a 252 day rolling window. We
then use these factor values as the weights of our long/short portfolio.
Let's look at the total long/short return of our simple strategy
between 2003-01-01 and 2020-01-01:</p>
<p><img src="/assets/betacum.png" alt="Cumulative return" /></p>
<p>Well… this isn't good! Why are we losing money? Does this mean that
the low beta anomaly isn't actually true? Digging into the numbers a
little more, it turns out that the total portfolio beta for this
strategy is actually negative, -38% to be exact. This means that even
though we are market neutral, we have a negative exposure to the market:
when the market goes up, our portfolio loses money, and vice versa.
Since the market goes up most of the time, this isn't a great property
to have. Ideally, we want to have zero beta exposure, not negative
exposure. Let's look at the skew of the distribution of betas over
time:</p>
<p><img src="/assets/skew.png" alt="Skew over time" /></p>
<p>In order for our market neutral portfolio to have zero beta, we would
need the skew of the betas to be zero, like for a standard normal
distribution. To make this a little more concrete, let's look at a
histogram for a single time period of stock betas:</p>
<p><img src="/assets/hist.png" alt="Histogram" /></p>
<p>Now, the problem becomes evident: even though we are market neutral, we
end up taking on negative beta exposure because the distribution of the
beta of stocks has a long side-side tail. Almost no stocks actually have
negative beta exposure, while many have high beta exposures! This means
that when longing and shorting in equal proportion, we end up with a
large negative beta since our long positions don't have a low enough
beta in order to cancel out our high beta short positions.</p>
<h2 id="fixing-the-problem">Fixing the Problem</h2>
<p>So, how do we proceed? We need to find some way to adjust the weights of
our portfolio so that we end up with zero beta exposure. Fortunately,
calculating the beta exposure is relatively straightforward. Given a
column vector of weights:</p>
\[\mathbf{X} = \begin{bmatrix}
x_1 \\
x_2 \\
\vdots \\
x_n \\
\end{bmatrix}\]
<p>And a vector of beta exposures:</p>
\[\mathbf{B} = \begin{bmatrix}
\beta_1 & \beta_2 & \dots & \beta_n
\end{bmatrix}\]
<p>We can easily find the beta of the portfolio:</p>
\[\beta_p = \mathbf{BX}\]
<p>Traditionally, we would solve this equation by using an optimizer with a
constraint that $\beta_p =0$. But because we only have one factor, we
can find an analytical solution. What we want to do is to try and solve
for a $\beta_p$ of zero. To do this, we first need to rewrite the
equation in terms of the weights that are positive (the low beta side)
and those that are negative (the high beta side):</p>
\[\mathbf{X_\alpha} = \begin{bmatrix}
x_1 \\
x_2 \\
\vdots \\
x_j
\end{bmatrix}\]
\[\mathbf{X_\beta} = \begin{bmatrix}
x_1 \\
x_2 \\
\vdots \\
x_k
\end{bmatrix}\]
<p>Likewise, we split up the betas as well:</p>
\[\mathbf{B_\alpha} =
\begin{bmatrix}
\beta_1 & \beta_2 & \dots & \beta_j
\end{bmatrix}\]
\[\mathbf{B_\beta} =
\begin{bmatrix}
\beta_1 & \beta_2 & \dots & \beta_k
\end{bmatrix}\]
<p>Where:</p>
\[n = j + k\]
<p>Now, our equation becomes:</p>
\[\beta_p = \mathbf{B_\alpha X_\alpha}+\mathbf{B_\beta X_\beta}\]
<p>Where the $\alpha$ vectors contain the positive weights and the $\beta$
vectors, the negative weights. We want to scale up the negative weights
(make them larger, though still negative), so we introduce a scaling
factor, $\lambda$, and set $\beta_p=0$:</p>
\[\mathbf{B_\alpha X_\alpha}+ \lambda \mathbf{B_\beta X_\beta}=0\]
<p>We now solve for $\lambda$:</p>
\[\lambda = - \frac{\mathbf{B_\alpha X_\alpha}}{\mathbf{B_\beta X_\beta}}\]
<p>Assuming a leverage ratio of one, i.e:</p>
\[\sum_{i=1}^j |x_{\alpha,i}| + \sum_{i=1}^k |x_{\beta,i}| = 1\]
<p>We can calculate the net long exposure, using $\lambda$ as a parameter:</p>
\[l = \frac{1-\lambda}{1+\lambda}\]
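Putting the derivation together, a sketch of the beta-neutralizing rescale might look like this (the function and variable names are hypothetical):

```python
import numpy as np

def beta_neutral_scale(weights, betas):
    """Scale the short (negative-weight) side by lambda so the portfolio
    beta is zero, then renormalize gross leverage back to one."""
    w = np.asarray(weights, dtype=float)
    b = np.asarray(betas, dtype=float)
    long_beta = b[w > 0] @ w[w > 0]     # B_alpha X_alpha
    short_beta = b[w < 0] @ w[w < 0]    # B_beta X_beta
    lam = -long_beta / short_beta
    w = np.where(w < 0, lam * w, w)     # scale up the shorts
    return w / np.abs(w).sum()          # gross leverage back to 1

# For a dollar-neutral input, the net long exposure of the result,
# w.sum(), works out to (1 - lambda) / (1 + lambda).
```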
<p>Below is the graph of net long exposure over time:</p>
<p><img src="/assets/ne.png" alt="Net long exposure" /></p>
<p>And the cumulative return:</p>
<p><img src="/assets/betacum2.png" alt="Cumulative return" /></p>
<p>Wow, that looks a lot better! Ignoring the Great Recession, the return stream
seems very solid, with limited volatility. Below is a table of some
additional strategy information:</p>
<table>
<thead>
<tr>
<th>Metric</th>
<th>Value</th>
</tr>
</thead>
<tbody>
<tr>
<td>Beta</td>
<td>1.8%</td>
</tr>
<tr>
<td>Ann. Ret</td>
<td>4.0%</td>
</tr>
<tr>
<td>Ann. Vol</td>
<td>4.6%</td>
</tr>
<tr>
<td>Ret/Vol</td>
<td>0.84</td>
</tr>
</tbody>
</table>
<p>With a beta of only 1.8%, it's clear that our method for reducing beta
exposure works quite well. Still, it's surprising that a net long
exposure of such magnitude can still be taken with little to no
correlation to the market. Evidently, there is indeed an excess return
associated with low-beta equities, and quite a large one, at that.</p>
<h2 id="explanation">Explanation</h2>
<p>Many academics have tried to explain the structural and behavioral
reasons for the low beta anomaly. The most common explanation put forth
is that due to leverage constraints and borrowing costs, investors seek
out high beta securities in order to achieve a higher natural return. If
high leverage ratios were available to all investors, one might expect
that this mispricing of high beta stocks might go away, as the beta of
any given stock would become less important.</p>
<p>Another possible explanation is that the conflict of interest between
money managers and clients creates an incentive for fund managers to
take excess risk through high beta equities. In good times, managers
collect a performance fee and a management fee while in bad times, only
a management fee. This asymmetric payoff incentivizes managers to try and
score a big "win," while limited liability prevents them from ever
sharing in their clients' losses.</p>
<p>A third explanation is that high volatility (and thus high beta) stocks
receive more attention from the financial community as they are simply
more interesting to discuss. This interest and attention encourages
increased buying, thus pushing down the expected return of said stocks.</p>
<h2 id="conclusion">Conclusion</h2>
<p>In this post we have shown that there is indeed a significant abnormal
return associated with low beta stocks. This abnormal return can not
only be captured with a long only (and beta exposed) portfolio, but also
a beta neutral one. Despite the less than stellar risk-adjusted return
of the strategy, perhaps the Sharpe ratio can be improved by controlling
sector and style risks in addition to beta exposure. Perhaps one could
also overlay the low beta factor on top of an existing factor strategy
in order to reduce volatility and boost the risk-adjusted return.</p>
<p>That's all for now, and thanks for reading! If you're interested in
the code and want to play around with it, check it out
<a href="https://www.quantopian.com/posts/low-beta-anomaly">here</a>.</p>Analysis of CS:GO Win-rates2020-06-12T00:00:00+00:002020-06-12T00:00:00+00:00https://smabie.github.io/posts/2020/06/12/csgo<p>This post is going to be a little different from usual; instead of
markets, we're going to look at a video game, namely <a href="https://en.wikipedia.org/wiki/Counter-Strike:_Global_Offensive">Counter-Strike:
Global
Offensive</a>
(CS:GO). CS:GO, like most great games, is easy to learn but deceptively
hard to master. For those who are unfamiliar with the game, we'll give a
quick overview below.</p>
<p>CS:GO is a competitive zero-sum game in which two teams of 5 players
each try to win rounds. The first team to 16 points wins the game. The
game is asymmetric as there are two distinct sides: the terrorists
(T-side) and counter-terrorists (CT-side). After 15 rounds, each team
switches sides. The goal of the T-side is to either eliminate all CT
players or to plant a bomb and have it explode before the CT-side can
defuse it; the CT-side wins the round if they kill all the players on the
T-side before the bomb is planted, if they defuse the bomb, or if time
runs out on the round. This asymmetric rule-set implies that the CT-side
can lose a round even if they kill all the opposing players, while the
T-side cannot lose if they eliminate all CT players.</p>
<p>In order to get a better idea of the structure of the game, let's look
at a top-down perspective of one of the most famous and iconic maps,
Dust2:</p>
<p><img src="/assets/dust2.png" alt="Dust2" /></p>
<p>T-side starts the game in T-spawn and must plant their bomb at one of
two locations: B-site or A-site, designated by the red areas on the map.
The CTs start off in CT spawn and must try to defend these two sites.
If T-side manages to break the CT defenses and plant the bomb, the
remaining players on CT try and retake the bombsite (either A or B) and
defuse the bomb before it explodes.</p>
<p>Because of the asymmetric nature of the game, I thought it would be
interesting to analyze how much a kill affects the game for each side.
To start off with, we'll first look at the situation where the number
of players on each side is equal.</p>
<h2 id="even-match-ups">Even Match-ups</h2>
<p>First up, let's look at the so-called even match-up, where there are an
equal number of players on each team: 5v5, 4v4, etc. Using over 400,000
rounds of match data from mid-2018 (click
<a href="https://www.kaggle.com/skihikingkevin/csgo-matchmaking-damage">here</a>
for the original dataset), we aggregate all rounds with even match-ups
and the side that wins. From this, we can calculate a cumulative win
probability for each match-up:</p>
<p><img src="/assets/evencs.png" alt="Even Match-ups" /></p>
<p>From the above graph, it seems that the T-side enjoys a significant
advantage. Even when the match starts, the CT-side has a less than 50%
chance of winning the round, and as trades are made (a situation where each team
loses a player), the advantage the T-side has only goes up. But is this
significant? Let's look at a table of CT-side win rates and their
associated p-value:</p>
<table>
<thead>
<tr>
<th>CT win rate</th>
<th>p-value</th>
<th>players</th>
</tr>
</thead>
<tbody>
<tr>
<td>0.48729</td>
<td>2.67482e-55</td>
<td>5</td>
</tr>
<tr>
<td>0.469768</td>
<td>1.17974e-151</td>
<td>4</td>
</tr>
<tr>
<td>0.455098</td>
<td>3.84482e-237</td>
<td>3</td>
</tr>
<tr>
<td>0.439982</td>
<td>1.52963e-319</td>
<td>2</td>
</tr>
<tr>
<td>0.430299</td>
<td>2.35914e-282</td>
<td>1</td>
</tr>
</tbody>
</table>
<p>It's clear from the minuscule magnitude of the p-values that we can
reject the null hypothesis, namely that each even match-up is fair: a
50% chance of each side winning the round.</p>
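These p-values come from testing the observed win counts against a fair 50/50 coin. An exact two-sided binomial test can be written from scratch with the standard library; note that for round counts as large as this dataset's you would use something like <code>scipy.stats.binomtest</code> instead, since the direct probability computation below eventually underflows. This sketch (the function name is mine) shows the small-n logic:

```python
from math import comb

def binom_pvalue(k, n, p=0.5):
    """Exact two-sided binomial test: total probability of every outcome
    at most as likely as observing k successes out of n under rate p."""
    pmf = [comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(n + 1)]
    # The tiny epsilon guards against floating-point ties
    return sum(q for q in pmf if q <= pmf[k] * (1 + 1e-12))
```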
<h2 id="all-match-ups">All Match-ups</h2>
<p>Now that we've established that even when each side has the same number
of players the T-side has an advantage, let's consider all the possible
combinations. Clearly a situation where 5 CTs are up against 4 Ts is not
a fair fight (we might assume that the CTs have the advantage), but how unfair
is it? After crunching the numbers for each permutation, we get the
following graph:</p>
<p><img src="/assets/csperm.png" alt="All Permutations" /></p>
<p>Interesting! When the CT-side has close to the same number of players as
the T-side, the first kills make the most difference. A 5v5 for the CT-side gives
them a 48% chance of victory, but netting the first kill shifts the odds
considerably to 68%. When the difference is large, the final kills shift
the odds the most, as the chance of the CT-side winning is
so low to start off with. Below is a table of the exact figures:</p>
<table>
<thead>
<tr>
<th>1 CT</th>
<th>2 CT</th>
<th>3 CT</th>
<th>4 CT</th>
<th>5 CT</th>
<th># T alive</th>
</tr>
</thead>
<tbody>
<tr>
<td>0.00218907</td>
<td>0.0227307</td>
<td>0.106122</td>
<td>0.274956</td>
<td>0.487288</td>
<td>5</td>
</tr>
<tr>
<td>0.00689667</td>
<td>0.0682988</td>
<td>0.235994</td>
<td>0.469768</td>
<td>0.683482</td>
<td>4</td>
</tr>
<tr>
<td>0.0287596</td>
<td>0.187184</td>
<td>0.455091</td>
<td>0.70026</td>
<td>0.858241</td>
<td>3</td>
</tr>
<tr>
<td>0.123364</td>
<td>0.439972</td>
<td>0.732706</td>
<td>0.893306</td>
<td>0.961373</td>
<td>2</td>
</tr>
<tr>
<td>0.430299</td>
<td>0.7915</td>
<td>0.941975</td>
<td>0.985024</td>
<td>0.9967</td>
<td>1</td>
</tr>
<tr>
<td>0.906434</td>
<td>0.989945</td>
<td>0.998562</td>
<td>0.999731</td>
<td>0.999873</td>
<td>0</td>
</tr>
</tbody>
</table>
<h2 id="conclusions">Conclusions</h2>
<p>I know little about professional CS:GO strategy and the data isn't
taken from professional games, but we can quickly infer a couple things
from the data:</p>
<ol>
<li>The T-side wants to trade down as much as possible; the CT-side never
wants to trade.</li>
<li>The CT-side needs to play very conservatively, maximizing the number of
players on their team that are alive.</li>
<li>The T-side wants to play in a very aggressive style in order to take map
control and trade.</li>
</ol>
<p>Anyways, this has been a fun little post to write, I hope you enjoyed
it! Click <a href="https://github.com/smabie/csgo-kill-value">here</a> to view the
GitHub project.</p>ETFs, Volatility and Leverage: Towards a New Leveraged ETF Part 32020-06-09T00:00:00+00:002020-06-09T00:00:00+00:00https://smabie.github.io/posts/2020/06/09/vol3<p>In the final installment of this three part series, we are going to use
our results from the previous two posts to construct a fully automated
variable leverage ETF. In <a href="https://cryptm.org/posts/2019/10/04/vol.html">part
one</a>, we derived the
optimal leverage ratio for maximizing returns and in <a href="https://cryptm.org/posts/2020/05/28/vol2.html">part
two</a> we applied the ARMA
and GARCH models to forecast returns and volatility, respectively.</p>
<p>The ETF we are going to build will take variable leverage with a minimum
leverage of 1 and a maximum leverage of 3. This means that the leverage
ratio we will be assuming will deviate considerably from what is
optimal. The reason for this is twofold: 1) many investors don't want
to short the market, regardless of what our model says, preferring a
baseline of pure beta exposure, and 2) the SEC rejected an application
for the creation of a 4x S&P 500 ETF, so it would be unlikely that a
higher leverage ratio could be assumed, at least in an ETF wrapper. If
one was instead obtaining leverage directly through futures, -20x to 20x
leverage could be taken.</p>
<p>Right now, we have two components, $\text{E}(r_m)$, the expected monthly
return generated from our ARMA model, and $\text{E}(\sigma_d)$, the
expected daily volatility from our GARCH model. First off, we need to
forward fill our monthly return data, in order to generate return data
for every day. We forward fill instead of backfill in order to avoid
lookahead bias. Also, because our ARMA model forecasts the expected
return one month ahead, we convert our daily volatility into monthly
variance: $21\text{E}(\sigma_d)^2$. So our raw leverage ratio becomes:</p>
\[l = \frac{\text{E}(r_m^{\text{fill}})}{21\text{E}(\sigma_d)^2}\]
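In pandas, the forward fill and the monthly-variance scaling might be sketched like this (the function and series names are assumptions, not the notebook's actual code):

```python
import pandas as pd

def raw_leverage(monthly_ret_forecast, daily_vol_forecast):
    """l = E[r_m] / (21 * E[sigma_d]^2), with the monthly return forecast
    forward-filled onto the daily index of the volatility forecast."""
    r = monthly_ret_forecast.reindex(daily_vol_forecast.index).ffill()
    return r / (21 * daily_vol_forecast ** 2)
```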
<p>Let's look at graph of our leverage ratio over time to get a sense of
what we're dealing with:</p>
<p><img src="/assets/olev.png" alt="Optimal leverage" /></p>
<p>Oof, most investors won't be happy with that! Not only is our optimal
leverage calculation taking huge short and long positions, but it's
changing the direction of the portfolio very frequently. We want to
smooth out the changes in leverage as well as constrain the amount taken
between 1 and 3. In order to do this, we apply the following
transformations on the time-series:</p>
<ol>
<li>
<p>First, if the leverage ratio is less than 1 at a time-point, we set
it to 0.</p>
</li>
<li>
<p>In order to constrain the values, we then add $e$ and logscale it.
This should give us values approximately between 1 and 3.</p>
</li>
<li>
<p>To smooth the leverage, we apply an <a href="https://pandas.pydata.org/pandas-docs/stable/user_guide/computation.html#exponentially-weighted-windows">exponential rolling
window</a>,
with an $\alpha$ of 0.05.</p>
</li>
<li>
<p>If any values are above 3, we set them to 3.</p>
</li>
</ol>
<p>Below is the full transformation:</p>
\[f(x) =\begin{cases}
x &\text{if }x > 1\\
0 &\text{else}\\
\end{cases}\]
\[l_t^{\text{adj}} = \min(\text{EWM}_{\alpha=0.05}[\log(f(x_t)+e)],\,3)\]
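The four steps above map almost one-to-one onto pandas. This is a sketch with a made-up function name; the α of 0.05 and the cap of 3 are the post's stated parameters:

```python
import numpy as np
import pandas as pd

def adjust_leverage(raw, alpha=0.05, cap=3.0):
    """Constrain and smooth a raw leverage series to roughly [1, cap]."""
    x = raw.where(raw > 1, 0.0)    # step 1: zero out sub-1 (and short) signals
    x = np.log(x + np.e)           # step 2: shift by e and log-scale
    x = x.ewm(alpha=alpha).mean()  # step 3: smooth the transitions
    return x.clip(upper=cap)       # step 4: hard cap at 3
```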
<p>Consider the graph of adjusted leverage:</p>
<p><img src="/assets/alev.png" alt="Adjusted leverage" /></p>
<p>Though we've deviated significantly from the optimal leverage, our
adjusted leverage looks a lot more reasonable. The transitions are
smoother and the values are bounded between 1 and 3, as desired. Now we
simply multiply the leverage ratio by the returns of the S&P 500 and we
have our strategy! This is the moment of truth, let's look at a graph
of the returns of the S&P 500, the returns of our ETF, and the leverage
ratio:</p>
<p><img src="/assets/etf.png" alt="variable leverage S&P 500 vs S&P 500 vs leverage
ratio" /></p>
<p>Not bad! Our ETF is behaving as expected: taking on more and more
leverage during bull runs and reducing exposure when the market drops.
The leverage ratio fluctuates between 1 and 2 due to our smoothing
factor, though more risk could be taken by choosing a greater $\alpha$
value.</p>
<h2 id="conclusion">Conclusion</h2>
<p>At long last, we've reached the end of this series. We've talked about
investors' mostly irrational aversion to holding leveraged ETFs over a
long period of time, derived the optimal leverage ratio to maximize
returns, created models to forecast returns and volatility, and used all
of this to create a variable leverage ETF. Could such a product be
brought to market, and would investors be interested? Can investors be
convinced to hold any product that isn't just vanilla beta? I'm not
sure, but it is certainly an interesting opportunity. No product on the
market fulfills this niche, if there is even one to exploit. I hope you
enjoyed this series as much as I enjoyed researching and writing it.
Though we did succeed in creating a rudimentary model, so much more work
could be done on it. I believe that a variable leverage product could
have great potential in the retail space, especially now that vanilla
beta has been completely commoditized. New ETF ideas that are "beta
plus" not only have the potential to deliver value to investors, but
also could command much higher fees.</p>
<p>You can check out the notebook
<a href="https://github.com/smabie/towards-a-new-etf-part3">here</a>. Feel free to
play around with any and all parameters.</p>

<h1>ETFs, Volatility and Leverage: Towards a New Leveraged ETF Part 2 (2020-05-28)</h1>

<p>In <a href="https://cryptm.org/posts/2019/10/04/vol.html">part one</a>, we looked
into the relationship between volatility, returns, and leverage and
derived an equation for the optimal leverage ratio that maximizes the
expected return of a portfolio. This leverage ratio is dependent on two
principal inputs: expected variance and expected returns:</p>
\[l = \frac{r_b}{\sigma^2_b}\]
<p>Where $r_b$ is the unlevered return and $\sigma^2_b$, the variance of
the portfolio. Our next task, though clear, is hardly straightforward:
we must forecast the future expected return and volatility of the
portfolio in order to set an appropriate leverage ratio. In Part three,
we'll use our forecasts developed in this post, with modifications and
extensions, to realize a complete trading strategy that attempts to
deliver a higher return than the market through the application of
variable leverage.</p>
<h2 id="autoregressive-model">Autoregressive Model</h2>
<p>There are countless modeling techniques that attempt to do
time-series forecasting, each of varying complexity and sophistication.
To start off with, let's consider an autoregressive (AR) model:</p>
\[\text{AR}(p): x_t = \alpha + B_1x_{t-1}+B_2x_{t-2}+\cdots+B_px_{t-p}+\epsilon_t\]
<p>Simply put, an AR model is a multiple regression on previously
observed values with the addition of a white noise component. Values of
$x$ come from previous observations in the series. For example, a
model that used today's returns as a predictor for tomorrow's
returns could be approximately formalized as an AR(1) model with
$\alpha=0$ and $B_1=1$:</p>
\[\text{AR}(1): x_t = x_{t-1} + \epsilon_t\]
<p>Note that $\epsilon_t\sim N(0, \sigma^2)$, which means it is a white
noise term with a mean of zero and a standard deviation that equals the
volatility of the time series in question.</p>
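To make this concrete, here is a minimal numpy simulation of an AR(1) process; the parameter values are illustrative, not taken from market data:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, b1, sigma = 0.0, 0.6, 1.0  # AR(1) parameters (illustrative)

n = 5000
x = np.zeros(n)
for t in range(1, n):
    # x_t = alpha + B_1 * x_{t-1} + epsilon_t
    x[t] = alpha + b1 * x[t - 1] + rng.normal(0.0, sigma)

# for a stationary AR(1), the lag-1 autocorrelation should be near b1
acf1 = np.corrcoef(x[1:], x[:-1])[0, 1]
```

For a stationary AR(1) (that is, $|B_1| < 1$), the theoretical lag-1 autocorrelation equals $B_1$, which the sample estimate recovers for a long enough series.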
<p>There are various ways to find appropriate lag orders, but perhaps
the most straightforward is to examine the series' autocorrelation
structure, starting with the autocorrelation function (ACF). The ACF
takes in a number that represents the lag we are interested in and
calculates the correlation between the series and a copy of itself
shifted by that lag. For example, to find the ACF at a lag of one, we
would simply find the correlation between all values (except the first
one) and the previous value in the time series. By calculating the ACF
over a number of lags, we can discover if there is serial correlation
between values or if they are independent. For example, if we found a
correlation, positive or negative, between today's market returns and
yesterday's, but not the day before yesterday's, a low-order model
should capture the relationship. A sharp cutoff in the ACF after lag
$q$ is the classic signature of the moving average model we discuss
next; to pick the AR order $p$, we will instead use the partial
autocorrelation function.</p>
<h2 id="moving-average-model">Moving Average Model</h2>
<p>Another type of simple model often used to forecast time series is
the moving average (MA) model:</p>
\[\text{MA}(q):x_t = \mu + \epsilon_t + \theta_1\epsilon_{t-1}+\cdots+\theta_q \epsilon_{t-q}\]
<p>An MA model is defined by its lag, $q$, the number of previous error
terms to consider when generating a forecast. Each forecast is
parameterized by the mean $\mu$, which must remain constant, and by
$\theta_1\cdots\theta_q$, the lag exposures.
$\epsilon_{t-q}\cdots\epsilon_t$ are lagged white noise terms generated
from previous observations. These epsilon error terms are unobservable,
independent of each other, and normally distributed. It is important to
note that because $\mu$ is constant, the time series in question must be
stationary: its mean and variance must not change over time.</p>
<p>To find the AR order $p$, we use the partial autocorrelation function
(PACF) rather than the ACF. The PACF is similar to the ACF, except that
it calculates a partial correlation instead of a regular correlation
over the lagged values. Partial correlation differs in that it controls
for specified variables before the correlation is computed. In the case
of the PACF, we control for the linear dependence on the values in
between. For example, let's say we're looking at the PACF of lag 2 and
have three variables to consider: $x_t$, $x_{t-1}$, and $x_{t-2}$. At
lag 1, the partial autocorrelation is the same as the normal correlation
between $x_t$ and $x_{t-1}$. But when considering $x_t$ and $x_{t-2}$,
we first remove the dependence that each has on $x_{t-1}$ before running
the correlation. A sharp cutoff in the PACF after lag $p$ suggests an
AR($p$) model.</p>
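The lag-2 partial autocorrelation described above can be computed by hand: regress out the intermediate value, then correlate the residuals. A minimal numpy sketch (our own helper, not a library function):

```python
import numpy as np

def pacf_lag2(x: np.ndarray) -> float:
    """Correlation of x_t and x_{t-2} after removing each one's
    linear dependence on the intermediate value x_{t-1}."""
    x0, x1, x2 = x[2:], x[1:-1], x[:-2]  # x_t, x_{t-1}, x_{t-2}

    def residual(y, z):
        # residual of y after an OLS regression on z (with intercept)
        beta = np.cov(y, z, ddof=0)[0, 1] / np.var(z)
        return (y - y.mean()) - beta * (z - z.mean())

    return np.corrcoef(residual(x0, x1), residual(x2, x1))[0, 1]
```

For an AR(1) process the theoretical PACF is zero at every lag beyond 1, so applying this to a long simulated AR(1) series should give a value near zero. In practice one would use a library routine such as statsmodels' `pacf` rather than rolling this by hand.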
<h2 id="arima-model">ARIMA Model</h2>
<p>A common approach is to combine both of these models, forming an ARMA
model:</p>
\[\text{ARMA}(p,\,q): \text{AR}(p) + \text{MA}(q)\]
<p>Note that an ARMA model is only appropriate when the series is
stationary: the mean and variance must not change over time. Because of
this, such a model is inappropriate for the forecasting of price data,
instead returns must be used. What we're implicitly doing is
differencing the price series, also called integration of order one. An
ARIMA (autoregressive integrated moving average) model captures this
differencing:</p>
\[\text{ARIMA}(p,\,d\,,q)\]
<p>Where $d$ is the number of times we differenced. This implies that
ARIMA(1, 1, 1) on price data is identical to ARMA(1, 1) on return data:</p>
\[\text{ARIMA}(p,\,1,\,q)\ \text{on}\ p_t \equiv \text{ARMA}(p,\,q)\ \text{on}\ r_t\]
<p>Also note that:</p>
\[\text{ARIMA}(p,\,0,\,q) = \text{ARMA}(p,\,q)\]
<p>In order to find an appropriate $d$, we can use the Augmented
Dickey-Fuller (ADF) test to iteratively determine the integration order
necessary. The ADF test produces a p-value for the null hypothesis that
the series has a unit root, i.e. that it is not stationary. The process
for determining the integration order, and therefore $d$, for an ARIMA
model might look as follows:</p>
<ol>
<li>Set $d=0$</li>
<li>Apply the ADF test on the series</li>
<li>If $p\leq0.05$, return $d$</li>
<li>Otherwise, take the difference between each successive value and
increment $d$.</li>
<li>Go to step 2</li>
</ol>
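A sketch of that loop, with the stationarity test abstracted out as a callable (in practice one would pass something like `lambda s: adfuller(s)[1]` from statsmodels; `pvalue` here is our own placeholder):

```python
import numpy as np

def integration_order(x: np.ndarray, pvalue, max_d: int = 5) -> int:
    """Difference x until the supplied stationarity test's p-value
    drops to 0.05 or below, returning the number of differences d."""
    d = 0
    while d < max_d and pvalue(x) > 0.05:
        x = np.diff(x)  # difference successive values
        d += 1
    return d
```

For daily or monthly price series, this almost always returns $d = 1$: one round of differencing (i.e. working with returns) is enough.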
<h2 id="forecasting-returns">Forecasting Returns</h2>
<p>Now that we've covered some background on time series forecasting,
let's jump into it: predicting the future return. We will be using an
ARMA instead of an ARIMA model, as in general, returns are stationary
over long periods of time. First, we should look at the ACF output,
which informs the MA order $q$. Different return windows might exhibit
different serial correlations, so let's look at the ACF of one day, one
week, and one month returns between 2003-01-01 and 2020-01-01:</p>
<p><img src="/assets/acf.png" alt="ACF" /></p>
<p>It definitely seems that both weekly and daily data do not exhibit
very much, if any, serial correlation between subsequent periods.
Monthly returns, on the other hand, exhibit a weak correlation between
adjacent months. From this graph, a $q$ of 1 is probably the most
appropriate value.</p>
<p>Moving on to the PACF, which informs the AR order $p$, we observe
much the same results as before:</p>
<p><img src="/assets/pacf.png" alt="PACF" /></p>
<p>This isn't surprising: over short lag windows with little serial
correlation, the PACF should look relatively similar to the ACF. As
before, we'll decide on a $p$ of 1.</p>
<p>In order to avoid lookahead bias, we will use a starting window value of
96 months, or 8 years. For each subsequent month, we will add the
observation and then refit our ARMA parameters. This means that our
window increases by one every single iteration: on our second to last
observation, we are using almost all of the available data to train the
model. Another possible approach would be to use a constant rolling
window, though we would have to take care to ensure that each window was
stationary. A third approach would be to use some percentage of the data
for fitting parameters and the rest for validation, but an increasing
window ensures that new data is always used for fitting as it is made
available. Below is the graph of the forecasted versus actualized return
for the S&P 500:</p>
<p><img src="/assets/armaret.png" alt="Actual vs forecasted S&P 500 return" /></p>
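The expanding-window procedure can be sketched as follows. For brevity we refit an AR(1) by ordinary least squares at each step rather than the full ARMA(1, 1) (statsmodels would handle the latter), so this illustrates the refitting loop, not the exact model:

```python
import numpy as np

def expanding_ar1_forecasts(r: np.ndarray, start: int = 96) -> np.ndarray:
    """One-step-ahead forecasts: at each t, refit an AR(1) on all
    observations before t (an expanding window) and predict r[t]."""
    preds = []
    for t in range(start, len(r)):
        y, x = r[1:t], r[: t - 1]                # y_t regressed on y_{t-1}
        X = np.column_stack([np.ones_like(x), x])
        alpha, b1 = np.linalg.lstsq(X, y, rcond=None)[0]
        preds.append(alpha + b1 * r[t - 1])
    return np.asarray(preds)
```

Each iteration uses one more observation than the last, exactly as described above: by the penultimate step, nearly the whole history is used for fitting.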
<p>Though not exceptional, the model does get the direction of the return
(either up or down) right 55% of the time: not terrible for a simple and
straightforward model. It's also evident that the model significantly
underestimates the volatility of the market, with the magnitude of
predicted returns being quite modest. This is unsurprising: market
returns exhibit excess kurtosis compared to a normal distribution,
while the ARMA model assumes normally distributed residuals.
All in all, this is a decent start, especially considering that
predicting returns with a high accuracy is a notoriously difficult, and
perhaps intractable, problem.</p>
<h2 id="vix-index">VIX Index</h2>
<p>Now that we have the foundations of the return model out of the way, we
can move on to the volatility component. Two principal methods of
forecasting volatility are commonly used: the VIX and GARCH. Let's
first discuss the VIX.</p>
<p>By using the implied volatility of S&P 500 calls and puts, the VIX index
aims to predict the one month volatility of the market. Given the current
prices of puts and calls on the S&P 500, we can use the Black-Scholes
options model to solve for the implied volatility: the future volatility
necessary to justify the current prices. At the time of writing, the VIX
index is at 28.59. This means that the expected annualized volatility
over the next month is 28.59%; in one-month terms that is about 8.3%, or
28.59% divided by the square root of 12. Unfortunately, using the VIX as
a measure of
future volatility has some problems: because options are primarily used
to hedge downside risk, the VIX often underestimates the future upside
volatility and overestimates the future downside volatility.</p>
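As a sanity check on the units: the VIX quotes volatility in annualized percentage terms, so converting a reading to an expected one-month move means dividing by $\sqrt{12}$:

```python
import math

vix = 28.59                          # VIX reading: annualized vol, in %
one_month_vol = vix / math.sqrt(12)  # expected volatility over one month
```

This works the same way for any horizon: volatility scales with the square root of time, so a daily figure would divide by $\sqrt{252}$ instead.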
<p>In a strong bull market, call options are often undervalued, as there
are other, more common ways to express a long view on the market, such
as futures or equities. During bear markets, put options become
overpriced as they are one of the only ways to hedge a portfolio without
assuming unlimited price risk (such as shorting or selling futures). The
consequence of this is that the VIX often serves as nothing more than an
index that represents the inverse of market returns: when the market
goes up, the VIX goes down, and vice-versa. Below are two graphs of the
correlation between the VIX and future one month volatility and previous
volatility and future volatility:</p>
<p><img src="/assets/vix.png" alt="VIX vs lagged Volatility" /></p>
<p>Perhaps surprisingly, the VIX has practically the same predictive power
as simply using last month's volatility.</p>
<h2 id="garch-model">GARCH Model</h2>
<p>Instead of using the VIX, we are going to use a Generalized
Autoregressive Conditional Heteroskedasticity (GARCH) model. GARCH is
more complicated than ARIMA, so we won't get into the mathematics of
how it works, but, in short, GARCH allows for the volatility to
experience "shocks" through time. The output of a GARCH model is the
conditional volatility: the instantaneous volatility with respect to
some model. One can think of the conditional volatility as the
unobservable latent volatility that changes over time. Unlike
traditional volatility, which must be calculated over some time window,
conditional volatility exists at an instantaneous point of time. Though
we're not going to talk about GARCH parameters in this post, for those
that are already familiar with GARCH, note that we are using the
standard GARCH(1, 1) model, the most common parameters for forecasting
the volatility of returns.</p>
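While we're skipping the estimation mathematics, the recursion at the heart of GARCH(1, 1) is short enough to show. Below is a sketch of the conditional variance filter; the parameter values are illustrative placeholders (in practice $\omega$, $\alpha$, and $\beta$ are fit by maximum likelihood, e.g. with the <code>arch</code> package):

```python
import numpy as np

def garch11_cond_vol(r, omega=1e-6, alpha=0.1, beta=0.85):
    """Conditional volatility under GARCH(1,1):
    sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1}"""
    r = np.asarray(r, dtype=float)
    sigma2 = np.empty(len(r))
    sigma2[0] = r.var()  # initialize at the sample variance
    for t in range(1, len(r)):
        # yesterday's squared return is the volatility "shock"
        sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
    return np.sqrt(sigma2)
```

With $\alpha + \beta < 1$ the process is stationary, and the conditional variance mean-reverts toward $\omega / (1 - \alpha - \beta)$ between shocks.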
<h2 id="forecasting-volatility">Forecasting Volatility</h2>
<p>Unlike with our ARMA model, we are going to use the daily instead of
monthly returns of the S&P 500 for our GARCH model. Much like before, we
are going to take an ever expanding window, starting at 63 days or 3
months. Also like with returns, by the penultimate observation, we will
be using almost all available data for training our model. The following
graph is of annualized conditional volatility vs the annualized 21 day
(one month) volatility:</p>
<p><img src="/assets/vol.png" alt="Conditional volatility vs 21d volatility" /></p>
<p>Just from the graph, the accuracy of our model looks <em>very</em> good. And in
fact it is, with a correlation of 97.6% and a Mean Absolute Percentage
Error (MAPE) of only 14.8%. All things considered, this is quite good
and highlights the incredible predictive power of GARCH models in
forecasting volatility.</p>
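For reference, the error metric quoted above, mean absolute percentage error, is simply:

```python
import numpy as np

def mape(actual, forecast):
    """Mean absolute percentage error, in percent."""
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    return np.mean(np.abs((actual - forecast) / actual)) * 100
```

Note that MAPE is undefined when any actual value is zero, which is why it suits volatility (strictly positive) better than raw returns.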
<h2 id="conclusion">Conclusion</h2>
<p>Now that we have a monthly model for forecasting returns and a daily
model for forecasting volatility, we conclude this blog post.
Modifications will need to be made to turn this into a real, long-only S&P
500 ETF, but a good foundation has been laid. I hope you enjoyed this
post! Click <a href="https://github.com/smabie/towards-a-new-etf-part2">here</a> to
check out the notebook. If you install
<a href="https://www.anaconda.com/">anaconda</a>, no additional Python libraries
should be necessary to execute the notebook.</p>

<h1>One Weird Trick to Profit off of a Global Market Meltdown (2020-05-18)</h1>

<p>Did you know there's one weird trick that Wall Street doesn't want you
to know? For only one payment of $39.99, you can get access to this
limited time only exclusive video that will show you how to fight that
bear market! Click here before Wall Street makes it illegal… Or you
could just read this post, I guess.</p>
<p>Anyways, when backtesting data I came across an interesting and
surprisingly simple strategy that has performed very well during the
coronavirus period. Simply put, most of the volatility during this
period has occurred overnight, that is, between market close and the
opening auction the next day. This volatility has primarily been
negative, with large downward price differences between the closing
price and the opening price the next day. The intraday market on the
other hand has mostly stayed flat or gone up modestly. Let's take a look
at three different portfolios: one where we just hold the S&P 500, the
other where we buy the S&P 500 at market open and sell at market close,
and a third where we do the same but also short the S&P 500 between the
close and market open:</p>
<p><img src="/assets/owtr.png" alt="" /></p>
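The three portfolios above can be built from daily open and close prices alone. A sketch (the column names are ours):

```python
import pandas as pd

def decompose_returns(open_px: pd.Series, close_px: pd.Series) -> pd.DataFrame:
    """Split daily performance into intraday (open -> close) and
    overnight (prior close -> open) components."""
    intraday = close_px / open_px - 1
    overnight = open_px / close_px.shift(1) - 1
    return pd.DataFrame({
        "buy_and_hold": close_px.pct_change(),       # close-to-close
        "long_intraday": intraday,                   # flat overnight
        "long_intraday_short_overnight": intraday - overnight,
    })
```

Cumulative performance for each strategy then follows from compounding each column, e.g. `(1 + df).cumprod()`.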
<p>We see that before coronavirus significantly impacts the market, the
returns of all three strategies are relatively similar: little
dispersion is observed in the overnight market and large moves are
primarily contained to the regular intraday session. Around the end of
February, things begin to change: global coronavirus news begins to
dramatically increase the difference between the closing and opening
prices. By executing a strategy as simple as only holding the index
intraday, we convert a 10% loss into a 10% profit, a pretty incredible
difference. Taken one step further, we can even short the overnight
market and net an eye-popping 25% gain over this five and a half month
period!</p>
<p>While the mechanisms for this trend aren't entirely clear, I suspect
this pattern occurs when global markets start to exhibit a very high
correlation to each other. Since coronavirus is a global problem, most
markets across the world are being affected by the same fundamental
factors. Because of this, the price difference between the close and
open in the US has been a reflection of moves in the European and Asian
markets: when the foreign markets crash, it's highly likely that the
American market will open down in the morning. If, on the other hand,
coronavirus caused a local recession (imagine if coronavirus was only a
US phenomenon), I suspect that we wouldn't see the same overnight
high-volatility trend.</p>
<p>Now, let's look at the previous global recession, here's the same
graph as before but between 2007-06-01 and 2009-06-01:</p>
<p><img src="/assets/owtr2.png" alt="" /></p>
<p>We observe the exact same phenomenon, with large moves in the overnight
markets. Even so, much more volatility was present in the intraday
markets than now: even when using the long intraday, short overnight
strategy we would have still ended up down about 10% over the two year
period. Under our aforementioned hypothesis, this makes sense, as the
overnight difference isn't as pronounced: the US was the epicenter of
the Great Recession, so the overnight markets are affected comparatively
less than during the current period.</p>
<p>Finally, for contrast, the same strategies during the raging bull market
between 2016-01-01 and 2018-01-01:</p>
<p><img src="/assets/owtr3.png" alt="" /></p>
<p>Unsurprisingly, both the long intraday and long intraday, short
overnight strategies leave money on the table compared to the vanilla
portfolio. Even so, both net a not insubstantial return during this
two year period: the long intraday of around 23% and the long intraday,
short overnight of around 8%.</p>
<h2 id="conclusion">Conclusion</h2>
<p>One last thing to keep in mind is that even with zero commissions,
the act of buying after the open and selling before the close incurs
large trading costs: even assuming you're buying large cap stocks with
low spreads (say around 4 basis points, or 0.04%), the annualized cost
comes to around 10% alone!</p>
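The back-of-the-envelope arithmetic: paying a 4 bp spread once per trading day compounds to roughly 10% per year (assuming the full spread is paid on each daily round trip):

```python
daily_cost = 0.0004    # 4 basis points paid per daily round trip
trading_days = 252     # trading days per year

annual_cost = (1 + daily_cost) ** trading_days - 1
```

This is before any market impact or financing costs, so the true drag on a daily in-and-out strategy is, if anything, higher.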
<p>Even so, shorting or zeroing out overnight exposure can be a powerful
tactic during tumultuous markets: especially if markets around the world
are moving in lockstep. In addition, a small return can be made on top
from the lending of overnight money (though it doesn't help that
interest rates are often lowered close to zero during these periods).</p>
<p>I hope you enjoyed this post! If you want to check out the Quantopian
notebook used to generate the above graphs, click
<a href="https://www.quantopian.com/posts/coronavirus-intraday">here</a>. Feel free
to play around with the dates, though unfortunately, data isn't
available before 2002.</p>