<p>Sturm Mabie: quantitative trader interested in finance, machine learning, and functional programming (https://smabie.github.io/feed.xml, updated 2020-10-07).</p>
<h1>Forecasting Market Kurtosis with the Volatility Smile (Poorly)</h1>
<p>2020-09-02, https://smabie.github.io/posts/2020/09/02/smile-fc</p>
<p>Note: I want to thank Algoseek for providing historical SPY option-chain
data at a very reasonable price. Algoseek provides all the market data
you could possibly need, including equity, future, option, forex, crypto
data, and more! If you're interested in high quality market data at low
prices, check them out <a href="https://www.algoseek.com/">here</a>!</p>
<p>Before the Black Monday market crash of 1987, options of different
strike prices for the same expiration had roughly the same implied
volatility (IV), in line with the popular Black-Scholes model. After the
crash, however, sellers of options realized that perhaps they were taking
on too much long-tail risk, since the underlying equities exhibit excess
kurtosis relative to a normal distribution, violating one of the
assumptions of Black-Scholes. Sellers began charging a premium for options
that were either far in-the-money (ITM) or out-of-the-money (OTM).
Let's look at different SPY calls on 2020-05-01 all expiring on
2020-06-01:</p>
<p><img src="/assets/smile.svg" alt="Volatility Smile" /></p>
<p>While Black-Scholes would imply a flat IV curve across strikes (each
option would have the same IV), it's clear from the graph that the far ITM and
OTM options are overpriced with respect to Black-Scholes: this
phenomenon is dubbed the "<a href="https://en.wikipedia.org/wiki/Volatility_smile">volatility
smile</a>." But if the
volatility smile is due to excess kurtosis of the underlying, can we
forecast future market kurtosis using the dispersion of option IVs? In
this short post we'll look at SPY call options between 2012-01-03 and
2020-07-16 in order to see if a higher standard deviation of option IVs
is a predictor of SPY excess kurtosis.</p>
<h2 id="methodology">Methodology</h2>
<p>The methodology is simple: we run a linear regression of the excess
kurtosis of SPY, measured between the current date and the maturity date,
on the standard deviation of the IVs of calls sharing that maturity
date:</p>
\[\text{Kurt}[r_{SPY}] \sim \sigma_{IV}\]
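<p>As a sketch of how such a regression might be run (not the actual code used for this post; the option data isn't public, so the numbers below are synthetic stand-ins), using SciPy:</p>

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic stand-in for the real data set: one observation per
# (date, expiry) pair, with the cross-sectional std of call IVs and the
# realized excess kurtosis of SPY over the matching window.
iv_std = rng.uniform(0.01, 0.10, 500)
kurt = 2.0 + 30.0 * iv_std + rng.normal(0.0, 1.0, 500)  # weak positive relation

# Simple OLS of realized excess kurtosis on IV dispersion
res = stats.linregress(iv_std, kurt)
print(f"slope: {res.slope:.2f}, R^2: {res.rvalue**2:.2%}")
```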
<p>We will restrict the data set to calls that expire before today's
date, so that the kurtosis of the market between the option's observation
date and its maturity can be fully quantified. If the IV of each option at
each strike were the same, in accordance with Black-Scholes, the standard
deviation would be zero; if the market returns fit a log-normal
distribution, the excess kurtosis would be zero as well. Let's look at
the output of the linear regression:</p>
<p><img src="/assets/linreg.png" alt="Linear Regression" /></p>
<p>Not a particularly great fit, but it does seem like there's a positive
correlation between the standard deviation of IVs and future market
kurtosis. The R-squared is 4.12%, which means that the
linear model explains only 4.12% of the variance in the future kurtosis of
the market. Even so, the model does explain some of the future kurtosis of
the market, and the coefficient is positive, meaning that a greater
dispersion of option IVs is indeed positively correlated with future excess
kurtosis.</p>
<h2 id="conclusion">Conclusion</h2>
<p>While not a particularly powerful nor predictive model on its own, this
result does lend some credence to excess kurtosis of the underlying as the
origin of the volatility smile.</p>
<p>Thanks for reading, hope you liked this short post! Like last time, I
won't be sharing the code for this one as the option data isn't
publicly available and without the data, the code doesn't have much
value. If, however, you are interested in accessing the data, check out
<a href="https://algoseek.com">Algoseek</a> for all of your option data needs (and
more!).</p>
<h1>St. Petersburg Paradox, the Martingale, and Risk Management</h1>
<p>2020-08-14, https://smabie.github.io/posts/2020/08/14/pdx</p>
<p>Many financial practitioners have long been interested in gambling and
games of chance, from Ed Thorp, a former hedge fund manager, and his
seminal work on blackjack card counting, <a href="https://www.amazon.com/Beat-Dealer-Winning-Strategy-Twenty-One/dp/0394703103">Beat the
Dealer</a>,
to the famous hedge fund titan poker tournament (the Take 'Em to School
Poker Tournament). The similarity is more than skin deep, with gambling
methods such as the <a href="https://en.wikipedia.org/wiki/Kelly_criterion">Kelly
Criterion</a> even making
their way into investment strategy and risk management. In this
post, we're going to discuss the mathematics and implications for
finance of two different gambling related topics: the St. Petersburg
Paradox and the Martingale Strategy.</p>
<h2 id="st-petersburg-paradox">St. Petersburg Paradox</h2>
<p>First proposed by Bernoulli, the St. Petersburg paradox is a simple game
that illustrates how people's utility functions differ from pure
expected value theory. Given a fair coin, the coin is flipped until tails
appears; if the game ends on flip $n$, the player collects $2^n$ dollars.
We can calculate the expected value easily by summing the probabilities
and payoffs:</p>
\[E = \sum_{i=1}^\infty \frac{1}{2^i} 2^i = 1 + 1 + \cdots + 1 = \infty\]
<p>Even though the expected value is infinite, most people are willing to
pay relatively little to play such a game, since it's unlikely that the
player will win a large sum, and much more likely to win little or
only a moderate amount. The number of flips until the game ends follows
a geometric distribution, a discrete distribution which models the
number of trials needed for the first success (here, the first tails):</p>
\[\text{P}(X = n) = q^{n-1}p\]
<p>Where $q = 1 - p$. Since we're using a fair coin, this simply becomes:</p>
\[\text{P}(X = n) = \left(\frac{1}{2}\right)^n\]
<p>The cumulative distribution function (CDF) gives the chance that the
game ends within $n$ flips:</p>
\[\text{P}(X \leq n) = 1 - \left(\frac{1}{2}\right)^n\]
<p>In order to make this a little more concrete, let's look at a graph of
the likelihood and payoffs:</p>
<p><img src="/assets/pwin.svg" alt="Payoff vs # Heads" /></p>
<p>And a graph of the CDF:</p>
<p><img src="/assets/pcdf.svg" alt="Payoff vs # Heads" /></p>
<p>As we can see, the likelihood of receiving larger and larger payoffs
becomes exponentially smaller: 93.75% of the time the game ends within
the first 4 flips, netting a maximum payoff of only
\$16.</p>
<p>Now imagine that instead of only playing this game once, you could pay a
certain amount and play over and over again until you lost all of your
bankroll. How much would you be willing to pay now? Though probably not
an infinite number of dollars, the value of the game has gone up
significantly for you. Assuming you have an ample bankroll to weather
the storm, over the long run you will profit handsomely: making small
amounts occasionally interspersed with massive windfalls.</p>
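<p>The appeal of repeated play is easy to see in a quick Monte Carlo simulation (a sketch; here a game ending on flip $n$ pays out $2^n$):</p>

```python
import numpy as np

rng = np.random.default_rng(0)

def play_once(rng):
    """Flip a fair coin until tails; a game ending on flip n pays 2**n."""
    n = 1
    while rng.random() < 0.5:  # heads: keep flipping
        n += 1
    return 2 ** n

payoffs = np.array([play_once(rng) for _ in range(100_000)])

# Most games pay very little, but rare enormous payoffs drag the
# sample mean far above the median.
print("median:", np.median(payoffs), "mean:", payoffs.mean(), "max:", payoffs.max())
```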
<h2 id="the-martingale">The Martingale</h2>
<p>The Martingale Strategy is a popular betting strategy that has been used
for centuries by gamblers. The idea is simple: every time you lose a bet,
place a subsequent bet that is double the size so that you can make back
your losses, plus one extra dollar. Assuming an infinite bankroll, the
payoff will always be 1 dollar:</p>
\[2^n - (2^{n-1} + 2^{n-2} + \cdots + 2^0) = 1\]
<p>$2^n$ represents your winning bet, while the sum from $2^{n-1}$ down to $2^0$ is
your cumulative loss up to that point. For example, let's use the simple example
of betting on a fair coin. We have 3 dollars, so we can make a maximum
of two bets of \$1 and \$2. The expected value looks like:</p>
\[\text{E} = 0.5 (\$1) + 0.5 (0.5 (\$1) + 0.5 (-\$3)) = 0\]
<p>Even though the expected value is 0, the chance of losing \$3 is only
25%, while the chance of winning \$1 is 75%. In the general case, we
need $2^n-1$ dollars to place $n$ bets, where the chance of winning \$1
is:</p>
\[\text{P}(\$1) = 1 - \left(\frac{1}{2}\right)^n\]
<p>And losing:</p>
\[\text{P}(-\$(2^n-1)) = \left(\frac{1}{2}\right)^n\]
<p>Let's look at a graph of our chance of winning as a function of our
bankroll:</p>
<p><img src="/assets/mp.svg" alt="Martingale" /></p>
<p>Much like the St. Petersburg Paradox, the Martingale exhibits a
long-tail, except while the former has a long-tail of a windfall, the
Martingale has a long-tail of absolute ruin. In general, if you want a
$\left(\frac{1}{2}\right)^n$ chance of ruin, then you need $2^n-1$
dollars. So for example, to push your chance of losing it all to around
one in a million, you would need about a million dollars to make that
happen; and with such a large capital base, a one dollar profit in the
best case isn't particularly appealing.</p>
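<p>The ruin probabilities above are easy to verify with a simulation (a sketch: fair coin, initial bet of \$1, bankroll of $2^n - 1$ dollars):</p>

```python
import numpy as np

rng = np.random.default_rng(1)

def martingale(bankroll, rng):
    """Double the stake after every loss; stop on the first win, or when
    the next doubled bet can no longer be covered (ruin)."""
    stake, lost = 1, 0
    while stake <= bankroll - lost:
        if rng.random() < 0.5:  # win: recoup all losses plus $1
            return 1
        lost += stake
        stake *= 2
    return -lost  # ruin: the whole bankroll is gone

# A bankroll of 2**5 - 1 = $31 allows at most five successive bets.
results = np.array([martingale(2**5 - 1, rng) for _ in range(200_000)])
ruin_rate = (results < 0).mean()
print(f"empirical ruin rate: {ruin_rate:.4f} (theoretical: {0.5**5:.4f})")
```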
<p>The Martingale at first glance looks like a reasonable strategy, because
the risk of ruin is pushed towards a very long, sometimes unobservable,
tail – which people have a hard time intuitively reasoning about. The
low probability risk of losing it all is not correctly taken into
account, much in the same way that the low probability of a massive
windfall isn't taken into account for the intuitive fair value of the
St. Petersburg Paradox.</p>
<h2 id="application-to-investing">Application to Investing</h2>
<p>Imagine you were a manager of a trading desk, and one of your traders
was offering the St. Petersburg Paradox to clients for a hefty fee while
simultaneously Martingale betting on some derivative. You are unaware of
what he's actually doing, but you can see the return stream from his
trades. He's only been working a couple of months, and yet he's minting
money every day, though he occasionally suffers some moderate to large
losses. Based on his record, you might think that he's doing really
well: while his returns are moderately volatile, he generates a
lot of profit, so his risk-adjusted returns look pretty good.</p>
<p>But you would be wrong: the trader is taking on massive levels of risk.
From your perspective of only looking at his return stream, this
risk is invisible, simply because the disastrous long-tail outcome
has never been observed. When a distribution of returns exhibits a lot
of excess kurtosis and skew, one needs to think very hard about how to
appropriately manage risk. Popular risk models such as <em>Value at Risk</em>
(VaR) that estimate the worst 95% or 99% outcome become insufficient
when the return distribution deviates significantly from that of a
log-normal distribution. If you used a 99% VaR model to measure the risk
of your trader's strategies, it would fail to capture almost all of it,
leaving you and your firm unknowingly exposed to existential extinction
events.</p>
<h2 id="conclusion">Conclusion</h2>
<p>While it's rare that any return stream is perfectly modeled by a
log-normal distribution (most returns exhibit excess kurtosis and
left-side skew), it's a good approximation for some, and completely
unfit for others. Strategies such as selling volatility/variance
insurance or selling far out-of-the-money calls or puts share many
properties with the example of the aforementioned trader. It might look
like superior risk-adjusted returns are being generated, but in fact
this risk is just concentrated into a long-tail and thus unobservable
until disaster strikes. This is often likened to "picking up pennies in
front of a steam-roller."</p>
<p>Thanks for reading, I hope you enjoyed this post! It's a little
different than usual, but a recent Matt Levine
<a href="https://www.bloomberg.com/opinion/articles/2020-06-25/the-government-wants-esg-out-of-pensions">article</a>
about a Canadian pension fund blow-up (from taking long-tail risk) got
me thinking about the similarities between St. Petersburg Paradox and
the Martingale Strategy and long-tail risky strategies such as selling
variance insurance. No code for this post, as only a couple of trivial
graphs were made.</p>
<h1>Implied Volatility of At-The-Money Options</h1>
<p>2020-08-09, https://smabie.github.io/posts/2020/08/09/iv</p>
<p>Note: I want to thank Algoseek for providing historical SPY option-chain
data at a very reasonable price. Algoseek provides all the market data
you could possibly need, including equity, future, option, forex, crypto
data, and more! If you're interested in high quality market data at low
prices, check them out <a href="https://www.algoseek.com/">here</a>!</p>
<p>Since the creation of the <em>Chicago Board Options Exchange</em> (CBOE) in
1973, options have played an important part in financial markets,
allowing investors to hedge their portfolios against drawdowns, obtain
leverage while managing risk, and express nuanced investment hypotheses.
Options' asymmetric nature means that buying options always comes with
limited risk, a desirable property when compared to shorting a stock or
selling futures, both of which have unlimited downside.</p>
<p>In this post we're going to talk about the mathematics of options, how
they are priced, and how to potentially profit off of them.</p>
<h2 id="black-scholes-model">Black-Scholes Model</h2>
<p>Before the advent of <em>Black-Scholes</em> (BS), options were not traded on an
exchange and instead they were bespoke contracts called warrants that
were traded over-the-counter (OTC). Only when a reasonable model (BS)
for pricing options was discovered did a market open for trading them,
the CBOE. Options can be broadly split into two categories:
American-style options and European-style options. While European-style
options can be exercised for the underlying only at expiration,
American-style options can be exercised for the underlying shares at any
time.</p>
<p>BS and its derivatives all concern themselves with European-style
options as they are mathematically easier to work with, though as the
name implies, are not very common in America. Even so, it's very rare
for an investor to exercise their American option early, even when they
have the right to do so. The reason for this is that an option's value
can be decomposed into two parts: the intrinsic value of the option and
the time value of the option. The more in the money an option is, the
higher its intrinsic value; the longer the option has to maturity, the
greater the option's time value. By choosing to exercise an option
instead of merely selling it, an investor will generally make less
money, as they are only capturing the intrinsic value of the option and
missing out on the time value.</p>
<p>Because of this, American options are rarely exercised early, and thus,
BS should give us a relatively reasonable approximation, even if it's
not a completely mathematically correct thing to do.</p>
<p>The original Black-Scholes model has 6 principal inputs: the expected
future volatility of the underlying asset, the current price of the
underlying, the risk-free rate, the time to maturity, the strike price,
and whether the option is a call or a put. Given these inputs, the model
can generate a fair market price, or value, of the option.
Unfortunately, unlike the other inputs, the expected future volatility
of the underlying is unobservable and can only be estimated.</p>
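<p>To make this concrete, here is a minimal sketch (not the code used for this post) of a Black-Scholes call pricer, together with the reverse direction of recovering IV from a market price via a root-finder, using SciPy; the inputs are made up:</p>

```python
import math
from scipy.optimize import brentq
from scipy.stats import norm

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm.cdf(d1) - K * math.exp(-r * T) * norm.cdf(d2)

def implied_vol(price, S, K, T, r):
    """Recover sigma from a market price with Brent's root-finder."""
    return brentq(lambda s: bs_call(S, K, T, r, s) - price, 1e-6, 5.0)

# Made-up inputs: an ATM call, 30 days out, priced at 20% volatility
price = bs_call(S=300, K=300, T=30 / 365, r=0.01, sigma=0.20)
iv = implied_vol(price, S=300, K=300, T=30 / 365, r=0.01)
print(f"price: {price:.2f}, recovered IV: {iv:.4f}")
```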
<p>Using Black-Scholes and a root finding algorithm (such as <a href="https://en.wikipedia.org/wiki/Brent%27s_method">Brent's
method</a>), we can also go
the other way, finding the expected future volatility of the asset by
using the current market price. This expected volatility is called the
<em>implied volatility</em>, or simply IV. For an option to be fairly priced
from a mathematical perspective, the option's IV should be equal to the
realized volatility, which we can only observe after the option has
expired. Let's consider <em>SPY</em> (S&P 500 ETF) options that are
at-the-money (ATM) and also have the shortest time to maturity. Below is
a graph of the annualized IV of puts and calls vs the realized
volatility over a 63-day rolling window:</p>
<p><img src="/assets/iv.svg" alt="IV vs RV" /></p>
<p>Note that because we're simply using a trailing window for the
volatility calculation, this isn't an exact apples-to-apples
comparison. Regardless, the results are interesting: the historical
volatility is almost always lower than the IV for both puts and calls.
From this chart, it's evident that options are rarely sold for a fair
price, with a large premium on puts and a moderate premium on calls. And
this is for close to or ATM options, which tend to have lower IVs than
other options (see <a href="https://en.wikipedia.org/wiki/Volatility_smile">volatility
smile</a> for more
information). This means that the further an option's strike price is
from the current spot price, the more expensive it is compared to what
Black-Scholes predicts. According to Black-Scholes, all
options with the same time-to-maturity should have the same IV,
regardless of strike price.</p>
<p>Starting around 2018, the IVs of calls and puts are actually lower
than the realized volatility, implying that options (both puts and calls)
are cheaper than their mathematical fair value. Also note that puts
almost always command a premium over calls. This should come as no
surprise: there are many more natural buyers of puts (predominantly
investors looking for portfolio insurance) than buyers of calls
(primarily speculators).</p>
<h2 id="implied-volatilityrealized-volatility-arbitrage">Implied Volatility/Realized Volatility Arbitrage</h2>
<p>It's clear from the above graph of IVs that, most of the time, we are
getting ripped off when buying options, so we would want to sell them
instead. Let's consider two simple strategies:
one where we sell an ATM call every day, and one where we sell an ATM put
every day. Below is a graph of the cumulative profit and loss of each
strategy:</p>
<p><img src="/assets/opnl.svg" alt="Call PnL vs Put PnL" /></p>
<p>Surprisingly, even though puts have a higher IV on average, and thus
command a higher price, we actually make more money selling calls than
puts. We initially start making really good profits, but then things
start to flatten out for both strategies, especially around 2018. Based
on the IV graph we saw, this makes sense: we are selling options for
below their fair mathematical value and losing money because of the
positive difference between the realized volatility and IV.</p>
<p>Even ignoring the regime change after 2018, selling calls and puts like
this without a hedge is incredibly dangerous. Remember that unlike with
buying options, selling options puts us in a situation in which we are
taking virtually unlimited risk. If the volatility of the market spikes
unexpectedly, we could very easily get into hot water: the amount of
long-tail risk taken is so staggering as to make all but the riskiest
investors run for the hills. With one bad day, not only could we be
wiped out, but we could be on the hook for money we do not have.</p>
<h2 id="hedging-risk">Hedging Risk</h2>
<p>In order to try and capture the option premiums while hedging away some
of our risk, we can hold the underlying ETF for the call strategy, and
short the SPY ETF for the put strategy. While this hedges away the risk
of large moves upward (downward) for the call (put) strategy, it still
leaves us exposed to downward (upward) moves for the call (put)
strategy. Unlike in the unhedged version where we sold a contract each
day, we only sell one contract at a time to ensure that we are correctly
hedged. When the option we sold matures, we buy the next available
option that is closest to ATM, has the minimum maturity time, and has
available bids. Note that we are not perfectly hedged because there will
always be a small difference between the strike and spot price. Let's
take a look at the return of both strategies:</p>
<p><img src="/assets/callstrat.svg" alt="Call Returns vs S&P 500" /></p>
<p><img src="/assets/putstrat.svg" alt="Put Returns" /></p>
<p>The put strategy makes a marginal amount of return until 2018 when the
IV of puts starts to trend lower than the actual realized volatility.
Though we make money from the option premiums, it is barely enough to
cover the persistent negative return from shorting the S&P 500 in good
times and not enough in bad times, when the realized volatility outpaces
the IV. The call strategy on the other hand does very well until 2018,
improving upon the return of the S&P 500. Like with puts, we start to
lose money after the start of 2018 and finish slightly below the returns
of the S&P 500 on 2020-07-16. We do observe less of a drawdown when the
market crashes though, as the premiums from the calls we write cushion
the blows to a certain extent.</p>
<p>In general, it's clear that even though puts are less fairly priced
than calls, it's generally a better idea to write calls instead of
puts. Hedging by longing the underlying doesn't have the large negative
carry associated with hedging puts in bull markets and if the IV becomes
unfavorable, one can simply stop selling calls (while still holding the
underlying) and hold a pure beta portfolio until the historical realized
volatility drops below the IV.</p>
<p>However, if we tried to pause writing puts, we would still hold SPY
short, costing us a lot of money in most market environments. We would
be forced to unwind the hedge until the realized volatility drops. And
even in favorable volatility environments, the premium from the puts was
slight compared to the cost of the short hedge, with only around a 15%
return being realized from 2012 to 2018.</p>
<h2 id="conclusion">Conclusion</h2>
<p>Perhaps the only takeaway we can honestly put forth is that options are
difficult to profit off of, for both buyers and sellers. In
most market environments, the premium you are paying for portfolio
insurance by buying puts is simply too large to be worth it. Even calls
are overpriced, though not to such a large extent. But while buying
options isn't the best idea, selling them comes with their own set of
challenges, especially if you want to sell puts. If you forecast a low
volatility environment of steady but marginal upward growth, selling
covered calls, even ATM, can enhance your returns substantially; they
also help mitigate losses during bear markets. The only
situation in which selling ATM covered calls would not be appropriate
would be if a strong bull market is forecasted: the premiums from the
options are unlikely to be large enough to offset the money left on the
table when capping the upside.</p>
<p>Thanks for reading, hope you liked my first options post! I won't be
sharing the code for this one as the option data isn't publicly
available and without the data, the code doesn't have much value. If,
however, you are interested in accessing the data, check out
<a href="https://algoseek.com">Algoseek</a> for all of your option data needs (and
more!).</p>
<h1>Portfolio Construction with Risk Parity</h1>
<p>2020-08-01, https://smabie.github.io/posts/2020/08/01/parity</p>
<p>Just like how there's more than one way to skin a cat, there's more
than one way to construct a portfolio. The first systematic method of
construction was <em>Modern Portfolio Theory</em>, put forth by Harry
Markowitz. MPT's approach is simple: choose weights for each asset that
maximize the amount of return received for the amount of risk or
volatility taken. However, trying to maximize the risk-adjusted return
of a portfolio leads to very unstable allocations, as ex-ante estimates
of return are notoriously difficult and previous return isn't a good
predictor of future performance.</p>
<p>Another approach that came later is the minimum variance approach which
concerns itself with solely minimizing volatility, irrespective of
return. While this method might converge on sub-optimal portfolios
according to MPT, they tend to be a lot more stable as volatility is
easier to forecast than returns. Minimum variance optimization too
suffers from some problems: without any constraints, the minimum
variance portfolio often heavily weights only a few different assets,
suggesting minimal allocations in the rest. For example, given two
assets, a stock and a bond, the minimum variance method would almost
always return a portfolio in which the bond was allocated the vast
majority of the capital. Even when it produces reasonable results, those
results are highly unstable: the change in portfolio weights year to
year or even month to month can be massive.</p>
<p>For more background on both MPT and minimum variance optimization, check
out one of my previous posts: <a href="https://cryptm.org/posts/2020/07/09/alt.html">The Value of Alternative
Investments</a>. In the
mid-90s, a third approach was developed by the hedge fund
<em>Bridgewater</em>, founded by Ray Dalio, that tries to remedy some of the
problems with both of the aforementioned portfolio construction methods:
risk parity.</p>
<h2 id="math-of-risk-parity">Math of Risk Parity</h2>
<p>Risk parity is a conceptually simple approach in which we construct a
portfolio where each asset contributes a commensurate level of risk to
the whole. In contrast
to a traditional 60/40 portfolio, in which the equity component
contributes almost all of the risk, risk parity tries to equalize the
risk between each asset. In the two-asset case of equities and bonds,
the risk parity approach would spit out a set of weights so that each
asset contributes equally to the volatility of the overall portfolio. To
do this, it would most likely heavily weight the bond compared to the
equity. After the weights are determined, the portfolio would be
leveraged up in order to match an investor-defined volatility budget.
With this approach, we often end up with a better risk-adjusted return
than a 60/40 portfolio and also get the benefits of diversification,
unlike with a minimum variance portfolio.</p>
<p>We're going to use the example of a two asset portfolio with allocation
weights $x_a$ and $x_b$ and standard deviations $\sigma_a$ and
$\sigma_b$. To keep things explicit, we're going to eschew matrix
notation. As we showed in the blog post linked above, the variance of
the entire portfolio (written below simply as $\sigma_P$) is:</p>
\[\sigma_P = x_a^2\sigma_a^2 + x_b^2\sigma_b^2 + 2 x_a x_b \sigma_a \sigma_b \rho\]
<p>Where $\rho$ is the correlation between the returns of the two assets. So now
we want to divide up the risk so that they equal each other. But how do
we do that?</p>
<p>To start off, we can use Euler's homogeneous function theorem to help.
Euler's theorem states that if a function is homogeneous, i.e:</p>
\[f(tx, ty) = t^n f(x, y)\]
<p>Then:</p>
\[n t^{n-1} f(x, y) = x \frac{\partial f}{\partial (tx)} + y\frac{\partial f}{\partial (ty)}\]
<p>In the case of $t=1$:</p>
\[n f(x, y) =x\frac{\partial f}{\partial x} + y\frac{\partial f}{\partial y}\]
<p>Well, it turns out that our equation for $\sigma_P(x_a, x_b)$ is
homogeneous. By using Euler's theorem, we can figure out how to split
up the risk for each asset. First let's calculate the partial
derivative with respect to each variable:</p>
\[\frac{\partial \sigma_P}{\partial x_a} = 2(\rho\sigma_a\sigma_b x_b + \sigma_a^2x_a)\]
\[\frac{\partial \sigma_P}{\partial x_b}= 2(\rho \sigma_{a} \sigma_{b} x_{a} +\sigma_{b}^{2} x_{b})\]
\[n \sigma_P = 2x_a(\rho\sigma_a\sigma_b x_b + \sigma_a^2x_a) + 2x_b(\rho \sigma_{a} \sigma_{b} x_{a} +\sigma_{b}^{2} x_{b})\]
<p>Simplifying and solving for $n$ we get:</p>
\[n = \frac{2x_a(\rho\sigma_a\sigma_b x_b + \sigma_a^2x_a) + 2x_b(\rho \sigma_{a} \sigma_{b} x_{a} +\sigma_{b}^{2} x_{b})}{\sigma_P} = 2\]
<p>We want each component to be equal:</p>
\[x_a \frac{\partial \sigma_P}{\partial x_a}=x_b \frac{\partial \sigma_P}{\partial x_b}\]
\[\rho\sigma_{a}\sigma_{b}x_{a}x_{b}+\sigma_{a}^{2}x_{a}^{2} =\rho \sigma_{a} \sigma_{b} x_{a} x_{b} + \sigma_{b}^{2} x_{b}^{2}\]
<p>Assuming:</p>
\[x_a + x_b = 1\]
<p>We substitute $x_b = 1 - x_a$ and then solve for $x_a$. Discarding the
negative solution, we get:</p>
\[x_a = \frac{\sigma_b}{\sigma_a + \sigma_b}\]
<p>And solving for $x_b$:</p>
\[x_b = \frac{\sigma_a}{\sigma_a+\sigma_b}\]
<p>Note that the solution does not depend on the correlation $\rho$, which
some might find counter-intuitive. While we showed that there exists a
closed form solution for the case of two assets, a numerical solution is
required when $n>2$.</p>
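<p>The closed-form weights are easy to sanity-check numerically: using the partial derivatives above, each asset's risk contribution $x_i \, \partial \sigma_P / \partial x_i$ comes out equal regardless of $\rho$ (a sketch with made-up volatilities):</p>

```python
def risk_parity_weights(sigma_a, sigma_b):
    """Closed-form two-asset risk parity weights (independent of rho)."""
    x_a = sigma_b / (sigma_a + sigma_b)
    return x_a, 1.0 - x_a

def risk_contributions(x_a, x_b, sa, sb, rho):
    """Each asset's share x_i * d(sigma_P)/d(x_i) of total portfolio risk."""
    rc_a = x_a * 2.0 * (rho * sa * sb * x_b + sa ** 2 * x_a)
    rc_b = x_b * 2.0 * (rho * sa * sb * x_a + sb ** 2 * x_b)
    return rc_a, rc_b

sa, sb, rho = 0.15, 0.05, 0.2  # made-up equity and bond vols, correlation
x_a, x_b = risk_parity_weights(sa, sb)
rc_a, rc_b = risk_contributions(x_a, x_b, sa, sb, rho)
print(f"weights: {x_a:.2f}/{x_b:.2f}, contributions: {rc_a:.6f} vs {rc_b:.6f}")
```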
<h2 id="risk-parity-in-practice">Risk Parity in Practice</h2>
<p>We're going to construct a risk-parity portfolio from two assets: <em>SPY</em>
(S&P 500), and <em>LQD</em>, an investment-grade bond ETF. Our
weights will be calculated as described above, using a 252-day rolling
window for the volatility calculations. Below is a graph of the weights
over time:</p>
<p><img src="/assets/rpw.png" alt="Two asset risk parity weights" /></p>
<p>As expected, bonds make up the majority of the portfolio in order to
match the risk of the equity component. Let's look at the cumulative
returns of our risk parity portfolio compared to a traditional 60/40
portfolio:</p>
<p><img src="/assets/rp64.png" alt="60/40 vs risk parity" /></p>
<p>Due to the majority of our risk parity portfolio being bonds, the
portfolio underperforms a classic 60/40 portfolio that takes on
significantly more equity risk. Having a relatively low natural return
is expected of a risk parity portfolio, which is why leverage is
applied after portfolio construction. But what leverage ratio should we use? In
order to have an apples-to-apples comparison between a risk parity
portfolio and a 60/40 one, we will leverage up in order to try and match
the volatility of a 60/40 portfolio:</p>
\[l = \frac{\sigma_{60/40}}{\sigma_{RP}}\]
<p>For each $\sigma$, we use a 252-day rolling window. We calculate the
leverage ratio needed on each day and leverage up accordingly. Ideally,
this will ensure that the amount of risk taken by the risk parity
portfolio is always in line with the amount of risk a 60/40 portfolio is
taking. Let's look at a graph of our leveraged risk parity portfolio, a
classic 60/40 portfolio, and the leverage ratio:</p>
<p><img src="/assets/rpr.png" alt="risk parity vs 60/40 vs leverage ratio" /></p>
<p>Not bad! The amount of leverage taken is reasonable and the performance
of the risk parity portfolio looks pretty good. Let's look at a table
of metrics:</p>
<table>
<thead>
<tr>
<th>Metric</th>
<th>Risk Parity</th>
<th>60/40</th>
</tr>
</thead>
<tbody>
<tr>
<td>Beta</td>
<td>0.51</td>
<td>0.68</td>
</tr>
<tr>
<td>Ann. Ret</td>
<td>9.9%</td>
<td>7.8%</td>
</tr>
<tr>
<td>Ann. Vol</td>
<td>12.8%</td>
<td>12.8%</td>
</tr>
<tr>
<td>Ann. Sharpe</td>
<td>0.77</td>
<td>0.6</td>
</tr>
</tbody>
</table>
<p>Our simple dynamic leverage strategy works very well, with the
volatility of the two portfolios being identical. But even with the same
volatility, the risk parity portfolio significantly outperforms while
also having less beta exposure and a better Sharpe ratio as well.
Another plus is that the amount of leverage taken is very reasonable,
and well within the ability of even retail investors to obtain.</p>
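The vol-matching leverage rule described above can be sketched with pandas. This is a minimal sketch, not the notebook's actual code: `rp_rets` and `bench_rets` are hypothetical daily return series, and the 252-day window follows the post.

```python
import pandas as pd

def vol_matched_leverage(rp_rets: pd.Series, bench_rets: pd.Series,
                         window: int = 252) -> pd.Series:
    """Leverage ratio l = sigma_bench / sigma_rp over a trailing window."""
    sigma_rp = rp_rets.rolling(window).std()
    sigma_bench = bench_rets.rolling(window).std()
    return sigma_bench / sigma_rp

def levered_returns(rp_rets: pd.Series, leverage: pd.Series) -> pd.Series:
    """Apply yesterday's leverage ratio to today's return (no look-ahead)."""
    return (leverage.shift(1) * rp_rets).dropna()
```

Shifting the leverage series by one day matters: applying today's (not-yet-known) ratio to today's return would introduce look-ahead bias.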
<h2 id="conclusion">Conclusion</h2>
<p>Risk parity is an exciting and effective technique that is a viable
alternative to minimum variance portfolios as well as fixed weight
portfolios such as 60/40. While risk parity portfolios might not reduce
volatility as much as minimum variance portfolios, they tend to be more
stable over time, incur less turnover, and provide greater
diversification. Compared to a 60/40 portfolio, they are usually
superior unless the borrowing costs are too high. This interest rate
exposure represents a distinct risk of risk parity strategies, one that
investors should be cognizant of.</p>
<p>Thanks for reading and hope you liked this post! You can check out the
Quantopian notebook
<a href="https://www.quantopian.com/posts/risk-parity">here</a>. Feel free to
change the time periods and assets used to construct the portfolio.</p>

<h1>Risk Imparity: A Simple Strategy for Alpha Generation</h1>
<p>2020-07-29 · https://smabie.github.io/posts/2020/07/29/imparity</p>

<p>Risk parity is a popular method of investing that aims to outperform the
traditional 60/40 equity/bond portfolio by equalizing the risk of the
bond portion with that of the equity part of the portfolio and then
leveraging up the portfolio to hit the desired risk target. Using the
covariance and volatilities of the components, we can generate a
portfolio and leverage ratio for a given level of desired risk. This
works for two reasons: first, bonds have a low or negative correlation
to equities, and combining low-correlation assets allows us to lower the
volatility of the portfolio; second, bonds generally have superior
risk-adjusted returns. Bridgewater and Ray Dalio pioneered this approach
in the 90s with the introduction of the now famous <em>All Weather</em> fund.</p>
<p>The appeal of this approach is obvious compared to the 60/40 portfolio:
by setting a volatility target, we stabilize our risk throughout
different market regimes. This is an attractive property for investors
who might otherwise choose a 60/40 or even all equity portfolio; they
can sleep well knowing an appropriate level of risk will be taken,
regardless of the market. Well, in this post, we're going to take a
less sensible approach and instead of equalizing the risk in our
portfolio, we're going to leverage up the equity component
(representing almost all of the risk of the portfolio) and use the bonds
to generate yield.</p>
<h2 id="risk-imparity">Risk Imparity</h2>
<p>The intuition behind this approach is pretty straightforward: we want to
take on leverage with equities and cancel out the leverage by choosing
an appropriate weight. For example, if we purchased a 2x bull leveraged
S&P 500 ETF such as <em>SSO</em>, we would choose an appropriate weight to
cancel out the leverage, 0.5 in this case. If we kept the rest of the
money in cash, the return of the portfolio would only be slightly worse
than that of the S&P 500, due to the 0.90% expense ratio we pay. If we
didn't have to pay an expense ratio, the return streams would look
identical except for perhaps some marginal tracking error. But if we
instead invested the rest of the money in a low risk investment, such as
investment grade bonds, we would earn yield on the money and diversify
our portfolio. As long as the bonds contribute more profit than the cost
of the expense ratio, the strategy should beat the vanilla index. Below
is the strategy in full:</p>
<ol>
<li>
<p>Our portfolio will consist of two assets, <em>UPRO</em>, a 3x leveraged S&P
500 ETF, and <em>LQD</em>, an investment grade bond ETF.</p>
</li>
<li>
<p>Each day, we will rebalance so that 1/3 of our portfolio is
allocated towards <em>UPRO</em> and 2/3 towards <em>LQD</em>.</p>
</li>
</ol>
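With daily rebalancing to fixed weights, the portfolio's daily return is just the weighted sum of the component returns. A minimal sketch (function and column names are mine, not from the notebook):

```python
import pandas as pd

def daily_rebalanced_returns(rets: pd.DataFrame, weights: dict) -> pd.Series:
    """Daily returns of a portfolio rebalanced back to fixed weights
    at the end of every day."""
    w = pd.Series(weights)
    return rets[w.index].mul(w, axis=1).sum(axis=1)

# Hypothetical usage, with 'rets' holding daily returns per ticker:
# port = daily_rebalanced_returns(rets, {'UPRO': 1/3, 'LQD': 2/3})
```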
<p>Let's look at the returns against the S&P 500 between 2009-06-26 and
2020-07-28 (we choose this start date because <em>UPRO</em> launched at
the beginning of 2009, which is when its data first became
available):</p>
<p><img src="/assets/iret.png" alt="Imparity Portfolio Returns" /></p>
<p>Not bad, let's look at a table of metrics:</p>
<table>
<thead>
<tr>
<th>Metric</th>
<th>Portfolio</th>
<th>S&P 500</th>
</tr>
</thead>
<tbody>
<tr>
<td>Ann. Rets</td>
<td>18.1%</td>
<td>14.1%</td>
</tr>
<tr>
<td>Ann. Vol</td>
<td>18.4%</td>
<td>17.4%</td>
</tr>
<tr>
<td>Ann. Sharpe</td>
<td>0.98</td>
<td>0.81</td>
</tr>
<tr>
<td>Beta</td>
<td>1.02</td>
<td>1.0</td>
</tr>
</tbody>
</table>
<p>The results are pretty impressive for such a simple strategy: better
Sharpe ratio, comparable beta, and a 4% better annualized return, with
only slightly higher volatility. One problem however, is apparent: if
the traditional negative correlation between bonds and equities starts
to break like it did during the start of Coronavirus, the portfolio
takes a major hit, performing worse than the S&P 500. In order to
confirm this, let's look at <em>LQD</em> alone:</p>
<p><img src="/assets/lqd.png" alt="LQD Rets" /></p>
<p>And indeed, even investment grade bonds collapsed during the start of
Coronavirus. But what about the Great Recession, how did the strategy
perform then? We don't have real data from that time-period, but we can
synthetically generate a comparable return stream by just multiplying
<em>SPY</em> by 3. This of course won't take into account the expense ratio of
<em>UPRO</em> and the tracking error, but the results should still be pretty
similar. Let's take a look:</p>
<p><img src="/assets/siret.png" alt="Synthetic Imparity Portfolio Returns" /></p>
<p>Unlike during Coronavirus, the strategy held up pretty well during the
crash of '07-08, returning approximately the same as a pure S&P 500
portfolio. The Coronavirus episode might therefore represent either a
regime change in bond/equity correlation or a temporary blip that isn't
indicative of future correlations.</p>
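The synthetic 3x series is built by tripling SPY's <em>daily</em> returns and compounding. The sketch below (names are mine) also illustrates why a daily-levered ETF is not simply 3x the index over longer horizons — the volatility drag mentioned elsewhere on this blog:

```python
import pandas as pd

def synthetic_3x(spy_rets: pd.Series) -> pd.Series:
    """Approximate a 3x daily-leveraged ETF: triple each *daily* return.
    Ignores expense ratio, financing costs, and tracking error."""
    return 3.0 * spy_rets

def cumulative(rets: pd.Series) -> pd.Series:
    """Cumulative compounded return of a daily return series."""
    return (1 + rets).cumprod() - 1

# Over two days of +10% then -10%, SPY compounds to -1%, but the
# daily 3x series compounds to 1.3 * 0.7 - 1 = -9%, not -3%.
```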
<h2 id="conclusion">Conclusion</h2>
<p>Though there are risks associated with this strategy not present in a
pure S&P 500 portfolio, in most market environments, this approach
delivers a better risk-adjusted return without taking on any additional
market risk. Another benefit of the strategy is its simplicity: by using
only two different ETFs, a retail investor can manage the strategy with
minimal effort. An investor could also adjust the rebalancing period to
weekly or monthly in order to reduce the complexity and transaction
costs. The strategy can also be modified by adjusting the type of bond
for the portfolio: more risk-averse investors could stick with short to
medium term Treasury notes while higher octane ones could substitute in
junk bonds or other alternative investments.</p>
<p>Thanks for reading, hope you enjoyed this post. Click
<a href="https://www.quantopian.com/posts/risk-imparity">here</a> for the
Quantopian notebook used. Seeing that this strategy is pretty trivial,
if anyone could help me in locating prior literature about it, I would
be most appreciative. Email me at <a href="mailto:sturm@cryptm.org">sturm@cryptm.org</a> if so.</p>

<h1>Beta Stability: A New Persistent Factor?</h1>
<p>2020-07-23 · https://smabie.github.io/posts/2020/07/23/stable</p>

<p>The <a href="https://en.wikipedia.org/wiki/Beta_(finance)">beta</a> exposure of a
stock is one of the first and most important statistics any investor
looks at. Though perhaps simplistic, it distills a myriad of various
properties of a stock into a single, easily digestible number. Using
beta, it becomes easy to roughly forecast how an investor's portfolio
would perform under different market conditions. Maybe our portfolio has
a beta of 1.3 and we expect the market to return 11% next year: great,
our portfolio should return around 14.3% (1.3*11%). While there are a
lot of problems with this simplistic model, in this post, we're going
to focus on just one: the beta of a stock changes significantly over
time. Consider this graph of Apple's rolling, 252-day beta:</p>
<p><img src="/assets/aaplbeta.png" alt="AAPL Beta" /></p>
<p>As we can see, the dispersion is quite high, and the rolling beta rarely
stays the same for any reasonable period of time. There could be any
number of reasons for why the rolling beta changes, such as changes in
the drivers of market movements, the evolution of Apple's business
model, geographic changes in sales, and so on.</p>
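A rolling beta like the one plotted above is just Cov(asset, market) / Var(market) over a trailing window. A sketch with pandas (series names are hypothetical):

```python
import pandas as pd

def rolling_beta(asset: pd.Series, market: pd.Series,
                 window: int = 252) -> pd.Series:
    """Rolling beta: Cov(asset, market) / Var(market) over a trailing
    window of daily returns."""
    cov = asset.rolling(window).cov(market)
    var = market.rolling(window).var()
    return cov / var
```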
<p>Clearly, some companies are going to have higher beta stability than
others. One might imagine that a mature utility company with stable
recurring revenue would have greater beta stability than a recently
IPO'd tech company whose revenue model and business units are still in
flux. So that brings up the question, when constructing a portfolio,
which kind of companies would we prefer: the relatively unstable beta
company, or the more stable one? In this post, we're going to form and
test the hypothesis that, in the long run, a portfolio that consists of
stocks with high beta stability generates excess returns.</p>
<h2 id="the-beta-stable-factor">The Beta Stable Factor</h2>
<p>Our goal is to construct a beta neutral long/short portfolio that tries
to capture the abnormal return that we hypothesized might be associated
with a beta stable portfolio. To do this, we're going to use the
<em>Q500US</em> universe provided by <a href="https://www.quantopian.com/">Quantopian</a>.
This universe consists of the top 500 most liquid (and therefore,
probably largest market cap) equities on any given day. We construct our
alpha factor as follows:</p>
\[f = \text{Z}[\text{rank}(-|\beta_{21}-\beta_{63}|)]\]
<p>Where $\beta_{21}$ is the 21-day (one month) rolling beta, and
$\beta_{63}$ the 63-day (three month) rolling beta. We then take the
absolute value, negate, rank, and Z-score. This factor gives us a
portfolio that goes long stocks that have lower changes in beta and
shorts those that have greater changes in beta.</p>
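A minimal cross-sectional implementation of this factor for a single day, assuming we already have each stock's one-month and three-month rolling betas (the helper name is mine, not Quantopian's API):

```python
import pandas as pd

def beta_stability_factor(beta_21: pd.Series, beta_63: pd.Series) -> pd.Series:
    """f = Z[rank(-|beta_21 - beta_63|)], indexed by ticker.
    Large positive values = stable beta; large negative = unstable."""
    raw = -(beta_21 - beta_63).abs()
    ranked = raw.rank()
    return (ranked - ranked.mean()) / ranked.std()
```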
<p>Because we suspect that higher beta stocks tend to have more
instability, we will zero out the beta exposure by dynamically adjusting
our net long exposure. We use the same method that we did in <a href="https://cryptm.org/posts/2020/06/28/beta.html">The Low
Beta Anomaly</a> post, so go
check it out if you want the details. Backtesting over a 10 year period,
2010-01-01 to 2020-01-01, let's first look at our net long exposure:</p>
<p><img src="/assets/nl.png" alt="Net long exposure" /></p>
<p>The reason for our net long exposure being positive is that there is
some correlation between beta instability and high beta stocks. Beta
generally has a right hand skew, so our unadjusted portfolio would have
a higher average beta for the short-side than the long-side. Overall,
this would result in a portfolio that had significant negative beta
exposure, an undesirable property to have in a world where the market
usually goes up. By increasing the long-side of our portfolio, we can
offset the high beta short-side and maintain beta neutrality. Even so,
the net long exposure is relatively small compared to what we observed
for our low beta portfolio (again, to compare, check out <a href="https://cryptm.org/posts/2020/06/28/beta.html">The Low Beta
Anomaly</a> post): around
10% vs 45%, respectively. This is good, because it means we are
capturing an independent effect with our beta stability hypothesis, not
just another reformulation of the <a href="https://en.wikipedia.org/wiki/Low-volatility_anomaly">low beta/volatility
anomaly</a>.</p>
<p>Now for the good part, the cumulative return over the same time period:</p>
<p><img src="/assets/cr.png" alt="Cumulative Return" /></p>
<p>Wow, that looks pretty good for a beta neutral strategy! Let's
decompose the returns into quantiles:</p>
<p><img src="/assets/qcr.png" alt="Cumulative Return by Quantile" /></p>
<p>This graph shows the cumulative return by quantile if each quantile was
held long (remember that the lower quantiles are held short in our
portfolio). The profit from our strategy comes from the difference
between the upper quantiles and the lower ones. Let's look at a table
of metrics to better understand the strategy:</p>
<table>
<thead>
<tr>
<th>Metric</th>
<th>Value</th>
</tr>
</thead>
<tbody>
<tr>
<td>Beta</td>
<td>-0.018</td>
</tr>
<tr>
<td>Ann. Ret</td>
<td>3.56%</td>
</tr>
<tr>
<td>Ann. Vol</td>
<td>2.7%</td>
</tr>
<tr>
<td>Ann. Sharpe</td>
<td>1.28</td>
</tr>
</tbody>
</table>
<p>Not bad at all! Our beta minimization worked well, the return is pretty
significant, and our Sharpe ratio is very respectable, at 1.28.</p>
<p>Even so, in an ideal beta neutral strategy, we would want to be making
money off the short-side of the portfolio, not merely hedging our market
risk with it: i.e, we would want to see the lower quantiles lose money,
not merely make less than the greater quantiles. In the next section,
we'll consider a long-only variation of the same beta stable strategy.</p>
<h2 id="long-only-beta-stable">Long-only Beta Stable</h2>
<p>For the long-only variation, we'll use the same factor as before, but
instead of investing in the entire <em>Q500US</em> universe, only invest in the
top 100 stocks. This will give us a portfolio with higher allocations to
stocks with greater beta stability and one with 100% net long exposure.
Using the same time period, consider the quantile graph below:</p>
<p><img src="/assets/qcr2.png" alt="Cumulative Return by Quantile" /></p>
<p>As expected, we only see a small difference in returns across quantiles.
This is because we are only investing in the 100 most stable equities,
significantly narrowing the gap between the lowest and highest quantile.
Still, there is some effect (the first quantile realizes the lowest
returns), which bodes well for the robustness of our factor. Now, for
the moment of truth, our factor portfolio compared to the S&P 500:</p>
<p><img src="/assets/bsvs.png" alt="Beta Stable vs S&P 500" /></p>
<p>Wow! This is an amazing result: over a 10 year period, our factor
portfolio's cumulative return is a little under 50% higher than that of
the S&P 500. Here's a table of metrics for comparison:</p>
<table>
<thead>
<tr>
<th>Metric</th>
<th>Portfolio</th>
<th>S&P 500</th>
</tr>
</thead>
<tbody>
<tr>
<td>Beta</td>
<td>0.99</td>
<td>1.0</td>
</tr>
<tr>
<td>Ann. Ret</td>
<td>14.7%</td>
<td>13.37%</td>
</tr>
<tr>
<td>Ann. Vol</td>
<td>15.5%</td>
<td>14.6%</td>
</tr>
<tr>
<td>Ann. Sharpe</td>
<td>0.94</td>
<td>0.91</td>
</tr>
</tbody>
</table>
<p>With over 1% of annualized excess return, a higher Sharpe ratio, and an
equivalent beta, the long-only beta stable strategy appears to be a huge
success!</p>
<h2 id="conclusion">Conclusion</h2>
<p>Based on the results, perhaps we should consider beta stability a new
persistent factor, alongside the classics like low-volatility, value,
and size? This is a bold claim of course, and much more research needs
to be done; but the preliminary results are very positive, especially
for the long-only version. It would be very easy to overlay onto an
existing beta exposed portfolio, generating a moderate amount of excess
return without taking on any more risk. The beta neutral version also
has promise, though ideally it would be combined with other factors
(such as maybe value or low-beta) in order to boost the Sharpe ratio and
unleveraged return.</p>
<p>Thanks for reading, I hope you enjoyed this post! For the source code,
check out the Quantopian notebook
<a href="https://www.quantopian.com/posts/beta-stability">here</a>. Feel free to
play around with the start and end dates, universe, and anything else.
If you have any comments or feedback, contact me at <a href="mailto:sturm@cryptm.org">sturm@cryptm.org</a>.</p>

<h1>The Value of Alternative Investments</h1>
<p>2020-07-09 · https://smabie.github.io/posts/2020/07/09/alt</p>

<p>A retail investor would be forgiven for assuming that there are really
only two asset classes one should consider for personal investments:
equities (stocks) and fixed income (bonds). Wherever you look, these two
asset classes dominate the financial landscape. All robo-advisors and
most personal advisors implement a mix of bonds and equities for the
portfolios of their clients. The idea behind this mix is simple: bonds
are low risk and will earn you a marginal return, while equities are
riskier and will be the real engine of returns in good times. In bad
times, bonds will rise, mitigating some of the losses of your equity
portfolio: it's been long known and accepted that stocks and bonds have
a negative correlation. Tying this strategy together is periodic
rebalancing, taking money off the table during bull markets (and moving
them to the safer bonds), and deploying money to the equity portion
during bear markets. This in effect overlays a mean-reversion strategy
onto the portfolio, boosting the returns of an otherwise static
portfolio. The archetypal allocation is 60% equities, and 40% bonds,
though each investor's allocation is going to differ based on his risk
tolerance, age, and personal goals.</p>
<p>Though there's nothing wrong with this two asset class mix, the
historically negative correlation between stocks and bonds means that in
most market environments, bonds are going to drag down the portfolio.
Ideally, we would want to mix in uncorrelated assets and investments
instead of negatively correlated ones, in order to reduce the volatility
of our portfolio without dragging down our returns. Alternative
investments such as hedge funds, venture capital, metals, or real estate
all serve this purpose. These alternative investments are not only
popular for their occasionally spectacular returns, but also for their
low correlation to the broader equity market. For this post, we'll use
the oldest and possibly most maligned investment, gold, and explore ways
to mix it into a pure equity, S&P 500 portfolio.</p>
<h2 id="gold">Gold</h2>
<p>Gold and other metals are an interesting and, unfortunately,
underappreciated asset class. The much derided "gold-bug" is made out
to be paranoid, crazy, and irrational. Despite being so rare and
valuable, gold sees comparatively little industrial use, a dynamic that
untethers its price from other asset classes. Let's look at the
returns of gold versus the S&P 500 between 2010-01-01 and 2020-07-08:</p>
<p><img src="/assets/gold.png" alt="S&P 500 vs Gold" /></p>
<p>It's evident that the returns of gold are unspectacular, to say the
least. It doesn't seem to consistently lose or gain value as much as it
meanders around, seemingly uncorrelated to the S&P 500. And indeed, it
is almost entirely uncorrelated, with a correlation to the S&P 500 of
only 1% over this time period. Is it even worth including in our
portfolio? And if it is, how would we determine the allocation size?</p>
<h2 id="monte-carlo">Monte-Carlo</h2>
<p>To clarify, we're trying to determine the weights for the S&P 500 and
gold parts of our portfolio, taking on no leverage:</p>
\[|x_a| + |x_b| = 1\]
<p>And maximizing the risk-adjusted return. For every unit of risk we take,
we would like to maximize the amount of return we receive. We'll first
start off by doing the most straightforward thing, and randomly choosing
different weights for our two asset portfolio and then graphing them:</p>
<p><img src="/assets/ef.png" alt="Return vs Volatility" /></p>
<p>The hyperbola formed is called the <em>Markowitz Bullet</em>, after Harry
Markowitz, winner of the Nobel Prize for Economics for his work on
<em>Modern Portfolio Theory</em>. The portfolio with the best return vs
volatility profile is called the <em>frontier</em> or <em>efficient</em> portfolio.
There exists only one such portfolio and it is the portfolio every
rational investor should prefer, ignoring borrowing costs and
<a href="https://en.wikipedia.org/wiki/Volatility_tax">volatility drag</a> (for
more information about volatility drag, check out my other post
<a href="https://cryptm.org/posts/2019/11/28/div.html">here</a>, or the beginning
of my three part series
<a href="https://cryptm.org/posts/2019/10/04/vol.html">here</a>).</p>
<p>While it's relatively easy to find what would have been the frontier
portfolio looking backwards, it is much more difficult to estimate the
frontier portfolio over some future time period. We can forecast and
then reduce future volatility with modest accuracy, but forecasting
expected return is a notoriously difficult problem. So difficult, in
fact, that few quantitative investors even try, instead resigning
themselves to solely minimize volatility. Likewise, we'll resign
ourselves to the same fate.</p>
<p>In the general case of a many asset portfolio, no closed form solution
can be found for the minimization of volatility; we must instead use an
optimizer or do a Monte-Carlo simulation; but in the two asset case, we
can find a symbolic solution. So that's what we'll do next!</p>
<h2 id="symbolic-solution">Symbolic Solution</h2>
<p>Recall that our portfolio return is going to be a function of our two
assets and their weights:</p>
\[P = x_a X_a+ x_b X_b\]
<p>Likewise, the expected portfolio return would look like:</p>
\[\begin{align}
E[P] &= \mu_P\\
&= x_a E[X_a] + x_b E[X_b]\\
&= x_a \mu_a + x_b \mu_b\\
\end{align}\]
<p>And the equation for variance:</p>
\[\text{Var}[X] = \sigma^2 = E[(X-\mu)^2]\]
<p>Covariance is similar, except instead of squaring, we multiply each
variable after demeaning:</p>
\[\text{Cov}[X,Y] = E[(X-\mu_X)(Y-\mu_Y)]\]
<p>Now we just need to derive the portfolio variance:</p>
\[\text{Var}[P] = \text{Var}[x_aX_a + x_b X_b]\]
<p>First we substitute for variance and rearrange:</p>
\[\begin{align}
\text{Var}[x_aX_a + x_b X_b] =& E[(x_aX_a + x_b X_b - E[x_a X_a + x_b X_b])^2]\\
=& E[(x_a X_a - E[x_a X_a] + x_b X_b - E[x_b X_b])^2]
\end{align}\]
<p>Now we can pull the constants out of the expectations, substitute, and
expand:</p>
\[\begin{align}
\text{Var}[x_a X_a + x_b X_b] =& E[(x_a (X_a - \mu_a) + x_b (X_b - \mu_b))^2] \\
=& E[x_a^2(X_a - \mu_a)^2 + x_b^2(X_b - \mu_b)^2 + 2 x_a x_b (X_a - \mu_a)(X_b - \mu_b)]
\end{align}\]
<p>Finally, we break up the expectations, and replace:</p>
\[\begin{align}
\text{Var}[x_a X_a + x_b X_b] =& x^2_a E[(X_a - \mu_a)^2] + x^2_b E[(X_b - \mu_b)^2] + 2x_a x_b E[(X_a-\mu_a)(X_b-\mu_b)]\\
=& x^2_a \sigma_a^2 + x^2_b \sigma_b^2 + 2 x_a x_b \text{Cov}[X_a,X_b]\\
=& x^2_a \sigma_a^2 + x^2_b \sigma_b^2 + 2 x_a x_b \rho_{ab} \sigma_a \sigma_b
\end{align}\]
<p>Where $\rho_{ab}$ is the correlation between $X_a$ and $X_b$:</p>
\[\rho_{ab} = \frac{\text{Cov}[X_a,X_b]}{\sigma_a\sigma_b}\]
<p>Now that we have an equation for variance in terms of the weights, mean,
volatility, and correlation between the two assets, we can formulate our
objective:</p>
\[\min\left( \text{Var}[x_aX_a + x_b X_b]\right)\]
<p>Where:</p>
\[x_a + x_b = 1\]
<p>We perform the substitution $x_b = 1 - x_a$ and then take the derivative
with respect to $x_a$:</p>
\[\frac{\mathrm{d} \sigma_P^2}{\mathrm{d}x_a}=2 \, \rho_{\mathit{ab}} \sigma_{a} \sigma_{b} - 2 \, \sigma_{b}^{2} - 2 \, {\left(2 \, \rho_{\mathit{ab}} \sigma_{a} \sigma_{b} - \sigma_{a}^{2} - \sigma_{b}^{2}\right)} x_{a}\]
<p>Now we set to 0 and solve for $x_a$:</p>
\[x_{a} = \frac{\rho_{\mathit{ab}} \sigma_{a} \sigma_{b} - \sigma_{b}^{2}}{2 \, \rho_{\mathit{ab}} \sigma_{a} \sigma_{b} - \sigma_{a}^{2} - \sigma_{b}^{2}}\]
<p>And $x_b$:</p>
\[x_b = 1 - x_a\]
<p>Note that sometimes this equation will give us values for $x_a$ and
$x_b$ that imply leverage, i.e:</p>
\[|x_a| + |x_b| > 1\]
<p>But we can always rescale them by dividing each one by $|x_a| + |x_b|$.
In the next section, we'll put this formula to work.</p>
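The closed-form solution, including the rescaling step, is only a few lines of code. A sketch (function name is mine):

```python
def min_var_weights(sigma_a, sigma_b, rho):
    """Two-asset minimum-variance weights from the closed-form solution,
    rescaled so that |x_a| + |x_b| = 1."""
    num = rho * sigma_a * sigma_b - sigma_b**2
    den = 2 * rho * sigma_a * sigma_b - sigma_a**2 - sigma_b**2
    x_a = num / den
    x_b = 1.0 - x_a
    scale = abs(x_a) + abs(x_b)
    return x_a / scale, x_b / scale

# Sanity checks: with zero correlation the weights are inverse-variance;
# with equal vols the split is 50/50.
```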
<h2 id="portfolio-weights">Portfolio Weights</h2>
<p>We're going to use a one year or 252 day rolling window to generate the
weights. Every day, we slide our window over by one and generate the new
weights for that day. Below is the generated graph:</p>
<p><img src="/assets/gw.png" alt="S&P 500 Weight vs Gold weight" /></p>
<p>Well that's interesting, the weights change considerably over time. In
a real portfolio, not only would we be using an optimizer and a
multitude of risk factors to determine the weights, but we would also
have constraints on position size, turnover, and the magnitude of
deviation from the "ideal" portfolio, and so on. But even in this simple
and contrived example, we see that our formula is doing its job: as the
volatility of the S&P 500 increases, like at the end of 2018 and during
Coronavirus, we start cutting back our equity exposure.</p>
<p>Let's now look at the returns of our portfolio compared to both gold
and the S&P 500:</p>
<p><img src="/assets/altr.png" alt="Portfolio vs S&P 500 vs Gold" /></p>
<p>A lot of gains from the market are given up, as we might have expected,
but the return stream becomes a lot less volatile. Just looking at a
graph of cumulative returns isn't super informative, so here's a table
of various metrics:</p>
<table>
<thead>
<tr>
<th>Metric</th>
<th>Gold</th>
<th>S&P 500</th>
<th>Portfolio</th>
</tr>
</thead>
<tbody>
<tr>
<td>Ann. Vol</td>
<td>15.8%</td>
<td>17.4%</td>
<td>10.8%</td>
</tr>
<tr>
<td>Ann. Ret</td>
<td>4.3%</td>
<td>12.4%</td>
<td>8.4%</td>
</tr>
<tr>
<td>Beta</td>
<td>0.01</td>
<td>1</td>
<td>0.41</td>
</tr>
<tr>
<td>Ann. Sharpe</td>
<td>0.27</td>
<td>0.71</td>
<td>0.77</td>
</tr>
</tbody>
</table>
<p>Wow! Our volatility is lower than either gold or the S&P 500 alone, our
beta has been cut by more than half compared to the S&P 500, and our
Sharpe ratio is even higher than both! Even with mixing in an asset that
had poor returns and high volatility, we've managed to construct a
portfolio that, on a risk-adjusted basis, is superior to the S&P 500.
And, if we so desired, could be levered up in order to beat the return
of the S&P 500, while maintaining comparatively low volatility.</p>
<h2 id="conclusion">Conclusion</h2>
<p>Even with a bad Sharpe ratio and less than stellar returns, gold
enhanced our pure equity portfolio. The same also can be said about
other, more popular, alternative investments. Hedge funds specifically
are often derided for their low returns and frequent meltdowns (such as
Long-Term Capital Management); but like with gold, looks can be
deceiving. Returns and volatility aren't the whole story. When
designing a portfolio, no individual investment matters in isolation;
what matters is the return stream of the portfolio as a whole.
Considered in isolation, a lot of alternative investments look
sub-optimal and irrational. It is only when you zoom out and think about
the needs of the investor (their existing allocations, investment goals,
monetary needs, etc) does everything come into perspective. When it
comes to investing, the whole is certainly greater than the sum of its
parts!</p>
<p>I hope you liked the post and if you did, let me know! You can also
check out the notebook, developed on Quantopian,
<a href="https://www.quantopian.com/posts/the-value-of-alternative-investments">here</a>.
Possible things you could mess around with are the start and end dates
and the two assets to construct a portfolio from.</p>

<h1>The Low Beta Anomaly</h1>
<p>2020-06-28 · https://smabie.github.io/posts/2020/06/28/beta</p>

<p>Perhaps more than any other development, finance was ushered into the
modern era with the development of the <em>Capital Asset Pricing Model</em>
(CAPM) by William Sharpe in the early 60s. Though commonly criticized as
too simple and reductionist, the model is still used today as an easy
way to determine a stock's exposure to the market:</p>
\[r_i = r_f + \beta_i (r_m - r_f)\]
<p>$\beta_i$ was originally formulated as
$\frac{\text{Cov}(r_m,r_i)}{\text{Var}(r_m)}$, but it is more commonly
calculated by taking the slope of a linear regression between the asset
and the market. $r_m$ is the expected market return, $r_f$ is the
expected risk-free rate, and $r_i$ is the expected return of the asset.</p>
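Both formulations of $\beta_i$ give the same number; a quick sketch with NumPy (function names are mine):

```python
import numpy as np

def beta_cov(asset_rets, market_rets):
    """Beta as Cov(r_i, r_m) / Var(r_m)."""
    cov = np.cov(asset_rets, market_rets)
    return cov[0, 1] / cov[1, 1]

def beta_ols(asset_rets, market_rets):
    """Beta as the slope of an OLS regression of asset on market returns."""
    slope, _intercept = np.polyfit(market_rets, asset_rets, 1)
    return slope
```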
<p>Perhaps the model is so popular because the interpretation of $\beta$ is
so concrete and easy to understand: a stock's $\beta$ is simply a
multiplier on the market's return. Stocks with high betas are more
volatile than the market, and those with smaller betas less so. In
tumultuous times, investors try and cut their volatility by rotating
into low beta stocks; while in bull markets, investors clamor to those
with the highest betas.</p>
<p>There's been a problem with CAPM, and indeed with the very concept of
beta, for a long time now: the low beta anomaly. Academics noticed
systematic discrepancies between high and low beta stocks: higher beta
stocks seemed to be underperforming and low beta stocks outperforming.
Consider the <em>Security Market Line</em>
(SML):</p>
<p><img src="/assets/sml.png" alt="Security Market Line" /></p>
<p>According to CAPM, the returns of low beta stocks were supposed to
scale linearly with their beta exposure, likewise with higher beta
stocks. Instead, an unexpected curve was observed:</p>
<p><img src="/assets/lba.png" alt="Low beta anomaly" /></p>
<p>As we can see, high volatility (and thus, high beta) portfolios are
returning significantly less than what is expected, thus throwing a
wrench into the very concept of beta. After seeing the chart, a natural
question is: can we exploit this mispricing while not being exposed to
the market? In this post, we'll look into whether it's possible to
profit off of this effect and also discuss the potential structural and
behavioral reasons for this anomaly.</p>
<h2 id="a-simple-approach">A Simple Approach</h2>
<p>Of course, the simplest way to try and capture the excess return of the
anomaly is to construct a portfolio of low beta equities and call it a
day. But what if we don't want to be exposed to the market, even to the
degree that low beta equities are? Like most answers in finance, the
answer is a market neutral long/short portfolio!</p>
<p>For this strategy, we'll be using the Quantopian <em>Q1500US</em> universe,
which consists of the 1500 most liquid US equities on any given day. We
then construct our low beta factor as such:</p>
\[f = \text{Z}[\text{rank}(-\beta_{252})]\]
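For reference, one way this factor might be computed in pandas. This is
a sketch on simulated data: `returns`, `mkt`, and the tickers are all
hypothetical, with beta taken as the 252-day rolling regression slope
described below:

```python
import numpy as np
import pandas as pd

def low_beta_weights(returns: pd.DataFrame, mkt: pd.Series,
                     window: int = 252) -> pd.Series:
    """Turn rolling betas into z-scored rank weights for the latest day.

    Beta is the slope of each stock's regression on the market, i.e.
    Cov(r_m, r_i) / Var(r_m) over the trailing window.
    """
    beta = returns.rolling(window).cov(mkt).div(
        mkt.rolling(window).var(), axis=0).iloc[-1]
    factor = (-beta).rank()                      # rank(-beta): low beta -> high rank
    z = (factor - factor.mean()) / factor.std()  # z-score the ranks
    return z / z.abs().sum()                     # scale to a leverage ratio of one

# Hypothetical demo universe: 10 stocks with known betas plus noise
rng = np.random.default_rng(1)
mkt = pd.Series(rng.normal(0.0005, 0.01, 300))
true_betas = rng.uniform(0.5, 1.5, 10)
rets = pd.DataFrame(np.outer(mkt, true_betas) + rng.normal(0, 0.01, (300, 10)),
                    columns=[f"S{i}" for i in range(10)])

w = low_beta_weights(rets, mkt)
assert abs(w.sum()) < 1e-9            # dollar neutral by construction
assert abs(w.abs().sum() - 1) < 1e-9  # leverage ratio of one
```

Note that dollar neutrality falls out automatically: z-scored ranks sum
to zero, so the longs and shorts are always equal in size.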
<p>This will give us large positive values for stocks that have low
beta, and large negative values for those that have high beta. To
calculate the beta for each equity before ranking and z-scoring, we take
the slope of a simple linear regression over a 252-day rolling window.
We then use these factor values as the weights of our long/short
portfolio. Let's look at the total long/short return of our simple
strategy between 2003-01-01 and 2020-01-01:</p>
<p><img src="/assets/betacum.png" alt="Cumulative return" /></p>
<p>Well… this isn't good! Why are we losing money? Does this mean that
the low beta anomaly isn't actually true? Digging into the numbers a
little more, it turns out that the total portfolio beta for this
strategy is actually negative, -38% to be exact. This means that even
though we are market neutral, we have a negative exposure to the market:
when the market goes up, our portfolio loses money, and vice versa.
Since the market goes up most of the time, this isn't a great property
to have. Ideally, we want to have zero beta exposure, not negative
exposure. Let's look at the skew of the distribution of betas over
time:</p>
<p><img src="/assets/skew.png" alt="Skew over time" /></p>
<p>In order for our market neutral portfolio to have zero beta, we would
need the skew of the betas to be zero, like for a standard normal
distribution. To make this a little more concrete, let's look at a
histogram for a single time period of stock betas:</p>
<p><img src="/assets/hist.png" alt="Histogram" /></p>
<p>Now, the problem becomes evident: even though we are market neutral,
we end up taking on negative beta exposure because the distribution of
stock betas has a long right tail. Almost no stocks actually have
negative beta, while many have very high betas! This means
that when longing and shorting in equal proportion, we end up with a
large negative beta since our long positions don't have a low enough
beta in order to cancel out our high beta short positions.</p>
<h2 id="fixing-the-problem">Fixing the Problem</h2>
<p>So, how do we proceed? We need to find some way to adjust the weights of
our portfolio so that we end up with zero beta exposure. Fortunately,
calculating the beta exposure is relatively straightforward. Given a
column vector of weights:</p>
\[\mathbf{X} = \begin{bmatrix}
x_1 \\
x_2 \\
\vdots \\
x_n \\
\end{bmatrix}\]
<p>And a vector of beta exposures:</p>
\[\mathbf{B} = \begin{bmatrix}
\beta_1 & \beta_2 & \dots & \beta_n
\end{bmatrix}\]
<p>We can easily find the beta of the portfolio:</p>
\[\beta_p = \mathbf{BX}\]
<p>Traditionally, we would solve this equation by using an optimizer with a
constraint that $\beta_p =0$. But because we only have one factor, we
can find an analytical solution. What we want to do is to try and solve
for a $\beta_p$ of zero. To do this, we first need to rewrite the
equation in terms of the weights that are positive (the low beta side)
and those that are negative (the high beta side):</p>
\[\mathbf{X_\alpha} = \begin{bmatrix}
x_1 \\
x_2 \\
\vdots \\
x_j
\end{bmatrix}\]
\[\mathbf{X_\beta} = \begin{bmatrix}
x_1 \\
x_2 \\
\vdots \\
x_k
\end{bmatrix}\]
<p>Likewise, we split up the betas as well:</p>
\[\mathbf{B_\alpha} =
\begin{bmatrix}
\beta_1 & \beta_2 & \dots & \beta_j
\end{bmatrix}\]
\[\mathbf{B_\beta} =
\begin{bmatrix}
\beta_1 & \beta_2 & \dots & \beta_k
\end{bmatrix}\]
<p>Where:</p>
\[n = j + k\]
<p>Now, our equation becomes:</p>
\[\beta_p = \mathbf{B_\alpha X_\alpha}+\mathbf{B_\beta X_\beta}\]
<p>Where the $\alpha$ vectors contain the positive weights and the $\beta$
vectors, the negative weights. We want to scale up the negative weights
(make them larger, though still negative), so we introduce a scaling
factor, $\lambda$, and set $\beta_p=0$:</p>
\[\mathbf{B_\alpha X_\alpha}+ \lambda \mathbf{B_\beta X_\beta}=0\]
<p>We now solve for $\lambda$:</p>
\[\lambda = - \frac{\mathbf{B_\alpha X_\alpha}}{\mathbf{B_\beta X_\beta}}\]
<p>Assuming a leverage ratio of one, i.e:</p>
\[\sum_{i=1}^j |x_{\alpha,i}| + \sum_{i=1}^k |x_{\beta,i}| = 1\]
<p>We can calculate the net long exposure, using $\lambda$ as a parameter:</p>
\[l = \frac{1-\lambda}{1+\lambda}\]
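A minimal numpy sketch of this adjustment, using hypothetical weights
and betas (the function and variable names are ours, not from the
original notebook):

```python
import numpy as np

def beta_neutralize(w: np.ndarray, beta: np.ndarray) -> np.ndarray:
    """Scale the short side by lambda so the portfolio beta is zero,
    then re-normalize to a leverage ratio of one."""
    longs, shorts = w > 0, w < 0
    # lambda = -(B_a X_a) / (B_b X_b)
    lam = -(beta[longs] @ w[longs]) / (beta[shorts] @ w[shorts])
    out = w.copy()
    out[shorts] *= lam
    return out / np.abs(out).sum()

beta = np.array([0.4, 0.6, 1.4, 1.8])
w = np.array([0.3, 0.2, -0.2, -0.3])   # dollar neutral, but beta-negative
w_adj = beta_neutralize(w, beta)

assert abs(beta @ w_adj) < 1e-9        # portfolio beta is now zero
lam = 0.24 / 0.82                      # B_a X_a = 0.24, B_b X_b = -0.82
assert abs(w_adj.sum() - (1 - lam) / (1 + lam)) < 1e-6  # net long = (1-λ)/(1+λ)
```

The closed form only works because there is a single constraint; with
sector or style constraints added on top, an optimizer would be needed.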
<p>Below is the graph of net long exposure over time:</p>
<p><img src="/assets/ne.png" alt="Net long exposure" /></p>
<p>And the cumulative return:</p>
<p><img src="/assets/betacum2.png" alt="Cumulative return" /></p>
<p>Wow, this looks a lot better! Ignoring the Great Recession, the
return stream seems very solid, with limited volatility. Below is a
table of some additional strategy information:</p>
<table>
<thead>
<tr>
<th>Metric</th>
<th>Value</th>
</tr>
</thead>
<tbody>
<tr>
<td>Beta</td>
<td>1.8%</td>
</tr>
<tr>
<td>Ann. Ret</td>
<td>4.0%</td>
</tr>
<tr>
<td>Ann. Vol</td>
<td>4.6%</td>
</tr>
<tr>
<td>Ret/Vol</td>
<td>0.84</td>
</tr>
</tbody>
</table>
<p>With a beta of only 1.8%, it's clear that our method for reducing beta
exposure works quite well. Still, it's surprising that a net long
exposure of such magnitude can still be taken with little to no
correlation to the market. Evidently, there is indeed an excess return
associated with low-beta equities, and quite a large one, at that.</p>
<h2 id="explanation">Explanation</h2>
<p>Many academics have tried to explain the structural and behavioral
reasons for the low beta anomaly. The most common explanation put forth
is that due to leverage constraints and borrowing costs, investors seek
out high beta securities in order to achieve a higher natural return. If
high leverage ratios were available to all investors, one might expect
that this mispricing of high beta stocks might go away, as the beta of
any given stock would become less important.</p>
<p>Another possible explanation is that the conflict of interest between
money managers and clients creates an incentive for fund managers to
take excess risk through high beta equities. In good times, managers
collect a performance fee and a management fee while in bad times, only
a management fee. This asymmetric payoff incentivizes managers to try
and score a big "win," while limited liability ensures they never have
to pay back losses.</p>
<p>A third explanation is that high volatility (and thus high beta) stocks
receive more attention from the financial community as they are simply
more interesting to discuss. This interest and attention encourages
increased buying, thus pushing down the expected return of said stocks.</p>
<h2 id="conclusion">Conclusion</h2>
<p>In this post we have shown that there is indeed a significant abnormal
return associated with low beta stocks. This abnormal return can not
only be captured with a long only (and beta exposed) portfolio, but also
a beta neutral one. Despite the less than stellar risk-adjusted return
of the strategy, perhaps the Sharpe ratio can be improved by controlling
sector and style risks in addition to beta exposure. Perhaps one could
also overlay the low beta factor on top of an existing factor strategy
in order to reduce volatility and boost the risk-adjusted return.</p>
<p>That's all for now, and thanks for reading! If you're interested in
the code and want to play around with it, check it out
<a href="https://www.quantopian.com/posts/low-beta-anomaly">here</a>.</p>Analysis of CS:GO Win-rates2020-06-12T00:00:00+00:002020-06-12T00:00:00+00:00https://smabie.github.io/posts/2020/06/12/csgo<p>This post is going to be a little different from usual; instead of
markets, we're going to look at a video game, namely <a href="https://en.wikipedia.org/wiki/Counter-Strike:_Global_Offensive">Counter-Strike:
Global
Offensive</a>
(CS:GO). CS:GO, like most great games, is easy to learn but deceptively
hard to master. For those who are unfamiliar with the game, we'll give a
quick overview below.</p>
<p>CS:GO is a competitive zero-sum game in which two teams of 5 players
each try to win rounds. The first team to win 16 rounds wins the game.
The game is asymmetric, as there are two distinct sides: the terrorists
(T-side) and counter-terrorists (CT-side). After 15 rounds, the teams
switch sides. The goal of the T-side is either to eliminate all CT
players or to plant a bomb and have it explode before the CT-side can
defuse it; the CT-side wins the round if they kill all the players on
the T-side before the bomb is planted, if they defuse the bomb, or if
time runs out. This asymmetric rule-set implies that the CT-side can
lose a round even if they kill all the opposing players, while the
T-side cannot lose if they eliminate all CT players.</p>
<p>In order to get a better idea of the structure of the game, let's look
at a top-down perspective of one of the most famous and iconic maps,
Dust2:</p>
<p><img src="/assets/dust2.png" alt="Dust2" /></p>
<p>T-side starts the game in T-spawn and must plant their bomb at one of
two locations: B-site or A-site, designated by the red areas on the map.
The CTs start off in CT-spawn and must try to defend these two sites.
If the T-side manages to break the CT defenses and plant the bomb, the
remaining CT players try to retake the bombsite (either A or B) and
defuse the bomb before it explodes.</p>
<p>Because of the asymmetric nature of the game, I thought it would be
interesting to analyze how much a kill affects the game for each side.
To start off, we'll first look at the situation where the number of
players on each side is equal.</p>
<h2 id="even-match-ups">Even Match-ups</h2>
<p>First up, let's look at the so-called even match-up, where there are an
equal number of players on each team: 5v5, 4v4, etc. Using over 400,000
rounds of match data from mid-2018 (click
<a href="https://www.kaggle.com/skihikingkevin/csgo-matchmaking-damage">here</a>
for the original dataset), we aggregate all rounds with even match-ups
and the side that wins. From this, we can calculate a cumulative win
probability for each match-up:</p>
<p><img src="/assets/evencs.png" alt="Even Match-ups" /></p>
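Computing these cumulative win rates amounts to filtering the even
match-ups and grouping by player count; a minimal pandas sketch on toy
data (the column names are hypothetical, not those of the original
Kaggle dataset):

```python
import pandas as pd

# Toy stand-in for the real per-round data: players alive on each
# side at a point in the round, and whether the CT-side won.
rounds = pd.DataFrame({
    "ct_alive": [5, 5, 5, 4, 4, 4, 3, 3],
    "t_alive":  [5, 5, 5, 4, 4, 4, 3, 3],
    "ct_win":   [1, 0, 1, 0, 1, 0, 0, 0],
})

even = rounds[rounds["ct_alive"] == rounds["t_alive"]]
ct_win_rate = even.groupby("ct_alive")["ct_win"].mean()
```

The real analysis does the same aggregation over 400,000+ rounds, one
group per even match-up size.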
<p>From the above graph, it seems that the T-side enjoys a significant
advantage. Even when the round starts at full strength, the CT-side has
a less than 50% chance of winning, and as trades are made (a situation
where each team loses a player), the T-side's advantage only grows. But
is this significant? Let's look at a table of CT-side win rates and
their associated p-values:</p>
<table>
<thead>
<tr>
<th>CT win rate</th>
<th>p-value</th>
<th>players</th>
</tr>
</thead>
<tbody>
<tr>
<td>0.48729</td>
<td>2.67482e-55</td>
<td>5</td>
</tr>
<tr>
<td>0.469768</td>
<td>1.17974e-151</td>
<td>4</td>
</tr>
<tr>
<td>0.455098</td>
<td>3.84482e-237</td>
<td>3</td>
</tr>
<tr>
<td>0.439982</td>
<td>1.52963e-319</td>
<td>2</td>
</tr>
<tr>
<td>0.430299</td>
<td>2.35914e-282</td>
<td>1</td>
</tr>
</tbody>
</table>
<p>It's clear from the minuscule magnitude of the p-values that we can
reject the null hypothesis, namely that each even match-up is fair: a
50% chance of each side winning the round.</p>
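P-values like those in the table come from testing an observed win count
against the fair 50/50 null; a sketch with scipy's two-sided binomial
test (the round counts here are illustrative, not the dataset's actual
totals):

```python
from scipy.stats import binomtest

# Illustrative: a 48.729% CT win rate over 200,000 even 5v5 rounds,
# tested against the null hypothesis of a fair coin-flip round.
n_rounds = 200_000
ct_wins = round(0.48729 * n_rounds)
result = binomtest(ct_wins, n_rounds, p=0.5)
assert result.pvalue < 1e-10   # the fair-match null is soundly rejected
```

With sample sizes this large, even a one-point deviation from 50%
produces a vanishingly small p-value, which is why the table's values
are so extreme.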
<h2 id="all-match-ups">All Match-ups</h2>
<p>Now that we've established that the T-side has an advantage even when
each side has the same number of players, let's consider all the
possible combinations. Clearly a situation where 5 CTs are up against 4
Ts is not a fair fight (we might assume that the CTs have the
advantage), but how unfair is it? After crunching the numbers for each
permutation, we get the following graph:</p>
<p><img src="/assets/csperm.png" alt="All Permutations" /></p>
<p>Interesting! When the CT-side has close to the same number of players
as the T-side, the first kills make the most difference. A 5v5 gives the
CT-side a 48% chance of victory, but netting the first kill shifts the
odds considerably, to 68%. When the difference is large, the final kills
shift the odds the most, as the CT-side's chance of winning is so low to
begin with. Below is a table of the exact figures:</p>
<table>
<thead>
<tr>
<th>1 CT</th>
<th>2 CT</th>
<th>3 CT</th>
<th>4 CT</th>
<th>5 CT</th>
<th># T alive</th>
</tr>
</thead>
<tbody>
<tr>
<td>0.00218907</td>
<td>0.0227307</td>
<td>0.106122</td>
<td>0.274956</td>
<td>0.487288</td>
<td>5</td>
</tr>
<tr>
<td>0.00689667</td>
<td>0.0682988</td>
<td>0.235994</td>
<td>0.469768</td>
<td>0.683482</td>
<td>4</td>
</tr>
<tr>
<td>0.0287596</td>
<td>0.187184</td>
<td>0.455091</td>
<td>0.70026</td>
<td>0.858241</td>
<td>3</td>
</tr>
<tr>
<td>0.123364</td>
<td>0.439972</td>
<td>0.732706</td>
<td>0.893306</td>
<td>0.961373</td>
<td>2</td>
</tr>
<tr>
<td>0.430299</td>
<td>0.7915</td>
<td>0.941975</td>
<td>0.985024</td>
<td>0.9967</td>
<td>1</td>
</tr>
<tr>
<td>0.906434</td>
<td>0.989945</td>
<td>0.998562</td>
<td>0.999731</td>
<td>0.999873</td>
<td>0</td>
</tr>
</tbody>
</table>
<h2 id="conclusions">Conclusions</h2>
<p>I know little about professional CS:GO strategy, and the data isn't
taken from professional games, but we can quickly infer a few things
from the data:</p>
<ol>
<li>The T-side wants to trade down as much as possible; the CT-side
never wants to trade.</li>
<li>The CT-side needs to play very conservatively, maximizing the number
of its players that stay alive.</li>
<li>The T-side wants to play in a very aggressive style in order to take
map control and trade.</li>
</ol>
<p>Anyways, this has been a fun little post to write, I hope you enjoyed
it! Click <a href="https://github.com/smabie/csgo-kill-value">here</a> to view the
GitHub project.</p>ETFs, Volatility and Leverage: Towards a New Leveraged ETF Part 32020-06-09T00:00:00+00:002020-06-09T00:00:00+00:00https://smabie.github.io/posts/2020/06/09/vol3<p>In the final installment of this three part series, we are going to use
our results from the previous two posts to construct a fully automated
variable leverage ETF. In <a href="https://cryptm.org/posts/2019/10/04/vol.html">part
one</a>, we derived the
optimal leverage ratio for maximizing returns and in <a href="https://cryptm.org/posts/2020/05/28/vol2.html">part
two</a> we applied the ARMA
and GARCH models to forecast returns and volatility, respectively.</p>
<p>The ETF we are going to build will take variable leverage with a minimum
leverage of 1 and a maximum leverage of 3. This means that the leverage
ratio we will be assuming will deviate considerably from what is
optimal. The reason for this is twofold: 1) many investors don't want
to short the market, regardless of what our model says, preferring a
baseline of pure beta exposure, and 2) the SEC rejected an application
for the creation of a 4x S&P 500 ETF, so it would be unlikely that a
higher leverage ratio could be assumed, at least in an ETF wrapper. If
one were instead obtaining leverage directly through futures, -20x to 20x
leverage could be taken.</p>
<p>Right now, we have two components, $\text{E}(r_m)$, the expected monthly
return generated from our ARMA model, and $\text{E}(\sigma_d)$, the
expected daily volatility from our GARCH model. First, we need to
forward fill our monthly return data in order to generate a return
forecast for every day. We forward fill instead of back filling in order
to avoid lookahead bias. Also, because our ARMA model forecasts the
one-month-ahead expected return, we convert our daily volatility into a
monthly variance: $21\text{E}(\sigma_d)^2$. So our raw leverage ratio becomes:</p>
\[l = \frac{\text{E}(r_m^{\text{fill}})}{21\text{E}(\sigma_d)^2}\]
<p>Let's look at graph of our leverage ratio over time to get a sense of
what we're dealing with:</p>
<p><img src="/assets/olev.png" alt="Optimal leverage" /></p>
<p>Oof, most investors won't be happy with that! Not only is our optimal
leverage calculation taking huge short and long positions, but it's
changing the direction of the portfolio very frequently. We want to
smooth out the changes in leverage as well as constrain the amount taken
between 1 and 3. In order to do this, we apply the following
transformations on the time-series:</p>
<ol>
<li>
<p>First, if the leverage ratio is less than 1 at a time-point, we set
it to 0.</p>
</li>
<li>
<p>In order to constrain the values, we then add $e$ and logscale it.
This should give us values approximately between 1 and 3.</p>
</li>
<li>
<p>To smooth the leverage, we apply an <a href="https://pandas.pydata.org/pandas-docs/stable/user_guide/computation.html#exponentially-weighted-windows">exponential rolling
window</a>,
with an $\alpha$ of 0.05.</p>
</li>
<li>
<p>If any values are above 3, we set them to 3.</p>
</li>
</ol>
<p>Below is the full transformation:</p>
\[f(x) =\begin{cases}
x &\text{if }x > 1\\
0 &\text{else}\\
\end{cases}\]
\[l_t = \min(\text{EWM}_{\alpha=0.05}[\log(f(x_t)+e)],\,3)\]
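The four steps above map directly onto pandas operations; a sketch, with
`raw_lev` standing in for the raw optimal-leverage series (simulated
here for illustration):

```python
import numpy as np
import pandas as pd

def adjust_leverage(raw_lev: pd.Series, alpha: float = 0.05) -> pd.Series:
    """Constrain and smooth a raw leverage series to roughly [1, 3]."""
    x = raw_lev.where(raw_lev > 1, 0.0)  # 1) zero out anything <= 1
    x = np.log(x + np.e)                 # 2) shift by e and log-scale
    x = x.ewm(alpha=alpha).mean()        # 3) exponentially weighted smoothing
    return x.clip(upper=3)               # 4) cap anything above 3

rng = np.random.default_rng(2)
raw = pd.Series(rng.normal(2, 10, 1000))   # wildly swinging raw leverage
lev = adjust_leverage(raw)
assert lev.min() >= 1 - 1e-9 and lev.max() <= 3
```

Since $\log(0 + e) = 1$, the floor of 1 comes out of step 2
automatically, and smoothing a series bounded below by 1 cannot dip
under it; only the upper cap needs an explicit clip.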
<p>Consider the graph of adjusted leverage:</p>
<p><img src="/assets/alev.png" alt="Adjusted leverage" /></p>
<p>Though we're significantly deviated from what the optimal leverage is,
our adjusted leverage looks a lot more reasonable. The transitions are
smoother and the values are bounded between 1 and 3, as desired. Now we
simply multiply the leverage ratio by the returns of the S&P 500 and we
have our strategy! This is the moment of truth: let's look at a graph
of the returns of the S&P 500, the returns of our ETF, and the leverage
ratio:</p>
<p><img src="/assets/etf.png" alt="variable leverage S&P 500 vs S&P 500 vs leverage
ratio" /></p>
<p>Not bad! Our ETF is behaving as expected: taking on more and more
leverage during bull runs and reducing exposure when the market drops.
The leverage ratio fluctuates between 1 and 2 due to our smoothing
factor, though more risk could be taken by choosing a greater $\alpha$
value.</p>
<h2 id="conclusion">Conclusion</h2>
<p>At long last, we've reached the end of this series. We've talked
about investors' mostly irrational aversion to holding leveraged ETFs
over long periods of time, derived the optimal leverage ratio to
maximize returns, created models to forecast returns and volatility, and
used all of this to create a variable leverage ETF. Could such a product be
brought to market, and would investors be interested? Can investors be
convinced to hold any product that isn't just vanilla beta? I'm not
sure, but it is certainly an interesting opportunity. No product on the
market fulfills this niche, if there is even one to exploit. I hope you
enjoyed this series as much as I enjoyed researching and writing it.
Though we did succeed in creating a rudimentary model, so much more work
could be done on it. I believe that a variable leverage product could
have great potential in the retail space, especially now that vanilla
beta has been completely commoditized. New ETF ideas that are "beta
plus" not only have the potential to deliver value to investors, but
also could command much higher fees.</p>
<p>You can check out the notebook
<a href="https://github.com/smabie/towards-a-new-etf-part3">here</a>. Feel free to
play around with any and all parameters.</p>