Aug 06

A Better Robo Advisory

Building a Better Robo Advisor

The more we learned about the current crop of robo advisory firms, the more we realized we could do better. This brief blog post hits the high points of that thinking.

Not Just the Same Robo Advisory Technology

It appears that all major robo advisory companies use 50+ year-old MPT (modern portfolio theory). At Sigma1 we use so-called post-modern portfolio theory (PMPT) that is much more current. At the heart of PMPT is optimizing return versus semivariance. The details are not important to most people, but the takeaway is that PMPT, in theory, allows greater downside risk mitigation and does not penalize portfolios that have sharp upward jumps.

Robo advisors, we infer, must use some sort of Monte Carlo analysis to estimate “poor market condition” returns. We believe we have superior technology in this area too.

Finally, while most robo advisory firms offer tax loss harvesting, we believe we can 1) set up portfolios that do it better, and 2) go beyond tax loss harvesting alone to achieve greater overall portfolio tax efficiency.

Aug 01

I Robo: The Rise of the Robo Advisor

Think Ahead About Your Role in a Robo Advisory World

Financial innovation is here and it is here to stay.  Financial advisors, broker/dealers, hybrids, and even financial planners should be thinking about how to adapt to inevitable changes launched by disruptive investing technologies.

Robo Design — Chip designers have been using it for decades

I have a unique perspective on technological disruption.  For over ten years, my job was to develop software to make microchip designers more productive. Another way of describing my work was to replace microchip design tasks done by humans with software. In essence, my job was to put some chip designers out of work. My role was called (digital circuit) design automation, or DA.

In reality my work and the work of software design automation engineers like myself resulted in making designers faster and more productive — able to develop larger chips with roughly the same number of design engineers.

Robo Advisors: Infancy now, but growing very fast!

“The robos are coming, the robos are coming!” It’s true. Data through the end of 2014 shows that robo advisors managed $19 billion in assets, with 65% growth in just eight short months. This is essentially triple-digit annual growth, doubling every year.  $19 billion (likely $30 billion now) is just a drop in the bucket… but with firms like Vanguard and Schwab already developing and rolling out robo advising options of their own, these crazy growth rates are sustainable for a while.

With total US assets under management (AUM) exceeding $34 trillion, an estimated $30 billion for robo advisors represents less than 0.1% of managed assets.  If, however, robo advisors double their managed assets annually for the next five years, that amounts to about 3% of total AUM managed by robo advisors. If in the second five years the robo advisory annual growth rate slows to 50%, that still means robo advisors will control in the neighborhood of 20% of managed assets by 2025.
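
The arithmetic checks out in R (holding total AUM fixed at $34 trillion, as this back-of-the-envelope projection implicitly does):

    # Quick check of the growth arithmetic above (total AUM held at $34T).
    robo_now <- 30e9
    five_yr  <- robo_now * 2^5   # doubling annually for 5 years: $960B
    five_yr / 34e12              # ~2.8%, call it 3%, of managed assets
    five_yr * 1.5^5 / 34e12      # 5 more years at 50% growth: ~21%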

“Robo-Shields” and Robo Friends

Deborah Fox was clever enough to coin and trademark the term “robo-shield.” The basic idea is for traditional (human) investment advisors to protect their business by offering robo-like services, ranging from online client access to their data to tax-loss harvesting. I call this the half-robo defense.

Another route to explore is the “robo friends,” or “full robo-hybrid,” approach: partnering with an internal or external robo advisor.  In this model the robo advisor is subservient to you, the investment advisor, and provides portfolio allocation and tax-loss harvesting while you focus on the client relationship.  I believe that the “robo friends” model will win over the pure robo advising model — most people prefer to have someone to call when they have investment questions or concerns, and they like to have relationships with their human advisors. We shall see.

What matters most is staying abreast of the robo advisor revolution and having a plan for finding a place in the brave new world of robo advising.


Jul 31

Semivariance Excel Example

The most in-demand topic on this blog is an Excel semivariance example. I have posted mathematical semivariance formulas before, but now I am providing a description of exactly how to compute semivariance in “vanilla” Excel… no VBA required.

The starting point is column D. Cell D$2 contains the average return over the past 36 months. The range D31:D66 contains those 36 monthly returns.  Thus the contents of D$2 are simply:

=AVERAGE(D31:D66)

This leads us to the semivariance formula:

{=SQRT(12)*SQRT(SUM(IF(D31:D66<D$2,(D31:D66-D$2)^2,0)/(COUNT(D31:D66)-1)))}

We will now examine each building block of this formula, starting with:

IF(D31:D66<D$2,(D31:D66-D$2)^2,0)

We only want to measure “dips” below the mean return. For each observation that “dips” below the mean we take the square of the dip; otherwise we return zero. This is a vector operation: the IF function returns a vector of values.

Next we divide the resulting vector by the number of observations (months) minus 1. We can simply count the observations with COUNT(D31:D66), then subtract 1.  [NOTE 1: The minus 1 means we are taking the semivariance of a sample, not a population. NOTE 2: We could just as easily have taken the division “outside” the SUM — the result is the same either way.]

Next is the SUM. The following formula is the monthly semivariance of our returns in column D:

{=SUM(IF(D31:D66<D$2,(D31:D66-D$2)^2,0)/(COUNT(D31:D66)-1))}

You’ll notice the added curly braces around this formula. They specify that the formula should be treated as an array (vector) operation, and they allow this formula to stand alone.  The curly braces are not typed directly: they are applied to a vector (or matrix) formula by hitting <CTRL><SHIFT><ENTER> rather than just <ENTER>. Hitting <CTRL><SHIFT><ENTER> is required after every edit.

We now have monthly semivariance. If we wanted annual semivariance we could simply multiply by 12.

Often, however, we ultimately want annual semi-deviation (also called semi-standard deviation) for computing things like Sortino ratios. Going up one more layer in the call stack brings us to the SQRT operation, specifically:

{=SQRT(SUM(IF(D31:D66<D$2,(D31:D66-D$2)^2,0)/(COUNT(D31:D66)-1)))}

This is monthly (downside) semi-deviation. We are just one step away from computing annual semi-deviation. That step is multiplying by SQRT(12), which brings us back to the big full formula.

There it is in a nutshell. You now have the formulas to compute semivariance and semi-deviation in Excel.
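
For readers who prefer to sanity-check Excel against code, here is a minimal R cross-check of the same computation, assuming rtn is a numeric vector of 36 monthly returns (the equivalent of D31:D66):

    # Minimal R cross-check of the Excel semivariance formulas above.
    semivar_monthly <- function(rtn) {
      dips <- pmin(rtn - mean(rtn), 0)  # keep only dips below the mean
      sum(dips^2) / (length(rtn) - 1)   # sample (n-1) semivariance
    }
    semidev_annual <- function(rtn) sqrt(12) * sqrt(semivar_monthly(rtn))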



Jun 28

Quant Cross-Training

A very astute professor of finance told our graduate finance class that the best way to become a bona fide quant is NOT to get a Ph.D. in Finance!  It is better, he said, to get a Ph.D. in statistics, applied mathematics, or even physics. Why? Because a Ph.D. in Finance is generally not sufficiently quantitative. A quant needs a strong background in stochastic calculus.

“Quants for Hire?”

Our company has been described as a “quants for hire” firm. That is flattering. While we currently have four folks with Master of Science degrees (and one close to finishing a master’s), what we do is probably more accurately described as “quant-like” or “quant-lite” software and services. However, “Quants for Hire” definitely has a nice succinct ring to it.

Quant-like Tangents to Financial Learning

Most of our quant-like work has been fairly vanilla — back testing trading strategies in Excel, Monte Carlo simulations (also in Excel), factor analysis, options strategy analysis. So far our clients like Excel and are not very interested in R. The main application of R has been to double-check our Excel back tests!

We have attracted fairly sophisticated clients.  They seem reasonably comfortable talking about viewing portfolios as unit vectors that can be linearly combined.  They tend to understand correlation matrices, Sortino ratios, and in some cases even relate to partial derivatives and gradients. But they tend to push back on explanations involving geometric Brownian motion, Ito’s lemma, and the finer points of Black-Scholes-Merton. They do, however, appear to appreciate that we “know our stuff.”

I’ve got a decent set of R skills, but I’m looking to take them to the next level. I’m taking a page from my professor in tackling non-financial quantitative problems. My current problem du jour is image compression. I came up with an R script that achieves very high levels of lossy compression.  It is shorter than 200 lines commented, and shorter than 100 lines when stripped of comments and blank (formatting) lines.

It can easily achieve 20X or greater compression, albeit with a loss in quality. In my initial tests my R algorithm (IC_DXB1.1) was somewhat comparable to JPEG (GIMP 2.8) at 20X compression, though the JPEG clearly looks better in general. I also found an elegant R compressor that is extremely compact… the kernel is about 5 lines! Let’s call this SVD (singular value decomposition) for reference. So here are the bake-off results (all ~20X compressed to ~1.5KB):

[Images: the same test image at ~20X compression, encoded with JPEG (GIMP 2.8), IC_DXB1.1, and SVD in R.]

What’s interesting to me is that each algorithm uses a radically different approach. JPEG uses DCT (discrete cosine transform) plus a frequency “mask” or filter that reduces more and more high-frequency components to achieve compression. My IC_DXB1.1 algorithm uses a variant of B-splines. The SVD approach uses singular value decomposition from linear algebra.
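
The SVD approach is compact enough to sketch here. This is not my IC_DXB1.1 code, and not necessarily the exact 5-line kernel I found — just the standard rank-k reconstruction, assuming img is a grayscale image stored as a numeric matrix:

    # Rank-k SVD approximation of a grayscale image matrix `img`.
    svd_compress <- function(img, k = 20) {
      s <- svd(img)                                    # img = U D V'
      s$u[, 1:k] %*% diag(s$d[1:k]) %*% t(s$v[, 1:k])  # keep top k modes
    }

Storing only the k leading singular vectors and values is what yields the compression; coding those values compactly (e.g., with Huffman coding, as noted below) is a separate step.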

Obviously tens of thousands of hours have been invested in JPEG encoding. And, unfortunately, 99%+ of JPEG images are not as compact as they could be due to a series of patent disputes around arithmetic coding. Even though the patents have all (to the best of my knowledge) expired, there is simply too much inertia behind the alternative Huffman coding at present. It is worth noting that my analysis of all 3 algorithms is based on Huffman coding for consistency.  All three approaches could ultimately use either Huffman or arithmetic coding.


So this Image Stuff Relates to Finance How?

Another of my professors explained that, fundamentally, finance is about information. One set of financial interview questions starts with the premise that you have immediate (light-speed, real-time) access to all public information. Generally, how would you make use of this information to make money trading? Alternatively, you are to assume (correctly) that information costs money… how would you prioritize your firm’s information access?  How important are frequency and latency?

Having boatloads of real-time data and knowing what to do with it are two different things. I use R to back test strategies because it is easy to write readable R code with a low bug rate. If I had to implement those strategies in a high-frequency trading environment, I would not use R; I would likely use C or C++. R is fast compared to Excel (maybe 5X faster), but slow compared to good C/C++ implementations (often 100X slower).

My thinking is that while knowledge is important, so is creativity. By dabbling in areas outside of my “realm of expertise”, I improve my knowledge while simultaneously exercising my creativity.

Both image compression and quant finance can reasonably be viewed as signal processing problems, and signal processing and information theory are closely related. So I would argue that developing skills in one area is cross-training for the other… and with greater opportunity for developing creativity. Finance is inextricably linked to information.

The Future of Finance Requires Disruptive (Software) Technology

It ain’t gonna be pretty for traditional financial advisors, hybrid advisors, broker/dealers, etc. Not with the rapid market acceptance of robo advisors.

Robo advising will have at least three important disruptive impacts:

  1. Accelerating downward pressure on advisory fees
  2. Taking market share and AUM
  3. Increasing market demand for investment tax management services such as tax-loss harvesting

Are you ready for the rise of the bots? We at Sigma1 are, and we are looking forward to it. That is because we believe we have the software and skills to make robo advisors work better. And we are not resting on our laurels — we are focusing our professional development on software, computer science, advanced mathematics, information theory, and the like.

Jun 18

Dividends and Tax-Optimal Investing

The previous post showed after-tax results of a hypothetical 8% return portfolio. The primary weakness in this analysis was a missing bifurcation of return: dividends versus capital gains.

The analysis in this post adds the missing bifurcation, and it is instructive to compare the two results. This new analysis accounts for qualified dividends and assumes that these dividends are reinvested. It is an easy mistake to assume that, since the qualified-dividend rate is identical to the capital gains rate, dividends are equivalent to capital gains on a post-tax basis. This assumption is demonstrably false.

[Figure: Tax efficiency with dividends.]

Though both scenarios model a net 8% annual pre-tax return, the “6+2” model (6% capital appreciation, 2% dividend) shows a lower 6.98% after-tax return for the most tax-efficient scenario versus a 7.20% after-tax return for the capital-appreciation-only model. (The “6+2” model assumes that all dividends are re-invested post-tax.)
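
For readers who want to replicate the “6+2” number, here is a minimal R sketch. The timing conventions are assumptions on my part (dividends paid on the start-of-year balance, taxed annually at 20%, reinvested post-tax so they raise the cost basis, zero starting basis, full liquidation after 30 years), but it lands very close to the 6.98% figure:

    tax <- 0.20; v <- 10000; basis <- 0
    for (yr in 1:30) {
      net_div <- 0.02 * v * (1 - tax)  # qualified dividend, taxed each year
      v <- v * 1.06 + net_div          # 6% appreciation plus reinvestment
      basis <- basis + net_div         # reinvested dividends add to basis
    }
    after_tax <- v - tax * (v - basis) # liquidation in year 31
    (after_tax / 10000)^(1/30) - 1     # ~6.97%, vs 7.20% without dividends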

This insight suggests an interesting strategy to potentially boost total after-tax returns. Assume that our “6+2” model represents the expected 30-year average returns for a total US stock market index ETF like VTI. We can deconstruct VTI into a value half and a growth half, put the higher-dividend value half in a tax-sheltered account such as an IRA, and leave the lower-dividend growth half in a taxable account.

This value/growth split only produces about 3% more return over 30 years, an additional future value of $2422 per $10,000 invested in this way.

While this value/growth split works, I suspect most investors would not find it to be worth the extra effort. The analysis above assumes that the growth half follows a “7+1” model.  In reality the split costs about 4 extra basis points of expense ratio — VTI has a 5 bps expense ratio, while the growth and value ETFs all have 9 bps expense ratios. This cuts the 10 bps per year after-tax boost to only 6 bps. Definitely not worth the hassle.

Now consider the Global X SuperDividend ETF (SDIV), which has a dividend yield of about 5.93%. Even if all of the dividends from this ETF receive qualified-dividend tax treatment, it is probably better to hold this ETF in a tax-sheltered account. All things equal, it is better to hold higher-yielding assets in a tax-sheltered account when possible.

Perhaps more important is to hold assets that you are likely to trade more frequently in a tax-sheltered account and assets that you are less likely to trade in a taxable account. The trick then is to be highly disciplined to not trade taxable assets that have appreciated (it is okay to sell taxable assets that have declined in value — tax loss harvesting).

The graph shows the benefits of long-term discipline on after-tax return, and the potential costs of a lack of trading discipline. Of course this whole analysis changes if capital gains tax rates are increased in the future — one hopes one will have sufficient advance notice to take “evasive” action.  It is also possible that one could be blindsided by tax-raising surprises that give no advance notice or are even retroactive! Unfortunately there are many forms of tax risk, including the very real possibility of future tax increases.

Jun 15

Investment Tax Management Boosts Returns in Surprising Ways

Figure 1. Tax deferral benefits both parties.

A Common Tax Misconception

When asked to consider tax deferral investment strategies, many people instinctively conclude that tax deferral benefits the investor at the expense of the government. Such a belief is half-right. Tax deferral ultimately benefits both the investor and the government’s tax revenues. While there are exceptions involving inheritance, in most other cases both parties benefit. Figure 1 summarizes the relationship between higher after-tax returns and higher nominal net cash flows to the government.

The reason I lead with the government’s side of the tax equation is for the tax policy wonks in Washington D.C. I suspect many of them already know this information, and this is simply another data point to add to their arsenal of tax facts. For the others, I hope this is a wake-up call. The message:

When investors, investment advisors, and fund managers successfully defer long-term capital gains, investors and governments win in the long run.

The phrase “in the long run” is important. When taxes are deferred, the government’s share grows along with the investor’s. In the short term, taxes are reduced; in the long run taxes are increased. For the investor this long-run tax increase is more than offset by increased compounding of return.

Please note that all of these win/win outcomes occur under an assumption of fixed tax rates — 20% in this example. It is also worth noting that these outcomes occur for funds that are spent at any point in the investor’s lifetime. This analysis does not necessarily apply to taxable assets that are passed on via inheritance.

Critical observers may acknowledge the government tax “win” holds for nominal tax dollars, but wonder whether it still holds in inflation-adjusted terms. The answer is “yes” so long as the investor’s (long-run) pre-tax returns exceed the (long-run) rate of inflation. In other words, so long as g > i (g is pre-tax return, i is inflation), the yellow line will be upward sloping; more effective tax-deferral strategies, with higher post-tax returns, will benefit both parties. As inflation increases, the slope of the yellow line gets flatter, but it retains an upward slope so long as pre-tax return is greater than inflation.

Tax Advantages for Investors

Responsible investors face many challenges when trying to preserve and grow wealth. Among these challenges are taxes and inflation. I will start by addressing two important maxims in managing investment taxes:

  1. Avoid net realized short-term (ST) gains
  2. Defer net long-term gains as long as possible

It is okay to realize some ST gains; however, it is important to offset those gains with capital losses. The simplest way of achieving this offset is to realize an equal or greater amount of ST capital losses within the same tax year. ST capital losses directly offset ST capital gains.

A workable, but more complex, way of offsetting ST gains is with net LT capital losses. The term net is crucial here, as LT capital losses can only be used to offset ST capital gains once they have first been used to offset LT capital gains.  It is only LT capital losses in excess of LT capital gains that offset ST gains.
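
The cross-netting rule is compact enough to express in a few lines of R. This is a hypothetical sketch of the logic just described, not tax advice; st and lt are net short-term and net long-term results, with losses negative:

    net_st_lt <- function(st, lt) {
      # a net loss in one bucket offsets the other bucket's net gain
      if (st < 0 && lt > 0) { k <- min(-st, lt); st <- st + k; lt <- lt - k }
      if (lt < 0 && st > 0) { k <- min(-lt, st); lt <- lt + k; st <- st - k }
      c(net_st = st, net_lt = lt)
    }
    net_st_lt(9000, -1000)  # $1k net LT loss offsets ST gain: $8k ST remains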

If the above explanation makes your head spin, you are not alone. Managing capital gains is really an exercise in linear programming. In order to make this tax exercise less (mentally) taxing, here are some simple concepts to help:

  • ST capital losses are better than LT capital losses
  • ST capital gains are worse than LT capital gains
  • When possible offset ST losses with ST gains

Because ST capital losses are better than LT, it often makes sense to see how long you have held assets that have larger paper (unrealized) losses. All things equal, it is better to “harvest” the losses from the ST losers than from the LT losers.

Managing net ST capital gains can potentially save you a large amount of taxes, resulting in higher post-tax returns.

Tax Advantages for the Patient Investor

Deferring LT capital gains requires patience and discipline. Motivation can help reinforce patience, and for motivation we go back to the example used to create Figure 1. The example starts today with a $10,000 investment in a taxable account and a 30-year time horizon. It assumes a starting cost basis of zero and an annual return of 8%.

This example was set up to help answer the question: “What is the impact of ‘tax events’ on after-tax returns?” To keep things as simple as possible, a “tax event” is an event that triggers a long-term capital gains realization in a tax year. Also, in all cases, the investor liquidates the account at the start of year 31. (This year-31 sale is not counted in the tax event count.)

It turns out that it is not just the number of tax events that matters — it is also the timing. To capture some of this timing-dependent behavior, I set up my spreadsheets to model two different timing modes. The first is called “stacked” and it simply stacks all tax events in back-to-back years. The second mode is called “spaced” because the tax events are spaced uniformly.  Thus 2 stacked tax events occur in years 1 and 2, while 2 spaced tax events occur in years 10 and 20. The results are interesting:

[Figure: Tax “event” impact on after-tax returns.]

The most important thing to notice is that if an investor can completely avoid all “tax events” for 30 years, the (compound) after-tax return is 7.2% per year, but if the investor triggers just one taxable event the after-tax return is significantly reduced. A single “stacked” tax event in year 1 reduces after-tax returns to 6.49%, while a single “spaced” tax event in year 15 reduces returns to 6.67%. Thus for a single event the spaced-event curve is higher, while for all other numbers of tax events (except 30, where they are identical) it is lower than the stacked-event curve.

The main take-away from this graph is that tax-deferral discipline matters. The difference between a 7.2% and a 6.67% after-tax return over thirty years is huge when framed in dollar terms. With zero (excess) tax events the after-tax result in year 31 is $80,501. With one excess tax event (with the least harmful timing!) that sum drops to $69,476.

In the worst case the future value drops to $51,444 with an annual compound after-tax return of only 5.61%.
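
The “stacked” model is easy to reproduce. Here is a minimal R sketch under the post’s stated assumptions (8% return, 20% LTCG rate, zero starting basis); each tax event liquidates, pays the tax, and reinvests with a stepped-up basis:

    fv_after_tax <- function(events, r = 0.08, tax = 0.20, yrs = 30) {
      v <- 10000; basis <- 0
      for (y in 1:yrs) {
        v <- v * (1 + r)
        if (y <= events) { v <- v - tax * (v - basis); basis <- v }
      }
      v - tax * (v - basis)             # final liquidation in year 31
    }
    fv_after_tax(0)   # $80,501 with no excess tax events
    fv_after_tax(1)   # one stacked event in year 1: roughly $66k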

Tax Complexity, Tax Modeling Complexity, and Other Factors

One of the challenges faced when bringing fresh perspectives to the tax-plus-investing dialog is in providing examples that paint the broad portfolio tax-management themes in a concise way. The first challenge is that the tax code is constantly changing, so predicting future tax rates and tax rules is an imprecise game at best. The second challenge is that the tax code is so complex that any generalization will most likely have a counterexample buried somewhere in the tax code. The third complication is that, barring significant future tax code changes and obscure tax code counterexamples, creating a one-size-fits-all model for investors results in large oversimplifications.

I believe that tax indifference is the wrong answer to the question of portfolio tax optimization. The right answer is more closely aligned with the maxim:

All models are wrong. Some are useful.

This common saying in statistics gets to the heart of the problem and the opportunity of investment tax management. It is better to build a model that gives deeper insight into opportunities that exist in reconciling prudent tax planning with prudent investment management, than to build no model at all.

The simple tax model used in this blog post makes some broad assumptions. Among these is that the long-term capital gains rate will be the same for 30 years and that the investor will occupy the same tax bracket for 30 years. The pre-tax return model is also very simple: 8% pre-tax return each and every year.

I argue that models as simple as this are still useful. They illustrate investment tax-management principles in a manner that is clear and draws the same conclusions as analysis using more complex tax modelling. (Complex models also have their place.)

I would like to highlight the oversimplification I think is most problematic from a tax perspective.  The model assumes all the returns (8% per year) are in the form of capital appreciation. A better “8%” model would be to assume a 2% dividend and 6% capital appreciation.  Dividends, even though receiving qualified-dividend tax treatment, would bring down the after-tax returns, especially on the left side of the curve.  I will likely remedy that oversimplification in a future blog post.

Investment Tax Management Summary

  1. Tax deferral does not hurt government revenues; it helps in the long run.
  2. Realized net short-term capital gains can crater post-tax investment returns and should be avoided.
  3. Deferral of (net) long-term capital gains can dramatically improve after-tax returns.
  4. Tax deferral strategies require serious investment discipline to achieve maximum benefit.
  5. Even simple tax modelling is far better than no tax modelling at all.  Simple tax models can be useful and powerful. Nonetheless, investment tax models can and should be improved over time.

Jun 03

Managing Portfolio Taxes and Tax Risk

Portfolio Tax-Management is Key to Attracting and Keeping High-Net-Worth Investors

With the right tools, the most reliable way to generate portfolio alpha is via tax alpha. One way to look at traditional alpha is as a zero-sum game (after subtracting risk-adjusted market returns). Very smart people with substantial financial backing are searching for traditional alpha every day. The competition for alpha is intense, and, tautologically, when someone generates positive alpha, others elsewhere in the market are offsetting it with negative alpha.

Producing tax alpha, however, can be viewed as a win/win cooperative “game” involving the investment advisor representative (IAR) and the client. Among the IAR’s many tasks in creating positive tax alpha is asking for regular (quarterly or twice-per-year) updates on the client’s tax situation and expected tax bracket. The client’s responsibility is to monitor and know their tax situation and to provide timely and accurate updates when their expected tax situation changes significantly.

High-net-worth investors tend to be both more tax savvy and more tax aware than other investors. Whereas $5M in liquid assets used to be the threshold where investors expect and demand portfolio tax management, investors at the $2M and even $1M level are increasingly demanding portfolio tax management.

Firms that fail to offer effective portfolio tax management services are at risk of losing their most revenue-producing clients. Conversely, firms that offer and effectively communicate quality investment tax-management services stand to win highly sought-after high-net-worth advisory clients.

Developing a Tax-Aware Portfolio Mindset

Effective investment portfolio tax management begins with awareness of the tax implications of investment actions. As a rule of thumb, the two most important tax considerations are:

  1. Awareness of whether an asset sale will: A) generate a large capital gain or loss, and B) result in a short-term (ST) or long-term (LT) gain or loss.
  2. Awareness of the client’s: A) tax situation, and B) tax preferences.

Once a client has chosen to increase their own investment tax proficiency, we advise them to begin slowly and focus primarily on tax awareness for the first month or two. There are many reasons we favor this approach. First, there is a tendency to initially swing to a tax-first mentality. We believe it is best to maintain an investment-first mentality, with tax awareness and tax management playing an important secondary role in most cases. Developing a tax-aware mentality requires much more mental energy, especially in the beginning. Much like an amateur juggler adept at juggling three balls, adding a fourth is very difficult at first, and dropping one (or more) balls becomes increasingly likely. If overwhelmed, it is best to drop the “tax ball” first, and pick it up later when circumstances permit.

Our view is that it takes 2-3 months of effort to reach a point where portfolio tax awareness becomes largely second nature. It is only after tax consciousness becomes second nature that the foundation for investment tax mastery is ready. Up until this point, an advisor can learn and become modestly proficient with a small number of basic tax techniques. Developing portfolio tax mastery is a long-term art that requires attention, knowledge, creativity, and perseverance. The benefits to both the client and the practitioner are well worth the effort.

Portfolio Tax-Management Corporate Training

We at Sigma1 are working diligently to develop a set of portfolio tax-management training modules ranging from basic to advanced. We have begun offering limited free “beta” courses in basic and intermediate-level investment tax-management techniques to investment professionals in the Northern Colorado region. All we ask in exchange is for participants to agree to fill out a brief survey on the day of our presentation and a follow-up survey in 6-8 weeks to see if and how they are using the training in their investment management process. If you or your firm is interested in participating in this free beta training, feel free to contact us.

Dec 20

How to Write a Mean-Variance Optimizer (Part III)… In R

Parts 1 and 2 left a trail of breadcrumbs to follow.  Now I provide a full-color map, a GPS, and a local guide.  In other words, the complete solution in the R statistical language.

Recall that the fast way to compute portfolio variance is:

σp2 = wTVw

The companion equation is rp = wTrtn, where rtn is a column vector of expected returns (or historic returns) for each asset.  The first goal is to find w0 and wn. w0 minimizes variance regardless of return, while wn maximizes return regardless of variance.  The goal is then to create the set of vectors {w0,w1,…wn} that minimize variance for each given level of expected return.
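
Under the usual textbook assumptions (V invertible, short sales allowed for w0, long-only and fully invested for wn), both endpoints are one-liners in R. A minimal sketch:

    w0 <- function(V) {                 # minimum-variance weights
      w <- solve(V, rep(1, ncol(V)))    # V^{-1} * 1
      w / sum(w)                        # normalize: weights sum to 1
    }
    wn <- function(rtn)                 # max-return corner portfolio
      as.numeric(seq_along(rtn) == which.max(rtn))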

I just discovered that someone already wrote an excellent post that shows exactly how to write an MVO optimizer completely in R. Very convenient!  Enjoy…

Sep 19

The Equation Everyone in Finance Should Know (MV Optimization: How To, Part 2)

As the previous post shows, it all starts with…

σp2 = wTVw

In order to get close to bare-metal access to your compute hardware, use C.  In order to utilize powerful, tested, convex optimization methods, use CVXGEN.  You can start with this CVXGEN code, but you’ll have to retool it…

  • Discard the (m,m) matrix for an (n,n) matrix. I prefer to still call it V, but Sigma is fine too.  Just note that there is a major difference between Sigma (the variance-covariance matrix) and sigma (the individual asset-return variances; the diagonal of Sigma).
  • Go meta for the efficient frontier (EF).  We’re going to iteratively generate/call CVXGEN with multiple scripts. The differences will be w.r.t. E(Rp).
  • Computing max E(Rp) is easy, given α.  [I’d strongly recommend renaming this to something like expect_ret, comprised of (r1, r2, … rn). Alpha has too much overloaded meaning in finance.]
  • [Rmax] The first computation is simple.  Maximize E(Rp) s.t. constraints.  This is trivial and can be done w/o CVXGEN.
  • [Rmin] The first CVXGEN call is the simplest.  Minimize σp2 s.t. constraints, but ignoring E(Rp).
  • Using Rmin and Rmax, iteratively call CVXGEN q times (i=1 to q) using the additional constraint s.t. Rp_i = Rmin + (i/(q+1))*(Rmax-Rmin). This will produce q+2 portfolios on the EF [including Rmin and Rmax].  [Think of each step (1/(q+1))*(Rmax-Rmin) as a quantization of intermediate returns.] A sketch of this sweep appears after this list.
  • Present, as you see fit, the following data…
    • (w0, w1, …wq+1)
    • [ E(Rp_0), …E(Rp_(q+1)) ]
    • [ σ(Rp_0), …σ(Rp_(q+1)) ]
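
Here is a hypothetical R stand-in for that CVXGEN sweep, using the quadprog package instead (long-only, fully invested; assumes V is positive definite and expect_ret is given):

    library(quadprog)
    frontier <- function(V, expect_ret, q = 20) {
      n <- length(expect_ret)
      w_min <- solve.QP(V, rep(0, n), cbind(rep(1, n), diag(n)),
                        c(1, rep(0, n)), meq = 1)$solution  # Rmin portfolio
      rmin <- sum(w_min * expect_ret)
      rmax <- max(expect_ret)            # Rmax: all-in on top-return asset
      sapply(rmin + (1:q) / (q + 1) * (rmax - rmin), function(r)
        solve.QP(V, rep(0, n),
                 cbind(rep(1, n), expect_ret, diag(n)),  # adds E(Rp) = r
                 c(1, r, rep(0, n)), meq = 2)$solution)
    }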

My point is that — in two short blog posts — I’ve hopefully shown how easily accessible advanced MVO portfolio optimization has become.  In essence, you can do it for “free”… and stop paying for simple MVO optimization… so long as you “roll your own” in house.

I do this for the following reasons:

  • To spread MVO to the “masses”
  • To highlight that if anyone with a master’s in finance and computer science can do MVO for free, firms should consider what their quantitative portfolio-optimization differentiation (AKA portfolio risk-management differentiation) really is, if any
  • To emphasize that this and the previous blog post will not greatly help with semi-variance portfolio optimization

I ask you to consider that you, as one of the few that read this blog, have a potential advantage.  You know who to contact for advanced, relatively-inexpensive SVO software. Will you use that advantage?

Sep 05

How to Write a Mean-Variance Optimizer: Part 1

The Equation Everyone in Finance Should Know, but Many Probably Don’t!

Here it is:

σp2 = wTVw

σp2 = w12σ11 + 2w1w2σ12 + w22σ22   (the two-asset case)

… With thanks to which makes it really easy to write equations for the web.

This simple matrix equation is extremely powerful.  This is really two equations.  The first is all you really need.  The second is merely there for illustrative purposes.

This formula says how the variance of a portfolio can be computed from the position weights wT = [w1 w2 … wn] and the covariance matrix V.

  • σii ≡ σi2 = Var(Ri)
  • σij ≡ Cov(Ri, Rj) for i ≠ j

The second equation is actually rather limiting.  It represents the smallest possible example to clarify the first equation — a two-asset portfolio.  Once you understand it for 2 assets, it is relatively easy to extrapolate to 3-asset portfolios, 4-asset portfolios, and before you know it, n-asset portfolios.

Now I show the truly powerful “naked” general-form equation:

σp2 = wTVw

This is really all you need to know!  It works for 50-asset portfolios. For 100 assets. For 1000.  You get the point. It works in general. And it is exact. It is the E = mc2 of Modern Portfolio Theory (MPT).  It is at least about 55 years old (2014 – 1959), while E = mc2 is about 109 years old (2014 – 1905).  Harry Markowitz, the Father of (M)PT, simply called it “Portfolio Theory” because:

There’s nothing modern about it.


Yes, I’m calling Markowitz the Einstein of Portfolio Theory AND of finance!  (Now there are several other “post”-Einstein geniuses… Bohr, Heisenberg, Feynman… just as there are Sharpe, Scholes, Black, Merton, Fama, French, Shiller, [Graham?, Buffett?]…)   I’m saying that a physicist who doesn’t know E = mc2 is not much of a physicist. You can read between the lines for what I’m saying about those that dabble in portfolio theory… with other people’s money… without really knowing (or using) the financial analog.

Why Markowitz is Still “The Einstein” of Finance (Even if He was “Wrong”)

Markowitz said that “downside semi-variance” would be better.  Sharpe said “In light of the formidable computational problems…[he] bases his analysis on the variance and standard deviation.”

Today we have no such excuse.  We have more than sufficient computational power on our laptops to optimize for downside semi-variance, σd. There is no such tidy, efficient equation for downside semi-variance.  (At least not one that anyone can agree on… and none that is exact in any sense of any reasonable mathematical definition of the word ‘exact’.)

Fama and French improve upon Markowitz’s (M)PT [I say that if M is used in MPT, it should mean “Markowitz,” not “modern”, but I digress.] Shiller, however, decimates it.  As does Buffett, in his own applied way.  I use the word decimate in its strict sense… killing one in ten.  (M)PT is not dead; it is still useful.  Diversification still works; rational investors are still risk-averse; and certain low-beta investments (bonds, gold, commodities…) are still poor very-long-term (20+ year) investments in isolation and relative to stocks, though they can still serve a role as Markowitz Portfolio Theory suggests.

Wanna Build your Own Optimizer (for Mean-Return Variance)?

This blog post tells you most of the important bits.  I don’t really need to write part 2, do I?   Not if you can answer these relatively easy questions…

  • What is the matrix expression for computing E(Rp) based on w?
  • What simple constraint is w subject to?
  • How does the general σp2 equation relate to the efficient frontier?
  • How might you adapt the general equation to efficiently compute the effects of a Δw event where wi increases and wj decreases?  (Hint: “cache” the wx terms that don’t change.)
  • What other constraints may be imposed on w or subsets (asset categories within w)?  How will you efficiently deal with these constraints?
  • Is short-selling allowed?  What if it is?
  • OK… this one’s a bit tricky:  How can convex optimization methods be applied?

If you can answer these questions, a Part 2 really isn’t necessary is it?
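
For the first two bullets, a minimal R sketch (assuming a weight vector w, an expected-return vector rtn, and a covariance matrix V):

    exp_ret   <- function(w, rtn) sum(w * rtn)           # E(Rp) = wT rtn
    port_var  <- function(w, V) drop(t(w) %*% V %*% w)   # σp2 = wT V w
    on_budget <- function(w) abs(sum(w) - 1) < 1e-12     # weights sum to 1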

Jul 16

Binary Options and Test Taking

Most of the important financial industry tests (Series 6, 7, 24, 26, CFA I, CFA II, CFA III, etc) only have two possible binary outcomes: PASS or FAIL. Failure is a waste of time and money. Over-studying, however, can also waste time. (Studying for a PASS/FAIL test is investing in a binary “real option.”)

All of the material is worth knowing for someone, but some information is simply not relevant to everyone. For example, investment advisor reps don’t necessarily need to know all of the rules for broker-dealer agents (and vice versa). Knowing the material that is relevant to you has value beyond simply passing a test.

That said, the goal is to PASS. And you’ve got a million other things to do. So what’s a quant to do? Get quantitative of course!

Quantitative Test Prep

Step 1: Find representative sample tests. All else hinges on this. Obtaining sample tests from multiple independent sources may help.

Step 2: Determine your average score on practice tests.

Step 3: Determine the standard deviation of your scores.

Step 4: Calculate the probability of achieving a passing score given your mean score and standard deviation.

Step 5: Decide the risk/reward and whether more study provides sufficient ROI.

Assuming normal distributions, I use the 68/95/99.7 rule. Regardless of the standard deviation, if your practice average is the same as the minimum passing score, your chance of success is only 50%. Naturally, if your mean practice score is 1 sigma above the threshold for passing, your chance on the real test is 84% [1-(1-0.68)/2]. If your mean score is plus 2 sigma, your chance of passing is almost 98% [1-(1-0.95)/2].
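
Step 4 is a one-liner in R, assuming normally distributed scores (the numbers below are made up for illustration):

    pass_prob <- function(mu, sigma, cutoff) 1 - pnorm(cutoff, mu, sigma)
    pass_prob(78, 4, 70)   # mean 2 sigma above a 70% cutoff: ~0.977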

This little exercise shows two possible ways to improve your expected pass rate. The obvious way is getting better with the material. The less obvious way is reducing your standard deviation. Can this second way be achieved? If so how?

Keeping in mind the four-answer multiple-choice format, the mean deviation is:

MD = 2*p*(1-p)

where p is the probability of answering a particular question correctly. Per-question deviation (PQD) is highest at 0.5 when p=0.5, and lowest at 0 when p=1. For random guessing (p=0.25), PQD is 0.375.

Increasing your p from a random-guess 0.25 to 0.5 for a given question category will increase your expected score, but will also increase sigma. Taking the first derivative of MD with respect to p gives: 2-4p. Because the range of p is [0,1] (arguably [0.25,1)), the incremental decrease in MD is greatest near p=1.

Now, the test candidate must decide what the d/dt(pqc(t)) is for each question category (where t is time spent studying that category).  Studying the categories (qc) with the highest d/dt(pqc(t)) will most efficiently improve the expected score. Further, studying the categories with the maximum d/dt(pqc(t))*(4p-2) will reduce PQD and hence reduce test standard deviation.

Deeper Analysis of the Meta Problem

Naturally, this analysis only scratches the tactical surface of the “binary-test optimization meta problem.” [The test itself is the problem, the tactics are part of the meta-problem of optimizing generalized multiple-choice test prep]. Improving from p=0.8 to p=0.9 is clearly better than improving from p=0.4 to p=0.5 in terms of PQD reduction, and equal in terms of increase of expected score.

Also of relevance is the (modified) downside semi-deviation analog of PQD, which I will call PQDd. I’ll spare you the derivation; it turns out that:

PQDd = p*sqrt(2*(1-p))

This value peaks at p=2/3 with a value of 0.5443. PQDd slowly ascends as p goes from 0.25 up to 0.667, then falls pretty rapidly for values of p>0.8.

We care about the random variable S which represents the actual test score. S is a function of the mean expected score μ and standard deviation σ… in a normal distribution. What we really care about is Pr(S>=Threshold), the probability that our score meets or exceeds the minimum passing score.

PQD = PQDd only when p = 0, 0.5, or 1.  For p in (0,0.5), PQDd<PQD, and for p in (0.5,1), PQDd>PQD. Even though it seems a bit strange for a discrete binary distribution, p in (0,0.5) has positive skewness and p in (0.5,1) negative skewness.

In the “final” analysis the chance of passing, Pr(S>=Threshold), depends on the score mean, μ, and the downside deviation, σd.  In turn, σd depends on PQD and PQDd.

Summary and Conclusions

Theoretically, one’s best course of action is to 1) increase the average expected score and 2) reduce σd. If practical, the best and most efficient way to achieve both objectives simultaneously is to improve areas that are in the 60-75% range (p=0.6 to 0.75) to the mid to high 90% range (p>=0.95).  This may seem counter-intuitive, but the math is solid.

Caveats: This analysis is mostly an exercise in showing the value of statistics, variance, and downside variance in an area outside of finance.  It shows that there is more than one way to approach a goal; in this case, passing a standardized test.


Jun 29

Clover Patterns Show How Portfolios Manage Risk

[Figure: Illustration of classic covariance.]

The red and green “clover” pattern illustrates how traditional risk can be modeled.  The red “leaves” are triggered when both the portfolio and the “other asset” move together in concert.  The green leaves are triggered when the portfolio and asset move in opposite directions.

Each event represents a moment in time, say the closing price for each asset (the portfolio or the new asset).  A common time period is 3-years of total-return data [37 months of price and dividend data reduced to 36 monthly returns.]

Plain English

When a portfolio manager considers adding a new asset to an existing portfolio, she may wish to see how that asset’s returns would have interacted with the rest of the portfolio.  Would this new asset have made the portfolio more or less volatile?  Risk can be measured by looking at the time-series return data.  Each time the asset and the portfolio are in the red, risk is added. Each time they are in the green, risk is subtracted.  When all the reds and greens are summed up there is a “mathy” term for this sum: covariance.  “Variance” as in change, and “co” as in together. Covariance means the degree to which two items move together.

If there are mostly red events, the two assets move together most of the time.  Another way of saying this is that the assets are highly correlated. Again, that is “co” as in together and “related” as in relationship between their movements. If, however, the portfolio and asset move in opposite directions most of the time, the green areas, then the covariance is lower, and can even be negative.

Covariance Details

It is not only whether the two assets move together or apart; it is also the degree to which they move.  Larger movements in the red region result in larger covariance than smaller movements.  Similarly, larger movements in the green region reduce covariance.  In fact it is the product of movements that affects how much the sum of covariance is moved up and down.  Notice how the clover leaves pull in toward the center, (0,0), if either the asset or the portfolio doesn’t move at all.  This is because the product of zero times anything must be zero.

Getting Technical: The clover-leaf pattern relates to the angle between each pair of asset movements.  It does not show the effect of the magnitude of their positions.

If the incremental covariance of the asset to the portfolio is less than the variance of the portfolio, a portfolio that adds the asset would have had lower overall variance (historically).  Since there is a tendency (but no guarantee!) for assets’ correlations to remain somewhat similar over time, the portfolio manager might use the covariance analysis to decide whether or not to add the new asset to the portfolio.

Semi-Variance: Another Way to Measure Risk


[Figure: Semi-variance visualization.]

After staring at the covariance visualization, something may strike you as odd — the fact that when the portfolio and the asset move UP together, this increases the variance. Since variance is used as a measure of risk, that is like calling positive returns a risk.

Most ordinary investors would not consider the two assets going up together to be a bad thing.  In general they would consider this to be a good thing.

So why do many (most?) risk measures use a risk model that resembles the red and green cloverleaf?  Two reasons: 1) it makes the math easier; 2) history and inertia. Many (most?) textbooks today still define risk in terms of variance, or its close cousin standard deviation.

There is an alternative risk measure: semi-variance. The multi-colored cloverleaf, which I will call the yellow-grey cloverleaf, is a visualization of how semi-variance is computed. The grey leaf indicates that events occurring in that quadrant are ignored (multiplied by zero).  Up to this point, most academics agree on how to measure semi-variance.

Variants on the Semi-Variance Theme

However, differences exist on how to weight the other three clover leaves.  It is well known that for measuring covariance each leaf is weighted equally, with a weight of 1. When it comes to quantifying semi-covariance, methods and opinions differ. Some favor a (0, 0.5, 0.5, 1) weighting scheme, where the order gives the weights for quadrants 1, 2, 3, and 4 respectively. [As a decoder ring: Q1 = grey leaf, Q2 = green leaf, Q3 = red leaf, Q4 = yellow leaf.]

Personally, I favor weights (0, 3, 2, -1) for the asset versus portfolio semi-covariance calculation.  For asset vs asset semi-covariance matrices, I favor a (0, 1, 2, 1) weighting.  Notice that in both cases my weighting scheme results in an average weight per quadrant of 1.0, just like for regular covariance calculations.
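
To make the weighting idea concrete, here is a hypothetical R sketch of a quadrant-weighted covariance. The quadrant orientation (Q1 both-up through Q4, per the decoder ring above) is my assumption about the diagram:

    quad_cov <- function(x, y, wts = c(0, 3, 2, -1)) {
      dx <- x - mean(x); dy <- y - mean(y)
      quad <- ifelse(dy >= 0, ifelse(dx >= 0, 1, 2),   # Q1 grey, Q2 green
                              ifelse(dx >= 0, 4, 3))   # Q4 yellow, Q3 red
      sum(wts[quad] * dx * dy) / (length(x) - 1)
    }

With wts = c(1, 1, 1, 1) this reduces to ordinary sample covariance.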


Financial Industry Moving toward Semi-Variance (Gradually)

Semi-variance more closely resembles how ordinary investors view risk. Moreover it also mirrors a concept economists call “utility.” In general, losing $10,000 is more painful than gaining $10,000 is pleasurable. Additionally, losing $10,000 is more likely to adversely affect a person’s lifestyle than gaining $10,000 is to help improve it.  This is the concept of utility in a nutshell: losses and gains have an asymmetrical impact on investors. Losses have a bigger impact than gains of the same size.

Semi-variance optimization software is generally much more expensive than variance-based (MVO mean-variance optimization) software.  This creates an environment where larger investment companies are better equipped to afford and use semi-variance optimization for their investment portfolios.  This too is gradually changing as more competition enters the semi-variance optimization space.  My guesstimate is that currently about 20% of professionally-managed U.S. portfolios (as measured by total assets under management, AUM) are using some form of semi-variance in their risk management process.  I predict that percentage will exceed 50% by 2018.

