# IBKR Quant Blog


### Getting Started with Neural Networks for Algorithmic Trading - Part II

In part I, Kris showed us how a perceptron learns.
In this post, Kris shows us how to implement a perceptron from scratch.

We’ll code our own perceptron learning algorithm from scratch using R. We’ll train it to classify a subset of the iris data set.[1]

The full iris data set contains three species, but a perceptron performs binary classification (i.e., distinguishing between two possible outcomes). For this exercise we therefore remove all observations of one species (virginica) and train a perceptron to distinguish between the remaining two. We also convert the species classification into a binary variable: 1 for the first species, and -1 for the other. The data set contains four measurement variables in addition to the species classification: petal length, petal width, sepal length and sepal width. For the purposes of illustration, we’ll train our perceptron using only petal length and petal width and drop the other two measurements. These data transformations result in the following plot of the remaining two species in the two-dimensional feature space of petal length and petal width:

The plot suggests that petal length and petal width are strong predictors of species – at least in our training data set. Can a perceptron learn to tell them apart?

Training our perceptron is simply a matter of initializing the weights (we initialize them to zero) and implementing the perceptron learning rule, which updates the weights based on the error of each observation under the current weights. We do that in a `for()` loop that iterates over the observations, making a prediction from each observation’s petal length and petal width, calculating the error of that prediction and updating the weights accordingly.

In this example, we perform five sweeps through the entire data set, that is, we train the perceptron for five epochs. At the end of each epoch, we calculate the total number of misclassified training observations, which we hope will decrease as training progresses. Here’s the code:

Visit Robot Wealth to download the code
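The downloadable script is written in R; as a rough illustration of the same training loop, here is a minimal Python sketch. The petal measurements below are a small hand-picked sample (not the full iris data), and the variable names are my own.

```python
# A minimal sketch of the perceptron training loop described above.
# (petal_length, petal_width) pairs with labels: 1 = setosa, -1 = versicolor.
data = [
    ((1.4, 0.2), 1), ((1.3, 0.2), 1), ((1.5, 0.3), 1), ((1.7, 0.4), 1),
    ((4.7, 1.4), -1), ((4.5, 1.5), -1), ((3.9, 1.1), -1), ((4.9, 1.5), -1),
]

w = [0.0, 0.0]  # weights, initialized to zero
b = 0.0         # bias term

def predict(x):
    """Step activation on the weighted sum: 1 if at or above zero, else -1."""
    s = w[0] * x[0] + w[1] * x[1] + b
    return 1 if s >= 0 else -1

errors_per_epoch = []
for epoch in range(5):                 # five sweeps through the data (epochs)
    errors = 0
    for x, target in data:
        if predict(x) != target:       # perceptron learning rule: move the
            w[0] += target * x[0]      # weights toward the misclassified
            w[1] += target * x[1]      # observation (add inputs if target is
            b += target                # 1, subtract them if target is -1)
            errors += 1
    errors_per_epoch.append(errors)

print(errors_per_epoch)  # -> [1, 3, 0, 0, 0] for this sample
```

On this tiny sample the error count drops to zero after a couple of epochs, mirroring the behaviour described below for the full training set.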

Here’s the plot of the error rate:

We can see that it took two epochs to train the perceptron to correctly classify the entire data set. After the first epoch, the weights hadn’t been sufficiently updated; in fact, after epoch 1, the perceptron predicted the same class for every observation, misclassifying 50 of the 100 observations (there are 50 observations of each species in the data set). After two epochs, however, the perceptron had learned weights that correctly classified the entire data set.

Another, perhaps more intuitive, way to view the weights that the perceptron learns is in terms of its decision boundary. In geometric terms, for the two-dimensional feature space in this example, the decision boundary is the straight line separating the perceptron’s predictions: on one side of the line, the perceptron always predicts -1, and on the other, it always predicts 1.[2]

We can derive the decision boundary from the perceptron’s activation function:

The decision boundary is simply the line that defines the location of the step in the activation function. That step occurs at z=0, so our decision boundary is given by

w1x1 + w2x2 + b = 0

Equivalently

x2 = −(w1/w2)x1 − b/w2

which defines a straight line in (x1, x2) feature space.
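As a quick sketch of that rearrangement in code (the weight values below are made up for illustration, not the ones our perceptron learned):

```python
# Decision boundary x2 = -(w1/w2)*x1 - b/w2, derived from the step in the
# activation at w1*x1 + w2*x2 + b = 0. Hypothetical learned parameters:
w1, w2, b = -0.5, -0.7, 2.0

def boundary_x2(x1):
    """Petal width on the decision boundary for a given petal length."""
    return -(w1 / w2) * x1 - b / w2

# Any point exactly on the line yields a zero weighted sum:
x1 = 3.0
x2 = boundary_x2(x1)
print(w1 * x1 + w2 * x2 + b)  # effectively zero (up to floating point)
```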

In our iris example, the perceptron learned the following decision boundary:

Here’s the complete code for training this perceptron and producing the plots shown above:

Congratulations! You just built and trained your first neural network.

Notes:

1. The iris data is a standard machine learning data set and consists of 150 observations of specimens of iris flowers. Each observation consists of four measurements (sepal length, sepal width, petal length and petal width) and the species of iris to which each observed flower belongs. Three different species are recorded in the data set (setosa, versicolor, and virginica). The problem of classifying the different species based on the measurements is not a particularly difficult task, and you’ll see this data set pop up time and again in demonstrations of machine learning.
2. In three-dimensional feature space, we would have a decision plane, and likewise in higher dimensions the corresponding decision boundary is in N−1 dimensions, where N is the number of features or predictors.

In the next post, Kris will ask the perceptron to learn a slightly more difficult problem.

Visit Robot Wealth website to Download the code and data used in this post.

Learn more about Robot Wealth here: https://robotwealth.com/

This article is from Robot Wealth and is being posted with Robot Wealth’s permission. The views expressed in this article are solely those of the author and/or Robot Wealth and IB is not endorsing or recommending any investment or trading discussed in the article. This material is for information only and is not and should not be construed as an offer to sell or the solicitation of an offer to buy any security. To the extent that this material discusses general market activity, industry or sector trends or other broad-based economic or political conditions, it should not be construed as research or investment advice. To the extent that it includes references to specific securities, commodities, currencies, or other instruments, those references do not constitute a recommendation by IB to buy, sell or hold such security. This material does not and is not intended to take into account the particular financial conditions, investment objectives or requirements of individual customers. Before acting on this material, you should consider whether it is suitable for your particular circumstances and, as necessary, seek professional advice.


### Getting Started with Neural Networks for Algorithmic Trading - Part I

If you’re interested in using artificial neural networks (ANNs) for algorithmic trading, but don’t know where to start, then this article is for you. Normally if you want to learn about neural networks, you need to be reasonably well versed in matrix and vector operations – the world of linear algebra. This article is different. I’ve attempted to provide a starting point that doesn’t involve any linear algebra and have deliberately left out all references to vectors and matrices. If you’re not strong on linear algebra, but are curious about neural networks, then I think you’ll enjoy this introduction. In addition, if you decide to take your study of neural networks further, when you do inevitably start using linear algebra, it will probably make a lot more sense as you’ll have something of a head start.

The best place to start learning about neural networks is the perceptron. The perceptron is the simplest possible artificial neural network, consisting of just a single neuron and capable of learning a certain class of binary classification problems.[1] Perceptrons are the perfect introduction to ANNs and if you can understand how they work, the leap to more complex networks and their attendant issues will not be nearly as far. So we will explore their history, what they do, how they learn and where they fail. We’ll build our own perceptron from scratch and train it to perform different classification tasks which will provide insight into where they can perform well, and where they are hopelessly outgunned. Lastly, we’ll explore one way we might apply a perceptron in a trading system.[2]

A Brief History of the Perceptron

The perceptron has a long history, dating back to at least the mid-1950s. Following its invention, the New York Times ran an article claiming that the perceptron was the basis of an artificial intelligence (AI) that would be able to walk, talk, see and even demonstrate consciousness. Soon after, this was proven to be hyperbole on a staggering scale, when the perceptron was shown to be wholly incapable of classifying certain types of problems. The disillusionment that followed essentially led to the first AI winter, and since then we have seen a repeating pattern of hyperbole followed by disappointment in relation to artificial intelligence.[3]

Still, the perceptron remains a useful tool for some classification problems and is the perfect place to start if you’re interested in learning more about neural networks. Before we demonstrate it in a trading application, let’s find out a little more about it.

Artificial Neural Networks: Modelling Nature

Algorithms modelled on biology are a fascinating area of computer science. Undoubtedly you’ve heard of the genetic algorithm, which is a powerful optimization tool modelled on evolutionary processes. Nature has been used as a model for other optimization algorithms, as well as the basis for various design innovations. In this same vein, ANNs attempt to learn relationships and patterns using a somewhat loose model of neurons in the brain. The perceptron is a model of a single neuron.[4]

In an ANN, neurons receive a number of inputs, weight each of those inputs, sum the weighted inputs, and then transform that sum using a special function called an activation function, of which there are many possible types. The output of that activation function is then either used as the prediction (in a single neuron model) or is combined with the outputs of other neurons for further use in more complex models, which we’ll get to in another article.

Here’s a sketch of that process in an ANN consisting of a single neuron:

Here, x1, x2, etc., are the inputs. b is called the bias term; think of it like the intercept term in a linear model y = mx + b. w1, w2, etc., are the weights applied to each input. The neuron first sums the weighted inputs (and the bias term), represented by S in the sketch above. Then S is passed to the activation function, which simply transforms S in some way. The output of the activation function, z, is then the output of the neuron.

The idea behind ANNs is that by selecting good values for the weight parameters (and the bias), the ANN can model the relationships between the inputs and some target. In the sketch above, z is the ANN’s prediction of the target given the input variables.

In the sketch, we have a single neuron with four weights and a bias parameter to learn. It isn’t uncommon for modern neural networks to consist of hundreds of neurons across multiple layers, in which the output of each neuron in one layer feeds into every neuron in the next layer. Such a fully connected network architecture can easily result in many thousands of weight parameters. This enables ANNs to approximate any arbitrary function, linear or nonlinear.

The perceptron consists of just a single neuron, like in our sketch above. This greatly simplifies the problem of learning the best weights, but it also has implications for the class of problems that a perceptron can solve.

What’s an Activation Function?

The purpose of the activation function is to take the input signal (the weighted sum of the inputs and the bias) and turn it into an output signal. There are many different activation functions, each converting the input signal in a slightly different way depending on the purpose of the neuron.

Recall that the perceptron is a binary classifier. That is, it predicts either one or zero, on or off, up or down, etc. It follows, then, that our activation function needs to convert the input signal (which can be any real-valued number) into either a one or a zero[5] corresponding to the predicted class.

In biological terms, think of this activation function as firing (activating) the neuron (telling it to pass the signal on to the next neuron) when it returns 1, and doing nothing when it returns 0.

What sort of function accomplishes this? It’s called a step function, and its mathematical expression looks like this:

f(S) = 1 if S ≥ 0, and 0 otherwise

And when plotted, it looks like this:

This function takes any weighted sum of the inputs (S) and converts it into a binary output (either 1 or 0). The trick to making this useful is finding (learning) a set of weights, w, that lead to good predictions using this activation function.
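As a quick sketch of that idea (the function names and numbers here are my own, for illustration):

```python
# Step activation: 1 if the weighted sum S is at or above the threshold
# (here 0), else 0.
def step(S):
    return 1 if S >= 0 else 0

# A single neuron: weight the inputs, add the bias, apply the activation.
def neuron(x, w, b):
    S = sum(wi * xi for wi, xi in zip(w, x)) + b
    return step(S)

print(neuron([1.0, 2.0], [0.5, -0.25], 0.1))  # S = 0.5 - 0.5 + 0.1 = 0.1 -> 1
```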

How Does a Perceptron Learn?

We already know that the inputs to a neuron get multiplied by some weight value particular to each individual input. The sum of these weighted inputs is then transformed into an output via an activation function. In order to find the best values for our weights, we start by assigning them random values and then start feeding observations from our training data to the perceptron, one by one. Each output of the perceptron is compared with the actual target value for that observation, and, if the prediction was incorrect, the weights are adjusted so that the prediction would have been closer to the actual target. This is repeated until the weights converge.

In perceptron learning, the weight update function is simple: when a target is misclassified, we simply take the sign of the error and then add or subtract the inputs that led to the misclassification to the existing weights.

If the target was -1 and we predicted 1, the error is -1 - 1 = -2. We would then subtract each input value from the current weights (i.e., wi = wi - xi). If the target was 1 and we predicted -1, the error is 1 - (-1) = 2, so we add the inputs to the current weights (i.e., wi = wi + xi).[6]

This has the effect of moving the classifier’s decision boundary (which we will see below) in the direction that would have helped it classify the last observation correctly. In this way, weights are gradually updated until they converge. Sometimes (in fact, often) we’ll need to iterate through each of our training observations more than once in order to get the weights to converge. Each sweep through the training data is called an epoch.
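A minimal sketch of that update in Python (using the 1/-1 target coding described above; the weights and inputs are arbitrary illustrative numbers):

```python
# When an observation x with target y in {1, -1} is misclassified, each
# weight moves by y * x_i: add the inputs if y = 1, subtract them if y = -1.
def update(w, x, y):
    return [wi + y * xi for wi, xi in zip(w, x)]

w = [0.2, -0.4, 0.1]
x = [1.0, 2.0, 3.0]
print(update(w, x, 1))   # misclassified a 1 as -1: add the inputs
print(update(w, x, -1))  # misclassified a -1 as 1: subtract the inputs
```

Applying this repeatedly, epoch after epoch, is all there is to perceptron training.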

Notes:

1. Perceptrons can solve linearly separable binary classification problems – more on this below.
2. While perceptrons are the best introduction to neural nets for the uninitiated, personally I have my doubts about using them in algorithmic trading systems – it’s difficult to imagine that the classification tasks for which they are suited have relevance to the markets. However, in the simple example below, my perceptron trading strategy returned a surprisingly good walk-forward result. Maybe they are worthy of a closer look after all.
3. As an aside, there are a lot of reasons to think that this time might be different (indeed, that’s probably not even in question any more), including the exponential growth in both compute resources and data availability, as well as advances in computer science that enable efficient training of large neural networks.
4. Note the words loose model of the brain. I recently undertook some study in computational neuroscience, and one of the surprising take-aways was how little we know about how the brain actually works, not to mention the incredible research currently being undertaken to remedy that.
5. or a 1 and a -1, or any other binary output
6. That means that if the set of weights (w1,w2,w3) misclassified the observation (x1,x2,x3,y=1) as y=−1, we would update the weights as follows: (w1+x1,w2+x2,w3+x3)

Visit Robot Wealth website to Download the code and data used in this post and check out the entire algo trading curriculum here.

Learn more about Robot Wealth here: https://robotwealth.com/

This article is from Robot Wealth and is being posted with Robot Wealth’s permission. The views expressed in this article are solely those of the author and/or Robot Wealth and IB is not endorsing or recommending any investment or trading discussed in the article. This material is for information only and is not and should not be construed as an offer to sell or the solicitation of an offer to buy any security. To the extent that this material discusses general market activity, industry or sector trends or other broad-based economic or political conditions, it should not be construed as research or investment advice. To the extent that it includes references to specific securities, commodities, currencies, or other instruments, those references do not constitute a recommendation by IB to buy, sell or hold such security. This material does not and is not intended to take into account the particular financial conditions, investment objectives or requirements of individual customers. Before acting on this material, you should consider whether it is suitable for your particular circumstances and, as necessary, seek professional advice.


### Quant Traders: Real Rocket Scientists? Magic? Rulers Of Wall Street

Known as “the Rocket Scientists of Wall Street” (a label first applied by Investopedia), quant traders and analysts – “Quants” – may be (understandably) intimidating to many. However, as their returns increase, more people are becoming aware of their power. Quants are a specialized group that can no longer be ignored.

Quants apply mathematical and statistical methods to create algorithms that solve financial and risk management problems. Their tech solutions are replacing humans in the finance sector. As David Siegel, founder of Two Sigma Investments, put it in 2014:

“No human investment manager will be able to beat the computer”

Cynical as it may have sounded, recent years have proved Siegel’s thesis correct. In May 2017, Two Sigma’s assets under management (AUM) reached \$45 billion, up from \$32 billion at the start of 2016, making the firm the nation’s second-biggest hedge fund, behind Renaissance Technologies (also a quant fund).

With these returns, those outside the Quant niche are taking notice. In 2017, CNBC headlined “‘Quant:’ the buzzword hedge fund workers can no longer afford to ignore.” Forbes also published “Quants Are Eating Away at Wall Street’s Edge” this past March, and in June, Bloomberg published “Robots Are Eating Money Managers’ Lunch” on a similar theme.

While no media outlet wants to blatantly tell its audience that tech’s stronghold on the finance industry means their readers’ jobs are dead, the headlines continue to draw attention. Last May, the Wall Street Journal reached for royal imagery when it published “The Quants Run Wall Street Now,” and named quants the “New Kings of Wall Street” the same month.

Learn more about Byte Academy here.

This article is from Byte Academy and is being posted with Byte Academy’s permission. The views expressed in this article are solely those of the author and/or Byte Academy and IB is not endorsing or recommending any investment or trading discussed in the article. This material is for information only and is not and should not be construed as an offer to sell or the solicitation of an offer to buy any security. To the extent that this material discusses general market activity, industry or sector trends or other broad-based economic or political conditions, it should not be construed as research or investment advice. To the extent that it includes references to specific securities, commodities, currencies, or other instruments, those references do not constitute a recommendation by IB to buy, sell or hold such security. This material does not and is not intended to take into account the particular financial conditions, investment objectives or requirements of individual customers. Before acting on this material, you should consider whether it is suitable for your particular circumstances and, as necessary, seek professional advice.


### Cutting Through the Smoke and Mirrors of AI on Wall Street (1 of 5)

This is the first article in a five part series on AI in Finance.

Artificial intelligence made lots of headlines in 2017. Alphabet (GOOGL) developed software that defeated the defending world champion in Go, then a few months later developed a new version that defeated the prior version 100 games to none.

These developments have spurred predictions that “AI Will Invade Every Corner of Wall Street.” Prognosticators see a world in which computers completely replace human investors.

"If computing power and data generation keep growing at the current rate, then machine learning could be involved in 99 percent of investment management in 25 years," Luke Ellis, CEO of fund management company Man Group, PLC, told Bloomberg.

Despite this optimism, advances in artificial intelligence have not yet translated to superior returns. According to Wired, quant funds over the past few years have, on average, failed to outperform hedge funds (which have themselves failed to outperform the market).

Most people do not understand that AI, especially the AI used in finance today, lacks the application of deep subject matter expertise[1] to create the clean data and relationships that are the foundation of any successful investment strategy or AI. Winning games is one thing, but the real world is not a game that follows immutable rules in a strictly defined space. In the real world, humans change the rules, break the rules, or the rules don’t even exist. Current AI is nowhere near navigating real-world situations without a great deal of human intervention.

Figure 1: AI Is Overhyped and Misunderstood: Systematic Funds Underperform

Sources: Preqin/Wired

Finding the Talent(s)

One of the biggest problems with AI today is lack of interest or ability of those with adequate subject matter expertise to communicate with the programmers building the AI. The programmers don’t understand the data they’re feeding into their AI, and the analysts lack the understanding of the technology to communicate what programmers need to know to understand the source data and interpret the results.

This disconnect creates a number of well-publicized issues for the application of AI in finance and investing.

Individuals with the skills and knowledge to bridge this divide are among the scarcest and most valuable people in finance. Nine out of 10 financial services firms have already started working on AI technologies, and they’re all competing in this scarce labor pool.

As we wrote in “Big Banks Will Win the Fintech Revolution,” the largest financial firms will be the biggest beneficiaries of technological advancements due to their scale and resources. Big banks can afford to pay the most for AI talent, and they have the biggest store of financial data to aid their new programmers.

A few banks are already making serious efforts to get the necessary talent. UBS (UBS) is on an AI hiring spree, while Morgan Stanley’s (MS) programmers and financial advisors have worked together to build “Next Best Action”, a platform that uses machine learning to aid its advisors in offering personalized advice to clients.

These efforts should eventually pay off in a big way, but for now they remain in their infancy. Financial institutions still have a long way to go before they can truly implement AI in an effective way.

The Big (Data) Problem with AI

The total amount of digital data in the world doubles every two years. As the volume of data grows exponentially, most of that data lacks the structures needed for machines to analyze it. As a result, AI projects, which are supposed to reduce the need for human labor, require countless man-hours to collect, scrub, and format data inputs.

Virtova founder, Sultan Meghji, told the Financial Revolutionists that many AI startups spend at least half their funding on data cleanup and management. Everyone wants to talk about teaching computers to think, but there’s no short cut or substitute for curating the data sets that machines use to learn.

To train an AI, you need a training data set for it to learn from. Training data sets tend to be of two kinds. First, you have relatively small, accurate data sets that don’t contain enough different kinds of examples to be effective. AIs trained on these data sets become great at interpreting the training data, but they can’t handle the variety and vagaries of the real world.

Other training sets are large but not very accurate. In these cases, the AI gets to see lots of examples, sometimes with incorrect data, but it isn’t being given clear and consistent instructions on how to respond. AIs trained on these larger, inaccurate data sets often determine that there are few consistent things to be learned from the data and are capable of doing very little on their own.

For successful machine learning, training data sets need to be both accurate and widely representative. In other words, the training data needs to accurately represent what happens in as much of the real world as possible. How else can we expect the machine to learn anything consistently useful?

Herein lies the AI challenge: machines can’t learn without good training data sets, and creating good training data sets demands more time from humans with deep subject matter expertise than most realize. Most humans with the required depth of expertise are not interested in such mundane work. An alternative approach is to have many humans with limited subject matter expertise do the work, but this approach has been unsuccessful so far.

The Big (Data) Problems Are Worse in the Finance & Investing World

In theory, curating training data sets should be less challenging in finance. After all, financial data is structured in the form of financial statements in official filings with the SEC. However, any layman can quickly see that there is not as much structure (humans do not always follow the rules) as one might presume in these filings. Plus, the structure that does exist is not all that useful for AI. In fact, it can be actively harmful.

Imagine a computer that wants to compare the financials of Coca-Cola (KO) and Pepsi (PEP). As the computer reads through the financial statements, how is it supposed to know that “Equity Method Investments” for KO and “Investments in Noncontrolled Affiliates” for PEP are the same? What about “Retained Earnings” vs. “Reinvested Earnings.” Industry groups have been trying to create a standardized financial nomenclature for years to solve this very problem.

In theory, the development of XBRL would solve this problem. In practice, XBRL still contains too many errors and custom tags to allow for fully automated reading of financial filings. Even the smartest machines need extensive training from humans with deep subject-matter expertise to be able to understand financial filings.

Without this pairing of sophisticated technology and expert analysts, any AI effort in finance is doomed to failure. As the saying goes, “garbage in, garbage out.” Dumping a bunch of unstructured, unverified data into a computer and expecting it to deliver an investment strategy is like dumping the contents of your pantry into the oven and expecting it to bake a pie. It doesn’t matter how good the machine is, it can’t function without the right preparation.

The Problem of False Positives

Even if the financial data is structured and verified, it may not be useful to a machine, and AI will struggle to tell what data is useful and what is not. The large volume of available financial data means there will inevitably be a large number of apparent patterns that are actually the result of pure randomness. This phenomenon is known as “overfitting,” and it’s such a recognized issue that it gets its own lesson in Stanford’s online course on machine learning.

Overfitting is not just an AI problem. Humans have always struggled with seeing patterns where none truly exist, a byproduct of our mental heuristics. At least, though, we can be conscious of this flaw and try to counteract it. Computers, for all their sophistication, cannot claim this same level of consciousness. When programmers design machines to find patterns, that’s what those machines are going to do.

As AI gets more complex, the problem of overfitting gets worse. Anthony Ledford, the chief data scientist at one of Man Group’s quant funds, recently told The Wall Street Journal:

“The more complicated your model the better it is at explaining the data you use for training and the less good it is about explaining the data in the future.”

Many quant funds today are simply mining patterns from past data and hoping those patterns persist into the future. In reality, most of those patterns were either the result of randomness or conditions that no longer exist.

Again, we see the need for the pairing of AI with human intelligence. Machines can process data and find patterns more quickly and efficiently than any human, but for now they lack the intelligence to audit those patterns and understand whether or not they can be used to predict future results.

AI As a Black Box

Of course, to audit the results of AI, humans need to be able to understand how that AI thinks. They need some level of insight into the processes the machine is using and the patterns it discovers.

Right now, most AI is not transparent enough for potential users to trust it. All too often, the AI algorithms are a black box that take in data and spit out results without any transparency into their underlying machinations.

In part, this problem is unavoidable if we want the machines to operate with the scale needed for them to be useful. The code that goes into AI is so complex that few individuals could ever fully understand its inner workings.

In fact, software doesn’t even have to reach the complexity of AI to have these problems. Consider the unintended acceleration problems that plagued the Toyota Camry about 10 years ago. So many programmers had worked on the engine control software that it turned into “spaghetti code,” a mass of unintelligible and often contradictory code that no one fully understood and that caused great harm.

If the software supporting human control of a car’s braking and acceleration can become so complex, just imagine how much more confusing and susceptible to errors more sophisticated activities, like financial modeling, can be. One mistake in one line of code could alter the entire function of the system. The software wouldn’t break; it would just perform a different task than intended, without anyone realizing until, perhaps, it’s too late.

This problem is exacerbated by the divide between the people with adequate subject matter expertise in finance and the programmers. The finance experts don’t understand how the software works, while the programmers don’t understand how finance works.

Finance is far from the only sector to experience this problem. In “The Coming Software Apocalypse,” The Atlantic detailed several examples of major failures that occurred because the coders didn’t properly anticipate all the potential uses of their software. These failures were prolonged because the people using the code didn’t have any idea how it worked.

As long as AI remains a black box, its utility will be limited. Eventually, the lack of transparency will lead to a significant and undetected failure. Even before that point, it will be difficult to get investors to commit significant money to a program they cannot trust.

The Way Forward

For all these challenges, AI will continue to expand its reach on Wall Street. There’s no other way for financial firms to meet the dual mandate of reducing costs and improving their service. Technology is the only solution for analyzing the huge volumes of corporate financial data filed with the SEC every hour and meeting the Fiduciary Duty of Care.

The firms that understand this fact and take concrete steps to invest in technology will have a significant advantage over their competitors, which is why UBS and Morgan Stanley are among our top picks in the financial sector.

This article is the first in a five-part series on the role of AI in finance. Over the next two articles, we will dig deeper into the challenges facing AI and how they can be overcome, while the last two articles will show how AI can lead to significant benefits for both financial firms and their customers.

Disclosure: David Trainer and Sam McBride receive no compensation to write about any specific stock, sector, style, or theme.

[1] Harvard Business School features the powerful impact of our research automation technology in the case New Constructs: Disrupting Fundamental Analysis with Robo-Analysts.

Learn more about New Constructs here: https://www.newconstructs.com

In case you missed New Constructs’ webinar on “Machine Learning for Smarter Investing”, watch the recording on YouTube: https://www.youtube.com/watch?v=H_bgWhdEgWY&index=2&list=PL71vNXrERKUokMoNH3Wcw4tYfoQXsqRmc

This article is from New Constructs, LLC and is being posted with New Constructs, LLC’s permission. The views expressed in this article are solely those of the author and/or New Constructs, LLC and IB is not endorsing or recommending any investment or trading discussed in the article. This material is for information only and is not and should not be construed as an offer to sell or the solicitation of an offer to buy any security. To the extent that this material discusses general market activity, industry or sector trends or other broad-based economic or political conditions, it should not be construed as research or investment advice. To the extent that it includes references to specific securities, commodities, currencies, or other instruments, those references do not constitute a recommendation by IB to buy, sell or hold such security. This material does not and is not intended to take into account the particular financial conditions, investment objectives or requirements of individual customers. Before acting on this material, you should consider whether it is suitable for your particular circumstances and, as necessary, seek professional advice.


### Crisis Correlation Risk

Market Insight: Crisis Diversification

Most investors are well aware that a diversified portfolio is less prone to suffer large drawdowns (i.e., pullbacks from account equity highs) than a portfolio that shows high concentration. But, few investors realize that simple diversification among asset classes is NOT enough. This is because during times of stress, correlation among assets increases substantially. To increase the chances of avoiding large portfolio shocks, investors also need to ensure that they have diversification among strategies. Let's look at some data that will help demonstrate this important concept.
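The claim that correlations spike during stress can be illustrated with a small sketch. The data below are synthetic, not actual ETF returns: in the "stress" regime, a large common shock factor dominates both return series, and their pairwise correlation jumps accordingly.

```python
import numpy as np

def correlation(x, y):
    """Pearson correlation between two return series."""
    return float(np.corrcoef(x, y)[0, 1])

rng = np.random.default_rng(42)
n = 250

# Calm regime: two mostly independent daily return streams.
calm_a = rng.normal(0.0005, 0.01, n)
calm_b = rng.normal(0.0003, 0.01, n)

# Stress regime: a large common shock factor dominates both series,
# mimicking how "diverse" assets sell off together in a crisis.
shock = rng.normal(-0.002, 0.03, n)
stress_a = shock + rng.normal(0, 0.005, n)
stress_b = shock + rng.normal(0, 0.005, n)

print(round(correlation(calm_a, calm_b), 2))      # near 0 in calm markets
print(round(correlation(stress_a, stress_b), 2))  # close to 1 under stress
```

The point of the sketch is not the specific numbers but the mechanism: when one factor (fear-driven deleveraging) drives all assets, measured correlations converge toward 1 regardless of how "diverse" the holdings looked beforehand.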

The chart below shows 6 "diverse" asset classes:

• AGG (Investment Grade Bonds)
• GLD (Gold)
• SPY (S&P 500)
• JNK (Junk Bonds)
• EEM (Emerging Equity Markets)
• IYR (Real Estate)

It is set up as a “% Change” chart covering the stock market crash of 2008, with a start date of 9/30/2008 and an end date of 11/24/2008.

This example from 2008 demonstrates how stress can spread across normally diversified asset classes. A real crisis does not just impact one asset group. It does not merely lead to rotation, or redistribution, of wealth; it leads to destruction of wealth. Stocks, bonds, gold, and real estate all saw substantial declines during the 2008 bear market as wealth was destroyed and investors in all types of instruments saw the value of their portfolios fall. During October of 2008, investment grade bonds (AGG) held up best, with a decline maxing out at just over 10%. Real estate (IYR) fared the worst, with a maximum drawdown of nearly 60%. Virtually nothing provided a safe haven.
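The drawdown figures quoted above are peak-to-trough declines measured from a running equity high. A minimal sketch of the calculation, using a made-up price path rather than the actual 2008 ETF data:

```python
import numpy as np

def max_drawdown(prices):
    """Largest peak-to-trough decline, as a fraction of the running peak."""
    prices = np.asarray(prices, dtype=float)
    running_peak = np.maximum.accumulate(prices)  # highest price seen so far
    drawdowns = (running_peak - prices) / running_peak
    return float(drawdowns.max())

# Synthetic price path: rallies to 120, crashes to 66, partially recovers.
prices = [100, 110, 120, 100, 80, 66, 75, 90]
print(round(max_drawdown(prices), 2))  # 0.45: a 45% peak-to-trough decline
```

Applied to daily closes of each ETF over the crash window, this is the statistic behind "AGG maxed out at just over 10% while IYR drew down nearly 60%."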

Another way to see this is to look at the dates where these markets made major bottoms.

Major Market Bottoms

We see here that all of these instruments hit major lows within a 5-month period. That means that each of these asset classes experienced significant losses prior to these bottoms occurring. A long-only approach across many different asset classes may help cushion drawdowns during minor corrective periods. But this kind of approach does a poor job of protecting against major meltdowns that can cause severe damage to a portfolio. Instead of simply diversifying among asset classes, investors should also consider diversifying among strategies and investment techniques.

The table below is taken from the paper “Managed Futures – Riding the Wave”, published by Societe Generale. It compares returns of the managed futures index with those of hedge funds, stocks, and bonds from 2000 to 2016.

What stands out here is that managed futures strategies not only performed well in terms of limiting drawdown, but their maximum drawdown actually occurred in 2013 – at a different time from everything else, which bottomed in the late-2008 to early-2009 period mentioned previously. Managed futures strategies could be one way to go. Traders could also consider market-neutral strategies, trend following, intraday approaches, or short-term multi-day trading techniques as other alternatives.

When constructing a portfolio, traders should remember that different strategies shine in different environments. To improve their chances of avoiding a substantial drawdown, traders should consider not just alternative asset classes but also alternative strategies. Non-correlated strategies can provide real advantages. I would encourage all traders and investors to examine their portfolios and consider whether the types of strategies, and not just the asset classes they employ, are diversified enough.
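The advantage of non-correlated strategies follows from basic portfolio arithmetic: an equal-weight blend of two strategies with equal volatility σ and correlation ρ has volatility σ·sqrt((1 + ρ)/2), so with ρ near zero the blend's volatility drops toward σ/√2, a reduction of roughly 29%. A quick sketch with synthetic strategy returns (not real track records):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 1000

# Two synthetic strategy return streams with equal volatility,
# generated independently so their correlation is near zero.
strat_a = rng.normal(0.0004, 0.01, n)
strat_b = rng.normal(0.0004, 0.01, n)

blend = 0.5 * strat_a + 0.5 * strat_b  # equal-weight combination

vol_single = strat_a.std()
vol_blend = blend.std()

# With rho ~ 0, blended volatility is near sigma / sqrt(2),
# i.e., roughly a 29% reduction versus either strategy alone.
print(round(vol_blend / vol_single, 2))
```

Two strategies that are highly correlated (ρ near 1) would show essentially no reduction, which is exactly the failure mode of "diversifying" across asset classes that all crash together.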

For more information on this topic, or to learn more about InvestiQuant and our solutions, see the November 2017 webinar “Crisis Correlation: The Hidden Risk of Diversification”: https://www.youtube.com/watch?v=x4mkAwyP7F4

Learn more about InvestiQuant here.

This article is from InvestiQuant and is being posted with InvestiQuant’s permission. The views expressed in this article are solely those of the author and/or InvestiQuant and IB is not endorsing or recommending any investment or trading discussed in the article. This material is for information only and is not and should not be construed as an offer to sell or the solicitation of an offer to buy any security. To the extent that this material discusses general market activity, industry or sector trends or other broad-based economic or political conditions, it should not be construed as research or investment advice. To the extent that it includes references to specific securities, commodities, currencies, or other instruments, those references do not constitute a recommendation by IB to buy, sell or hold such security. This material does not and is not intended to take into account the particular financial conditions, investment objectives or requirements of individual customers. Before acting on this material, you should consider whether it is suitable for your particular circumstances and, as necessary, seek professional advice.



###### Disclosures

We appreciate your feedback. If you have any questions or comments about IBKR Quant Blog please contact ibkrquant@ibkr.com.

The material (including articles and commentary) provided on IBKR Quant Blog is offered for informational purposes only. The posted material is NOT a recommendation by Interactive Brokers (IB) that you or your clients should contract for the services of or invest with any of the independent advisors or hedge funds or others who may post on IBKR Quant Blog or invest with any advisors or hedge funds. The advisors, hedge funds and other analysts who may post on IBKR Quant Blog are independent of IB and IB does not make any representations or warranties concerning the past or future performance of these advisors, hedge funds and others or the accuracy of the information they provide. Interactive Brokers does not conduct a "suitability review" to make sure the trading of any advisor or hedge fund or other party is suitable for you.

Securities or other financial instruments mentioned in the material posted are not suitable for all investors. The material posted does not take into account your particular investment objectives, financial situations or needs and is not intended as a recommendation to you of any particular securities, financial instruments or strategies. Before making any investment or trade, you should consider whether it is suitable for your particular circumstances and, as necessary, seek professional advice. Past performance is no guarantee of future results.

Any information provided by third parties has been obtained from sources believed to be reliable and accurate; however, IB does not warrant its accuracy and assumes no responsibility for any errors or omissions.

Any information posted by employees of IB or an affiliated company is based upon information that is believed to be reliable. However, neither IB nor its affiliates warrant its completeness, accuracy or adequacy. IB does not make any representations or warranties concerning the past or future performance of any financial instrument. By posting material on IB Quant Blog, IB is not representing that any particular financial instrument or trading strategy is appropriate for you.