IB Quant Blog




Stocks

Deep Learning for Trading Part 2: Configuring TensorFlow and Keras to run on GPU


In Part 1, we introduced Keras and discussed some of the major obstacles to using deep learning techniques in trading systems, including a warning about attempting to extract meaningful signals from historical market data.

Part 2 provides a walk-through of setting up Keras and Tensorflow for R using either the default CPU-based configuration, or the more complex and involved (but well worth it) GPU-based configuration under the Windows environment.

Stay tuned for Part 3 of this series, which will be published next week.

CPU vs GPU for Deep Learning

No doubt you know that a computer’s Central Processing Unit (CPU) is its primary computation module. CPUs are designed and optimized for rapid computation on small amounts of data: elementary arithmetic operations on a few numbers are blindingly fast. However, CPUs tend to struggle when asked to operate on larger amounts of data, for example when performing matrix operations on large arrays. And guess what: the computational nuts and bolts of deep learning are all about exactly such matrix operations. That’s bad news for a CPU.

The rendering of computer graphics relies on these same types of operations, and Graphical Processing Units (GPUs) were developed to optimize and accelerate them. GPUs typically consist of hundreds or even thousands of cores, enabling massive parallelization. This makes GPUs a far more suitable hardware for deep learning than the CPU.

Of course, you can do deep learning on a CPU. And this is fine for small scale research projects or just getting a feel for the technique. But for doing any serious deep learning research, access to a GPU will provide an enormous boost in productivity and shorten the feedback loop considerably. Instead of waiting days for a model to train, you might only have to wait hours. Instead of waiting hours, you’ll only have to wait minutes.

When selecting a GPU for deep learning, the most important characteristic is the memory bandwidth of the unit, not the number of cores as one might expect. That’s because it typically takes more time to read the data from memory than to perform the actual computations on that data! So if you want to do fast deep learning research, be sure to check the memory bandwidth of your GPU. By way of comparison, my (slightly outdated) NVIDIA GTX 970M has a memory bandwidth of around 120 GB/s. The GTX 980Ti clocks in at around 330 GB/s!

Baby Steps: Configuring Keras and TensorFlow to Run on the CPU

If you don’t have access to a GPU, or if you just want to try out some deep learning in Keras before committing to a full-blown deep learning research project, then the CPU installation is the right one for you. It will only take a couple of minutes and a few lines of code, as opposed to an hour or so and a deep dive into your system for the GPU option.

Here’s how to install Keras to run TensorFlow on the CPU.

At the time of writing, the Keras R package could be installed from CRAN, but I preferred to install directly from GitHub. To do so, you need to first install the devtools package, and then do

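The original post shows this step as a screenshot; a minimal sketch of the two commands is below (the GitHub repository path is the one documented by RStudio at the time of writing):

    # Install devtools, then install the keras R package from GitHub
    install.packages("devtools")
    devtools::install_github("rstudio/keras")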

Then, load the Keras package and make use of the convenient install_keras()  function to install both Keras and TensorFlow:

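Again, the original shows this as a screenshot; a minimal equivalent sketch:

    # Load keras and install both Keras and TensorFlow (CPU-only by default)
    library(keras)
    install_keras()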

That’s it! You now have the CPU-based versions of Keras and TensorFlow ready to go, which is fine if you are just starting out with deep learning and want to explore it at a high level. If you don’t want the GPU-based versions just yet, then I’m afraid that’s all we have for you until the next post!

Serious Deep Learning: Configuring Keras and TensorFlow to run on a GPU

Installing versions of Keras and TensorFlow compatible with NVIDIA GPUs is a little more involved, but is certainly worth doing if you have the appropriate hardware and intend to do a decent amount of deep learning research. The speed up in model training is really significant.

Here’s how to install and configure the NVIDIA GPU-compatible version of Keras and TensorFlow for R under Windows.

Step 1: What hardware do you have?

First, you need to work out if you have a compatible NVIDIA GPU installed on your Windows machine. To do so, open your NVIDIA Control Panel. Typically, it’s located under C:\Program Files\NVIDIA Corporation\Control Panel Client, but on recent Windows versions you can also find it by right-clicking on the desktop and selecting ‘NVIDIA Control Panel’, as in the screenshot below:

NVIDIA Control Panel

When the control panel opens, click on the System Information link in the lower left corner, circled in the screenshot below:

NVIDIA

 

This will bring up the details of your NVIDIA GPU. Note your GPU’s model name (here mine is a GeForce GTX 970M, which you can see under the ‘Items’ column). While you’re at it, check how your GPU’s memory bandwidth stacks up (remember, this parameter is the limiting factor of the GPU’s speed on deep learning tasks).

System Information

 

Step 2: Is your hardware compatible with TensorFlow?

Next, head over to NVIDIA’s GPU documentation, located at https://developer.nvidia.com/cuda-gpus. You’ll need to find your GPU model on this page and work out its Compute Capability Number. This needs to be 3.0 or higher to be compatible with TensorFlow. You can see in the screenshot below that my particular GPU model has a Compute Capability of 5.2, which means that I can use it to train deep learning models in TensorFlow. Hooray for productivity.

CUDA

In practice, my GPU model is now a few years old and there are much better ones available today. But even this GPU delivers far faster model training than a CPU.

Step 3: Get CUDA

Next, you’ll need to download and install NVIDIA’s CUDA Toolkit. CUDA is NVIDIA’s parallel computing API that enables programming on the GPU. Thus, it provides the framework for harnessing the massive parallel processing capabilities of the GPU. At the time of writing, the release version of TensorFlow (1.4) is compatible with version 8 of the CUDA Toolkit (NOT version 9, which is the current release), so you’ll need to download it via the CUDA archives here.¹

Step 4: Get your latest driver

You’ll also need to get the latest drivers for your particular GPU from NVIDIA’s driver download page. Download the correct driver for your GPU and then install it.

 

Step 5: Get cuDNN

Finally, you’ll need to get NVIDIA’s CUDA Deep Neural Network library (cuDNN). cuDNN is essentially a library for deep learning built using the CUDA framework and enables computational tools like TensorFlow to access GPU acceleration. You can read all about cuDNN here. In order to download it, you will need to sign up for an NVIDIA developers account.

Having activated your NVIDIA developers account, you’ll need to download the correct version of cuDNN. The current release of TensorFlow (version 1.4) requires cuDNN version 6. However, the latest version of cuDNN is 7, and it’s not immediately obvious how to acquire version 6. You’ll need to head over to this page, and under the text on ‘What’s New in cuDNN 7?’ click the Download button. After agreeing to some terms and conditions, you’ll then be able to select from numerous versions of cuDNN. Make sure to get the version of cuDNN that is compatible with your version of CUDA (version 8), as there are different sub-versions of cuDNN for each version of CUDA.¹

Confusing, no? I’ve circled the correct (at the time of writing) cuDNN version in the screenshot below:

cuDNN

Once you’ve downloaded the cuDNN zipped file, extract the contents to a directory of your choice.

 

Step 6: Modify the Windows %PATH%  variable

We also need to add the paths to the CUDA and cuDNN libraries to the Windows %PATH% variable so that TensorFlow can find them. To do so, open the Windows Control Panel, then click on System and Security, then System, then Advanced System Settings, as in the screenshot below:

System

Then, when the System Properties window opens, click on Environment Variables. In the new window, under System Variables, select Path and click Edit. Then click New in the Edit Environment Variable window and add the paths to the CUDA and cuDNN libraries. On my machine, I added the following paths (but yours will depend on where they were installed):

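The screenshot of my entries is not reproduced here; purely as a hypothetical illustration (assuming the default CUDA 8.0 install location), the added entries might look something like:

    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v8.0\bin
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v8.0\libnvvp
    C:\tools\cudnn\bin   (or wherever you extracted the cuDNN archive)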

Here’s a screenshot of the three windows and the relevant buttons involved in this process:

Add to path

Step 7: Install GPU-enabled Keras

Having followed those steps, you’re finally in a position to install Keras and configure it to run TensorFlow on the GPU. From a fresh R or RStudio session, install the keras package if you haven’t yet done so, then load it and run install_keras() with the argument tensorflow = 'gpu':

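The original shows this as a screenshot; a minimal sketch:

    # Install the GPU-enabled versions of Keras and TensorFlow
    library(keras)
    install_keras(tensorflow = "gpu")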

The installation process might take quite some time, but don’t worry, you’ll get that time back and a whole lot more in faster training of your deep learning experiments.

 

That’s it! Congratulations! You are now ready to perform efficient deep learning research on your GPU! We’ll dive into that in the next post.

 

A troubleshooting tip

When I first set this up, I found that Keras was throwing errors that it couldn’t find certain TensorFlow modules. Eventually I worked out that it was because I already had a version of TensorFlow installed in my main conda environment thanks to some Python work I’d done previously. If you have the same problem, explicitly setting the conda environment immediately after loading the Keras package should resolve it:

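The original shows the fix as a screenshot; the sketch below illustrates the idea. The environment name "r-tensorflow" is the default conda environment created by install_keras(), but yours may be named differently:

    # Point keras/reticulate at the conda environment created by install_keras()
    library(keras)
    reticulate::use_condaenv("r-tensorflow")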


 

Note:

  1. The compatible versions of CUDA and cuDNN may change as new versions of TensorFlow are released. It is worth double checking the correct versions at tensorflow.org

 

 

 

 

Learn more about Robot Wealth here: https://robotwealth.com/

This article is from Robot Wealth and is being posted with Robot Wealth’s permission. The views expressed in this article are solely those of the author and/or Robot Wealth and IB is not endorsing or recommending any investment or trading discussed in the article. This material is for information only and is not and should not be construed as an offer to sell or the solicitation of an offer to buy any security. To the extent that this material discusses general market activity, industry or sector trends or other broad-based economic or political conditions, it should not be construed as research or investment advice. To the extent that it includes references to specific securities, commodities, currencies, or other instruments, those references do not constitute a recommendation by IB to buy, sell or hold such security. This material does not and is not intended to take into account the particular financial conditions, investment objectives or requirements of individual customers. Before acting on this material, you should consider whether it is suitable for your particular circumstances and, as necessary, seek professional advice.






Stocks

Basic Operations on Stock data using Python


Python has emerged as the fastest-growing programming language, and this has stemmed from multiple factors such as ease of learning, readability, conciseness, a strong developer community, and applicability across domains. Python has found wide acceptance in trading too, and this has led to Python-based analytics platforms, Python APIs, and trading strategies being built using Python.


The objective of this post is to illustrate how easy it is to learn Python and apply it to formulate and analyze trading strategies. If you are new to programming, this blog might just help you overcome your fear of programming. Also, don’t forget to check out the links provided at the end of this post to learn about some exciting trading strategies covered on our blog.

Let us run through some basic operations that can be performed on stock data using Python. We start by reading the stock data from a CSV file. The CSV file contains the Open-High-Low-Close (OHLC) and Volume numbers for the stock.

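The original code appears only as a screenshot; a minimal equivalent sketch is below (the file name and the DATE column used as the index are assumptions for illustration):

    import pandas as pd

    # Read the OHLCV data from a CSV file, using the date column as the index
    data = pd.read_csv("stock_data.csv", index_col="DATE", parse_dates=True)
    print(data.head())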

The ‘TIME’ column seen here specifies the closing time of the day’s trading session. To delete the column we can simply use the ‘del’ command.

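For example:

    # Drop the 'TIME' column, which only records the session closing time
    del data["TIME"]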

Now, let us use the type function to check whether the object is a pandas datetime index.

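A sketch of that check:

    # Confirm that the index is a pandas DatetimeIndex
    type(data.index)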

I would like to know the number of trading days (the number of rows) in the given data set. It can be done using the count method.

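A sketch of that step (the CLOSE column name is assumed from the OHLC layout):

    # Number of trading days (rows) in the data set
    data["CLOSE"].count()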

What if I want to know the maximum close price that was reached in the given period? This is made possible by using the max method.

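Continuing with the assumed CLOSE column:

    # Maximum closing price reached in the given period
    data["CLOSE"].max()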

Is it also possible to know the date on which this maximum price was reached? To find the respective date we apply the index property as shown below.

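One way to do this, continuing the same sketch:

    # Date(s) on which the maximum closing price occurred
    data[data["CLOSE"] == data["CLOSE"].max()].index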

 

Let us compute the daily percentage change in closing price. We add a new column of ‘Percentage_Change’ to our existing data set. In the next line of code, we have filtered the percent change column for all the values greater than 1.0. The result has been presented below.

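A sketch of those two lines (expressing the change in percent, rather than as a fraction, is an assumption here):

    # Daily percentage change in the closing price, and days with a move above 1%
    data["Percentage_Change"] = data["CLOSE"].pct_change() * 100
    data[data["Percentage_Change"] > 1.0]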

Finally, let us add a couple of indicators. We compute the 20-day simple moving average and the 5-day average volume. We can add more indicators to our data frame and then analyze the stock trend to see whether it is bullish or bearish. You can learn more on how to create various technical indicators in Python here.

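A sketch of those indicators (the new column names are illustrative):

    # 20-day simple moving average of the close and 5-day average volume
    data["SMA_20"] = data["CLOSE"].rolling(window=20).mean()
    data["Avg_Volume_5"] = data["VOLUME"].rolling(window=5).mean()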

 

In this short post, we covered some simple ways to analyze the data set and build more understanding of the stock data. Can you think of building a trading strategy using similar basic operations and simple indicators? Here are the links to articles on Python that can be explored for your own trading needs.

Trading Using Machine Learning In Python – SVM (Support Vector Machine)
Strategy using Trend-following Indicators: MACD, ST and ADX
Sentiment Analysis on News Articles using Python
Python Trading Strategy in Quantiacs Platform

In our upcoming posts, we will provide more ways and methods that can be used for trading using Python. Keep following our posts.


Next Step

If you want to learn about various aspects of algorithmic trading, then check out QuantInsti’s Executive Programme in Algorithmic Trading (EPAT™).
 

 

Milind Paradkar holds an MBA in Finance from the University of Mumbai and a Bachelor’s degree in Physics from St. Xavier’s College, Mumbai. At QuantInsti®, Milind is involved in creating technical content on Algorithmic & Quantitative trading. Prior to QuantInsti®, Milind had worked at Deutsche Bank as a Senior Analyst where he was involved in the cash flow modeling of structured finance deals covering Asset-backed Securities (ABS) and Collateralized Debt Obligations (CDOs).

Learn more QuantInsti here https://www.quantinsti.com

This article is from QuantInsti and is being posted with QuantInsti’s permission. The views expressed in this article are solely those of the author and/or QuantInsti and IB is not endorsing or recommending any investment or trading discussed in the article. This material is for information only and is not and should not be construed as an offer to sell or the solicitation of an offer to buy any security. To the extent that this material discusses general market activity, industry or sector trends or other broad-based economic or political conditions, it should not be construed as research or investment advice. To the extent that it includes references to specific securities, commodities, currencies, or other instruments, those references do not constitute a recommendation by IB to buy, sell or hold such security. This material does not and is not intended to take into account the particular financial conditions, investment objectives or requirements of individual customers. Before acting on this material, you should consider whether it is suitable for your particular circumstances and, as necessary, seek professional advice.






Stocks

How to Run Trading Algorithms on Google Cloud Platform in 6 Easy Steps


Kris Longmore shares his experience using Interactive Brokers Gateway

Earlier this year, I attended the Google Next conference in San Francisco and gained some first-hand perspective on what’s possible with Google’s cloud infrastructure. Since then, I’ve been leaning on Google Cloud Platform (GCP) to run my trading algorithms (and more), and it has become an important tool in my workflow.

In this post, I’m going to show you how to set up a GCP cloud compute instance to act as a server for hosting a trading algorithm. I’ll also discuss why such a setup can be a good option and when it might pay to consider alternatives. But cloud compute instances are just a tiny fraction of the whole GCP ecosystem, so before we go any further, let’s take a high level overview of the various components that make up GCP.

What is Google Cloud Platform?

GCP consists of a suite of cloud storage, compute, analytics and development infrastructure and services. Google says that GCP runs on the very same infrastructure that Google uses for its own products, such as Google Search. This suite of services and infrastructure goes well beyond simple cloud storage and compute resources, providing some very handy and affordable machine learning, big data, and analytics tools.

GCP consists of:

  • Google Compute Engine: on-demand virtual machines and an application development platform.
  • Google Storage: scalable object storage; like an (almost) infinite disk drive in the cloud.
  • BigTable and Cloud SQL: scalable NoSQL and SQL databases hosted in the cloud.
  • Big Data Tools:
    • BigQuery: big data warehouse geared up for analytics
    • DataFlow: data processing management
    • DataProc: managed Spark and Hadoop service
    • DataLab: analytics and visualization platform, like a Jupyter notebook in the cloud.
    • Data Studio: for turning data into nice visualizations and reports
  • Cloud Machine Learning: train your own models in the cloud, or access Google’s pre-trained neural network models for video intelligence, image classification, speech recognition, text processing and language translation.
  • Cloud Pub/Sub: send and receive messages between independent applications.
  • Management and Developer Tools: monitoring, logging, alerting and performance analytics, plus command line/powershell tools, hosted git repositories, and other tools for application development.
  • More that I haven’t mentioned here!

The services and infrastructure generally play nicely with each other and with the standard open source tools of development and analytics. For example, DataLab integrates with BigQuery and Cloud Machine Learning and runs Python code. Google have tried to make GCP a self-contained, one-stop-shop for development, analytics, and hosting. And from what I have seen, they are succeeding.

Using Google Compute Engine to Host a Trading Algorithm

Google Compute Engine (GCE) provides virtual machines (VMs) that run on hardware located in Google’s global network of data centres (a VM is simply an emulation of a computer system that provides the functionality of a physical computer). You can essentially use a VM just like you would a normal computer, without actually owning the requisite hardware. In the example below, I used a VM instance to:

  • Host and run some software applications (Zorro and R) that execute the code for the trading system.
  • Connect to a broker to receive market data and execute trades (in this case, using the Interactive Brokers IB Gateway software).

GCE allows you to quickly launch an instance using predefined CPU, RAM and storage specifications, as well as to create your own custom machine. You can also select from several pre-defined ‘images’, which consist of the operating system (both Linux and Windows options are available), its configuration and some standard software. What’s really nice is that GCE enables you to create your own custom image that includes the software and tools specific to your use case. This means that you don’t have to upload your software and trading infrastructure each time you want to launch a new instance – you can simply create an instance from an image that you saved previously.

For a list of pros and cons, and for step-by-step instructions on how to run a trading algorithm on GCE, read the full article on the Robot Wealth blog.


 

Learn more about Robot Wealth here: https://robotwealth.com/

This article is from Robot Wealth and is being posted with Robot Wealth’s permission. The views expressed in this article are solely those of the author and/or Robot Wealth and IB is not endorsing or recommending any investment or trading discussed in the article. This material is for information only and is not and should not be construed as an offer to sell or the solicitation of an offer to buy any security. To the extent that this material discusses general market activity, industry or sector trends or other broad-based economic or political conditions, it should not be construed as research or investment advice. To the extent that it includes references to specific securities, commodities, currencies, or other instruments, those references do not constitute a recommendation by IB to buy, sell or hold such security. This material does not and is not intended to take into account the particular financial conditions, investment objectives or requirements of individual customers. Before acting on this material, you should consider whether it is suitable for your particular circumstances and, as necessary, seek professional advice.






Stocks

Deep Learning for Trading: Part 1


In the last few years, deep learning has gone from being an interesting but impractical academic pursuit to a ubiquitous technology that touches many aspects of our lives on a daily basis – including in the world of trading. This meteoric rise has been fuelled by a perfect storm of:

  • Frequent breakthroughs in deep learning research which regularly provide better tools for training deep neural networks
  • An explosion in the quantity and availability of data
  • The availability of cheap and plentiful compute power
  • The rise of open source deep learning tools that facilitate both the practical application of the technology and innovative research that drives the field ever forward

Deep learning excels at discovering complex and abstract patterns in data and has proven itself on tasks that have traditionally required the intuitive thinking of the human brain to solve. That is, deep learning is solving problems that have thus far proven beyond the ability of machines.

However, as anyone who has used deep learning in a trading application can attest, the problem is not nearly as simple as just feeding some market data to an algorithm and using the information to help make trading decisions. Some of the common issues that need to be solved include:

  1. Working out a sensible way to frame the forecasting problem, for example as a classification or regression problem.
     
  2. Scaling data in a way that facilitates training of the deep network.
     
  3. Deciding on an appropriate network architecture.
     
  4. Tuning the hyperparameters of the network and optimization algorithm such that the network converges sensibly and efficiently. Depending on the architecture chosen, there might be a couple of dozen hyperparameters that affect the model, which can provide a significant headache.
     
  5. Coming up with a cost function that is applicable to the problem.
     
  6. Dealing with the problem of an ever-changing market. Market data tends to be non-stationary, which means that a network trained on historical data might very well prove useless when used with future data.
     
  7. There may be very little signal in historical market data with respect to the future direction of the market. This makes sense intuitively if you consider that the market is impacted by more than just its historical price and volume. Further, pretty much everyone who trades a particular market will be looking at its historical data and using it in some way to inform their trading decisions. That means that market data alone may not give an individual much of a unique edge.

The first five issues listed above are common to most machine learning problems and their resolution represents a big part of what applied data science is all about. The implication is that while these problems are not trivial, they are by no means deal breakers.

What is Keras?

Keras is a high-level API for building and training neural networks. Its strength lies in its ability to facilitate fast and efficient research, which of course is very important for systematic traders, particularly those of the DIY persuasion for whom time is often the limiting factor to success. Keras is easy to learn and its syntax is particularly friendly. Keras also plays nicely with CPUs and GPUs and can integrate with the TensorFlow, Theano and CNTK backends – without limiting the flexibility of those tools. For example, pretty much anything you can implement in raw TensorFlow, you can also implement in Keras, likely at a fraction of the development effort.

Keras is also implemented in R.

What’s next?

In the deep learning experiments that follow in Part 2 and beyond, we’ll use the R implementation of Keras with TensorFlow backend. We’ll be exploring fully connected feedforward networks, various recurrent architectures including the Gated Recurrent Unit (GRU) and Long Short-Term Memory (LSTM), and even convolutional neural networks which normally find application in computer vision and image classification.

Stay tuned.

 

Learn more about Robot Wealth here: https://robotwealth.com/

This article is from Robot Wealth and is being posted with Robot Wealth’s permission. The views expressed in this article are solely those of the author and/or Robot Wealth and IB is not endorsing or recommending any investment or trading discussed in the article. This material is for information only and is not and should not be construed as an offer to sell or the solicitation of an offer to buy any security. To the extent that this material discusses general market activity, industry or sector trends or other broad-based economic or political conditions, it should not be construed as research or investment advice. To the extent that it includes references to specific securities, commodities, currencies, or other instruments, those references do not constitute a recommendation by IB to buy, sell or hold such security. This material does not and is not intended to take into account the particular financial conditions, investment objectives or requirements of individual customers. Before acting on this material, you should consider whether it is suitable for your particular circumstances and, as necessary, seek professional advice.






Stocks

Asset Allocation for Sector ETFs : An Empirical Perspective on Estimation Error



In this article, Majeed Simaan uses the quantmod and the lubridate packages.

Introduction

The conventional wisdom in finance implies that investors should make rational decisions in which risk is compensated by reward: to achieve a greater reward, investors need to bear more risk. This is the ideal view in financial economic thought and the foundation of Modern Portfolio Theory (MPT). In the following, I would like to address this view in the presence of uncertainty, which is the inevitable ingredient of day-to-day decision making.

Let us think about an investor who is interested in allocating his wealth among a set of assets. As a rational investor, he should choose an optimal allocation among the set that yields the best reward for the level of risk he is willing to take. By reward, I refer to how much he expects to earn on his portfolio decision, whereas risk denotes how volatile this prospect will be. Formally, the former is measured by the expected return of his portfolio, while the latter is proxied by the standard deviation of his portfolio return. This paradigm is known as the mean-variance (MV) model, pioneered by Harry Markowitz in the early 1950s.

One of the underlying assumptions of the MV model is that the investor possesses full information about the underlying assets. Specifically, it assumes that he knows the model's inputs without any uncertainty. Nonetheless, in reality, as decision makers we can only form our views about these assets using historical data or through some speculation about the future of the underlying assets (or the sectors/market). This inevitably introduces what is called estimation error into the asset allocation problem.

It has been well established in the recent MPT literature that estimation error impairs the performance of MV optimal portfolios. It appears that investors may be better off allocating their wealth equally (naively) among the underlying assets rather than trying to solve for an optimal allocation. Such practice has been known as the naive approach, since it does not incorporate information about the underlying assets. Nonetheless, a more recent literature debates whether this naive strategy still outperforms MV portfolios once estimation error is accounted for in the portfolio optimization problem.

In this article, I will test the above implications using monthly asset returns for 9 sector ETFs covering the period from Jan 1999 to July 2017. I exclude one sector, the XLRE ETF, due to limited data availability. I will demonstrate the impact of estimation error on constructing MV optimal portfolios and then refer the reader to possible remedies used in the literature. In doing so, I hope to address the importance of portfolio optimization in the presence of uncertainty and the significance of taking estimation error into account.

 

3D perspective on the mean-variance efficient frontier

Picture Source:  A 3D perspective on the mean-variance efficient frontier when estimation error is considered (source: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2495621)
 



ETF Data

I use the quantmod package to download data on the 9 sector ETFs and the lubridate package to handle date formats.

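The download code is shown only as a screenshot in the original; a minimal sketch is below (the ticker list is the set of nine SPDR sector ETFs implied by the text, and the date range follows the article):

    library(quantmod)
    library(lubridate)

    # The nine sector ETFs (XLRE is excluded due to limited data availability)
    tickers <- c("XLB", "XLE", "XLF", "XLI", "XLK", "XLP", "XLU", "XLV", "XLY")

    # Download adjusted prices and compute monthly returns for each ETF
    rets <- lapply(tickers, function(tk) {
      p <- getSymbols(tk, from = "1999-01-01", to = "2017-07-31", auto.assign = FALSE)
      monthlyReturn(Ad(p))
    })
    rets <- do.call(merge, rets)
    colnames(rets) <- tickers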

In and Out of Sample

I split the data into two parts, in-sample and out-of-sample. The former represents the window over which the allocation decision is made, whereas the latter denotes the realization period.

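The split itself is not reproduced here; as an illustrative sketch only (the article does not state the split date, so the end of 2008 below is purely an assumption):

    # Split the monthly returns into an in-sample and an out-of-sample window
    in_sample  <- rets[year(index(rets)) <= 2008, ]
    out_sample <- rets[year(index(rets)) >  2008, ]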

The parameters of the two periods differ significantly, especially as the first period contains the dot-com bubble and part of the recent financial crisis. Looking at the difference between the in-sample and out-of-sample parameters below, it is evident that estimation error in the mean returns is more severe than in the second moments, i.e. the variances and covariances of asset returns:

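A sketch of how such a comparison can be produced from the split above:

    # Sample moments of the two windows
    mu_in  <- colMeans(in_sample);  mu_out <- colMeans(out_sample)
    S_in   <- cov(in_sample);       S_out  <- cov(out_sample)

    # Differences between in-sample and out-of-sample estimates
    round(mu_in - mu_out, 4)     # mean returns
    round(S_in - S_out, 4)       # variances and covariances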

This small evidence is one of the main motivations in the practice of portfolio theory that focuses on the global minimum variance (henceforth GMV) portfolio. The GMV portfolio requires only information about the volatilities and covariances of the asset returns, unlike the case of efficient MV portfolios that also require knowledge about the mean returns. I shall get back to this issue later on.

MV Optimal Portfolios

There are a number of R packages available to perform portfolio optimization. Nevertheless, I will solve for optimal MV portfolios using a function that I designed myself. The function is coded using an R base constrained optimization function. In doing so, I hope to provide the reader with some exposure to the underlying science behind the practice of portfolio optimization.

Objective Function

The objective function is defined in terms of expected utility (EU). A decision maker chooses an optimal allocation that maximizes the EU of his terminal wealth. Hence, such a function takes into account two main components: the portfolio mean return and the volatility of the portfolio return. This represents the reward-risk trade-off, in which the EU increases with the former but decreases with the latter, reflecting an environment where risk is undesirable, such that investors are risk-averse.

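The author's function is shown only as an image; a minimal sketch consistent with the description is below. It assumes the standard quadratic mean-variance utility EU(X) = X'M - (k/2) X'S X, so the exact functional form is an assumption:

    # Expected utility of a portfolio allocation X under mean-variance preferences
    EU <- function(X, M, S, k) {
      as.numeric(t(X) %*% M - (k / 2) * t(X) %*% S %*% X)
    }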

The EU function takes 4 arguments. The first input is a vector X that denotes the allocation among the assets. The second and third inputs are the mean vector, M, and the covariance matrix, S, of the asset returns. These two represent the knowledge of the decision maker about the underlying assets. Finally, the fourth input is the risk aversion of the decision maker, denoted by k.

The k parameter determines the preference of the decision maker in terms of risk tolerance. Let us consider two extreme cases. If the investor is only concerned with maximizing reward, then k is close to zero such that his utility is mainly determined by the portfolio expected return, regardless of the associated risk. On the other hand, if k goes to infinity, then it implies that the utility is mostly affected by the portfolio risk, whereas the utility derived from the portfolio expected return is trivial. In the former case, the investor chooses the asset with the highest mean return; while in the latter he would choose the GMV portfolio since risk is the main component that affects his utility.

We will consider k as given and let it range between 2.5 and 100. Clearly, the larger the k is the more risk-averse the investor is. Additionally, in assessing the mean vector, M, and the covariance matrix, S, we will consider the sample estimates for now. Clearly, k, M, and S are treated as given, whereas the main control variable the investor needs to choose is the portfolio weights, i.e. X. Therefore, with a given level of risk-aversion and equipped with both the M and S, the investor will choose the allocation that maximizes his EU.

Optimization

I use numerical optimization to construct optimal portfolios. The base constrOptim R function allows users to find the minimum point given an initial guess of the control variable. In addition, a gradient can be supplied to make the optimization more efficient. Generally, numerical optimization tools rely on iterative search algorithms to locate the minimum (optimal) point; if one is able to direct this search in a more informed way, it becomes more efficient and reliable. This is where the gradient comes into the picture.

I define the following function that takes four arguments: M, S, k, and BC. BC is a list that contains the budget constraints with respect to which the investor chooses his optimal allocation.

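The original function is shown as an image; the sketch below is one possible implementation, not the author's actual code. It enforces the budget constraint by substituting the last weight, and passes any additional linear inequality constraints to constrOptim() through BC (assumed here to hold the ui and ci arguments, with ui %*% x >= ci):

    MV_portfolio <- function(M, S, k, BC) {
      n <- length(M)
      # Objective: negative expected utility in terms of the first n-1 weights,
      # with the last weight defined as 1 - sum(the rest) (budget constraint)
      negEU <- function(x) {
        X <- c(x, 1 - sum(x))
        -EU(X, M, S, k)
      }
      gradEU <- function(x) {
        X <- c(x, 1 - sum(x))
        g <- as.numeric(M - k * S %*% X)   # dEU/dX
        -(g[-n] - g[n])                    # chain rule for the substituted weight
      }
      x0 <- rep(1 / n, n - 1)              # start from the equally weighted portfolio
      if (is.null(BC$ui)) {
        sol <- optim(x0, negEU, gradEU, method = "BFGS")
      } else {
        sol <- constrOptim(x0, negEU, gradEU, ui = BC$ui, ci = BC$ci)
      }
      X <- c(sol$par, 1 - sum(sol$par))
      names(X) <- names(M)
      X
    }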

The basic budget constraint is that the investor allocates all of his wealth to the portfolio, such that the allocated proportions sum to 1. Other constraints may include limits on positions in individual assets or the exclusion of short-sales. The latter are common constraints used in the practice of portfolio management, whereas the former case is usually used in the theoretical literature to derive tractable analytical results. I define the following two BC items:

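In terms of the sketch above, the two settings could be expressed as follows (again an illustration rather than the author's code): BC1 imposes only the full-investment constraint, which the sketch already enforces by substitution, while BC2 additionally rules out short sales.

    n <- length(tickers)

    # BC1: budget constraint only (no extra inequality constraints)
    BC1 <- list(ui = NULL, ci = NULL)

    # BC2: budget constraint plus no short sales
    # (each of the first n-1 weights >= 0, and their sum <= 1 so the last weight >= 0)
    BC2 <- list(ui = rbind(diag(n - 1), -rep(1, n - 1)),
                ci = c(rep(0, n - 1), -1))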

Given information about the mean vector and the covariance matrix of the assets, the desired level of risk-taking, and some budget constraints, the above MV_portfolio function returns the optimal portfolio weights. It does so by initializing X to an equally weighted portfolio, which also satisfies the budget constraints, i.e.

X0 = (1/N, 1/N, …, 1/N)

The MV Efficient Frontier

One view is that investors should be compensated, in terms of portfolio expected return, for the additional risk they are willing to take. This results in the classical textbook parabola that captures the reward-risk trade-off, known as the MV efficient frontier. This parabola is the cornerstone of almost every finance MBA class. Nonetheless, such a trade-off is not as clear in practice, i.e. when investors face estimation error.

Basic Budget Constraints

I demonstrate this issue in the following figure. The y-axis represents the portfolio mean return, while the x-axis denotes the portfolio risk, proxied by the standard deviation of the portfolio return. The figure has two lines. The solid line is the classical MV efficient frontier, which is constructed using the out-of-sample data. This is the hypothetical case, which serves as our benchmark. On the other hand, the dashed line represents the frontier for the in-sample case, which is the more realistic one.

Figure: MV efficient frontier under the basic budget constraint (solid: out-of-sample benchmark; dashed: in-sample based portfolios)
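As a purely illustrative sketch of how the dashed line could be generated (using the hypothetical EU/MV_portfolio functions above, and assuming, as the discussion implies, that portfolios optimized on in-sample estimates are then evaluated on the out-of-sample moments):

    # Optimize on in-sample moments for a range of risk-aversion levels,
    # then evaluate each allocation on the out-of-sample moments
    ks <- seq(2.5, 100, length.out = 50)
    frontier <- t(sapply(ks, function(k) {
      X <- MV_portfolio(M = mu_in, S = S_in, k = k, BC = BC1)
      c(risk = sqrt(as.numeric(t(X) %*% S_out %*% X)),
        mean = as.numeric(t(X) %*% mu_out))
    }))
    plot(frontier[, "risk"], frontier[, "mean"], type = "l", lty = 2,
         xlab = "Portfolio risk (std. dev.)", ylab = "Portfolio mean return")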

The reward-risk trade-off is very evident in the solid line. This implies that, if we could assess the future reward-risk trade-off, there would be an additional reward for tolerating more risk. However, the dashed line tells us a different story. Specifically, it implies that investors get punished for taking more risk, something that contradicts the whole foundation of financial economic thought.

The reason for the above evidence is the presence of estimation error. At the top left of the dashed line, I highlight the in-sample GMV portfolio. In this case, I use the covariance matrix from the in-sample window to construct the portfolio that yields the lowest standard deviation. Nonetheless, as we move away from this point, the portfolio also relies on the assets' mean returns, which are associated with greater estimation error. Clearly, this justifies the conventional wisdom that argues that if you deviate from the GMV portfolio, your portfolio will suffer due to greater estimation error.

Standing next to the GMV portfolio is the naive one, denoted by a cross. Clearly, the naive strategy dominates most of the MV portfolios (it lies above and to the left of most points on the dashed line). However, that does not hold true for the GMV portfolio. In fact, the GMV portfolio dominates the naive one by achieving a higher mean return for lower risk. We also observe that, if estimation error were absent (the solid black line), the naive portfolio would be considered MV sub-optimal.

Adding Short-Sales Constraints

It is common in the practice of portfolio management to use ad-hoc techniques, such as limiting the exposure to a certain sector or avoiding short-sales altogether. While such practice seems sub-optimal from a theoretical point of view, it has important implications for estimation error.

I repeat the same exercise as before but with the addition of short-sales constraints. In the same fashion as the previous figure, I demonstrate the case when short-sales are not allowed. I highlight the new results in red and compare them with the previous ones as follows:

Figure: MV efficient frontier with short-sales constraints (red) versus the unconstrained case (black)

In the absence of estimation error (i.e. the hypothetical case), it is clear that solving a constrained optimization problem results in a sub-optimal solution. In this case, the red solid line is below the black solid line, such that portfolio optimization that rules out short-sales yields sub-optimal MV portfolios compared with those that do not impose such a constraint. Nevertheless, we also observe that short-sales constraints mitigate the risk exposure of the investor: for the same level of risk aversion, the investor ends up taking less risk. Alternatively, one can argue that no-short-sales investors are more risk-averse in nature.

Looking more closely at the realistic case, i.e. in the presence of estimation error, it is clear from the red dashed line that short-sales constraints limit the investor's exposure to excessive risk-taking. Nonetheless, we can still see that investors get punished for taking excessive risk for which they do not get rewarded accordingly. On the other hand, we still observe that the GMV portfolio dominates the naive one, and that there is only a small change in the location of the GMV point.

In either case, we can argue that short-sales constraints limit the investor's exposure to excessive risk. What's more interesting, nevertheless, is the following observation. While adding short-sales constraints seems MV sub-optimal from a full-information perspective (i.e. the red solid line versus the black solid line), this is not the case once we take estimation error into account. The red dashed line is no more sub-optimal than the black dashed one; in fact, it appears that the former mitigates the underperformance caused by estimation error.

Next Steps

Most of the recent literature on portfolio optimization proposes different ways to mitigate estimation error. Those approaches include Bayesian and shrinkage methods. In fact, it has been established that some of the shrinkage approaches are consistent with adding short-sales constraints. Nevertheless, due to the greater estimation error associated with the assets' mean returns, the focus has been largely limited to the GMV portfolio alone. In the next article, I would like to devote the discussion to the GMV portfolio and apply some of these techniques to yield estimation-error-robust portfolios. Stay tuned!


Appendix


You can access the complete R source code used in this article via my R Corner available at my homepage.

 

 

Majeed Simaan is a PhD candidate in Finance at Rensselaer Polytechnic Institute.  His research interests revolve around Banking and Risk Management, with emphasis on asset allocation and pricing. He is well versed in research areas related to banking, asset pricing, and financial modeling. He has been involved in a number of projects that apply state of the art empirical research tools in the areas of financial networks (interconnectedness), machine learning, and textual analysis. His research has been published in the International Review of Economics and Finance and the Proceedings of the 2016 IEEE Symposium Series on Computational Intelligence.

Before joining RPI, Majeed pursued graduate training in the area of Mathematical Finance at the London School of Economics (LSE). He has a strong quantitative background in both computing and statistical learning. He holds both BA and MA in Statistics from the University of Haifa with specialization in actuarial science.

 
This article is from Majeed Simaan and is being posted with Majeed Simaan’s permission. The views expressed in this article are solely those of the author and/or Majeed Simaan and IB is not endorsing or recommending any investment or trading discussed in the article. This material is for information only and is not and should not be construed as an offer to sell or the solicitation of an offer to buy any security. To the extent that this material discusses general market activity, industry or sector trends or other broad-based economic or political conditions, it should not be construed as research or investment advice. To the extent that it includes references to specific securities, commodities, currencies, or other instruments, those references do not constitute a recommendation by IB to buy, sell or hold such security. This material does not and is not intended to take into account the particular financial conditions, investment objectives or requirements of individual customers. Before acting on this material, you should consider whether it is suitable for your particular circumstances and, as necessary, seek professional advice.







Disclosures

We appreciate your feedback. If you have any questions or comments about IB Quant Blog please contact ibquant@ibkr.com.

The material (including articles and commentary) provided on IB Quant Blog is offered for informational purposes only. The posted material is NOT a recommendation by Interactive Brokers (IB) that you or your clients should contract for the services of or invest with any of the independent advisors or hedge funds or others who may post on IB Quant Blog or invest with any advisors or hedge funds. The advisors, hedge funds and other analysts who may post on IB Quant Blog are independent of IB and IB does not make any representations or warranties concerning the past or future performance of these advisors, hedge funds and others or the accuracy of the information they provide. Interactive Brokers does not conduct a "suitability review" to make sure the trading of any advisor or hedge fund or other party is suitable for you.

Securities or other financial instruments mentioned in the material posted are not suitable for all investors. The material posted does not take into account your particular investment objectives, financial situations or needs and is not intended as a recommendation to you of any particular securities, financial instruments or strategies. Before making any investment or trade, you should consider whether it is suitable for your particular circumstances and, as necessary, seek professional advice. Past performance is no guarantee of future results.

Any information provided by third parties has been obtained from sources believed to be reliable and accurate; however, IB does not warrant its accuracy and assumes no responsibility for any errors or omissions.

Any information posted by employees of IB or an affiliated company is based upon information that is believed to be reliable. However, neither IB nor its affiliates warrant its completeness, accuracy or adequacy. IB does not make any representations or warranties concerning the past or future performance of any financial instrument. By posting material on IB Quant Blog, IB is not representing that any particular financial instrument or trading strategy is appropriate for you.