IB Quant Blog




Stocks

Popular Python: Learning The Lambda Function


 

 

As I was drafting this article on Python's slightly confusing and lesser-used lambda function, an exciting tweet dropped on my Twitter timeline. It mentioned that Microsoft is exploring the idea of adding Python as one of the official Excel scripting languages. I am sure thousands of Python enthusiasts would be elated to learn this. Yes! Python, one of the most powerful and popular programming languages of recent years, will be made available inside Excel if Microsoft decides to implement it.

 

This would make Python, which already has wide application in finance, data science, research, web development, robotics, software development, education and many other fields, even more popular.

Lambda Function

Now let's come to the main topic of this post: the short, anonymous lambda function. Many Python learners might not be aware of its existence. Lambda is used to construct a Python function; it is an in-line function, unlike the conventional def construct, which builds a function as a block of statements. Let us compare the syntax of the conventional function construct with the lambda syntax to make things clear.

Lambda Syntax

A conventional def function can contain any number of statements and starts by giving the function a name. The lambda construct, on the other hand, is an anonymous function: you need not provide any name for it. Let us take simple examples to illustrate their usage.
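For instance, here is an illustrative comparison (the function names are my own):

```python
# Conventional construct: a named function defined with def
def double(x):
    return x * 2

# Lambda construct: an anonymous, in-line expression
double_lambda = lambda x: x * 2

print(double(4))         # 8
print(double_lambda(4))  # 8
```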


The lambda construct is used more for its convenience than for the range of operations that can be performed with it. Remember the following points when you need to construct a lambda function.

  • Lambda is an expression and not a statement. It does not support a block of expressions.
  • Lambda is defined at the point where we need to use it and need not be named.
  • Lambda does not require a return statement. It always returns something after evaluation.

Let us take some examples to illustrate how we can build a lambda construct. We will make use of multiple arguments, logical operators, comparison operators, and conditional statements.

Lambda construct with a single argument.

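For example (an illustrative sketch):

```python
# Single-argument lambda that squares its input
square = lambda x: x ** 2
print(square(5))  # 25
```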

Lambda construct with multiple arguments.

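For example:

```python
# Lambda with multiple arguments
weighted_sum = lambda x, y, z: x + 2 * y + 3 * z
print(weighted_sum(1, 2, 3))  # 14
```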

Lambda construct with logical operators

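For example:

```python
# Lambda using the logical 'and' operator
both_positive = lambda x, y: x > 0 and y > 0
print(both_positive(3, 5))   # True
print(both_positive(3, -5))  # False
```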

Lambda construct with conditional expressions like the if..else statement and comparison operators.

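For example:

```python
# Lambda with an if..else conditional expression and a comparison operator
bigger = lambda x, y: x if x > y else y
print(bigger(8, 13))  # 13
```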

We can also construct lambda with multiple if..else statements in the following manner.

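For example, classifying a hypothetical daily return:

```python
# Lambda with multiple (chained) if..else conditions
classify = lambda r: 'up' if r > 0 else ('down' if r < 0 else 'flat')
print(classify(0.5))   # up
print(classify(-0.2))  # down
print(classify(0.0))   # flat
```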

Using Lambda with Map, Filter, and Reduce functions

Lambda functions are often used in conjunction with functions like map(), filter(), and reduce(). Let us illustrate their usage.

The map function applies a function to each item in an iterable object. In Python 3 it returns a map object (an iterator), which can be wrapped in list() to obtain all the results from the function calls. The map function takes the syntax map(function, iterable).

Example:

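A sketch with illustrative prices:

```python
prices = [92.5, 101.0, 87.25]
# map applies the lambda to each price; list() materializes the results
doubled = list(map(lambda p: p * 2, prices))
print(doubled)  # [185.0, 202.0, 174.5]
```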

The filter function is used to extract each element in the iterable object for which the function returns True (in Python 3 it, too, returns an iterator). In this case, we will define the function using the lambda construct and apply the filter function.

Example:

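A sketch with illustrative daily returns:

```python
daily_returns = [0.5, -1.2, 0.3, -0.7, 2.1]
# filter keeps only the elements for which the lambda returns True
positive = list(filter(lambda r: r > 0, daily_returns))
print(positive)  # [0.5, 0.3, 2.1]
```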

The reduce function (found in the functools module in Python 3) reduces the input list to a single value by repeatedly calling the function provided as its argument. By default, reduce starts from the first value of the list and passes the current output along with the next item from the list.

Example:

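A sketch with illustrative volumes:

```python
from functools import reduce  # reduce lives in functools in Python 3

volumes = [100, 250, 175, 300]
# reduce passes the running result along with the next item: ((100 + 250) + 175) + 300
total = reduce(lambda acc, v: acc + v, volumes)
print(total)  # 825
```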

 

These were some of the ways in which we can use the lambda construct.

Visit the QuantInsti website for more articles on Python and its usage in algorithmic trading.

 

 

This article is from QuantInsti and is being posted with QuantInsti’s permission. The views expressed in this article are solely those of the author and/or QuantInsti and IB is not endorsing or recommending any investment or trading discussed in the article. This material is for information only and is not and should not be construed as an offer to sell or the solicitation of an offer to buy any security. To the extent that this material discusses general market activity, industry or sector trends or other broad-based economic or political conditions, it should not be construed as research or investment advice. To the extent that it includes references to specific securities, commodities, currencies, or other instruments, those references do not constitute a recommendation by IB to buy, sell or hold such security. This material does not and is not intended to take into account the particular financial conditions, investment objectives or requirements of individual customers. Before acting on this material, you should consider whether it is suitable for your particular circumstances and, as necessary, seek professional advice.






Stocks

Deep Learning for Trading Part 2: Configuring TensorFlow and Keras to run on GPU


In Part 1, we introduced Keras and discussed some of the major obstacles to using deep learning techniques in trading systems, including a warning about attempting to extract meaningful signals from historical market data.

Part 2 provides a walk-through of setting up Keras and Tensorflow for R using either the default CPU-based configuration, or the more complex and involved (but well worth it) GPU-based configuration under the Windows environment.

Stay tuned for Part 3 of this series, which will be published next week.

CPU vs GPU for Deep Learning

No doubt you know that a computer's Central Processing Unit (CPU) is its primary computation module. CPUs are designed and optimized for rapid computation on small amounts of data, so elementary arithmetic operations on a few numbers are blindingly fast. However, CPUs tend to struggle when asked to operate on larger amounts of data, for example when performing matrix operations on large arrays. And guess what: the computational nuts and bolts of deep learning are all about such matrix operations. That's bad news for a CPU.

The rendering of computer graphics relies on these same types of operations, and Graphical Processing Units (GPUs) were developed to optimize and accelerate them. GPUs typically consist of hundreds or even thousands of cores, enabling massive parallelization. This makes GPUs a far more suitable hardware for deep learning than the CPU.

Of course, you can do deep learning on a CPU. And this is fine for small scale research projects or just getting a feel for the technique. But for doing any serious deep learning research, access to a GPU will provide an enormous boost in productivity and shorten the feedback loop considerably. Instead of waiting days for a model to train, you might only have to wait hours. Instead of waiting hours, you’ll only have to wait minutes.

When selecting a GPU for deep learning, the most important characteristic is the memory bandwidth of the unit, not the number of cores as one might expect. That’s because it typically takes more time to read the data from memory than to perform the actual computations on that data! So if you want to do fast deep learning research, be sure to check the memory bandwidth of your GPU. By way of comparison, my (slightly outdated) NVIDIA GTX 970M has a memory bandwidth of around 120 GB/s. The GTX 980Ti clocks in at around 330 GB/s!

Baby Steps: Configuring Keras and TensorFlow to Run on the CPU

If you don’t have access to a GPU, or if you just want to try out some deep learning in Keras before committing to a full-blown deep learning research project, then the CPU installation is the right one for you. It will only take a couple of minutes and a few lines of code, as opposed to an hour or so and a deep dive into your system for the GPU option.

Here’s how to install Keras to run TensorFlow on the CPU.

At the time of writing, the Keras R package could be installed from CRAN, but I preferred to install directly from GitHub. To do so, you need to first install the devtools package, and then install Keras from its GitHub repository.

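The original code screenshot isn't reproduced here, but the commands described amount to the following sketch:

```r
# Install devtools from CRAN, then the Keras R package from GitHub
install.packages("devtools")
devtools::install_github("rstudio/keras")
```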

Then, load the Keras package and make use of the convenient install_keras()  function to install both Keras and TensorFlow:

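As a sketch of the step just described:

```r
# Load the Keras R package, then install Keras and TensorFlow (CPU-based by default)
library(keras)
install_keras()
```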

That’s it! You now have the CPU-based versions of Keras and TensorFlow ready to go, which is fine if you are just starting out with deep learning and want to explore it at a high level. If you don’t want the GPU-based versions just yet, then I’m afraid that’s all we have for you until the next post!

Serious Deep Learning: Configuring Keras and TensorFlow to run on a GPU

Installing versions of Keras and TensorFlow compatible with NVIDIA GPUs is a little more involved, but is certainly worth doing if you have the appropriate hardware and intend to do a decent amount of deep learning research. The speed up in model training is really significant.

Here’s how to install and configure the NVIDIA GPU-compatible version of Keras and TensorFlow for R under Windows.

Step 1: What hardware do you have?

First, you need to work out if you have a compatible NVIDIA GPU installed on your Windows machine. To do so, open your NVIDIA Control Panel. Typically, it's located under C:\Program Files\NVIDIA Corporation\Control Panel Client, but on recent Windows versions you can also find it by right-clicking on the desktop and selecting 'NVIDIA Control Panel', as in the screenshot below:

[Screenshot: opening the NVIDIA Control Panel from the desktop context menu]

When the control panel opens, click on the System Information link in the lower left corner, circled in the screenshot below:

[Screenshot: the System Information link in the NVIDIA Control Panel]

 

This will bring up the details of your NVIDIA GPU. Note your GPU's model name (mine is a GeForce GTX 970M, which you can see under the 'Items' column). While you're at it, check how your GPU's memory bandwidth stacks up (remember this parameter is the limiting factor of the GPU's speed on deep learning tasks).

[Screenshot: NVIDIA System Information window]

 

Step 2: Is your hardware compatible with TensorFlow?

Next, head over to NVIDIA’s GPU documentation, located at https://developer.nvidia.com/cuda-gpus. You’ll need to find your GPU model on this page and work out its Compute Capability Number. This needs to be 3.0 or higher to be compatible with TensorFlow. You can see in the screenshot below that my particular GPU model has a Compute Capability of 5.2, which means that I can use it to train deep learning models in TensorFlow. Hooray for productivity.

[Screenshot: NVIDIA CUDA GPUs page showing compute capability]

In practice, my GPU model is now a few years old and there are much better ones available today. But still, using this GPU provides far shorter model training times than using a CPU.

Step 3: Get CUDA

Next, you’ll need to download and install NVIDIA’s CUDA Toolkit. CUDA is NVIDIA’s parallel computing API that enables programming on the GPU. Thus, it provides the framework for harnessing the massive parallel processing capabilities of the GPU. At the time of writing, the release version of TensorFlow (1.4) was compatible with version 8 of the CUDA Toolkit (NOT version 9, which is the current release), which you’ll need to download via the CUDA archives here.1

Step 4: Get your latest driver

You’ll also need to get the latest drivers for your particular GPU from NVIDIA’s driver download page. Download the correct driver for your GPU and then install it.

 

Step 5: Get cuDNN

Finally, you’ll need to get NVIDIA’s CUDA Deep Neural Network library (cuDNN). cuDNN is essentially a library for deep learning built using the CUDA framework and enables computational tools like TensorFlow to access GPU acceleration. You can read all about cuDNN here. In order to download it, you will need to sign up for an NVIDIA developers account.

Having activated your NVIDIA developers account, you’ll need to download the correct version of cuDNN. The current release of TensorFlow (version 1.4) requires cuDNN version 6. However, the latest version of cuDNN is 7, and it’s not immediately obvious how to acquire version 6. You’ll need to head over to this page, and under the text on ‘What’s New in cuDNN 7?’ click the Download button. After agreeing to some terms and conditions, you’ll then be able to select from numerous versions of cuDNN. Make sure to get the version of cuDNN that is compatible with your version of CUDA (version 8), as there are different sub-versions of cuDNN for each version of CUDA.1

Confusing, no? I’ve circled the correct (at the time of writing) cuDNN version in the screenshot below (click for a clearer image):

[Screenshot: cuDNN download options]

Once you’ve downloaded the cuDNN zipped file, extract the contents to a directory of your choice.

 

Step 6: Modify the Windows %PATH%  variable

We also need to add the paths to the CUDA and cuDNN libraries to the Windows %PATH%  variable so that TensorFlow can find them.  To do so, open the Windows Control Panel, then click on System and Security, then System, then Advanced System Settings like in the screenshot below:

[Screenshot: Windows Control Panel, Advanced System Settings]

Then, when the System Properties window opens, click on Environment Variables. In the new window, under System Variables, select Path and click Edit. Then click New in the Edit Environment Variable window and add the paths to the CUDA and cuDNN libraries. On my machine, I added the following paths (but yours will depend on where they were installed):

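The exact entries depend on your machine; assuming a default CUDA 8.0 installation and a hypothetical cuDNN extraction directory of C:\tools\cudnn, the new Path entries would look something like:

```text
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v8.0\bin
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v8.0\libnvvp
C:\tools\cudnn\bin
```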

Here’s a screenshot of the three windows and the relevant buttons involved in this process (click for a larger image):

[Screenshot: the three windows involved in editing the Path variable]

Step 7: Install GPU-enabled Keras

Having followed those steps, you’re finally in a position to install Keras and configure it to run TensorFlow on the GPU. From a fresh R or R-Studio session, install the Keras package if you haven’t yet done so, then load it and run install_keras()  with the argument tensorflow = 'gpu' :

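As a sketch of the installation command just described:

```r
# Install Keras configured to run TensorFlow on the GPU
library(keras)
install_keras(tensorflow = "gpu")
```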

The installation process might take quite some time, but don’t worry, you’ll get that time back and a whole lot more in faster training of your deep learning experiments.

 

That’s it! Congratulations! You are now ready to perform efficient deep learning research on your GPU! We’ll dive into that in the next unit.

 

A troubleshooting tip

When I first set this up, I found that Keras was throwing errors that it couldn’t find certain TensorFlow modules. Eventually I worked out that it was because I already had a version of TensorFlow installed in my main conda environment thanks to some Python work I’d done previously. If you have the same problem, explicitly setting the conda environment immediately after loading the Keras package should resolve it:

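As a sketch, assuming the conda environment created by install_keras() carries its default name, r-tensorflow:

```r
library(keras)
# Explicitly point to the conda environment that holds the Keras/TensorFlow install
reticulate::use_condaenv("r-tensorflow", required = TRUE)
```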


 

Note:

  1. The compatible versions of CUDA and cuDNN may change as new versions of TensorFlow are released. It is worth double checking the correct versions at tensorflow.org

 

 

 

 

Learn more about Robot Wealth here: https://robotwealth.com/

This article is from Robot Wealth and is being posted with Robot Wealth’s permission.






Stocks

Basic Operations on Stock data using Python


Python has emerged as the fastest growing programming language, which has stemmed from multiple factors such as its ease of learning, readability, conciseness, strong developer community, and applicability across domains. Python has found wide acceptance in trading too, and this has led to Python-based analytics platforms, Python APIs, and trading strategies being built using Python.


The objective of this post is to illustrate how easy it is to learn Python and apply it to formulate and analyze trading strategies. If you are new to programming this blog might just help you overcome your fear of programming. Also, don’t forget to check out some nice links provided at the end of this blog to learn some exciting trading strategies which have been posted on our blog.

Let us run through some basic operations that can be performed on stock data using Python. We start by reading the stock data from a CSV file. The CSV file contains the Open-High-Low-Close (OHLC) and Volume numbers for the stock.


The ‘TIME’ column seen here specifies the closing time of the day’s trading session. To delete the column we can simply use the ‘del’ command.

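Since the original screenshots are not reproduced here, the two steps above can be sketched as follows, using a small in-memory CSV with hypothetical OHLCV values in place of the author's file:

```python
import io
import pandas as pd

# Stand-in for the stock data CSV file (hypothetical values)
csv_file = io.StringIO("""DATE,TIME,OPEN,HIGH,LOW,CLOSE,VOLUME
2017-01-02,15:30,100.0,102.5,99.5,101.0,120000
2017-01-03,15:30,101.0,103.0,100.0,102.5,150000
2017-01-04,15:30,102.5,104.0,101.5,101.5,90000
""")
data = pd.read_csv(csv_file, index_col='DATE', parse_dates=True)

# Delete the 'TIME' column with the del command
del data['TIME']
print(data.columns.tolist())  # ['OPEN', 'HIGH', 'LOW', 'CLOSE', 'VOLUME']
```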

Now, let us use the type function to check whether the object is a pandas datetime index.


I would like to know the number of trading days (the number of rows) in the given data set. It can be done using the count method.

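Continuing the sketch (rebuilding the same kind of hypothetical data set):

```python
import io
import pandas as pd

csv_file = io.StringIO("""DATE,CLOSE
2017-01-02,101.0
2017-01-03,102.5
2017-01-04,101.5
""")
data = pd.read_csv(csv_file, index_col='DATE', parse_dates=True)

# Is the index a pandas DatetimeIndex?
print(type(data.index))
# Number of trading days (rows) in the data set
print(data['CLOSE'].count())  # 3
```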

What if I want to know the maximum close price that was reached in the given period? This is made possible by using the max method.


Is it also possible to know the date on which this maximum price was reached? To find the respective date we apply the index property as shown below.

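With the same hypothetical data, the maximum close and its date can be pulled off the index; idxmax() is one convenient way to do it:

```python
import io
import pandas as pd

csv_file = io.StringIO("""DATE,CLOSE
2017-01-02,101.0
2017-01-03,102.5
2017-01-04,101.5
""")
data = pd.read_csv(csv_file, index_col='DATE', parse_dates=True)

# Maximum close price reached in the period
print(data['CLOSE'].max())     # 102.5
# Date on which that maximum was reached
print(data['CLOSE'].idxmax())
```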

 

Let us compute the daily percentage change in closing price. We add a new column of ‘Percentage_Change’ to our existing data set. In the next line of code, we have filtered the percent change column for all the values greater than 1.0. The result has been presented below.

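A sketch of the percentage-change step, with hypothetical closes (the column name follows the text):

```python
import io
import pandas as pd

csv_file = io.StringIO("""DATE,CLOSE
2017-01-02,100.0
2017-01-03,102.0
2017-01-04,101.0
2017-01-05,103.5
""")
data = pd.read_csv(csv_file, index_col='DATE', parse_dates=True)

# Daily percentage change in the closing price
data['Percentage_Change'] = data['CLOSE'].pct_change() * 100

# Filter for days on which the close moved by more than 1.0%
big_moves = data[data['Percentage_Change'] > 1.0]
print(big_moves)
```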

Finally, let us add a couple of indicators. We compute the 20-day simple moving average and the 5-day average volume. We can add more indicators to our data frame and then analyze the stock trend to see whether it is bullish or bearish. You can learn more on how to create various technical indicators in Python here.

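A sketch of the two indicators, on a hypothetical series long enough for a 20-day window:

```python
import numpy as np
import pandas as pd

# Hypothetical closes (rising by 1 each day) and constant volume
data = pd.DataFrame({
    'CLOSE': np.linspace(100.0, 124.0, 25),
    'VOLUME': [100000] * 25,
})

# 20-day simple moving average of the close
data['SMA_20'] = data['CLOSE'].rolling(window=20).mean()
# 5-day average volume
data['Avg_Vol_5'] = data['VOLUME'].rolling(window=5).mean()
print(data[['SMA_20', 'Avg_Vol_5']].tail(1))
```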

 

In this short post, we covered some simple ways to analyze a stock data set and build a better understanding of the data. Can you think of building a trading strategy using similar basic operations and simple indicators? Here are links to articles on Python that can be explored for your own trading needs.

Trading Using Machine Learning In Python – SVM (Support Vector Machine)
Strategy using Trend-following Indicators: MACD, ST and ADX
Sentiment Analysis on News Articles using Python
Python Trading Strategy in Quantiacs Platform
In our upcoming posts, we will cover more ways and methods that can be used for trading with Python. Keep following our posts.


Next Step

If you want to learn various aspects of Algorithmic trading then check out QuantInsti’s Executive Programme in Algorithmic Trading (EPAT™).
 

 

Milind Paradkar holds an MBA in Finance from the University of Mumbai and a Bachelor’s degree in Physics from St. Xavier’s College, Mumbai. At QuantInsti®, Milind is involved in creating technical content on Algorithmic & Quantitative trading. Prior to QuantInsti®, Milind had worked at Deutsche Bank as a Senior Analyst where he was involved in the cash flow modeling of structured finance deals covering Asset-backed Securities (ABS) and Collateralized Debt Obligations (CDOs).

Learn more about QuantInsti here: https://www.quantinsti.com

This article is from QuantInsti and is being posted with QuantInsti’s permission.






Stocks

How to Run Trading Algorithms on Google Cloud Platform in 6 Easy Steps


Kris Longmore shares his experience using Interactive Brokers Gateway

Earlier this year, I attended the Google Next conference in San Francisco and gained some first hand perspective into what’s possible with Google’s cloud infrastructure. Since then, I’ve been leaning on Google Cloud Platform (GCP) to run my trading algorithms (and more) and it has become an important tool in my workflow.

In this post, I’m going to show you how to set up a GCP cloud compute instance to act as a server for hosting a trading algorithm. I’ll also discuss why such a setup can be a good option and when it might pay to consider alternatives. But cloud compute instances are just a tiny fraction of the whole GCP ecosystem, so before we go any further, let’s take a high level overview of the various components that make up GCP.

What is Google Cloud Platform?

GCP consists of a suite of cloud storage, compute, analytics and development infrastructure and services. Google says that GCP runs on the very same infrastructure that Google uses for its own products, such as Google Search. This suite of services and infrastructure goes well beyond simple cloud storage and compute resources, providing some very handy and affordable machine learning, big data, and analytics tools.

GCP consists of:

  • Google Compute Engine: on-demand virtual machines and an application development platform.
  • Google Storage: scalable object storage; like an (almost) infinite disk drive in the cloud.
  • BigTable and Cloud SQL: scalable NoSQL and SQL databases hosted in the cloud.
  • Big Data Tools:
    • BigQuery: big data warehouse geared up for analytics
    • DataFlow: data processing management
    • DataProc: managed Spark and Hadoop service
    • DataLab: analytics and visualization platform, like a Jupyter notebook in the cloud.
    • Data Studio: for turning data into nice visualizations and reports
  • Cloud Machine Learning: train your own models in the cloud, or access Google’s pre-trained neural network models for video intelligence, image classification, speech recognition, text processing and language translation.
  • Cloud Pub/Sub: send and receive messages between independent applications.
  • Management and Developer Tools: monitoring, logging, alerting and performance analytics, plus command line/powershell tools, hosted git repositories, and other tools for application development.
  • More that I haven’t mentioned here!

The services and infrastructure generally play nicely with each other and with the standard open source tools of development and analytics. For example, DataLab integrates with BigQuery and Cloud Machine Learning and runs Python code. Google have tried to make GCP a self-contained, one-stop-shop for development, analytics, and hosting. And from what I have seen, they are succeeding.

Using Google Compute Engine to Host a Trading Algorithm

Google Compute Engine (GCE) provides virtual machines (VMs) that run on hardware located in Google’s global network of data centres (a VM is simply an emulation of a computer system that provides the functionality of a physical computer). You can essentially use a VM just like you would a normal computer, without actually owning the requisite hardware. In the example below, I used a VM instance to:

  • Host and run some software applications (Zorro and R) that execute the code for the trading system.
  • Connect to a broker to receive market data and execute trades (in this case, using the Interactive Brokers IB Gateway software).

GCE allows you to quickly launch an instance using predefined CPU, RAM and storage specifications, as well as to create your own custom machine. You can also select from several pre-defined ‘images’, which consist of the operating system (both Linux and Windows options are available), its configuration and some standard software. What’s really nice is that GCE enables you to create your own custom image that includes the software and tools specific to your use case. This means that you don’t have to upload your software and trading infrastructure each time you want to launch a new instance – you can simply create an instance from an image that you saved previously.

For a list of Pros and Cons, and for Step-by-Step Instructions on How to Run a Trading Algorithm on GCE, read the full article here.

 

Learn more about Robot Wealth here: https://robotwealth.com/

This article is from Robot Wealth and is being posted with Robot Wealth’s permission.






Stocks

Deep Learning for Trading: Part 1


In the last few years, deep learning has gone from being an interesting but impractical academic pursuit to a ubiquitous technology that touches many aspects of our lives on a daily basis, including in the world of trading. This meteoric rise has been fuelled by a perfect storm of:

  • Frequent breakthroughs in deep learning research which regularly provide better tools for training deep neural networks
  • An explosion in the quantity and availability of data
  • The availability of cheap and plentiful compute power
  • The rise of open source deep learning tools that facilitate both the practical application of the technology and innovative research that drives the field ever forward

Deep learning excels at discovering complex and abstract patterns in data and has proven itself on tasks that have traditionally required the intuitive thinking of the human brain to solve. That is, deep learning is solving problems that have thus far proven beyond the ability of machines.

However, as anyone who has used deep learning in a trading application can attest, the problem is not nearly as simple as just feeding some market data to an algorithm and using the information to help make trading decisions. Some of the common issues that need to be solved include:

  1. Working out a sensible way to frame the forecasting problem, for example as a classification or regression problem.
     
  2. Scaling data in a way that facilitates training of the deep network.
     
  3. Deciding on an appropriate network architecture.
     
  4. Tuning the hyperparameters of the network and optimization algorithm such that the network converges sensibly and efficiently. Depending on the architecture chosen, there might be a couple of dozen hyperparameters that affect the model, which can provide a significant headache.
     
  5. Coming up with a cost function that is applicable to the problem.
     
  6. Dealing with the problem of an ever-changing market. Market data tends to be non-stationary, which means that a network trained on historical data might very well prove useless when used with future data.
     
  7. There may be very little signal in historical market data with respect to the future direction of the market. This makes sense intuitively if you consider that the market is impacted by more than just its historical price and volume. Further, pretty much everyone who trades a particular market will be looking at its historical data and using it in some way to inform their trading decisions. That means that market data alone may not give an individual much of a unique edge.

The first five issues listed above are common to most machine learning problems and their resolution represents a big part of what applied data science is all about. The implication is that while these problems are not trivial, they are by no means deal breakers.

What is Keras?

Keras is a high-level API for building and training neural networks. Its strength lies in its ability to facilitate fast and efficient research, which of course is very important for systematic traders, particularly those of the DIY persuasion for whom time is often the limiting factor to success. Keras is easy to learn and its syntax is particularly friendly. Keras also plays nicely with CPUs and GPUs and can integrate with the TensorFlow, Theano and CNTK backends – without limiting the flexibility of those tools. For example, pretty much anything you can implement in raw TensorFlow, you can also implement in Keras, likely at a fraction of the development effort.

Keras is also implemented in R.

What’s next?

In the deep learning experiments that follow in Part 2 and beyond, we’ll use the R implementation of Keras with TensorFlow backend. We’ll be exploring fully connected feedforward networks, various recurrent architectures including the Gated Recurrent Unit (GRU) and Long Short-Term Memory (LSTM), and even convolutional neural networks which normally find application in computer vision and image classification.

Stay tuned.

 

Learn more about Robot Wealth here: https://robotwealth.com/

This article is from Robot Wealth and is being posted with Robot Wealth’s permission.







Disclosures

We appreciate your feedback. If you have any questions or comments about IB Quant Blog please contact ibquant@ibkr.com.

The material (including articles and commentary) provided on IB Quant Blog is offered for informational purposes only. The posted material is NOT a recommendation by Interactive Brokers (IB) that you or your clients should contract for the services of or invest with any of the independent advisors or hedge funds or others who may post on IB Quant Blog or invest with any advisors or hedge funds. The advisors, hedge funds and other analysts who may post on IB Quant Blog are independent of IB and IB does not make any representations or warranties concerning the past or future performance of these advisors, hedge funds and others or the accuracy of the information they provide. Interactive Brokers does not conduct a "suitability review" to make sure the trading of any advisor or hedge fund or other party is suitable for you.

Securities or other financial instruments mentioned in the material posted are not suitable for all investors. The material posted does not take into account your particular investment objectives, financial situations or needs and is not intended as a recommendation to you of any particular securities, financial instruments or strategies. Before making any investment or trade, you should consider whether it is suitable for your particular circumstances and, as necessary, seek professional advice. Past performance is no guarantee of future results.

Any information provided by third parties has been obtained from sources believed to be reliable and accurate; however, IB does not warrant its accuracy and assumes no responsibility for any errors or omissions.

Any information posted by employees of IB or an affiliated company is based upon information that is believed to be reliable. However, neither IB nor its affiliates warrant its completeness, accuracy or adequacy. IB does not make any representations or warranties concerning the past or future performance of any financial instrument. By posting material on IB Quant Blog, IB is not representing that any particular financial instrument or trading strategy is appropriate for you.