# IB Quant Blog


### K-Means For Pair Selection In Python - StatArb Strategy

By Lamarcus Coleman

Read the previous six posts in this series: Overview, Heatmaps and ADF Tests, Historic Problem of Pair Selection, Understanding K-Means, Visualization, and matplotlib subplot functionality.

### In this post Lamarcus will show us how to build a StatArb strategy using K-Means

To begin, we need to gather data for a group of stocks. We’ll continue using the S&P 500. There are 505 stocks in the S&P 500. We will collect some data for each of these stocks and use this data as features for K-Means. We will then identify a pair within one of the clusters, test it for cointegration using the ADF test, and then build a Statistical Arbitrage trading strategy using the pair.

Let’s get started!

We’ll begin by reading in some data from an Excel file containing the stocks and features we’ll use.

#Importing our stock data from Excel
import pandas as pd

file=pd.ExcelFile('KMeansStocks.xlsx')

#Parsing the 'Example' sheet from our Excel file
stockData=file.parse('Example')

Now that we have imported our Stock Data from Excel, let’s take a look at it and see what features we will be using to build our K-Means based Statistical Arbitrage Strategy.

#Looking at the head of our stock data
stockData.head()

#Looking at the tail of our stock data
stockData.tail()

We’re going to use the Dividend Yield, P/E, EPS, Market Cap, and EBITDA as the features for creating clusters across the S&P 500. From looking at the tail of our data, we can see that Yahoo doesn’t have a Dividend Yield and is missing a P/E ratio. This brings up a good teaching moment. In the real world, data is not always clean and thus will require that you clean and prepare it so that it’s fit to analyze and eventually use to build a strategy.

In actuality, the data imported has been preprocessed a bit, as I’ve already dropped some unnecessary columns from it.
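As a minimal sketch of what such cleaning might look like (the column names and values below are assumptions for illustration, not necessarily those in the original KMeansStocks.xlsx file):

```python
import pandas as pd
import numpy as np

# Hypothetical stock data with gaps, mimicking the Yahoo row described above
stockData = pd.DataFrame({
    'Name': ['Xerox', 'Yahoo', 'Zoetis'],
    'Dividend Yield': [2.9, np.nan, 0.8],
    'P/E': [15.2, np.nan, 32.1],
    'EPS': [1.1, 0.9, 1.5],
})

# Option 1: drop rows with any missing feature before clustering
clean = stockData.dropna()

# Option 2: fill gaps with a neutral value such as the column mean
filled = stockData.fillna(stockData.mean(numeric_only=True))

print(len(clean), filled['P/E'].isna().sum())
```

Which option is appropriate depends on how many rows are affected; dropping a handful of tickers out of 505 costs little, while imputing keeps the full universe at the price of some distortion.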

In the next post, Lamarcus will demonstrate the Process of Implementing a Machine Learning Algorithm.

------------------------------------------------------------

*Disclaimer: All investments and trading in the stock market involve risk. Any decisions to place trades in the financial markets, including trading in stock or options or other financial instruments is a personal decision that should only be made after thorough research, including a personal risk and financial assessment and the engagement of professional assistance to the extent you believe necessary. The trading strategies or related information mentioned in this article is for informational purposes only.

If you want to learn more about K-Means Clustering for Pair Selection in Python, or to download the code, visit QuantInsti website and the educational offerings at their Executive Programme in Algorithmic Trading (EPAT™).


### Back to Basics: Introduction to Algorithmic Trading - Part 5

In the previous post Kris shared his views on the programming skills quants need to build on.

In this post, he continues the discussion on Technical skills.

Statistics

It would be extremely difficult to be a successful algorithmic trader without a good working knowledge of statistics. Statistics underpins almost everything we do, from managing risk to measuring performance and making decisions about allocating to particular strategies. Importantly, you will also find that statistics will be the inspiration for many of your ideas for trading algorithms. Here are some specific examples of using statistics in algorithmic trading to illustrate just how vital this skill is:

• Statistical tests can provide insight into what sort of underlying process describes a market at a particular time. This can then generate ideas for how best to trade that market.
• Correlation of portfolio components can be used to manage risk (see important notes about this in the Risk Management section below).
• Regression analysis can help you test ideas relating to the various factors that may influence a market.
• Statistics can provide insight into whether a particular approach is outperforming due to taking on higher risk, or if it exploits a genuine source of alpha.

Aside from these, the most important application of statistics in algorithmic trading relates to the interpretation of backtest and simulation results. There are some significant pitfalls – like data dredging or “p-hacking” (Head et al. (2015)) – that arise naturally as a result of the strategy development process and which aren’t obvious unless you understand the statistics of hypothesis testing and sequential comparison. Improperly accounting for these biases can be disastrous in a trading context. While this issue is incredibly important, it is far from obvious, and it represents the most significant and common barrier to success that I have encountered since I started working with individual traders. Please spend some time understanding this fundamentally important issue; I can’t emphasize enough how essential it is.
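A toy simulation makes the pitfall concrete: if you backtest enough random "strategies", the best one will look impressive purely by chance.

```python
import numpy as np

rng = np.random.default_rng(0)

n_strategies = 200   # number of random "strategies" backtested
n_days = 252         # one year of daily returns

# Every strategy here is pure noise with zero true edge
returns = rng.normal(loc=0.0, scale=0.01, size=(n_strategies, n_days))

# Annualized Sharpe ratio of each noise strategy
sharpes = returns.mean(axis=1) / returns.std(axis=1) * np.sqrt(252)

# The best of 200 coin flips can look like a genuinely good strategy
print(f"best Sharpe: {sharpes.max():.2f}")
print(f"mean Sharpe: {sharpes.mean():.2f}")
```

The average strategy is correctly worthless, but cherry-picking the best of many trials manufactures an apparently strong Sharpe ratio; this is exactly the selection effect that proper multiple-comparison corrections guard against.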

It also turns out that the human brain is woefully inadequate when it comes to performing sound statistical reasoning on the fly. Daniel Kahneman’s Thinking, Fast and Slow (2013) summarizes several decades of research into the cognitive biases with which humans are saddled. Kahneman finds that we tend to place far too much confidence in our own skills and judgements, that human reason systematically engages in fallacy and errors in judgment, and that we overwhelmingly tend to attribute too much meaning to chance. A significant implication of Kahneman’s work is that when it comes to drawing conclusions about a complex system with significant amounts of randomness, we are almost guaranteed to make poor decisions without a sound statistical framework. We simply can’t rely on our own interpretation.

As an aside, Kahneman’s Thinking, Fast and Slow is not a book about trading, but it probably assisted me with my trading more than any other book I’ve read. I highly recommend it. Further, it is no coincidence that Kahneman’s work essentially created the field of behavioral economics.

Risk Management

There are numerous risks that need to be managed as part of an algorithmic trading business. For example, there is infrastructure risk (the risk that your server goes down or suffers a power outage, dropped connection or any other interference) and counter-party risk (the risk that the counter-party of a trade can’t make good on a transaction, or the risk that your broker goes bankrupt and takes your trading account with them). While these risks are certainly very real and must be considered, in this section I am more concerned with risk management at the trade and portfolio level. This sort of risk management attempts to quantify the risk of loss and determine the optimal allocation approach for a strategy or portfolio of strategies. This is a complex area and there are several approaches and issues of which the practitioner should be aware.

Two (related) allocation strategies that are worth learning about are Kelly allocation and Mean-Variance Optimization (MVO). These have been used in practice, but they carry some questionable assumptions and practical implementation issues. It is these assumptions that the newcomer to algorithmic trading should concern themselves with.

Probably the best place to learn about Kelly allocation is in Ralph Vince’s The Handbook of Portfolio Mathematics, although there are countless blog posts and online articles about Kelly allocation that will be easier to digest. One of the tricky things about implementing Kelly is that it requires regular rebalancing of a portfolio that leads to buying into wins and selling into losses – something that is easier said than done.
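As a minimal sketch of the idea, here is the classic discrete Kelly formula for a bet with win probability p and win/loss payoff ratio b; real strategy returns are far messier than a fixed-odds bet, so treat this as illustration only:

```python
def kelly_fraction(p: float, b: float) -> float:
    """Kelly-optimal fraction of capital to risk per bet.

    p: probability of winning the bet
    b: ratio of amount won to amount lost (payoff odds)
    """
    q = 1.0 - p
    return p - q / b

# A strategy that wins 55% of the time with 1:1 payoffs
# suggests risking about 10% of capital per trade
full_kelly = kelly_fraction(0.55, 1.0)

# Many practitioners trade a fraction of Kelly to soften drawdowns,
# since estimation error in p and b makes full Kelly aggressive
half_kelly = 0.5 * full_kelly

print(full_kelly, half_kelly)
```

Note how sensitive the output is to the inputs: the formula assumes p and b are known exactly, whereas in trading they are noisy estimates, which is one reason fractional Kelly is popular.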

MVO, for which Harry Markowitz won a Nobel Prize, involves forming a portfolio that lies on the so-called “efficient frontier” and hence minimizes the variance (risk) for a given return, or conversely maximizes the return for a given risk. MVO suffers from the classic problem that new algorithmic traders will continually encounter in their journey: the optimal portfolio is formed with the benefit of hindsight, and there is no guarantee that the past optimal portfolio will continue to be optimal into the future. The underlying returns, correlations and covariance of portfolio components are not stationary and constantly change in often unpredictable ways. MVO therefore does have its detractors, and it is definitely worth understanding the positions of these detractors (see for example Michaud (1989), DeMiguel (2007) and Ang (2014)). A more positive exposition of MVO, governed by the momentum phenomenon and applied to long-only equities portfolios, is given in the interesting paper by Keller et al. (2015).
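A hedged numerical sketch of one corner of MVO: the global minimum-variance portfolio has the closed form w = Σ⁻¹1 / (1ᵀΣ⁻¹1), computed here from an invented covariance matrix (which, as noted above, would in practice be estimated with hindsight):

```python
import numpy as np

# Assumed (illustrative) covariance matrix of three assets, annualized
cov = np.array([
    [0.04, 0.01, 0.00],
    [0.01, 0.09, 0.02],
    [0.00, 0.02, 0.16],
])

ones = np.ones(cov.shape[0])
inv = np.linalg.inv(cov)

# Global minimum-variance weights: inv(Sigma) @ 1, normalized to sum to 1
w = inv @ ones / (ones @ inv @ ones)

# Resulting portfolio variance: w' Sigma w
portfolio_variance = w @ cov @ w

print(w, portfolio_variance)
```

By construction, this portfolio's variance cannot exceed that of holding the least-volatile asset alone; the catch, as the text warns, is that the covariance matrix driving the weights is a backward-looking estimate.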

Another way to estimate the risk associated with a strategy is to use Value-at-Risk (VaR), which provides an analytical estimate of the maximum size of a loss from a trading strategy or a portfolio over a given time horizon and under a given confidence level. For example, a VaR of \$100,000 at the 95% confidence level for a time horizon of one week means that there is a 95% chance of losing no more than \$100,000 over the following week. Alternatively, this VaR could be interpreted as there being a 5% chance of losing at least \$100,000 over the following week.

As with the other risk management tools mentioned here, it is important to understand the assumptions that VaR relies upon. Firstly, VaR does not consider the risk associated with the occurrence of extreme events. However, it is often precisely these events that we wish to understand. It also relies on point estimates of correlations and volatilities of strategy components, which of course constantly change. Finally, it assumes returns are normally distributed, which is usually not the case.
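Under those (strong) normality assumptions, parametric VaR is simple to compute; the figures below are invented for illustration:

```python
from scipy.stats import norm

portfolio_value = 1_000_000   # hypothetical $1M portfolio
mu_weekly = 0.001             # assumed mean weekly return
sigma_weekly = 0.02           # assumed weekly return volatility
confidence = 0.95

# Return at the 5th percentile of the assumed normal distribution
z = norm.ppf(1 - confidence)
worst_weekly_return = mu_weekly + z * sigma_weekly

# VaR is the loss not exceeded with 95% confidence over one week
var_95 = -worst_weekly_return * portfolio_value

print(f"1-week 95% VaR: ${var_95:,.0f}")
```

The simplicity is exactly the danger: swap the normal distribution for the fat-tailed distributions actually observed in markets and this figure can understate the risk of large losses considerably.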

Finally, I want to mention an empirical approach to measuring the risk associated with a trading strategy: System Parameter Permutation, or SPP (Walton (2014)). This approach attempts to provide an unbiased estimate of strategy performance at any confidence level at any time horizon of interest. By “unbiased” I mean that the estimate is not subject to data mining biases or “p-hacking” mentioned above. I personally think that this approach has great practical value, but it can be computationally expensive to implement and may not be suitable for all trading strategies.

So now you know about a few different tools to help you manage risk. I won’t recommend one approach over another, but I will recommend learning about each, particularly their advantages, disadvantages and assumptions. You will then be in a good position to choose an approach that fits your goals and that you understand deeply enough to set realistic expectations around. Bear in mind also that there may be many different constraints under which portfolios and strategies need to be managed, particularly in an institutional setting.

One final word on risk management: when measuring any metric related to a trading system, consider that it is not static – rather, it nearly always evolves dynamically with time. Therefore, a point measurement tells only a tiny fraction of the true story. An example of why this is important can be seen in a portfolio of equities whose risk is managed by measuring the correlations and covariance of the different components. Such a portfolio aims to reduce risk through diversification. However, such a portfolio runs into problems when markets tank: under these conditions, previously uncorrelated assets tend to become much more correlated, nullifying the diversification effect precisely when it is needed most!


### K-Means Clustering For Pair Selection In Python - matplotlib subplot functionality

In the previous post Lamarcus Coleman explored Python’s matplotlib

In this article, he will compare the clusters he created from the toy data to the ones that the K-Means algorithm created based on viewing the data.

Now that we have both our toy data and have visualized the clusters we created, we can compare the clusters we created from our toy data to the ones that our K-Means algorithm created based on viewing our data. We’ll code a visualization similar to the one we created earlier. However, instead of a single plot, we will use matplotlib’s subplot method to create two plots, our clusters and the K-Means clusters, that can be viewed side by side for analysis. If you would like to learn more about matplotlib subplot functionality, you can visit here.

#now we can compare our clustered data to that of kmeans
#creating subplots
import matplotlib.pyplot as plt

plt.figure(figsize=(10,8))

plt.subplot(121)
plt.scatter(data[0][:,0],data[0][:,1],c=data[1],cmap='gist_rainbow')
#in the above line of code, we are simply replotting our clustered data
#based on already knowing the labels (i.e. c=data[1])
plt.title('Our Clustering')

plt.subplot(122)
plt.scatter(data[0][:,0],data[0][:,1],c=model.labels_,cmap='gist_rainbow')
#notice that the above line of code differs from the first in that
#c=model.labels_ instead of data[1]...this means that we will be plotting
#this second plot based on the clusters that our model predicted
plt.title('K-Means Clustering')

plt.tight_layout()
plt.show()

The above plots show that the K-Means algorithm was able to identify the clusters within our data. The coloring has no bearing on the clusters and is merely a way to distinguish clusters. In practice, we won’t have the actual clusters that our data belongs to and thus we wouldn’t be able to compare the clusters of K-Means to prior clusters. This walkthrough shows the ability of K-Means to identify the presence of subgroups within data.

At this point in our journey toward better understanding the application and usefulness of K-Means we’ve created our own clusters from data we created, used the K-Means algorithms to identify the clusters within our toy data and travelled back in time to a Statistical Arbitrage trading world with no K-Means.

We’ve learned that K-Means initially assigns data points to clusters randomly and then calculates centroids, or mean values. It then calculates the distances within each cluster, squares these, and sums them to get the sum of squared error. The goal is to reduce this error, or distance. The algorithm repeats this process until there is no more in-cluster variation, or, put another way, until the cluster compositions stop changing.
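The loop described above can be reproduced with scikit-learn’s KMeans in a few lines; the synthetic blobs below stand in for the toy data used earlier in this series:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Synthetic data with 4 well-separated clusters, like the toy example
X, true_labels = make_blobs(n_samples=200, centers=4,
                            cluster_std=0.6, random_state=7)

# n_clusters plays the role of K; fit() repeats the assign/update loop
# until cluster assignments stop changing (or max_iter is reached)
model = KMeans(n_clusters=4, n_init=10, random_state=7).fit(X)

# inertia_ is the sum of squared distances to the nearest centroid --
# the "sum of squared error" that the algorithm minimizes
print(model.inertia_)
print(model.cluster_centers_.shape)
```

After fitting, `model.labels_` holds the cluster assignment for each point, which is exactly what the side-by-side plots above compare against the known labels.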

Ahead, we will enter a Statistical Arbitrage trading world where K-Means is a viable option for solving the problem of pair selection and use the same to implement a Statistical Arbitrage trading strategy.

To see the previous posts in this series, click Part 1, Part 2, Part 3, Part 4 and Part 5.


### qplum - Why is machine learning in finance so hard? A case study in generating hypothetical data

In case you missed it! The webinar recording is available on IBKR YouTube channel.

There is a lot of interest in using machine learning in finance. However, there are aspects unique to finance that make it really difficult to use machine learning in trading. If machine learning fails to generate outstanding alpha, there is a chance that interest and investment in ML in finance might wane, similar to what happened to neural network research in the 90s. In this talk, we want to touch upon five reasons why machine learning does not seem to work in finance and how to address them.


### Back to Basics: Algorithmic Trading - Part 4

The previous installments in this series are available here: Part I, Part II and Part III.

There is a lot of information about algorithmic and quantitative trading in the public domain today. The type of person who is attracted to the field naturally wants to synthesize as much of this information as possible when they are starting out. As a result, newcomers can easily be overwhelmed with “analysis paralysis” and wind up spending a lot of their valuable spare time working on algorithmic trading without making much meaningful progress. This article aims to address that by sharing the way in which I would approach algorithmic trading as a beginner if I were just starting out now, but with the benefit of many years of hindsight.

In this post, we will go a little further and investigate the things that people who are just starting out should think about. In particular, I aim to provide you with something of a roadmap for getting started, by sharing some of the practical things that I’ve learned along the way.

Note on terminology

The term “algorithmic trading” is sometimes used in professional settings to refer to execution algorithms, for example algorithms that split up a large order to optimize the total cost of the transaction. In this post, I generally use the terms systematic, algorithmic and quantitative trading interchangeably to refer to strategic trading algorithms that look to profit from market anomalies, deviation from fair value, or some other statistically verifiable opportunity.

Learning the theoretical underpinnings is important – so start reading – but it is only the first step. Putting the theory into practice is a theme that you will see repeated throughout this article; emphasizing the practical is my strongest message when it comes to making it in this field.

Having said that, in order to make it in algorithmic trading, one typically needs to have knowledge and skills that span a number of disciplines. This includes both technical and soft skills. Individuals looking to set up their own algorithmic trading business will need to be across many if not all of the topics described below; while if you are looking to build or be a part of a team, you may not need to be personally across all of these, so long as they are covered by other team members. These skills are discussed in some detail below.

Technical skills

The technical skills that are needed for long-term algorithmic trading include, as a minimum:

1. Programming
2. Statistics
3. Risk management

There are other skills I would really like to add to this list, but which go a little beyond what I would call “minimum requirements.” I’ll touch on these later. But first, let’s delve into each of these three core skills.

1. Programming

If you can’t already program, start learning now. To do any serious algorithmic trading, you absolutely must be able to program, as it is this skill that enables efficient research. It pays to become familiar with the syntax of a C-based language like C++ or Java (the latter being much simpler to learn), but to also focus on the fundamentals of data structures and algorithms at the same time. This will give you a very solid foundation, and while it can take a decade or longer to become an expert in C++, I believe that most people can reach a decent level with six months of hard work. This sets you up for what follows.

It also pays to know at least one of the higher-level languages, like Python, R or MATLAB, as you will likely wind up doing the vast majority of your research and development in one of these languages. My personal preferences are R and Python.

• Python is fairly easy to learn and is fantastic for efficiently getting, processing and managing data from various sources. There are some very useful libraries written by generous and intelligent folks that make data analysis relatively painless, and I find myself using Python more and more as a research tool.
• I also really like using R for research and analytics as it is underpinned by a huge repository of useful libraries and functions. It was written with statistical analysis in mind, so it is a natural fit for the sort of work that algorithmic traders will need to do. The syntax of R can be a little strange though, and to this day I find myself almost constantly on Stack Overflow when developing in R!
• Finally, I have also used MATLAB and its open source counterpart Octave, but I would almost never choose to use these languages for serious algo research. That’s more of a personal preference, and some folks will prefer MATLAB, particularly those who come from an engineering background as they may have been exposed to it during their work and studies.

When you’re starting out, I don’t believe it matters greatly which of these high-level languages you choose. As time goes on, you will start to learn which tool is the most applicable for the task at hand, but there is a lot of cross-over in the capabilities of these languages so don’t get too hung up on your initial choice – just make a choice and get started!

Simulation environments

Of course, the point of being able to program in this context is to enable the testing and implementation of algorithmic trading systems. It can therefore be of tremendous benefit to have a quality simulation environment at your disposal. As with any modelling task, accuracy, speed and flexibility are significant considerations. You can always write your own simulation environment, and sometimes that will be the most sensible thing to do, but often you can leverage the tools that others have built for the task. This has the distinct advantage that it enables you to focus on doing actual research and development that relates directly to a trading strategy, rather than spending a lot of time building the simulation environment itself. The downside is that sometimes you don’t quite know exactly what is going on under the hood, and there are times when using someone else’s tool will prevent you from pursuing a certain idea, depending on the limitations of the tool.

A good simulation tool should have the following characteristics:

• Accuracy – the simulation of any real-world phenomenon inevitably suffers from a deficiency in accuracy. The trick is to ensure that the model is accurate enough for the task at hand. As statistician George Box once said, “all models are wrong, but some are useful.” Playing with useless models is a waste of time.
• Flexibility – ideally your simulation tool would not limit you or lock you in to certain approaches.
• Speed – at times, speed can become a real issue, for example when performing tick-based simulations or running optimization routines.
• Active development – if unexpected issues arise, you need access to the source code or to people who are responsible for it. If the tool is being actively developed, you can be reasonably sure that help will be available if you need it.

In the next post, Kris will discuss Statistics and Risk Management.



###### Disclosure

The material (including articles and commentary) provided on the IBKR Quant Blog is for informational purposes only. It does not represent a recommendation by Interactive Brokers (IB) that you or your clients should use the services of, or invest with, any independent advisor, hedge fund or other entity that posts content on the IBKR Quant Blog, nor does it represent a recommendation by IB to invest with any advisor or hedge fund. Advisors, hedge funds and other analysts who post content on the IBKR Quant Blog are independent of IB, and IB makes no representations or warranties concerning the past or future performance of these advisors, hedge funds or other persons/entities, or the accuracy of the information they provide. IB does not conduct a “suitability review” to ensure that trading with any advisor, hedge fund or other person/entity is suitable for you.

Securities or other financial products mentioned on the IB Quant Blog are not suitable for all investors. The material posted does not take into account your particular investment objectives, financial situation or needs, and does not constitute a recommendation of any security, financial product or strategy. Before making any investment or trade, you should consider whether it is suitable for your particular circumstances and, where necessary, seek professional advice. Past performance is no guarantee of future results.