IBKR Quant Blog




Quant

Demystifying the Learning of Algorithmic Trading


By Nitin Thapar

 

Introduction

In 2001, IBM released a report that attracted much attention: it showed that two algorithmic trading strategies could beat human traders consistently. By 2017, 40% of trades on the NSE, 60% on the LSE and 84% on the NYSE were executed using algorithmic trading.

Algorithmic and quantitative trading is the art of applying financial computing, econometrics, statistics, analytics and strategies to your trading practice. People who practice this art are often referred to as the ‘Rocket Scientists of Wall Street’, and believe me, some of them get paid no less than a rocket scientist.

Texas-based Singleton won a 2016 contest run by Quantopian, an algorithmic investing website, to write trading programs. He was awarded $100,000 to put his model into action for the next six months, keeping all of the profits for himself. The 21-year-old was up about 1.5% within the same year, against an 8% slump in the S&P 500 equity index.

The learning curve

The way we look at the markets is completely different now. Stock markets have never been so exciting, with their ever-changing nature, high competition, and challenging, focused work environment.

The common traits among professionals planning to venture into this domain are dedication, commitment and passion for the markets, whether the goal is building a career, starting your own algorithmic trading desk or pursuing entrepreneurship in this field.

Now, not everyone has access to the state-of-the-art equipment found on the trading floors of the world's biggest financial firms. Also, the training is not limited to the fundamentals of the financial markets; it also means learning modern programming languages and statistics, and creating trading strategies that get computers to trade faster than you can blink.

Fortunately, the education industry has responded very well to this new learning curve. There are training programs that can give your algorithmic trading journey the kick-start it needs. Some of these educational portals build their own resources in conjunction with a hands-on learning experience. With these credible learning opportunities available, it no longer matters what background you come from; you can still excel in this field.

Oxford, the oldest university in the UK, is well known for its MSc in Mathematical and Computational Finance, which has an extremely strong reputation as one of the best MFE courses in the world. Also, UCLA’s Anderson School of Management is ranked seventh in the world for its Master of Financial Engineering degree, which is solidly based on the business school model of combining quantitative finance theory and principles with the latest business practices. There are at least 20 more such reputed universities scattered around the world that provide specialized courses in financial engineering.

Those who find it difficult to attend university-based programs can look to online resources like Udemy, Quantra and Coursera, which offer a great learning experience through self-paced programs. These platforms provide practical, hands-on learning, empowering you to learn and implement complex concepts easily.

From a practical point of view, there are knowledge hubs like QuantInsti, ATASSN and AlgoTrading101 that provide a great learning experience in the form of full-fledged training and certification programs. Aspiring algorithmic and quantitative traders get the opportunity to learn from leading global market practitioners. In addition, programs like the Executive Program in Algorithmic Trading (EPAT) by QuantInsti also provide students with placement opportunities.

Key Takeaways

Ultimately, success lies in how efficiently you use your skills.

Here are some of the steps to follow:

  1. Programming - Learn Python so that you understand how algorithms function and how they are used in trading
  2. Trading Platforms – Subscribe to some trading platforms and learn how they work
  3. Strategy - Start building your strategies and test them on the platforms until you find something that makes you happy (a minimal example is sketched after this list)
  4. Paper Trading – Run your strategies in a paper trading environment, testing the algorithms for a period of time before committing real money
  5. Fine tune – Tweak, change, refine, learn and moderate. Track performance and make algo changes accordingly
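
As a concrete illustration of step 3, here is a minimal, entirely hypothetical sketch of the kind of rule-based strategy a beginner might prototype in Python: a simple moving-average crossover on daily closing prices. The window lengths and the synthetic price series are assumptions for illustration only, not part of the original article and not a recommendation.

# Hypothetical sketch of a moving-average crossover signal.
import numpy as np
import pandas as pd

def crossover_signals(close, fast=20, slow=50):
    """Return 1 (long) when the fast moving average is above the slow one, else 0."""
    fast_ma = close.rolling(fast).mean()
    slow_ma = close.rolling(slow).mean()
    return (fast_ma > slow_ma).astype(int)

# Synthetic prices stand in for data you would pull from your trading platform.
np.random.seed(0)
prices = pd.Series(100 + np.random.normal(0, 1, 500).cumsum())
signals = crossover_signals(prices)
print(signals.tail())

Even a toy signal like this should go through steps 4 and 5 - paper trading and ongoing refinement - before any real money is committed.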

Conclusion

Some may question whether algorithmic trading can be learned by those who have the zeal for markets and for using technology to execute trades, but it all comes down to the kind of dedication and commitment that drives you toward the ever-growing opportunities in this domain.

What is also certain is that books and reading material alone can only give you theoretical knowledge. Aspiring algorithmic and quantitative traders are now following educational programs that help them get trained by market professionals.

 

Author

Nitin Thapar holds an MBA in Marketing from Mumbai University. At QuantInsti, Nitin is involved in creating informative articles on algorithmic and quantitative trading. Nitin is also one of the most viewed writers on Quora in the Algorithmic Trading and Quantitative Trading category. Prior to QuantInsti, Nitin worked with an SAP partner firm and one of the leading brands in procurement-based SaaS solutions, in roles around content and digital marketing.

 

If you want to learn more about Algo Trading, visit QuantInsti website and the educational offerings at their Executive Programme in Algorithmic Trading (EPAT™).

This article is from QuantInsti and is being posted with QuantInsti’s permission. The views expressed in this article are solely those of the author and/or QuantInsti and IB is not endorsing or recommending any investment or trading discussed in the article. This material is for information only and is not and should not be construed as an offer to sell or the solicitation of an offer to buy any security. To the extent that this material discusses general market activity, industry or sector trends or other broad-based economic or political conditions, it should not be construed as research or investment advice. To the extent that it includes references to specific securities, commodities, currencies, or other instruments, those references do not constitute a recommendation by IB to buy, sell or hold such security. This material does not and is not intended to take into account the particular financial conditions, investment objectives or requirements of individual customers. Before acting on this material, you should consider whether it is suitable for your particular circumstances and, as necessary, seek professional advice.






Quant

Byte Academy - Interpreting and Visualizing Autocorrelation - Part 3


By Jithin J and Karthik Ravindra, Byte Academy

Click to see Part 1 and Part 2 in this series.

Simple Linear Regression using statsmodels OLS

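The original Python code and its regression output were shown as images in the source post and are not reproduced here. As a rough, hypothetical reconstruction of the kind of fit being described, the sketch below uses statsmodels to run an OLS regression on simulated data whose errors are strongly positively autocorrelated; the variable names and the simulated series are assumptions, not the article's data.

# Hypothetical reconstruction; the article's actual data and code are not shown.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson

np.random.seed(42)
n = 500
x = np.linspace(0, 10, n)
# A random-walk error term produces strongly positively autocorrelated residuals.
errors = np.cumsum(np.random.normal(scale=0.5, size=n))
y = 2.0 + 1.5 * x + errors

X = sm.add_constant(x)            # add an intercept term
model = sm.OLS(y, X).fit()

print(model.summary())            # the summary table reports the Durbin-Watson statistic
print("Durbin-Watson:", durbin_watson(model.resid))

With autocorrelated errors like these, the reported Durbin-Watson statistic will be very small, in line with the value discussed below.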

 

 

As observed in the original output, the Durbin-Watson test score here is 0.005, which is very close to 0.

 

 

What is the Durbin-Watson Test?

The Durbin-Watson statistic gives an easy-to-interpret test score for autocorrelation in regression residuals. The statistic always lies between 0 and 4: a value of 2 means there is no autocorrelation, values approaching 0 indicate positive autocorrelation, and values toward 4 indicate negative autocorrelation.
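
For reference, the statistic is computed from the regression residuals e_t, t = 1, ..., T, as

d = \frac{\sum_{t=2}^{T} (e_t - e_{t-1})^2}{\sum_{t=1}^{T} e_t^2}

When successive residuals are nearly equal (strong positive autocorrelation), the numerator is small and d is close to 0; when successive residuals tend to alternate in sign, d approaches 4.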

 

For a detailed explanation, refer to: https://www.investopedia.com/terms/d/durbin-watson-statistic.asp

 

So here the Durbin-Watson test score is very close to 0, which means there is very strong positive autocorrelation. Let’s visualize this correlation.

 

ACF Plot - Autocorrelation Function Plot

 

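The original plotting code was shown as an image in the source post. A minimal sketch of how such an ACF plot can be produced with statsmodels, reusing the hypothetical model object fitted in the earlier sketch, might look like this:

# Hypothetical sketch, reusing `model` from the earlier OLS example.
import matplotlib.pyplot as plt
from statsmodels.graphics.tsaplots import plot_acf

# Bars that extend beyond the shaded confidence band indicate significant autocorrelation.
plot_acf(model.resid, lags=40)
plt.title("ACF of regression residuals")
plt.show()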

 

 

By looking at the autocorrelation function (ACF) plot of the series, we can tentatively identify the autocorrelation in the series. The ACF is a bar chart of the correlation coefficients between a time series and lagged copies of itself. Under no autocorrelation, the bars would stay within the flat blue shaded confidence band, suggesting no correlation of the error terms. In the plot above, however, the bars extend well beyond the shaded band, suggesting that error terms remain correlated as we move along in time.

 

In this article, we demonstrated how to identify autocorrelation in time series data. Once identified, possible actions include applying an AR(p) or ARIMA model to reduce the correlation of the error terms.

 

-------------------------------------------------------

Any trading symbols displayed are for illustrative purposes only and are not intended to portray recommendations.
 

Byte Academy is based in New York, USA. It offers coding education, classes in FinTech, Blockchain, DataSci, Python + Quant.

This article is from Byte Academy and is being posted with Byte Academy’s permission. The views expressed in this article are solely those of the author and/or Byte Academy and IB is not endorsing or recommending any investment or trading discussed in the article. This material is for information only and is not and should not be construed as an offer to sell or the solicitation of an offer to buy any security. To the extent that this material discusses general market activity, industry or sector trends or other broad-based economic or political conditions, it should not be construed as research or investment advice. To the extent that it includes references to specific securities, commodities, currencies, or other instruments, those references do not constitute a recommendation by IB to buy, sell or hold such security. This material does not and is not intended to take into account the particular financial conditions, investment objectives or requirements of individual customers. Before acting on this material, you should consider whether it is suitable for your particular circumstances and, as necessary, seek professional advice.

 






Quant

Storage Wars: Security Concerns Generate Interest In AI On-Premise Storage Solutions Like PureStorage (PSTG)



Note: The content of this post references an opinion and / or is presented for product demonstration purposes. It is provided for information purposes only. It does not constitute, nor is it intended to be investment advice. Seek a duly licensed professional for investment advice.

AI (artificial intelligence) was certainly the buzzword of the past year, influencing the conversations of most tech companies and taking up increasing mindshare among Fortune 500 leaders across all industries.

In fact, we recently used Sentieo to take a look at mentions of AI in earnings call transcripts, and the number of mentions is growing exponentially. Here’s a snapshot from our recent Sentiment Analysis Quarterly Report:

[Chart: AI mentions in earnings call transcripts, from Sentieo's Sentiment Analysis Quarterly Report]

(For a full analysis of AI and other top keywords, download the full report here).

Companies looking to incorporate AI and machine learning into all aspects of their businesses also need to incorporate AI into their data storage systems. Currently, the top leaders in AI cloud storage services are: Amazon Web Services (AWS), Microsoft’s Azure, Google Cloud Platform (GCP), and IBM’s IBM Cloud and Watson. However, as data security and compliance become increasing concerns (especially in data security sensitive businesses like Financial Services and Healthcare), many companies are turning away from the cloud and looking towards on-premise data storage solutions to increase their privacy and control.

Jumping off from its recent partnership with Nvidia, PureStorage has created one of the first on-premise, AI-enabled storage solutions to hit the marketplace. For companies that don’t want to host data in the cloud (i.e. that want to stay on-premise), there are no options outside of this new PSTG and NVDA offering. PureStorage may also be able to capitalize on “sole source” contracts with government institutions (circumventing the competitive bid process); these are $5-15M storage contracts with the DoD, NASA, etc.

We took a look at PureStorage (PSTG) through the lens of Sentieo’s Mosaic tool, which plots alternative data that includes Google Trends, Alexa Website Data, and Twitter mentions. Alternative datasets like these can provide an edge in analyzing consumer-facing businesses, as they often have a high correlation with revenue growth and are available ahead of traditional financial metrics for the period. As consumer behavior shifts more and more towards digital, indicators like these have become more predictive of tech and consumer company results.

 

[Charts: Sentieo Mosaic alternative data for PSTG, showing Google Trends, Twitter mentions and Alexa website visits]

 

What we see above is that Google Trends (green line), Twitter mentions (blue line), and Alexa website visits (red line) are all trending up, very likely due to the announcement of this highly AI-optimized solution born of PureStorage’s partnership with Nvidia. While indicators for PureStorage are ticking up, we don’t necessarily expect this to impact this quarter’s earnings. However, we do expect higher guidance for the next few quarters as PSTG rides the AI wave until other on-premise solutions catch up.

We’ll be keeping our eye on PSTG until its earnings call in late May, but based on the alternative data we’ve seen, we like its prospects for growth.

-------------------------

About Sentieo
Made up of former hedge fund analysts, Sentieo is familiar with the challenge of gathering information to find a key data point that has the potential to make or break an investment thesis. With new datasets appearing daily, the job of an investor continues to grow more challenging and complex. This is the inspiration behind Sentieo.

Sentieo is a financial data platform underpinned by search technology. Sentieo overlays search, collaboration and automation on key aspects of an analyst's workflow so that investors can spend less time searching and more time analyzing. https://www.sentieo.com/

 

This article is from Sentieo and is being posted with Sentieo’s permission. The views expressed in this article are solely those of the author and/or Sentieo and IB is not endorsing or recommending any investment or trading discussed in the article. This material is for information only and is not and should not be construed as an offer to sell or the solicitation of an offer to buy any security. To the extent that this material discusses general market activity, industry or sector trends or other broad-based economic or political conditions, it should not be construed as research or investment advice. To the extent that it includes references to specific securities, commodities, currencies, or other instruments, those references do not constitute a recommendation by IB to buy, sell or hold such security. This material does not and is not intended to take into account the particular financial conditions, investment objectives or requirements of individual customers. Before acting on this material, you should consider whether it is suitable for your particular circumstances and, as necessary, seek professional advice.






Quant

SAS, R, or Python


2017 SAS, R, or Python Flash Survey Results

By Burtch Works

 

As many of you probably know, over the past few years we’ve been gauging statistical tool preferences among data scientists and predictive analytics professionals by sending out a flash survey to our network: the first two years weighing SAS vs. R, and then adding Python to the mix last year as their libraries expanded.

Now, as one might imagine, the discussion is always rather spirited in nature – we’ve had over 1,000 responses every year – and reading the comments has become one of our favorite parts of doing the survey, so feel free to chime in!

To keep the comparison simple, we only asked one question: Which do you prefer to use – SAS, R, or Python?

[Chart: 2017 SAS, R, or Python flash survey results]

 

Over the past four years we’ve seen preference for open source tools steadily climbing, with 66% of respondents choosing R or Python this year. Python climbed from 20% in 2016 to 26% this year.

 

Each year we also match responses to demographic information, to show how these preferences break down by factors like region, industry, education, years of experience, and more.

 

Similar to last year, the largest proportion of Python supporters are on the West Coast and in the Northeast; however, the Mountain region is close behind the Northeast, and all regions saw at least some increase in Python preference. R preference is highest in the Midwest and SAS preference is highest in the Southeast.

 

Open source preference remains high in Tech/Telecom and preference for SAS continues to be higher in more regulated industries like Financial Services and Pharmaceuticals.

 

Professionals with a Ph.D. are the most likely to prefer open source tools, likely due to the prevalence of R and Python usage in research and academic programs, and the foundation of experience it establishes as they move into business.

 

 

 

Preference for open source tools is by far the highest among professionals with 5 or fewer years’ experience. Even as the specific proportions have changed, this trend has remained fairly constant over the years that we’ve done the survey.

 

Visit Burtch Works website to read the full article: https://www.burtchworks.com/2017/06/19/2017-sas-r-python-flash-survey-results/

 

 

About Burtch Works

Burtch Works https://www.burtchworks.com/ is a quantitative and marketing research recruiting firm. Follow them on social media for #datascience, #analytics, & #marketingresearch career news!

 

This article is from Burtch Works and is being posted with Burtch Works’s permission. The views expressed in this article are solely those of the author and/or Burtch Works and IB is not endorsing or recommending any investment or trading discussed in the article. This material is for information only and is not and should not be construed as an offer to sell or the solicitation of an offer to buy any security. To the extent that this material discusses general market activity, industry or sector trends or other broad-based economic or political conditions, it should not be construed as research or investment advice. To the extent that it includes references to specific securities, commodities, currencies, or other instruments, those references do not constitute a recommendation by IB to buy, sell or hold such security. This material does not and is not intended to take into account the particular financial conditions, investment objectives or requirements of individual customers. Before acting on this material, you should consider whether it is suitable for your particular circumstances and, as necessary, seek professional advice.






Quant

R Tip of the Month: mclapply



 

By Majeed Simaan, PhD

 

One of the keynote lectures from last week's R in Finance conference focused on parallel computing. It was an excellent lecture delivered by Professor Norman S. Matloff from UC Davis. The lecture focused on challenges faced in parallel computing when dealing with time series analysis, which is recursive in nature. Nonetheless, it also stressed the power of R and the advancement of the current libraries to perform parallel computing. The lecture slides should be uploaded to the online program. In this vignette, I will illustrate the usage of the mclapply function from the parallel package, which I find super friendly to deploy.

To get started, I will take a look at the SPY ETF along with AAPL:

library(quantmod)
# Pull the adjusted close prices (column 6) for SPY and AAPL
P1 <- get(getSymbols("SPY",from = "1990-01-01"))[,6]
P2 <- get(getSymbols("AAPL",from = "1990-01-01"))[,6]
P <- merge(P1,P2)
# Compute simple daily returns
R <- na.omit(P/lag(P))-1
names(R) <- c("SPY","AAPL")

In particular, I will test the computation time needed to estimate AAPL's beta with the SPY ETF. To do so, I create a function named beta.f that takes i as its main argument. The function randomly samples 50% of the data using a fixed seed i and computes the market beta for AAPL.

beta.f <- function(i) {
  # Fix the seed, then randomly sample 50% of the rows
  set.seed(i)
  R.i <- R[sample(1:nrow(R),floor(0.5*nrow(R)) ),]
  lm.i <- lm(AAPL~SPY,data = R.i)
  # The slope on SPY is the estimated market beta for AAPL
  beta.i <- summary(lm.i)$coefficients["SPY",1]
  return(beta.i)
}

I run the computation twice over a sequence of integers i - once using lapply and once using mclapply. The latter runs in the same fashion as the former, making it extremely easy to implement:

library(parallel)
N <- 10^2
f1 <- function() mean(unlist(lapply(1:N, beta.f)))
f2 <- function() mean(unlist(mclapply(1:N, beta.f)) )

To compare the computation time that f1 and f2 take to run, I use the microbenchmark library to get a robust perspective. The main function from the library is microbenchmark, whose main argument is the underlying function we would like to evaluate; in our case, those are f1 and f2. Additionally, we can add an input that determines how many times we would like to run these functions, which provides multiple views of the computation time needed to run each function.

library(microbenchmark)
ds.time <- microbenchmark(Regular = f1(),Parallel = f2(),times = 100)
ds.time
## Unit: milliseconds
##   expr   min   lq   mean   median   uq   max   neval   cld
##   Regular   785.0485   891.9385   985.5360   955.2429   1028.3437   1537.644   100   b
##   Parallel   445.8762   524.5227   625.4168   579.8332   712.6159   1000.358   100   a

We observe that, on average, mclapply runs significantly faster than the base lapply function. Additionally, one can use the autoplot function from ggplot2 to show the distribution of times each function takes to run, by simply running the following command:

library(ggplot2)
autoplot(ds.time)

[Figure: autoplot of the microbenchmark timing distributions for the Regular (lapply) and Parallel (mclapply) runs]

Summary

Overall, this vignette demonstrates the computation-time gains from parallel computing for a specific task. Note that the illustration exhibited here was conducted on a Linux OS; mclapply relies on forking, which is not available on Windows, so it may not perform the same on a Windows OS. Users are advised to continue their studies on the topic in order to understand whether (and under what conditions) parallel computing improves performance. Check the following notes by Josh Errickson for further reading on the topic.

 

Visit Majeed's GitHub – IBKR-R corner for all of his R tips, and to learn more about his expertise in R: https://github.com/simaan84

Majeed Simaan, PhD in Finance, is well versed in research areas related to banking, asset pricing, and financial modeling. His research interests revolve around Banking and Risk Management, with emphasis on asset allocation and pricing. He has been involved in a number of projects that apply state-of-the-art empirical research tools in the areas of financial networks (interconnectedness), machine learning, and textual analysis. His research has been published in the International Review of Economics and Finance and the Proceedings of the 2016 IEEE Symposium Series on Computational Intelligence. Majeed also pursued graduate training in the area of Mathematical Finance at the London School of Economics (LSE). He has a strong quantitative background in both computing and statistical learning. He holds both a BA and an MA in Statistics from the University of Haifa, with a specialization in actuarial science.

This article is from Majeed Simaan and is being posted with Majeed Simaan's permission. The views expressed in this article are solely those of the author and/or Majeed Simaan and IB is not endorsing or recommending any investment or trading discussed in the article. This material is for information only and is not and should not be construed as an offer to sell or the solicitation of an offer to buy any security. To the extent that this material discusses general market activity, industry or sector trends or other broad-based economic or political conditions, it should not be construed as research or investment advice. To the extent that it includes references to specific securities, commodities, currencies, or other instruments, those references do not constitute a recommendation by IB to buy, sell or hold such security. This material does not and is not intended to take into account the particular financial conditions, investment objectives or requirements of individual customers. Before acting on this material, you should consider whether it is suitable for your particular circumstances and, as necessary, seek professional advice.







Disclosures

We appreciate your feedback. If you have any questions or comments about IBKR Quant Blog please contact ibkrquant@ibkr.com.

The material (including articles and commentary) provided on IBKR Quant Blog is offered for informational purposes only. The posted material is NOT a recommendation by Interactive Brokers (IB) that you or your clients should contract for the services of or invest with any of the independent advisors or hedge funds or others who may post on IBKR Quant Blog or invest with any advisors or hedge funds. The advisors, hedge funds and other analysts who may post on IBKR Quant Blog are independent of IB and IB does not make any representations or warranties concerning the past or future performance of these advisors, hedge funds and others or the accuracy of the information they provide. Interactive Brokers does not conduct a "suitability review" to make sure the trading of any advisor or hedge fund or other party is suitable for you.

Securities or other financial instruments mentioned in the material posted are not suitable for all investors. The material posted does not take into account your particular investment objectives, financial situations or needs and is not intended as a recommendation to you of any particular securities, financial instruments or strategies. Before making any investment or trade, you should consider whether it is suitable for your particular circumstances and, as necessary, seek professional advice. Past performance is no guarantee of future results.

Any information provided by third parties has been obtained from sources believed to be reliable and accurate; however, IB does not warrant its accuracy and assumes no responsibility for any errors or omissions.

Any information posted by employees of IB or an affiliated company is based upon information that is believed to be reliable. However, neither IB nor its affiliates warrant its completeness, accuracy or adequacy. IB does not make any representations or warranties concerning the past or future performance of any financial instrument. By posting material on IB Quant Blog, IB is not representing that any particular financial instrument or trading strategy is appropriate for you.