{"id":65060,"date":"2020-11-03T11:22:00","date_gmt":"2020-11-03T16:22:00","guid":{"rendered":"https:\/\/ibkrcampus.com\/?p=65060"},"modified":"2022-11-21T09:46:33","modified_gmt":"2022-11-21T14:46:33","slug":"reinforcement-learning-in-trading-part-ii","status":"publish","type":"post","link":"https:\/\/www.interactivebrokers.com\/campus\/ibkr-quant-news\/reinforcement-learning-in-trading-part-ii\/","title":{"rendered":"Reinforcement Learning in Trading &#8211; Part II"},"content":{"rendered":"\n<p><em>See <a href=\"\/campus\/ibkr-quant-news\/reinforcement-learning-in-trading\/\">Part I<\/a> for an overview of reinforcement learning.<\/em><\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"components-of-reinforcement-learning\">Components of reinforcement learning<\/h2>\n\n\n\n<p>With the bigger picture in mind on what the RL algorithm tries to solve, let us learn the building blocks or components of the reinforcement learning model.<\/p>\n\n\n\n<ul class=\"wp-block-list\"><li>Action<\/li><li>Policy<\/li><li>State<\/li><li>Rewards<\/li><li>Environment<\/li><\/ul>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"actions\">Actions<\/h3>\n\n\n\n<p>The actions can be thought of what problem is the RL algo solving. If the RL algo is solving the problem of trading then the actions would be Buy, Sell and Hold. If the problem is&nbsp;<a href=\"https:\/\/quantra.quantinsti.com\/course\/quantitative-portfolio-management\" target=\"_blank\" rel=\"noreferrer noopener\">portfolio management<\/a>&nbsp;then the actions would be capital allocations to each of the asset classes. How does the RL model decide which action to take?<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"policy\">Policy<\/h3>\n\n\n\n<p>There are two methods or policies which help the RL model take the actions. Initially, when the RL agent knows nothing about the game, the RL agent can decide actions randomly and learn from it. This is called an exploration policy. 
Later, the RL agent can use past experiences to map each state to the action that maximises the long-term rewards. This is called an exploitation policy.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"state\">State<\/h3>\n\n\n\n<p>The RL model needs meaningful information to take actions. This meaningful information is the state. For example, suppose you have to decide whether or not to buy Apple stock. What information would be useful to you? You might say you need some&nbsp;<a href=\"https:\/\/blog.quantinsti.com\/tag\/technical-indicators\/\">technical indicators<\/a>, historical price data, sentiment data and fundamental data. All this information collected together becomes the state. It is up to the designer to decide what data makes up the state.<\/p>\n\n\n\n<p>But for proper analysis and execution, the data should be weakly predictive and weakly stationary. That the data should be weakly predictive is simple enough to understand, but what does weakly stationary mean? Weakly stationary means that the data should have a constant mean and variance. But why is this important? The short answer is that&nbsp;<a href=\"https:\/\/blog.quantinsti.com\/tag\/machine-learning\/\">machine learning<\/a>&nbsp;algorithms work well on stationary data. Alright! So how does the RL model learn which action to take in a given state?<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"rewards\">Rewards<\/h3>\n\n\n\n<p>A reward can be thought of as the end objective which you want to achieve from your RL system. For example, if the end objective is to create a profitable trading system, your reward becomes profit. If it is the best risk-adjusted returns, your reward becomes the Sharpe ratio.<\/p>\n\n\n\n<p>Defining a reward function is critical to the performance of an RL model. 
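As an illustration of how such reward functions might be written, here is a minimal sketch of a profit-per-trade reward and a Sharpe-ratio reward. The function names and signatures are hypothetical, chosen only for this example.

```python
import math

def profit_per_trade(entry_price, exit_price, position):
    """Reward as the profit on a single round-trip trade.

    position is +1 for a long trade, -1 for a short trade.
    """
    return position * (exit_price - entry_price)

def sharpe_ratio(returns, risk_free_rate=0.0):
    """Reward as the (unannualised) Sharpe ratio of a list of returns."""
    n = len(returns)
    mean = sum(returns) / n
    excess = mean - risk_free_rate
    variance = sum((r - mean) ** 2 for r in returns) / n
    std = math.sqrt(variance)
    return excess / std if std > 0 else 0.0

print(profit_per_trade(100.0, 105.0, +1))  # prints: 5.0
```

A profit reward maximises raw returns, while a Sharpe-ratio reward penalises volatile equity curves; which one is appropriate depends on the objective of the system.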
The following metrics can be used for defining the reward.<\/p>\n\n\n\n<ul class=\"wp-block-list\"><li>Profit per tick<\/li><li><a href=\"https:\/\/blog.quantinsti.com\/sharpe-ratio-applications-algorithmic-trading\/\">Sharpe Ratio<\/a><\/li><li>Profit per trade<\/li><\/ul>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"environment\">Environment<\/h3>\n\n\n\n<p>The environment is the world that allows the RL agent to observe the state. When the RL agent applies an action, the environment processes that action, calculates the reward and transitions to the next state. For example, the environment can be thought of as a chess game or trading Apple stock.<\/p>\n\n\n\n<p><em>Stay tuned for the next installment in which Ishan will demonstrate the RL model.<\/em><\/p>\n\n\n\n<p><em>Visit QuantInsti to download practical code:&nbsp;<a href=\"https:\/\/blog.quantinsti.com\/reinforcement-learning-trading\/\">https:\/\/blog.quantinsti.com\/reinforcement-learning-trading\/<\/a>.<\/em><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Join Ishan Shah from QuantInsti for a presentation on the components of reinforcement 
learning.<\/p>\n","protected":false},"author":517,"featured_media":22628,"comment_status":"closed","ping_status":"open","sticky":true,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[339,343,349,338,350,341,344],"tags":[8577,2105,8576,4922,1006,852,8575,4166,8579,8578,494,7258,5545],"contributors-categories":[13654],"class_list":{"0":"post-65060","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-data-science","8":"category-programing-languages","9":"category-python-development","10":"category-ibkr-quant-news","11":"category-quant-asia-pacific","12":"category-quant-development","13":"category-quant-regions","14":"tag-alphazero","15":"tag-deep-learning","16":"tag-delayed-gratification","17":"tag-econometrics","18":"tag-fintech","19":"tag-machine-learning","20":"tag-mean-reverting-strategy","21":"tag-portfolio-management","22":"tag-q-learning","23":"tag-q-table","24":"tag-quant","25":"tag-reinforcement-learning","26":"tag-sharpe-ratio","27":"contributors-categories-quantinsti"},"pp_statuses_selecting_workflow":false,"pp_workflow_action":"current","pp_status_selection":"publish","acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v26.9 (Yoast SEO v27.5) - https:\/\/yoast.com\/product\/yoast-seo-premium-wordpress\/ -->\n<title>Reinforcement Learning in Trading &#8211; Part II<\/title>\n<meta name=\"description\" content=\"Join Ishan Shah from QuantInsti for a presentation on the components of reinforcement learning.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.interactivebrokers.com\/campus\/wp-json\/wp\/v2\/posts\/65060\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Reinforcement Learning in Trading - Part II | 
IBKR Quant Blog\" \/>\n<meta property=\"og:description\" content=\"Join Ishan Shah from QuantInsti for a presentation on the components of reinforcement learning.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.interactivebrokers.com\/campus\/ibkr-quant-news\/reinforcement-learning-in-trading-part-ii\/\" \/>\n<meta property=\"og:site_name\" content=\"IBKR Campus US\" \/>\n<meta property=\"article:published_time\" content=\"2020-11-03T16:22:00+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2022-11-21T14:46:33+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/www.interactivebrokers.com\/campus\/wp-content\/uploads\/sites\/2\/2019\/10\/machine-learning.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1100\" \/>\n\t<meta property=\"og:image:height\" content=\"700\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Ishan Shah\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Ishan Shah\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"2 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\n\t    \"@context\": \"https:\\\/\\\/schema.org\",\n\t    \"@graph\": [\n\t        {\n\t            \"@type\": \"NewsArticle\",\n\t            \"@id\": \"https:\\\/\\\/www.interactivebrokers.com\\\/campus\\\/ibkr-quant-news\\\/reinforcement-learning-in-trading-part-ii\\\/#article\",\n\t            \"isPartOf\": {\n\t                \"@id\": \"https:\\\/\\\/www.interactivebrokers.com\\\/campus\\\/ibkr-quant-news\\\/reinforcement-learning-in-trading-part-ii\\\/\"\n\t            },\n\t            \"author\": {\n\t                \"name\": \"Ishan Shah\",\n\t                \"@id\": \"https:\\\/\\\/ibkrcampus.com\\\/campus\\\/#\\\/schema\\\/person\\\/0fd7dbae1e070042c10b53e8bdc551c5\"\n\t            },\n\t            \"headline\": \"Reinforcement Learning in Trading &#8211; Part II\",\n\t            \"datePublished\": \"2020-11-03T16:22:00+00:00\",\n\t            \"dateModified\": \"2022-11-21T14:46:33+00:00\",\n\t            \"mainEntityOfPage\": {\n\t                \"@id\": \"https:\\\/\\\/www.interactivebrokers.com\\\/campus\\\/ibkr-quant-news\\\/reinforcement-learning-in-trading-part-ii\\\/\"\n\t            },\n\t            \"wordCount\": 501,\n\t            \"publisher\": {\n\t                \"@id\": \"https:\\\/\\\/ibkrcampus.com\\\/campus\\\/#organization\"\n\t            },\n\t            \"image\": {\n\t                \"@id\": \"https:\\\/\\\/www.interactivebrokers.com\\\/campus\\\/ibkr-quant-news\\\/reinforcement-learning-in-trading-part-ii\\\/#primaryimage\"\n\t            },\n\t            \"thumbnailUrl\": \"https:\\\/\\\/www.interactivebrokers.com\\\/campus\\\/wp-content\\\/uploads\\\/sites\\\/2\\\/2019\\\/10\\\/machine-learning.jpg\",\n\t            \"keywords\": [\n\t                \"AlphaZero\",\n\t                \"Deep Learning\",\n\t                \"delayed gratification\",\n\t                
\"Econometrics\",\n\t                \"fintech\",\n\t                \"Machine Learning\",\n\t                \"mean-reverting strategy\",\n\t                \"portfolio management\",\n\t                \"Q Learning\",\n\t                \"Q Table\",\n\t                \"Quant\",\n\t                \"Reinforcement Learning\",\n\t                \"Sharpe Ratio\"\n\t            ],\n\t            \"articleSection\": [\n\t                \"Data Science\",\n\t                \"Programming Languages\",\n\t                \"Python Development\",\n\t                \"Quant\",\n\t                \"Quant Asia Pacific\",\n\t                \"Quant Development\",\n\t                \"Quant Regions\"\n\t            ],\n\t            \"inLanguage\": \"en-US\"\n\t        },\n\t        {\n\t            \"@type\": \"WebPage\",\n\t            \"@id\": \"https:\\\/\\\/www.interactivebrokers.com\\\/campus\\\/ibkr-quant-news\\\/reinforcement-learning-in-trading-part-ii\\\/\",\n\t            \"url\": \"https:\\\/\\\/www.interactivebrokers.com\\\/campus\\\/ibkr-quant-news\\\/reinforcement-learning-in-trading-part-ii\\\/\",\n\t            \"name\": \"Reinforcement Learning in Trading - Part II | IBKR Quant Blog\",\n\t            \"isPartOf\": {\n\t                \"@id\": \"https:\\\/\\\/ibkrcampus.com\\\/campus\\\/#website\"\n\t            },\n\t            \"primaryImageOfPage\": {\n\t                \"@id\": \"https:\\\/\\\/www.interactivebrokers.com\\\/campus\\\/ibkr-quant-news\\\/reinforcement-learning-in-trading-part-ii\\\/#primaryimage\"\n\t            },\n\t            \"image\": {\n\t                \"@id\": \"https:\\\/\\\/www.interactivebrokers.com\\\/campus\\\/ibkr-quant-news\\\/reinforcement-learning-in-trading-part-ii\\\/#primaryimage\"\n\t            },\n\t            \"thumbnailUrl\": \"https:\\\/\\\/www.interactivebrokers.com\\\/campus\\\/wp-content\\\/uploads\\\/sites\\\/2\\\/2019\\\/10\\\/machine-learning.jpg\",\n\t            \"datePublished\": 
\"2020-11-03T16:22:00+00:00\",\n\t            \"dateModified\": \"2022-11-21T14:46:33+00:00\",\n\t            \"description\": \"Join Ishan Shah from QuantInsti for a presentation on the components of reinforcement learning.\",\n\t            \"inLanguage\": \"en-US\",\n\t            \"potentialAction\": [\n\t                {\n\t                    \"@type\": \"ReadAction\",\n\t                    \"target\": [\n\t                        \"https:\\\/\\\/www.interactivebrokers.com\\\/campus\\\/ibkr-quant-news\\\/reinforcement-learning-in-trading-part-ii\\\/\"\n\t                    ]\n\t                }\n\t            ]\n\t        },\n\t        {\n\t            \"@type\": \"ImageObject\",\n\t            \"inLanguage\": \"en-US\",\n\t            \"@id\": \"https:\\\/\\\/www.interactivebrokers.com\\\/campus\\\/ibkr-quant-news\\\/reinforcement-learning-in-trading-part-ii\\\/#primaryimage\",\n\t            \"url\": \"https:\\\/\\\/www.interactivebrokers.com\\\/campus\\\/wp-content\\\/uploads\\\/sites\\\/2\\\/2019\\\/10\\\/machine-learning.jpg\",\n\t            \"contentUrl\": \"https:\\\/\\\/www.interactivebrokers.com\\\/campus\\\/wp-content\\\/uploads\\\/sites\\\/2\\\/2019\\\/10\\\/machine-learning.jpg\",\n\t            \"width\": 1100,\n\t            \"height\": 700,\n\t            \"caption\": \"Machine Learning\"\n\t        },\n\t        {\n\t            \"@type\": \"WebSite\",\n\t            \"@id\": \"https:\\\/\\\/ibkrcampus.com\\\/campus\\\/#website\",\n\t            \"url\": \"https:\\\/\\\/ibkrcampus.com\\\/campus\\\/\",\n\t            \"name\": \"IBKR Campus US\",\n\t            \"description\": \"Financial Education from Interactive Brokers\",\n\t            \"publisher\": {\n\t                \"@id\": \"https:\\\/\\\/ibkrcampus.com\\\/campus\\\/#organization\"\n\t            },\n\t            \"potentialAction\": [\n\t                {\n\t                    \"@type\": \"SearchAction\",\n\t                    \"target\": {\n\t                        
\"@type\": \"EntryPoint\",\n\t                        \"urlTemplate\": \"https:\\\/\\\/ibkrcampus.com\\\/campus\\\/?s={search_term_string}\"\n\t                    },\n\t                    \"query-input\": {\n\t                        \"@type\": \"PropertyValueSpecification\",\n\t                        \"valueRequired\": true,\n\t                        \"valueName\": \"search_term_string\"\n\t                    }\n\t                }\n\t            ],\n\t            \"inLanguage\": \"en-US\"\n\t        },\n\t        {\n\t            \"@type\": \"Organization\",\n\t            \"@id\": \"https:\\\/\\\/ibkrcampus.com\\\/campus\\\/#organization\",\n\t            \"name\": \"Interactive Brokers\",\n\t            \"alternateName\": \"IBKR\",\n\t            \"url\": \"https:\\\/\\\/ibkrcampus.com\\\/campus\\\/\",\n\t            \"logo\": {\n\t                \"@type\": \"ImageObject\",\n\t                \"inLanguage\": \"en-US\",\n\t                \"@id\": \"https:\\\/\\\/ibkrcampus.com\\\/campus\\\/#\\\/schema\\\/logo\\\/image\\\/\",\n\t                \"url\": \"https:\\\/\\\/www.interactivebrokers.com\\\/campus\\\/wp-content\\\/uploads\\\/sites\\\/2\\\/2024\\\/05\\\/ibkr-campus-logo.jpg\",\n\t                \"contentUrl\": \"https:\\\/\\\/www.interactivebrokers.com\\\/campus\\\/wp-content\\\/uploads\\\/sites\\\/2\\\/2024\\\/05\\\/ibkr-campus-logo.jpg\",\n\t                \"width\": 669,\n\t                \"height\": 669,\n\t                \"caption\": \"Interactive Brokers\"\n\t            },\n\t            \"image\": {\n\t                \"@id\": \"https:\\\/\\\/ibkrcampus.com\\\/campus\\\/#\\\/schema\\\/logo\\\/image\\\/\"\n\t            },\n\t            \"publishingPrinciples\": \"https:\\\/\\\/www.interactivebrokers.com\\\/campus\\\/about-ibkr-campus\\\/\",\n\t            \"ethicsPolicy\": \"https:\\\/\\\/www.interactivebrokers.com\\\/campus\\\/cyber-security-notice\\\/\"\n\t        },\n\t        {\n\t            \"@type\": \"Person\",\n\t            
\"@id\": \"https:\\\/\\\/ibkrcampus.com\\\/campus\\\/#\\\/schema\\\/person\\\/0fd7dbae1e070042c10b53e8bdc551c5\",\n\t            \"name\": \"Ishan Shah\",\n\t            \"url\": \"https:\\\/\\\/www.interactivebrokers.com\\\/campus\\\/author\\\/ishanshah\\\/\"\n\t        }\n\t    ]\n\t}<\/script>\n<!-- \/ Yoast SEO Premium plugin. -->","yoast_head_json":{"title":"Reinforcement Learning in Trading &#8211; Part II","description":"Join Ishan Shah from QuantInsti for a presentation on the components of reinforcement learning.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.interactivebrokers.com\/campus\/wp-json\/wp\/v2\/posts\/65060\/","og_locale":"en_US","og_type":"article","og_title":"Reinforcement Learning in Trading - Part II | IBKR Quant Blog","og_description":"Join Ishan Shah from QuantInsti for a presentation on the components of reinforcement learning.","og_url":"https:\/\/www.interactivebrokers.com\/campus\/ibkr-quant-news\/reinforcement-learning-in-trading-part-ii\/","og_site_name":"IBKR Campus US","article_published_time":"2020-11-03T16:22:00+00:00","article_modified_time":"2022-11-21T14:46:33+00:00","og_image":[{"width":1100,"height":700,"url":"https:\/\/www.interactivebrokers.com\/campus\/wp-content\/uploads\/sites\/2\/2019\/10\/machine-learning.jpg","type":"image\/jpeg"}],"author":"Ishan Shah","twitter_card":"summary_large_image","twitter_misc":{"Written by":"Ishan Shah","Est. 
reading time":"2 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"NewsArticle","@id":"https:\/\/www.interactivebrokers.com\/campus\/ibkr-quant-news\/reinforcement-learning-in-trading-part-ii\/#article","isPartOf":{"@id":"https:\/\/www.interactivebrokers.com\/campus\/ibkr-quant-news\/reinforcement-learning-in-trading-part-ii\/"},"author":{"name":"Ishan Shah","@id":"https:\/\/ibkrcampus.com\/campus\/#\/schema\/person\/0fd7dbae1e070042c10b53e8bdc551c5"},"headline":"Reinforcement Learning in Trading &#8211; Part II","datePublished":"2020-11-03T16:22:00+00:00","dateModified":"2022-11-21T14:46:33+00:00","mainEntityOfPage":{"@id":"https:\/\/www.interactivebrokers.com\/campus\/ibkr-quant-news\/reinforcement-learning-in-trading-part-ii\/"},"wordCount":501,"publisher":{"@id":"https:\/\/ibkrcampus.com\/campus\/#organization"},"image":{"@id":"https:\/\/www.interactivebrokers.com\/campus\/ibkr-quant-news\/reinforcement-learning-in-trading-part-ii\/#primaryimage"},"thumbnailUrl":"https:\/\/www.interactivebrokers.com\/campus\/wp-content\/uploads\/sites\/2\/2019\/10\/machine-learning.jpg","keywords":["AlphaZero","Deep Learning","delayed gratification","Econometrics","fintech","Machine Learning","mean-reverting strategy","portfolio management","Q Learning","Q Table","Quant","Reinforcement Learning","Sharpe Ratio"],"articleSection":["Data Science","Programming Languages","Python Development","Quant","Quant Asia Pacific","Quant Development","Quant Regions"],"inLanguage":"en-US"},{"@type":"WebPage","@id":"https:\/\/www.interactivebrokers.com\/campus\/ibkr-quant-news\/reinforcement-learning-in-trading-part-ii\/","url":"https:\/\/www.interactivebrokers.com\/campus\/ibkr-quant-news\/reinforcement-learning-in-trading-part-ii\/","name":"Reinforcement Learning in Trading - Part II | IBKR Quant 
Blog","isPartOf":{"@id":"https:\/\/ibkrcampus.com\/campus\/#website"},"primaryImageOfPage":{"@id":"https:\/\/www.interactivebrokers.com\/campus\/ibkr-quant-news\/reinforcement-learning-in-trading-part-ii\/#primaryimage"},"image":{"@id":"https:\/\/www.interactivebrokers.com\/campus\/ibkr-quant-news\/reinforcement-learning-in-trading-part-ii\/#primaryimage"},"thumbnailUrl":"https:\/\/www.interactivebrokers.com\/campus\/wp-content\/uploads\/sites\/2\/2019\/10\/machine-learning.jpg","datePublished":"2020-11-03T16:22:00+00:00","dateModified":"2022-11-21T14:46:33+00:00","description":"Join Ishan Shah from QuantInsti for a presentation on the components of reinforcement learning.","inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.interactivebrokers.com\/campus\/ibkr-quant-news\/reinforcement-learning-in-trading-part-ii\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.interactivebrokers.com\/campus\/ibkr-quant-news\/reinforcement-learning-in-trading-part-ii\/#primaryimage","url":"https:\/\/www.interactivebrokers.com\/campus\/wp-content\/uploads\/sites\/2\/2019\/10\/machine-learning.jpg","contentUrl":"https:\/\/www.interactivebrokers.com\/campus\/wp-content\/uploads\/sites\/2\/2019\/10\/machine-learning.jpg","width":1100,"height":700,"caption":"Machine Learning"},{"@type":"WebSite","@id":"https:\/\/ibkrcampus.com\/campus\/#website","url":"https:\/\/ibkrcampus.com\/campus\/","name":"IBKR Campus US","description":"Financial Education from Interactive Brokers","publisher":{"@id":"https:\/\/ibkrcampus.com\/campus\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/ibkrcampus.com\/campus\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/ibkrcampus.com\/campus\/#organization","name":"Interactive 
Brokers","alternateName":"IBKR","url":"https:\/\/ibkrcampus.com\/campus\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/ibkrcampus.com\/campus\/#\/schema\/logo\/image\/","url":"https:\/\/www.interactivebrokers.com\/campus\/wp-content\/uploads\/sites\/2\/2024\/05\/ibkr-campus-logo.jpg","contentUrl":"https:\/\/www.interactivebrokers.com\/campus\/wp-content\/uploads\/sites\/2\/2024\/05\/ibkr-campus-logo.jpg","width":669,"height":669,"caption":"Interactive Brokers"},"image":{"@id":"https:\/\/ibkrcampus.com\/campus\/#\/schema\/logo\/image\/"},"publishingPrinciples":"https:\/\/www.interactivebrokers.com\/campus\/about-ibkr-campus\/","ethicsPolicy":"https:\/\/www.interactivebrokers.com\/campus\/cyber-security-notice\/"},{"@type":"Person","@id":"https:\/\/ibkrcampus.com\/campus\/#\/schema\/person\/0fd7dbae1e070042c10b53e8bdc551c5","name":"Ishan Shah","url":"https:\/\/www.interactivebrokers.com\/campus\/author\/ishanshah\/"}]}},"jetpack_featured_media_url":"https:\/\/www.interactivebrokers.com\/campus\/wp-content\/uploads\/sites\/2\/2019\/10\/machine-learning.jpg","_links":{"self":[{"href":"https:\/\/ibkrcampus.com\/campus\/wp-json\/wp\/v2\/posts\/65060","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/ibkrcampus.com\/campus\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/ibkrcampus.com\/campus\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/ibkrcampus.com\/campus\/wp-json\/wp\/v2\/users\/517"}],"replies":[{"embeddable":true,"href":"https:\/\/ibkrcampus.com\/campus\/wp-json\/wp\/v2\/comments?post=65060"}],"version-history":[{"count":0,"href":"https:\/\/ibkrcampus.com\/campus\/wp-json\/wp\/v2\/posts\/65060\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/ibkrcampus.com\/campus\/wp-json\/wp\/v2\/media\/22628"}],"wp:attachment":[{"href":"https:\/\/ibkrcampus.com\/campus\/wp-json\/wp\/v2\/media?parent=65060"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\
/ibkrcampus.com\/campus\/wp-json\/wp\/v2\/categories?post=65060"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/ibkrcampus.com\/campus\/wp-json\/wp\/v2\/tags?post=65060"},{"taxonomy":"contributors-categories","embeddable":true,"href":"https:\/\/ibkrcampus.com\/campus\/wp-json\/wp\/v2\/contributors-categories?post=65060"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}
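The components described in the article (state, action, reward, environment) can be tied together in a minimal gym-style environment. This is a toy sketch, not the author's code: the class name `TradingEnv`, the tiny two-element state and the profit-per-tick reward are all hypothetical choices for illustration.

```python
# A minimal, illustrative single-asset trading environment.
# State: (current price, current position). Actions: hold / buy / sell.
# Reward: profit per tick while a long position is held.

ACTION_NAMES = {0: "hold", 1: "buy", 2: "sell"}

class TradingEnv:
    def __init__(self, prices):
        self.prices = prices
        self.t = 0
        self.position = 0  # 0 = flat, 1 = long

    def reset(self):
        """Return the initial state."""
        self.t = 0
        self.position = 0
        return self._state()

    def _state(self):
        # A deliberately tiny state; a real design would add indicators,
        # historical prices, sentiment and fundamental data.
        return (self.prices[self.t], self.position)

    def step(self, action):
        """Apply the action, compute the reward, move to the next state."""
        if ACTION_NAMES[action] == "buy":
            self.position = 1
        elif ACTION_NAMES[action] == "sell":
            self.position = 0
        reward = self.position * (self.prices[self.t + 1] - self.prices[self.t])
        self.t += 1
        done = self.t == len(self.prices) - 1
        return self._state(), reward, done

env = TradingEnv([100.0, 101.0, 99.0, 102.0])
state = env.reset()
state, reward, done = env.step(1)  # buy -> reward = 101.0 - 100.0 = 1.0
```

An RL agent would interact with such an environment in a loop — observe the state, pick an action via its policy, receive the reward — which is exactly the cycle the article describes.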