{"id":205263,"date":"2024-04-23T11:15:01","date_gmt":"2024-04-23T15:15:01","guid":{"rendered":"https:\/\/ibkrcampus.com\/?p=205263"},"modified":"2024-04-23T11:15:43","modified_gmt":"2024-04-23T15:15:43","slug":"understanding-gradient-descent-algorithm-with-python-code","status":"publish","type":"post","link":"https:\/\/www.interactivebrokers.com\/campus\/ibkr-quant-news\/understanding-gradient-descent-algorithm-with-python-code\/","title":{"rendered":"Understanding Gradient Descent Algorithm with Python Code"},"content":{"rendered":"\n<p>Gradient Descent (GD) is the basic optimization algorithm for machine learning and deep learning. This post explains the basic concept of gradient descent with Python code.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-parameter-learning\">Parameter Learning<\/h3>\n\n\n\n<p>Data is the recorded outcome of an action or activity.<\/p>\n\n\n\n<p class=\"has-text-align-center\"><strong><em>y, x<\/em><\/strong><\/p>\n\n\n\n<p>Our goal is to predict the outcome of the next action from data. To this end, we develop a model that describes the data well and use it for forecasting.<\/p>\n\n\n\n<p>A model is a function of data and parameters <strong><em>(\u03b8 = (w, b)\u2032)<\/em><\/strong>. 
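A minimal sketch of this idea (the function name and data values here are illustrative, not taken from the article): the model maps the input data x to a prediction through the parameters w and b.

```python
import numpy as np

# Illustrative linear model: the prediction is a function of the data x
# and the parameters theta = (w, b)'
def model(x, w, b):
    return w * x + b

x = np.array([0.0, 0.5, 1.0])
print(model(x, w=2.0, b=1.0))  # -> [1. 2. 3.]
```

With the data fixed, different choices of (w, b) give different predictions; learning is the search for the pair that fits the observed outputs best.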
We estimate the parameters that fit the data well.<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"564\" height=\"34\" data-src=\"\/campus\/wp-content\/uploads\/sites\/2\/2024\/04\/shleeai-gradient-descent-algorithm-1.png\" alt=\"\" class=\"wp-image-205269 lazyload\" data-srcset=\"https:\/\/ibkrcampus.com\/campus\/wp-content\/uploads\/sites\/2\/2024\/04\/shleeai-gradient-descent-algorithm-1.png 564w, https:\/\/ibkrcampus.com\/campus\/wp-content\/uploads\/sites\/2\/2024\/04\/shleeai-gradient-descent-algorithm-1-300x18.png 300w\" data-sizes=\"(max-width: 564px) 100vw, 564px\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" style=\"--smush-placeholder-width: 564px; aspect-ratio: 564\/34;\" \/><\/figure>\n\n\n\n<p>The loss is a distance function between the data and the model, such as the MSE (Mean Squared Error).<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"573\" height=\"41\" data-src=\"\/campus\/wp-content\/uploads\/sites\/2\/2024\/04\/shleeai-gradient-descent-algorithm-2.png\" alt=\"\" class=\"wp-image-205271 lazyload\" data-srcset=\"https:\/\/ibkrcampus.com\/campus\/wp-content\/uploads\/sites\/2\/2024\/04\/shleeai-gradient-descent-algorithm-2.png 573w, https:\/\/ibkrcampus.com\/campus\/wp-content\/uploads\/sites\/2\/2024\/04\/shleeai-gradient-descent-algorithm-2-300x21.png 300w\" data-sizes=\"(max-width: 573px) 100vw, 573px\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" style=\"--smush-placeholder-width: 573px; aspect-ratio: 573\/41;\" \/><\/figure>\n\n\n\n<p>Since the data is fixed and given, learning amounts to updating the parameters.<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"568\" height=\"65\" data-src=\"\/campus\/wp-content\/uploads\/sites\/2\/2024\/04\/shleeai-gradient-descent-algorithm-3.png\" alt=\"\" 
class=\"wp-image-205272 lazyload\" data-srcset=\"https:\/\/ibkrcampus.com\/campus\/wp-content\/uploads\/sites\/2\/2024\/04\/shleeai-gradient-descent-algorithm-3.png 568w, https:\/\/ibkrcampus.com\/campus\/wp-content\/uploads\/sites\/2\/2024\/04\/shleeai-gradient-descent-algorithm-3-300x34.png 300w\" data-sizes=\"(max-width: 568px) 100vw, 568px\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" style=\"--smush-placeholder-width: 568px; aspect-ratio: 568\/65;\" \/><\/figure>\n\n\n\n<p>Here&nbsp;<em>\u03b3&nbsp;<\/em>is the learning rate or step size and <img decoding=\"async\" data-src=\"\/campus\/wp-content\/uploads\/sites\/2\/2024\/04\/shleeai-gradient-descent-algorithm-4.png\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" class=\"lazyload\"> is the gradient. The gradient is the vector of partial derivatives of&nbsp;<em>J&nbsp;<\/em>with respect to&nbsp;<em>\u03b8<\/em>,&nbsp;as follows.<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"586\" height=\"403\" data-src=\"\/campus\/wp-content\/uploads\/sites\/2\/2024\/04\/shleeai-gradient-descent-algorithm-5.png\" alt=\"\" class=\"wp-image-205274 lazyload\" data-srcset=\"https:\/\/ibkrcampus.com\/campus\/wp-content\/uploads\/sites\/2\/2024\/04\/shleeai-gradient-descent-algorithm-5.png 586w, https:\/\/ibkrcampus.com\/campus\/wp-content\/uploads\/sites\/2\/2024\/04\/shleeai-gradient-descent-algorithm-5-300x206.png 300w\" data-sizes=\"(max-width: 586px) 100vw, 586px\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" style=\"--smush-placeholder-width: 586px; aspect-ratio: 586\/403;\" \/><\/figure>\n\n\n\n<h3 class=\"wp-block-heading\">Illustration for Gradient Descent<\/h3>\n\n\n\n<p>The purpose of learning is to minimize a loss or cost 
function&nbsp;<em>J<\/em>&nbsp;with respect to the parameters. This is done by computing the gradient. However, the gradient points in the direction of steepest increase of the loss function, as can be seen in the following figure.<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"450\" height=\"368\" data-src=\"\/campus\/wp-content\/uploads\/sites\/2\/2024\/04\/shleeai-gradient-descent-algorithm-6.png\" alt=\"\" class=\"wp-image-205278 lazyload\" data-srcset=\"https:\/\/ibkrcampus.com\/campus\/wp-content\/uploads\/sites\/2\/2024\/04\/shleeai-gradient-descent-algorithm-6.png 450w, https:\/\/ibkrcampus.com\/campus\/wp-content\/uploads\/sites\/2\/2024\/04\/shleeai-gradient-descent-algorithm-6-300x245.png 300w\" data-sizes=\"(max-width: 450px) 100vw, 450px\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" style=\"--smush-placeholder-width: 450px; aspect-ratio: 450\/368;\" \/><\/figure>\n\n\n\n<p>Therefore gradient descent, which aims to find the target parameters (<em><strong>b\u2217<\/strong><\/em>), takes a step in the direction of the negative gradient in order to reduce the loss. To move the candidate parameters in the loss-reducing direction, the new parameters are updated by the negative gradient scaled by the learning rate (step size). In other words, the parameters are determined automatically by the gradient descent method, but the learning rate is a hyperparameter set by hand.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Python Code<\/h3>\n\n\n\n<p>The following Python code implements the gradient descent algorithm described above. 
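As a small numeric sketch of this update rule (the quadratic loss and all values here are illustrative, not the article's data), repeatedly stepping against the gradient drives the parameter toward the minimizer:

```python
# Gradient descent on an illustrative quadratic loss J(w) = (w - 3)**2,
# whose gradient is dJ/dw = 2*(w - 3); the minimizer is w* = 3.
def grad(w):
    return 2.0 * (w - 3.0)

w = 0.0    # initial guess
lr = 0.1   # learning rate (step size), a hyperparameter set by hand
for _ in range(100):
    w = w - lr * grad(w)   # step in the direction of the negative gradient

print(w)   # very close to the minimizer w* = 3
```

Each step multiplies the error (w - 3) by (1 - 2*lr) = 0.8, so the iterate converges geometrically; a learning rate that is too large would instead make the error grow.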
Thanks to its simple structure, the relevant aspects of gradient descent are straightforward to follow.<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"python\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">#=========================================================================#\n# Financial Econometrics &amp; Derivatives, ML\/DL using R, Python, Tensorflow \n# by Sang-Heon Lee\n#\n# https:\/\/shleeai.blogspot.com\n#-------------------------------------------------------------------------#\n# Gradient Descent example\n#=========================================================================#\n \n# -*- coding: utf-8 -*-\nimport numpy as np\n \n#-------------------------------------------------------------------------#\n# Declaration of functions\n#-------------------------------------------------------------------------#\n# Model\ndef Model(x, w, b):\n    y_hat = w*x + b\n    return y_hat\n \n# Gradient of the MSE loss with respect to w and b\ndef Gradient(y,x,w,b):\n    y_hat = Model(x, w, b)\n    djdw = 2*np.mean((y-y_hat)*(-x))\n    djdb = 2*np.mean((y-y_hat)*(-1))\n    return djdw, djdb\n \n# Learning : lr = learning rate or step size\ndef Learning(y,x,w,b,lr):\n    djdw, djdb = Gradient(y, x, w, b)\n    w_update = w - lr*djdw\n    b_update = b - lr*djdb\n    return w_update, b_update\n \n#-------------------------------------------------------------------------#\n# use real data\n#-------------------------------------------------------------------------#\nimport pandas as pd\nimport matplotlib.pyplot as plt\n \nurl = 'https:\/\/raw.githubusercontent.com\/bammuger\/blog\/main\/sample_data.csv'\ndata = pd.read_csv(url)\ndata.head()\n \nplt.scatter(data.inputs, data.outputs, s = 0.5)\nplt.show()\n<\/pre>\n\n\n\n<p><strong>1) Sufficient Iteration<\/strong><\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"python\" 
data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">#-------------------------------------------------------------------------#\n# Learning - sufficient iteration\n#-------------------------------------------------------------------------#\n# initial guess\nw = 2; b = 3; step = 0.05\n \n# Iterated learning process by parameter update using gradient descent\nfor i in range(0,5000):\n    y = data.outputs\n    x = data.inputs\n    w, b  = Learning(y, x, w, b, step)\n \nprint(\"Learned_w: {}, Learned_b: {}\".format(w, b))\n  \nX = np.linspace(0, 1, 100)\nY = w * X + b\n \nplt.scatter(data.inputs, data.outputs, s = 0.3)\nplt.plot(X, Y, '-r', linewidth = 1.5)\nplt.show()<\/pre>\n\n\n\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"380\" height=\"248\" data-src=\"\/campus\/wp-content\/uploads\/sites\/2\/2024\/04\/shleeai-gradient-descent-algorithm-7.jpg\" alt=\"\" class=\"wp-image-205281 lazyload\" data-srcset=\"https:\/\/ibkrcampus.com\/campus\/wp-content\/uploads\/sites\/2\/2024\/04\/shleeai-gradient-descent-algorithm-7.jpg 380w, https:\/\/ibkrcampus.com\/campus\/wp-content\/uploads\/sites\/2\/2024\/04\/shleeai-gradient-descent-algorithm-7-300x196.jpg 300w\" data-sizes=\"(max-width: 380px) 100vw, 380px\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" style=\"--smush-placeholder-width: 380px; aspect-ratio: 380\/248;\" \/><\/figure>\n\n\n\n<p><strong>2) Insufficient Iteration<\/strong><\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"python\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">#-------------------------------------------------------------------------#\n# Learning - insufficient 
iteration\n#-------------------------------------------------------------------------#\n# initial guess\nw = 2; b = 3; step = 0.05\n \n# Iterated learning process by parameter update using gradient descent\nfor i in range(0,10):\n    y = data.outputs\n    x = data.inputs\n    w, b  = Learning(y, x, w, b, step)\n    \nprint(\"Learned_w: {}, Learned_b: {}\".format(w, b))\n  \nX = np.linspace(0, 1, 100)\nY = (w * X) + b\n \nplt.scatter(data.inputs, data.outputs, s = 0.3)\nplt.plot(X, Y, '-r', linewidth = 1.5)\nplt.show()\n<\/pre>\n\n\n\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"386\" height=\"248\" data-src=\"\/campus\/wp-content\/uploads\/sites\/2\/2024\/04\/shleeai-gradient-descent-algorithm-8.jpg\" alt=\"\" class=\"wp-image-205282 lazyload\" data-srcset=\"https:\/\/ibkrcampus.com\/campus\/wp-content\/uploads\/sites\/2\/2024\/04\/shleeai-gradient-descent-algorithm-8.jpg 386w, https:\/\/ibkrcampus.com\/campus\/wp-content\/uploads\/sites\/2\/2024\/04\/shleeai-gradient-descent-algorithm-8-300x193.jpg 300w\" data-sizes=\"(max-width: 386px) 100vw, 386px\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" style=\"--smush-placeholder-width: 386px; aspect-ratio: 386\/248;\" \/><\/figure>\n\n\n\n<p>In the next post, we will cover stochastic gradient descent and mini-batch gradient descent, which are variants of GD.<\/p>\n\n\n\n<p><em>Originally posted on <a href=\"https:\/\/shleeai.blogspot.com\/2021\/06\/understanding-gradient-descent.html\">SHLee AI Financial Model<\/a> blog.<\/em><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Gradient Descent (GD) is the basic optimization algorithm for machine learning or deep 
learning.<\/p>\n","protected":false},"author":662,"featured_media":205288,"comment_status":"open","ping_status":"closed","sticky":true,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[339,343,349,338,341],"tags":[806,17036,595],"contributors-categories":[13728],"class_list":{"0":"post-205263","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-data-science","8":"category-programing-languages","9":"category-python-development","10":"category-ibkr-quant-news","11":"category-quant-development","12":"tag-data-science","13":"tag-gradient-descent-algorithm","14":"tag-python","15":"contributors-categories-sh-fintech-modeling"},"pp_statuses_selecting_workflow":false,"pp_workflow_action":"current","pp_status_selection":"publish","acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v26.9 (Yoast SEO v27.3) - https:\/\/yoast.com\/product\/yoast-seo-premium-wordpress\/ -->\n<title>Understanding Gradient Descent Algorithm with Python Code<\/title>\n<meta name=\"description\" content=\"Gradient Descent (GD) is the basic optimization algorithm for machine learning or deep learning.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.interactivebrokers.com\/campus\/wp-json\/wp\/v2\/posts\/205263\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Understanding Gradient Descent Algorithm with Python Code\" \/>\n<meta property=\"og:description\" content=\"Gradient Descent (GD) is the basic optimization algorithm for machine learning or deep learning.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.interactivebrokers.com\/campus\/ibkr-quant-news\/understanding-gradient-descent-algorithm-with-python-code\/\" \/>\n<meta 
property=\"og:site_name\" content=\"IBKR Campus US\" \/>\n<meta property=\"article:published_time\" content=\"2024-04-23T15:15:01+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2024-04-23T15:15:43+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/www.interactivebrokers.com\/campus\/wp-content\/uploads\/sites\/2\/2024\/04\/robot-and-human-hand-digital-abstract.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1000\" \/>\n\t<meta property=\"og:image:height\" content=\"563\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Sang-Heon Lee\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Sang-Heon Lee\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"4 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\n\t    \"@context\": \"https:\\\/\\\/schema.org\",\n\t    \"@graph\": [\n\t        {\n\t            \"@type\": \"NewsArticle\",\n\t            \"@id\": \"https:\\\/\\\/ibkrcampus.com\\\/campus\\\/ibkr-quant-news\\\/understanding-gradient-descent-algorithm-with-python-code\\\/#article\",\n\t            \"isPartOf\": {\n\t                \"@id\": \"https:\\\/\\\/ibkrcampus.com\\\/campus\\\/ibkr-quant-news\\\/understanding-gradient-descent-algorithm-with-python-code\\\/\"\n\t            },\n\t            \"author\": {\n\t                \"name\": \"Sang-Heon Lee\",\n\t                \"@id\": \"https:\\\/\\\/ibkrcampus.com\\\/campus\\\/#\\\/schema\\\/person\\\/0a959ff9de7f0465a07baa1fe1ae0200\"\n\t            },\n\t            \"headline\": \"Understanding Gradient Descent Algorithm with Python Code\",\n\t            \"datePublished\": \"2024-04-23T15:15:01+00:00\",\n\t            \"dateModified\": \"2024-04-23T15:15:43+00:00\",\n\t            \"mainEntityOfPage\": {\n\t     
           \"@id\": \"https:\\\/\\\/ibkrcampus.com\\\/campus\\\/ibkr-quant-news\\\/understanding-gradient-descent-algorithm-with-python-code\\\/\"\n\t            },\n\t            \"wordCount\": 329,\n\t            \"commentCount\": 0,\n\t            \"publisher\": {\n\t                \"@id\": \"https:\\\/\\\/ibkrcampus.com\\\/campus\\\/#organization\"\n\t            },\n\t            \"image\": {\n\t                \"@id\": \"https:\\\/\\\/ibkrcampus.com\\\/campus\\\/ibkr-quant-news\\\/understanding-gradient-descent-algorithm-with-python-code\\\/#primaryimage\"\n\t            },\n\t            \"thumbnailUrl\": \"https:\\\/\\\/www.interactivebrokers.com\\\/campus\\\/wp-content\\\/uploads\\\/sites\\\/2\\\/2024\\\/04\\\/robot-and-human-hand-digital-abstract.jpg\",\n\t            \"keywords\": [\n\t                \"Data Science\",\n\t                \"Gradient Descent Algorithm\",\n\t                \"Python\"\n\t            ],\n\t            \"articleSection\": [\n\t                \"Data Science\",\n\t                \"Programming Languages\",\n\t                \"Python Development\",\n\t                \"Quant\",\n\t                \"Quant Development\"\n\t            ],\n\t            \"inLanguage\": \"en-US\",\n\t            \"potentialAction\": [\n\t                {\n\t                    \"@type\": \"CommentAction\",\n\t                    \"name\": \"Comment\",\n\t                    \"target\": [\n\t                        \"https:\\\/\\\/ibkrcampus.com\\\/campus\\\/ibkr-quant-news\\\/understanding-gradient-descent-algorithm-with-python-code\\\/#respond\"\n\t                    ]\n\t                }\n\t            ]\n\t        },\n\t        {\n\t            \"@type\": \"WebPage\",\n\t            \"@id\": \"https:\\\/\\\/ibkrcampus.com\\\/campus\\\/ibkr-quant-news\\\/understanding-gradient-descent-algorithm-with-python-code\\\/\",\n\t            \"url\": 
\"https:\\\/\\\/ibkrcampus.com\\\/campus\\\/ibkr-quant-news\\\/understanding-gradient-descent-algorithm-with-python-code\\\/\",\n\t            \"name\": \"Understanding Gradient Descent Algorithm with Python Code | IBKR Campus US\",\n\t            \"isPartOf\": {\n\t                \"@id\": \"https:\\\/\\\/ibkrcampus.com\\\/campus\\\/#website\"\n\t            },\n\t            \"primaryImageOfPage\": {\n\t                \"@id\": \"https:\\\/\\\/ibkrcampus.com\\\/campus\\\/ibkr-quant-news\\\/understanding-gradient-descent-algorithm-with-python-code\\\/#primaryimage\"\n\t            },\n\t            \"image\": {\n\t                \"@id\": \"https:\\\/\\\/ibkrcampus.com\\\/campus\\\/ibkr-quant-news\\\/understanding-gradient-descent-algorithm-with-python-code\\\/#primaryimage\"\n\t            },\n\t            \"thumbnailUrl\": \"https:\\\/\\\/www.interactivebrokers.com\\\/campus\\\/wp-content\\\/uploads\\\/sites\\\/2\\\/2024\\\/04\\\/robot-and-human-hand-digital-abstract.jpg\",\n\t            \"datePublished\": \"2024-04-23T15:15:01+00:00\",\n\t            \"dateModified\": \"2024-04-23T15:15:43+00:00\",\n\t            \"description\": \"Gradient Descent (GD) is the basic optimization algorithm for machine learning or deep learning.\",\n\t            \"inLanguage\": \"en-US\",\n\t            \"potentialAction\": [\n\t                {\n\t                    \"@type\": \"ReadAction\",\n\t                    \"target\": [\n\t                        \"https:\\\/\\\/ibkrcampus.com\\\/campus\\\/ibkr-quant-news\\\/understanding-gradient-descent-algorithm-with-python-code\\\/\"\n\t                    ]\n\t                }\n\t            ]\n\t        },\n\t        {\n\t            \"@type\": \"ImageObject\",\n\t            \"inLanguage\": \"en-US\",\n\t            \"@id\": \"https:\\\/\\\/ibkrcampus.com\\\/campus\\\/ibkr-quant-news\\\/understanding-gradient-descent-algorithm-with-python-code\\\/#primaryimage\",\n\t            \"url\": 
\"https:\\\/\\\/www.interactivebrokers.com\\\/campus\\\/wp-content\\\/uploads\\\/sites\\\/2\\\/2024\\\/04\\\/robot-and-human-hand-digital-abstract.jpg\",\n\t            \"contentUrl\": \"https:\\\/\\\/www.interactivebrokers.com\\\/campus\\\/wp-content\\\/uploads\\\/sites\\\/2\\\/2024\\\/04\\\/robot-and-human-hand-digital-abstract.jpg\",\n\t            \"width\": 1000,\n\t            \"height\": 563,\n\t            \"caption\": \"Quant\"\n\t        },\n\t        {\n\t            \"@type\": \"WebSite\",\n\t            \"@id\": \"https:\\\/\\\/ibkrcampus.com\\\/campus\\\/#website\",\n\t            \"url\": \"https:\\\/\\\/ibkrcampus.com\\\/campus\\\/\",\n\t            \"name\": \"IBKR Campus US\",\n\t            \"description\": \"Financial Education from Interactive Brokers\",\n\t            \"publisher\": {\n\t                \"@id\": \"https:\\\/\\\/ibkrcampus.com\\\/campus\\\/#organization\"\n\t            },\n\t            \"potentialAction\": [\n\t                {\n\t                    \"@type\": \"SearchAction\",\n\t                    \"target\": {\n\t                        \"@type\": \"EntryPoint\",\n\t                        \"urlTemplate\": \"https:\\\/\\\/ibkrcampus.com\\\/campus\\\/?s={search_term_string}\"\n\t                    },\n\t                    \"query-input\": {\n\t                        \"@type\": \"PropertyValueSpecification\",\n\t                        \"valueRequired\": true,\n\t                        \"valueName\": \"search_term_string\"\n\t                    }\n\t                }\n\t            ],\n\t            \"inLanguage\": \"en-US\"\n\t        },\n\t        {\n\t            \"@type\": \"Organization\",\n\t            \"@id\": \"https:\\\/\\\/ibkrcampus.com\\\/campus\\\/#organization\",\n\t            \"name\": \"Interactive Brokers\",\n\t            \"alternateName\": \"IBKR\",\n\t            \"url\": \"https:\\\/\\\/ibkrcampus.com\\\/campus\\\/\",\n\t            \"logo\": {\n\t                \"@type\": \"ImageObject\",\n\t 
               \"inLanguage\": \"en-US\",\n\t                \"@id\": \"https:\\\/\\\/ibkrcampus.com\\\/campus\\\/#\\\/schema\\\/logo\\\/image\\\/\",\n\t                \"url\": \"https:\\\/\\\/www.interactivebrokers.com\\\/campus\\\/wp-content\\\/uploads\\\/sites\\\/2\\\/2024\\\/05\\\/ibkr-campus-logo.jpg\",\n\t                \"contentUrl\": \"https:\\\/\\\/www.interactivebrokers.com\\\/campus\\\/wp-content\\\/uploads\\\/sites\\\/2\\\/2024\\\/05\\\/ibkr-campus-logo.jpg\",\n\t                \"width\": 669,\n\t                \"height\": 669,\n\t                \"caption\": \"Interactive Brokers\"\n\t            },\n\t            \"image\": {\n\t                \"@id\": \"https:\\\/\\\/ibkrcampus.com\\\/campus\\\/#\\\/schema\\\/logo\\\/image\\\/\"\n\t            },\n\t            \"publishingPrinciples\": \"https:\\\/\\\/www.interactivebrokers.com\\\/campus\\\/about-ibkr-campus\\\/\",\n\t            \"ethicsPolicy\": \"https:\\\/\\\/www.interactivebrokers.com\\\/campus\\\/cyber-security-notice\\\/\"\n\t        },\n\t        {\n\t            \"@type\": \"Person\",\n\t            \"@id\": \"https:\\\/\\\/ibkrcampus.com\\\/campus\\\/#\\\/schema\\\/person\\\/0a959ff9de7f0465a07baa1fe1ae0200\",\n\t            \"name\": \"Sang-Heon Lee\",\n\t            \"url\": \"https:\\\/\\\/www.interactivebrokers.com\\\/campus\\\/author\\\/sang-heonlee\\\/\"\n\t        }\n\t    ]\n\t}<\/script>\n<!-- \/ Yoast SEO Premium plugin. 
-->","jetpack_featured_media_url":"https:\/\/www.interactivebrokers.com\/campus\/wp-content\/uploads\/sites\/2\/2024\/04\/robot-and-human-hand-digital-abstract.jpg","_links":{"self":[{"href":"https:\/\/ibkrcampus.com\/campus\/wp-json\/wp\/v2\/posts\/205263","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/ibkrcampus.com\/campus\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/ibkrcampus.com\/campus\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/ibkrcampus.com\/campus\/wp-json\/wp\/v2\/users\/662"}],"replies":[{"embeddable":true,"href":"https:\/\/ibkrcampus.com\/campus\/wp-json\/wp\/v2\/comments?post=205263"}],"version-history":[{"count":0,"href":"https:\/\/ibkrcampus.com\/campus\/wp-json\/wp\/v2\/posts\/205263\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/ibkrcampus.com\/campus\/wp-json\/wp\/v2\/media\/205288"}],"wp:attachment":[{"href":"https:\/\/ibkrcampus.com\/campus\/wp-json\/wp\/v2\/media?parent=205263"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/ibkrcampus.com\/campus\/wp-json\/wp\/v2\/categories?post=205263"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/ibkrcampus.com\/campus\/wp-json\/wp\/v2\/tags?post=205263"},{"taxonomy":"contributors-categories","embeddable":true,"href":"https:\/\/ibkrcampus.com\/campus\/wp-json\/wp\/v2\/contributors-categories?post=205263"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}