{"id":208117,"date":"2024-06-17T11:23:52","date_gmt":"2024-06-17T15:23:52","guid":{"rendered":"https:\/\/ibkrcampus.com\/?p=208117"},"modified":"2024-08-13T16:00:35","modified_gmt":"2024-08-13T20:00:35","slug":"lasso-regression-model-with-r-code","status":"publish","type":"post","link":"https:\/\/www.interactivebrokers.com\/campus\/ibkr-quant-news\/lasso-regression-model-with-r-code\/","title":{"rendered":"Lasso Regression Model with R Code"},"content":{"rendered":"\n<h5 class=\"wp-block-heading\" id=\"h-tibshirani-1996-introduces-the-so-called-lasso-least-absolute-shrinkage-and-selection-operator-model-for-the-selection-and-shrinkage-of-parameters-this-model-is-very-useful-when-we-analyze-big-data-in-this-post-we-learn-how-to-set-up-the-lasso-model-and-estimate-it-using-glmnet-r-package\">Tibshirani (1996) introduces the so-called LASSO (Least Absolute Shrinkage and Selection Operator) model for the selection and shrinkage of parameters. This model is very useful when we analyze big data. In this post, we learn how to set up the Lasso model and estimate it using the glmnet R package.<\/h5>\n\n\n\n<p>Tibshirani (1996) introduces the LASSO (Least Absolute Shrinkage and Selection Operator) model for the selection and shrinkage of parameters. The Ridge model is similar in terms of shrinkage but has no selection function, because Ridge shrinks the coefficients of unimportant variables close to zero but not exactly to zero.<\/p>\n\n\n\n<p>These regression models are called regularized or penalized regression models. In particular, Lasso is powerful enough to work on big datasets in which the number of variables runs into the hundreds, thousands, or more. 
The traditional linear regression model cannot deal with this sort of big data.<\/p>\n\n\n\n<p>Although the linear regression estimator is unbiased, regularized or penalized regressions such as Lasso and Ridge admit some bias in exchange for reduced variance, in keeping with the bias-variance trade-off. This means the minimization problem for the latter has two components: the mean squared error and a penalty on the parameters. The <strong><em>l<sub>1<\/sub><\/em><\/strong> penalty of Lasso makes variable selection and shrinkage possible, but the <strong><em>l<sub>2<\/sub><\/em><\/strong> penalty of Ridge makes only shrinkage possible.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-model-lasso\"><strong>Model : Lasso<\/strong><\/h3>\n\n\n\n<p>For observation index <strong><em>i=1,2,&#8230;,N<\/em><\/strong> and variable index <strong><em>j=1,2,&#8230;,p<\/em><\/strong>, given standardized predictors <strong><em>x<sub>ij<\/sub><\/em><\/strong> and a demeaned or centered response variable, the Lasso model finds the <em><strong>\u03b2<sub>j<\/sub><\/strong><\/em> which minimize the following objective function.<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"572\" height=\"220\" data-src=\"\/campus\/wp-content\/uploads\/sites\/2\/2024\/06\/Lasso-Regression-Model-R-Code-SHLee.png\" alt=\"\" class=\"wp-image-208119 lazyload\" data-srcset=\"https:\/\/ibkrcampus.com\/campus\/wp-content\/uploads\/sites\/2\/2024\/06\/Lasso-Regression-Model-R-Code-SHLee.png 572w, https:\/\/ibkrcampus.com\/campus\/wp-content\/uploads\/sites\/2\/2024\/06\/Lasso-Regression-Model-R-Code-SHLee-300x115.png 300w\" data-sizes=\"(max-width: 572px) 100vw, 572px\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" style=\"--smush-placeholder-width: 572px; aspect-ratio: 572\/220;\" \/><\/figure>\n\n\n\n<p>Here, <strong><em>Y<\/em><\/strong> is demeaned 
for the sake of exposition, but this is not a must. However, the X variables should be standardized to mean zero and unit variance, because differences in the scale of the variables tend to distribute the penalty unequally across them.<\/p>\n\n\n\n<p>In the equation above, the first part is the RSS (Residual Sum of Squares) and the second is the penalty term. The penalty term is adjusted by the hyperparameter <strong>\u03bb<\/strong>, which is supplied exogenously by the user through manual search or cross-validation.<\/p>\n\n\n\n<p>When a variable included in the Lasso decreases the RSS by only a negligible amount (e.g. by 0.000000001), the impact of the shrinkage penalty dominates. This means the coefficient of such a variable is set exactly to zero (Lasso) or shrunk close to zero (Ridge).<\/p>\n\n\n\n<p>Unlike the Ridge penalty, which is differentiable everywhere, the Lasso penalty is not differentiable at zero, so the Lasso does not have a closed-form solution in most problems and is instead solved with the cyclic coordinate descent algorithm. The only exception is the case where all X variables are orthonormal, but this case is highly unlikely in practice.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-r-code\"><strong>R code<\/strong><\/h3>\n\n\n\n<p>Let&#8217;s estimate the parameters of Lasso and Ridge using the&nbsp;glmnet R package,&nbsp;which provides fast calculation and useful functions.<\/p>\n\n\n\n<p>For example, we make some artificial time series data: let <strong><em>X<\/em><\/strong> be a set of randomly drawn time series (p = 20 variables in the code below) and <strong><em>Y<\/em><\/strong> a variable generated from predetermined coefficients and randomly drawn error terms. Some coefficients are set to zero to make the differences between the standard linear, Lasso, and Ridge regressions clear. 
The R code is as follows.<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"r\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">#=========================================================================#\n# Financial Econometrics &amp; Derivatives, ML\/DL using R, Python, Tensorflow \n# by Sang-Heon Lee\n#\n# https:\/\/shleeai.blogspot.com\n#-------------------------------------------------------------------------#\n# Lasso, Ridge\n#=========================================================================#\n \nlibrary(glmnet)\n \n    graphics.off()  # clear all graphs\n    rm(list = ls()) # remove all files from your workspace\n    \n    N = 500 # number of observations\n    p = 20  # number of variables\n    \n#--------------------------------------------\n# X variable\n#--------------------------------------------\n    X = matrix(rnorm(N*p), ncol=p)\n \n# before standardization\n    colMeans(X)    # mean\n    apply(X,2,sd)  # standard deviation\n \n# scale : mean = 0, std=1\n    X = scale(X)\n \n# after standardization\n    colMeans(X)    # mean\n    apply(X,2,sd)  # standard deviation\n \n#--------------------------------------------\n# Y variable\n#--------------------------------------------\n    beta = c( 0.15, -0.33,  0.25, -0.25, 0.05,rep(0, p\/2-5), \n             -0.25,  0.12, -0.125, rep(0, p\/2-3))\n \n    # Y variable, standardized Y\n    y = X%*%beta + rnorm(N, sd=0.5)\n    y = scale(y)\n \n#--------------------------------------------\n# Model\n#--------------------------------------------\n    lambda &lt;- 0.01\n    \n    # standard linear regression without intercept(-1)\n    li.eq &lt;- lm(y ~ X-1) \n    \n    # lasso\n    la.eq &lt;- glmnet(X, y, lambda=lambda, \n                    family=\"gaussian\", \n                    intercept = F, alpha=1) \n    # Ridge\n    ri.eq &lt;- glmnet(X, y, lambda=lambda, 
\n                    family=\"gaussian\", \n                    intercept = F, alpha=0) \n \n#--------------------------------------------\n# Results (lambda=0.01)\n#--------------------------------------------\n    df.comp &lt;- data.frame(\n        beta    = beta,\n        Linear  = li.eq$coefficients,\n        Lasso   = la.eq$beta[,1],\n        Ridge   = ri.eq$beta[,1]\n    )\n    df.comp\n    \n#--------------------------------------------\n# Results (lambda=0.1)\n#--------------------------------------------\n    lambda &lt;- 0.1\n    \n    # lasso\n    la.eq &lt;- glmnet(X, y, lambda=lambda,\n                    family=\"gaussian\",\n                    intercept = F, alpha=1) \n    # Ridge\n    ri.eq &lt;- glmnet(X, y, lambda=lambda,\n                    family=\"gaussian\",\n                    intercept = F, alpha=0) \n    \n    df.comp &lt;- data.frame(\n        beta    = beta,\n        Linear  = li.eq$coefficients,\n        Lasso   = la.eq$beta[,1],\n        Ridge   = ri.eq$beta[,1]\n    )\n    df.comp\n    \n#------------------------------------------------\n# Shrinkage of coefficients \n# (range of lambda values, i.e. no lambda input)\n#------------------------------------------------\n    \n    # lasso\n    la.eq &lt;- glmnet(X, y, family=\"gaussian\", \n                    intercept = F, alpha=1) \n    # Ridge\n    ri.eq &lt;- glmnet(X, y, family=\"gaussian\", \n                    intercept = F, alpha=0) \n    # plot both shrinkage paths in one device\n    x11(); par(mfrow=c(2,1)) \n    matplot(log(la.eq$lambda), t(la.eq$beta),\n            type=\"l\", main=\"Lasso\", lwd=2)\n    matplot(log(ri.eq$lambda), t(ri.eq$beta),\n            type=\"l\", main=\"Ridge\", lwd=2)\n    \n#------------------------------------------------    \n# Run cross-validation &amp; select lambda\n#------------------------------------------------\n    mod_cv &lt;- cv.glmnet(x=X, y=y, family='gaussian',\n                        intercept = F, alpha=1)\n    \n    # 
plot(log(mod_cv$lambda), mod_cv$cvm)\n    # cvm : The mean cross-validated error \n    #     - a vector of length length(lambda)\n    \n    # lambda.min : the \u03bb at which \n    # the minimal MSE is achieved.\n    \n    # lambda.1se : the largest \u03bb at which \n    # the MSE is within one standard error \n    # of the minimal MSE.\n    \n    x11(); plot(mod_cv) \n    coef(mod_cv, c(mod_cv$lambda.min,\n                   mod_cv$lambda.1se))\n    print(paste(mod_cv$lambda.min,\n                log(mod_cv$lambda.min)))\n    print(paste(mod_cv$lambda.1se,\n                log(mod_cv$lambda.1se)))<\/pre>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-estimation-results\"><strong>Estimation Results<\/strong><\/h3>\n\n\n\n<p>The following figure shows the true coefficients (<em>\u03b2<\/em>) with which we generated the data, alongside the estimated coefficients of the three regression models.<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"511\" height=\"957\" data-src=\"\/campus\/wp-content\/uploads\/sites\/2\/2024\/06\/lasso_ridge_reg_output-shlee.png\" alt=\"\" class=\"wp-image-208121 lazyload\" data-srcset=\"https:\/\/ibkrcampus.com\/campus\/wp-content\/uploads\/sites\/2\/2024\/06\/lasso_ridge_reg_output-shlee.png 511w, https:\/\/ibkrcampus.com\/campus\/wp-content\/uploads\/sites\/2\/2024\/06\/lasso_ridge_reg_output-shlee-300x562.png 300w\" data-sizes=\"(max-width: 511px) 100vw, 511px\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" style=\"--smush-placeholder-width: 511px; aspect-ratio: 511\/957;\" \/><\/figure>\n\n\n\n<p>The three models deliver broadly similar estimates despite the randomness in the data-generating process. In particular, Lasso identifies the insignificant or unimportant variables by setting their coefficients to exactly zero. The variable selection and shrinkage effects strengthen as <strong>\u03bb<\/strong> increases. 
The following figures show how the estimated coefficients change with the penalty parameter (<strong>log(\u03bb)<\/strong>), i.e. the shrinkage path.<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"540\" height=\"959\" data-src=\"\/campus\/wp-content\/uploads\/sites\/2\/2024\/06\/lasso_figure-shlee.png\" alt=\"\" class=\"wp-image-208123 lazyload\" data-srcset=\"https:\/\/ibkrcampus.com\/campus\/wp-content\/uploads\/sites\/2\/2024\/06\/lasso_figure-shlee.png 540w, https:\/\/ibkrcampus.com\/campus\/wp-content\/uploads\/sites\/2\/2024\/06\/lasso_figure-shlee-300x533.png 300w\" data-sizes=\"(max-width: 540px) 100vw, 540px\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" style=\"--smush-placeholder-width: 540px; aspect-ratio: 540\/959;\" \/><\/figure>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-model-selection\"><strong>Model Selection<\/strong><\/h3>\n\n\n\n<p>The most important task in Lasso boils down to selecting the optimal <strong>\u03bb<\/strong>, which is determined through cross-validation. 
The cv.glmnet() function in glmnet provides cross-validation results over a suitable range of <strong>\u03bb<\/strong>. Using this output, we can plot <strong>log(\u03bb)<\/strong> against the MSE (mean squared error).<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"540\" height=\"352\" data-src=\"\/campus\/wp-content\/uploads\/sites\/2\/2024\/06\/lasso_mse-shlee.png\" alt=\"\" class=\"wp-image-208124 lazyload\" data-srcset=\"https:\/\/ibkrcampus.com\/campus\/wp-content\/uploads\/sites\/2\/2024\/06\/lasso_mse-shlee.png 540w, https:\/\/ibkrcampus.com\/campus\/wp-content\/uploads\/sites\/2\/2024\/06\/lasso_mse-shlee-300x196.png 300w\" data-sizes=\"(max-width: 540px) 100vw, 540px\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" style=\"--smush-placeholder-width: 540px; aspect-ratio: 540\/352;\" \/><\/figure>\n\n\n\n<p>From the above figure, the first candidate is the <strong>\u03bb<\/strong> at which the minimal MSE is achieved, but a model with this \u03bb is likely to retain many variables. The second candidate is the largest <strong>\u03bb<\/strong> at which the MSE is within one standard error of the minimal MSE. The latter is a somewhat heuristic or empirical approach, but it has the merit of reducing the number of variables. It is typical to choose this second, one-standard-error <strong>\u03bb<\/strong>. 
Still, visual inspection is a very important tool for finding the pattern of the shrinkage process.<\/p>\n\n\n\n<p>The following result reports the estimated coefficients under the MSE-minimizing <strong>\u03bb<\/strong> (lambda.min) and the one-standard-error <strong>\u03bb<\/strong> (lambda.1se), respectively.<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"550\" height=\"787\" data-src=\"\/campus\/wp-content\/uploads\/sites\/2\/2024\/06\/lasso_cv_lambda_min_1se-shlee.png\" alt=\"\" class=\"wp-image-208125 lazyload\" data-srcset=\"https:\/\/ibkrcampus.com\/campus\/wp-content\/uploads\/sites\/2\/2024\/06\/lasso_cv_lambda_min_1se-shlee.png 550w, https:\/\/ibkrcampus.com\/campus\/wp-content\/uploads\/sites\/2\/2024\/06\/lasso_cv_lambda_min_1se-shlee-300x429.png 300w\" data-sizes=\"(max-width: 550px) 100vw, 550px\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" style=\"--smush-placeholder-width: 550px; aspect-ratio: 550\/787;\" \/><\/figure>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-forecast\"><strong>Forecast<\/strong><\/h3>\n\n\n\n<p>After estimating the parameters of the Lasso regression, we can use the model for prediction. Forecasting uses the estimated coefficients only, not the penalty term. In terms of the forecasting method, the Lasso, Ridge, and linear regression models are therefore the same, because the penalty term is used only for estimation.<\/p>\n\n\n\n<p>Building on this post, a sign-restricted Lasso model will be discussed in a later post. Setting constraints on the signs of coefficients is important when economic theory or empirical stylized facts advocate a specific sign.<\/p>\n\n\n\n<p>Tibshirani, Robert (1996). 
&#8220;Regression Shrinkage and Selection via the Lasso,&#8221; Journal of the Royal Statistical Society, Series B, 58(1), 267\u201388.<\/p>\n\n\n\n<p><em>Originally posted on <a href=\"https:\/\/shleeai.blogspot.com\/2021\/05\/lasso-regression-model-with-r-code.html\">SH Fintech Modeling<\/a>.<\/em><\/p>\n","protected":false},"excerpt":{"rendered":"<p>These regression models are called as the regularized or penalized regression model.<\/p>\n","protected":false},"author":662,"featured_media":208131,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[339,343,338,341,342],"tags":[5121,806,1006,17274,17275,10181,17276],"contributors-categories":[13728],"class_list":{"0":"post-208117","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-data-science","8":"category-programing-languages","9":"category-ibkr-quant-news","10":"category-quant-development","11":"category-r-development","12":"tag-big-data","13":"tag-data-science","14":"tag-fintech","15":"tag-glmnet-package","16":"tag-lasso-least-absolute-shrinkage-and-selection-operator-model","17":"tag-regression-model","18":"tag-rss-residual-sum-of-squares","19":"contributors-categories-sh-fintech-modeling"},"pp_statuses_selecting_workflow":false,"pp_workflow_action":"current","pp_status_selection":"publish","acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v26.9 (Yoast SEO v27.3) - https:\/\/yoast.com\/product\/yoast-seo-premium-wordpress\/ -->\n<title>Lasso Regression Model with R Code | IBKR Quant<\/title>\n<meta name=\"description\" content=\"These regression models are called as the regularized or penalized regression model.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" 
href=\"https:\/\/www.interactivebrokers.com\/campus\/wp-json\/wp\/v2\/posts\/208117\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Lasso Regression Model with R Code\" \/>\n<meta property=\"og:description\" content=\"These regression models are called as the regularized or penalized regression model.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.interactivebrokers.com\/campus\/ibkr-quant-news\/lasso-regression-model-with-r-code\/\" \/>\n<meta property=\"og:site_name\" content=\"IBKR Campus US\" \/>\n<meta property=\"article:published_time\" content=\"2024-06-17T15:23:52+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2024-08-13T20:00:35+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/www.interactivebrokers.com\/campus\/wp-content\/uploads\/sites\/2\/2024\/06\/framework-modeling.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1000\" \/>\n\t<meta property=\"og:image:height\" content=\"563\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Sang-Heon Lee\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Sang-Heon Lee\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"6 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\n\t    \"@context\": \"https:\\\/\\\/schema.org\",\n\t    \"@graph\": [\n\t        {\n\t            \"@type\": \"NewsArticle\",\n\t            \"@id\": \"https:\\\/\\\/ibkrcampus.com\\\/campus\\\/ibkr-quant-news\\\/lasso-regression-model-with-r-code\\\/#article\",\n\t            \"isPartOf\": {\n\t                \"@id\": \"https:\\\/\\\/ibkrcampus.com\\\/campus\\\/ibkr-quant-news\\\/lasso-regression-model-with-r-code\\\/\"\n\t            },\n\t            \"author\": {\n\t                \"name\": \"Sang-Heon Lee\",\n\t                \"@id\": \"https:\\\/\\\/ibkrcampus.com\\\/campus\\\/#\\\/schema\\\/person\\\/0a959ff9de7f0465a07baa1fe1ae0200\"\n\t            },\n\t            \"headline\": \"Lasso Regression Model with R Code\",\n\t            \"datePublished\": \"2024-06-17T15:23:52+00:00\",\n\t            \"dateModified\": \"2024-08-13T20:00:35+00:00\",\n\t            \"mainEntityOfPage\": {\n\t                \"@id\": \"https:\\\/\\\/ibkrcampus.com\\\/campus\\\/ibkr-quant-news\\\/lasso-regression-model-with-r-code\\\/\"\n\t            },\n\t            \"wordCount\": 860,\n\t            \"commentCount\": 0,\n\t            \"publisher\": {\n\t                \"@id\": \"https:\\\/\\\/ibkrcampus.com\\\/campus\\\/#organization\"\n\t            },\n\t            \"image\": {\n\t                \"@id\": \"https:\\\/\\\/ibkrcampus.com\\\/campus\\\/ibkr-quant-news\\\/lasso-regression-model-with-r-code\\\/#primaryimage\"\n\t            },\n\t            \"thumbnailUrl\": \"https:\\\/\\\/www.interactivebrokers.com\\\/campus\\\/wp-content\\\/uploads\\\/sites\\\/2\\\/2024\\\/06\\\/framework-modeling.jpg\",\n\t            \"keywords\": [\n\t                \"Big Data\",\n\t                \"Data Science\",\n\t                \"fintech\",\n\t                \"glmnet package\",\n\t                \"LASSO (Least Absolute Shrinkage 
and Selection Operator) model\",\n\t                \"Regression Model\",\n\t                \"RSS (Residual Sum of Squares)\"\n\t            ],\n\t            \"articleSection\": [\n\t                \"Data Science\",\n\t                \"Programming Languages\",\n\t                \"Quant\",\n\t                \"Quant Development\",\n\t                \"R Development\"\n\t            ],\n\t            \"inLanguage\": \"en-US\",\n\t            \"potentialAction\": [\n\t                {\n\t                    \"@type\": \"CommentAction\",\n\t                    \"name\": \"Comment\",\n\t                    \"target\": [\n\t                        \"https:\\\/\\\/ibkrcampus.com\\\/campus\\\/ibkr-quant-news\\\/lasso-regression-model-with-r-code\\\/#respond\"\n\t                    ]\n\t                }\n\t            ]\n\t        },\n\t        {\n\t            \"@type\": \"WebPage\",\n\t            \"@id\": \"https:\\\/\\\/ibkrcampus.com\\\/campus\\\/ibkr-quant-news\\\/lasso-regression-model-with-r-code\\\/\",\n\t            \"url\": \"https:\\\/\\\/ibkrcampus.com\\\/campus\\\/ibkr-quant-news\\\/lasso-regression-model-with-r-code\\\/\",\n\t            \"name\": \"Lasso Regression Model with R Code | IBKR Campus US\",\n\t            \"isPartOf\": {\n\t                \"@id\": \"https:\\\/\\\/ibkrcampus.com\\\/campus\\\/#website\"\n\t            },\n\t            \"primaryImageOfPage\": {\n\t                \"@id\": \"https:\\\/\\\/ibkrcampus.com\\\/campus\\\/ibkr-quant-news\\\/lasso-regression-model-with-r-code\\\/#primaryimage\"\n\t            },\n\t            \"image\": {\n\t                \"@id\": \"https:\\\/\\\/ibkrcampus.com\\\/campus\\\/ibkr-quant-news\\\/lasso-regression-model-with-r-code\\\/#primaryimage\"\n\t            },\n\t            \"thumbnailUrl\": \"https:\\\/\\\/www.interactivebrokers.com\\\/campus\\\/wp-content\\\/uploads\\\/sites\\\/2\\\/2024\\\/06\\\/framework-modeling.jpg\",\n\t            \"datePublished\": 
\"2024-06-17T15:23:52+00:00\",\n\t            \"dateModified\": \"2024-08-13T20:00:35+00:00\",\n\t            \"description\": \"These regression models are called as the regularized or penalized regression model.\",\n\t            \"inLanguage\": \"en-US\",\n\t            \"potentialAction\": [\n\t                {\n\t                    \"@type\": \"ReadAction\",\n\t                    \"target\": [\n\t                        \"https:\\\/\\\/ibkrcampus.com\\\/campus\\\/ibkr-quant-news\\\/lasso-regression-model-with-r-code\\\/\"\n\t                    ]\n\t                }\n\t            ]\n\t        },\n\t        {\n\t            \"@type\": \"ImageObject\",\n\t            \"inLanguage\": \"en-US\",\n\t            \"@id\": \"https:\\\/\\\/ibkrcampus.com\\\/campus\\\/ibkr-quant-news\\\/lasso-regression-model-with-r-code\\\/#primaryimage\",\n\t            \"url\": \"https:\\\/\\\/www.interactivebrokers.com\\\/campus\\\/wp-content\\\/uploads\\\/sites\\\/2\\\/2024\\\/06\\\/framework-modeling.jpg\",\n\t            \"contentUrl\": \"https:\\\/\\\/www.interactivebrokers.com\\\/campus\\\/wp-content\\\/uploads\\\/sites\\\/2\\\/2024\\\/06\\\/framework-modeling.jpg\",\n\t            \"width\": 1000,\n\t            \"height\": 563,\n\t            \"caption\": \"Quant\"\n\t        },\n\t        {\n\t            \"@type\": \"WebSite\",\n\t            \"@id\": \"https:\\\/\\\/ibkrcampus.com\\\/campus\\\/#website\",\n\t            \"url\": \"https:\\\/\\\/ibkrcampus.com\\\/campus\\\/\",\n\t            \"name\": \"IBKR Campus US\",\n\t            \"description\": \"Financial Education from Interactive Brokers\",\n\t            \"publisher\": {\n\t                \"@id\": \"https:\\\/\\\/ibkrcampus.com\\\/campus\\\/#organization\"\n\t            },\n\t            \"potentialAction\": [\n\t                {\n\t                    \"@type\": \"SearchAction\",\n\t                    \"target\": {\n\t                        \"@type\": \"EntryPoint\",\n\t                        
\"urlTemplate\": \"https:\\\/\\\/ibkrcampus.com\\\/campus\\\/?s={search_term_string}\"\n\t                    },\n\t                    \"query-input\": {\n\t                        \"@type\": \"PropertyValueSpecification\",\n\t                        \"valueRequired\": true,\n\t                        \"valueName\": \"search_term_string\"\n\t                    }\n\t                }\n\t            ],\n\t            \"inLanguage\": \"en-US\"\n\t        },\n\t        {\n\t            \"@type\": \"Organization\",\n\t            \"@id\": \"https:\\\/\\\/ibkrcampus.com\\\/campus\\\/#organization\",\n\t            \"name\": \"Interactive Brokers\",\n\t            \"alternateName\": \"IBKR\",\n\t            \"url\": \"https:\\\/\\\/ibkrcampus.com\\\/campus\\\/\",\n\t            \"logo\": {\n\t                \"@type\": \"ImageObject\",\n\t                \"inLanguage\": \"en-US\",\n\t                \"@id\": \"https:\\\/\\\/ibkrcampus.com\\\/campus\\\/#\\\/schema\\\/logo\\\/image\\\/\",\n\t                \"url\": \"https:\\\/\\\/www.interactivebrokers.com\\\/campus\\\/wp-content\\\/uploads\\\/sites\\\/2\\\/2024\\\/05\\\/ibkr-campus-logo.jpg\",\n\t                \"contentUrl\": \"https:\\\/\\\/www.interactivebrokers.com\\\/campus\\\/wp-content\\\/uploads\\\/sites\\\/2\\\/2024\\\/05\\\/ibkr-campus-logo.jpg\",\n\t                \"width\": 669,\n\t                \"height\": 669,\n\t                \"caption\": \"Interactive Brokers\"\n\t            },\n\t            \"image\": {\n\t                \"@id\": \"https:\\\/\\\/ibkrcampus.com\\\/campus\\\/#\\\/schema\\\/logo\\\/image\\\/\"\n\t            },\n\t            \"publishingPrinciples\": \"https:\\\/\\\/www.interactivebrokers.com\\\/campus\\\/about-ibkr-campus\\\/\",\n\t            \"ethicsPolicy\": \"https:\\\/\\\/www.interactivebrokers.com\\\/campus\\\/cyber-security-notice\\\/\"\n\t        },\n\t        {\n\t            \"@type\": \"Person\",\n\t            \"@id\": 
\"https:\\\/\\\/ibkrcampus.com\\\/campus\\\/#\\\/schema\\\/person\\\/0a959ff9de7f0465a07baa1fe1ae0200\",\n\t            \"name\": \"Sang-Heon Lee\",\n\t            \"url\": \"https:\\\/\\\/www.interactivebrokers.com\\\/campus\\\/author\\\/sang-heonlee\\\/\"\n\t        }\n\t    ]\n\t}<\/script>\n<!-- \/ Yoast SEO Premium plugin. -->","yoast_head_json":{"title":"Lasso Regression Model with R Code | IBKR Quant","description":"These regression models are called as the regularized or penalized regression model.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.interactivebrokers.com\/campus\/wp-json\/wp\/v2\/posts\/208117\/","og_locale":"en_US","og_type":"article","og_title":"Lasso Regression Model with R Code","og_description":"These regression models are called as the regularized or penalized regression model.","og_url":"https:\/\/www.interactivebrokers.com\/campus\/ibkr-quant-news\/lasso-regression-model-with-r-code\/","og_site_name":"IBKR Campus US","article_published_time":"2024-06-17T15:23:52+00:00","article_modified_time":"2024-08-13T20:00:35+00:00","og_image":[{"width":1000,"height":563,"url":"https:\/\/www.interactivebrokers.com\/campus\/wp-content\/uploads\/sites\/2\/2024\/06\/framework-modeling.jpg","type":"image\/jpeg"}],"author":"Sang-Heon Lee","twitter_card":"summary_large_image","twitter_misc":{"Written by":"Sang-Heon Lee","Est. 
reading time":"6 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"NewsArticle","@id":"https:\/\/ibkrcampus.com\/campus\/ibkr-quant-news\/lasso-regression-model-with-r-code\/#article","isPartOf":{"@id":"https:\/\/ibkrcampus.com\/campus\/ibkr-quant-news\/lasso-regression-model-with-r-code\/"},"author":{"name":"Sang-Heon Lee","@id":"https:\/\/ibkrcampus.com\/campus\/#\/schema\/person\/0a959ff9de7f0465a07baa1fe1ae0200"},"headline":"Lasso Regression Model with R Code","datePublished":"2024-06-17T15:23:52+00:00","dateModified":"2024-08-13T20:00:35+00:00","mainEntityOfPage":{"@id":"https:\/\/ibkrcampus.com\/campus\/ibkr-quant-news\/lasso-regression-model-with-r-code\/"},"wordCount":860,"commentCount":0,"publisher":{"@id":"https:\/\/ibkrcampus.com\/campus\/#organization"},"image":{"@id":"https:\/\/ibkrcampus.com\/campus\/ibkr-quant-news\/lasso-regression-model-with-r-code\/#primaryimage"},"thumbnailUrl":"https:\/\/www.interactivebrokers.com\/campus\/wp-content\/uploads\/sites\/2\/2024\/06\/framework-modeling.jpg","keywords":["Big Data","Data Science","fintech","glmnet package","LASSO (Least Absolute Shrinkage and Selection Operator) model","Regression Model","RSS (Residual Sum of Squares)"],"articleSection":["Data Science","Programming Languages","Quant","Quant Development","R Development"],"inLanguage":"en-US","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/ibkrcampus.com\/campus\/ibkr-quant-news\/lasso-regression-model-with-r-code\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/ibkrcampus.com\/campus\/ibkr-quant-news\/lasso-regression-model-with-r-code\/","url":"https:\/\/ibkrcampus.com\/campus\/ibkr-quant-news\/lasso-regression-model-with-r-code\/","name":"Lasso Regression Model with R Code | IBKR Campus 
US","isPartOf":{"@id":"https:\/\/ibkrcampus.com\/campus\/#website"},"primaryImageOfPage":{"@id":"https:\/\/ibkrcampus.com\/campus\/ibkr-quant-news\/lasso-regression-model-with-r-code\/#primaryimage"},"image":{"@id":"https:\/\/ibkrcampus.com\/campus\/ibkr-quant-news\/lasso-regression-model-with-r-code\/#primaryimage"},"thumbnailUrl":"https:\/\/www.interactivebrokers.com\/campus\/wp-content\/uploads\/sites\/2\/2024\/06\/framework-modeling.jpg","datePublished":"2024-06-17T15:23:52+00:00","dateModified":"2024-08-13T20:00:35+00:00","description":"These regression models are called as the regularized or penalized regression model.","inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/ibkrcampus.com\/campus\/ibkr-quant-news\/lasso-regression-model-with-r-code\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/ibkrcampus.com\/campus\/ibkr-quant-news\/lasso-regression-model-with-r-code\/#primaryimage","url":"https:\/\/www.interactivebrokers.com\/campus\/wp-content\/uploads\/sites\/2\/2024\/06\/framework-modeling.jpg","contentUrl":"https:\/\/www.interactivebrokers.com\/campus\/wp-content\/uploads\/sites\/2\/2024\/06\/framework-modeling.jpg","width":1000,"height":563,"caption":"Quant"},{"@type":"WebSite","@id":"https:\/\/ibkrcampus.com\/campus\/#website","url":"https:\/\/ibkrcampus.com\/campus\/","name":"IBKR Campus US","description":"Financial Education from Interactive Brokers","publisher":{"@id":"https:\/\/ibkrcampus.com\/campus\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/ibkrcampus.com\/campus\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/ibkrcampus.com\/campus\/#organization","name":"Interactive 
Brokers","alternateName":"IBKR","url":"https:\/\/ibkrcampus.com\/campus\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/ibkrcampus.com\/campus\/#\/schema\/logo\/image\/","url":"https:\/\/www.interactivebrokers.com\/campus\/wp-content\/uploads\/sites\/2\/2024\/05\/ibkr-campus-logo.jpg","contentUrl":"https:\/\/www.interactivebrokers.com\/campus\/wp-content\/uploads\/sites\/2\/2024\/05\/ibkr-campus-logo.jpg","width":669,"height":669,"caption":"Interactive Brokers"},"image":{"@id":"https:\/\/ibkrcampus.com\/campus\/#\/schema\/logo\/image\/"},"publishingPrinciples":"https:\/\/www.interactivebrokers.com\/campus\/about-ibkr-campus\/","ethicsPolicy":"https:\/\/www.interactivebrokers.com\/campus\/cyber-security-notice\/"},{"@type":"Person","@id":"https:\/\/ibkrcampus.com\/campus\/#\/schema\/person\/0a959ff9de7f0465a07baa1fe1ae0200","name":"Sang-Heon Lee","url":"https:\/\/www.interactivebrokers.com\/campus\/author\/sang-heonlee\/"}]}},"jetpack_featured_media_url":"https:\/\/www.interactivebrokers.com\/campus\/wp-content\/uploads\/sites\/2\/2024\/06\/framework-modeling.jpg","_links":{"self":[{"href":"https:\/\/ibkrcampus.com\/campus\/wp-json\/wp\/v2\/posts\/208117","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/ibkrcampus.com\/campus\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/ibkrcampus.com\/campus\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/ibkrcampus.com\/campus\/wp-json\/wp\/v2\/users\/662"}],"replies":[{"embeddable":true,"href":"https:\/\/ibkrcampus.com\/campus\/wp-json\/wp\/v2\/comments?post=208117"}],"version-history":[{"count":0,"href":"https:\/\/ibkrcampus.com\/campus\/wp-json\/wp\/v2\/posts\/208117\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/ibkrcampus.com\/campus\/wp-json\/wp\/v2\/media\/208131"}],"wp:attachment":[{"href":"https:\/\/ibkrcampus.com\/campus\/wp-json\/wp\/v2\/media?parent=208117"}],"wp:term":[{"taxonomy":"category","embeddable":true,"hre
f":"https:\/\/ibkrcampus.com\/campus\/wp-json\/wp\/v2\/categories?post=208117"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/ibkrcampus.com\/campus\/wp-json\/wp\/v2\/tags?post=208117"},{"taxonomy":"contributors-categories","embeddable":true,"href":"https:\/\/ibkrcampus.com\/campus\/wp-json\/wp\/v2\/contributors-categories?post=208117"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}