{"id":236297,"date":"2025-12-19T10:09:23","date_gmt":"2025-12-19T15:09:23","guid":{"rendered":"https:\/\/ibkrcampus.com\/campus\/?p=236297"},"modified":"2025-12-22T04:50:11","modified_gmt":"2025-12-22T09:50:11","slug":"deep-latent-variable-models","status":"publish","type":"post","link":"https:\/\/www.interactivebrokers.com\/campus\/ibkr-quant-news\/deep-latent-variable-models\/","title":{"rendered":"Deep Latent Variable Models"},"content":{"rendered":"\n<p><em>The article &#8220;Deep Latent Variable Models&#8221; was originally published on <a href=\"https:\/\/predictnow.ai\/deep-latent-variable-models\/\">PredictNow.ai<\/a> blog.<\/em><\/p>\n\n\n\n<p>In our previous blog post, we introduced latent variable models, where the latent variable can be thought of as a feature vector that has been \u201cencoded\u201d efficiently. This encoding turns the feature vector X into a context vector z. Latent variable models sound very GenAI-zy, but they descend from models that quant traders have long been familiar with.<\/p>\n\n\n\n<p>No doubt you have heard of PCA or SVD (see Chapter 3 of our&nbsp;<a href=\"https:\/\/www.amazon.com\/Generative-AI-Trading-Asset-Management\/dp\/1394266979?_encoding=UTF8&amp;pd_rd_w=HVGiV&amp;content-id=amzn1.sym.bc3ba8d1-5076-4ab7-9ba8-a5c6211e002d&amp;pf_rd_p=bc3ba8d1-5076-4ab7-9ba8-a5c6211e002d&amp;pf_rd_r=141-8012032-0139843&amp;pd_rd_wg=NnDiy&amp;pd_rd_r=cfae53f5-c62f-478c-8fc1-62ce0fc5b0b6&amp;linkCode=sl1&amp;tag=quantitativet-20&amp;linkId=fcb9e3a2d95a2c546239e8978337e1bd&amp;language=en_US&amp;ref_=as_li_ss_tl\">book<\/a>&nbsp;for a primer)? Principal components or singular vectors are ways to represent returns in terms of a small number of variables. These variables are latent, or hidden, because they are inferred from the observed returns themselves, and not observable like the Fama-French factors such as HML or SMB. The benefit of applying these latent factors to model returns is that we need fewer parameters \u2013 i.e. 
dimensionality reduction. For example, the covariance matrix of 500 stocks\u2019 returns has 125,250 parameters, whereas its 10-principal-component model has only 5,010 parameters. The methods to find these latent factors are diagonalization of the covariance matrix in the PCA case, or singular value decomposition of the \u201cdesign\u201d (data) matrix in the SVD case.<\/p>\n\n\n\n<p>More generally, latent variable models are used to model the probability distributions of the observed features X:<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"645\" height=\"97\" data-src=\"https:\/\/www.interactivebrokers.com\/campus\/wp-content\/uploads\/sites\/2\/2025\/12\/Deep-Latent-Variable-Models-1.jpg\" alt=\"Deep Latent Variable Models\" class=\"wp-image-236301 lazyload\" data-srcset=\"https:\/\/ibkrcampus.com\/campus\/wp-content\/uploads\/sites\/2\/2025\/12\/Deep-Latent-Variable-Models-1.jpg 645w, https:\/\/ibkrcampus.com\/campus\/wp-content\/uploads\/sites\/2\/2025\/12\/Deep-Latent-Variable-Models-1-300x45.jpg 300w\" data-sizes=\"(max-width: 645px) 100vw, 645px\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" style=\"--smush-placeholder-width: 645px; aspect-ratio: 645\/97;\" \/><\/figure>\n\n\n\n<p>In the simplest case, z is just a categorical variable that takes 0 or 1 as its value, with a binomial distribution, and p(X|z) is a Gaussian with parameters that depend on z. (You might think of the \u201ccontext vector\u201d z as&nbsp;<em>encoding&nbsp;<\/em>the information about X in the most compact manner possible: just 0 or 1.) Both the binomial and the Gaussian distributions here have fixed, but unknown, parameters that&nbsp;<em>do not depend<\/em>&nbsp;on X. 
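As a toy numerical sketch of this two-component setup, the mixture density p(X) = (1-p1)\u00b7N(X|\u03bc0,\u03c30\u00b2) + p1\u00b7N(X|\u03bc1,\u03c31\u00b2) can be evaluated directly; the mixture weight and the component means and standard deviations below are made-up illustrative values, not fitted to any data.

```python
import math

def gaussian_pdf(x, mu, sigma):
    """Density of N(mu, sigma**2) at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def mixture_pdf(x, p1, params):
    """p(X) = sum over z in {0, 1} of p(z) * p(X|z), with P(z=1) = p1.

    `params` maps each value of the latent z to its Gaussian's (mu, sigma)."""
    weights = {0: 1.0 - p1, 1: p1}
    return sum(weights[z] * gaussian_pdf(x, *params[z]) for z in (0, 1))

# Illustrative (made-up) parameters: P(z=1)=0.3, components N(-1, 0.5^2) and N(2, 1^2)
params = {0: (-1.0, 0.5), 1: (2.0, 1.0)}
density_at_zero = mixture_pdf(0.0, 0.3, params)
```

Because the weights sum to one and each component is a proper density, the mixture itself integrates to one, which is easy to check on a grid.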
This is called a Gaussian Mixture Model (GMM), and p(X) is written as<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"684\" height=\"86\" data-src=\"https:\/\/www.interactivebrokers.com\/campus\/wp-content\/uploads\/sites\/2\/2025\/12\/Deep-Latent-Variable-Models-2.jpg\" alt=\"Deep Latent Variable Models\" class=\"wp-image-236303 lazyload\" data-srcset=\"https:\/\/ibkrcampus.com\/campus\/wp-content\/uploads\/sites\/2\/2025\/12\/Deep-Latent-Variable-Models-2.jpg 684w, https:\/\/ibkrcampus.com\/campus\/wp-content\/uploads\/sites\/2\/2025\/12\/Deep-Latent-Variable-Models-2-300x38.jpg 300w\" data-sizes=\"(max-width: 684px) 100vw, 684px\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" style=\"--smush-placeholder-width: 684px; aspect-ratio: 684\/86;\" \/><\/figure>\n\n\n\n<p>Here \u03c01 is the probability of z=1, and N(X|\u03bcz, \u03a3z) is a Gaussian with different parameters (\u03bcz, \u03a3z) for each value of z.<\/p>\n\n\n\n<p>In another familiar case, z is no longer independently distributed over time: each zt at time t depends on its previous value zt-1, governed by the transition probabilities aij<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"346\" height=\"38\" data-src=\"https:\/\/www.interactivebrokers.com\/campus\/wp-content\/uploads\/sites\/2\/2025\/12\/Deep-Latent-Variable-Models-3.jpg\" alt=\"Deep Latent Variable Models\" class=\"wp-image-236304 lazyload\" data-srcset=\"https:\/\/ibkrcampus.com\/campus\/wp-content\/uploads\/sites\/2\/2025\/12\/Deep-Latent-Variable-Models-3.jpg 346w, https:\/\/ibkrcampus.com\/campus\/wp-content\/uploads\/sites\/2\/2025\/12\/Deep-Latent-Variable-Models-3-300x33.jpg 300w\" data-sizes=\"(max-width: 346px) 100vw, 346px\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" style=\"--smush-placeholder-width: 346px; aspect-ratio: 346\/38;\" 
\/><\/figure>\n\n\n\n<p>This is the famous HMM (Hidden Markov Model). In an HMM, z still takes on 0 or 1 as values, and p(X|z) is still Gaussian.<\/p>\n\n\n\n<p>We don\u2019t know the actual distribution p(z) of the hidden variable z \u2013 after all, it is hidden! So how do we estimate its probability? Unlike PCA or SVD, the training algorithm used to find these unknown but fixed parameters is the celebrated EM (Expectation-Maximization) algorithm.<\/p>\n\n\n\n<p>In the EM algorithm, as in the more general Variational Inference (VI) algorithm to be described later, the key to training the model is to introduce a proposal distribution q(z|X) (which we called the \u201cencoder\u201d in the VAE framework described in the previous&nbsp;<a href=\"https:\/\/open.substack.com\/pub\/gatambook\/p\/features-selection-in-the-age-of?r=7j9m8&amp;utm_campaign=post&amp;utm_medium=web&amp;showWelcomeOnShare=false\">blog post<\/a>) which approximates p(z|X) instead of p(z). We start by estimating q(z|X) using some arbitrary parameters \u03b8old for p(z|X, \u03b8old), i.e. q(z|X)=p(z|X, \u03b8old). In the EM algorithm framework, this proposal distribution is also variously called the membership probability, the responsibility, the posterior probability, the soft assignment, or the state occupancy probability. 
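The responsibilities and the subsequent parameter re-estimation can be sketched for a two-component 1-D Gaussian mixture. This is the standard textbook EM update, not any particular library's implementation; the synthetic data, initial guesses, and iteration count below are made up for illustration.

```python
import math
import random

def gauss(x, mu, sigma):
    """Density of N(mu, sigma**2) at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def em_step(xs, pi1, mus, sigmas):
    """One EM iteration for a two-component 1-D Gaussian mixture.

    E-step: responsibilities q(z=1|x) = p(z=1) p(x|z=1) / p(x) under the old parameters.
    M-step: re-estimate the mixture weight and each Gaussian's mean and std
    from the responsibility-weighted data."""
    # E-step: membership probability of component 1 for each sample
    resp1 = []
    for x in xs:
        w0 = (1.0 - pi1) * gauss(x, mus[0], sigmas[0])
        w1 = pi1 * gauss(x, mus[1], sigmas[1])
        resp1.append(w1 / (w0 + w1))
    # M-step: weighted maximum-likelihood updates
    n1 = sum(resp1)
    n0 = len(xs) - n1
    new_pi1 = n1 / len(xs)
    new_mus = (sum((1 - r) * x for r, x in zip(resp1, xs)) / n0,
               sum(r * x for r, x in zip(resp1, xs)) / n1)
    new_sigmas = (
        math.sqrt(sum((1 - r) * (x - new_mus[0]) ** 2 for r, x in zip(resp1, xs)) / n0),
        math.sqrt(sum(r * (x - new_mus[1]) ** 2 for r, x in zip(resp1, xs)) / n1),
    )
    return new_pi1, new_mus, new_sigmas

# Synthetic (made-up) data: two clusters at -2 and 3, then EM from a rough guess
rng = random.Random(42)
xs = [rng.gauss(-2.0, 0.5) for _ in range(500)] + [rng.gauss(3.0, 1.0) for _ in range(500)]
pi1, mus, sigmas = 0.5, (-1.0, 1.0), (1.0, 1.0)
for _ in range(50):
    pi1, mus, sigmas = em_step(xs, pi1, mus, sigmas)
```

After a few dozen iterations the estimated means should sit near the true cluster centers, illustrating the rinse-and-repeat convergence described next.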
In our Gaussian mixture case,<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"583\" height=\"75\" data-src=\"https:\/\/www.interactivebrokers.com\/campus\/wp-content\/uploads\/sites\/2\/2025\/12\/Deep-Latent-Variable-Models-4.jpg\" alt=\"Deep Latent Variable Models\" class=\"wp-image-236306 lazyload\" data-srcset=\"https:\/\/ibkrcampus.com\/campus\/wp-content\/uploads\/sites\/2\/2025\/12\/Deep-Latent-Variable-Models-4.jpg 583w, https:\/\/ibkrcampus.com\/campus\/wp-content\/uploads\/sites\/2\/2025\/12\/Deep-Latent-Variable-Models-4-300x39.jpg 300w\" data-sizes=\"(max-width: 583px) 100vw, 583px\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" style=\"--smush-placeholder-width: 583px; aspect-ratio: 583\/75;\" \/><\/figure>\n\n\n\n<p>Notice the expectations computed for the Gaussians. That\u2019s why this is called the E-step. In the VAE framework, you can think of this step as obtaining the output of the encoder.<\/p>\n\n\n\n<p>Now, to find a better \u03b8 in the next iteration, we are supposed to&nbsp;<em>maximize<\/em>&nbsp;the log likelihood LL=log p(X|\u03b8) by varying \u03b8. 
But we don\u2019t know the actual likelihood in Eqn (1) above; we only know an approximation:<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"636\" height=\"47\" data-src=\"https:\/\/www.interactivebrokers.com\/campus\/wp-content\/uploads\/sites\/2\/2025\/12\/Deep-Latent-Variable-Models-5.jpg\" alt=\"Deep Latent Variable Models\" class=\"wp-image-236307 lazyload\" data-srcset=\"https:\/\/ibkrcampus.com\/campus\/wp-content\/uploads\/sites\/2\/2025\/12\/Deep-Latent-Variable-Models-5.jpg 636w, https:\/\/ibkrcampus.com\/campus\/wp-content\/uploads\/sites\/2\/2025\/12\/Deep-Latent-Variable-Models-5-300x22.jpg 300w\" data-sizes=\"(max-width: 636px) 100vw, 636px\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" style=\"--smush-placeholder-width: 636px; aspect-ratio: 636\/47;\" \/><\/figure>\n\n\n\n<p>Heck, we will maximize Q w.r.t. \u03b8 instead. This is the M-step. Next we set \u03b8old to the \u03b8 that was just optimized, and rinse and repeat until convergence (e.g. when Q doesn\u2019t significantly increase anymore). In the VAE framework, you can think of this as a \u201cbackpropagation\u201d step that optimizes the log likelihood function that the \u201cdecoder\u201d p(X|z, \u03b8) generates.<\/p>\n\n\n\n<p>Now for the general deep latent variable model, p and q are still Gaussians, but their parameters are no longer constants. They are&nbsp;<em>sample-specific&nbsp;<\/em>(i.e. they are themselves functions of X). Researchers typically denote the parameters for the encoder q(z|X) as \u03c6 and those for the decoder p(X|z) as \u03b8. These parameters are now the weights and biases of two separate DNNs (deep neural networks). 
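A minimal sketch of this sample-specific parameterization, assuming untrained, randomly initialized toy networks: the class `GaussianHead`, the helper `linear`, and all dimensions below are invented for illustration (each "DNN" here is just a pair of affine maps, and no training loop is shown).

```python
import math
import random

def linear(weights, bias, x):
    """Affine map: `weights` is a list of rows, `x` a list of inputs."""
    return [sum(w * xi for w, xi in zip(row, x)) + b for row, b in zip(weights, bias)]

class GaussianHead:
    """A toy 'DNN' mapping an input vector to sample-specific Gaussian
    parameters (mu, sigma) -- unlike a GMM/HMM, whose parameters are
    fixed for the entire data set."""
    def __init__(self, in_dim, out_dim, rng):
        self.w_mu = [[rng.gauss(0, 0.5) for _ in range(in_dim)] for _ in range(out_dim)]
        self.b_mu = [0.0] * out_dim
        self.w_ls = [[rng.gauss(0, 0.5) for _ in range(in_dim)] for _ in range(out_dim)]
        self.b_ls = [0.0] * out_dim

    def __call__(self, x):
        mu = linear(self.w_mu, self.b_mu, x)
        log_sigma = linear(self.w_ls, self.b_ls, x)
        return mu, [math.exp(ls) for ls in log_sigma]  # exp keeps sigma positive

rng = random.Random(0)
encoder = GaussianHead(in_dim=4, out_dim=2, rng=rng)  # q(z|X): (mu_q, sigma_q) from X
decoder = GaussianHead(in_dim=2, out_dim=4, rng=rng)  # p(X|z): (mu_p, sigma_p) from z

x = [0.1, -0.3, 0.7, 0.2]                                      # one feature vector
mu_q, sigma_q = encoder(x)                                     # sample-specific encoder output
z = [m + s * rng.gauss(0, 1) for m, s in zip(mu_q, sigma_q)]   # reparameterized sample of z
mu_p, sigma_p = decoder(z)                                     # reconstruction distribution for X
```

Feeding a different X produces different (mu_q, sigma_q), which is exactly the sample-specificity the text contrasts with fixed GMM/HMM parameters.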
As shown in the VAE diagram of the previous&nbsp;<a href=\"https:\/\/open.substack.com\/pub\/gatambook\/p\/features-selection-in-the-age-of?r=7j9m8&amp;utm_campaign=post&amp;utm_medium=web&amp;showWelcomeOnShare=false\">blog post<\/a>, the parameters for q are (\u03bcq, \u03c3q)=DNN\u03c6(X) and those for p are (\u03bcp, \u03c3p)=DNN\u03b8(z).<\/p>\n\n\n\n<p>Note the parallel with transformers. Conventional feature selection, like the parameters of a conventional latent variable model (e.g. GMM and HMM), is fixed for the&nbsp;<em>entire&nbsp;<\/em>data set. But transformer-based feature selection, like the parameters of a VAE, is sample-specific, offering much more flexibility. Of course, the price of this flexibility and specificity is that the VAE can no longer be trained by the EM algorithm. It requires a method called Variational Inference (VI) Approximation. Similar to Eqn (2) above, LL=log p(X|\u03b8, \u03c6) can be written as an expectation over q, but this time with an explicit error term DKL:<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" width=\"1100\" height=\"528\" data-src=\"https:\/\/www.interactivebrokers.com\/campus\/wp-content\/uploads\/sites\/2\/2025\/12\/Deep-Latent-Variable-Models-6-1100x528.jpg\" alt=\"Deep Latent Variable Models\" class=\"wp-image-236309 lazyload\" data-srcset=\"https:\/\/ibkrcampus.com\/campus\/wp-content\/uploads\/sites\/2\/2025\/12\/Deep-Latent-Variable-Models-6-1100x528.jpg 1100w, https:\/\/ibkrcampus.com\/campus\/wp-content\/uploads\/sites\/2\/2025\/12\/Deep-Latent-Variable-Models-6-700x336.jpg 700w, https:\/\/ibkrcampus.com\/campus\/wp-content\/uploads\/sites\/2\/2025\/12\/Deep-Latent-Variable-Models-6-300x144.jpg 300w, https:\/\/ibkrcampus.com\/campus\/wp-content\/uploads\/sites\/2\/2025\/12\/Deep-Latent-Variable-Models-6-768x369.jpg 768w, https:\/\/ibkrcampus.com\/campus\/wp-content\/uploads\/sites\/2\/2025\/12\/Deep-Latent-Variable-Models-6.jpg 1456w\" data-sizes=\"(max-width: 1100px) 100vw, 1100px\" 
src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" style=\"--smush-placeholder-width: 1100px; aspect-ratio: 1100\/528;\" \/><\/figure>\n\n\n\n<p class=\"has-text-align-center\">Kigma and Wellington2019 \u201cAn introduction to Variational Autoencoders\u201d<\/p>\n\n\n\n<p>Note the first term, called ELBO (Evidence Lower Bound, pronounced \u201celbow\u201d), is analogous to Q in Eqn (2), except that it is an explicit function of X. The error term is the Kullback-Leibler Divergence \u2013 essentially the difference \u2013 between the proposal distribution q(z|X) and the actual posterior distribution p(z|X). Because DKL0 always, LL \u3093, always, and by maximizing \u3093, we can maximize LL as well. At the same time, as \u3093, increases, DKL goes to zero, and q(z|X) will be closer to p(z|X). In other words, the proposal gets more and more realistic. To maximize \u3093, which is the loss function of the encoder-decoder network DNN(X) and DNN(z), we can apply SGD (stochastic gradient descent) on both and simultaneously. We will also need to assume a simple Gaussian prior p(z)=(0, I). For more details on training, see Chapter 6 of our&nbsp;<a href=\"https:\/\/www.amazon.com\/Generative-AI-Trading-Asset-Management\/dp\/1394266979?_encoding=UTF8&amp;pd_rd_w=HVGiV&amp;content-id=amzn1.sym.bc3ba8d1-5076-4ab7-9ba8-a5c6211e002d&amp;pf_rd_p=bc3ba8d1-5076-4ab7-9ba8-a5c6211e002d&amp;pf_rd_r=141-8012032-0139843&amp;pd_rd_wg=NnDiy&amp;pd_rd_r=cfae53f5-c62f-478c-8fc1-62ce0fc5b0b6&amp;linkCode=sl1&amp;tag=quantitativet-20&amp;linkId=fcb9e3a2d95a2c546239e8978337e1bd&amp;language=en_US&amp;ref_=as_li_ss_tl\">book<\/a>.<\/p>\n\n\n\n<p>The entire process of training a VAE (encoder+decoder), just as in training a GMM or HMM, is unsupervised \u2013 no labels are needed. 
As we mentioned in the previous blog post, this allows us to<em>&nbsp;pre-train<\/em>&nbsp;a VAE using a vast amount of unlabeled data that has perhaps only some relevance to the labeled data at hand. Once the VAE is pre-trained, we can use z (a sample of the output of the encoder) as features to train a supervised model for classification or regression. Other ways of using, training, or fine-tuning the VAE were explained in that blog post as well.<\/p>\n\n\n\n<p>In summary, we see that the VAE is really a generalization of the more familiar latent variable models like the GMM and HMM, except that here the parameters of the distributions q(z|X) and p(X|z) themselves depend on the input sample X. This allows for much more flexibility in modeling real-world data, just as the transformer allows for sample-specific feature selection. The price to pay for this flexibility is that there are many more parameters (the weights and biases of the encoder and decoder) to fit, and we need much more data to fit them. But the saving grace is that we only need unlabeled training data, which is abundant in most domains, including finance.<\/p>\n\n\n\n<p><em>Visit <a href=\"https:\/\/predictnow.ai\/deep-latent-variable-models\/\">PredictNow.ai<\/a>\u00a0for additional insights on this topic.<\/em><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Principal components or singular vectors are ways to represent returns in terms of a small number of variables. 
<\/p>\n","protected":false},"author":186,"featured_media":228756,"comment_status":"open","ping_status":"closed","sticky":true,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[339,338,341],"tags":[],"contributors-categories":[13719],"class_list":{"0":"post-236297","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-data-science","8":"category-ibkr-quant-news","9":"category-quant-development","10":"contributors-categories-predictnow-ai"},"pp_statuses_selecting_workflow":false,"pp_workflow_action":"current","pp_status_selection":"publish","acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v26.9 (Yoast SEO v27.4) - https:\/\/yoast.com\/product\/yoast-seo-premium-wordpress\/ -->\n<title>Deep Latent Variable Models | IBKR Quant<\/title>\n<meta name=\"description\" content=\"Principal components or singular vectors are ways to represent returns in terms of a small number of variables.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.interactivebrokers.com\/campus\/wp-json\/wp\/v2\/posts\/236297\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Deep Latent Variable Models\" \/>\n<meta property=\"og:description\" content=\"Principal components or singular vectors are ways to represent returns in terms of a small number of variables.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.interactivebrokers.com\/campus\/ibkr-quant-news\/deep-latent-variable-models\/\" \/>\n<meta property=\"og:site_name\" content=\"IBKR Campus US\" \/>\n<meta property=\"article:published_time\" content=\"2025-12-19T15:09:23+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2025-12-22T09:50:11+00:00\" \/>\n<meta 
property=\"og:image\" content=\"https:\/\/www.interactivebrokers.com\/campus\/wp-content\/uploads\/sites\/2\/2025\/08\/ai-concept-artificial-intelligence-featured-img.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1000\" \/>\n\t<meta property=\"og:image:height\" content=\"563\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Contributor Author\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Contributor Author\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"7 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\n\t    \"@context\": \"https:\\\/\\\/schema.org\",\n\t    \"@graph\": [\n\t        {\n\t            \"@type\": \"NewsArticle\",\n\t            \"@id\": \"https:\\\/\\\/ibkrcampus.com\\\/campus\\\/ibkr-quant-news\\\/deep-latent-variable-models\\\/#article\",\n\t            \"isPartOf\": {\n\t                \"@id\": \"https:\\\/\\\/ibkrcampus.com\\\/campus\\\/ibkr-quant-news\\\/deep-latent-variable-models\\\/\"\n\t            },\n\t            \"author\": {\n\t                \"name\": \"Contributor Author\",\n\t                \"@id\": \"https:\\\/\\\/ibkrcampus.com\\\/campus\\\/#\\\/schema\\\/person\\\/e823e46b42ca381080387e794318a485\"\n\t            },\n\t            \"headline\": \"Deep Latent Variable Models\",\n\t            \"datePublished\": \"2025-12-19T15:09:23+00:00\",\n\t            \"dateModified\": \"2025-12-22T09:50:11+00:00\",\n\t            \"mainEntityOfPage\": {\n\t                \"@id\": \"https:\\\/\\\/ibkrcampus.com\\\/campus\\\/ibkr-quant-news\\\/deep-latent-variable-models\\\/\"\n\t            },\n\t            \"wordCount\": 1259,\n\t            \"commentCount\": 0,\n\t            \"publisher\": {\n\t                \"@id\": 
\"https:\\\/\\\/ibkrcampus.com\\\/campus\\\/#organization\"\n\t            },\n\t            \"image\": {\n\t                \"@id\": \"https:\\\/\\\/ibkrcampus.com\\\/campus\\\/ibkr-quant-news\\\/deep-latent-variable-models\\\/#primaryimage\"\n\t            },\n\t            \"thumbnailUrl\": \"https:\\\/\\\/www.interactivebrokers.com\\\/campus\\\/wp-content\\\/uploads\\\/sites\\\/2\\\/2025\\\/08\\\/ai-concept-artificial-intelligence-featured-img.jpg\",\n\t            \"articleSection\": [\n\t                \"Data Science\",\n\t                \"Quant\",\n\t                \"Quant Development\"\n\t            ],\n\t            \"inLanguage\": \"en-US\",\n\t            \"potentialAction\": [\n\t                {\n\t                    \"@type\": \"CommentAction\",\n\t                    \"name\": \"Comment\",\n\t                    \"target\": [\n\t                        \"https:\\\/\\\/ibkrcampus.com\\\/campus\\\/ibkr-quant-news\\\/deep-latent-variable-models\\\/#respond\"\n\t                    ]\n\t                }\n\t            ]\n\t        },\n\t        {\n\t            \"@type\": \"WebPage\",\n\t            \"@id\": \"https:\\\/\\\/ibkrcampus.com\\\/campus\\\/ibkr-quant-news\\\/deep-latent-variable-models\\\/\",\n\t            \"url\": \"https:\\\/\\\/ibkrcampus.com\\\/campus\\\/ibkr-quant-news\\\/deep-latent-variable-models\\\/\",\n\t            \"name\": \"Deep Latent Variable Models | IBKR Campus US\",\n\t            \"isPartOf\": {\n\t                \"@id\": \"https:\\\/\\\/ibkrcampus.com\\\/campus\\\/#website\"\n\t            },\n\t            \"primaryImageOfPage\": {\n\t                \"@id\": \"https:\\\/\\\/ibkrcampus.com\\\/campus\\\/ibkr-quant-news\\\/deep-latent-variable-models\\\/#primaryimage\"\n\t            },\n\t            \"image\": {\n\t                \"@id\": \"https:\\\/\\\/ibkrcampus.com\\\/campus\\\/ibkr-quant-news\\\/deep-latent-variable-models\\\/#primaryimage\"\n\t            },\n\t            \"thumbnailUrl\": 
\"https:\\\/\\\/www.interactivebrokers.com\\\/campus\\\/wp-content\\\/uploads\\\/sites\\\/2\\\/2025\\\/08\\\/ai-concept-artificial-intelligence-featured-img.jpg\",\n\t            \"datePublished\": \"2025-12-19T15:09:23+00:00\",\n\t            \"dateModified\": \"2025-12-22T09:50:11+00:00\",\n\t            \"description\": \"Principal components or singular vectors are ways to represent returns in terms of a small number of variables.\",\n\t            \"inLanguage\": \"en-US\",\n\t            \"potentialAction\": [\n\t                {\n\t                    \"@type\": \"ReadAction\",\n\t                    \"target\": [\n\t                        \"https:\\\/\\\/ibkrcampus.com\\\/campus\\\/ibkr-quant-news\\\/deep-latent-variable-models\\\/\"\n\t                    ]\n\t                }\n\t            ]\n\t        },\n\t        {\n\t            \"@type\": \"ImageObject\",\n\t            \"inLanguage\": \"en-US\",\n\t            \"@id\": \"https:\\\/\\\/ibkrcampus.com\\\/campus\\\/ibkr-quant-news\\\/deep-latent-variable-models\\\/#primaryimage\",\n\t            \"url\": \"https:\\\/\\\/www.interactivebrokers.com\\\/campus\\\/wp-content\\\/uploads\\\/sites\\\/2\\\/2025\\\/08\\\/ai-concept-artificial-intelligence-featured-img.jpg\",\n\t            \"contentUrl\": \"https:\\\/\\\/www.interactivebrokers.com\\\/campus\\\/wp-content\\\/uploads\\\/sites\\\/2\\\/2025\\\/08\\\/ai-concept-artificial-intelligence-featured-img.jpg\",\n\t            \"width\": 1000,\n\t            \"height\": 563\n\t        },\n\t        {\n\t            \"@type\": \"WebSite\",\n\t            \"@id\": \"https:\\\/\\\/ibkrcampus.com\\\/campus\\\/#website\",\n\t            \"url\": \"https:\\\/\\\/ibkrcampus.com\\\/campus\\\/\",\n\t            \"name\": \"IBKR Campus US\",\n\t            \"description\": \"Financial Education from Interactive Brokers\",\n\t            \"publisher\": {\n\t                \"@id\": \"https:\\\/\\\/ibkrcampus.com\\\/campus\\\/#organization\"\n\t            },\n\t    
        \"potentialAction\": [\n\t                {\n\t                    \"@type\": \"SearchAction\",\n\t                    \"target\": {\n\t                        \"@type\": \"EntryPoint\",\n\t                        \"urlTemplate\": \"https:\\\/\\\/ibkrcampus.com\\\/campus\\\/?s={search_term_string}\"\n\t                    },\n\t                    \"query-input\": {\n\t                        \"@type\": \"PropertyValueSpecification\",\n\t                        \"valueRequired\": true,\n\t                        \"valueName\": \"search_term_string\"\n\t                    }\n\t                }\n\t            ],\n\t            \"inLanguage\": \"en-US\"\n\t        },\n\t        {\n\t            \"@type\": \"Organization\",\n\t            \"@id\": \"https:\\\/\\\/ibkrcampus.com\\\/campus\\\/#organization\",\n\t            \"name\": \"Interactive Brokers\",\n\t            \"alternateName\": \"IBKR\",\n\t            \"url\": \"https:\\\/\\\/ibkrcampus.com\\\/campus\\\/\",\n\t            \"logo\": {\n\t                \"@type\": \"ImageObject\",\n\t                \"inLanguage\": \"en-US\",\n\t                \"@id\": \"https:\\\/\\\/ibkrcampus.com\\\/campus\\\/#\\\/schema\\\/logo\\\/image\\\/\",\n\t                \"url\": \"https:\\\/\\\/www.interactivebrokers.com\\\/campus\\\/wp-content\\\/uploads\\\/sites\\\/2\\\/2024\\\/05\\\/ibkr-campus-logo.jpg\",\n\t                \"contentUrl\": \"https:\\\/\\\/www.interactivebrokers.com\\\/campus\\\/wp-content\\\/uploads\\\/sites\\\/2\\\/2024\\\/05\\\/ibkr-campus-logo.jpg\",\n\t                \"width\": 669,\n\t                \"height\": 669,\n\t                \"caption\": \"Interactive Brokers\"\n\t            },\n\t            \"image\": {\n\t                \"@id\": \"https:\\\/\\\/ibkrcampus.com\\\/campus\\\/#\\\/schema\\\/logo\\\/image\\\/\"\n\t            },\n\t            \"publishingPrinciples\": \"https:\\\/\\\/www.interactivebrokers.com\\\/campus\\\/about-ibkr-campus\\\/\",\n\t            
\"ethicsPolicy\": \"https:\\\/\\\/www.interactivebrokers.com\\\/campus\\\/cyber-security-notice\\\/\"\n\t        },\n\t        {\n\t            \"@type\": \"Person\",\n\t            \"@id\": \"https:\\\/\\\/ibkrcampus.com\\\/campus\\\/#\\\/schema\\\/person\\\/e823e46b42ca381080387e794318a485\",\n\t            \"name\": \"Contributor Author\",\n\t            \"url\": \"https:\\\/\\\/www.interactivebrokers.com\\\/campus\\\/author\\\/contributor-author\\\/\"\n\t        }\n\t    ]\n\t}<\/script>\n<!-- \/ Yoast SEO Premium plugin. -->","yoast_head_json":{"title":"Deep Latent Variable Models | IBKR Quant","description":"Principal components or singular vectors are ways to represent returns in terms of a small number of variables.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.interactivebrokers.com\/campus\/wp-json\/wp\/v2\/posts\/236297\/","og_locale":"en_US","og_type":"article","og_title":"Deep Latent Variable Models","og_description":"Principal components or singular vectors are ways to represent returns in terms of a small number of variables.","og_url":"https:\/\/www.interactivebrokers.com\/campus\/ibkr-quant-news\/deep-latent-variable-models\/","og_site_name":"IBKR Campus US","article_published_time":"2025-12-19T15:09:23+00:00","article_modified_time":"2025-12-22T09:50:11+00:00","og_image":[{"width":1000,"height":563,"url":"https:\/\/www.interactivebrokers.com\/campus\/wp-content\/uploads\/sites\/2\/2025\/08\/ai-concept-artificial-intelligence-featured-img.jpg","type":"image\/jpeg"}],"author":"Contributor Author","twitter_card":"summary_large_image","twitter_misc":{"Written by":"Contributor Author","Est. 
reading time":"7 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"NewsArticle","@id":"https:\/\/ibkrcampus.com\/campus\/ibkr-quant-news\/deep-latent-variable-models\/#article","isPartOf":{"@id":"https:\/\/ibkrcampus.com\/campus\/ibkr-quant-news\/deep-latent-variable-models\/"},"author":{"name":"Contributor Author","@id":"https:\/\/ibkrcampus.com\/campus\/#\/schema\/person\/e823e46b42ca381080387e794318a485"},"headline":"Deep Latent Variable Models","datePublished":"2025-12-19T15:09:23+00:00","dateModified":"2025-12-22T09:50:11+00:00","mainEntityOfPage":{"@id":"https:\/\/ibkrcampus.com\/campus\/ibkr-quant-news\/deep-latent-variable-models\/"},"wordCount":1259,"commentCount":0,"publisher":{"@id":"https:\/\/ibkrcampus.com\/campus\/#organization"},"image":{"@id":"https:\/\/ibkrcampus.com\/campus\/ibkr-quant-news\/deep-latent-variable-models\/#primaryimage"},"thumbnailUrl":"https:\/\/www.interactivebrokers.com\/campus\/wp-content\/uploads\/sites\/2\/2025\/08\/ai-concept-artificial-intelligence-featured-img.jpg","articleSection":["Data Science","Quant","Quant Development"],"inLanguage":"en-US","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/ibkrcampus.com\/campus\/ibkr-quant-news\/deep-latent-variable-models\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/ibkrcampus.com\/campus\/ibkr-quant-news\/deep-latent-variable-models\/","url":"https:\/\/ibkrcampus.com\/campus\/ibkr-quant-news\/deep-latent-variable-models\/","name":"Deep Latent Variable Models | IBKR Campus 
US","isPartOf":{"@id":"https:\/\/ibkrcampus.com\/campus\/#website"},"primaryImageOfPage":{"@id":"https:\/\/ibkrcampus.com\/campus\/ibkr-quant-news\/deep-latent-variable-models\/#primaryimage"},"image":{"@id":"https:\/\/ibkrcampus.com\/campus\/ibkr-quant-news\/deep-latent-variable-models\/#primaryimage"},"thumbnailUrl":"https:\/\/www.interactivebrokers.com\/campus\/wp-content\/uploads\/sites\/2\/2025\/08\/ai-concept-artificial-intelligence-featured-img.jpg","datePublished":"2025-12-19T15:09:23+00:00","dateModified":"2025-12-22T09:50:11+00:00","description":"Principal components or singular vectors are ways to represent returns in terms of a small number of variables.","inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/ibkrcampus.com\/campus\/ibkr-quant-news\/deep-latent-variable-models\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/ibkrcampus.com\/campus\/ibkr-quant-news\/deep-latent-variable-models\/#primaryimage","url":"https:\/\/www.interactivebrokers.com\/campus\/wp-content\/uploads\/sites\/2\/2025\/08\/ai-concept-artificial-intelligence-featured-img.jpg","contentUrl":"https:\/\/www.interactivebrokers.com\/campus\/wp-content\/uploads\/sites\/2\/2025\/08\/ai-concept-artificial-intelligence-featured-img.jpg","width":1000,"height":563},{"@type":"WebSite","@id":"https:\/\/ibkrcampus.com\/campus\/#website","url":"https:\/\/ibkrcampus.com\/campus\/","name":"IBKR Campus US","description":"Financial Education from Interactive Brokers","publisher":{"@id":"https:\/\/ibkrcampus.com\/campus\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/ibkrcampus.com\/campus\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/ibkrcampus.com\/campus\/#organization","name":"Interactive 
Brokers","alternateName":"IBKR","url":"https:\/\/ibkrcampus.com\/campus\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/ibkrcampus.com\/campus\/#\/schema\/logo\/image\/","url":"https:\/\/www.interactivebrokers.com\/campus\/wp-content\/uploads\/sites\/2\/2024\/05\/ibkr-campus-logo.jpg","contentUrl":"https:\/\/www.interactivebrokers.com\/campus\/wp-content\/uploads\/sites\/2\/2024\/05\/ibkr-campus-logo.jpg","width":669,"height":669,"caption":"Interactive Brokers"},"image":{"@id":"https:\/\/ibkrcampus.com\/campus\/#\/schema\/logo\/image\/"},"publishingPrinciples":"https:\/\/www.interactivebrokers.com\/campus\/about-ibkr-campus\/","ethicsPolicy":"https:\/\/www.interactivebrokers.com\/campus\/cyber-security-notice\/"},{"@type":"Person","@id":"https:\/\/ibkrcampus.com\/campus\/#\/schema\/person\/e823e46b42ca381080387e794318a485","name":"Contributor Author","url":"https:\/\/www.interactivebrokers.com\/campus\/author\/contributor-author\/"}]}},"jetpack_featured_media_url":"https:\/\/www.interactivebrokers.com\/campus\/wp-content\/uploads\/sites\/2\/2025\/08\/ai-concept-artificial-intelligence-featured-img.jpg","_links":{"self":[{"href":"https:\/\/ibkrcampus.com\/campus\/wp-json\/wp\/v2\/posts\/236297","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/ibkrcampus.com\/campus\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/ibkrcampus.com\/campus\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/ibkrcampus.com\/campus\/wp-json\/wp\/v2\/users\/186"}],"replies":[{"embeddable":true,"href":"https:\/\/ibkrcampus.com\/campus\/wp-json\/wp\/v2\/comments?post=236297"}],"version-history":[{"count":0,"href":"https:\/\/ibkrcampus.com\/campus\/wp-json\/wp\/v2\/posts\/236297\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/ibkrcampus.com\/campus\/wp-json\/wp\/v2\/media\/228756"}],"wp:attachment":[{"href":"https:\/\/ibkrcampus.com\/campus\/wp-json\/wp\/v2\/media?parent=236297"}],"wp:term":[{"tax
onomy":"category","embeddable":true,"href":"https:\/\/ibkrcampus.com\/campus\/wp-json\/wp\/v2\/categories?post=236297"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/ibkrcampus.com\/campus\/wp-json\/wp\/v2\/tags?post=236297"},{"taxonomy":"contributors-categories","embeddable":true,"href":"https:\/\/ibkrcampus.com\/campus\/wp-json\/wp\/v2\/contributors-categories?post=236297"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}