{"id":223003,"date":"2025-04-30T12:38:30","date_gmt":"2025-04-30T16:38:30","guid":{"rendered":"https:\/\/ibkrcampus.com\/campus\/?p=223003"},"modified":"2025-05-01T07:11:19","modified_gmt":"2025-05-01T11:11:19","slug":"bayesian-inference-methods-and-formula-explained","status":"publish","type":"post","link":"https:\/\/www.interactivebrokers.com\/campus\/ibkr-quant-news\/bayesian-inference-methods-and-formula-explained\/","title":{"rendered":"Bayesian Inference Methods and Formula Explained"},"content":{"rendered":"\n<p><em>The article &#8220;Bayesian Inference Methods and Formula Explained&#8221; was originally published on <a href=\"https:\/\/blog.quantinsti.com\/bayesian-inference\/\">QuantInsti<\/a> blog.<\/em><\/p>\n\n\n\n<p>This post on Bayesian inference is the second of a multi-part series on Bayesian statistics and methods used in quantitative finance.<\/p>\n\n\n\n<p>In my previous&nbsp;<a href=\"https:\/\/blog.quantinsti.com\/introduction-to-bayesian-statistics-in-finance\" target=\"_blank\" rel=\"noreferrer noopener\">post<\/a>, I gave a leisurely introduction to Bayesian statistics and while doing so distinguished between the frequentist and the Bayesian outlook of the world. I dwelt on how each of their underlying philosophies influenced their analysis of various probabilistic phenomena. I then discussed the Bayes&#8217; Theorem along with some illustrations to help lay the building blocks of Bayesian statistics.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-intent-of-this-post\"><strong>Intent of this Post<\/strong><\/h2>\n\n\n\n<p>My objective here is to help develop a deeper understanding of statistical analysis by focusing on the methodologies adopted by frequentist statistics and Bayesian statistics. 
I consciously choose to tackle the programming and simulation aspects using Python in my next post.<\/p>\n\n\n\n<p>I now instantiate the previously discussed ideas with a simple coin-tossing example adapted from &#8220;Introduction to Bayesian Econometrics (2nd Edition)&#8221;.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-example-a-repeated-coin-tossing-experiment\">Example: A Repeated Coin-Tossing Experiment<\/h3>\n\n\n\n<p>Suppose we are interested in estimating the bias of a coin whose fairness is unknown.&nbsp;<strong>We define \u03b8 (the Greek letter &#8216;theta&#8217;) as the probability of getting a head after a coin is tossed.<\/strong>&nbsp;\u03b8 is the unknown parameter we want to estimate. We intend to do so by inspecting the results of tossing the coin multiple times. Let us denote y as a realization of the random variable Y (representing the outcome of a coin toss). Let&nbsp;<strong>Y=1<\/strong>&nbsp;if a coin toss results in heads and&nbsp;<strong>Y=0<\/strong>&nbsp;if a coin toss results in tails. Essentially, we are assigning 1 to heads and 0 to tails.<\/p>\n\n\n\n<p>\u2234&nbsp;<strong>P(Y=1|\u03b8)=\u03b8 ; P(Y=0|\u03b8)=1\u2212\u03b8<\/strong><\/p>\n\n\n\n<p>Based on our above setup, Y can be modelled as a&nbsp;<a href=\"https:\/\/en.wikipedia.org\/wiki\/Bernoulli_distribution\" target=\"_blank\" rel=\"noreferrer noopener\">Bernoulli distribution<\/a>&nbsp;which we denote as<\/p>\n\n\n\n<p><strong>Y \u223c Bernoulli (\u03b8)<\/strong><\/p>\n\n\n\n<p>I now briefly view our experimental setup through the lens of the frequentist and the Bayesian before proceeding with our estimation of the unknown parameter&nbsp;<strong>\u03b8<\/strong>.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Two Perspectives on the Experiment Setup<\/strong><\/h2>\n\n\n\n<p>In classical statistics (i.e. 
the frequentist approach), our parameter&nbsp;<strong>\u03b8<\/strong>&nbsp;is a fixed but unknown value lying between&nbsp;<strong>0<\/strong>&nbsp;and&nbsp;<strong>1<\/strong>. The data we collect is one realization of a recurrent experiment (i.e. we could repeat this&nbsp;<strong>n<\/strong>-toss experiment, say,&nbsp;<strong>N<\/strong>&nbsp;times). Classical estimation techniques like the method of maximum likelihood are used to arrive at \u03b8\u0302 (called &#8216;theta hat&#8217;), an estimate of the unknown parameter&nbsp;<strong>\u03b8<\/strong>. In statistics, we usually express an estimate by putting a hat over the name of the parameter. I expand on this idea in the next section. To restate what has been said previously, in the frequentist universe, the parameter is fixed but the data varies.<\/p>\n\n\n\n<p>Bayesian statistics is fundamentally different. Here, there is uncertainty about the value of the parameter \u03b8, so we treat it as a random variable with an associated&nbsp;<a target=\"_blank\" href=\"https:\/\/blog.quantinsti.com\/statistics-probability-distribution\/\" rel=\"noreferrer noopener\">probability distribution<\/a>. 
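The frequentist picture of "fixed parameter, varying data" can be simulated directly. The author defers Python to the next post, so the sketch below is only a minimal standard-library illustration in that spirit; the true bias `theta_true`, the tosses per experiment `n` and the number of repetitions `N` are illustrative choices, not values from the text.

```python
import random

random.seed(42)  # reproducible illustration

theta_true = 0.6   # the fixed but unknown bias (illustrative value)
n, N = 8, 10_000   # tosses per experiment, number of repeated experiments

# Each repetition yields a fresh data set y = (y1, ..., yn):
# the parameter stays fixed while the data varies across repetitions.
sample_proportions = []
for _ in range(N):
    y = [1 if random.random() < theta_true else 0 for _ in range(n)]
    sample_proportions.append(sum(y) / n)

mean_proportion = sum(sample_proportions) / N
print(mean_proportion)  # clusters around theta_true as N grows
```

Across the N repetitions, the sample proportion of heads fluctuates from experiment to experiment, but its long-run average settles near the fixed \u03b8, which is exactly the frequentist notion of probability.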
In order to apply Bayesian inference, we turn our attention to one of the fundamental laws of probability theory,&nbsp;<a href=\"https:\/\/en.wikipedia.org\/wiki\/Bayes%27_theorem\" target=\"_blank\" rel=\"noreferrer noopener\">Bayes&#8217; Theorem<\/a>&nbsp;that we had seen previously.<\/p>\n\n\n\n<p>I use the mathematical form of Bayes&#8217; Theorem as a way to establish a connection with Bayesian inference.<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"384\" height=\"123\" data-src=\"https:\/\/www.interactivebrokers.com\/campus\/wp-content\/uploads\/sites\/2\/2025\/04\/quantinsti-bayesian-inference-1.png\" alt=\"Bayesian Inference Methods\" class=\"wp-image-223042 lazyload\" data-srcset=\"https:\/\/ibkrcampus.com\/campus\/wp-content\/uploads\/sites\/2\/2025\/04\/quantinsti-bayesian-inference-1.png 384w, https:\/\/ibkrcampus.com\/campus\/wp-content\/uploads\/sites\/2\/2025\/04\/quantinsti-bayesian-inference-1-300x96.png 300w\" data-sizes=\"(max-width: 384px) 100vw, 384px\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" style=\"--smush-placeholder-width: 384px; aspect-ratio: 384\/123;\" \/><\/figure>\n\n\n\n<p>To repeat what I said in my previous post, what makes this theorem so handy is it allows us to&nbsp;<strong>invert a conditional probability<\/strong>. So if we observe a phenomenon and collect data or evidence about it, the theorem helps us analytically define the&nbsp;<em>conditional probability of different possible causes given the evidence.<\/em><\/p>\n\n\n\n<p>Let&#8217;s now apply this to our example by using the notations we had defined earlier. I label&nbsp;<strong>A = \u03b8<\/strong>&nbsp;and&nbsp;<strong>B = y<\/strong>. 
In the field of Bayesian statistics, there are special names used for each of these terms which I spell out below and use subsequently.&nbsp;<a href=\"https:\/\/blog.quantinsti.com\/bayesian-inference\/#one\">(1)<\/a>&nbsp;can be rewritten as:<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"340\" height=\"108\" data-src=\"https:\/\/www.interactivebrokers.com\/campus\/wp-content\/uploads\/sites\/2\/2025\/04\/quantinsti-bayesian-inference-updated-2.png\" alt=\"Bayesian Inference Methods\" class=\"wp-image-223043 lazyload\" data-srcset=\"https:\/\/ibkrcampus.com\/campus\/wp-content\/uploads\/sites\/2\/2025\/04\/quantinsti-bayesian-inference-updated-2.png 340w, https:\/\/ibkrcampus.com\/campus\/wp-content\/uploads\/sites\/2\/2025\/04\/quantinsti-bayesian-inference-updated-2-300x95.png 300w\" data-sizes=\"(max-width: 340px) 100vw, 340px\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" style=\"--smush-placeholder-width: 340px; aspect-ratio: 340\/108;\" \/><\/figure>\n\n\n\n<p>where:<\/p>\n\n\n\n<p><strong>P(\u03b8)<\/strong>&nbsp;is the&nbsp;<strong>prior probability<\/strong>. We express our belief about the cause&nbsp;<strong>\u03b8<\/strong>&nbsp;BEFORE observing the evidence&nbsp;<strong>Y<\/strong>. In our example, the prior would be quantifying our a priori belief on the fairness of the coin (here we can start with the assumption that it is an unbiased coin, so \u03b8 = 1\/2).&nbsp;<strong>P(Y|\u03b8)<\/strong>&nbsp;is the&nbsp;<strong>likelihood<\/strong>. Here is where the real action happens. This is the probability of the observed sample or evidence given the hypothesized cause. Let us, without loss of generality, assume that we obtain 5 heads in 8 coin tosses. 
Presuming the coin to be unbiased as specified above, the likelihood would be the probability of observing 5 heads in 8 tosses given that \u03b8 = 1\/2.&nbsp;<strong>P(\u03b8|Y)<\/strong>&nbsp;is the&nbsp;<strong>posterior probability<\/strong>. This is the probability of the underlying cause \u03b8 AFTER observing the evidence y. Here, we compute our updated or a posteriori belief on the bias of the coin after observing 5 heads in 8 coin tosses using Bayes&#8217; theorem.&nbsp;<strong>P(Y)<\/strong>&nbsp;is the&nbsp;<strong>probability of the data or evidence<\/strong>. We sometimes also call this the marginal likelihood. This is obtained by taking the weighted sum (or integral) of the likelihood of the evidence across all possible values of \u03b8. In our example, we would compute the probability of 5 heads in 8 coin tosses for all possible beliefs about \u03b8. This term is used to normalize the posterior probability. Since it is independent of the parameter \u03b8 that we are estimating, it is mathematically more tractable to express the posterior probability as:<\/p>\n\n\n\n<p><strong>P(\u03b8|Y) \u221d P(Y|\u03b8) \u00d7 P(\u03b8)&nbsp;<\/strong>\u2026\u2026.(3)<\/p>\n\n\n\n<p><a href=\"https:\/\/blog.quantinsti.com\/bayesian-inference\/#three\">(3)<\/a>&nbsp;<strong>is the most important expression in Bayesian statistics<\/strong>&nbsp;and bears repeating. For clarity, I paraphrase what I said earlier: Bayesian inference allows us to invert conditional probabilities, i.e. it uses the prior probabilities and the likelihood functions to arrive at the posterior probabilities&nbsp;<strong>P(\u03b8|Y)<\/strong>, granted that we only know the likelihood&nbsp;<strong>P(Y|\u03b8)<\/strong>&nbsp;and the prior&nbsp;<strong>P(\u03b8)<\/strong>. 
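To make the prior-likelihood-posterior machinery concrete, here is a minimal standard-library sketch that applies P(\u03b8|Y) \u221d P(Y|\u03b8) \u00d7 P(\u03b8) to the 5-heads-in-8-tosses example; the flat grid prior is an illustrative stand-in for "no strong belief about \u03b8", not something prescribed by the text.

```python
from math import comb

heads, n = 5, 8  # the observed evidence: 5 heads in 8 tosses

# Likelihood: probability of the observed data for a given theta
def likelihood(theta):
    return comb(n, heads) * theta**heads * (1 - theta)**(n - heads)

print(likelihood(0.5))  # 56/256 = 0.21875 for an unbiased coin

# Grid approximation: a flat prior over candidate values of theta
grid = [i / 200 for i in range(201)]
prior = [1.0] * len(grid)                        # P(theta), up to a constant
unnorm = [likelihood(t) * p for t, p in zip(grid, prior)]
evidence = sum(unnorm)                           # plays the role of P(Y)
posterior = [u / evidence for u in unnorm]       # normalised posterior

mode = grid[posterior.index(max(posterior))]
print(mode)  # the posterior peaks at the sample proportion 5/8 = 0.625
```

Dividing by the sum over the grid is the discrete analogue of the P(Y) normalisation term; the proportionality form lets us skip it whenever we only need the shape of the posterior.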
I find it helpful to view&nbsp;<a href=\"https:\/\/blog.quantinsti.com\/bayesian-inference\/#three\">(3)<\/a>&nbsp;as:<\/p>\n\n\n\n<p><strong>Posterior Probability \u221d Likelihood \u00d7 Prior Probability<\/strong>&nbsp;\u2026\u2026\u2026.&nbsp;(4)<\/p>\n\n\n\n<p>The experimental objective is to get an estimate of the unknown parameter&nbsp;<strong>\u03b8<\/strong>&nbsp;based on the outcome of&nbsp;<strong>n<\/strong>&nbsp;independent coin tosses. The coin tosses generate the sample or data y = (y<sub>1<\/sub>, y<sub>2<\/sub>, \u2026,&nbsp;y<sub>n<\/sub>), where y<sub>i<\/sub> is 1 or 0 based on the result of the&nbsp;<strong>i<\/strong>th&nbsp;coin toss.<\/p>\n\n\n\n<p>I now show the frequentist and Bayesian approaches to fulfilling this objective. Feel free to skim the derivations I touch upon here if you are not interested in the mathematics behind them. You can still develop sufficient intuition and learn to use Bayesian techniques in practice.<\/p>\n\n\n\n<p><strong>Estimating&nbsp;\u03b8:&nbsp;The Frequentist Approach<\/strong><br>We compute the joint probability function using the maximum likelihood estimation (MLE) approach. 
The probability of the outcome for a single coin toss can be elegantly expressed as:<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"230\" height=\"43\" data-src=\"https:\/\/www.interactivebrokers.com\/campus\/wp-content\/uploads\/sites\/2\/2025\/04\/quantinsti-bayesian-inference-3.png\" alt=\"Bayesian Inference Methods and Formula\" class=\"wp-image-223028 lazyload\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" style=\"--smush-placeholder-width: 230px; aspect-ratio: 230\/43;\" \/><\/figure>\n\n\n\n<p>For a given value of&nbsp;<em>\u03b8<\/em>, the joint probability of the outcome for n independent coin tosses is the product of the probability of each individual outcome:<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"577\" height=\"197\" data-src=\"https:\/\/www.interactivebrokers.com\/campus\/wp-content\/uploads\/sites\/2\/2025\/04\/quantinsti-bayesian-inference-updated-3.png\" alt=\"Bayesian Inference Methods\" class=\"wp-image-223045 lazyload\" data-srcset=\"https:\/\/ibkrcampus.com\/campus\/wp-content\/uploads\/sites\/2\/2025\/04\/quantinsti-bayesian-inference-updated-3.png 577w, https:\/\/ibkrcampus.com\/campus\/wp-content\/uploads\/sites\/2\/2025\/04\/quantinsti-bayesian-inference-updated-3-300x102.png 300w\" data-sizes=\"(max-width: 577px) 100vw, 577px\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" style=\"--smush-placeholder-width: 577px; aspect-ratio: 577\/197;\" \/><\/figure>\n\n\n\n<p>As we can see in&nbsp;<a href=\"https:\/\/blog.quantinsti.com\/bayesian-inference\/#four\">(4)<\/a>, the expression worked out is a function of the unknown parameter \u03b8 given the observations from our experiment. 
This function of \u03b8 is called the likelihood function and is usually referred to in the literature as:<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"475\" height=\"61\" data-src=\"https:\/\/www.interactivebrokers.com\/campus\/wp-content\/uploads\/sites\/2\/2025\/04\/quantinsti-bayesian-inference-updated-4.png\" alt=\"Bayesian Inference Methods\" class=\"wp-image-223046 lazyload\" data-srcset=\"https:\/\/ibkrcampus.com\/campus\/wp-content\/uploads\/sites\/2\/2025\/04\/quantinsti-bayesian-inference-updated-4.png 475w, https:\/\/ibkrcampus.com\/campus\/wp-content\/uploads\/sites\/2\/2025\/04\/quantinsti-bayesian-inference-updated-4-300x39.png 300w\" data-sizes=\"(max-width: 475px) 100vw, 475px\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" style=\"--smush-placeholder-width: 475px; aspect-ratio: 475\/61;\" \/><\/figure>\n\n\n\n<p>OR<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"281\" height=\"65\" data-src=\"https:\/\/www.interactivebrokers.com\/campus\/wp-content\/uploads\/sites\/2\/2025\/04\/quantinsti-bayesian-inference-updated-5.png\" alt=\"Bayesian Inference Methods\" class=\"wp-image-223048 lazyload\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" style=\"--smush-placeholder-width: 281px; aspect-ratio: 281\/65;\" \/><\/figure>\n\n\n\n<p>We would like to compute the value of&nbsp;<em>\u03b8<\/em>&nbsp;which is most likely to have yielded the observed set of outcomes. This is called the&nbsp;<em>maximum likelihood estimate<\/em>, \u03b8\u0302 (&#8216;theta hat&#8217;). To compute it analytically, we take the first-order derivative of&nbsp;<a href=\"https:\/\/blog.quantinsti.com\/bayesian-inference\/#six\">(6)<\/a>&nbsp;with respect to the parameter and set it equal to zero. 
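The derivative-based recipe can be cross-checked numerically. The sketch below (standard library only; the data vector is an illustrative sample, not one from the text) maximises the log-likelihood over a fine grid and recovers the closed-form MLE \u03b8\u0302 = \u03a3y\u1d62\/n.

```python
from math import log

# An illustrative sample of Bernoulli outcomes (1 = heads, 0 = tails)
y = [1, 0, 1, 1, 0, 1, 1, 0]
n, k = len(y), sum(y)

# log L(theta | y) = k*log(theta) + (n - k)*log(1 - theta)
def log_likelihood(theta):
    return k * log(theta) + (n - k) * log(1 - theta)

# Maximise over a fine grid of interior points (avoids log(0) at 0 and 1)
grid = [i / 10_000 for i in range(1, 10_000)]
theta_hat = max(grid, key=log_likelihood)

print(theta_hat)  # 0.625, i.e. the closed-form MLE k/n = 5/8
```

Taking logs turns the product of Bernoulli terms into a sum, which is why the literature works with log-likelihoods: the maximiser is unchanged while the algebra (and the numerics) become far better behaved.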
It is prudent to also take the second derivative and check the sign of its value at \u03b8 = \u03b8\u0302&nbsp;to ensure that the estimate is indeed a maximum. We customarily take the log of the likelihood function since it greatly simplifies the determination of the maximum likelihood estimator \u03b8\u0302. It should therefore not surprise you that the literature is replete with log-likelihood functions and their solutions.<\/p>\n\n\n\n<p><strong>Estimating \u03b8: The Bayesian Approach<\/strong><\/p>\n\n\n\n<p>I now change the notations we have used so far to make them a little more precise mathematically. I will use these notations throughout this series. The reason for this alteration is that it lets us ascribe to each term a symbol that reminds us of its random nature. Since there is uncertainty over the values of \u03b8, Y, etc., we regard them as random variables and assign them corresponding probability distributions, which I do below.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Notations for the Density and Distribution Functions<\/strong><\/h2>\n\n\n\n<ul class=\"wp-block-list\">\n<li>We use&nbsp;<strong>\u03c0(\u22c5)<\/strong>&nbsp;(the Greek letter &#8216;pi&#8217;) to denote the probability distribution function of the&nbsp;<strong>prior<\/strong>&nbsp;(this pertains to \u03b8) and&nbsp;<strong>\u03c0(\u22c5|y)<\/strong>&nbsp;to denote the posterior density function of the parameter we attempt to estimate.<\/li>\n\n\n\n<li>We use&nbsp;<strong>f(\u22c5)<\/strong>&nbsp;to denote the probability density function (pdf) of continuous random variables and p(\u22c5) to denote the probability mass function (pmf) of discrete random variables. However, for simplicity, I use&nbsp;<strong>f(\u22c5)<\/strong>&nbsp;irrespective of whether the random variable&nbsp;<strong>Y<\/strong>&nbsp;is continuous or discrete.<\/li>\n\n\n\n<li>We will continue to use&nbsp;<strong>L(\u03b8|\u22c5)<\/strong>&nbsp;
to denote the likelihood function which is the joint density of the sample values and is usually the product of the pdf&#8217;s\/pmf&#8217;s of the sample values from our data.<\/li>\n<\/ul>\n\n\n\n<p>Remember that&nbsp;<strong><em>\u03b8<\/em><\/strong>&nbsp;is the parameter we are trying to estimate.<\/p>\n\n\n\n<p><a href=\"https:\/\/blog.quantinsti.com\/bayesian-inference\/#two\">(2)<\/a>&nbsp;and&nbsp;<a href=\"https:\/\/blog.quantinsti.com\/bayesian-inference\/#three\">(3)<\/a>&nbsp;can be rewritten as<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"326\" height=\"119\" data-src=\"https:\/\/www.interactivebrokers.com\/campus\/wp-content\/uploads\/sites\/2\/2025\/04\/quantinsti-bayesian-inference-updated-6.png\" alt=\"Bayesian Inference Methods\" class=\"wp-image-223050 lazyload\" data-srcset=\"https:\/\/ibkrcampus.com\/campus\/wp-content\/uploads\/sites\/2\/2025\/04\/quantinsti-bayesian-inference-updated-6.png 326w, https:\/\/ibkrcampus.com\/campus\/wp-content\/uploads\/sites\/2\/2025\/04\/quantinsti-bayesian-inference-updated-6-300x110.png 300w\" data-sizes=\"(max-width: 326px) 100vw, 326px\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" style=\"--smush-placeholder-width: 326px; aspect-ratio: 326\/119;\" \/><\/figure>\n\n\n\n<p>Stated in words,&nbsp;<strong>the posterior distribution function is proportional to the likelihood function times the prior distribution function<\/strong>. 
I redraw your attention to&nbsp;<a href=\"https:\/\/blog.quantinsti.com\/bayesian-inference\/#four\">(4)<\/a>&nbsp;and present it in congruence with our new notations.<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"454\" height=\"37\" data-src=\"https:\/\/www.interactivebrokers.com\/campus\/wp-content\/uploads\/sites\/2\/2025\/04\/quantinsti-bayesian-inference-updated-8.png\" alt=\"\" class=\"wp-image-223052 lazyload\" data-srcset=\"https:\/\/ibkrcampus.com\/campus\/wp-content\/uploads\/sites\/2\/2025\/04\/quantinsti-bayesian-inference-updated-8.png 454w, https:\/\/ibkrcampus.com\/campus\/wp-content\/uploads\/sites\/2\/2025\/04\/quantinsti-bayesian-inference-updated-8-300x24.png 300w\" data-sizes=\"(max-width: 454px) 100vw, 454px\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" style=\"--smush-placeholder-width: 454px; aspect-ratio: 454\/37;\" \/><\/figure>\n\n\n\n<p>I now rewrite&nbsp;<a href=\"https:\/\/blog.quantinsti.com\/bayesian-inference\/#eight\">(8)<\/a>&nbsp;and&nbsp;<a href=\"https:\/\/blog.quantinsti.com\/bayesian-inference\/#nine\">(9)<\/a>&nbsp;using the likelihood function L(\u03b8|Y) defined earlier in&nbsp;<a href=\"https:\/\/blog.quantinsti.com\/bayesian-inference\/#seven\">(7)<\/a>.<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"348\" height=\"184\" data-src=\"https:\/\/www.interactivebrokers.com\/campus\/wp-content\/uploads\/sites\/2\/2025\/04\/quantinsti-bayesian-inference-updated-7.png\" alt=\"Bayesian Inference Methods\" class=\"wp-image-223049 lazyload\" data-srcset=\"https:\/\/ibkrcampus.com\/campus\/wp-content\/uploads\/sites\/2\/2025\/04\/quantinsti-bayesian-inference-updated-7.png 348w, https:\/\/ibkrcampus.com\/campus\/wp-content\/uploads\/sites\/2\/2025\/04\/quantinsti-bayesian-inference-updated-7-300x159.png 300w\" data-sizes=\"(max-width: 348px) 100vw, 348px\" 
src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" style=\"--smush-placeholder-width: 348px; aspect-ratio: 348\/184;\" \/><\/figure>\n\n\n\n<p>The denominator of&nbsp;<a href=\"https:\/\/blog.quantinsti.com\/bayesian-inference\/#eleven\">(11)<\/a>&nbsp;is the probability distribution of the evidence or data. I reiterate what I have previously mentioned while inspecting&nbsp;<a href=\"https:\/\/blog.quantinsti.com\/bayesian-inference\/#three\">(3)<\/a>: A useful way of considering the posterior density is using the proportionality approach as seen in&nbsp;<a href=\"https:\/\/blog.quantinsti.com\/bayesian-inference\/#twelve\">(12)<\/a>. That way, we don&#8217;t need to worry about the f(y) term on the RHS of&nbsp;<a href=\"https:\/\/blog.quantinsti.com\/bayesian-inference\/#eleven\">(11)<\/a>.<\/p>\n\n\n\n<p>For the mathematically curious among you, I now take you briefly down a needless rabbit hole to explain it incompletely. Perhaps, later in our journey, I may write a separate post brooding on these minutiae.<\/p>\n\n\n\n<p>In&nbsp;<a href=\"https:\/\/blog.quantinsti.com\/bayesian-inference\/#eleven\">(11)<\/a>,&nbsp;<strong>f(y)<\/strong>&nbsp;is the proportionality constant that makes the posterior distribution a proper density function integrating to 1. When we examine it more closely, we see that is, in fact, the unconditional (marginal) distribution of the random variable&nbsp;<strong>Y<\/strong>. We can determine it analytically by integrating over all possible values of the parameter&nbsp;<strong>\u03b8<\/strong>. 
Since we are integrating out&nbsp;<strong>\u03b8<\/strong>, we find that&nbsp;<strong>f(y)<\/strong>&nbsp;does not depend on&nbsp;<strong>\u03b8<\/strong>.<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"285\" height=\"248\" data-src=\"https:\/\/www.interactivebrokers.com\/campus\/wp-content\/uploads\/sites\/2\/2025\/04\/quantinsti-bayesian-inference-updated-9.png\" alt=\"Bayesian Inference Methods\" class=\"wp-image-223054 lazyload\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" style=\"--smush-placeholder-width: 285px; aspect-ratio: 285\/248;\" \/><\/figure>\n\n\n\n<p><a href=\"https:\/\/blog.quantinsti.com\/bayesian-inference\/#eleven\">(11)<\/a>&nbsp;and&nbsp;<a href=\"https:\/\/blog.quantinsti.com\/bayesian-inference\/#twelve\">(12)<\/a>&nbsp;represent the&nbsp;<strong>continuous versions of the Bayes&#8217; Theorem<\/strong>.<\/p>\n\n\n\n<p><em>The posterior distribution is central to Bayesian statistics and inference because it blends all the updated information about the parameter&nbsp;<strong>\u03b8<\/strong>&nbsp;in a single expression. This includes information about&nbsp;<strong>\u03b8<\/strong>&nbsp;before the observations were inspected and this is captured through the prior distribution. 
The information contained in the observations is captured through the likelihood function.<\/em><\/p>\n\n\n\n<p>We can regard&nbsp;<a href=\"https:\/\/blog.quantinsti.com\/bayesian-inference\/#eleven\">(11)<\/a>&nbsp;as a method of updating information and this idea is further exemplified by the prior-posterior nomenclature.<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>The prior distribution of&nbsp;<strong>\u03b8<\/strong>,&nbsp;<strong>\u03c0(\u03b8)<\/strong>,&nbsp;represents the information available about its possible values before recording the observations&nbsp;<strong>y<\/strong>.<\/li>\n\n\n\n<li>The likelihood function&nbsp;<strong>L(\u03b8|y)<\/strong>&nbsp;of&nbsp;<strong>\u03b8<\/strong>&nbsp;is then determined based on the observations&nbsp;<strong>y<\/strong>.<\/li>\n\n\n\n<li>The posterior distribution of&nbsp;<strong>\u03b8<\/strong>,&nbsp;<strong>\u03c0(\u03b8|y)<\/strong>,&nbsp;summarizes all the available information about the unknown parameter \u03b8 after recording and incorporating the observations&nbsp;<strong>y<\/strong>.<\/li>\n<\/ul>\n\n\n\n<p><strong>The Bayesian estimate of \u03b8 would be a weighted average of the prior estimate and the maximum likelihood estimate, \u03b8\u0302<\/strong>. As the number of observations&nbsp;<strong>n<\/strong>&nbsp;increases and approaches infinity, the weight on the prior estimate approaches zero and the weight on the MLE approaches one. This implies that the Bayesian and frequentist estimates would converge as our sample size gets larger.<\/p>\n\n\n\n<p>To clarify, in a classical or frequentist setting, the usual estimator of the parameter&nbsp;<strong>\u03b8<\/strong>&nbsp;is the ML estimator, \u03b8\u0302. 
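This weighted-average behaviour is easy to see with a conjugate Beta prior (an illustrative choice here, not one introduced in the text): the posterior mean under a Beta(a, b) prior is exactly a blend of the prior mean a\/(a+b) and the MLE k\/n, with the prior's weight shrinking as n grows.

```python
# Posterior mean of a Bernoulli parameter under a Beta(a, b) prior:
# (a + k) / (a + b + n), which rearranges into a weighted average of
# the prior mean a/(a+b) and the MLE k/n.
def bayes_estimate(a, b, k, n):
    prior_mean = a / (a + b)
    mle = k / n
    w_prior = (a + b) / (a + b + n)   # weight on the prior: -> 0 as n grows
    return w_prior * prior_mean + (1 - w_prior) * mle

a, b = 4, 4                    # prior centred on a fair coin (theta = 1/2)
for n in (8, 80, 8000):
    k = round(0.625 * n)       # hold the observed proportion at 5/8
    print(n, bayes_estimate(a, b, k, n))
# The estimates move from near the prior mean 0.5 towards the MLE 0.625
```

With 5 heads in 8 tosses the estimate sits midway between prior and data; by n = 8000 it is within about 0.0001 of the MLE, illustrating the convergence of the Bayesian and frequentist answers in large samples.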
Here, the prior is implicitly treated as a constant.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Summary<\/strong><\/h2>\n\n\n\n<p>I have devoted this post to deriving the fundamental result of Bayesian statistics, viz.&nbsp;<a href=\"https:\/\/blog.quantinsti.com\/bayesian-inference\/#ten\">(10)<\/a>&nbsp;. The essence of this expression is to represent uncertainty by combining the knowledge obtained from two sources &#8211; observations and prior beliefs. In doing so, I introduced the concepts of prior distributions, likelihood functions and posterior distributions as well as the comparison of the frequentist and Bayesian methodologies. In my next post, I intend to make good my promise of illustrating the above example with simulations in Python.<\/p>\n\n\n\n<p>Bayesian statistics is an important part of&nbsp;<a href=\"https:\/\/quantra.quantinsti.com\/course\/quantitative-trading-strategies-models\" target=\"_blank\" rel=\"noreferrer noopener\">quantitative strategies<\/a>&nbsp;which are part of an algorithmic trader\u2019s handbook.&nbsp;The&nbsp;<a href=\"https:\/\/www.quantinsti.com\/epat\/\" target=\"_blank\" rel=\"noreferrer noopener\">Executive Programme in Algorithmic Trading (EPAT\u2122)<\/a>&nbsp;course by&nbsp;<a href=\"https:\/\/www.quantinsti.com\/\" target=\"_blank\" rel=\"noreferrer noopener\">QuantInsti\u00ae<\/a>&nbsp;covers training modules like Statistics &amp; Econometrics, Financial Computing &amp; Technology, and Algorithmic &amp; Quantitative Trading that equip you with the required skill sets for&nbsp;applying various trading instruments and platforms to be a successful trader.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>This post on Bayesian inference is the second of a multi-part series on Bayesian statistics and methods used in quantitative 
finance.<\/p>\n","protected":false},"author":646,"featured_media":176026,"comment_status":"open","ping_status":"closed","sticky":true,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[339,338,341],"tags":[19317,19316,19315,4922,4939],"contributors-categories":[13654],"class_list":{"0":"post-223003","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-data-science","8":"category-ibkr-quant-news","9":"category-quant-development","10":"tag-bayes-theorem","11":"tag-bayesian-inference-methods","12":"tag-bernoulli-distribution","13":"tag-econometrics","14":"tag-statistics","15":"contributors-categories-quantinsti"},"pp_statuses_selecting_workflow":false,"pp_workflow_action":"current","pp_status_selection":"publish","acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v26.9 (Yoast SEO v27.5) - https:\/\/yoast.com\/product\/yoast-seo-premium-wordpress\/ -->\n<title>Bayesian Inference Methods and Formula Explained<\/title>\n<meta name=\"description\" content=\"This post on Bayesian inference is the second of a multi-part series on Bayesian statistics and methods used in quantitative finance.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.interactivebrokers.com\/campus\/wp-json\/wp\/v2\/posts\/223003\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Bayesian Inference Methods and Formula Explained\" \/>\n<meta property=\"og:description\" content=\"This post on Bayesian inference is the second of a multi-part series on Bayesian statistics and methods used in quantitative finance.\" \/>\n<meta property=\"og:url\" 
[Page metadata: "Bayesian Inference Methods and Formula Explained" by Vivek Krishnamoorthy, published April 30, 2025 (last modified May 1, 2025) on IBKR Campus US; originally published on the QuantInsti blog. Estimated reading time: 12 minutes. Remaining SEO meta tags, schema.org JSON-LD, and WordPress REST API link data omitted.]