Bayesian Derivation

Let $X_1$ be the vector of observable random variables. Let $X_2$ be the vector of latent random variables. Let $\Theta$ be the vector of parameters. Then

$f(x_2,\theta|x_1)=\frac{f(x_1|x_2,\theta)\,f(x_2|\theta)\,f(\theta)}{f(x_1)}$

Maximum A Posteriori Estimation

$p(\theta|x)=\frac{p(x|\theta)p(\theta)}{p(x)}$

$\hat{\theta}_{\mathrm{MAP}}=\underset{\theta}{\arg\max}\ p(x|\theta)p(\theta)=\underset{\theta}{\arg\max}\left\{\sum\limits_{i} \log p(x_i|\theta) + \log p(\theta)\right\}$
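A minimal sketch of this objective, assuming a Bernoulli likelihood with a Beta prior on $\theta$ (both hypothetical choices): the MAP estimate then has the closed form $(\text{heads}+a-1)/(n+a+b-2)$, which we recover by maximizing $\sum_i \log p(x_i|\theta) + \log p(\theta)$ on a grid.

```python
import math

a, b = 2.0, 2.0                   # assumed Beta(a, b) prior hyperparameters
data = [1, 1, 0, 1, 0, 1, 1, 1]  # toy coin-flip observations
heads, n = sum(data), len(data)

def log_posterior(theta):
    # sum_i log p(x_i|theta) + log p(theta), dropping theta-free constants
    return (heads * math.log(theta) + (n - heads) * math.log(1 - theta)
            + (a - 1) * math.log(theta) + (b - 1) * math.log(1 - theta))

# Grid search for argmax over theta in (0, 1).
grid = [i / 10000 for i in range(1, 10000)]
theta_map = max(grid, key=log_posterior)

closed_form = (heads + a - 1) / (n + a + b - 2)
assert abs(theta_map - closed_form) < 1e-3
```

Note that only the relative value of the objective matters for the $\arg\max$, so additive constants such as $\log p(x)$ and the Beta normalizer can be dropped.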

Bayesian Inference

Bayesian inference extends the MAP approach by maintaining a distribution over the parameter set θ instead of making a direct point estimate. Not only does this capture the maximum (a posteriori) value of the data-generated parameters, it also provides the expectation as an alternative parameter estimate, as well as variance information as a measure of estimation quality or confidence.

• $X$: observed data

• $\theta$: latent variables
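Continuing the hypothetical Beta-Bernoulli example from above: because Beta is conjugate to Bernoulli, the full posterior over $\theta$ is again a Beta distribution, so the expectation and variance the paragraph mentions are available in closed form alongside the MAP mode.

```python
# Full Bayesian inference keeps the whole posterior over theta, not a point
# estimate. With a Beta(a, b) prior and Bernoulli data, the posterior is
# Beta(a + heads, b + tails)  (conjugacy).
a, b = 2.0, 2.0                   # assumed prior hyperparameters
data = [1, 1, 0, 1, 0, 1, 1, 1]  # toy coin-flip observations
heads = sum(data)
tails = len(data) - heads
a_post, b_post = a + heads, b + tails   # Beta posterior parameters

post_mean = a_post / (a_post + b_post)            # expectation estimate of theta
post_var = (a_post * b_post) / ((a_post + b_post) ** 2 * (a_post + b_post + 1))
post_mode = (a_post - 1) / (a_post + b_post - 2)  # coincides with the MAP
```

The mean and mode differ (here $2/3$ vs. $0.7$), which is exactly the point of the paragraph: the posterior distribution supports several parameter estimates, and its variance quantifies how confident each one is.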