The Evidence Lower Bound (ELBO) is a lower bound on the log marginal likelihood of the observed data, that is, on the log evidence $\log p(x)$; the bound is tight exactly when $q(z)$ equals the true posterior $p(z\mid x)$.

$$ \mathrm{ELBO}(q)=\mathbb{E}_{q(z)}\big[\log p(x,z)-\log q(z)\big] =\int_{\mathcal Z} q(z)\log\frac{p(x,z)}{q(z)}\,dz $$
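The expectation above can be estimated by Monte Carlo with samples from $q$. A minimal sketch, using a toy conjugate model chosen purely for illustration (prior $p(z)=\mathcal N(0,1)$, likelihood $p(x\mid z)=\mathcal N(z,1)$, Gaussian variational family $q(z)=\mathcal N(m, s^2)$; none of these choices come from the text):

```python
import numpy as np

# Toy model (an illustrative assumption, not prescribed by the text):
#   prior       p(z)   = N(0, 1)
#   likelihood  p(x|z) = N(z, 1)
#   variational q(z)   = N(m, s^2)

def log_normal(v, mean, var):
    """Log density of N(mean, var) evaluated at v."""
    return -0.5 * np.log(2 * np.pi * var) - 0.5 * (v - mean) ** 2 / var

def elbo_mc(x, m, s, n_samples=200_000, seed=0):
    """ELBO(q) = E_q[log p(x, z) - log q(z)], estimated with samples z ~ q."""
    rng = np.random.default_rng(seed)
    z = rng.normal(m, s, size=n_samples)
    log_joint = log_normal(x, z, 1.0) + log_normal(z, 0.0, 1.0)
    return np.mean(log_joint - log_normal(z, m, s ** 2))

x = 1.5
# At the exact posterior q(z) = N(x/2, 1/2) the ratio p(x,z)/q(z) is the
# constant p(x), so the estimator has zero variance and returns log p(x).
print(elbo_mc(x, m=x / 2, s=np.sqrt(0.5)))  # → approximately -1.828
```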

It is linked to the log evidence through the identity

$$ \log p(x)=\mathrm{ELBO}(q)+D_{\text{KL}}\big(q(z)\parallel p(z\mid x)\big) $$

Since

$$ D_{\text{KL}}\big(q(z)\parallel p(z\mid x)\big)\ge 0 $$

it follows that

$$ \log p(x)\ge \mathrm{ELBO}(q) $$
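The inequality can be checked numerically whenever $\log p(x)$ is available in closed form. A sketch using the same kind of toy conjugate model assumed above (prior $\mathcal N(0,1)$, likelihood $\mathcal N(z,1)$, so $p(x)=\mathcal N(x;0,2)$ and the posterior is $\mathcal N(x/2, 1/2)$; the model is an illustrative assumption):

```python
import numpy as np

# Toy conjugate model (illustrative assumption):
#   p(z) = N(0, 1), p(x|z) = N(z, 1)  =>  p(x) = N(x; 0, 2),
#   posterior p(z|x) = N(x/2, 1/2).

def elbo(x, m, v):
    """Closed-form ELBO for q(z) = N(m, v) under the toy model:
    E_q[log p(x|z)] + E_q[log p(z)] + entropy of q."""
    return (-np.log(2 * np.pi)
            - 0.5 * ((x - m) ** 2 + v)
            - 0.5 * (m ** 2 + v)
            + 0.5 * np.log(2 * np.pi * v) + 0.5)

def log_evidence(x):
    """log N(x; 0, 2), exact because the model is conjugate."""
    return -0.5 * np.log(2 * np.pi * 2.0) - x ** 2 / 4.0

x = 1.5
for m, v in [(0.0, 1.0), (2.0, 0.1), (x / 2, 0.5)]:  # last pair = exact posterior
    gap = log_evidence(x) - elbo(x, m, v)  # this gap is KL(q || p(z|x)) >= 0
    print(f"m={m:.2f} v={v:.2f}  gap={gap:.6f}")
```

The gap is nonnegative for every choice of $q$ and vanishes only at the exact posterior, matching the identity $\log p(x)=\mathrm{ELBO}(q)+D_{\text{KL}}(q\parallel p(z\mid x))$.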

In Variational Inference (VI), the true posterior $p(z\mid x)$ is generally intractable because computing it requires the marginalization

$$ p(x)=\int p(x,z)\,dz $$

which is often not available in closed form. This prevents direct computation and minimization of

$$ D_{\text{KL}}\big(q(z)\parallel p(z\mid x)\big) $$

For fixed observed data $x$, however, $\log p(x)$ is constant with respect to the variational distribution $q(z)$. Therefore, maximizing the ELBO is mathematically equivalent to minimizing

$$ D_{\text{KL}}\big(q(z)\parallel p(z\mid x)\big) $$

Thus, the ELBO converts an intractable inference problem into a tractable optimization problem.
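This optimization can be carried out directly in the toy conjugate model assumed above (prior $\mathcal N(0,1)$, likelihood $\mathcal N(z,1)$), where the ELBO for $q(z)=\mathcal N(m,s^2)$ has analytic gradients. A sketch using plain gradient ascent, both the model and the loop being illustrative assumptions:

```python
import numpy as np

# Maximize the ELBO over (m, s) for q(z) = N(m, s^2) in the toy model
# p(z) = N(0, 1), p(x|z) = N(z, 1)  (illustrative assumption).
# Analytic gradients of the closed-form ELBO:
#   d ELBO / dm = (x - m) - m
#   d ELBO / ds = -2 s + 1 / s      (s stays positive along this trajectory)

x = 1.5
m, s = -3.0, 2.0          # arbitrary starting point
lr = 0.05                  # step size
for _ in range(2000):
    m += lr * ((x - m) - m)
    s += lr * (-2.0 * s + 1.0 / s)

# Gradient ascent drives q toward the exact posterior N(x/2, 1/2).
print(m, s ** 2)
```

Setting both gradients to zero recovers $m = x/2$ and $s^2 = 1/2$, i.e. the exact posterior, which is the unique case where the KL gap is zero.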