Commit 05ca5466 authored by Torsten Ensslin

slight update

parent 3996cb92
@@ -318,41 +318,44 @@ class Hamiltonian(EnergyOperator):
 class AveragedEnergy(EnergyOperator):
-    """Computes Kullbach-Leibler (KL) divergence or Gibbs free energies.
+    """Averages an energy over samples
 
-    A sample-averaged energy, e.g. an Hamiltonian, approximates the relevant
-    part of a KL to be used in Variational Bayes inference if the samples are
-    drawn from the approximating Gaussian:
+    Can be used to compute the Kullback-Leibler (KL) divergence or
+    Gibbs free energies.
+
+    A sample-averaged energy, e.g. a Hamiltonian, approximates the
+    relevant part of a KL to be used in Variational Bayes inference if
+    the samples are drawn from the approximating Gaussian:
 
     .. math ::
         \\text{KL}(m) = \\frac1{\\#\\{v_i\\}} \\sum_{v_i} H(m+v_i),
 
-    where :math:`v_i` are the residual samples and :math:`m` is the mean field
-    around which the samples are drawn.
+    where :math:`v_i` are the residual samples and :math:`m` is the
+    mean field around which the samples are drawn.
 
     Parameters
     ----------
     h : Hamiltonian
         The energy to be averaged.
     res_samples : iterable of Fields
-        Set of residual sample points to be added to mean field for approximate
-        estimation of the KL.
+        Set of residual sample points to be added to the mean field for
+        approximate estimation of the KL.
 
     Note
     ----
-    Having symmetrized residual samples, with both v_i and -v_i being present
-    ensures that the distribution mean is exactly represented. This reduces
-    sampling noise and helps the numerics of the KL minimization process in the
-    variational Bayes inference.
+    Having symmetrized residual samples, with both :math:`v_i` and
+    :math:`-v_i` present, ensures that the distribution mean is exactly
+    represented. This reduces sampling noise and helps the numerics of
+    the KL minimization process in the variational Bayes inference.
 
     See also
     --------
-    Let :math:`Q(f) = G(f-m,D)` be the Gaussian distribution
-    which is used to approximate the accurate posterior :math:`P(f|d)` with
+    Let :math:`Q(f) = G(f-m,D)` be the Gaussian distribution that is
+    used to approximate the true posterior :math:`P(f|d)` with
     information Hamiltonian
     :math:`H(d,f) = -\\log P(d,f) = -\\log P(f|d) + \\text{const}`. In
-    Variational Bayes one needs to optimize the KL divergence between those
-    two distributions for m. It is:
+    Variational Bayes one needs to optimize the KL divergence between
+    those two distributions with respect to :math:`m`. It is:
 
     :math:`\\text{KL}(Q,P) = \\int Df\\, Q(f) \\log Q(f)/P(f)\\\\
     = \\left< \\log Q(f) \\right>_{Q(f)} - \\left< \\log P(f) \\right>_{Q(f)}\\\\
......
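
To make the sample-averaged formula above concrete, here is a minimal NumPy sketch of the quantity an AveragedEnergy estimates. The names toy_hamiltonian, mean, and residual_samples are illustrative stand-ins rather than the NIFTy API, and the quadratic energy is an assumption chosen only to keep the example self-contained and runnable.

import numpy as np

rng = np.random.default_rng(0)

def toy_hamiltonian(f):
    # Stand-in for an information Hamiltonian H(f) = -log P(d,f) + const;
    # a real application would evaluate a NIFTy EnergyOperator here.
    return 0.5 * float(np.sum(f ** 2))

mean = np.array([0.5, -1.0, 2.0])  # m, the mean field of the approximating Gaussian

# Draw residual samples v_i from the (zero-mean) approximating Gaussian and
# symmetrize them: keeping both v_i and -v_i makes the residuals average to
# exactly zero, as the Note section recommends.
draws = [rng.normal(0.0, 0.3, size=mean.shape) for _ in range(5)]
residual_samples = draws + [-v for v in draws]

# KL(m) ~= (1 / #{v_i}) * sum_i H(m + v_i): the sample-averaged energy.
kl_estimate = sum(toy_hamiltonian(mean + v) for v in residual_samples)
kl_estimate /= len(residual_samples)
print(kl_estimate)

Minimizing this average with respect to mean is the Variational Bayes step the docstring describes; because the symmetrized residuals sum to exactly zero, the part of H that is linear in the residuals drops out of the estimate, which is the noise reduction the Note refers to.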