ift / NIFTy · Commits

Commit 05ca5466
authored Jan 19, 2019 by Torsten Ensslin

slight update

parent 3996cb92

Changes 1

nifty5/operators/energy_operators.py
...
...
@@ -318,41 +318,44 @@ class Hamiltonian(EnergyOperator):
 class AveragedEnergy(EnergyOperator):
-    """Computes Kullback-Leibler (KL) divergences or Gibbs free energies.
+    """Averages an energy over samples.
+
+    Can be used to compute Kullback-Leibler (KL) divergences or Gibbs
+    free energies.
 
-    A sample-averaged energy, e.g. a Hamiltonian, approximates the relevant
-    part of a KL to be used in Variational Bayes inference if the samples are
-    drawn from the approximating Gaussian:
+    A sample-averaged energy, e.g. a Hamiltonian, approximates the
+    relevant part of a KL to be used in Variational Bayes inference if
+    the samples are drawn from the approximating Gaussian:
 
     .. math ::
         \\text{KL}(m) = \\frac1{\\#\{v_i\}} \\sum_{v_i} H(m+v_i),
 
-    where :math:`v_i` are the residual samples and :math:`m` is the mean field
-    around which the samples are drawn.
+    where :math:`v_i` are the residual samples and :math:`m` is the
+    mean field around which the samples are drawn.
 
     Parameters
     ----------
     h: Hamiltonian
        The energy to be averaged.
     res_samples : iterable of Fields
-       Set of residual sample points to be added to mean field for approximate
-       estimation of the KL.
+       Set of residual sample points to be added to mean field for
+       approximate estimation of the KL.
 
     Note
     ----
-    Having symmetrized residual samples, with both v_i and -v_i being present,
-    ensures that the distribution mean is exactly represented. This reduces
-    sampling noise and helps the numerics of the KL minimization process in
-    the variational Bayes inference.
+    Having symmetrized residual samples, with both v_i and -v_i being
+    present, ensures that the distribution mean is exactly represented.
+    This reduces sampling noise and helps the numerics of the KL
+    minimization process in the variational Bayes inference.
 
     See also
     --------
-    Let :math:`Q(f) = G(f-m,D)` be the Gaussian distribution which is
-    used to approximate the accurate posterior :math:`P(f|d)` with
+    Let :math:`Q(f) = G(f-m,D)` be the Gaussian distribution that is
+    used to approximate the accurate posterior :math:`P(f|d)` with
     information Hamiltonian
     :math:`H(d,f) = -\\log P(d,f) = -\\log P(f|d) + \\text{const}`. In
-    Variational Bayes one needs to optimize the KL divergence between those
-    two distributions for m. It is:
+    Variational Bayes one needs to optimize the KL divergence between
+    those two distributions for :math:`m`. It is:
 
     :math:`KL(Q,P) = \\int Df\, Q(f) \\log Q(f)/P(f) \\\\
     = \\left< \\log Q(f) \\right>_{Q(f)} - \\left< \\log P(f) \\right>_{Q(f)} \\\\`
...
...
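The sample-averaged KL estimate in the docstring above, KL(m) ≈ (1/#{v_i}) Σ H(m+v_i), and the benefit of symmetrized residual samples can be sketched numerically. The following is a minimal, self-contained illustration, not NIFTy code: `hamiltonian` is a hypothetical stand-in quadratic energy on a scalar "field", and all names are assumptions made for the example.

```python
import numpy as np

# Hypothetical stand-in for an energy: H(f) = 0.5 * f**2 on a scalar
# "field". Purely illustrative, not the NIFTy Hamiltonian operator.
def hamiltonian(f):
    return 0.5 * f**2

def averaged_energy(h, m, res_samples):
    # KL(m) ~ (1 / #{v_i}) * sum_i H(m + v_i), as in the docstring.
    return sum(h(m + v) for v in res_samples) / len(res_samples)

rng = np.random.default_rng(0)
m = 1.0  # mean field around which samples are drawn
draws = rng.normal(0.0, 1.0, size=500)  # residual samples v_i

# Symmetrized residual samples: including both v_i and -v_i makes the
# sample mean exactly zero, so the distribution mean m is represented
# without bias (the Note's recommendation).
sym_samples = np.concatenate([draws, -draws])

kl_plain = averaged_energy(hamiltonian, m, draws)
kl_sym = averaged_energy(hamiltonian, m, sym_samples)

# For this quadratic H the cross term m*v cancels exactly under
# symmetrization, so kl_sym equals 0.5*m**2 + 0.5*mean(v**2).
```

For a quadratic energy the symmetrized average removes the linear sampling-noise term exactly, which is a small-scale analogue of the noise reduction the Note describes for the KL minimization.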