ift / NIFTy · Commits · 86891afb

Commit 86891afb authored Jan 15, 2019 by Martin Reinecke

first round of fixes

parent 4a44d1e8

Changes (1 file): nifty5/operators/energy_operators.py
...
@@ -27,32 +27,31 @@ from .simple_linear_operators import VdotOperator

 class EnergyOperator(Operator):
-    """Basis class EnergyOperator, an abstract class from which
+    """An abstract class from which
     other specific EnergyOperator subclasses are derived.

     An EnergyOperator has a scalar domain as target domain.
     It turns a field into a scalar and a linearization into a linearization.
     It is intended as an objective function for field inference.

-    Typical usage in IFT: as an information Hamiltonian (= negative log probability)
-    or as a Gibbs free energy (= averaged Hamiltonian),
-    aka Kullbach-Leibler divergence.
+    Typical usage in IFT:
+    - as an information Hamiltonian (i.e. a negative log probability)
+    - or as a Gibbs free energy (i.e. an averaged Hamiltonian),
+      aka Kullbach-Leibler divergence.
     """
     _target = DomainTuple.scalar_domain()


 class SquaredNormOperator(EnergyOperator):
     """ Class for squared field norm energy.

     Usage
     -----
     E = SquaredNormOperator() represents a field energy E that is the L2 norm
     of a field f:

     E(f) = f^dagger f
     """
     def __init__(self, domain):
         self._domain = domain
...
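As a quick illustration of the scalar target, the following is a minimal sketch of evaluating a SquaredNormOperator on a constant field. It is not part of the commit: it assumes the nifty5 package of this era, with UnstructuredDomain, Field.full and the energy operators exported at package level, and the printed value simply follows from the formula E(f) = f^dagger f.

import nifty5 as ift

# a sketch, assuming the nifty5 API at the time of this commit
dom = ift.UnstructuredDomain(4)     # four data points, unit volume weights
E = ift.SquaredNormOperator(dom)    # E(f) = f^dagger f
f = ift.Field.full(dom, 2.)         # constant field with value 2
print(E(f))                         # scalar: 4 * 2^2 = 16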
@@ -73,13 +72,13 @@ class QuadraticFormOperator(EnergyOperator):
     op : EndomorphicOperator
         kernel of quadratic form

-    Usage
+    Notes
     -----
-    E = QuadraticFormOperator(op) represents a field energy that is a
-    quadratic form in a field f with kernel op:
-
-    E(f) = 0.5 f^dagger op f
+    `E = QuadraticFormOperator(op)` represents a field energy that is a
+    quadratic form in a field f with kernel op:
+
+    :math:`E(f) = 0.5 f^\dagger op f`
     """
     def __init__(self, op):
         from .endomorphic_operator import EndomorphicOperator
         if not isinstance(op, EndomorphicOperator):
...
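A hedged sketch of the quadratic form in use, with a ScalingOperator assumed as the simplest endomorphic kernel (3 times the identity); the expected number follows from the formula above, not from the commit.

import nifty5 as ift

dom = ift.UnstructuredDomain(4)
op = ift.ScalingOperator(3., dom)   # endomorphic kernel: 3 * identity
E = ift.QuadraticFormOperator(op)   # E(f) = 0.5 f^dagger op f
f = ift.Field.full(dom, 1.)
print(E(f))                         # scalar: 0.5 * 4 * 3 = 6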
@@ -102,24 +101,21 @@ class GaussianEnergy(EnergyOperator):
     Attributes
     ----------
-    mean = mean (field) of the Gaussian, default = 0
-    covariance = field covariance of the Gaussian, default = identity operator
-    domain = domain of field, default = domain of mean or covariance if specified
-
-    One of the attributes has to be specified at instanciation of a GaussianEnergy
-    to inform about the domain, otherwise an exception is rasied.
-
-    Usage
+    mean : Field
+        mean of the Gaussian (default 0)
+    covariance : LinearOperator
+        covariance of the Gaussian (default = identity operator)
+    domain : Domainoid
+        operator domain, inferred from mean or covariance if specified
+
+    Notes
     -----
-    E = GaussianEnergy(mean = m, covariance = D) represents (up to constants)
-    E(f) = - log G(f-m, D) = 0.5 (f-m)^dagger D^-1 (f-m)
-    an information energy for a Gaussian distribution with mean m and covariance D.
+    - At least one of the arguments has to be provided.
+    - `E = GaussianEnergy(mean=m, covariance=D)` represents (up to constants)
+      :math:`E(f) = -\log G(f-m, D) = 0.5 (f-m)^\dagger D^{-1} (f-m)`,
+      an information energy for a Gaussian distribution with mean m and
+      covariance D.
     """
     def __init__(self, mean=None, covariance=None, domain=None):
         self._domain = None
         if mean is not None:
...
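A minimal sketch of a Gaussian energy with the default unit covariance, under the same nifty5 API assumptions as above; the value in the comment is implied by the formula, not taken from the commit.

import nifty5 as ift

dom = ift.UnstructuredDomain(4)
m = ift.Field.full(dom, 1.)         # mean field
E = ift.GaussianEnergy(mean=m)      # identity covariance by default
f = ift.Field.full(dom, 3.)
print(E(f))                         # 0.5 * sum((f-m)^2) = 0.5 * 4 * 4 = 8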
@@ -156,23 +152,23 @@ class GaussianEnergy(EnergyOperator):

 class PoissonianEnergy(EnergyOperator):
     """Class for likelihood-energies of expected count field constrained by
     Poissonian count data.

     Parameters
     ----------
     d : Field
         data field with counts

-    Usage
+    Notes
     -----
     E = PoissonianEnergy(d) represents (up to an f-independent term log(d!))
-    E(f) = -log Poisson(d|f) = sum(f) - d^dagger log(f),
-    where f is a field in data space (d.domain) with the expectation values for
-    the counts.
+    :math:`E(f) = -\log \text{Poisson}(d|f) = \text{sum}(f) - d^\dagger \log(f)`,
+    where f is a Field in data space with the expectation values for
+    the counts.
     """
     def __init__(self, d):
         self._d = d
         self._domain = DomainTuple.make(d.domain)
...
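A hedged sketch of the Poissonian likelihood energy on a small count dataset. Field.from_global_data is assumed to be the nifty5 constructor from a numpy array; the expected value follows from sum(f) - d^dagger log(f).

import numpy as np
import nifty5 as ift

dom = ift.UnstructuredDomain(4)
d = ift.Field.from_global_data(dom, np.array([1., 3., 0., 2.]))
E = ift.PoissonianEnergy(d)         # E(f) = sum(f) - d^dagger log(f)
f = ift.Field.full(dom, 2.)         # expected counts, must be positive
print(E(f))                         # 4*2 - 6*log(2), approx. 3.84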
@@ -189,11 +185,11 @@ class PoissonianEnergy(EnergyOperator):

 class InverseGammaLikelihood(EnergyOperator):
     """Special class for inverse Gamma distributed covariances.

     RL FIXME: To be documented.
     """
     def __init__(self, d):
         self._d = d
         self._domain = DomainTuple.make(d.domain)
...
@@ -209,23 +205,23 @@ class InverseGammaLikelihood(EnergyOperator):

 class BernoulliEnergy(EnergyOperator):
     """Class for likelihood-energies of expected event frequency constrained by
     event data.

     Parameters
     ----------
     d : Field
         data field with events (=1) or non-events (=0)

-    Usage
+    Notes
     -----
     E = BernoulliEnergy(d) represents
-    E(f) = -log Bernoulli(d|f) = -d^dagger log(f) - (1-d)^dagger log(1-f),
+    :math:`E(f) = -\log \mbox{Bernoulli}(d|f) = -d^\dagger \log f - (1-d)^\dagger \log(1-f)`,
     where f is a field in data space (d.domain) with the expected frequencies of
     events.
     """
     def __init__(self, d):
         self._d = d
         self._domain = DomainTuple.make(d.domain)
...
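A minimal sketch of the Bernoulli energy on binary event data, under the same API assumptions as the earlier examples; the number in the comment is implied by the formula above.

import numpy as np
import nifty5 as ift

dom = ift.UnstructuredDomain(4)
d = ift.Field.from_global_data(dom, np.array([1., 0., 1., 1.]))
E = ift.BernoulliEnergy(d)          # -d^dagger log f - (1-d)^dagger log(1-f)
f = ift.Field.full(dom, 0.7)        # expected event frequencies in (0, 1)
print(E(f))                         # 3*(-log 0.7) + 1*(-log 0.3), approx. 2.27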
@@ -249,35 +245,33 @@ class Hamiltonian(EnergyOperator):
     ----------
     lh : EnergyOperator
         a likelihood energy
     ic_samp : IterationController
         is passed to SamplingEnabler to draw Gaussian distributed samples
         with covariance = metric of Hamiltonian
         (= Hessian without terms that generate negative eigenvalues)
         default = None

-    Usage
+    Notes
     -----
     H = Hamiltonian(E_lh) represents
-    H(f) = 0.5 f^dagger f + E_lh(f)
+    :math:`H(f) = 0.5 f^\dagger f + E_{lh}(f)`,
     an information Hamiltonian for a field f with a white Gaussian prior
-    (unit covariance) and the likelihood energy E_lh.
+    (unit covariance) and the likelihood energy :math:`E_{lh}`.

     Tip
     ---
     Other field priors can be represented via transformations of a white
     Gaussian field into a field with the desired prior probability structure.
     By implementing prior information this way, the field prior is represented
     by a generative model, from which NIFTy can draw samples and infer a field
     using the Maximum a Posteriori (MAP) or the Variational Bayes (VB) method.

     For more details see:
     "Encoding prior knowledge in the structure of the likelihood"
     Jakob Knollmüller, Torsten A. Ensslin, submitted, arXiv:1812.04403
     https://arxiv.org/abs/1812.04403
     """
     def __init__(self, lh, ic_samp=None):
         self._lh = lh
         self._prior = GaussianEnergy(domain=lh.domain)
...
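A hedged sketch of a standardized information Hamiltonian built from a Gaussian likelihood energy; it assumes Hamiltonian is exported at package level in nifty5, and the expected value is just the prior term plus the likelihood term from the formula above.

import nifty5 as ift

dom = ift.UnstructuredDomain(4)
m = ift.Field.full(dom, 1.)
lh = ift.GaussianEnergy(mean=m)     # likelihood energy E_lh
H = ift.Hamiltonian(lh)             # H(f) = 0.5 f^dagger f + E_lh(f)
f = ift.Field.full(dom, 0.5)
print(H(f))                         # 0.5*4*0.25 + 0.5*4*0.25 = 1.0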
@@ -303,27 +297,27 @@ class Hamiltonian(EnergyOperator):

 class SampledKullbachLeiblerDivergence(EnergyOperator):
     """Class for Kullbach Leibler (KL) Divergence or Gibbs free energies

     Precisely a sample averaged Hamiltonian (or other energy) that represents
     approximatively the relevant part of a KL to be used in Variational Bayes
     inference if the samples are drawn from the approximating Gaussian.

     Let Q(f) = G(f-m,D) be the Gaussian used to approximate
     P(f|d), the correct posterior with information Hamiltonian
     H(d,f) = - log P(d,f) = - log P(f|d) + const.

     The KL divergence between those should then be optimized for m. It is

     KL(Q,P) = int Df Q(f) log Q(f)/P(f)
             = < log Q(f) >_Q(f) - < log P(f) >_Q(f)
             = const + < H(f) >_G(f-m,D),

     in essence the information Hamiltonian averaged over a Gaussian distribution
     centered on the mean m.

     SampledKullbachLeiblerDivergence(H) approximates < H(f) >_G(f-m,D) if the
     residuals f-m are drawn from covariance D.

     Parameters
     ----------
     h : Hamiltonian
...
@@ -332,22 +326,20 @@ class SampledKullbachLeiblerDivergence(EnergyOperator):
         set of residual sample points to be added to mean field
         for approximate estimation of the KL

-    Usage:
+    Notes
     -----
     KL = SampledKullbachLeiblerDivergence(H, samples) represents
     KL(m) = sum_i H(m+v_i) / N,
     where v_i are the residual samples, N is their number, and m is the mean
     field around which the samples are drawn.

-    Tip:
-    ----
     Having symmetrized residual samples, with both v_i and -v_i present,
     ensures that the distribution mean is exactly represented. This reduces
     sampling noise and helps the numerics of the KL minimization process in
     the variational Bayes inference.
     """
     def __init__(self, h, res_samples):
         self._h = h
         self._domain = h.domain
...
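A hedged sketch of the sample-averaged KL with symmetrized residuals, as recommended in the docstring; it assumes SampledKullbachLeiblerDivergence and Field.from_random are available at package level in nifty5.

import nifty5 as ift

dom = ift.UnstructuredDomain(4)
lh = ift.GaussianEnergy(mean=ift.Field.full(dom, 1.))
H = ift.Hamiltonian(lh)
v = ift.Field.from_random('normal', dom)   # one residual sample
res_samples = [v, -v]                      # symmetrized: v and -v
KL = ift.SampledKullbachLeiblerDivergence(H, res_samples)
m = ift.Field.full(dom, 0.5)
print(KL(m))                               # (H(m+v) + H(m-v)) / 2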