Commit 8c50ce87 authored by Martin Reinecke

Merge branch 'NIFTy_5' into model_documentation

parents 0dccab5d 8e516415
%% Cell type:markdown id: tags:
# A NIFTy demonstration
%% Cell type:markdown id: tags:
## IFT: Big Picture
IFT starting point:
$$d = Rs+n$$
Typically, $s$ is a continuous field and $d$ a discrete data vector. In particular, $R$ is not invertible.
IFT aims at **inverting** the above non-invertible problem in the **best possible way** using Bayesian statistics.
## NIFTy
NIFTy (Numerical Information Field Theory) is a Python framework in which IFT problems can be tackled easily.
Main Interfaces:
- **Spaces**: Cartesian, 2-spheres (HEALPix, Gauss-Legendre) and their respective harmonic spaces.
- **Fields**: Defined on spaces.
- **Operators**: Acting on fields.
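A minimal sketch of how these pieces fit together, using only the `nifty5` calls that appear later in this notebook (create a space, draw a field on it, apply an operator to it):
%% Cell type:code id: tags:
``` python
import numpy as np
import nifty5 as ift

space = ift.RGSpace(512)                       # a 1D regular-grid space
field = ift.Field.from_random(domain=space, random_type='normal',
                              std=1., mean=0)  # a field defined on that space
op = ift.ScalingOperator(2., space)            # a simple linear operator
print(op(field).to_global_data()[:5])          # operators act on fields
```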
%% Cell type:markdown id: tags:
## Wiener Filter: Formulae
### Assumptions
- $d = Rs + n$, with $R$ a linear operator.
- $\mathcal P (s) = \mathcal G (s,S)$, $\mathcal P (n) = \mathcal G (n,N)$ where $S, N$ are positive definite matrices.
### Posterior
The Posterior is given by:
$$\mathcal P (s|d) \propto P(s,d) = \mathcal G(d-Rs,N) \,\mathcal G(s,S) \propto \mathcal G (m,D) $$
where
$$\begin{align}
m &= Dj \\
D^{-1}&= (S^{-1} +R^\dagger N^{-1} R )\\
j &= R^\dagger N^{-1} d
\end{align}$$
Let us implement this in NIFTy!
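Before doing so, here is a tiny dense-matrix toy in plain NumPy (independent of NIFTy) that spells out exactly these formulae; all names ($R$, $S$, $N$, $j$, $D$, $m$) are toy counterparts of the quantities defined above:
%% Cell type:code id: tags:
``` python
import numpy as np

rng = np.random.RandomState(0)
npix, ndata = 4, 3
R = rng.randn(ndata, npix)                 # toy linear response
S = np.diag([2., 1., .5, .25])             # signal covariance
N = 0.1 * np.eye(ndata)                    # noise covariance

s = rng.multivariate_normal(np.zeros(npix), S)
n = rng.multivariate_normal(np.zeros(ndata), N)
d = R @ s + n

D = np.linalg.inv(np.linalg.inv(S) + R.T @ np.linalg.inv(N) @ R)
j = R.T @ np.linalg.inv(N) @ d
m = D @ j                                  # Wiener filter mean, m = Dj
```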
%% Cell type:markdown id: tags:
## Wiener Filter: Example
- We assume statistical homogeneity and isotropy. Therefore the signal covariance $S$ is diagonal in harmonic space, and is described by a one-dimensional power spectrum, assumed here as $$P(k) = P_0\,\left(1+\left(\frac{k}{k_0}\right)^2\right)^{-\gamma /2},$$
with $P_0 = 0.2, k_0 = 5, \gamma = 4$.
- $N = 0.2 \cdot \mathbb{1}$.
- Number of data points $N_{pix} = 512$.
- Reconstruction in harmonic space.
- Response operator:
$$R = FFT_{\text{harmonic} \rightarrow \text{position}}$$
%% Cell type:code id: tags:
``` python
N_pixels = 512  # Number of pixels

def pow_spec(k):
    P0, k0, gamma = [.2, 5, 4]
    return P0 / ((1. + (k/k0)**2)**(gamma / 2))
```
%% Cell type:markdown id: tags:
## Wiener Filter: Implementation
%% Cell type:markdown id: tags:
### Import Modules
%% Cell type:code id: tags:
``` python
import numpy as np
np.random.seed(40)
import nifty5 as ift
import matplotlib.pyplot as plt
%matplotlib inline
```
%% Cell type:markdown id: tags:
### Implement Propagator
%% Cell type:code id: tags:
``` python
def Curvature(R, N, Sh):
    IC = ift.GradientNormController(iteration_limit=50000,
                                    tol_abs_gradnorm=0.1)
    # WienerFilterCurvature is (R.adjoint*N.inverse*R + Sh.inverse) plus some handy
    # helper methods.
    return ift.WienerFilterCurvature(R, N, Sh, iteration_controller=IC,
                                     iteration_controller_sampling=IC)
```
%% Cell type:markdown id: tags:
### Conjugate Gradient Preconditioning
- $D$ is defined via:
$$D^{-1} = \mathcal S_h^{-1} + R^\dagger N^{-1} R.$$
In the end, we want to apply $D$ to $j$, i.e. we need the inverse action of $D^{-1}$. This is done numerically (algorithm: *Conjugate Gradient*).
<!--
- One can define the *condition number* of a non-singular and normal matrix $A$:
$$\kappa (A) := \frac{|\lambda_{\text{max}}|}{|\lambda_{\text{min}}|},$$
where $\lambda_{\text{max}}$ and $\lambda_{\text{min}}$ are the largest and smallest eigenvalue of $A$, respectively.
- The larger $\kappa$ the slower Conjugate Gradient.
- By default, conjugate gradient solves: $D^{-1} m = j$ for $m$, where $D^{-1}$ can be badly conditioned. If one knows a non-singular matrix $T$ for which $TD^{-1}$ is better conditioned, one can solve the equivalent problem:
$$\tilde A m = \tilde j,$$
where $\tilde A = T D^{-1}$ and $\tilde j = Tj$.
- In our case $S^{-1}$ is responsible for the bad conditioning of $D$ depending on the chosen power spectrum. Thus, we choose
$$T = \mathcal F^\dagger S_h^{-1} \mathcal F.$$
-->
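As a generic, NIFTy-independent illustration of why preconditioning matters, the following SciPy sketch solves an ill-conditioned symmetric positive definite system with conjugate gradient, once plainly and once with a simple Jacobi (diagonal) preconditioner standing in for the operator $T$ discussed above (the matrices here are made up for the demonstration):
%% Cell type:code id: tags:
``` python
import numpy as np
from scipy.sparse.linalg import cg, LinearOperator

rng = np.random.RandomState(1)
n_dim = 200
diag = np.logspace(0, 6, n_dim)       # scales spread over six orders of magnitude
B = 1e-3 * rng.randn(n_dim, n_dim)
A = np.diag(diag) + B + B.T           # ill-conditioned, but still SPD
b = rng.randn(n_dim)

def run_cg(precond=None):
    n_iter = [0]
    def count(_):
        n_iter[0] += 1
    x, info = cg(A, b, M=precond, callback=count, maxiter=2000)
    return n_iter[0]

jacobi = LinearOperator((n_dim, n_dim), matvec=lambda v: v / diag)
print("CG iterations without preconditioner:", run_cg())
print("CG iterations with Jacobi preconditioner:", run_cg(jacobi))
```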
%% Cell type:markdown id: tags:
### Generate Mock Data
- Draw a signal field $s$ and a noise field $n$ with the given covariances.
- Calculate $d$.
%% Cell type:code id: tags:
``` python
s_space = ift.RGSpace(N_pixels)
h_space = s_space.get_default_codomain()
HT = ift.HarmonicTransformOperator(h_space, target=s_space)
# Operators
Sh = ift.create_power_operator(h_space, power_spectrum=pow_spec)
R = HT #*ift.create_harmonic_smoothing_operator((h_space,), 0, 0.02)
# Fields and data
sh = Sh.draw_sample()
noiseless_data = R(sh)
noise_amplitude = np.sqrt(0.2)
N = ift.ScalingOperator(noise_amplitude**2, s_space)
n = ift.Field.from_random(domain=s_space, random_type='normal',
                          std=noise_amplitude, mean=0)
d = noiseless_data + n
j = R.adjoint_times(N.inverse_times(d))
curv = Curvature(R=R, N=N, Sh=Sh)
D = curv.inverse
```
%% Cell type:markdown id: tags:
### Run Wiener Filter
%% Cell type:code id: tags:
``` python
m = D(j)
```
%% Cell type:markdown id: tags:
### Signal Reconstruction
%% Cell type:code id: tags:
``` python
# Get signal data and reconstruction data
s_data = HT(sh).to_global_data()
m_data = HT(m).to_global_data()
d_data = d.to_global_data()
plt.figure(figsize=(15,10))
plt.plot(s_data, 'r', label="Signal", linewidth=3)
plt.plot(d_data, 'k.', label="Data")
plt.plot(m_data, 'k', label="Reconstruction", linewidth=3)
plt.title("Reconstruction")
plt.legend()
plt.show()
```
%% Cell type:code id: tags:
``` python
plt.figure(figsize=(15,10))
plt.plot(s_data - s_data, 'r', label="Signal", linewidth=3)
plt.plot(d_data - s_data, 'k.', label="Data")
plt.plot(m_data - s_data, 'k', label="Reconstruction", linewidth=3)
plt.axhspan(-noise_amplitude, noise_amplitude, facecolor='0.9', alpha=.5)
plt.title("Residuals")
plt.legend()
plt.show()
```
%% Cell type:markdown id: tags:
### Power Spectrum
%% Cell type:code id: tags:
``` python
s_power_data = ift.power_analyze(sh).to_global_data()
m_power_data = ift.power_analyze(m).to_global_data()
plt.figure(figsize=(15,10))
plt.loglog()
plt.xlim(1, int(N_pixels/2))
ymin = min(m_power_data)
plt.ylim(ymin, 1)
xs = np.arange(1,int(N_pixels/2),.1)
plt.plot(xs, pow_spec(xs), label="True Power Spectrum", color='k',alpha=0.5)
plt.plot(s_power_data, 'r', label="Signal")
plt.plot(m_power_data, 'k', label="Reconstruction")
plt.axhline(noise_amplitude**2 / N_pixels, color="k", linestyle='--', label="Noise level", alpha=.5)
plt.axhspan(noise_amplitude**2 / N_pixels, ymin, facecolor='0.9', alpha=.5)
plt.title("Power Spectrum")
plt.legend()
plt.show()
```
%% Cell type:markdown id: tags:
## Wiener Filter on Incomplete Data
%% Cell type:code id: tags:
``` python
# Operators
Sh = ift.create_power_operator(h_space, power_spectrum=pow_spec)
N = ift.ScalingOperator(noise_amplitude**2,s_space)
# R is defined below
# Fields
sh = Sh.draw_sample()
s = HT(sh)
n = ift.Field.from_random(domain=s_space, random_type='normal',
                          std=noise_amplitude, mean=0)
```
%% Cell type:markdown id: tags:
### Partially Lose Data
%% Cell type:code id: tags:
``` python
l = int(N_pixels * 0.2)
h = int(N_pixels * 0.2 * 2)
mask = np.full(s_space.shape, 1.)
mask[l:h] = 0
mask = ift.Field.from_global_data(s_space, mask)
R = ift.DiagonalOperator(mask)*HT
n = n.to_global_data()
n[l:h] = 0
n = ift.Field.from_global_data(s_space, n)
d = R(sh) + n
```
%% Cell type:code id: tags:
``` python
curv = Curvature(R=R, N=N, Sh=Sh)
D = curv.inverse
j = R.adjoint_times(N.inverse_times(d))
m = D(j)
```
%% Cell type:markdown id: tags:
### Compute Uncertainty
%% Cell type:code id: tags:
``` python
m_mean, m_var = ift.probe_with_posterior_samples(curv, HT, 200)
```
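%% Cell type:markdown id: tags:
Conceptually, the uncertainty is estimated by drawing samples whose covariance is the posterior covariance $D$ and measuring their pointwise scatter. A hand-rolled sketch of the same idea (assuming, as in the 2D example below, that `curv.draw_sample(from_inverse=True)` returns a sample with covariance $D$):
%% Cell type:code id: tags:
``` python
# Illustration only: rough counterpart of probe_with_posterior_samples above.
# Draw harmonic-space samples with covariance D, transform them to position space
# and take their pointwise variance.
samples = np.array([HT(curv.draw_sample(from_inverse=True)).to_global_data()
                    for _ in range(200)])
m_var_manual = samples.var(axis=0)   # should roughly agree with m_var
```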
%% Cell type:markdown id: tags:
### Get data
%% Cell type:code id: tags:
``` python
# Get signal data and reconstruction data
s_data = s.to_global_data()
m_data = HT(m).to_global_data()
m_var_data = m_var.to_global_data()
uncertainty = np.sqrt(m_var_data)
d_data = d.to_global_data()
# Set lost data to NaN for proper plotting
d_data[d_data == 0] = np.nan
```
%% Cell type:code id: tags:
``` python
fig = plt.figure(figsize=(15,10))
plt.axvspan(l, h, facecolor='0.8',alpha=0.5)
plt.fill_between(range(N_pixels), m_data - uncertainty, m_data + uncertainty, facecolor='0.5', alpha=0.5)
plt.plot(s_data, 'r', label="Signal", alpha=1, linewidth=3)
plt.plot(d_data, 'k.', label="Data")
plt.plot(m_data, 'k', label="Reconstruction", linewidth=3)
plt.title("Reconstruction of incomplete data")
plt.legend()
```
%% Cell type:markdown id: tags:
# 2D Example
%% Cell type:code id: tags:
``` python
N_pixels = 256  # Number of pixels
sigma2 = 2.     # Noise variance

def pow_spec(k):
    P0, k0, gamma = [.2, 2, 4]
    return P0 * (1. + (k/k0)**2)**(-gamma/2)
s_space = ift.RGSpace([N_pixels, N_pixels])
```
%% Cell type:code id: tags:
``` python
h_space = s_space.get_default_codomain()
HT = ift.HarmonicTransformOperator(h_space,s_space)
# Operators
Sh = ift.create_power_operator(h_space, power_spectrum=pow_spec)
N = ift.ScalingOperator(sigma2,s_space)
# Fields and data
sh = Sh.draw_sample()
n = ift.Field.from_random(domain=s_space, random_type='normal',
                          std=np.sqrt(sigma2), mean=0)
# Lose some data
l = int(N_pixels * 0.33)
h = int(N_pixels * 0.33 * 2)
mask = np.full(s_space.shape, 1.)
mask[l:h,l:h] = 0.
mask = ift.Field.from_global_data(s_space, mask)
R = ift.DiagonalOperator(mask)*HT
n = n.to_global_data()
n[l:h, l:h] = 0
n = ift.Field.from_global_data(s_space, n)
curv = Curvature(R=R, N=N, Sh=Sh)
D = curv.inverse
d = R(sh) + n
j = R.adjoint_times(N.inverse_times(d))
# Run Wiener filter
m = D(j)
# Uncertainty
m_mean, m_var = ift.probe_with_posterior_samples(curv, HT, 20)
# Get data
s_data = HT(sh).to_global_data()
m_data = HT(m).to_global_data()
m_var_data = m_var.to_global_data()
d_data = d.to_global_data()
uncertainty = np.sqrt(np.abs(m_var_data))
```
%% Cell type:code id: tags:
``` python
cm = ['magma', 'inferno', 'plasma', 'viridis'][1]
mi = np.min(s_data)
ma = np.max(s_data)
fig, axes = plt.subplots(1, 2, figsize=(15, 7))
data = [s_data, d_data]
caption = ["Signal", "Data"]
for ax in axes.flat:
    im = ax.imshow(data.pop(0), interpolation='nearest', cmap=cm, vmin=mi,
                   vmax=ma)
    ax.set_title(caption.pop(0))
fig.subplots_adjust(right=0.8)
cbar_ax = fig.add_axes([0.85, 0.15, 0.05, 0.7])
fig.colorbar(im, cax=cbar_ax)
```
%% Cell type:code id: tags:
``` python
mi = np.min(s_data)
ma = np.max(s_data)
fig, axes = plt.subplots(3, 2, figsize=(15, 22.5))
sample = HT(curv.draw_sample(from_inverse=True)+m).to_global_data()
post_mean = (m_mean + HT(m)).to_global_data()
data = [s_data, m_data, post_mean, sample, s_data - m_data, uncertainty]
caption = ["Signal", "Reconstruction", "Posterior mean", "Sample", "Residuals", "Uncertainty Map"]
for ax in axes.flat:
    im = ax.imshow(data.pop(0), interpolation='nearest', cmap=cm, vmin=mi, vmax=ma)
    ax.set_title(caption.pop(0))
fig.subplots_adjust(right=0.8)
cbar_ax = fig.add_axes([.85, 0.15, 0.05, 0.7])
fig.colorbar(im, cax=cbar_ax)
```
%% Cell type:markdown id: tags:
### Is the uncertainty map reliable?
As a rough benchmark: for a Gaussian posterior, about 68% of the pixels should deviate from the true signal by less than one posterior standard deviation.
%% Cell type:code id: tags:
``` python
precise = (np.abs(s_data-m_data) < uncertainty)
print("Error within uncertainty map bounds: " + str(np.sum(precise) * 100 / N_pixels**2) + "%")
plt.figure(figsize=(15,10))
plt.imshow(precise.astype(float), cmap="brg")
plt.colorbar()
```
%% Cell type:markdown id: tags:
# Start Coding
## NIFTy Repository + Installation guide
https://gitlab.mpcdf.mpg.de/ift/NIFTy
NIFTy v5 is **more or less stable**!
@@ -2,7 +2,9 @@ import nifty5 as ift
import numpy as np
from global_newton.models_other.apply_data import ApplyData
from global_newton.models_energy.hamiltonian import Hamiltonian
from nifty5.library.unit_log_gauss import UnitLogGauss
from nifty5 import GaussianEnergy
if __name__ == '__main__':
# s_space = ift.RGSpace([1024])
s_space = ift.RGSpace([128,128])
@@ -45,7 +47,7 @@ if __name__ == '__main__':
NWR = ApplyData(data, ift.Field(d_space,val=noise), Rs)
INITIAL_POSITION = ift.from_random('normal',total_domain)
likelihood = UnitLogGauss(INITIAL_POSITION, NWR)
likelihood = GaussianEnergy(INITIAL_POSITION, NWR)
IC = ift.GradientNormController(iteration_limit=500, tol_abs_gradnorm=1e-3)
inverter = ift.ConjugateGradient(controller=IC)
@@ -13,8 +13,10 @@ recognized from a large distance, ignoring all technical details.
From such a perspective,
- IFT problems largely consist of *minimization* problems involving a large
number of equations.
- IFT problems largely consist of the combination of several high dimensional
*minimization* problems.
- Within NIFTy, *models* are used to define the characteristic equations and
properties of the problems.
- The equations are built mostly from the application of *linear operators*,
but there may also be nonlinear functions involved.
- The unknowns in the equations represent either continuous physical *fields*,
@@ -232,6 +234,62 @@ The properties :attr:`~LinearOperator.adjoint` and
were the original operator's adjoint or inverse, respectively.
Models
======
Model classes (represented by NIFTy5's abstract :class:`Model` class) are used to construct
the equations of a specific inference problem.
Most models are defined via a position (a :class:`MultiField` object), their value at this
position (again a :class:`MultiField` object), and a Jacobian derivative (a
:class:`LinearOperator`), which is needed for the minimization procedure.
Using the existing basic model classes one can construct more complicated models, as
NIFTy allows for easy and self-consistent combination via point-wise multiplication,
addition and subtraction. The model resulting from these operations then automatically
contains the correct Jacobians, positions and values.
Notably, :class:`Constant` and :class:`Variable` allow for an easy way to turn
inference of specific quantities on and off.
The basic model classes also allow for more complex operations on models such as
the application of :class:`LinearOperators` or local non-linearities.
As an example one may consider the following combination of ``x``, which is a model of type
:class:`Variable` and ``y``, which is a model of type :class:`Constant`::
z = x*x + y
``z`` will then be a model with the following properties::
z.value = x.value*x.value + y.value
z.position = Union(x.position, y.position)
z.jacobian = 2*makeOp(x.value)
Basic models
------------
Basic model classes provided by NIFTy are
- :class:`Constant` contains a constant value and has a zero valued Jacobian.
Like other models, it has a position, but its value does not depend on it.
- :class:`Variable` returns the position as its value, its derivative is one.
- :class:`LinearModel` applies a :class:`LinearOperator` on the model.
- :class:`LocalModel` applies a non-linearity locally on the model.
- :class:`MultiModel` combines various models into one. In this case the position,
value and Jacobian are combined into corresponding :class:`MultiFields` and operators.
Advanced models
---------------
NIFTy also provides a library of more sophisticated models which are used for more
specific inference problems. Currently these are:
- :class:`AmplitudeModel`, which returns a smooth power spectrum.
- :class:`PointModel`, which models point sources that follow an inverse gamma distribution.
- :class:`SmoothSkyModel`, which models a diffuse lognormal field. It takes an amplitude model
to specify the correlation structure of the field.
.. _minimization:
@@ -16,7 +16,7 @@ from .minimization import *
from .sugar import *
from .plotting.plot import plot
from . import library
from .library import *
from . import extra
from .utilities import memo
@@ -26,7 +26,7 @@ except ImportError:
from .data_objects.numpy_do import *
__all__ = ["ntask", "rank", "master", "local_shape", "data_object", "full",
"empty", "zeros", "ones", "empty_like", "vdot", "abs", "exp",
"empty", "zeros", "ones", "empty_like", "vdot", "exp",
"log", "tanh", "sqrt", "from_object", "from_random",
"local_data", "ibegin", "ibegin_from_shape", "np_allreduce_sum",
"distaxis", "from_local_data", "from_global_data", "to_global_data",
@@ -34,7 +34,7 @@ class GLSpace(StructuredDomain):
pixelization.
nlon : int, optional
Number of longitudinal bins that are used for this pixelization.
Default value is 2*nlat + 1.
Default value is 2*nlat - 1.
"""
_needed_for_hash = ["_nlat", "_nlon"]
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
# Copyright(C) 2013-2018 Max-Planck-Society
#
# NIFTy is being developed at the Max-Planck-Institut fuer Astrophysik
# and financially supported by the Studienstiftung des deutschen Volkes.
from ..library.gaussian_energy import GaussianEnergy
from ..minimization.energy import Energy
from ..operators import InversionEnabler, SamplingEnabler
from ..models.variable import Variable
from ..operators import InversionEnabler, SamplingEnabler
from ..utilities import memo
from ..library.unit_log_gauss import UnitLogGauss
class Hamiltonian(Energy):
@@ -15,11 +33,8 @@ class Hamiltonian(Energy):
super(Hamiltonian, self).__init__(lh.position)
self._lh = lh
self._ic = iteration_controller
if iteration_controller_sampling is None:
self._ic_samp = iteration_controller
else:
self._ic_samp = iteration_controller_sampling
self._prior = UnitLogGauss(Variable(self.position))
self._ic_samp = iteration_controller_sampling
self._prior = GaussianEnergy(Variable(self.position))
self._precond = self._prior.curvature
def at(self, position):
@@ -39,8 +54,11 @@
@memo
def curvature(self):
prior_curv = self._prior.curvature
c = SamplingEnabler(self._lh.curvature, prior_curv.inverse,
self._ic_samp, prior_curv.inverse)
if self._ic_samp is None:
c = self._lh.curvature + prior_curv
else:
c = SamplingEnabler(self._lh.curvature, prior_curv.inverse,
self._ic_samp, prior_curv.inverse)
return InversionEnabler(c, self._ic, self._precond)
def __str__(self):
from .operator_tests import consistency_check
from .energy_tests import *
from .energy_and_model_tests import *
@@ -18,19 +18,42 @@
import numpy as np
from ..sugar import from_random
from ..minimization.energy import Energy
from ..models.model import Model
__all__ = ["check_value_gradient_consistency",
"check_value_gradient_curvature_consistency"]
def _get_acceptable_model(M):
val = M.value
if not np.isfinite(val.sum()):
raise ValueError('Initial Model value must be finite')
dir = from_random("normal", M.position.domain)
dirder = M.gradient(dir)
dir *= val/(dirder).norm()*1e-5
# Find a step length that leads to a "reasonable" Model
for i in range(50):
try:
M2 = M.at(M.position+dir)
if np.isfinite(M2.value.sum()) and abs(M2.value.sum()) < 1e20:
break
except FloatingPointError:
pass
dir *= 0.5
else:
raise ValueError("could not find a reasonable initial step")
return M2
def _get_acceptable_energy(E):
val = E.value
if not np.isfinite(val):
raise ValueError
raise ValueError('Initial Energy must be finite')
dir = from_random("normal", E.position.domain)
dirder = E.gradient.vdot(dir)
dir *= np.abs(val)/np.abs(dirder)*1e-5
# find a step length that leads to a "reasonable" energy
# Find a step length that leads to a "reasonable" energy
for i in range(50):
try:
E2 = E.at(E.position+dir)
@@ -46,31 +69,45 @@ def _get_acceptable_energy(E):
def check_value_gradient_consistency(E, tol=1e-8, ntries=100):
for _ in range(ntries):
E2 = _get_acceptable_energy(E)
if isinstance(E, Energy):
E2 = _get_acceptable_energy(E)
else:
E2 = _get_acceptable_model(E)
val = E.value
dir = E2.position - E.position
# Enext = E2
Enext = E2
dirnorm = dir.norm()
for i in range(50):
Emid = E.at(E.position + 0.5*dir)
dirder = Emid.gradient.vdot(dir)/dirnorm
xtol = tol*Emid.gradient_norm
if abs((E2.value-val)/dirnorm - dirder) < xtol:
break
if isinstance(E, Energy):
dirder = Emid.gradient.vdot(dir)/dirnorm
else:
dirder = Emid.gradient(dir)/dirnorm
numgrad = (E2.value-val)/dirnorm
if isinstance(E, Model):
xtol = tol * dirder.norm() / np.sqrt(dirder.size)
if (abs(numgrad-dirder) < xtol).all():
break
else:
xtol = tol*Emid.gradient_norm
if abs(numgrad-dirder) < xtol:
break
dir *= 0.5
dirnorm *= 0.5
E2 = Emid
else:
raise ValueError("gradient and value seem inconsistent")
# E = Enext
E = Enext
def check_value_gradient_curvature_consistency(E, tol=1e-8, ntries=100):
if isinstance(E, Model):
raise ValueError('Models have no curvature, thus it cannot be tested.')
for _ in range(ntries):
E2 = _get_acceptable_energy(E)
val = E.value
dir = E2.position - E.position
# Enext = E2
Enext = E2
dirnorm = dir.norm()
for i in range(50):
Emid = E.at(E.position + 0.5*dir)
@@ -85,4 +122,4 @@ def check_value_gradient_curvature_consistency(E, tol=1e-8, ntries=100):
E2 = Emid
else:
raise ValueError("gradient, value and curvature seem inconsistent")
# E = Enext
E = Enext
@@ -24,8 +24,6 @@ from .domain_tuple import DomainTuple
from functools import reduce
from . import dobj
__all__ = ["Field", "sqrt", "exp", "log", "conjugate"]
class Field(object):
""" The discrete representation of a continuous field over multiple spaces.
from .amplitude_model import make_amplitude_model
from .apply_data import ApplyData
from .gaussian_energy import GaussianEnergy
from .los_response import LOSResponse
from .nonlinear_wiener_filter_energy import NonlinearWienerFilterEnergy
from .unit_log_gauss import UnitLogGauss
from .point_sources import PointSources
from .poisson_log_likelihood import PoissonLogLikelihood
from .poissonian_energy import PoissonianEnergy
from .smooth_sky import make_smooth_mf_sky_model, make_smooth_sky_model
from .wiener_filter_curvature import WienerFilterCurvature
from .wiener_filter_energy import WienerFilterEnergy
def ApplyData(data, var, model_data):
from .. import DiagonalOperator, Constant, sqrt
# TODO This is rather confusing. Delete that eventually.
from ..operators.diagonal_operator import DiagonalOperator
from ..models.constant import Constant
from ..sugar import sqrt
sqrt_n = DiagonalOperator(sqrt(var))
data = Constant(model_data.position, data)
return sqrt_n.inverse(model_data - data)
@@ -17,45 +17,50 @@
# and financially supported by the Studienstiftung des deutschen Volkes.
from ..minimization.energy import Energy
from ..operators.inversion_enabler import InversionEnabler
from ..operators.sandwich_operator import SandwichOperator
from ..utilities import memo
class UnitLogGauss(Energy):
def __init__(self, s, inverter=None):
class GaussianEnergy(Energy):
def __init__(self, inp, mean=None, covariance=None):
"""
s: Sky model object
inp: Model object
value = 0.5 * s.vdot(s), i.e. a log-Gauss distribution with unit
covariance
"""
super(UnitLogGauss, self).__init__(s.position)
self._s = s
self._inverter = inverter
super(GaussianEnergy, self).__init__(inp.position)
self._inp = inp
self._mean = mean
self._cov = covariance
def at(self, position):
return self.__class__(self._s.at(position), self._inverter)
return self.__class__(self._inp.at(position), self._mean, self._cov)
@property
@memo
def _gradient_helper(self):
return self._s.gradient
def residual(self):
if self._mean is not None:
return self._inp.value - self._mean
return self._inp.value
@property
@memo
def value(self):
return .5 * self._s.value.squared_norm()
if self._cov is None:
return .5 * self.residual.vdot(self.residual).real
return .5 * self.residual.vdot(self._cov.inverse(self.residual)).real
@property
@memo
def gradient(self):
return self._gradient_helper.adjoint(self._s.value)
if self._cov is None:
return self._inp.gradient.adjoint(self.residual)
return self._inp.gradient.adjoint(self._cov.inverse(self.residual))
@property
@memo
def curvature(self):
c = SandwichOperator.make(self._gradient_helper)
if self._inverter is None:
return c
return InversionEnabler(c, self._inverter)
if self._cov is None:
return SandwichOperator.make(self._inp.gradient, None)
return SandwichOperator.make(self._inp.gradient, self._cov.inverse)
@@ -23,7 +23,7 @@ from ..operators.sandwich_operator import SandwichOperator
from ..sugar import log, makeOp
class PoissonLogLikelihood(Energy):
class PoissonianEnergy(Energy):
def __init__(self, lamb, d):
"""
lamb: Sky model object
@@ -31,7 +31,7 @@ class PoissonLogLikelihood(Energy):
value = 0.5 * s.vdot(s), i.e. a log-Gauss distribution with unit
covariance
"""
super(PoissonLogLikelihood, self).__init__(lamb.position)
super(PoissonianEnergy, self).__init__(lamb.position)
self._lamb = lamb
self._d = d
@@ -16,6 +16,7 @@
# NIFTy is being developed at the Max-Planck-Institut fuer Astrophysik
# and financially supported by the Studienstiftung des deutschen Volkes.
import numpy as np
from ..field import Field
from ..multi import MultiField
from ..utilities import NiftyMetaBase, memo
@@ -128,30 +129,27 @@ class Energy(NiftyMetaBase()):
"""
return None
def __mul__(self, factor):
from .energy_sum import EnergySum
if isinstance(factor, (float, int)):
return EnergySum.make([self], [factor])
return NotImplemented
def __rmul__(self, factor):
return self.__mul__(factor)
def __add__(self, other):
if not isinstance(other, Energy):
raise TypeError
return Add(self, other)
from .energy_sum import EnergySum
if isinstance(other, Energy):
return EnergySum.make([self, other])
return NotImplemented
def __sub__(self, other):
if not isinstance(other, Energy):
raise TypeError
return Add(self, (-1) * other)
def Add(energy1, energy2):
if (isinstance(energy1.position, MultiField) and
isinstance(energy2.position, MultiField)):
a = energy1.position._val
b = energy2.position._val
# Note: In python >3.5 one could do {**a, **b}
ab = a.copy()
ab.update(b)
position = MultiField(ab)
elif (isinstance(energy1.position, Field) and
isinstance(energy2.position, Field)):
position = energy1.position
else:
raise TypeError
from .energy_sum import EnergySum
return EnergySum(position, [energy1, energy2])
from .energy_sum import EnergySum
if isinstance(other, Energy):
return EnergySum.make([self, other], [1., -1.])
return NotImplemented
def __neg__(self):
from .energy_sum import EnergySum
return EnergySum.make([self], [-1.])
@@ -16,49 +16,65 @@
# NIFTy is being developed at the Max-Planck-Institut fuer Astrophysik
# and financially supported by the Studienstiftung des deutschen Volkes.
from .energy import Energy
from ..utilities import memo