Commit 5244664a authored by Martin Reinecke's avatar Martin Reinecke

Merge branch 'NIFTy_5' into operator_documentation

parents 8e71d181 8e516415
%% Cell type:markdown id: tags:
# A NIFTy demonstration
%% Cell type:markdown id: tags:
## IFT: Big Picture
IFT starting point:
$$d = Rs+n$$
Typically, $s$ is a continuous field and $d$ a discrete data vector. In particular, $R$ is usually not invertible.
IFT aims at **inverting** the above non-invertible problem in the **best possible way** using Bayesian statistics.
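%% Cell type:markdown id: tags:
As a toy illustration of why $d = Rs + n$ cannot simply be inverted (pure NumPy, not NIFTy): a response that averages neighbouring pixel pairs maps more signal values to fewer data values, so it has no inverse.
%% Cell type:code id: tags:
``` python
import numpy as np

# Toy version of d = R s + n: R averages neighbouring pixel pairs,
# mapping 8 signal values to 4 data values, so it cannot be inverted.
rng = np.random.default_rng(0)
s = rng.normal(size=8)                    # discretised "continuous" signal
R = np.kron(np.eye(4), [0.5, 0.5])        # 4x8 averaging response
n = 0.1 * rng.normal(size=4)              # noise
d = R @ s + n
print(R.shape, np.linalg.matrix_rank(R))  # rank 4 < 8: information is lost
```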
## NIFTy
NIFTy (Numerical Information Field Theory) is a Python framework in which IFT problems can be tackled easily.
Main Interfaces:
- **Spaces**: Cartesian, 2-Spheres (Healpix, Gauss-Legendre) and their respective harmonic spaces.
- **Fields**: Defined on spaces.
- **Operators**: Acting on fields.
%% Cell type:markdown id: tags:
## Wiener Filter: Formulae
### Assumptions
- $d=Rs+n$, $R$ linear operator.
- $\mathcal P (s) = \mathcal G (s,S)$, $\mathcal P (n) = \mathcal G (n,N)$ where $S, N$ are positive definite matrices.
### Posterior
The posterior is given by:
$$\mathcal P (s|d) \propto \mathcal P(s,d) = \mathcal G(d-Rs,N) \,\mathcal G(s,S) \propto \mathcal G (s-m,D) $$
where
$$\begin{align}
m &= Dj \\
D^{-1}&= (S^{-1} +R^\dagger N^{-1} R )\\
j &= R^\dagger N^{-1} d
\end{align}$$
Let us implement this in NIFTy!
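%% Cell type:markdown id: tags:
Before doing so, a quick finite-dimensional sanity check of the formulae above (plain NumPy, not NIFTy): the information-space form $m = Dj$ must agree with the equivalent data-space form $m = SR^\dagger(RSR^\dagger + N)^{-1}d$.
%% Cell type:code id: tags:
``` python
import numpy as np

# Small dense check of m = D j with D and j as defined above.
rng = np.random.default_rng(42)
npix, ndata = 6, 4
R = rng.normal(size=(ndata, npix))
S = np.diag(rng.uniform(0.5, 2.0, npix))   # signal covariance
N = 0.2 * np.eye(ndata)                    # noise covariance
d = rng.normal(size=ndata)

j = R.T @ np.linalg.inv(N) @ d
D = np.linalg.inv(np.linalg.inv(S) + R.T @ np.linalg.inv(N) @ R)
m = D @ j

# Equivalent data-space form (matrix inversion lemma):
m_alt = S @ R.T @ np.linalg.inv(R @ S @ R.T + N) @ d
print(np.allclose(m, m_alt))   # True
```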
%% Cell type:markdown id: tags:
## Wiener Filter: Example
- We assume statistical homogeneity and isotropy. Therefore the signal covariance $S$ is diagonal in harmonic space, and is described by a one-dimensional power spectrum, assumed here as $$P(k) = P_0\,\left(1+\left(\frac{k}{k_0}\right)^2\right)^{-\gamma /2},$$
with $P_0 = 0.2, k_0 = 5, \gamma = 4$.
- $N = 0.2 \cdot \mathbb{1}$.
- Number of data points $N_{pix} = 512$.
- Reconstruction in harmonic space.
- Response operator:
$$R = FFT_{\text{harmonic} \rightarrow \text{position}}$$
%% Cell type:code id: tags:
``` python
N_pixels = 512      # Number of pixels

def pow_spec(k):
    P0, k0, gamma = [.2, 5, 4]
    return P0 / ((1. + (k/k0)**2)**(gamma / 2))
```
%% Cell type:markdown id: tags:
## Wiener Filter: Implementation
%% Cell type:markdown id: tags:
### Import Modules
%% Cell type:code id: tags:
``` python
import numpy as np
np.random.seed(40)
import nifty5 as ift
import matplotlib.pyplot as plt
%matplotlib inline
```
%% Cell type:markdown id: tags:
### Implement Propagator
%% Cell type:code id: tags:
``` python
def Curvature(R, N, Sh):
    IC = ift.GradientNormController(iteration_limit=50000,
                                    tol_abs_gradnorm=0.1)
    # WienerFilterCurvature is (R.adjoint*N.inverse*R + Sh.inverse) plus some
    # handy helper methods.
    return ift.WienerFilterCurvature(R, N, Sh, iteration_controller=IC,
                                     iteration_controller_sampling=IC)
```
%% Cell type:markdown id: tags:
### Conjugate Gradient Preconditioning
- $D$ is defined via:
$$D^{-1} = \mathcal S_h^{-1} + R^\dagger N^{-1} R.$$
In the end, we want to apply $D$ to $j$, i.e. we need the inverse action of $D^{-1}$. This is done numerically (algorithm: *Conjugate Gradient*).
<!--
- One can define the *condition number* of a non-singular and normal matrix $A$:
$$\kappa (A) := \frac{|\lambda_{\text{max}}|}{|\lambda_{\text{min}}|},$$
where $\lambda_{\text{max}}$ and $\lambda_{\text{min}}$ are the largest and smallest eigenvalue of $A$, respectively.
- The larger $\kappa$, the slower Conjugate Gradient converges.
- By default, conjugate gradient solves: $D^{-1} m = j$ for $m$, where $D^{-1}$ can be badly conditioned. If one knows a non-singular matrix $T$ for which $TD^{-1}$ is better conditioned, one can solve the equivalent problem:
$$\tilde A m = \tilde j,$$
where $\tilde A = T D^{-1}$ and $\tilde j = Tj$.
- In our case $S^{-1}$ is responsible for the bad conditioning of $D$ depending on the chosen power spectrum. Thus, we choose
$$T = \mathcal F^\dagger S_h^{-1} \mathcal F.$$
-->
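%% Cell type:markdown id: tags:
To make the inverse action of $D^{-1}$ concrete, here is a minimal plain Conjugate Gradient solver in NumPy (illustration only; NIFTy uses its own controlled implementation) applied to a symmetric positive definite stand-in for $D^{-1}$.
%% Cell type:code id: tags:
``` python
import numpy as np

# Minimal Conjugate Gradient for A x = b with symmetric positive definite A.
def conjugate_gradient(A, b, x0=None, tol=1e-10, maxiter=1000):
    x = np.zeros_like(b) if x0 is None else x0.copy()
    r = b - A @ x             # residual
    p = r.copy()              # search direction
    rs = r @ r
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

rng = np.random.default_rng(1)
M = rng.normal(size=(8, 8))
A = M @ M.T + 8 * np.eye(8)   # SPD test matrix (stand-in for D^{-1})
b = rng.normal(size=8)
x = conjugate_gradient(A, b)
print(np.allclose(A @ x, b))
```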
%% Cell type:markdown id: tags:
### Generate Mock Data
- Draw a signal $s$ and noise $n$ from their given covariances.
- Compute the data $d$.
%% Cell type:code id: tags:
``` python
s_space = ift.RGSpace(N_pixels)
h_space = s_space.get_default_codomain()
HT = ift.HarmonicTransformOperator(h_space, target=s_space)
# Operators
Sh = ift.create_power_operator(h_space, power_spectrum=pow_spec)
R = HT #*ift.create_harmonic_smoothing_operator((h_space,), 0, 0.02)
# Fields and data
sh = Sh.draw_sample()
noiseless_data = R(sh)
noise_amplitude = np.sqrt(0.2)
N = ift.ScalingOperator(noise_amplitude**2, s_space)
n = ift.Field.from_random(domain=s_space, random_type='normal',
                          std=noise_amplitude, mean=0)
d = noiseless_data + n
j = R.adjoint_times(N.inverse_times(d))
curv = Curvature(R=R, N=N, Sh=Sh)
D = curv.inverse
```
%% Cell type:markdown id: tags:
### Run Wiener Filter
%% Cell type:code id: tags:
``` python
m = D(j)
```
%% Cell type:markdown id: tags:
### Signal Reconstruction
%% Cell type:code id: tags:
``` python
# Get signal data and reconstruction data
s_data = HT(sh).to_global_data()
m_data = HT(m).to_global_data()
d_data = d.to_global_data()
plt.figure(figsize=(15,10))
plt.plot(s_data, 'r', label="Signal", linewidth=3)
plt.plot(d_data, 'k.', label="Data")
plt.plot(m_data, 'k', label="Reconstruction",linewidth=3)
plt.title("Reconstruction")
plt.legend()
plt.show()
```
%% Cell type:code id: tags:
``` python
plt.figure(figsize=(15,10))
plt.plot(s_data - s_data, 'r', label="Signal", linewidth=3)
plt.plot(d_data - s_data, 'k.', label="Data")
plt.plot(m_data - s_data, 'k', label="Reconstruction",linewidth=3)
plt.axhspan(-noise_amplitude,noise_amplitude, facecolor='0.9', alpha=.5)
plt.title("Residuals")
plt.legend()
plt.show()
```
%% Cell type:markdown id: tags:
### Power Spectrum
%% Cell type:code id: tags:
``` python
s_power_data = ift.power_analyze(sh).to_global_data()
m_power_data = ift.power_analyze(m).to_global_data()
plt.figure(figsize=(15,10))
plt.loglog()
plt.xlim(1, int(N_pixels/2))
ymin = min(m_power_data)
plt.ylim(ymin, 1)
xs = np.arange(1,int(N_pixels/2),.1)
plt.plot(xs, pow_spec(xs), label="True Power Spectrum", color='k',alpha=0.5)
plt.plot(s_power_data, 'r', label="Signal")
plt.plot(m_power_data, 'k', label="Reconstruction")
plt.axhline(noise_amplitude**2 / N_pixels, color="k", linestyle='--', label="Noise level", alpha=.5)
plt.axhspan(noise_amplitude**2 / N_pixels, ymin, facecolor='0.9', alpha=.5)
plt.title("Power Spectrum")
plt.legend()
plt.show()
```
%% Cell type:markdown id: tags:
## Wiener Filter on Incomplete Data
%% Cell type:code id: tags:
``` python
# Operators
Sh = ift.create_power_operator(h_space, power_spectrum=pow_spec)
N = ift.ScalingOperator(noise_amplitude**2,s_space)
# R is defined below
# Fields
sh = Sh.draw_sample()
s = HT(sh)
n = ift.Field.from_random(domain=s_space, random_type='normal',
                          std=noise_amplitude, mean=0)
```
%% Cell type:markdown id: tags:
### Partially Lose Data
%% Cell type:code id: tags:
``` python
l = int(N_pixels * 0.2)
h = int(N_pixels * 0.2 * 2)
mask = np.full(s_space.shape, 1.)
mask[l:h] = 0
mask = ift.Field.from_global_data(s_space, mask)
R = ift.DiagonalOperator(mask)*HT
n = n.to_global_data()
n[l:h] = 0
n = ift.Field.from_global_data(s_space, n)
d = R(sh) + n
```
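%% Cell type:markdown id: tags:
The masking step above amounts to a diagonal operator with zeros on the lost pixels. A small NumPy sketch (not NIFTy) of its two key properties: it zeroes the lost region, and as a real diagonal operator it is self-adjoint.
%% Cell type:code id: tags:
``` python
import numpy as np

# Toy analogue of the masked response: diagonal operator zeroing lost pixels.
npix = 16
l, h = 3, 6
mask = np.ones(npix)
mask[l:h] = 0.0
M = np.diag(mask)

x = np.arange(npix, dtype=float)
masked = M @ x
print(np.all(masked[l:h] == 0))   # lost region is zeroed
print(np.allclose(M, M.T))        # self-adjoint diagonal operator
```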
%% Cell type:code id: tags:
``` python
curv = Curvature(R=R, N=N, Sh=Sh)
D = curv.inverse
j = R.adjoint_times(N.inverse_times(d))
m = D(j)
```
%% Cell type:markdown id: tags:
### Compute Uncertainty
%% Cell type:code id: tags:
``` python
m_mean, m_var = ift.probe_with_posterior_samples(curv, HT, 200)
```
%% Cell type:markdown id: tags:
### Get data
%% Cell type:code id: tags:
``` python
# Get signal data and reconstruction data
s_data = s.to_global_data()
m_data = HT(m).to_global_data()
m_var_data = m_var.to_global_data()
uncertainty = np.sqrt(m_var_data)
d_data = d.to_global_data()
# Set lost data to NaN for proper plotting
d_data[d_data == 0] = np.nan
```
%% Cell type:code id: tags:
``` python
fig = plt.figure(figsize=(15,10))
plt.axvspan(l, h, facecolor='0.8',alpha=0.5)
plt.fill_between(range(N_pixels), m_data - uncertainty, m_data + uncertainty, facecolor='0.5', alpha=0.5)
plt.plot(s_data, 'r', label="Signal", alpha=1, linewidth=3)
plt.plot(d_data, 'k.', label="Data")
plt.plot(m_data, 'k', label="Reconstruction", linewidth=3)
plt.title("Reconstruction of incomplete data")
plt.legend()
```
%% Cell type:markdown id: tags:
# 2d Example
%% Cell type:code id: tags:
``` python
N_pixels = 256 # Number of pixels
sigma2 = 2. # Noise variance
def pow_spec(k):
    P0, k0, gamma = [.2, 2, 4]
    return P0 * (1. + (k/k0)**2)**(-gamma/2)
s_space = ift.RGSpace([N_pixels, N_pixels])
```
%% Cell type:code id: tags:
``` python
h_space = s_space.get_default_codomain()
HT = ift.HarmonicTransformOperator(h_space,s_space)
# Operators
Sh = ift.create_power_operator(h_space, power_spectrum=pow_spec)
N = ift.ScalingOperator(sigma2,s_space)
# Fields and data
sh = Sh.draw_sample()
n = ift.Field.from_random(domain=s_space, random_type='normal',
                          std=np.sqrt(sigma2), mean=0)
# Lose some data
l = int(N_pixels * 0.33)
h = int(N_pixels * 0.33 * 2)
mask = np.full(s_space.shape, 1.)
mask[l:h,l:h] = 0.
mask = ift.Field.from_global_data(s_space, mask)
R = ift.DiagonalOperator(mask)*HT
n = n.to_global_data()
n[l:h, l:h] = 0
n = ift.Field.from_global_data(s_space, n)
curv = Curvature(R=R, N=N, Sh=Sh)
D = curv.inverse
d = R(sh) + n
j = R.adjoint_times(N.inverse_times(d))
# Run Wiener filter
m = D(j)
# Uncertainty
m_mean, m_var = ift.probe_with_posterior_samples(curv, HT, 20)
# Get data
s_data = HT(sh).to_global_data()
m_data = HT(m).to_global_data()
m_var_data = m_var.to_global_data()
d_data = d.to_global_data()
uncertainty = np.sqrt(np.abs(m_var_data))
```
%% Cell type:code id: tags:
``` python
cm = ['magma', 'inferno', 'plasma', 'viridis'][1]
mi = np.min(s_data)
ma = np.max(s_data)
fig, axes = plt.subplots(1, 2, figsize=(15, 7))
data = [s_data, d_data]
caption = ["Signal", "Data"]
for ax in axes.flat:
    im = ax.imshow(data.pop(0), interpolation='nearest', cmap=cm, vmin=mi,
                   vmax=ma)
    ax.set_title(caption.pop(0))
fig.subplots_adjust(right=0.8)
cbar_ax = fig.add_axes([0.85, 0.15, 0.05, 0.7])
fig.colorbar(im, cax=cbar_ax)
```
%% Cell type:code id: tags:
``` python
mi = np.min(s_data)
ma = np.max(s_data)
fig, axes = plt.subplots(3, 2, figsize=(15, 22.5))
sample = HT(curv.draw_sample(from_inverse=True)+m).to_global_data()
post_mean = (m_mean + HT(m)).to_global_data()
data = [s_data, m_data, post_mean, sample, s_data - m_data, uncertainty]
caption = ["Signal", "Reconstruction", "Posterior mean", "Sample", "Residuals", "Uncertainty Map"]
for ax in axes.flat:
    im = ax.imshow(data.pop(0), interpolation='nearest', cmap=cm, vmin=mi, vmax=ma)
    ax.set_title(caption.pop(0))
fig.subplots_adjust(right=0.8)
cbar_ax = fig.add_axes([.85, 0.15, 0.05, 0.7])
fig.colorbar(im, cax=cbar_ax)
```
%% Cell type:markdown id: tags:
### Is the uncertainty map reliable?
%% Cell type:code id: tags:
``` python
precise = (np.abs(s_data-m_data) < uncertainty)
print("Error within uncertainty map bounds: " + str(np.sum(precise) * 100 / N_pixels**2) + "%")
plt.figure(figsize=(15,10))
plt.imshow(precise.astype(float), cmap="brg")
plt.colorbar()
```
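%% Cell type:markdown id: tags:
For context on what "reliable" means here: if the posterior were exactly Gaussian and the uncertainty map exact, about 68% of the true pixel values should fall within one posterior standard deviation. A quick NumPy sketch (not NIFTy) of that baseline:
%% Cell type:code id: tags:
``` python
import numpy as np

# Fraction of Gaussian errors falling within one standard deviation
# (the baseline the check above is implicitly compared against).
rng = np.random.default_rng(0)
sigma = 1.5
errors = rng.normal(scale=sigma, size=100000)
coverage = np.mean(np.abs(errors) < sigma)
print(coverage)   # close to 0.6827
```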
%% Cell type:markdown id: tags:
# Start Coding
## NIFTy Repository + Installation guide
https://gitlab.mpcdf.mpg.de/ift/NIFTy
NIFTy v5 is **more or less stable**!
......
......@@ -2,7 +2,9 @@ import nifty5 as ift
import numpy as np
from global_newton.models_other.apply_data import ApplyData
from global_newton.models_energy.hamiltonian import Hamiltonian
from nifty5.library.unit_log_gauss import UnitLogGauss
from nifty5 import GaussianEnergy
if __name__ == '__main__':
# s_space = ift.RGSpace([1024])
s_space = ift.RGSpace([128,128])
......@@ -45,7 +47,7 @@ if __name__ == '__main__':
NWR = ApplyData(data, ift.Field(d_space,val=noise), Rs)
INITIAL_POSITION = ift.from_random('normal',total_domain)
likelihood = UnitLogGauss(INITIAL_POSITION, NWR)
likelihood = GaussianEnergy(INITIAL_POSITION, NWR)
IC = ift.GradientNormController(iteration_limit=500, tol_abs_gradnorm=1e-3)
inverter = ift.ConjugateGradient(controller=IC)
......
......@@ -13,8 +13,10 @@ recognized from a large distance, ignoring all technical details.
From such a perspective,
- IFT problems largely consist of *minimization* problems involving a large
number of equations.
- IFT problems largely consist of the combination of several high dimensional
*minimization* problems.
- Within NIFTy, *models* are used to define the characteristic equations and
properties of the problems.
- The equations are built mostly from the application of *linear operators*,
but there may also be nonlinear functions involved.
- The unknowns in the equations represent either continuous physical *fields*,
......@@ -232,6 +234,62 @@ The properties :attr:`~LinearOperator.adjoint` and
were the original operator's adjoint or inverse, respectively.
Models
======
Model classes (represented by NIFTy5's abstract :class:`Model` class) are used to construct
the equations of a specific inference problem.
Most models are defined via a position (a :class:`MultiField` object), their value at this position (again a :class:`MultiField` object), and a Jacobian derivative (a :class:`LinearOperator`), which is needed for the minimization procedure.
Using the existing basic model classes one can construct more complicated models, as
NIFTy allows for easy and self-consistent combination via point-wise multiplication,
addition and subtraction. The model resulting from these operations then automatically
contains the correct Jacobians, positions and values.
Notably, :class:`Constant` and :class:`Variable` allow for an easy way to turn
inference of specific quantities on and off.
The basic model classes also allow for more complex operations on models such as
the application of :class:`LinearOperators` or local non-linearities.
As an example one may consider the following combination of ``x``, which is a model of type
:class:`Variable` and ``y``, which is a model of type :class:`Constant`::
z = x*x + y
``z`` will then be a model with the following properties::
z.value = x.value*x.value + y.value
z.position = Union(x.position, y.position)
z.jacobian = 2*makeOp(x.value)
Basic models
------------
Basic model classes provided by NIFTy are
- :class:`Constant` contains a constant value and has a zero valued Jacobian.
Like other models, it has a position, but its value does not depend on it.
- :class:`Variable` returns the position as its value, its derivative is one.
- :class:`LinearModel` applies a :class:`LinearOperator` on the model.
- :class:`LocalModel` applies a non-linearity locally on the model.
- :class:`MultiModel` combines various models into one. In this case the position,
value and Jacobian are combined into corresponding :class:`MultiFields` and operators.
Advanced models
---------------
NIFTy also provides a library of more sophisticated models which are used for more
specific inference problems. Currently these are:
- :class:`AmplitudeModel`, which returns a smooth power spectrum.
- :class:`PointModel`, which models point sources that follow an inverse gamma distribution.
- :class:`SmoothSkyModel`, which models a diffuse lognormal field. It takes an amplitude model
to specify the correlation structure of the field.
.. _minimization:
......
......@@ -16,7 +16,7 @@ from .minimization import *
from .sugar import *
from .plotting.plot import plot
from . import library
from .library import *
from . import extra
from .utilities import memo
......
......@@ -26,7 +26,7 @@ except ImportError:
from .data_objects.numpy_do import *
__all__ = ["ntask", "rank", "master", "local_shape", "data_object", "full",
"empty", "zeros", "ones", "empty_like", "vdot", "abs", "exp",
"empty", "zeros", "ones", "empty_like", "vdot", "exp",
"log", "tanh", "sqrt", "from_object", "from_random",
"local_data", "ibegin", "ibegin_from_shape", "np_allreduce_sum",
"distaxis", "from_local_data", "from_global_data", "to_global_data",
......
......@@ -34,7 +34,7 @@ class GLSpace(StructuredDomain):
pixelization.
nlon : int, optional
Number of longitudinal bins that are used for this pixelization.
Default value is 2*nlat + 1.
Default value is 2*nlat - 1.
"""
_needed_for_hash = ["_nlat", "_nlon"]
......
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
# Copyright(C) 2013-2018 Max-Planck-Society
#
# NIFTy is being developed at the Max-Planck-Institut fuer Astrophysik
# and financially supported by the Studienstiftung des deutschen Volkes.
from ..library.gaussian_energy import GaussianEnergy
from ..minimization.energy import Energy
from ..operators import InversionEnabler, SamplingEnabler
from ..models.variable import Variable
from ..operators import InversionEnabler, SamplingEnabler
from ..utilities import memo
from ..library.unit_log_gauss import UnitLogGauss
class Hamiltonian(Energy):
......@@ -15,11 +33,8 @@ class Hamiltonian(Energy):
super(Hamiltonian, self).__init__(lh.position)
self._lh = lh
self._ic = iteration_controller
if iteration_controller_sampling is None:
self._ic_samp = iteration_controller
else:
self._ic_samp = iteration_controller_sampling
self._prior = UnitLogGauss(Variable(self.position))
self._ic_samp = iteration_controller_sampling
self._prior = GaussianEnergy(Variable(self.position))
self._precond = self._prior.curvature
def at(self, position):
......@@ -39,8 +54,11 @@ class Hamiltonian(Energy):
@memo
def curvature(self):
prior_curv = self._prior.curvature
c = SamplingEnabler(self._lh.curvature, prior_curv.inverse,
self._ic_samp, prior_curv.inverse)
if self._ic_samp is None:
c = self._lh.curvature + prior_curv
else:
c = SamplingEnabler(self._lh.curvature, prior_curv.inverse,
self._ic_samp, prior_curv.inverse)
return InversionEnabler(c, self._ic, self._precond)
def __str__(self):
......
......@@ -24,8 +24,6 @@ from .domain_tuple import DomainTuple
from functools import reduce
from . import dobj
__all__ = ["Field", "sqrt", "exp", "log", "conjugate"]
class Field(object):
""" The discrete representation of a continuous field over multiple spaces.
......
from .amplitude_model import make_amplitude_model
from .apply_data import ApplyData
from .gaussian_energy import GaussianEnergy
from .los_response import LOSResponse
from .nonlinear_wiener_filter_energy import NonlinearWienerFilterEnergy
from .unit_log_gauss import UnitLogGauss
from .point_sources import PointSources
from .poisson_log_likelihood import PoissonLogLikelihood
from .poissonian_energy import PoissonianEnergy
from .smooth_sky import make_smooth_mf_sky_model, make_smooth_sky_model
from .wiener_filter_curvature import WienerFilterCurvature
from .wiener_filter_energy import WienerFilterEnergy
def ApplyData(data, var, model_data):
from .. import DiagonalOperator, Constant, sqrt
# TODO This is rather confusing. Delete that eventually.
from ..operators.diagonal_operator import DiagonalOperator
from ..models.constant import Constant
from ..sugar import sqrt
sqrt_n = DiagonalOperator(sqrt(var))
data = Constant(model_data.position, data)
return sqrt_n.inverse(model_data - data)
......@@ -17,45 +17,50 @@
# and financially supported by the Studienstiftung des deutschen Volkes.
from ..minimization.energy import Energy
from ..operators.inversion_enabler import InversionEnabler
from ..operators.sandwich_operator import SandwichOperator
from ..utilities import memo
class UnitLogGauss(Energy):
def __init__(self, s, inverter=None):
class GaussianEnergy(Energy):
def __init__(self, inp, mean=None, covariance=None):
"""
s: Sky model object
inp: Model object
value = 0.5 * s.vdot(s), i.e. a log-Gauss distribution with unit
covariance
"""
super(UnitLogGauss, self).__init__(s.position)
self._s = s
self._inverter = inverter
super(GaussianEnergy, self).__init__(inp.position)
self._inp = inp
self._mean = mean
self._cov = covariance
def at(self, position):
return self.__class__(self._s.at(position), self._inverter)
return self.__class__(self._inp.at(position), self._mean, self._cov)
@property
@memo
def _gradient_helper(self):
return self._s.gradient
def residual(self):
if self._mean is not None:
return self._inp.value - self._mean
return self._inp.value
@property
@memo
def value(self):
return .5 * self._s.value.squared_norm()
if self._cov is None:
return .5 * self.residual.vdot(self.residual).real
return .5 * self.residual.vdot(self._cov.inverse(self.residual)).real
@property
@memo
def gradient(self):
return self._gradient_helper.adjoint(self._s.value)
if self._cov is None:
return self._inp.gradient.adjoint(self.residual)
return self._inp.gradient.adjoint(self._cov.inverse(self.residual))
@property
@memo
def curvature(self):
c = SandwichOperator.make(self._gradient_helper)
if self._inverter is None:
return c
return InversionEnabler(c, self._inverter)
if self._cov is None:
return SandwichOperator.make(self._inp.gradient, None)
return SandwichOperator.make(self._inp.gradient, self._cov.inverse)
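The value and gradient formulas implemented above can be checked numerically. A dense NumPy sketch (not NIFTy code) for the identity model, whose Jacobian is one: value $= \tfrac12 r^\dagger C^{-1} r$ with $r = x - \text{mean}$, so the gradient is $C^{-1} r$, verified here against central finite differences.

``` python
import numpy as np

# Finite-difference check of the GaussianEnergy formulas for the identity model.
rng = np.random.default_rng(3)
n = 5
A = rng.normal(size=(n, n))
C = A @ A.T + n * np.eye(n)            # positive definite covariance
Cinv = np.linalg.inv(C)
mean = rng.normal(size=n)
x = rng.normal(size=n)

def value(x):
    r = x - mean
    return 0.5 * r @ Cinv @ r

grad = Cinv @ (x - mean)               # analytic gradient
eps = 1e-6
fd = np.array([(value(x + eps*e) - value(x - eps*e)) / (2*eps)
               for e in np.eye(n)])    # central differences
print(np.allclose(grad, fd, atol=1e-5))
```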
......@@ -23,7 +23,7 @@ from ..operators.sandwich_operator import SandwichOperator
from ..sugar import log, makeOp
class PoissonLogLikelihood(Energy):
class PoissonianEnergy(Energy):
def __init__(self, lamb, d):
"""
lamb: Sky model object
......@@ -31,7 +31,7 @@ class PoissonLogLikelihood(Energy):
value = 0.5 * s.vdot(s), i.e. a log-Gauss distribution with unit
covariance
"""
super(PoissonLogLikelihood, self).__init__(lamb.position)
super(PoissonianEnergy, self).__init__(lamb.position)
self._lamb = lamb
self._d = d
......
......@@ -164,10 +164,25 @@ class MultiField(object):
def __neg__(self):
return MultiField({key: -val for key, val in self.items()})
def __abs__(self):
return MultiField({key: abs(val) for key, val in self.items()})
def conjugate(self):
return MultiField({key: sub_field.conjugate()
for key, sub_field in self.items()})
def all(self):
for v in self.values():
if not v.all():
return False
return True
def any(self):
for v in self.values():
if v.any():
return True
return False
def isEquivalentTo(self, other):
"""Determines (as quickly as possible) whether `self`'s content is
identical to `other`'s content."""
......
......@@ -10,6 +10,7 @@ from .harmonic_transform_operator import HarmonicTransformOperator
from .inversion_enabler import InversionEnabler
from .laplace_operator import LaplaceOperator
from .linear_operator import LinearOperator
from .mask_operator import MaskOperator
from .multi_adaptor import MultiAdaptor
from .power_distributor import PowerDistributor
from .qht_operator import QHTOperator
......@@ -23,7 +24,7 @@ from .symmetrizing_operator import SymmetrizingOperator
__all__ = ["LinearOperator", "EndomorphicOperator", "ScalingOperator",
"DiagonalOperator", "HarmonicTransformOperator", "FFTOperator",
"FFTSmoothingOperator", "GeometryRemover",
"FFTSmoothingOperator", "GeometryRemover", "MaskOperator",
"LaplaceOperator", "SmoothnessOperator", "PowerDistributor",
"InversionEnabler", "SandwichOperator", "SamplingEnabler",
"DOFDistributor", "SelectionOperator", "MultiAdaptor",
......
......@@ -16,13 +16,48 @@
# NIFTy is being developed at the Max-Planck-Institut fuer Astrophysik
# and financially supported by the Studienstiftung des deutschen Volkes.
from ..models.constant import Constant
from .unit_log_gauss import UnitLogGauss
from ..energies.hamiltonian import Hamiltonian
import numpy as np
from ..domain_tuple import DomainTuple
from ..domains.unstructured_domain import UnstructuredDomain
from ..field import Field
from .linear_operator import LinearOperator
def NonlinearWienerFilterEnergy(measured_data, data_model, sqrtN, iteration_controller):
d = measured_data.lock()
residual = Constant(data_model.position, d) - data_model
lh = UnitLogGauss(sqrtN.inverse(residual))
return Hamiltonian(lh, iteration_controller)
class MaskOperator(LinearOperator):
def __init__(self, mask):
if not isinstance(mask, Field):
raise TypeError
self._domain = DomainTuple.make(mask.domain)
self._mask = np.logical_not(mask.to_global_data())
self._target = DomainTuple.make(UnstructuredDomain(self._mask.sum()))
def data_indices(self):
if len(self.domain.shape) == 1:
return np.arange(self.domain.shape[0])[self._mask]
if len(self.domain.shape) == 2:
return np.indices(self.domain.shape).transpose((1, 2, 0))[self._mask]
def apply(self, x, mode):
self._check_input(x, mode)
if mode == self.TIMES:
res = x.to_global_data()[self._mask]
return Field.from_global_data(self.target, res)
x = x.to_global_data()
res = np.empty(self.domain.shape, x.dtype)
res[self._mask] = x
res[~self._mask] = 0
return Field.from_global_data(self.domain, res)
@property
def capability(self):
return self.TIMES | self.ADJOINT_TIMES
@property
def domain(self):
return self._domain
@property
def target(self):
return self._target
......@@ -24,16 +24,17 @@ from .endomorphic_operator import EndomorphicOperator
class SamplingEnabler(EndomorphicOperator):
"""Class which augments the capability of another operator object via
numerical inversion.
"""Class which acts as the operator object (likelihood + prior)
and enables sampling from its inverse even if the operator object
itself does not support it.
Parameters
----------
op : :class:`EndomorphicOperator`
The operator to be enhanced.
The InversionEnabler object will support the same operation modes as
`op`, and additionally the inverse set. The newly-added modes will
be computed by iterative inversion.
likelihood : :class:`EndomorphicOperator`
Curvature of the likelihood
prior : :class:`EndomorphicOperator`
Inverse curvature of the prior
iteration_controller : :class:`IterationController`
The iteration controller to use for the iterative numerical inversion
done by a :class:`ConjugateGradient` object.
......
......@@ -57,7 +57,7 @@ class Energy_Tests(unittest.TestCase):
tol_abs_gradnorm=1e-5)
S = ift.create_power_operator(hspace, power_spectrum=_flat_PS)
energy = ift.library.WienerFilterEnergy(
energy = ift.WienerFilterEnergy(
position=s0, d=d, R=R, N=N, S=S, iteration_controller=IC)
ift.extra.check_value_gradient_curvature_consistency(
energy, ntries=10)
......@@ -66,10 +66,10 @@ class Energy_Tests(unittest.TestCase):
ift.RGSpace(64, distances=.789),
ift.RGSpace([32, 32], distances=.789)],
[ift.Tanh, ift.Exponential, ift.Linear],
[1, 1e-2, 1e2],
[4, 78, 23]))
def testNonlinearMap(self, space, nonlinearity, seed):
def testGaussianEnergy(self, space, nonlinearity, noise, seed):
np.random.seed(seed)
f = nonlinearity()
dim = len(space.shape)
hspace = space.get_default_codomain()
ht = ift.HarmonicTransformOperator(hspace, target=space)
......@@ -77,23 +77,23 @@ class Energy_Tests(unittest.TestCase):
pspace = ift.PowerSpace(hspace, binbounds=binbounds)
Dist = ift.PowerDistributor(target=hspace, power_space=pspace)
xi0 = ift.Field.from_random(domain=hspace, random_type='normal')
xi0_var = ift.Variable(ift.MultiField({'xi':xi0}))['xi']
xi0_var = ift.Variable(ift.MultiField({'xi': xi0}))['xi']
def pspec(k): return 1 / (1 + k**2)**dim
pspec = ift.PS_field(pspace, pspec)
A = Dist(ift.sqrt(pspec))
n = ift.Field.from_random(domain=space, random_type='normal')
N = ift.ScalingOperator(noise, space)
n = N.draw_sample()
s = ht(ift.makeOp(A)(xi0_var))
R = ift.ScalingOperator(10., space)
sqrtN = ift.ScalingOperator(1., space)
d_model = R(ift.LocalModel(s, nonlinearity()))
d = d_model.value + n
IC = ift.GradientNormController(iteration_limit=100,
tol_abs_gradnorm=1e-5)
energy = ift.library.NonlinearWienerFilterEnergy(
d, d_model, sqrtN, IC)
if isinstance