Commit 0649322d authored by Martin Reinecke

Merge branch 'op_twiddling' into 'NIFTy_4'

Replace InverseOperator and AdjointOperator with OperatorAdapter, and more

See merge request ift/NIFTy!237
parents 9740e2f3 17d86163
Pipeline #26740 passed with stages in 10 minutes and 4 seconds
......@@ -3,32 +3,41 @@ image: debian:testing-slim
stages:
- test
- release
variables:
DOCKER_DRIVER: overlay
before_script:
- apt-get update
- sh ci/install_basics.sh
- pip install --process-dependency-links -r ci/requirements.txt
- pip3 install --process-dependency-links -r ci/requirements.txt
- pip install --user .
- pip3 install --user .
test_min:
test_python2:
stage: test
script:
- nosetests3 -q
- OMP_NUM_THREADS=1 mpiexec --allow-run-as-root -n 4 nosetests -q 2>/dev/null
- OMP_NUM_THREADS=1 mpiexec --allow-run-as-root -n 4 nosetests3 -q 2>/dev/null
- apt-get update > /dev/null
- apt-get install -y git libfftw3-dev openmpi-bin libopenmpi-dev python python-pip python-dev python-nose python-numpy python-matplotlib python-future python-mpi4py python-scipy > /dev/null
- pip install --process-dependency-links parameterized coverage git+https://gitlab.mpcdf.mpg.de/ift/pyHealpix.git > /dev/null
- pip install --user .
- OMP_NUM_THREADS=1 mpiexec --allow-run-as-root -n 2 nosetests -q 2>/dev/null
- nosetests -q --with-coverage --cover-package=nifty4 --cover-branches --cover-erase
- >
coverage report | grep TOTAL | awk '{ print "TOTAL: "$6; }'
test_python3:
stage: test
script:
- apt-get update > /dev/null
- apt-get install -y git libfftw3-dev openmpi-bin libopenmpi-dev python3 python3-pip python3-dev python3-nose python3-numpy python3-matplotlib python3-future python3-mpi4py python3-scipy > /dev/null
- pip3 install --process-dependency-links parameterized git+https://gitlab.mpcdf.mpg.de/ift/pyHealpix.git > /dev/null
- pip3 install --user .
- nosetests3 -q
- OMP_NUM_THREADS=1 mpiexec --allow-run-as-root -n 2 nosetests3 -q 2>/dev/null
pages:
stage: release
script:
- sh docs/generate.sh
- mv docs/build/ public/
- apt-get update > /dev/null
- apt-get install -y git libfftw3-dev python python-pip python-dev python-numpy python-future python-sphinx python-sphinx-rtd-theme python-numpydoc > /dev/null
- pip install --user .
- sh docs/generate.sh > /dev/null
- mv docs/build/ public/
artifacts:
paths:
- public
......
apt-get install -y git libfftw3-dev openmpi-bin libopenmpi-dev \
python python-pip python-dev python-nose python-numpy python-matplotlib python-future python-mpi4py python-scipy \
python3 python3-pip python3-dev python3-nose python3-numpy python3-matplotlib python3-future python3-mpi4py python3-scipy
parameterized
coverage
git+https://gitlab.mpcdf.mpg.de/ift/pyHealpix.git
sphinx
sphinx_rtd_theme
numpydoc
%% Cell type:markdown id: tags:
# A NIFTy demonstration
%% Cell type:markdown id: tags:
## IFT: Big Picture
IFT starting point:
$$d = Rs+n$$
Typically, $s$ is a continuous field and $d$ a discrete data vector. In particular, $R$ is not invertible.
IFT aims at **inverting** this non-invertible problem in the **best possible way** using Bayesian statistics.
## NIFTy
NIFTy (Numerical Information Field Theory) is a Python framework in which IFT problems can be tackled easily.
Main interfaces (a minimal sketch follows below):
- **Spaces**: Cartesian grids, the 2-sphere (HEALPix, Gauss-Legendre) and their respective harmonic spaces.
- **Fields**: Defined on spaces.
- **Operators**: Acting on fields.
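As a minimal sketch of how these pieces interact (using only calls that also appear later in this notebook; the grid size is arbitrary):
``` python
import nifty4 as ift

s_space = ift.RGSpace(512)                                    # a Space: regular 1D grid
h_space = s_space.get_default_codomain()                      # its harmonic partner space
HT = ift.HarmonicTransformOperator(h_space, target=s_space)   # an Operator

xi = ift.Field.from_random('normal', domain=h_space)          # a Field living on h_space
s = HT(xi)                                                    # Operators act on Fields
print(s.domain)                                               # -> the position space
```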
%% Cell type:markdown id: tags:
## Wiener Filter: Formulae
### Assumptions
- $d=Rs+n$, $R$ linear operator.
- $\mathcal P (s) = \mathcal G (s,S)$, $\mathcal P (n) = \mathcal G (n,N)$ where $S, N$ are positive definite matrices.
### Posterior
The posterior is given by:
$$\mathcal P (s|d) \propto P(s,d) = \mathcal G(d-Rs,N) \,\mathcal G(s,S) \propto \mathcal G (m,D) $$
where
$$\begin{align}
m &= Dj \\
D^{-1}&= (S^{-1} +R^\dagger N^{-1} R )\\
j &= R^\dagger N^{-1} d
\end{align}$$
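As a quick sanity check of these formulae, here is a tiny dense NumPy version (illustrative only; small random matrices stand in for $R$, $S$ and $N$, and $\dagger$ reduces to a transpose for real matrices):
``` python
import numpy as np

rng = np.random.RandomState(0)
npix, ndata = 8, 6
R = rng.randn(ndata, npix)                # linear response
S = np.diag(rng.uniform(0.5, 2., npix))   # signal covariance (positive definite)
N = 0.2 * np.eye(ndata)                   # noise covariance

s = rng.multivariate_normal(np.zeros(npix), S)
n = rng.multivariate_normal(np.zeros(ndata), N)
d = R @ s + n

j = R.T @ np.linalg.solve(N, d)                                     # j = R^T N^{-1} d
D = np.linalg.inv(np.linalg.inv(S) + R.T @ np.linalg.solve(N, R))   # D = (S^{-1} + R^T N^{-1} R)^{-1}
m = D @ j                                                           # posterior mean
print(np.round(m - s, 2))                                           # reconstruction error
```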
Let us implement this in NIFTy!
%% Cell type:markdown id: tags:
## Wiener Filter: Example
- We assume statistical homogeneity and isotropy. Therefore the signal covariance $S$ is diagonal in harmonic space, and is described by a one-dimensional power spectrum, assumed here as $$P(k) = P_0\,\left(1+\left(\frac{k}{k_0}\right)^2\right)^{-\gamma /2},$$
with $P_0 = 0.2, k_0 = 5, \gamma = 4$.
- $N = 0.2 \cdot \mathbb{1}$.
- Number of data points $N_{pix} = 512$.
- The reconstruction is performed in harmonic space.
- Response operator:
$$R = FFT_{\text{harmonic} \rightarrow \text{position}}$$
%% Cell type:code id: tags:
``` python
N_pixels = 512 # Number of pixels
def pow_spec(k):
P0, k0, gamma = [.2, 5, 4]
return P0 / ((1. + (k/k0)**2)**(gamma / 2))
```
%% Cell type:markdown id: tags:
## Wiener Filter: Implementation
%% Cell type:markdown id: tags:
### Import Modules
%% Cell type:code id: tags:
``` python
import numpy as np
np.random.seed(40)
import nifty4 as ift
import matplotlib.pyplot as plt
%matplotlib inline
```
%% Cell type:markdown id: tags:
### Implement Propagator
%% Cell type:code id: tags:
``` python
def Curvature(R, N, Sh):
IC = ift.GradientNormController(iteration_limit=50000,
tol_abs_gradnorm=0.1)
inverter = ift.ConjugateGradient(controller=IC)
# WienerFilterCurvature is (R.adjoint*N.inverse*R + Sh.inverse) plus some handy
# helper methods.
return ift.library.WienerFilterCurvature(R,N,Sh,inverter)
```
%% Cell type:markdown id: tags:
### Conjugate Gradient Preconditioning
- $D$ is defined via:
$$D^{-1} = S_h^{-1} + R^\dagger N^{-1} R.$$
In the end we want to apply $D$ to $j$, i.e. we need the inverse action of $D^{-1}$. This is done numerically with the *Conjugate Gradient* algorithm (a small SciPy illustration follows below).
<!--
- One can define the *condition number* of a non-singular and normal matrix $A$:
$$\kappa (A) := \frac{|\lambda_{\text{max}}|}{|\lambda_{\text{min}}|},$$
where $\lambda_{\text{max}}$ and $\lambda_{\text{min}}$ are the largest and smallest eigenvalue of $A$, respectively.
- The larger $\kappa$ the slower Conjugate Gradient.
- By default, conjugate gradient solves: $D^{-1} m = j$ for $m$, where $D^{-1}$ can be badly conditioned. If one knows a non-singular matrix $T$ for which $TD^{-1}$ is better conditioned, one can solve the equivalent problem:
$$\tilde A m = \tilde j,$$
where $\tilde A = T D^{-1}$ and $\tilde j = Tj$.
- In our case $S^{-1}$ is responsible for the bad conditioning of $D$ depending on the chosen power spectrum. Thus, we choose
$$T = \mathcal F^\dagger S_h^{-1} \mathcal F.$$
-->
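For intuition, here is a hedged SciPy-only sketch (independent of NIFTy; the matrix size and the simple Jacobi preconditioner are illustrative) showing how a preconditioner reduces the number of CG iterations on an ill-conditioned system:
``` python
import numpy as np
from scipy.sparse.linalg import cg, LinearOperator

rng = np.random.RandomState(1)
n = 200
B = rng.randn(n, n) * 1e-2
A = np.diag(np.logspace(0, 6, n)) + B @ B.T   # SPD, badly conditioned
b = rng.randn(n)

iters = {'plain': 0, 'preconditioned': 0}
def counter(key):
    def cb(xk):
        iters[key] += 1
    return cb

x1, _ = cg(A, b, tol=1e-8, maxiter=10000, callback=counter('plain'))
M = LinearOperator((n, n), matvec=lambda v: v / np.diag(A))   # Jacobi preconditioner
x2, _ = cg(A, b, tol=1e-8, maxiter=10000, M=M, callback=counter('preconditioned'))
print(iters)   # the preconditioned run needs far fewer iterations
```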
%% Cell type:markdown id: tags:
### Generate Mock Data
- Draw the signal (directly as its harmonic counterpart $s_h$) and the noise $n$ from their respective covariances.
- Compute the data $d = Rs + n$.
%% Cell type:code id: tags:
``` python
s_space = ift.RGSpace(N_pixels)
h_space = s_space.get_default_codomain()
HT = ift.HarmonicTransformOperator(h_space, target=s_space)
# Operators
Sh = ift.create_power_operator(h_space, power_spectrum=pow_spec)
R = HT #*ift.create_harmonic_smoothing_operator((h_space,), 0, 0.02)
# Fields and data
sh = Sh.draw_sample()
noiseless_data = R(sh)
noise_amplitude = np.sqrt(0.2)
N = ift.ScalingOperator(noise_amplitude**2, s_space)
n = ift.Field.from_random(domain=s_space, random_type='normal',
std=noise_amplitude, mean=0)
d = noiseless_data + n
j = R.adjoint_times(N.inverse_times(d))
curv = Curvature(R=R, N=N, Sh=Sh)
D = curv.inverse
```
%% Cell type:markdown id: tags:
### Run Wiener Filter
%% Cell type:code id: tags:
``` python
m = D(j)  # apply the propagator to the information source; internally this runs the Conjugate Gradient
```
%% Cell type:markdown id: tags:
### Signal Reconstruction
%% Cell type:code id: tags:
``` python
# Get signal data and reconstruction data
s_data = HT(sh).to_global_data()
m_data = HT(m).to_global_data()
d_data = d.to_global_data()
plt.figure(figsize=(15,10))
plt.plot(s_data, 'r', label="Signal", linewidth=3)
plt.plot(d_data, 'k.', label="Data")
plt.plot(m_data, 'k', label="Reconstruction",linewidth=3)
plt.title("Reconstruction")
plt.legend()
plt.show()
```
%% Cell type:code id: tags:
``` python
plt.figure(figsize=(15,10))
plt.plot(s_data - s_data, 'r', label="Signal", linewidth=3)
plt.plot(d_data - s_data, 'k.', label="Data")
plt.plot(m_data - s_data, 'k', label="Reconstruction",linewidth=3)
plt.axhspan(-noise_amplitude,noise_amplitude, facecolor='0.9', alpha=.5)
plt.title("Residuals")
plt.legend()
plt.show()
```
%% Cell type:markdown id: tags:
### Power Spectrum
%% Cell type:code id: tags:
``` python
s_power_data = ift.power_analyze(sh).to_global_data()
m_power_data = ift.power_analyze(m).to_global_data()
plt.figure(figsize=(15,10))
plt.loglog()
plt.xlim(1, int(N_pixels/2))
ymin = min(m_power_data)
plt.ylim(ymin, 1)
xs = np.arange(1,int(N_pixels/2),.1)
plt.plot(xs, pow_spec(xs), label="True Power Spectrum", color='k',alpha=0.5)
plt.plot(s_power_data, 'r', label="Signal")
plt.plot(m_power_data, 'k', label="Reconstruction")
plt.axhline(noise_amplitude**2 / N_pixels, color="k", linestyle='--', label="Noise level", alpha=.5)
plt.axhspan(noise_amplitude**2 / N_pixels, ymin, facecolor='0.9', alpha=.5)
plt.title("Power Spectrum")
plt.legend()
plt.show()
```
%% Cell type:markdown id: tags:
## Wiener Filter on Incomplete Data
%% Cell type:code id: tags:
``` python
# Operators
Sh = ift.create_power_operator(h_space, power_spectrum=pow_spec)
N = ift.ScalingOperator(noise_amplitude**2,s_space)
# R is defined below
# Fields
sh = Sh.draw_sample()
s = HT(sh)
n = ift.Field.from_random(domain=s_space, random_type='normal',
std=noise_amplitude, mean=0)
```
%% Cell type:markdown id: tags:
### Partially Lose Data
%% Cell type:code id: tags:
``` python
l = int(N_pixels * 0.2)
h = int(N_pixels * 0.2 * 2)
mask = np.full(s_space.shape, 1.)
mask[l:h] = 0
mask = ift.Field.from_global_data(s_space, mask)
R = ift.DiagonalOperator(mask)*HT
n = n.to_global_data()
n[l:h] = 0
n = ift.Field.from_global_data(s_space, n)
d = R(sh) + n
```
%% Cell type:code id: tags:
``` python
curv = Curvature(R=R, N=N, Sh=Sh)
D = curv.inverse
j = R.adjoint_times(N.inverse_times(d))
m = D(j)
```
%% Cell type:markdown id: tags:
### Compute Uncertainty
Probe the posterior covariance $D$ with random samples to estimate the pointwise posterior variance of the reconstruction in position space.
%% Cell type:code id: tags:
``` python
m_mean, m_var = ift.probe_with_posterior_samples(curv, HT, 200)
```
%% Cell type:markdown id: tags:
### Get data
%% Cell type:code id: tags:
``` python
# Get signal data and reconstruction data
s_data = s.to_global_data()
m_data = HT(m).to_global_data()
m_var_data = m_var.to_global_data()
uncertainty = np.sqrt(m_var_data)
d_data = d.to_global_data()
# Set lost data to NaN for proper plotting
d_data[d_data == 0] = np.nan
```
%% Cell type:code id: tags:
``` python
fig = plt.figure(figsize=(15,10))
plt.axvspan(l, h, facecolor='0.8',alpha=0.5)
plt.fill_between(range(N_pixels), m_data - uncertainty, m_data + uncertainty, facecolor='0.5', alpha=0.5)
plt.plot(s_data, 'r', label="Signal", alpha=1, linewidth=3)
plt.plot(d_data, 'k.', label="Data")
plt.plot(m_data, 'k', label="Reconstruction", linewidth=3)
plt.title("Reconstruction of incomplete data")
plt.legend()
```
%% Cell type:markdown id: tags:
# 2D Example
%% Cell type:code id: tags:
``` python
N_pixels = 256 # Number of pixels
sigma2 = 2. # Noise variance
def pow_spec(k):
P0, k0, gamma = [.2, 2, 4]
return P0 * (1. + (k/k0)**2)**(-gamma/2)
s_space = ift.RGSpace([N_pixels, N_pixels])
```
%% Cell type:code id: tags:
``` python
h_space = s_space.get_default_codomain()
HT = ift.HarmonicTransformOperator(h_space,s_space)
# Operators
Sh = ift.create_power_operator(h_space, power_spectrum=pow_spec)
N = ift.ScalingOperator(sigma2,s_space)
# Fields and data
sh = Sh.draw_sample()
n = ift.Field.from_random(domain=s_space, random_type='normal',
std=np.sqrt(sigma2), mean=0)
# Lose some data
l = int(N_pixels * 0.33)
h = int(N_pixels * 0.33 * 2)
mask = np.full(s_space.shape, 1.)
mask[l:h,l:h] = 0.
mask = ift.Field.from_global_data(s_space, mask)
R = ift.DiagonalOperator(mask)*HT
n = n.to_global_data()
n[l:h, l:h] = 0
n = ift.Field.from_global_data(s_space, n)
curv = Curvature(R=R, N=N, Sh=Sh)
D = curv.inverse
d = R(sh) + n
j = R.adjoint_times(N.inverse_times(d))
# Run Wiener filter
m = D(j)
# Uncertainty
m_mean, m_var = ift.probe_with_posterior_samples(curv, HT, 20)
# Get data
s_data = HT(sh).to_global_data()
m_data = HT(m).to_global_data()
m_var_data = m_var.to_global_data()
d_data = d.to_global_data()
uncertainty = np.sqrt(np.abs(m_var_data))
```
%% Cell type:code id: tags:
``` python
cm = ['magma', 'inferno', 'plasma', 'viridis'][1]
mi = np.min(s_data)
ma = np.max(s_data)
fig, axes = plt.subplots(1, 2, figsize=(15, 7))
data = [s_data, d_data]
caption = ["Signal", "Data"]
for ax in axes.flat:
im = ax.imshow(data.pop(0), interpolation='nearest', cmap=cm, vmin=mi,
vmax=ma)
ax.set_title(caption.pop(0))
fig.subplots_adjust(right=0.8)
cbar_ax = fig.add_axes([0.85, 0.15, 0.05, 0.7])
fig.colorbar(im, cax=cbar_ax)
```
%% Cell type:code id: tags:
``` python
mi = np.min(s_data)
ma = np.max(s_data)
fig, axes = plt.subplots(3, 2, figsize=(15, 22.5))
sample = HT(curv.draw_sample()+m).to_global_data()
sample = HT(curv.draw_sample(from_inverse=True)+m).to_global_data()
post_mean = (m_mean + HT(m)).to_global_data()
data = [s_data, m_data, post_mean, sample, s_data - m_data, uncertainty]
caption = ["Signal", "Reconstruction", "Posterior mean", "Sample", "Residuals", "Uncertainty Map"]
for ax in axes.flat:
im = ax.imshow(data.pop(0), interpolation='nearest', cmap=cm, vmin=mi, vmax=ma)
ax.set_title(caption.pop(0))
fig.subplots_adjust(right=0.8)
cbar_ax = fig.add_axes([.85, 0.15, 0.05, 0.7])
fig.colorbar(im, cax=cbar_ax)
```
%% Cell type:markdown id: tags:
### Is the uncertainty map reliable?
For an approximately Gaussian posterior we expect roughly 68% of the pixels to deviate from the true signal by less than one posterior standard deviation.
%% Cell type:code id: tags:
``` python
precise = (np.abs(s_data - m_data) < uncertainty)
print("Error within uncertainty map bounds: {:.1f}%".format(
    np.sum(precise) * 100. / N_pixels**2))
plt.figure(figsize=(15,10))
plt.imshow(precise.astype(float), cmap="brg")
plt.colorbar()
```
%% Cell type:markdown id: tags:
# Start Coding
## NIFTy Repository + Installation guide
https://gitlab.mpcdf.mpg.de/ift/NIFTy
NIFTy v4 is **more or less stable!**
......
......@@ -52,8 +52,7 @@ if __name__ == "__main__":
tol_abs_gradnorm=0.1)
inverter = ift.ConjugateGradient(controller=IC)
D = (ift.SandwichOperator(R, N.inverse) + Sh.inverse).inverse
# MR FIXME: we can/should provide a preconditioner here as well!
D = ift.InversionEnabler(D, inverter)
D = ift.InversionEnabler(D, inverter, approximation=Sh)
m = D(j)
# Plotting
......
......@@ -236,12 +236,24 @@ class data_object(object):
def __ipow__(self, other):
return self._binary_helper(other, op='__ipow__')
def __eq__(self, other):
return self._binary_helper(other, op='__eq__')
def __lt__(self, other):
return self._binary_helper(other, op='__lt__')
def __le__(self, other):
return self._binary_helper(other, op='__le__')
def __ne__(self, other):
return self._binary_helper(other, op='__ne__')
def __eq__(self, other):
return self._binary_helper(other, op='__eq__')
def __ge__(self, other):
return self._binary_helper(other, op='__ge__')
def __gt__(self, other):
return self._binary_helper(other, op='__gt__')
def __neg__(self):
return data_object(self._shape, -self._data, self._distaxis)
......
from .operator_tests import consistency_check
from .energy_tests import check_value_gradient_consistency
from .energy_tests import *
......@@ -19,35 +19,63 @@
import numpy as np
from ..field import Field
__all__ = ["check_value_gradient_consistency"]
__all__ = ["check_value_gradient_consistency",
"check_value_gradient_curvature_consistency"]
def check_value_gradient_consistency(E, tol=1e-6, ntries=100):
def _get_acceptable_energy(E):
if not np.isfinite(E.value):
raise ValueError
dir = Field.from_random("normal", E.position.domain)
# find a step length that leads to a "reasonable" energy
for i in range(50):
try:
E2 = E.at(E.position+dir)
if np.isfinite(E2.value) and abs(E2.value) < 1e20:
break
except FloatingPointError:
pass
dir *= 0.5
else:
raise ValueError("could not find a reasonable initial step")
return E2
def check_value_gradient_consistency(E, tol=1e-6, ntries=100):
for _ in range(ntries):
dir = Field.from_random("normal", E.position.domain)
# find a step length that leads to a "reasonable" energy
E2 = _get_acceptable_energy(E)
dir = E2.position - E.position
Enext = E2
dirnorm = dir.norm()
dirder = E.gradient.vdot(dir)/dirnorm
for i in range(50):
try:
E2 = E.at(E.position+dir)
if np.isfinite(E2.value) and abs(E2.value) < 1e20:
break
except FloatingPointError:
pass
print(abs((E2.value-E.value)/dirnorm-dirder))
if abs((E2.value-E.value)/dirnorm-dirder) < tol:
break
dir *= 0.5
dirnorm *= 0.5
E2 = E2.at(E.position+dir)
else:
raise ValueError("could not find a reasonable initial step")
raise ValueError("gradient and value seem inconsistent")
# E = Enext
def check_value_gradient_curvature_consistency(E, tol=1e-6, ntries=100):
for _ in range(ntries):
E2 = _get_acceptable_energy(E)
dir = E2.position - E.position
Enext = E2
dirder = E.gradient.vdot(dir)
dirnorm = dir.norm()
dirder = E.gradient.vdot(dir)/dirnorm
dgrad = E.curvature(dir)/dirnorm
for i in range(50):
Ediff = E2.value - E.value
eps = 1e-10*max(abs(E.value), abs(E2.value))
if abs(Ediff-dirder) < max([tol*abs(Ediff), tol*abs(dirder), eps]):
gdiff = E2.gradient - E.gradient
if abs((E2.value-E.value)/dirnorm-dirder) < tol and \
(abs((E2.gradient-E.gradient)/dirnorm-dgrad) < tol).all():
break
dir *= 0.5
dirder *= 0.5
dirnorm *= 0.5
E2 = E2.at(E.position+dir)
else:
raise ValueError("gradient and value seem inconsistent")
E = Enext
raise ValueError("gradient, value and curvature seem inconsistent")
# E = Enext
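A hypothetical usage sketch for these consistency checks (assuming the helpers are exposed via a `nifty4.extra` module as suggested by the `__init__` diff above, and assuming a `QuadraticEnergy(position, A, b)` constructor as suggested by the `energy._A`/`energy._b` attributes used in the `ScipyCG` diff below; all names and values are illustrative):
``` python
import nifty4 as ift
from nifty4.extra import (check_value_gradient_consistency,
                          check_value_gradient_curvature_consistency)

space = ift.RGSpace(64)
A = ift.ScalingOperator(2., space)                 # a trivially positive curvature
b = ift.Field.from_random('normal', space)
x0 = ift.Field.from_random('normal', space)

E = ift.QuadraticEnergy(x0, A, b)                  # assumed constructor
check_value_gradient_consistency(E)                # raises ValueError on inconsistency
check_value_gradient_curvature_consistency(E)
```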
......@@ -70,7 +70,7 @@ class NonlinearPowerEnergy(Energy):
if samples is None or samples == 0:
xi_sample_list = [xi]
else:
xi_sample_list = [D.inverse_draw_sample() + xi
xi_sample_list = [D.draw_sample(from_inverse=True) + xi
for _ in range(samples)]
self.xi_sample_list = xi_sample_list
self.inverter = inverter
......
......@@ -40,4 +40,4 @@ def WienerFilterCurvature(R, N, S, inverter):
The minimizer to use during numerical inversion
"""
op = SandwichOperator(R, N.inverse) + S.inverse
return InversionEnabler(op, inverter, S)
return InversionEnabler(op, inverter, S.inverse)
......@@ -10,7 +10,7 @@ from .steepest_descent import SteepestDescent
from .vl_bfgs import VL_BFGS
from .l_bfgs import L_BFGS
from .relaxed_newton import RelaxedNewton
from .scipy_minimizer import ScipyMinimizer, NewtonCG, L_BFGS_B
from .scipy_minimizer import ScipyMinimizer, NewtonCG, L_BFGS_B, ScipyCG
from .energy import Energy
from .quadratic_energy import QuadraticEnergy
from .line_energy import LineEnergy
......@@ -19,5 +19,5 @@ __all__ = ["LineSearch", "LineSearchStrongWolfe",
"IterationController", "GradientNormController",
"Minimizer", "ConjugateGradient", "NonlinearCG", "DescentMinimizer",
"SteepestDescent", "VL_BFGS", "RelaxedNewton", "ScipyMinimizer",
"NewtonCG", "L_BFGS_B", "Energy", "QuadraticEnergy", "LineEnergy",
"L_BFGS"]
"NewtonCG", "L_BFGS_B", "ScipyCG", "Energy", "QuadraticEnergy",
"LineEnergy", "L_BFGS"]
......@@ -24,14 +24,26 @@ from ..logger import logger
from .iteration_controller import IterationController
def _toNdarray(fld):
return fld.to_global_data().reshape(-1)
def _toFlatNdarray(fld):
return fld.val.flatten()
def _toField(arr, dom):
return Field.from_global_data(dom, arr.reshape(dom.shape))
class _MinHelper(object):
def __init__(self, energy):
self._energy = energy
self._domain = energy.position.domain
def _update(self, x):
pos = Field(self._domain, x.reshape(self._domain.shape))
if (pos.val != self._energy.position.val).any():
pos = _toField(x, self._domain)
if (pos != self._energy.position).any():
self._energy = self._energy.at(pos.locked_copy())
def fun(self, x):
......@@ -40,13 +52,12 @@ class _MinHelper(object):
def jac(self, x):
self._update(x)
return self._energy.gradient.val.flatten()
return _toFlatNdarray(self._energy.gradient)
def hessp(self, x, p):
self._update(x)
vec = Field(self._domain, p.reshape(self._domain.shape))
res = self._energy.curvature(vec)
return res.val.flatten()
res = self._energy.curvature(_toField(p, self._domain))
return _toFlatNdarray(res)
class ScipyMinimizer(Minimizer):
......@@ -113,3 +124,40 @@ def L_BFGS_B(ftol, gtol, maxiter, maxcor=10, disp=False, bounds=None):
options = {"ftol": ftol, "gtol": gtol, "maxiter": maxiter,
"maxcor": maxcor, "disp": disp}
return ScipyMinimizer("L-BFGS-B", options, False, bounds)
class ScipyCG(Minimizer):
def __init__(self, tol, maxiter):
super(ScipyCG, self).__init__()
if not dobj.is_numpy():
raise NotImplementedError
self._tol = tol
self._maxiter = maxiter
def __call__(self, energy, preconditioner=None):
from scipy.sparse.linalg import LinearOperator as scipy_linop, cg
from .quadratic_energy import QuadraticEnergy
if not isinstance(energy, QuadraticEnergy):
raise ValueError("need a quadratic energy for CG")
class mymatvec(object):
def __init__(self, op):
self._op = op
def __call__(self, inp):
return _toNdarray(self._op(_toField(inp, self._op.domain)))
op = energy._A
b = _toNdarray(energy._b)
sx = _toNdarray(energy.position)
sci_op = scipy_linop(shape=(op.domain.size, op.target.size),
matvec=mymatvec(op))
prec_op = None
if preconditioner is not None:
prec_op = scipy_linop(shape=(op.domain.size, op.target.size),
matvec=mymatvec(preconditioner))
res, stat = cg(sci_op, b, x0=sx, tol=self._tol, M=prec_op,
maxiter=self._maxiter)
stat = (IterationController.CONVERGED if stat >= 0 else
IterationController.ERROR)
return energy.at(_toField(res, op.domain)), stat
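A hypothetical usage sketch for the new `ScipyCG` minimizer (same `QuadraticEnergy(position, A, b)` assumption as above, and assuming `QuadraticEnergy`, `ScipyCG` and `IterationController` are exported at the package top level; values are illustrative):
``` python
import nifty4 as ift

space = ift.RGSpace(128)
A = ift.ScalingOperator(3., space)            # simple SPD operator
b = ift.Field.from_random('normal', space)
x0 = ift.Field.from_random('normal', space)

energy = ift.QuadraticEnergy(x0, A, b)        # assumed constructor
minimizer = ift.ScipyCG(tol=1e-7, maxiter=1000)
result, status = minimizer(energy)            # wraps scipy.sparse.linalg.cg
print(status == ift.IterationController.CONVERGED)
print((result.position - b*(1./3.)).norm())   # exact solution of 3*x = b is b/3
```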
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
# Copyright(C) 2013-2018 Max-Planck-Society
#
# NIFTy is being developed at the Max-Planck-Institut fuer Astrophysik
# and financially supported by the Studienstiftung des deutschen Volkes.
from .linear_operator import LinearOperator
class AdjointOperator(LinearOperator):
"""Adapter class representing the adjoint of a given operator."""
def __init__(self, op):
super(AdjointOperator, self).__init__()
self._op = op
@property
def domain(self):
return self._op.target
@property
def target(self):
return self._op.domain
@property
def capability(self):
return self._adjointCapability[self._op.capability]
@property
def adjoint(self):
return self._op
def apply(self, x, mode):
return self._op.apply(x, self._adjointMode[mode])
......@@ -96,13 +96,15 @@ class ChainOperator(LinearOperator):
def target(self):
return self._ops[0].target
@property
def inverse(self):
return self.make([op.inverse for op in reversed(self._ops)])
@property
def adjoint(self):
return self.make([op.adjoint for op in reversed(self._ops)])
def _flip_modes(self, mode):
if mode == 0:
return self
if mode == 1 or mode == 2:
return self.make([op._flip_modes(mode)
for op in reversed(self._ops)])
if mode == 3:
return self.make([op._flip_modes(mode) for op in self._ops])
raise ValueError("bad operator flipping mode")
@property
def capability(self):
......
......@@ -158,35 +158,30 @@ class DiagonalOperator(EndomorphicOperator):
def capability(self):
return self._all_ops
@property
def inverse(self):
res = self._skeleton(())
res._ldiag = 1./self._ldiag
return res
@property
def adjoint(self):
if np.issubdtype(self._ldiag.dtype, np.floating):
def _flip_modes(self, mode):
if mode == 0:
return self
if mode == 1 and np.issubdtype(self._ldiag.dtype, np.floating):
return self
res = self._skeleton(())
res._ldiag = self._ldiag.conjugate()
return res
def draw_sample(self, dtype=np.float64):
if (np.issubdtype(self._ldiag.dtype, np.complexfloating) or
(self._ldiag <= 0.).any()):
raise ValueError("operator not positive definite")
res = Field.from_random(random_type="normal", domain=self._domain,
dtype=dtype)
res.local_data[()] *= np.sqrt(self._ldiag)
if mode == 1:
res._ldiag = self._ldiag.conjugate()
elif mode == 2:
res._ldiag = 1./self._ldiag
elif mode == 3:
res._ldiag = 1./self._ldiag.conjugate()
else:
raise ValueError("bad operator flipping mode")
return res
def inverse_draw_sample(self, dtype=np.float64):
def draw_sample(self, from_inverse=False, dtype=np.float64):
if (np.issubdtype(self._ldiag.dtype, np.complexfloating) or
(self._ldiag <= 0.).any()):
raise ValueError("operator not positive definite")
res = Field.from_random(random_type="normal", domain=self._domain,
dtype=dtype)
res.local_data[()] /= np.sqrt(self._ldiag)
if from_inverse:
res.local_data[()] /= np.sqrt(self._ldiag)
else:
res.local_data[()] *= np.sqrt(self._ldiag)
return res
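The integer `mode` convention used by `_flip_modes` can be read off the `DiagonalOperator` branches above (0: the operator itself, 1: adjoint, 2: inverse, 3: adjoint of the inverse). A small, NIFTy-independent NumPy check of the same algebra:
``` python
import numpy as np

d = np.array([1.+2.j, 3.-1.j, 0.5+0.j])      # a complex diagonal
D = np.diag(d)

flipped = {
    1: np.diag(d.conjugate()),               # mode 1: adjoint
    2: np.diag(1./d),                        # mode 2: inverse
    3: np.diag(1./d.conjugate()),            # mode 3: adjoint of the inverse
}
assert np.allclose(flipped[1], D.conj().T)
assert np.allclose(flipped[2] @ D, np.eye(3))
assert np.allclose(flipped[3] @ D.conj().T, np.eye(3))
```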
......@@ -36,31 +36,22 @@ class EndomorphicOperator(LinearOperator):
for endomorphic operators."""
return self.domain
def draw_sample(self, dtype=np.float64):
def draw_sample(self, from_inverse=False, dtype=np.float64):
"""Generate a zero-mean sample
Generates a sample from a Gaussian distribution with zero mean and
covariance given by the operator.
Parameters
----------
from_inverse : bool (default : False)
if True, the sample is drawn from the inverse of the operator
dtype : numpy datatype (default : numpy.float64)
the data type to be used for the sample
Returns
-------