Commit 6c25c285 authored by Martin Reinecke's avatar Martin Reinecke

merge

parents 5484a894 2249183a
Pipeline #75731 passed with stages
in 13 minutes and 42 seconds
......@@ -91,6 +91,7 @@ MPI support is added via:
sudo apt-get install python3-mpi4py
Pypocketfft is added via:
pip3 install --user git+https://gitlab.mpcdf.mpg.de/mtr/pypocketfft
If this library is present, NIFTy will detect it automatically and prefer
......@@ -146,12 +147,12 @@ The NIFTy package is licensed under the terms of the
Contributors
------------
Find the list of all people who authored commits in this repository.
### NIFTy6
- Andrija Kostic
- Gordian Edenhofer
- Jakob Knollmüller
- Lukas Platz
- Martin Reinecke
- [Philipp Arras](https://wwwmpa.mpa-garching.mpg.de/~parras/)
......
......@@ -707,9 +707,7 @@
"# Start Coding\n",
"## NIFTy Repository + Installation guide\n",
"\n",
"https://gitlab.mpcdf.mpg.de/ift/NIFTy\n",
"\n",
"NIFTy v6 **more or less stable!**"
"https://gitlab.mpcdf.mpg.de/ift/NIFTy\n"
]
},
{
......
%% Cell type:markdown id: tags:
# A NIFTy demonstration
%% Cell type:markdown id: tags:
## IFT: Big Picture
IFT starting point:
$$d = Rs+n$$
Typically, $s$ is a continuous field and $d$ a discrete data vector. In particular, $R$ is generally not invertible.
IFT aims at **solving** this ill-posed inverse problem in the **best possible way** using Bayesian statistics.
## NIFTy
NIFTy (Numerical Information Field Theory) is a Python framework in which IFT problems can be tackled easily.
Main Interfaces:
- **Spaces**: Cartesian grids, 2-spheres (HEALPix, Gauss-Legendre), and their respective harmonic spaces.
- **Fields**: Defined on spaces.
- **Operators**: Acting on fields.
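%% Cell type:markdown id: tags:
A minimal sketch of these three interfaces in action (toy values; it uses only calls that also appear later in this demo):
%% Cell type:code id: tags:
``` python
import numpy as np
import nifty7 as ift

space = ift.RGSpace(8)                          # 1D Cartesian space with 8 pixels
field = ift.Field.from_random(domain=space, random_type='normal')
op = ift.ScalingOperator(space, 2.)             # operator that doubles a field
print(np.allclose(op(field).val, 2*field.val))  # True
```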
%% Cell type:markdown id: tags:
## Wiener Filter: Formulae
### Assumptions
- $d = Rs + n$, with $R$ a linear operator.
- $\mathcal P (s) = \mathcal G (s,S)$, $\mathcal P (n) = \mathcal G (n,N)$ where $S, N$ are positive definite matrices.
### Posterior
The posterior is given by:
$$\mathcal P (s|d) \propto \mathcal P(s,d) = \mathcal G(d-Rs,N) \,\mathcal G(s,S) \propto \mathcal G (s-m,D),$$
where
$$\begin{align}
m &= Dj \\
D^{-1}&= (S^{-1} +R^\dagger N^{-1} R )\\
j &= R^\dagger N^{-1} d
\end{align}$$
Let us implement this in NIFTy!
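%% Cell type:markdown id: tags:
Before doing so, the formulae can be sanity-checked with dense matrices on a tiny toy problem (a hypothetical 4-pixel setup, independent of the NIFTy implementation that follows):
%% Cell type:code id: tags:
``` python
import numpy as np

rng = np.random.default_rng(0)
S = np.diag([2., 1., .5, .25])   # signal covariance
N = .2*np.eye(4)                 # noise covariance
R = np.eye(4)                    # trivial response
s = rng.multivariate_normal(np.zeros(4), S)
d = R@s + rng.multivariate_normal(np.zeros(4), N)
j = R.T @ np.linalg.inv(N) @ d
D = np.linalg.inv(np.linalg.inv(S) + R.T @ np.linalg.inv(N) @ R)
m = D@j                          # posterior mean; should track s
print(np.vstack([s, m]))
```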
%% Cell type:markdown id: tags:
## Wiener Filter: Example
- We assume statistical homogeneity and isotropy. Therefore the signal covariance $S$ is diagonal in harmonic space, and is described by a one-dimensional power spectrum, assumed here as $$P(k) = P_0\,\left(1+\left(\frac{k}{k_0}\right)^2\right)^{-\gamma /2},$$
with $P_0 = 0.2, k_0 = 5, \gamma = 4$.
- $N = 0.2 \cdot \mathbb{1}$.
- Number of data points $N_{pix} = 512$.
- Reconstruction in harmonic space.
- Response operator:
$$R = FFT_{\text{harmonic} \rightarrow \text{position}}$$
%% Cell type:code id: tags:
``` python
N_pixels = 512  # Number of pixels

def pow_spec(k):
    P0, k0, gamma = [.2, 5, 4]
    return P0 / ((1. + (k/k0)**2)**(gamma / 2))
```
%% Cell type:markdown id: tags:
## Wiener Filter: Implementation
%% Cell type:markdown id: tags:
### Import Modules
%% Cell type:code id: tags:
``` python
import numpy as np
import nifty7 as ift
import matplotlib.pyplot as plt
%matplotlib inline
```
%% Cell type:markdown id: tags:
### Implement Propagator
%% Cell type:code id: tags:
``` python
def Curvature(R, N, Sh):
    IC = ift.GradientNormController(iteration_limit=50000,
                                    tol_abs_gradnorm=0.1)
    # WienerFilterCurvature is (R.adjoint*N.inverse*R + Sh.inverse) plus some
    # handy helper methods.
    return ift.WienerFilterCurvature(R, N, Sh, iteration_controller=IC,
                                     iteration_controller_sampling=IC)
```
%% Cell type:markdown id: tags:
### Conjugate Gradient Preconditioning
- $D$ is defined via:
$$D^{-1} = \mathcal S_h^{-1} + R^\dagger N^{-1} R.$$
In the end, we want to apply $D$ to $j$, i.e. we need the inverse action of $D^{-1}$. This is done numerically (algorithm: *Conjugate Gradient*).
<!--
- One can define the *condition number* of a non-singular and normal matrix $A$:
$$\kappa (A) := \frac{|\lambda_{\text{max}}|}{|\lambda_{\text{min}}|},$$
where $\lambda_{\text{max}}$ and $\lambda_{\text{min}}$ are the largest and smallest eigenvalue of $A$, respectively.
- The larger $\kappa$ the slower Conjugate Gradient.
- By default, conjugate gradient solves: $D^{-1} m = j$ for $m$, where $D^{-1}$ can be badly conditioned. If one knows a non-singular matrix $T$ for which $TD^{-1}$ is better conditioned, one can solve the equivalent problem:
$$\tilde A m = \tilde j,$$
where $\tilde A = T D^{-1}$ and $\tilde j = Tj$.
- In our case, $S_h^{-1}$ is responsible for the bad conditioning of $D$, depending on the chosen power spectrum. Thus, we choose
$$T = \mathcal F^\dagger S_h^{-1} \mathcal F.$$
-->
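%% Cell type:markdown id: tags:
To see what this step does, here is a toy sketch with SciPy's generic conjugate-gradient solver (purely illustrative, with $R$ set to the identity; NIFTy uses its own CG implementation, configured via the `GradientNormController` above):
%% Cell type:code id: tags:
``` python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

npix = 16
Sinv = np.diag(np.linspace(1., 100., npix))   # toy S^{-1}
Ninv = 5.*np.eye(npix)                        # toy N^{-1}, R = identity
Dinv = LinearOperator((npix, npix), matvec=lambda x: Sinv@x + Ninv@x)
j = np.ones(npix)
m, info = cg(Dinv, j, atol=1e-12)             # solves D^{-1} m = j for m
print(info == 0, np.allclose(Dinv.matvec(m), j))
```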
%% Cell type:markdown id: tags:
### Generate Mock data
- Generate fields $s$ and $n$ with the given covariances.
- Calculate $d$.
%% Cell type:code id: tags:
``` python
s_space = ift.RGSpace(N_pixels)
h_space = s_space.get_default_codomain()
HT = ift.HarmonicTransformOperator(h_space, target=s_space)
# Operators
Sh = ift.create_power_operator(h_space, power_spectrum=pow_spec)
R = HT  # *ift.create_harmonic_smoothing_operator((h_space,), 0, 0.02)
# Fields and data
sh = Sh.draw_sample_with_dtype(dtype=np.float64)
noiseless_data = R(sh)
noise_amplitude = np.sqrt(0.2)
N = ift.ScalingOperator(s_space, noise_amplitude**2)
n = ift.Field.from_random(domain=s_space, random_type='normal',
                          std=noise_amplitude, mean=0)
d = noiseless_data + n
j = R.adjoint_times(N.inverse_times(d))
curv = Curvature(R=R, N=N, Sh=Sh)
D = curv.inverse
```
%% Cell type:markdown id: tags:
### Run Wiener Filter
%% Cell type:code id: tags:
``` python
m = D(j)
```
%% Cell type:markdown id: tags:
### Signal Reconstruction
%% Cell type:code id: tags:
``` python
# Get signal data and reconstruction data
s_data = HT(sh).val
m_data = HT(m).val
d_data = d.val
plt.figure(figsize=(15,10))
plt.plot(s_data, 'r', label="Signal", linewidth=3)
plt.plot(d_data, 'k.', label="Data")
plt.plot(m_data, 'k', label="Reconstruction", linewidth=3)
plt.title("Reconstruction")
plt.legend()
plt.show()
```
%% Cell type:code id: tags:
``` python
plt.figure(figsize=(15,10))
plt.plot(s_data - s_data, 'r', label="Signal", linewidth=3)
plt.plot(d_data - s_data, 'k.', label="Data")
plt.plot(m_data - s_data, 'k', label="Reconstruction", linewidth=3)
plt.axhspan(-noise_amplitude, noise_amplitude, facecolor='0.9', alpha=.5)
plt.title("Residuals")
plt.legend()
plt.show()
```
%% Cell type:markdown id: tags:
### Power Spectrum
%% Cell type:code id: tags:
``` python
s_power_data = ift.power_analyze(sh).val
m_power_data = ift.power_analyze(m).val
plt.figure(figsize=(15,10))
plt.loglog()
plt.xlim(1, int(N_pixels/2))
ymin = min(m_power_data)
plt.ylim(ymin, 1)
xs = np.arange(1, int(N_pixels/2), .1)
plt.plot(xs, pow_spec(xs), label="True Power Spectrum", color='k', alpha=0.5)
plt.plot(s_power_data, 'r', label="Signal")
plt.plot(m_power_data, 'k', label="Reconstruction")
plt.axhline(noise_amplitude**2 / N_pixels, color="k", linestyle='--', label="Noise level", alpha=.5)
plt.axhspan(noise_amplitude**2 / N_pixels, ymin, facecolor='0.9', alpha=.5)
plt.title("Power Spectrum")
plt.legend()
plt.show()
```
%% Cell type:markdown id: tags:
## Wiener Filter on Incomplete Data
%% Cell type:code id: tags:
``` python
# Operators
Sh = ift.create_power_operator(h_space, power_spectrum=pow_spec)
N = ift.ScalingOperator(s_space, noise_amplitude**2)
# R is defined below
# Fields
sh = Sh.draw_sample_with_dtype(dtype=np.float64)
s = HT(sh)
n = ift.Field.from_random(domain=s_space, random_type='normal',
                          std=noise_amplitude, mean=0)
```
%% Cell type:markdown id: tags:
### Partially Lose Data
%% Cell type:code id: tags:
``` python
l = int(N_pixels * 0.2)
h = int(N_pixels * 0.2 * 2)
mask = np.full(s_space.shape, 1.)
mask[l:h] = 0
mask = ift.Field.from_raw(s_space, mask)
R = ift.DiagonalOperator(mask)(HT)
n = n.val_rw()
n[l:h] = 0
n = ift.Field.from_raw(s_space, n)
d = R(sh) + n
```
%% Cell type:code id: tags:
``` python
curv = Curvature(R=R, N=N, Sh=Sh)
D = curv.inverse
j = R.adjoint_times(N.inverse_times(d))
m = D(j)
```
%% Cell type:markdown id: tags:
### Compute Uncertainty
%% Cell type:code id: tags:
``` python
m_mean, m_var = ift.probe_with_posterior_samples(curv, HT, 200, np.float64)
```
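%% Cell type:markdown id: tags:
Roughly speaking, `probe_with_posterior_samples` draws 200 samples from the approximate posterior described by `curv`, applies `HT` to each, and returns their sample mean and variance. Below, the square root of this variance is used as a pointwise 1σ uncertainty.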
%% Cell type:markdown id: tags:
### Get data
%% Cell type:code id: tags:
``` python
# Get signal data and reconstruction data
s_data = s.val
m_data = HT(m).val
m_var_data = m_var.val
uncertainty = np.sqrt(m_var_data)
d_data = d.val_rw()
# Set lost data to NaN for proper plotting
d_data[d_data == 0] = np.nan
```
%% Cell type:code id: tags:
``` python
fig = plt.figure(figsize=(15,10))
plt.axvspan(l, h, facecolor='0.8', alpha=0.5)
plt.fill_between(range(N_pixels), m_data - uncertainty, m_data + uncertainty, facecolor='0.5', alpha=0.5)
plt.plot(s_data, 'r', label="Signal", alpha=1, linewidth=3)
plt.plot(d_data, 'k.', label="Data")
plt.plot(m_data, 'k', label="Reconstruction", linewidth=3)
plt.title("Reconstruction of incomplete data")
plt.legend()
```
%% Cell type:markdown id: tags:
# 2D Example
%% Cell type:code id: tags:
``` python
N_pixels = 256  # Number of pixels
sigma2 = 2.     # Noise variance

def pow_spec(k):
    P0, k0, gamma = [.2, 2, 4]
    return P0 * (1. + (k/k0)**2)**(-gamma/2)

s_space = ift.RGSpace([N_pixels, N_pixels])
```
%% Cell type:code id: tags:
``` python
h_space = s_space.get_default_codomain()
HT = ift.HarmonicTransformOperator(h_space, s_space)
# Operators
Sh = ift.create_power_operator(h_space, power_spectrum=pow_spec)
N = ift.ScalingOperator(s_space, sigma2)
# Fields and data
sh = Sh.draw_sample_with_dtype(dtype=np.float64)
n = ift.Field.from_random(domain=s_space, random_type='normal',
                          std=np.sqrt(sigma2), mean=0)
# Lose some data
l = int(N_pixels * 0.33)
h = int(N_pixels * 0.33 * 2)
mask = np.full(s_space.shape, 1.)
mask[l:h,l:h] = 0.
mask = ift.Field.from_raw(s_space, mask)
R = ift.DiagonalOperator(mask)(HT)
n = n.val_rw()
n[l:h, l:h] = 0
n = ift.Field.from_raw(s_space, n)
curv = Curvature(R=R, N=N, Sh=Sh)
D = curv.inverse
d = R(sh) + n
j = R.adjoint_times(N.inverse_times(d))
# Run Wiener filter
m = D(j)
# Uncertainty
m_mean, m_var = ift.probe_with_posterior_samples(curv, HT, 20, np.float64)
# Get data
s_data = HT(sh).val
m_data = HT(m).val
m_var_data = m_var.val
d_data = d.val
uncertainty = np.sqrt(np.abs(m_var_data))
```
%% Cell type:code id: tags:
``` python
cm = ['magma', 'inferno', 'plasma', 'viridis'][1]
mi = np.min(s_data)
ma = np.max(s_data)
fig, axes = plt.subplots(1, 2, figsize=(15, 7))
data = [s_data, d_data]
caption = ["Signal", "Data"]
for ax in axes.flat:
    im = ax.imshow(data.pop(0), interpolation='nearest', cmap=cm, vmin=mi,
                   vmax=ma)
    ax.set_title(caption.pop(0))
fig.subplots_adjust(right=0.8)
cbar_ax = fig.add_axes([0.85, 0.15, 0.05, 0.7])
fig.colorbar(im, cax=cbar_ax)
```
%% Cell type:code id: tags:
``` python
mi = np.min(s_data)
ma = np.max(s_data)
fig, axes = plt.subplots(3, 2, figsize=(15, 22.5))
sample = HT(curv.draw_sample(from_inverse=True)+m).val
post_mean = (m_mean + HT(m)).val
data = [s_data, m_data, post_mean, sample, s_data - m_data, uncertainty]
caption = ["Signal", "Reconstruction", "Posterior mean", "Sample", "Residuals", "Uncertainty Map"]
for ax in axes.flat:
    im = ax.imshow(data.pop(0), interpolation='nearest', cmap=cm, vmin=mi, vmax=ma)
    ax.set_title(caption.pop(0))
fig.subplots_adjust(right=0.8)
cbar_ax = fig.add_axes([.85, 0.15, 0.05, 0.7])
fig.colorbar(im, cax=cbar_ax)
```
%% Cell type:markdown id: tags:
### Is the uncertainty map reliable?
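If the map is a faithful 1σ error estimate and the residuals are approximately Gaussian, roughly 68% of the pixels should lie within the quoted bounds.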
%% Cell type:code id: tags:
``` python
precise = (np.abs(s_data-m_data) < uncertainty)
print("Error within uncertainty map bounds: " + str(np.sum(precise) * 100 / N_pixels**2) + "%")
plt.figure(figsize=(15,10))
plt.imshow(precise.astype(float), cmap="brg")
plt.colorbar()
```
%% Cell type:markdown id: tags:
# Start Coding
## NIFTy Repository + Installation guide
https://gitlab.mpcdf.mpg.de/ift/NIFTy
NIFTy v6 **more or less stable!**
%% Cell type:code id: tags:
``` python
```
......
......@@ -137,8 +137,9 @@ if __name__ == '__main__':
# Plot current reconstruction
plot = ift.Plot()
plot.add(signal(KL.position), title="reconstruction")
plot.add([A.force(mock_position), A.force(KL.position)], title="power")
plot.add(signal(KL.position), title="Latent mean")
plot.add([A.force(KL.position + ss) for ss in KL.samples],
title="Samples power spectrum")
plot.output(ny=1, ysize=6, xsize=16,
name=filename.format("loop_{:02d}".format(i)))
......
......@@ -77,7 +77,7 @@ def _check_linearity(op, domain_dtype, atol, rtol):
return
fld1 = from_random(op.domain, "normal", dtype=domain_dtype)
fld2 = from_random(op.domain, "normal", dtype=domain_dtype)
alpha = np.random.random() # FIXME: this can break badly with MPI!
alpha = 0.42
val1 = op(alpha*fld1+fld2)
val2 = alpha*op(fld1)+op(fld2)
assert_allclose(val1, val2, atol=atol, rtol=rtol)
......
......@@ -65,10 +65,10 @@ class Linearization(Operator):
return self.make_var(self._val, self._want_metric)
def prepend_jac(self, jac):
metric = None
if self._metric is not None:
if self._metric is None:
return self.new(self._val, self._jac @ jac)
from .operators.sandwich_operator import SandwichOperator
metric = None if self._metric is None else SandwichOperator.make(jac, self._metric)
metric = SandwichOperator.make(jac, self._metric)
return self.new(self._val, self._jac @ jac, metric)
@property
......
......@@ -36,27 +36,6 @@ def _shareRange(nwork, nshares, myshare):
return lo, hi
def _np_allreduce_sum(comm, arr):
if comm is None:
return arr
from mpi4py import MPI
arr = np.array(arr)
res = np.empty_like(arr)
comm.Allreduce(arr, res, MPI.SUM)
return res
def _allreduce_sum_field(comm, fld):
if comm is None:
return fld
if isinstance(fld, Field):
return Field(fld.domain, _np_allreduce_sum(comm, fld.val))
res = tuple(
Field(f.domain, _np_allreduce_sum(comm, f.val))
for f in fld.values())
return MultiField(fld.domain, res)
class _KLMetric(EndomorphicOperator):
def __init__(self, KL):
self._KL = KL
......@@ -185,27 +164,21 @@ class MetricGaussianKL(Energy):
raise ValueError("# of samples mismatch")
self._local_samples = _local_samples
self._lin = Linearization.make_partial_var(mean, self._constants)
v, g = None, None
if len(self._local_samples) == 0: # hack if there are too many MPI tasks
tmp = self._hamiltonian(self._lin)
v = 0. * tmp.val.val
g = 0. * tmp.gradient
else:
v, g = [], []
for s in self._local_samples:
tmp = self._hamiltonian(self._lin+s)
tv = tmp.val.val
tg = tmp.gradient
if self._mirror_samples:
tmp = tmp + self._hamiltonian(self._lin-s)
if v is None:
v = tmp.val.val_rw()
g = tmp.gradient
else:
v += tmp.val.val
g = g + tmp.gradient
self._val = _np_allreduce_sum(self._comm, v)[()] / self._n_eff_samples
tmp = self._hamiltonian(self._lin-s)
tv = tv + tmp.val.val
tg = tg + tmp.gradient
v.append(tv)
g.append(tg)
self._val = self._sumup(v)[()]/self._n_eff_samples
if np.isnan(self._val) and self._mitigate_nans:
self._val = np.inf
self._grad = _allreduce_sum_field(self._comm, g) / self._n_eff_samples
self._metric = None
self._grad = self._sumup(g)/self._n_eff_samples
def at(self, position):
return MetricGaussianKL(
......@@ -221,24 +194,15 @@ class MetricGaussianKL(Energy):
def gradient(self):
return self._grad
def _get_metric(self):
def apply_metric(self, x):
lin = self._lin.with_want_metric()
if self._metric is None:
if len(self._local_samples) == 0: # hack if there are too many MPI tasks
self._metric = self._hamiltonian(lin).metric.scale(0.)
else:
mymap = map(lambda v: self._hamiltonian(lin+v).metric,
self._local_samples)
unscaled_metric = utilities.my_sum(mymap)
res = []
for s in self._local_samples:
tmp = self._hamiltonian(lin+s).metric(x)
if self._mirror_samples:
mymap = map(lambda v: self._hamiltonian(lin-v).metric,
self._local_samples)
unscaled_metric = unscaled_metric + utilities.my_sum(mymap)
self._metric = unscaled_metric.scale(1./self._n_eff_samples)
def apply_metric(self, x):
self._get_metric()
return _allreduce_sum_field(self._comm, self._metric(x))
tmp = tmp + self._hamiltonian(lin-s).metric(x)
res.append(tmp)
return self._sumup(res)/self._n_eff_samples
@property
def metric(self):
......@@ -263,15 +227,73 @@ class MetricGaussianKL(Energy):
if self._mirror_samples:
yield -s
def _sumup(self, obj):
""" This is a deterministic implementation of MPI allreduce
Numeric addition is not associative due to rounding errors.
Therefore we provide our own implementation that is consistent
no matter if MPI is used and how many tasks there are.
At the beginning, a list `who` is constructed, that states which obj can
be found on which MPI task.
Then elements are added pairwise, with increasing pair distance.
In the first round, the distance between pair members is 1:
v[0] := v[0] + v[1]
v[2] := v[2] + v[3]
v[4] := v[4] + v[5]
Entries whose summation partner lies beyond the end of the array
stay unchanged.
When both summation partners are not located on the same MPI task,
the second summand is sent to the task holding the first summand and
the operation is carried out there.
For the next round, the distance is doubled:
v[0] := v[0] + v[2]
v[4] := v[4] + v[6]
v[8] := v[8] + v[10]
This is repeated until the distance exceeds the length of the array.
At this point v[0] contains the sum of all entries, which is then
broadcast to all tasks.
"""
if self._comm is None:
who = np.zeros(self._n_samples, dtype=np.int32)
rank = 0
vals = list(obj) # necessary since we don't want to modify `obj`
else:
ntask = self._comm.Get_size()
rank = self._comm.Get_rank()
rank_lo_hi = [_shareRange(self._n_samples, ntask, i) for i in range(ntask)]
lo, hi = rank_lo_hi[rank]
vals = [None]*lo + list(obj) + [None]*(self._n_samples-hi)
who = [t for t, (l, h) in enumerate(rank_lo_hi) for cnt in range(h-l)]
step = 1
while step < self._n_samples:
for j in range(0, self._n_samples, 2*step):
if j+step < self._n_samples: # summation partner found
if rank == who[j]:
if who[j] == who[j+step]: # no communication required
vals[j] = vals[j] + vals[j+step]
vals[j+step] = None
else:
vals[j] = vals[j] + self._comm.recv(source=who[j+step])
elif rank == who[j+step]:
self._comm.send(vals[j+step], dest=who[j])
vals[j+step] = None
step *= 2
if self._comm is None:
return vals[0]
return self._comm.bcast(vals[0], root=who[0])
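The pairwise order described in the docstring can be illustrated without MPI; a minimal single-task sketch (toy code, not part of the patch):
``` python
def pairwise_sum(vals):
    # Single-task version of the summation order used by _sumup above:
    # add elements pairwise with doubling pair distance, so the result
    # is identical no matter how the entries are distributed over tasks.
    vals = list(vals)  # don't modify the caller's list
    n = len(vals)
    step = 1
    while step < n:
        for j in range(0, n, 2*step):
            if j + step < n:  # summation partner found
                vals[j] = vals[j] + vals[j+step]
        step *= 2
    return vals[0]

print(pairwise_sum([0.1]*10))
```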
def _metric_sample(self, from_inverse=False):
if from_inverse:
raise NotImplementedError()
lin = self._lin.with_want_metric()
samp = full(self._hamiltonian.domain, 0.)
samp = []
sseq = random.spawn_sseq(self._n_samples)
for i, v in enumerate(self._local_samples):
with random.Context(sseq[self._lo+i]):
samp = samp + self._hamiltonian(lin+v).metric.draw_sample(from_inverse=False)
tmp = self._hamiltonian(lin+v).metric.draw_sample(from_inverse=False)
if self._mirror_samples:
samp = samp + self._hamiltonian(lin-v).metric.draw_sample(from_inverse=False)
return _allreduce_sum_field(self._comm, samp)/self._n_eff_samples
tmp = tmp + self._hamiltonian(lin-v).metric.draw_sample(from_inverse=False)
samp.append(tmp)
return self._sumup(samp)/self._n_eff_samples
......@@ -268,24 +268,29 @@ def _optimise_operator(op):
def optimise_operator(op):
"""
Merges redundant operations in the tree structure of an operator.
For example it is ensured that for ``(f@x + x)`` ``x`` is only computed once.
Currently works only on ``_OpChain``, ``_OpSum`` and ``_OpProd`` and does not optimise their linear counterparts
For example it is ensured that for ``f@x + x`` the operator ``x`` is only computed once.
It is supposed to be used on the whole operator chain before doing minimisation.
Currently optimises only ``_OpChain``, ``_OpSum`` and ``_OpProd`` and not their linear counterparts
``ChainOp`` and ``SumOperator``.
Parameters
----------
op: Operator
op : Operator
Operator with a tree structure.
Returns
-------
op_optimised : Operator
Operator with same input/output, but optimised tree structure.
Notes
-----
Since operators are compared by id best results are achieved when the following code
Operators are compared only by id, so best results are achieved when the following code
>>> from nifty7 import UniformOperator, DomainTuple
>>> uni1 = UniformOperator(DomainTuple.scalar_domain())
......@@ -293,16 +298,18 @@ def optimise_operator(op):
>>> op = (uni1 + uni2)*(uni1 + uni2)
is replaced by something comparable to
>>> uni = UniformOperator(DomainTuple.scalar_domain())
>>> uni_add = uni + uni
>>> op = uni_add * uni_add
After optimisation the operator is comparable in speed to
After optimisation the operator is as fast as
>>> op = (2*uni)**2
"""
op_optimised = deepcopy(op)
op_optimised = _optimise_operator(op_optimised)
test_field = from_random('normal', op.domain)
test_field = from_random(op.domain)
if isinstance(op(test_field), MultiField):
for key in op(test_field).keys():
assert allclose(op(test_field).val[key], op_optimised(test_field).val[key], 1e-10)
......
......@@ -144,6 +144,9 @@ class VariableCovarianceGaussianEnergy(EnergyOperator):
inverse_covariance : key
Inverse covariance diagonal key of the Gaussian.
sampling_dtype : np.dtype
Data type of the samples. Usually either 'np.float*' or 'np.complex*'
"""
def __init__(self, domain, residual_key, inverse_covariance_key, sampling_dtype):
......
......@@ -260,10 +260,10 @@ def from_random(domain, random_type='normal', dtype=np.float64, **kwargs):
Parameters
----------
random_type : 'pm1', 'normal', or 'uniform'
The random distribution to use.
domain : Domainoid
the intended domain of the output field
random_type : 'pm1', 'normal', or 'uniform'
The random distribution to use.
dtype : type
data type of the output field (e.g. numpy.float64)
**kwargs : additional parameters for the random distribution
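A typical call with the new argument order, for illustration (toy domain):
``` python
import numpy as np
import nifty7 as ift

# domain comes first, followed by the distribution type and dtype
f = ift.from_random(ift.RGSpace(4), random_type='normal', dtype=np.float64)
print(f.val)
```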
......
......@@ -18,7 +18,7 @@
import numpy as np
import pytest
from mpi4py import MPI
from numpy.testing import assert_, assert_allclose
from numpy.testing import assert_, assert_equal
import nifty7 as ift
......@@ -76,33 +76,35 @@ def test_kl(constants, point_estimates, mirror_samples, mode, mf):
assert_(len(tuple(kl1.samples)) == expected_nsamps)
# Test value
assert_allclose(kl0.value, kl1.value)
assert_equal(kl0.value, kl1.value)
# Test gradient
if not mf:
ift.extra.assert_allclose(kl0.gradient, kl1.gradient, 0, 1e-14)
return
if mf:
for kk in h.domain.keys():
res0 = kl0.gradient[kk].val
if kk in constants:
res0 = 0*res0
res1 = kl1.gradient[kk].val
assert_allclose(res0, res1)
assert_equal(res0, res1)
else:
assert_equal(kl0.gradient.val, kl1.gradient.val)
# Test point_estimates (after drawing samples)
if mf:
for kk in point_estimates:
for ss in kl0.samples:
ss = ss[kk].val
assert_allclose(ss, 0*ss)
assert_equal(ss, 0*ss)
for ss in kl1.samples:
ss = ss[kk].val
assert_allclose(ss, 0*ss)
assert_equal(ss, 0*ss)
# Test constants (after some minimization)
if mf:
cg = ift.GradientNormController(iteration_limit=5)
minimizer = ift.NewtonCG(cg)
for e in [kl0, kl1]:
e, _ = minimizer(e)
diff = (mean0 - e.position).to_dict()
for kk in constants:
assert_allclose(diff[kk].val, 0*diff[kk].val)
assert_equal(diff[kk].val, 0*diff[kk].val)
......@@ -39,7 +39,6 @@ class CountingOp(ift.Operator):
def test_operator_tree_optimiser():
dom = ift.RGSpace(10, harmonic=True)
hdom = dom.get_default_codomain()
cop1 = CountingOp(dom)
op1 = (ift.UniformOperator(dom, -1, 2)@cop1).ducktape('a')
cop2 = CountingOp(dom)
......@@ -58,3 +57,5 @@ def test_operator_tree_optimiser():
op = ift.operator_tree_optimiser._optimise_operator(op)
assert_allclose(op(fld).val, op_orig(fld).val, rtol=np.finfo(np.float64).eps)
assert_(1 == ( (cop4.count-1) * cop3.count * cop2.count * cop1.count))
# test testing
ift.optimise_operator(op_orig)
......@@ -28,9 +28,12 @@ def test_conjugation_operator():
dom = ift.makeDomain(sp)
f = ift.from_random(dom, dtype=np.complex128)
op = ift.ScalingOperator(sp, 1).conjugate()
arr = f.val
res1 = f.conjugate()
res2 = op(f)
res3 = arr.conjugate()
assert_allclose(res1.val, res2.val)
assert_allclose(res1.val, res3)
ift.extra.consistency_check(op, domain_dtype=np.float64,
target_dtype=np.float64)
ift.extra.consistency_check(op, domain_dtype=np.complex128,
......
......@@ -23,25 +23,33 @@ import nifty7 as ift
from ..common import setup_function, teardown_function
def test_contraction_operator():
def test_operator_sum():
x1 = ift.RGSpace((9,), distances=2.)
x2 = ift.RGSpace((2, 12), distances=(0.3,))
dom1 = ift.makeDomain(x1)
dom2 = ift.makeDomain((x1, x2))
f1 = ift.from_random(dom1)
f2 = ift.from_random(dom2)
arr1 = f1.val
arr2 = f2.val
op1 = ift.ScalingOperator(dom1, 1).sum()
op2 = ift.ScalingOperator(dom2, 1).sum()
op3 = ift.ScalingOperator(dom2, 1).sum(spaces=1)
res1 = f1.sum()
res2 = op1(f1)
res3 = arr1.sum()
assert_allclose(res1.val, res2.val)
res3 = f2.sum()
res4 = op2(f2)
assert_allclose(res3.val, res4.val)
res5 = f2.sum(spaces=1)
res6 = op3(f2)
assert_allclose(res5.val, res6.val)
assert_allclose(res1.val, res3)
res4 = f2.sum()
res5 = op2(f2)
res6 = arr2.sum()
assert_allclose(res4.val, res5.val)
assert_allclose(res4.val, res6)
res7 = f2.sum(spaces=1)
res8 = op3(f2)
res9 = arr2.sum(axis=(1, 2))
assert_allclose(res7.val, res8.val)
assert_allclose(res7.val, res9)
for op in [op1, op2, op3]:
ift.extra.consistency_check(op, domain_dtype=np.float64,
target_dtype=np.float64)
......
......@@ -35,13 +35,19 @@ def test_integration_operator():
op3 = ift.ScalingOperator(dom2, 1).integrate(spaces=1)
res1 = f1.integrate()
res2 = op1(f1)
res3 = (f1.val*x1.dvol).sum()
assert_allclose(res1.val, res2.val)
res3 = f2.integrate()
res4 = op2(f2)
assert_allclose(res3.val, res4.val)
res5 = f2.integrate(spaces=1)
res6 = op3(f2)
assert_allclose(res5.val, res6.val)
assert_allclose(res1.val, res3)
res4 = f2.integrate()
res5 = op2(f2)
res6 = (f2.val*x1.dvol*x2.dvol).sum()
assert_allclose(res4.val, res5.val)
assert_allclose(res4.val, res6)
res7 = f2.integrate(spaces=1)
res8 = op3(f2)