Commit 9c050381 authored by Martin Reinecke

merge NIFTy_6

parents 73c8b1a9 a9fea605
Pipeline #75246 passed with stages
in 8 minutes and 26 seconds
......@@ -43,6 +43,13 @@ test_serial:
- >
grep TOTAL coverage.txt | awk '{ print "TOTAL: "$4; }'
test_mpi:
stage: test
variables:
OMPI_MCA_btl_vader_single_copy_mechanism: none
script:
- mpiexec -n 2 --bind-to none pytest-3 -q test/test_mpi
pages:
stage: release
script:
......@@ -61,7 +68,7 @@ before_script:
run_ipynb:
stage: demo_runs
script:
- jupyter nbconvert --execute --ExecutePreprocessor.timeout=None demos/Wiener_Filter.ipynb
- jupyter nbconvert --execute --ExecutePreprocessor.timeout=None demos/getting_started_0.ipynb
run_getting_started_1:
stage: demo_runs
......
Changes since NIFTy 5:
Minimum Python version increased to 3.6
=======================================
New operators
=============
In addition to the changes listed below, the following operators were introduced:
* UniformOperator: Transforms a Gaussian into a uniform distribution
* VariableCovarianceGaussianEnergy: Energy operator for inferring covariances
* MultiLinearEinsum: Multi-linear version of numpy's einsum with derivatives
* LinearEinsum: Linear version of numpy's einsum with one free field
* PartialConjugate: Conjugates parts of a multi-field
* SliceOperator: Geometry-preserving mask operator
* SplitOperator: Splits a single field into a multi-field
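The two Einsum operators wrap numpy's `einsum` contraction for NIFTy fields. As a plain-numpy sketch (toy arrays, not the NIFTy API) of the kind of contraction `LinearEinsum` lifts to fields:

```python
import numpy as np

# Plain-numpy sketch of the contraction that LinearEinsum/MultiLinearEinsum
# generalise to NIFTy fields: 'ij,j->i' is a matrix-vector product.
rng = np.random.default_rng(42)
A = rng.standard_normal((3, 4))
x = rng.standard_normal(4)

y = np.einsum('ij,j->i', A, x)
assert np.allclose(y, A @ x)  # this subscript string is exactly A @ x
```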
FFT convention adjusted
=======================
When going to harmonic space, NIFTy's FFT operator now uses a minus sign in the
exponent (and, consequently, a plus sign on the adjoint transform). This
convention is consistent with almost all other numerical FFT libraries.
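This is the same convention numpy's FFT uses, which can be verified directly: with a minus sign in the forward exponent, a pure phase `exp(+2πi k0 n / N)` lands entirely in frequency bin `k0`.

```python
import numpy as np

# Forward transform with a minus sign in the exponent (numpy convention):
# a pure phase exp(+2πi k0 n / N) is mapped onto frequency bin k0.
N, k0 = 16, 3
n = np.arange(N)
signal = np.exp(2j * np.pi * k0 * n / N)
spectrum = np.fft.fft(signal)          # minus sign in the forward exponent
assert np.argmax(np.abs(spectrum)) == k0
assert np.allclose(spectrum[k0], N)    # all power concentrated in bin k0
```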
Interface change in EndomorphicOperator.draw_sample()
=====================================================
Both complex-valued and real-valued Gaussian probability distributions have
Hermitian, positive endomorphisms as covariance. From an endomorphic operator
alone it is not clear whether it is to be interpreted as the covariance of a
real or of a complex Gaussian when a sample of the respective distribution is
drawn. Therefore, the new method `draw_sample_with_dtype()` must be given the
data type of the probability distribution. It is implemented for all operators
that actually draw random numbers (`DiagonalOperator` and `ScalingOperator`).
The class `SamplingDtypeSetter` wraps such operators in order to fix the data
type of the distribution; samples from a wrapped operator can then be drawn
with `.draw_sample()`. To explore these subtleties, run the following code and
play around with the dtypes.
```python
import nifty6 as ift
import numpy as np

dom = ift.UnstructuredDomain(5)
dtype = [np.float64, np.complex128][1]
invcov = ift.ScalingOperator(dom, 3)
e = ift.GaussianEnergy(mean=ift.from_random(dom, 'normal', dtype=dtype),
                       inverse_covariance=invcov)
pos = ift.from_random(dom, 'normal', dtype=np.complex128)
lin = e(ift.Linearization.make_var(pos, want_metric=True))
met = lin.metric
print(met)
print(met.draw_sample())
```
MPI parallelisation over samples in MetricGaussianKL
====================================================
......@@ -15,6 +71,13 @@ the generation of reproducible random numbers in the presence of MPI parallelism
and leads to cleaner code overall. Please see the documentation of
`nifty6.random` for details.
Interface Change for from_random and OuterProduct
=================================================
`sugar.from_random`, `Field.from_random`, and `MultiField.from_random` now take
the domain as their first argument and default to 'normal' for the second.
Likewise, `OuterProduct` takes the domain as its first argument and a field as the second.
Interface Change for non-linear Operators
=========================================
......
......@@ -45,7 +45,7 @@ Installation
### Requirements
- [Python 3](https://www.python.org/) (3.5.x or later)
- [Python 3](https://www.python.org/) (3.6.x or later)
- [SciPy](https://www.scipy.org/)
Optional dependencies:
......
......@@ -43,7 +43,7 @@ if __name__ == '__main__':
harmonic_space = position_space.get_default_codomain()
HT = ift.HarmonicTransformOperator(harmonic_space, position_space)
position = ift.from_random('normal', harmonic_space)
position = ift.from_random(harmonic_space, 'normal')
# Define power spectrum and amplitudes
def sqrtpspec(k):
......@@ -58,13 +58,13 @@ if __name__ == '__main__':
# Generate mock data
p = R(sky)
mock_position = ift.from_random('normal', harmonic_space)
mock_position = ift.from_random(harmonic_space, 'normal')
tmp = p(mock_position).val.astype(np.float64)
data = ift.random.current_rng().binomial(1, tmp)
data = ift.Field.from_raw(R.target, data)
# Compute likelihood and Hamiltonian
position = ift.from_random('normal', harmonic_space)
position = ift.from_random(harmonic_space, 'normal')
likelihood = ift.BernoulliEnergy(data) @ p
ic_newton = ift.DeltaEnergyController(
name='Newton', iteration_limit=100, tol_rel_deltaE=1e-8)
......
%% Cell type:markdown id: tags:
# A NIFTy demonstration
%% Cell type:markdown id: tags:
## IFT: Big Picture
IFT starting point:
$$d = Rs+n$$
Typically, $s$ is a continuous field and $d$ a discrete data vector. In particular, $R$ is not invertible.
IFT aims at **inverting** the above non-invertible problem in the **best possible way** using Bayesian statistics.
## NIFTy
NIFTy (Numerical Information Field Theory) is a Python framework in which IFT problems can be tackled easily.
Main Interfaces:
- **Spaces**: Cartesian, 2-Spheres (Healpix, Gauss-Legendre) and their respective harmonic spaces.
- **Fields**: Defined on spaces.
- **Operators**: Acting on fields.
%% Cell type:markdown id: tags:
## Wiener Filter: Formulae
### Assumptions
- $d=Rs+n$, $R$ linear operator.
- $\mathcal P (s) = \mathcal G (s,S)$, $\mathcal P (n) = \mathcal G (n,N)$ where $S, N$ are positive definite matrices.
### Posterior
The posterior is given by:
$$\mathcal P (s|d) \propto P(s,d) = \mathcal G(d-Rs,N) \,\mathcal G(s,S) \propto \mathcal G (s-m,D) $$
where
$$\begin{align}
m &= Dj \\
D^{-1}&= (S^{-1} +R^\dagger N^{-1} R )\\
j &= R^\dagger N^{-1} d
\end{align}$$
Let us implement this in NIFTy!
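Before the NIFTy version below, the formulae can be sanity-checked with explicit dense matrices (a toy sketch with made-up covariances, not the NIFTy implementation):

``` python
import numpy as np

# Dense-matrix check of the Wiener filter formulae above:
# m = D j with D^{-1} = S^{-1} + R^T N^{-1} R and j = R^T N^{-1} d.
rng = np.random.default_rng(0)
npix = 8
S = np.diag(np.full(npix, 2.0))     # signal covariance (toy choice)
Nn = np.diag(np.full(npix, 0.2))    # noise covariance (toy choice)
R = np.eye(npix)                    # trivial response for illustration

s = rng.multivariate_normal(np.zeros(npix), S)
n = rng.multivariate_normal(np.zeros(npix), Nn)
d = R @ s + n

D = np.linalg.inv(np.linalg.inv(S) + R.T @ np.linalg.inv(Nn) @ R)
j = R.T @ np.linalg.inv(Nn) @ d
m = D @ j                           # posterior mean

# With R = 1 and diagonal covariances, m shrinks the data towards the prior
# mean 0 by the factor S / (S + N) per pixel.
assert np.allclose(m, d * (2.0 / (2.0 + 0.2)))
```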
%% Cell type:markdown id: tags:
## Wiener Filter: Example
- We assume statistical homogeneity and isotropy. Therefore the signal covariance $S$ is diagonal in harmonic space, and is described by a one-dimensional power spectrum, assumed here as $$P(k) = P_0\,\left(1+\left(\frac{k}{k_0}\right)^2\right)^{-\gamma /2},$$
with $P_0 = 0.2, k_0 = 5, \gamma = 4$.
- $N = 0.2 \cdot \mathbb{1}$.
- Number of data points $N_{pix} = 512$.
- Reconstruction in harmonic space.
- Response operator:
$$R = FFT_{\text{harmonic} \rightarrow \text{position}}$$
%% Cell type:code id: tags:
``` python
N_pixels = 512 # Number of pixels
def pow_spec(k):
P0, k0, gamma = [.2, 5, 4]
return P0 / ((1. + (k/k0)**2)**(gamma / 2))
```
%% Cell type:markdown id: tags:
## Wiener Filter: Implementation
%% Cell type:markdown id: tags:
### Import Modules
%% Cell type:code id: tags:
``` python
import numpy as np
import nifty6 as ift
import matplotlib.pyplot as plt
%matplotlib inline
```
%% Cell type:markdown id: tags:
### Implement Propagator
%% Cell type:code id: tags:
``` python
def Curvature(R, N, Sh):
IC = ift.GradientNormController(iteration_limit=50000,
tol_abs_gradnorm=0.1)
# WienerFilterCurvature is (R.adjoint*N.inverse*R + Sh.inverse) plus some handy
# helper methods.
return ift.WienerFilterCurvature(R,N,Sh,iteration_controller=IC,iteration_controller_sampling=IC)
```
%% Cell type:markdown id: tags:
### Conjugate Gradient Preconditioning
- $D$ is defined via:
$$D^{-1} = \mathcal S_h^{-1} + R^\dagger N^{-1} R.$$
In the end, we want to apply $D$ to $j$, i.e. we need the inverse action of $D^{-1}$. This is done numerically (algorithm: *Conjugate Gradient*).
<!--
- One can define the *condition number* of a non-singular and normal matrix $A$:
$$\kappa (A) := \frac{|\lambda_{\text{max}}|}{|\lambda_{\text{min}}|},$$
where $\lambda_{\text{max}}$ and $\lambda_{\text{min}}$ are the largest and smallest eigenvalue of $A$, respectively.
- The larger $\kappa$ the slower Conjugate Gradient.
- By default, conjugate gradient solves: $D^{-1} m = j$ for $m$, where $D^{-1}$ can be badly conditioned. If one knows a non-singular matrix $T$ for which $TD^{-1}$ is better conditioned, one can solve the equivalent problem:
$$\tilde A m = \tilde j,$$
where $\tilde A = T D^{-1}$ and $\tilde j = Tj$.
- In our case $S^{-1}$ is responsible for the bad conditioning of $D$ depending on the chosen power spectrum. Thus, we choose
$$T = \mathcal F^\dagger S_h^{-1} \mathcal F.$$
-->
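The inverse action of $D^{-1}$ amounts to solving $D^{-1} m = j$ for a symmetric positive-definite operator. A minimal hand-rolled conjugate gradient (a sketch on toy matrices, not NIFTy's `ConjugateGradient` class) looks like this:

``` python
import numpy as np

def cg(A, j, tol=1e-10, maxiter=1000):
    """Solve A m = j for symmetric positive-definite A by conjugate gradient."""
    m = np.zeros_like(j)
    r = j - A @ m          # residual
    p = r.copy()           # search direction
    rs = r @ r
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        m += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return m

rng = np.random.default_rng(1)
B = rng.standard_normal((6, 6))
A = B @ B.T + 6 * np.eye(6)   # symmetric positive definite by construction
j = rng.standard_normal(6)
m = cg(A, j)
assert np.allclose(A @ m, j, atol=1e-8)
```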
%% Cell type:markdown id: tags:
### Generate Mock data
- Generate fields $s$ and $n$ with the given covariances.
- Calculate $d$.
%% Cell type:code id: tags:
``` python
s_space = ift.RGSpace(N_pixels)
h_space = s_space.get_default_codomain()
HT = ift.HarmonicTransformOperator(h_space, target=s_space)
# Operators
Sh = ift.create_power_operator(h_space, power_spectrum=pow_spec)
R = HT #*ift.create_harmonic_smoothing_operator((h_space,), 0, 0.02)
# Fields and data
sh = Sh.draw_sample()
sh = Sh.draw_sample_with_dtype(dtype=np.float64)
noiseless_data=R(sh)
noise_amplitude = np.sqrt(0.2)
N = ift.ScalingOperator(s_space, noise_amplitude**2)
n = ift.Field.from_random(domain=s_space, random_type='normal',
std=noise_amplitude, mean=0)
d = noiseless_data + n
j = R.adjoint_times(N.inverse_times(d))
curv = Curvature(R=R, N=N, Sh=Sh)
D = curv.inverse
```
%% Cell type:markdown id: tags:
### Run Wiener Filter
%% Cell type:code id: tags:
``` python
m = D(j)
```
%% Cell type:markdown id: tags:
### Signal Reconstruction
%% Cell type:code id: tags:
``` python
# Get signal data and reconstruction data
s_data = HT(sh).val
m_data = HT(m).val
d_data = d.val
plt.figure(figsize=(15,10))
plt.plot(s_data, 'r', label="Signal", linewidth=3)
plt.plot(d_data, 'k.', label="Data")
plt.plot(m_data, 'k', label="Reconstruction",linewidth=3)
plt.title("Reconstruction")
plt.legend()
plt.show()
```
%% Cell type:code id: tags:
``` python
plt.figure(figsize=(15,10))
plt.plot(s_data - s_data, 'r', label="Signal", linewidth=3)
plt.plot(d_data - s_data, 'k.', label="Data")
plt.plot(m_data - s_data, 'k', label="Reconstruction",linewidth=3)
plt.axhspan(-noise_amplitude,noise_amplitude, facecolor='0.9', alpha=.5)
plt.title("Residuals")
plt.legend()
plt.show()
```
%% Cell type:markdown id: tags:
### Power Spectrum
%% Cell type:code id: tags:
``` python
s_power_data = ift.power_analyze(sh).val
m_power_data = ift.power_analyze(m).val
plt.figure(figsize=(15,10))
plt.loglog()
plt.xlim(1, int(N_pixels/2))
ymin = min(m_power_data)
plt.ylim(ymin, 1)
xs = np.arange(1,int(N_pixels/2),.1)
plt.plot(xs, pow_spec(xs), label="True Power Spectrum", color='k',alpha=0.5)
plt.plot(s_power_data, 'r', label="Signal")
plt.plot(m_power_data, 'k', label="Reconstruction")
plt.axhline(noise_amplitude**2 / N_pixels, color="k", linestyle='--', label="Noise level", alpha=.5)
plt.axhspan(noise_amplitude**2 / N_pixels, ymin, facecolor='0.9', alpha=.5)
plt.title("Power Spectrum")
plt.legend()
plt.show()
```
%% Cell type:markdown id: tags:
## Wiener Filter on Incomplete Data
%% Cell type:code id: tags:
``` python
# Operators
Sh = ift.create_power_operator(h_space, power_spectrum=pow_spec)
N = ift.ScalingOperator(s_space, noise_amplitude**2)
# R is defined below
# Fields
sh = Sh.draw_sample()
sh = Sh.draw_sample_with_dtype(dtype=np.float64)
s = HT(sh)
n = ift.Field.from_random(domain=s_space, random_type='normal',
std=noise_amplitude, mean=0)
```
%% Cell type:markdown id: tags:
### Partially Lose Data
%% Cell type:code id: tags:
``` python
l = int(N_pixels * 0.2)
h = int(N_pixels * 0.2 * 2)
mask = np.full(s_space.shape, 1.)
mask[l:h] = 0
mask = ift.Field.from_raw(s_space, mask)
R = ift.DiagonalOperator(mask)(HT)
n = n.val_rw()
n[l:h] = 0
n = ift.Field.from_raw(s_space, n)
d = R(sh) + n
```
%% Cell type:code id: tags:
``` python
curv = Curvature(R=R, N=N, Sh=Sh)
D = curv.inverse
j = R.adjoint_times(N.inverse_times(d))
m = D(j)
```
%% Cell type:markdown id: tags:
### Compute Uncertainty
%% Cell type:code id: tags:
``` python
m_mean, m_var = ift.probe_with_posterior_samples(curv, HT, 200)
m_mean, m_var = ift.probe_with_posterior_samples(curv, HT, 200, np.float64)
```
%% Cell type:markdown id: tags:
### Get data
%% Cell type:code id: tags:
``` python
# Get signal data and reconstruction data
s_data = s.val
m_data = HT(m).val
m_var_data = m_var.val
uncertainty = np.sqrt(m_var_data)
d_data = d.val_rw()
# Set lost data to NaN for proper plotting
d_data[d_data == 0] = np.nan
```
%% Cell type:code id: tags:
``` python
fig = plt.figure(figsize=(15,10))
plt.axvspan(l, h, facecolor='0.8',alpha=0.5)
plt.fill_between(range(N_pixels), m_data - uncertainty, m_data + uncertainty, facecolor='0.5', alpha=0.5)
plt.plot(s_data, 'r', label="Signal", alpha=1, linewidth=3)
plt.plot(d_data, 'k.', label="Data")
plt.plot(m_data, 'k', label="Reconstruction", linewidth=3)
plt.title("Reconstruction of incomplete data")
plt.legend()
```
%% Cell type:markdown id: tags:
# 2d Example
%% Cell type:code id: tags:
``` python
N_pixels = 256 # Number of pixels
sigma2 = 2. # Noise variance
def pow_spec(k):
P0, k0, gamma = [.2, 2, 4]
return P0 * (1. + (k/k0)**2)**(-gamma/2)
s_space = ift.RGSpace([N_pixels, N_pixels])
```
%% Cell type:code id: tags:
``` python
h_space = s_space.get_default_codomain()
HT = ift.HarmonicTransformOperator(h_space,s_space)
# Operators
Sh = ift.create_power_operator(h_space, power_spectrum=pow_spec)
N = ift.ScalingOperator(s_space, sigma2)
# Fields and data
sh = Sh.draw_sample()
sh = Sh.draw_sample_with_dtype(dtype=np.float64)
n = ift.Field.from_random(domain=s_space, random_type='normal',
std=np.sqrt(sigma2), mean=0)
# Lose some data
l = int(N_pixels * 0.33)
h = int(N_pixels * 0.33 * 2)
mask = np.full(s_space.shape, 1.)
mask[l:h,l:h] = 0.
mask = ift.Field.from_raw(s_space, mask)
R = ift.DiagonalOperator(mask)(HT)
n = n.val_rw()
n[l:h, l:h] = 0
n = ift.Field.from_raw(s_space, n)
curv = Curvature(R=R, N=N, Sh=Sh)
D = curv.inverse
d = R(sh) + n
j = R.adjoint_times(N.inverse_times(d))
# Run Wiener filter
m = D(j)
# Uncertainty
m_mean, m_var = ift.probe_with_posterior_samples(curv, HT, 20)
m_mean, m_var = ift.probe_with_posterior_samples(curv, HT, 20, np.float64)
# Get data
s_data = HT(sh).val
m_data = HT(m).val
m_var_data = m_var.val
d_data = d.val
uncertainty = np.sqrt(np.abs(m_var_data))
```
%% Cell type:code id: tags:
``` python
cm = ['magma', 'inferno', 'plasma', 'viridis'][1]
mi = np.min(s_data)
ma = np.max(s_data)
fig, axes = plt.subplots(1, 2, figsize=(15, 7))
data = [s_data, d_data]
caption = ["Signal", "Data"]
for ax in axes.flat:
im = ax.imshow(data.pop(0), interpolation='nearest', cmap=cm, vmin=mi,
vmax=ma)
ax.set_title(caption.pop(0))
fig.subplots_adjust(right=0.8)
cbar_ax = fig.add_axes([0.85, 0.15, 0.05, 0.7])
fig.colorbar(im, cax=cbar_ax)
```
%% Cell type:code id: tags:
``` python
mi = np.min(s_data)
ma = np.max(s_data)
fig, axes = plt.subplots(3, 2, figsize=(15, 22.5))
sample = HT(curv.draw_sample(from_inverse=True)+m).val
post_mean = (m_mean + HT(m)).val
data = [s_data, m_data, post_mean, sample, s_data - m_data, uncertainty]
caption = ["Signal", "Reconstruction", "Posterior mean", "Sample", "Residuals", "Uncertainty Map"]
for ax in axes.flat:
im = ax.imshow(data.pop(0), interpolation='nearest', cmap=cm, vmin=mi, vmax=ma)
ax.set_title(caption.pop(0))
fig.subplots_adjust(right=0.8)
cbar_ax = fig.add_axes([.85, 0.15, 0.05, 0.7])
fig.colorbar(im, cax=cbar_ax)
```
%% Cell type:markdown id: tags:
### Is the uncertainty map reliable?
%% Cell type:code id: tags:
``` python
precise = (np.abs(s_data-m_data) < uncertainty)
print("Error within uncertainty map bounds: " + str(np.sum(precise) * 100 / N_pixels**2) + "%")
plt.figure(figsize=(15,10))
plt.imshow(precise.astype(float), cmap="brg")
plt.colorbar()
```
%% Cell type:markdown id: tags:
# Start Coding
## NIFTy Repository + Installation guide
https://gitlab.mpcdf.mpg.de/ift/NIFTy
NIFTy v5 **more or less stable!**
NIFTy v6 **more or less stable!**
%% Cell type:code id: tags:
``` python
```
......
......@@ -40,7 +40,7 @@ def make_checkerboard_mask(position_space):
def make_random_mask():
# Random mask for spherical mode
mask = ift.from_random('pm1', position_space)
mask = ift.from_random(position_space, 'pm1')
mask = (mask + 1)/2
return mask.val
......@@ -114,8 +114,8 @@ if __name__ == '__main__':
N = ift.ScalingOperator(data_space, noise)
# Create mock data
MOCK_SIGNAL = S.draw_sample()
MOCK_NOISE = N.draw_sample()
MOCK_SIGNAL = S.draw_sample_with_dtype(dtype=np.float64)
MOCK_NOISE = N.draw_sample_with_dtype(dtype=np.float64)
data = R(MOCK_SIGNAL) + MOCK_NOISE
# Build inverse propagator D and information source j
......
......@@ -90,7 +90,7 @@ if __name__ == '__main__':
# Generate mock data and define likelihood operator
d_space = R.target[0]
lamb = R(sky)
mock_position = ift.from_random('normal', domain)
mock_position = ift.from_random(domain, 'normal')
data = lamb(mock_position)
data = ift.random.current_rng().poisson(data.val.astype(np.float64))
data = ift.Field.from_raw(d_space, data)
......@@ -103,7 +103,7 @@ if __name__ == '__main__':
# Compute MAP solution by minimizing the information Hamiltonian
H = ift.StandardHamiltonian(likelihood)
initial_position = ift.from_random('normal', domain)
initial_position = ift.from_random(domain, 'normal')
H = ift.EnergyAdapter(initial_position, H, want_metric=True)
H, convergence = minimizer(H)
......
......@@ -55,9 +55,32 @@ if __name__ == '__main__':
position_space = ift.RGSpace([128, 128])
cfmaker = ift.CorrelatedFieldMaker.make(1e-3, 1e-6, '')
cfmaker.add_fluctuations(position_space,
1., 1e-2, 1, .5, .1, .5, -3, 0.5, '')
cfmaker = ift.CorrelatedFieldMaker.make(
offset_mean = 0.0, # 0.
offset_std_mean = 1e-3, # 1e-3
offset_std_std = 1e-6, # 1e-6
prefix = '')
fluctuations_dict = {
# Amplitude of the fluctuations
'fluctuations_mean': 2.0, # 1.0
'fluctuations_stddev': 1.0, # 1e-2
# Smooth variation speed
'flexibility_mean': 2.5, # 1.0
'flexibility_stddev': 1.0, # 0.5
# How strong the ragged component of the spectrum is
# (Ratio of Wiener process and integrated Wiener process ?)
'asperity_mean': 0.5, # 0.1
'asperity_stddev': 0.5, # 0.5
# Slope of linear spectrum component
'loglogavgslope_mean': -2.0, # -3.0
'loglogavgslope_stddev': 0.5 # 0.5
}
cfmaker.add_fluctuations(position_space, **fluctuations_dict)
correlated_field = cfmaker.finalize()
A = cfmaker.amplitude
......@@ -75,8 +98,8 @@ if __name__ == '__main__':
N = ift.ScalingOperator(data_space, noise)
# Generate mock signal and data
mock_position = ift.from_random('normal', signal_response.domain)
data = signal_response(mock_position) + N.draw_sample()
mock_position = ift.from_random(signal_response.domain, 'normal')
data = signal_response(mock_position) + N.draw_sample_with_dtype(dtype=np.float64)
# Minimization parameters
ic_sampling = ift.AbsDeltaEnergyController(
......
......@@ -73,7 +73,7 @@ if __name__ == '__main__':
sp2 = ift.RGSpace(npix2)
# Set up signal model
cfmaker = ift.CorrelatedFieldMaker.make(1e-2, 1e-6, '')
cfmaker = ift.CorrelatedFieldMaker.make(0., 1e-2, 1e-6, '')
cfmaker.add_fluctuations(sp1, 0.1, 1e-2, 1, .1, .01, .5, -2, 1., 'amp1')
cfmaker.add_fluctuations(sp2, 0.1, 1e-2, 1, .1, .01, .5,
-1.5, .5, 'amp2')
......@@ -97,8 +97,8 @@ if __name__ == '__main__':
N = ift.ScalingOperator(data_space, noise)
# Generate mock signal and data
mock_position = ift.from_random('normal', signal_response.domain)
data = signal_response(mock_position) + N.draw_sample()
mock_position = ift.from_random(signal_response.domain, 'normal')
data = signal_response(mock_position) + N.draw_sample_with_dtype(dtype=np.float64)
plot = ift.Plot()
plot.add(signal(mock_position), title='Ground Truth')
......@@ -114,7 +114,9 @@ if __name__ == '__main__':
ic_newton = ift.AbsDeltaEnergyController(name='Newton',
deltaE=0.01,
iteration_limit=35)
minimizer = ift.NewtonCG(ic_newton)
ic_sampling.enable_logging()
ic_newton.enable_logging()
minimizer = ift.NewtonCG(ic_newton, activate_logging=True)
## number of samples used to estimate the KL
N_samples = 20
......@@ -143,10 +145,15 @@ if __name__ == '__main__':
plot.add([A2.force(KL.position),
A2.force(mock_position)],
title="power2")
plot.output(nx=2,
plot.add((ic_newton.history, ic_sampling.history,
minimizer.inversion_history),
label=['KL', 'Sampling', 'Newton inversion'],
title='Cumulative energies', s=[None, None, 1],
alpha=[None, 0.2, None])
plot.output(nx=3,
ny=2,
ysize=10,
xsize=10,
xsize=15,
name=filename.format("loop_{:02d}".format(i)))
# Done, draw posterior samples
......
......@@ -25,7 +25,8 @@ from .operators.adder import Adder
from .operators.diagonal_operator import DiagonalOperator
from .operators.distributors import DOFDistributor, PowerDistributor
from .operators.domain_tuple_field_inserter import DomainTupleFieldInserter
from .operators.contraction_operator import ContractionOperator
from .operators.einsum import LinearEinsum, MultiLinearEinsum
from .operators.contraction_operator import ContractionOperator, IntegrationOperator
from .operators.linear_interpolation import LinearInterpolator
from .operators.endomorphic_operator import EndomorphicOperator
from .operators.harmonic_operators import (
......@@ -35,15 +36,16 @@ from .operators.field_zero_padder import FieldZeroPadder
from .operators.inversion_enabler import InversionEnabler
from .operators.mask_operator import MaskOperator
from .operators.regridding_operator import RegriddingOperator
from .operators.sampling_enabler import SamplingEnabler
from .operators.sampling_enabler import SamplingEnabler, SamplingDtypeSetter
from .operators.sandwich_operator import SandwichOperator
from .operators.scaling_operator import ScalingOperator
from .operators.selection_operators import SliceOperator, SplitOperator
from .operators.block_diagonal_operator import BlockDiagonalOperator
from .operators.outer_product_operator import OuterProduct
from .operators.simple_linear_operators import (
VdotOperator, ConjugationOperator, Realizer,
FieldAdapter, ducktape, GeometryRemover, NullOperator,
MatrixProductOperator, PartialExtractor)
VdotOperator, ConjugationOperator, Realizer, FieldAdapter, ducktape,
GeometryRemover, NullOperator, PartialExtractor)
from .operators.matrix_product_operator import MatrixProductOperator
from .operators.value_inserter import ValueInserter
from .operators.energy_operators import (
EnergyOperator, GaussianEnergy, PoissonianEnergy, InverseGammaLikelihood,
......
......@@ -142,8 +142,7 @@ class RGSpace(StructuredDomain):
@staticmethod
def _kernel(x, sigma):
from ..sugar import exp
return exp(x*x * (-2.*np.pi*np.pi*sigma*sigma))
return (x*x * (-2.*np.pi*np.pi*sigma*sigma)).ptw("exp")
def get_fft_smoothing_kernel_function(self, sigma):
if (not self.harmonic):
......
......@@ -42,8 +42,8 @@ def _adjoint_implementation(op, domain_dtype, target_dtype, atol, rtol,
needed_cap = op.TIMES | op.ADJOINT_TIMES
if (op.capability & needed_cap) != needed_cap:
return
f1 = from_random("normal", op.domain, dtype=domain_dtype)
f2 = from_random("normal", op.target, dtype=target_dtype)
f1 = from_random(op.domain, "normal", dtype=domain_dtype)
f2 = from_random(op.target, "normal", dtype=target_dtype)
res1 = f1.s_vdot(op.adjoint_times(f2))
res2 = op.times(f1).s_vdot(f2)
if only_r_linear:
......@@ -55,11 +55,11 @@ def _inverse_implementation(op, domain_dtype, target_dtype, atol, rtol):
needed_cap = op.TIMES | op.INVERSE_TIMES
if (op.capability & needed_cap) != needed_cap:
return
foo = from_random("normal", op.target, dtype=target_dtype)
foo = from_random(op.target, "normal", dtype=target_dtype)
res = op(op.inverse_times(foo))
assert_allclose(res, foo, atol=atol, rtol=rtol)
foo = from_random("normal", op.domain, dtype=domain_dtype)
foo = from_random(op.domain, "normal", dtype=domain_dtype)
res = op.inverse_times(op(foo))
assert_allclose(res, foo, atol=atol, rtol=rtol)
......@@ -75,8 +75,8 @@ def _check_linearity(op, domain_dtype, atol, rtol):
needed_cap = op.TIMES
if (op.capability & needed_cap) != needed_cap:
return
fld1 = from_random("normal", op.domain, dtype=domain_dtype)
fld2 = from_random("normal", op.domain, dtype=domain_dtype)
fld1 = from_random(op.domain, "normal", dtype=domain_dtype)
fld2 = from_random(op.domain, "normal", dtype=domain_dtype)
alpha = np.random.random() # FIXME: this can break badly with MPI!
val1 = op(alpha*fld1+fld2)
val2 = alpha*op(fld1)+op(fld2)
......@@ -88,7 +88,7 @@ def _actual_domain_check_linear(op, domain_dtype=None, inp=None):
if (op.capability & needed_cap) != needed_cap:
return
if domain_dtype is not None:
inp = from_random("normal", op.domain, dtype=domain_dtype)
inp = from_random(op.domain, "normal", dtype=domain_dtype)
elif inp is None:
raise ValueError('Need to specify either dtype or inp')
assert_(inp.domain is op.domain)
......@@ -219,7 +219,7 @@ def consistency_check(op, domain_dtype=np.float64, target_dtype=np.float64,
def _get_acceptable_location(op, loc, lin):
if not np.isfinite(lin.val.s_sum()):
raise ValueError('Initial value must be finite')
dir = from_random("normal", loc.domain)
dir = from_random(loc.domain, "normal")
dirder = lin.jac(dir)
if dirder.norm() == 0:
dir = dir * (lin.val.norm()*1e-5)
......@@ -296,3 +296,11 @@ def check_jacobian_consistency(op, loc, tol=1e-8, ntries=100, perf_check=True):
print(hist)
raise ValueError("gradient and value seem inconsistent")
loc = locnext
# FIXME The following code shows that we need proper tests for complex
# derivatives
ddtype = loc.values()[0].dtype if isinstance(loc, MultiField) else loc.dtype
tdtype = dirder.values()[0].dtype if isinstance(dirder, MultiField) else dirder.dtype
only_r_linear = ddtype != tdtype
consistency_check(linmid.jac, domain_dtype=ddtype, target_dtype=tdtype,
only_r_linear=only_r_linear)
......@@ -20,9 +20,10 @@ import numpy as np
from . import utilities
from .domain_tuple import DomainTuple
from .operators.operator import Operator
class Field(object):
class Field(Operator):
"""The discrete representation of a continuous field over multiple spaces.
Stores data arrays and carries all the needed meta-information (i.e. the
......@@ -49,7 +50,7 @@ class Field(object):
raise TypeError("domain must be of type DomainTuple")
if not isinstance(val, np.ndarray):
if np.isscalar(val):
val = np.full(domain.shape, val)
val = np.broadcast_to(val, domain.shape)