Commit 9afb3f68 authored by Reimar Leike

Merge branch 'NIFTy_7' into metric_tests

Fixed Fisher tests. The following bugs were present:

 - The metric of VariableCovarianceGaussianEnergy was wrong (already fixed in NIFTy_7, resolved by merging)
 - The debugging factor was not used, leading to a false 'did not raise' assertion
 - The fast Fisher test for the VariableCovarianceGaussianEnergy was completely broken
 - Slightly decreasing the tolerance for the Fisher test leads to factors of 2
   being detected
parents a411d39f b5e93218
......@@ -13,7 +13,7 @@ stages:
build_docker_from_scratch:
only:
- schedules
image: docker:19.03.8
image: docker
stage: build_docker
before_script:
- ls
......@@ -25,7 +25,7 @@ build_docker_from_scratch:
build_docker_from_cache:
except:
- schedules
image: docker:19.03.8
image: docker
stage: build_docker
before_script:
- ls
......@@ -53,6 +53,7 @@ test_mpi:
pages:
stage: release
script:
- rm -rf docs/build docs/source/mod
- sh docs/generate.sh
- mv docs/build/ public/
artifacts:
......
Changes since NIFTy 6
=====================
CorrelatedFieldMaker interface change
-------------------------------------
The interface of `ift.CorrelatedFieldMaker.make` and
`ift.CorrelatedFieldMaker.add_fluctuations` has changed; the mean and the
standard deviation of their various parameters are now expected as a single
tuple instead of separate arguments.
Furthermore, it is now possible to disable the asperity alone, or the
flexibility together with the asperity, in the correlated field model. Note
that disabling only the flexibility is not possible.
Additionally, the parameters `flexibility`, `asperity` and, most importantly,
`loglogavgslope` now refer to the power spectrum instead of the amplitude.
For existing code this means that both values in the tuples `loglogavgslope`
and `flexibility` need to be doubled. The transformation of the `asperity`
parameter is nontrivial.
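As a sketch of the migration, here is a hypothetical helper (not part of NIFTy) that repacks the old separate `*_mean`/`*_stddev` keyword arguments into the new `(mean, stddev)` tuples, doubling both entries of `loglogavgslope` and `flexibility`; the `asperity` entry is only repacked, since its true transformation is nontrivial:

```python
# Hypothetical migration helper, illustrating the NIFTy 6 -> NIFTy 7 change:
# separate *_mean/*_stddev kwargs become (mean, stddev) tuples, and the
# 'loglogavgslope' and 'flexibility' values are doubled because they now
# refer to the power spectrum instead of the amplitude.
def migrate_fluctuation_kwargs(old):
    new = {}
    for name in ('fluctuations', 'loglogavgslope', 'flexibility', 'asperity'):
        mean = old[name + '_mean']
        std = old[name + '_stddev']
        if name in ('loglogavgslope', 'flexibility'):
            mean, std = 2*mean, 2*std  # amplitude -> power spectrum
        # NOTE: 'asperity' is only repacked here; its correct
        # transformation is nontrivial and not attempted.
        new[name] = (mean, std)
    return new

old_pars = {'fluctuations_mean': 1., 'fluctuations_stddev': 0.1,
            'loglogavgslope_mean': -2., 'loglogavgslope_stddev': 0.5,
            'flexibility_mean': 1., 'flexibility_stddev': 0.5,
            'asperity_mean': 0.1, 'asperity_stddev': 0.5}
print(migrate_fluctuation_kwargs(old_pars))
```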
SimpleCorrelatedField
---------------------
A simplified version of the correlated field model was introduced which does not
allow for multiple power spectra, the degrees-of-freedom parameter `dofdex`, or
`total_N` larger than zero. Apart from these limitations it is equivalent to
`ift.CorrelatedFieldMaker`. Hence, the implementation idea behind the model is
easier to grasp from reading `ift.SimpleCorrelatedField` than from going through
`ift.CorrelatedFieldMaker`.
Change in external dependencies
-------------------------------
......@@ -20,8 +49,9 @@ The implementation tests for nonlinear operators are now available in
MetricGaussianKL interface
--------------------------
Users do not instanciate `MetricGaussianKL` by its constructor anymore. Rather
`MetricGaussianKL.make()` shall be used.
Users do not instantiate `MetricGaussianKL` by its constructor anymore. Rather
`MetricGaussianKL.make()` shall be used. Additionally, `mirror_samples` is not
set by default anymore.
Changes since NIFTy 5
......@@ -91,10 +121,10 @@ New approach for sampling complex numbers
When calling draw_sample_with_dtype with a complex dtype,
the variance is now used for the imaginary part and real part separately.
This is done in order to be consistent with the Hamiltonian.
Note that by this,
```
np.std(ift.from_random(domain, 'normal', dtype=np.complex128).val)
```
does not give 1, but sqrt(2) as a result.
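This convention can be checked in plain NumPy (independent of NIFTy): drawing real and imaginary parts separately, each with unit variance, yields a total standard deviation of sqrt(2).

```python
import numpy as np

rng = np.random.default_rng(42)
# Real and imaginary parts are drawn independently, each with unit
# variance, matching the new complex sampling convention.
z = rng.standard_normal(100_000) + 1j * rng.standard_normal(100_000)
print(np.std(z))  # close to sqrt(2), not 1
```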
......
......@@ -13,6 +13,7 @@ RUN apt-get update && apt-get install -y \
python3-mpi4py python3-matplotlib \
# more optional NIFTy dependencies
&& pip3 install ducc0 \
&& pip3 install finufft \
&& pip3 install jupyter \
&& rm -rf /var/lib/apt/lists/*
......
......@@ -137,6 +137,15 @@ The NIFTy package is licensed under the terms of the
Contributors
------------
### NIFTy7
- Gordian Edenhofer
- Lukas Platz
- Martin Reinecke
- [Philipp Arras](https://wwwmpa.mpa-garching.mpg.de/~parras/)
- Philipp Frank
- [Reimar Heinrich Leike](https://wwwmpa.mpa-garching.mpg.de/~reimar/)
### NIFTy6
- Andrija Kostic
......
......@@ -233,7 +233,7 @@
"\n",
"# Operators\n",
"Sh = ift.create_power_operator(h_space, power_spectrum=pow_spec)\n",
"R = HT #*ift.create_harmonic_smoothing_operator((h_space,), 0, 0.02)\n",
"R = HT # @ ift.create_harmonic_smoothing_operator((h_space,), 0, 0.02)\n",
"\n",
"# Fields and data\n",
"sh = Sh.draw_sample_with_dtype(dtype=np.float64)\n",
......
%% Cell type:markdown id: tags:
# A NIFTy demonstration
%% Cell type:markdown id: tags:
## IFT: Big Picture
IFT starting point:
$$d = Rs+n$$
Typically, $s$ is a continuous field and $d$ a discrete data vector. In particular, $R$ is generally not invertible.
IFT aims at **inverting** the above non-invertible problem in the **best possible way** using Bayesian statistics.
## NIFTy
NIFTy (Numerical Information Field Theory) is a Python framework in which IFT problems can be tackled easily.
Main Interfaces:
- **Spaces**: Cartesian, 2-Spheres (Healpix, Gauss-Legendre) and their respective harmonic spaces.
- **Fields**: Defined on spaces.
- **Operators**: Acting on fields.
%% Cell type:markdown id: tags:
## Wiener Filter: Formulae
### Assumptions
- $d=Rs+n$, $R$ linear operator.
- $\mathcal P (s) = \mathcal G (s,S)$, $\mathcal P (n) = \mathcal G (n,N)$ where $S, N$ are positive definite matrices.
### Posterior
The Posterior is given by:
$$\mathcal P (s|d) \propto P(s,d) = \mathcal G(d-Rs,N) \,\mathcal G(s,S) \propto \mathcal G (s-m,D) $$
where
$$\begin{align}
m &= Dj \\
D^{-1}&= (S^{-1} +R^\dagger N^{-1} R )\\
j &= R^\dagger N^{-1} d
\end{align}$$
Let us implement this in NIFTy!
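As a sanity check, the formulae can first be evaluated in plain NumPy on a small toy problem (the matrices below are chosen arbitrarily for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
npix = 8
S = np.diag(np.linspace(2., 0.5, npix))   # prior signal covariance
N = 0.2 * np.eye(npix)                    # noise covariance
R = np.eye(npix)                          # trivial response for the sketch

s = rng.multivariate_normal(np.zeros(npix), S)
n = rng.multivariate_normal(np.zeros(npix), N)
d = R @ s + n

# D^-1 = S^-1 + R^T N^-1 R,  j = R^T N^-1 d,  m = D j
D_inv = np.linalg.inv(S) + R.T @ np.linalg.inv(N) @ R
j = R.T @ np.linalg.inv(N) @ d
m = np.linalg.solve(D_inv, j)             # posterior mean
print(m.shape)
```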
%% Cell type:markdown id: tags:
## Wiener Filter: Example
- We assume statistical homogeneity and isotropy. Therefore the signal covariance $S$ is diagonal in harmonic space, and is described by a one-dimensional power spectrum, assumed here as $$P(k) = P_0\,\left(1+\left(\frac{k}{k_0}\right)^2\right)^{-\gamma /2},$$
with $P_0 = 0.2, k_0 = 5, \gamma = 4$.
- $N = 0.2 \cdot \mathbb{1}$.
- Number of data points $N_{pix} = 512$.
- Reconstruction in harmonic space.
- Response operator:
$$R = FFT_{\text{harmonic} \rightarrow \text{position}}$$
%% Cell type:code id: tags:
``` python
N_pixels = 512  # Number of pixels

def pow_spec(k):
    P0, k0, gamma = [.2, 5, 4]
    return P0 / ((1. + (k/k0)**2)**(gamma / 2))
```
%% Cell type:markdown id: tags:
## Wiener Filter: Implementation
%% Cell type:markdown id: tags:
### Import Modules
%% Cell type:code id: tags:
``` python
import numpy as np
import nifty7 as ift
import matplotlib.pyplot as plt
%matplotlib inline
```
%% Cell type:markdown id: tags:
### Implement Propagator
%% Cell type:code id: tags:
``` python
def Curvature(R, N, Sh):
    IC = ift.GradientNormController(iteration_limit=50000,
                                    tol_abs_gradnorm=0.1)
    # WienerFilterCurvature is (R.adjoint*N.inverse*R + Sh.inverse) plus some
    # handy helper methods.
    return ift.WienerFilterCurvature(R, N, Sh, iteration_controller=IC,
                                     iteration_controller_sampling=IC)
```
%% Cell type:markdown id: tags:
### Conjugate Gradient Preconditioning
- $D$ is defined via:
$$D^{-1} = \mathcal S_h^{-1} + R^\dagger N^{-1} R.$$
In the end, we want to apply $D$ to $j$, i.e. we need the inverse action of $D^{-1}$. This is done numerically (algorithm: *Conjugate Gradient*).
<!--
- One can define the *condition number* of a non-singular and normal matrix $A$:
$$\kappa (A) := \frac{|\lambda_{\text{max}}|}{|\lambda_{\text{min}}|},$$
where $\lambda_{\text{max}}$ and $\lambda_{\text{min}}$ are the largest and smallest eigenvalue of $A$, respectively.
- The larger $\kappa$ the slower Conjugate Gradient.
- By default, conjugate gradient solves: $D^{-1} m = j$ for $m$, where $D^{-1}$ can be badly conditioned. If one knows a non-singular matrix $T$ for which $TD^{-1}$ is better conditioned, one can solve the equivalent problem:
$$\tilde A m = \tilde j,$$
where $\tilde A = T D^{-1}$ and $\tilde j = Tj$.
- In our case $S^{-1}$ is responsible for the bad conditioning of $D$ depending on the chosen power spectrum. Thus, we choose
$$T = \mathcal F^\dagger S_h^{-1} \mathcal F.$$
-->
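The inverse action of $D^{-1}$ can be obtained with the conjugate gradient algorithm; a minimal hand-rolled NumPy sketch (not NIFTy's implementation) for a symmetric positive definite system:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, maxiter=1000):
    # Minimal CG for a symmetric positive definite matrix A
    # (illustration only, no preconditioning).
    x = np.zeros_like(b)
    r = b - A @ x          # residual
    p = r.copy()           # search direction
    rs = r @ r
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

A = np.array([[4., 1.], [1., 3.]])
b = np.array([1., 2.])
x = conjugate_gradient(A, b)
print(np.allclose(A @ x, b))  # True
```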
%% Cell type:markdown id: tags:
### Generate Mock data
- Generate fields $s$ and $n$ with the given covariances.
- Calculate $d$.
%% Cell type:code id: tags:
``` python
s_space = ift.RGSpace(N_pixels)
h_space = s_space.get_default_codomain()
HT = ift.HarmonicTransformOperator(h_space, target=s_space)

# Operators
Sh = ift.create_power_operator(h_space, power_spectrum=pow_spec)
R = HT  # @ ift.create_harmonic_smoothing_operator((h_space,), 0, 0.02)

# Fields and data
sh = Sh.draw_sample_with_dtype(dtype=np.float64)
noiseless_data = R(sh)
noise_amplitude = np.sqrt(0.2)
N = ift.ScalingOperator(s_space, noise_amplitude**2)
n = ift.Field.from_random(domain=s_space, random_type='normal',
                          std=noise_amplitude, mean=0)
d = noiseless_data + n
j = R.adjoint_times(N.inverse_times(d))
curv = Curvature(R=R, N=N, Sh=Sh)
D = curv.inverse
```
%% Cell type:markdown id: tags:
### Run Wiener Filter
%% Cell type:code id: tags:
``` python
m = D(j)
```
%% Cell type:markdown id: tags:
### Signal Reconstruction
%% Cell type:code id: tags:
``` python
# Get signal data and reconstruction data
s_data = HT(sh).val
m_data = HT(m).val
d_data = d.val
plt.figure(figsize=(15,10))
plt.plot(s_data, 'r', label="Signal", linewidth=3)
plt.plot(d_data, 'k.', label="Data")
plt.plot(m_data, 'k', label="Reconstruction", linewidth=3)
plt.title("Reconstruction")
plt.legend()
plt.show()
```
%% Cell type:code id: tags:
``` python
plt.figure(figsize=(15,10))
plt.plot(s_data - s_data, 'r', label="Signal", linewidth=3)
plt.plot(d_data - s_data, 'k.', label="Data")
plt.plot(m_data - s_data, 'k', label="Reconstruction", linewidth=3)
plt.axhspan(-noise_amplitude, noise_amplitude, facecolor='0.9', alpha=.5)
plt.title("Residuals")
plt.legend()
plt.show()
```
%% Cell type:markdown id: tags:
### Power Spectrum
%% Cell type:code id: tags:
``` python
s_power_data = ift.power_analyze(sh).val
m_power_data = ift.power_analyze(m).val
plt.figure(figsize=(15,10))
plt.loglog()
plt.xlim(1, int(N_pixels/2))
ymin = min(m_power_data)
plt.ylim(ymin, 1)
xs = np.arange(1,int(N_pixels/2),.1)
plt.plot(xs, pow_spec(xs), label="True Power Spectrum", color='k',alpha=0.5)
plt.plot(s_power_data, 'r', label="Signal")
plt.plot(m_power_data, 'k', label="Reconstruction")
plt.axhline(noise_amplitude**2 / N_pixels, color="k", linestyle='--', label="Noise level", alpha=.5)
plt.axhspan(noise_amplitude**2 / N_pixels, ymin, facecolor='0.9', alpha=.5)
plt.title("Power Spectrum")
plt.legend()
plt.show()
```
%% Cell type:markdown id: tags:
## Wiener Filter on Incomplete Data
%% Cell type:code id: tags:
``` python
# Operators
Sh = ift.create_power_operator(h_space, power_spectrum=pow_spec)
N = ift.ScalingOperator(s_space, noise_amplitude**2)
# R is defined below
# Fields
sh = Sh.draw_sample_with_dtype(dtype=np.float64)
s = HT(sh)
n = ift.Field.from_random(domain=s_space, random_type='normal',
                          std=noise_amplitude, mean=0)
```
%% Cell type:markdown id: tags:
### Partially Lose Data
%% Cell type:code id: tags:
``` python
l = int(N_pixels * 0.2)
h = int(N_pixels * 0.2 * 2)
mask = np.full(s_space.shape, 1.)
mask[l:h] = 0
mask = ift.Field.from_raw(s_space, mask)
R = ift.DiagonalOperator(mask)(HT)
n = n.val_rw()
n[l:h] = 0
n = ift.Field.from_raw(s_space, n)
d = R(sh) + n
```
%% Cell type:code id: tags:
``` python
curv = Curvature(R=R, N=N, Sh=Sh)
D = curv.inverse
j = R.adjoint_times(N.inverse_times(d))
m = D(j)
```
%% Cell type:markdown id: tags:
### Compute Uncertainty
%% Cell type:code id: tags:
``` python
m_mean, m_var = ift.probe_with_posterior_samples(curv, HT, 200, np.float64)
```
%% Cell type:markdown id: tags:
### Get data
%% Cell type:code id: tags:
``` python
# Get signal data and reconstruction data
s_data = s.val
m_data = HT(m).val
m_var_data = m_var.val
uncertainty = np.sqrt(m_var_data)
d_data = d.val_rw()
# Set lost data to NaN for proper plotting
d_data[d_data == 0] = np.nan
```
%% Cell type:code id: tags:
``` python
fig = plt.figure(figsize=(15,10))
plt.axvspan(l, h, facecolor='0.8',alpha=0.5)
plt.fill_between(range(N_pixels), m_data - uncertainty, m_data + uncertainty, facecolor='0.5', alpha=0.5)
plt.plot(s_data, 'r', label="Signal", alpha=1, linewidth=3)
plt.plot(d_data, 'k.', label="Data")
plt.plot(m_data, 'k', label="Reconstruction", linewidth=3)
plt.title("Reconstruction of incomplete data")
plt.legend()
```
%% Cell type:markdown id: tags:
# 2d Example
%% Cell type:code id: tags:
``` python
N_pixels = 256  # Number of pixels
sigma2 = 2.     # Noise variance

def pow_spec(k):
    P0, k0, gamma = [.2, 2, 4]
    return P0 * (1. + (k/k0)**2)**(-gamma/2)
s_space = ift.RGSpace([N_pixels, N_pixels])
```
%% Cell type:code id: tags:
``` python
h_space = s_space.get_default_codomain()
HT = ift.HarmonicTransformOperator(h_space,s_space)
# Operators
Sh = ift.create_power_operator(h_space, power_spectrum=pow_spec)
N = ift.ScalingOperator(s_space, sigma2)
# Fields and data
sh = Sh.draw_sample_with_dtype(dtype=np.float64)
n = ift.Field.from_random(domain=s_space, random_type='normal',
                          std=np.sqrt(sigma2), mean=0)
# Lose some data
l = int(N_pixels * 0.33)
h = int(N_pixels * 0.33 * 2)
mask = np.full(s_space.shape, 1.)
mask[l:h,l:h] = 0.
mask = ift.Field.from_raw(s_space, mask)
R = ift.DiagonalOperator(mask)(HT)
n = n.val_rw()
n[l:h, l:h] = 0
n = ift.Field.from_raw(s_space, n)
curv = Curvature(R=R, N=N, Sh=Sh)
D = curv.inverse
d = R(sh) + n
j = R.adjoint_times(N.inverse_times(d))
# Run Wiener filter
m = D(j)
# Uncertainty
m_mean, m_var = ift.probe_with_posterior_samples(curv, HT, 20, np.float64)
# Get data
s_data = HT(sh).val
m_data = HT(m).val
m_var_data = m_var.val
d_data = d.val
uncertainty = np.sqrt(np.abs(m_var_data))
```
%% Cell type:code id: tags:
``` python
cm = ['magma', 'inferno', 'plasma', 'viridis'][1]
mi = np.min(s_data)
ma = np.max(s_data)
fig, axes = plt.subplots(1, 2, figsize=(15, 7))
data = [s_data, d_data]
caption = ["Signal", "Data"]
for ax in axes.flat:
    im = ax.imshow(data.pop(0), interpolation='nearest', cmap=cm, vmin=mi,
                   vmax=ma)
    ax.set_title(caption.pop(0))
fig.subplots_adjust(right=0.8)
cbar_ax = fig.add_axes([0.85, 0.15, 0.05, 0.7])
fig.colorbar(im, cax=cbar_ax)
```
%% Cell type:code id: tags:
``` python
mi = np.min(s_data)
ma = np.max(s_data)
fig, axes = plt.subplots(3, 2, figsize=(15, 22.5))
sample = HT(curv.draw_sample(from_inverse=True)+m).val
post_mean = (m_mean + HT(m)).val
data = [s_data, m_data, post_mean, sample, s_data - m_data, uncertainty]
caption = ["Signal", "Reconstruction", "Posterior mean", "Sample", "Residuals", "Uncertainty Map"]
for ax in axes.flat:
    im = ax.imshow(data.pop(0), interpolation='nearest', cmap=cm, vmin=mi, vmax=ma)
    ax.set_title(caption.pop(0))
fig.subplots_adjust(right=0.8)
cbar_ax = fig.add_axes([.85, 0.15, 0.05, 0.7])
fig.colorbar(im, cax=cbar_ax)
```
%% Cell type:markdown id: tags:
### Is the uncertainty map reliable?
%% Cell type:code id: tags:
``` python
precise = (np.abs(s_data-m_data) < uncertainty)
print("Error within uncertainty map bounds: " + str(np.sum(precise) * 100 / N_pixels**2) + "%")
plt.figure(figsize=(15,10))
plt.imshow(precise.astype(float), cmap="brg")
plt.colorbar()
```
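As a point of reference (plain NumPy, independent of the demo): for Gaussian errors one expects roughly 68% of residuals to lie within one standard deviation, so the printed percentage above should be in that ballpark if the uncertainty map is reliable.

```python
import numpy as np

rng = np.random.default_rng(1)
err = rng.standard_normal(100_000)
frac = np.mean(np.abs(err) < 1.0)  # fraction within one sigma
print(frac)  # roughly 0.68 for Gaussian errors
```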
%% Cell type:markdown id: tags:
# Start Coding
## NIFTy Repository + Installation guide
https://gitlab.mpcdf.mpg.de/ift/NIFTy
%% Cell type:code id: tags:
``` python
```
......
......@@ -105,6 +105,9 @@ def main():
H = ift.EnergyAdapter(initial_position, H, want_metric=True)
H, convergence = minimizer(H)
ift.extra.minisanity(data, lambda x: ift.makeOp(1/lamb(x)), lamb,
H.position)
# Plotting
signal = sky(mock_position)
reconst = sky(H.position)
......
......@@ -52,40 +52,31 @@ def main():
else:
mode = 0
filename = "getting_started_3_mode_{}_".format(mode) + "{}.png"
position_space = ift.RGSpace([128, 128])
# For a detailed showcase of the effects the parameters
# of the CorrelatedField model have on the generated fields,
# see 'getting_started_4_CorrelatedFields.ipynb'.
cfmaker = ift.CorrelatedFieldMaker.make(
offset_mean= 0.0,
offset_std_mean= 1e-3,
offset_std_std= 1e-6,
prefix='')
args = {
'offset_mean': 0,
'offset_std': (1e-3, 1e-6),
fluctuations_dict = {
# Amplitude of field fluctuations
'fluctuations_mean': 2.0, # 1.0
'fluctuations_stddev': 1.0, # 1e-2
'fluctuations': (2., 1.), # 1.0, 1e-2
# Exponent of power law power spectrum component
'loglogavgslope_mean': -2.0, # -3.0
'loglogavgslope_stddev': 0.5, # 0.5
'loglogavgslope': (-4., 1), # -6.0, 1
# Amplitude of integrated Wiener process power spectrum component
'flexibility_mean': 2.5, # 1.0
'flexibility_stddev': 1.0, # 0.5
'flexibility': (5, 2.), # 2.0, 1.0
# How ragged the integrated Wiener process component is
'asperity_mean': 0.5, # 0.1
'asperity_stddev': 0.5 # 0.5
'asperity': (0.5, 0.5) # 0.1, 0.5
}
cfmaker.add_fluctuations(position_space, **fluctuations_dict)
correlated_field = cfmaker.finalize()
A = cfmaker.amplitude
correlated_field = ift.SimpleCorrelatedField(position_space, **args)
pspec = correlated_field.power_spectrum
# Apply a nonlinearity
signal = ift.sigmoid(correlated_field)
......@@ -98,6 +89,7 @@ def main():
# Specify noise
data_space = R.target
noise = .001
sig = ift.ScalingOperator(data_space, np.sqrt(noise))
N = ift.ScalingOperator(data_space, noise)
# Generate mock signal and data
......@@ -106,7 +98,7 @@ def main():
# Minimization parameters
ic_sampling = ift.AbsDeltaEnergyController(
name='Sampling', deltaE=0.05, iteration_limit=100)
deltaE=0.05, iteration_limit=100)
ic_newton = ift.AbsDeltaEnergyController(
name='Newton', deltaE=0.5, iteration_limit=35)
minimizer = ift.NewtonCG(ic_newton)
......@@ -122,7 +114,7 @@ def main():
plot = ift.Plot()
plot.add(signal(mock_position), title='Ground Truth')
plot.add(R.adjoint_times(data), title='Data')
plot.add([A.force(mock_position)], title='Power Spectrum')
plot.add([pspec.force(mock_position)], title='Power Spectrum')
plot.output(ny=1, nx=3, xsize=24, ysize=6, name=filename.format("setup"))
# number of samples used to estimate the KL
......@@ -131,20 +123,20 @@ def main():
# Draw new samples to approximate the KL five times
for i in range(5):
# Draw new samples and minimize KL
KL = ift.MetricGaussianKL.make(mean, H, N_samples)
KL = ift.MetricGaussianKL.make(mean, H, N_samples, True)
KL, convergence = minimizer(KL)
mean = KL.position
ift.extra.minisanity(data, lambda x: N.inverse, signal_response,
KL.position, KL.samples)
# Plot current reconstruction
plot = ift.Plot()
plot.add(signal(KL.position), title="Latent mean")
plot.add([A.force(KL.position + ss) for ss in KL.samples],
plot.add([pspec.force(KL.position + ss) for ss in KL.samples],
title="Samples power spectrum")
plot.output(ny=1, ysize=6, xsize=16,
name=filename.format("loop_{:02d}".format(i)))
# Draw posterior samples
KL = ift.MetricGaussianKL.make(mean, H, N_samples)
sc = ift.StatCalculator()
for sample in KL.samples:
sc.add(signal(sample + KL.position))
......@@ -155,12 +147,16 @@ def main():
plot.add(sc.mean, title="Posterior Mean")
plot.add(ift.sqrt(sc.var), title="Posterior Standard Deviation")
powers = [A.force(s + KL.position) for s in KL.samples]
powers = [pspec.force(s + KL.position) for s in KL.samples]
sc = ift.StatCalculator()
for pp in powers:
sc.add(pp)
plot.add(
powers + [A.force(mock_position),
A.force(KL.position)],
powers + [pspec.force(mock_position),
pspec.force(KL.position), sc.mean],
title="Sampled Posterior Power Spectrum",
linewidth=[1.]*len(powers) + [3., 3.])
linewidth=[1.]*len(powers) + [3., 3., 3.],
label=[None]*len(powers) + ['Ground truth', 'Posterior latent mean', 'Posterior mean'])
plot.output(ny=1, nx=3, xsize=24, ysize=6, name=filename_res)
print("Saved results as '{}'.".format(filename_res))
......
......@@ -135,7 +135,7 @@
"## Before the Action: The Moment-Matched Log-Normal Distribution\n",
"Many properties of the correlated field are modelled as being lognormally distributed.\n",
"\n",
"The distribution models are parametrized via their means `_mean` and standard-deviations `_stddev`.\n",
"The distribution models are parametrized via their means and standard-deviations (first and second position in tuple).\n",
"\n",
"To get a feeling of how the ratio of the `mean` and `stddev` parameters influences the distribution shape,\n",
"here are a few example histograms: (observe the x-axis!)"
......@@ -189,21 +189,16 @@
"# Neutral model parameters yielding a quasi-constant field\n",
"cf_make_pars = {\n",
" 'offset_mean': 0.,\n",
" 'offset_std_mean': 1e-3,\n",
" 'offset_std_std': 1e-16,\n",
" 'offset_std': (1e-3, 1e-16),\n",
" 'prefix': ''\n",
"}\n",
"\n",
"cf_x_fluct_pars = {\n",
" 'target_subdomain': x_space,\n",
" 'fluctuations_mean': 1e-3,\n",
" 'fluctuations_stddev': 1e-16,\n",
" 'flexibility_mean': 1e-3,\n",
" 'flexibility_stddev': 1e-16,\n",
" 'asperity_mean': 1e-3,\n",
" 'asperity_stddev': 1e-16,\n",
" 'loglogavgslope_mean': 0.,\n",
" 'loglogavgslope_stddev': 1e-16\n",
" 'fluctuations': (1e-3, 1e-16),\n",
" 'flexibility': (1e-3, 1e-16),\n",
" 'asperity': (1e-3, 1e-16),\n",
" 'loglogavgslope': (0., 1e-16)\n",
"}\n",
"\n",
"init_model(cf_make_pars, cf_x_fluct_pars)"
......@@ -225,18 +220,18 @@
"source": [
"# Parameter Showcases\n",
"\n",
"## The `fluctuations_` parameters of `add_fluctuations()`\n",
"## The `fluctuations` parameters of `add_fluctuations()`\n",
"\n",
"determine the **amplitude of variations along the field dimension**\n",
"for which `add_fluctuations` is called.\n",
"\n",
"`fluctuations_mean` set the _average_ amplitude of the fields fluctuations along the given dimension,\\\n",
"`fluctuations_stddev` sets the width and shape of the amplitude distribution.\n",
"`fluctuations[0]` sets the _average_ amplitude of the field's fluctuations along the given dimension,\\\n",
"`fluctuations[1]` sets the width and shape of the amplitude distribution.\n",
"\n",
"The amplitude is modelled as being log-normally distributed,\n",
"see `The Moment-Matched Log-Normal Distribution` above for details.\n",
"\n",
"#### `fluctuations_mean`:"
"#### `fluctuations` mean:"
]
},
{
......@@ -245,14 +240,14 @@
"metadata": {},
"outputs": [],
"source": [
"vary_parameter('fluctuations_mean', [0.05, 0.5, 2.], samples_vary_in='xi')"
"vary_parameter('fluctuations', [(0.05, 1e-16), (0.5, 1e-16), (2., 1e-16)], samples_vary_in='xi')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### `fluctuations_stddev`:"
"#### `fluctuations` std:"
]
},
{
......@@ -261,21 +256,21 @@
"metadata": {},
"outputs": [],
"source": [
"cf_x_fluct_pars['fluctuations_mean'] = 1.0\n",
"vary_parameter('fluctuations_stddev', [0.01, 0.1, 1.0], samples_vary_in='fluctuations')"
"vary_parameter('fluctuations', [(1., 0.01), (1., 0.1), (1., 1.)], samples_vary_in='fluctuations')\n",
"cf_x_fluct_pars['fluctuations'] = (1., 1e-16)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## The `loglogavgslope_` parameters of `add_fluctuations()`\n",
"## The `loglogavgslope` parameters of `add_fluctuations()`\n",
"\n",
"determine **the slope of the loglog-linear (power law) component of the power spectrum**.\n",
"\n",
"The slope is modelled to be normally distributed.\n",
"\n",
"#### `loglogavgslope_mean`:"
"#### `loglogavgslope` mean:"
]
},
{
......@@ -284,14 +279,14 @@
"metadata": {},
"outputs": [],
"source": [
"vary_parameter('loglogavgslope_mean', [-3., -1., 1.], samples_vary_in='xi')"
"vary_parameter('loglogavgslope', [(-6., 1e-16), (-2., 1e-16), (2., 1e-16)], samples_vary_in='xi')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### `loglogavgslope_stddev`:"
"#### `loglogavgslope` std:"
]
},
{
......@@ -300,25 +295,25 @@
"metadata": {},
"outputs": [],
"source": [
"cf_x_fluct_pars['loglogavgslope_mean'] = -1.0\n",
"vary_parameter('loglogavgslope_stddev', [0.01, 0.1, 1.0], samples_vary_in='loglogavgslope')"
"vary_parameter('loglogavgslope', [(-2., 0.02), (-2., 0.2), (-2., 2.0)], samples_vary_in='loglogavgslope')\n",
"cf_x_fluct_pars['loglogavgslope'] = (-2., 1e-16)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## The `flexibility_` parameters of `add_fluctuations()`\n",
"## The `flexibility` parameters of `add_fluctuations()`\n",
"\n",
"determine **the amplitude of the integrated Wiener process component of the power spectrum**\n",
"(how strong the power spectrum varies besides the power-law).\n",
"\n",
"`flexibility_mean` sets the _average_ amplitude of the i.g.p. component,\n",
"`flexibility_stddev` sets how much the amplitude can vary.\\\n",
"`flexibility[0]` sets the _average_ amplitude of the i.g.p. component,\\\n",
"`flexibility[1]` sets how much the amplitude can vary.\\\n",
"These two parameters feed into a moment-matched log-normal distribution model,\n",
"see above for a demo of its behavior.\n",
"\n",
"#### `flexibility_mean`:"
"#### `flexibility` mean:"
]
},
{
......@@ -327,14 +322,14 @@
"metadata": {},
"outputs": [],
"source": [
"vary_parameter('flexibility_mean', [0.1, 1.0, 3.0], samples_vary_in='spectrum')"
"vary_parameter('flexibility', [(0.4, 1e-16), (4.0, 1e-16), (12.0, 1e-16)], samples_vary_in='spectrum')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### `flexibility_stddev`:"
"#### `flexibility` std:"
]
},
{
......@@ -343,23 +338,23 @@
"metadata": {},
"outputs": [],
"source": [
"cf_x_fluct_pars['flexibility_mean'] = 2.0\n",
"vary_parameter('flexibility_stddev', [0.01, 0.1, 1.0], samples_vary_in='flexibility')"
"vary_parameter('flexibility', [(4., 0.02), (4., 0.2), (4., 2.)], samples_vary_in='flexibility')\n",
"cf_x_fluct_pars['flexibility'] = (4., 1e-16)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## The `asperity_` parameters of `add_fluctuations()`\n",
"## The `asperity` parameters of `add_fluctuations()`\n",
"\n",
"`asperity_` determines **how rough the integrated Wiener process component of the power spectrum is**.\n",
"`asperity` determines **how rough the integrated Wiener process component of the power spectrum is**.\n",
"\n",
"`asperity_mean` sets the average roughness, `asperity_stddev` sets how much the roughness can vary.\\\n",
"`asperity[0]` sets the average roughness, `asperity[1]` sets how much the roughness can vary.\\\n",
"These two parameters feed into a moment-matched log-normal distribution model,\n",
"see above for a demo of its behavior.\n",
"\n",
"#### `asperity_mean`:"
"#### `asperity` mean:"
]
},
{
......@@ -368,14 +363,14 @@
"metadata": {},
"outputs": [],
"source": [
"vary_parameter('asperity_mean', [0.001, 1.0, 5.0], samples_vary_in='spectrum')"
"vary_parameter('asperity', [(0.001, 1e-16), (1.0, 1e-16), (5., 1e-16)], samples_vary_in='spectrum')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### `asperity_stddev`:"
"#### `asperity` std:"
]
},
{
......@@ -384,8 +379,8 @@
"metadata": {},
"outputs": [],
"source": [
"cf_x_fluct_pars['asperity_mean'] = 1.0\n",
"vary_parameter('asperity_stddev', [0.01, 0.1, 1.0], samples_vary_in='asperity')"
"vary_parameter('asperity', [(1., 0.01), (1., 0.1), (1., 1.)], samples_vary_in='asperity')\n",
"cf_x_fluct_pars['asperity'] = (1., 1e-16)"
]
},
{
......@@ -406,10 +401,10 @@
"outputs": [],
"source": [
"# Reset model to neutral\n",
"cf_x_fluct_pars['fluctuations_mean'] = 1e-3\n",
"cf_x_fluct_pars['flexibility_mean'] = 1e-3\n",
"cf_x_fluct_pars['asperity_mean'] = 1e-3\n",
"cf_x_fluct_pars['loglogavgslope_mean'] = 1e-3"
"cf_x_fluct_pars['fluctuations'] = (1e-3, 1e-16)\n",
"cf_x_fluct_pars['flexibility'] = (1e-3, 1e-16)\n",
"cf_x_fluct_pars['asperity'] = (1e-3, 1e-16)\n",
"cf_x_fluct_pars['loglogavgslope'] = (1e-3, 1e-16)"
]
},
{
......@@ -425,15 +420,15 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## The `offset_std_` parameters of `CorrelatedFieldMaker.make()`\n",
"## The `offset_std` parameters of `CorrelatedFieldMaker.make()`\n",
"\n",
"Variation of the global offset of the field are modelled as being log-normally distributed.\n",
"See `The Moment-Matched Log-Normal Distribution` above for details.\n",
"\n",
"The `offset_std_mean` parameter sets how much NIFTy will vary the offset *on average*.\\\n",
"The `offset_std_std` parameters defines the with and shape of the offset variation distribution.\n",
"The `offset_std