Commit b530bdb1 authored by Martin Reinecke

small tweaks

parent e5a4f4e3
%% Cell type:markdown id: tags:
# Code example: Wiener filter
%% Cell type:markdown id: tags:
## Introduction
IFT starting point:
$$d = Rs+n$$
Typically, $s$ is a continuous field and $d$ a discrete data vector; in particular, $R$ is not invertible.
IFT aims at **inverting** the above non-invertible problem in the **best possible way** using Bayesian statistics.
NIFTy (Numerical Information Field Theory) is a Python framework in which IFT problems can be tackled easily.
Main Interfaces:
- **Spaces**: Cartesian, 2-Spheres (Healpix, Gauss-Legendre) and their respective harmonic spaces.
- **Fields**: Defined on spaces.
- **Operators**: Acting on fields.
%% Cell type:markdown id: tags:
## Wiener filter on one-dimensional fields
### Assumptions
- $d=Rs+n$, $R$ linear operator.
- $\mathcal P (s) = \mathcal G (s,S)$, $\mathcal P (n) = \mathcal G (n,N)$ where $S, N$ are positive definite matrices.
### Posterior
The posterior is given by:
$$\mathcal P (s|d) \propto P(s,d) = \mathcal G(d-Rs,N) \,\mathcal G(s,S) \propto \mathcal G (s-m,D)$$
where
$$m = Dj$$
with
$$D = (S^{-1} +R^\dagger N^{-1} R)^{-1} , \quad j = R^\dagger N^{-1} d.$$
Let us implement this in NIFTy!
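As a sanity check of these formulas, independent of NIFTy, the posterior mean can be computed with plain NumPy on a tiny toy problem. All sizes and values below are illustrative and not taken from the notebook:

``` python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: 3 data points observing a 5-pixel signal
npix, ndata = 5, 3
R = rng.standard_normal((ndata, npix))    # wide, hence non-invertible, response
S = np.diag(rng.uniform(0.5, 2.0, npix))  # signal covariance
N = 0.2 * np.eye(ndata)                   # noise covariance

s = rng.multivariate_normal(np.zeros(npix), S)
n = rng.multivariate_normal(np.zeros(ndata), N)
d = R @ s + n

# D = (S^-1 + R^T N^-1 R)^-1,  j = R^T N^-1 d,  m = D j
D = np.linalg.inv(np.linalg.inv(S) + R.T @ np.linalg.inv(N) @ R)
j = R.T @ np.linalg.inv(N) @ d
m = D @ j
```

By the Woodbury identity, the same mean can equivalently be written in data space as $m = S R^\dagger (R S R^\dagger + N)^{-1} d$, which provides a convenient consistency check.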
%% Cell type:markdown id: tags:
### In NIFTy
- We assume statistical homogeneity and isotropy. Therefore the signal covariance $S$ is diagonal in harmonic space, and is described by a one-dimensional power spectrum, assumed here as $$P(k) = P_0\,\left(1+\left(\frac{k}{k_0}\right)^2\right)^{-\gamma /2},$$
with $P_0 = 0.2, k_0 = 5, \gamma = 4$.
- $N = 0.2 \cdot \mathbb{1}$.
- Number of data points $N_{pix} = 512$.
- Reconstruction in harmonic space.
- Response operator:
$$R = FFT_{\text{harmonic} \rightarrow \text{position}}$$
%% Cell type:code id: tags:
``` python
N_pixels = 512  # Number of pixels

def pow_spec(k):
    P0, k0, gamma = [.2, 5, 4]
    return P0 / ((1. + (k/k0)**2)**(gamma / 2))
```
%% Cell type:markdown id: tags:
### Implementation
%% Cell type:markdown id: tags:
#### Import Modules
%% Cell type:code id: tags:
``` python
import numpy as np
import nifty8 as ift
import matplotlib.pyplot as plt

plt.rcParams['figure.dpi'] = 100
plt.style.use("seaborn-notebook")
```
%% Cell type:markdown id: tags:
#### Implement Propagator
%% Cell type:code id: tags:
``` python
def Curvature(R, N, Sh):
    IC = ift.GradientNormController(iteration_limit=50000,
                                    tol_abs_gradnorm=0.1)
    # WienerFilterCurvature is (R.adjoint*N.inverse*R + Sh.inverse) plus
    # some handy helper methods.
    return ift.WienerFilterCurvature(R, N, Sh, iteration_controller=IC,
                                     iteration_controller_sampling=IC)
```
%% Cell type:markdown id: tags:
#### Conjugate Gradient Preconditioning
- $D$ is defined via:
$$D^{-1} = S_h^{-1} + R^\dagger N^{-1} R.$$
In the end, we want to apply $D$ to $j$, i.e. we need the inverse action of $D^{-1}$. This is done numerically (algorithm: *Conjugate Gradient*).
<!--
- One can define the *condition number* of a non-singular and normal matrix $A$:
$$\kappa (A) := \frac{|\lambda_{\text{max}}|}{|\lambda_{\text{min}}|},$$
where $\lambda_{\text{max}}$ and $\lambda_{\text{min}}$ are the largest and smallest eigenvalue of $A$, respectively.
- The larger $\kappa$, the slower the Conjugate Gradient converges.
- By default, conjugate gradient solves $D^{-1} m = j$ for $m$, where $D^{-1}$ can be badly conditioned. If one knows a non-singular matrix $T$ for which $TD^{-1}$ is better conditioned, one can solve the equivalent problem:
$$\tilde A m = \tilde j,$$
where $\tilde A = T D^{-1}$ and $\tilde j = Tj$.
- In our case, $S^{-1}$ is responsible for the bad conditioning of $D$, depending on the chosen power spectrum. Thus, we choose
$$T = \mathcal F^\dagger S_h^{-1} \mathcal F.$$
-->
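The inverse action of $D^{-1}$ mentioned above is obtained with the Conjugate Gradient method. The following is a minimal, textbook CG sketch in plain NumPy, shown only to illustrate the algorithm; NIFTy's own minimizer is more elaborate (iteration controllers, energy-based termination, preconditioning):

``` python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, maxiter=1000):
    """Solve A x = b for a symmetric positive definite A,
    given either as a matrix or as a callable computing A @ v."""
    matvec = A if callable(A) else (lambda v: A @ v)
    x = np.zeros_like(b, dtype=float)
    r = b - matvec(x)              # residual
    p = r.copy()                   # search direction
    rs = r @ r
    for _ in range(maxiter):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)      # optimal step length along p
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p  # A-conjugate update of the direction
        rs = rs_new
    return x

# Tiny SPD example
A = np.array([[4., 1.], [1., 3.]])
b = np.array([1., 2.])
x = conjugate_gradient(A, b)
```

For an $n \times n$ SPD system, exact-arithmetic CG terminates after at most $n$ iterations; in practice the iteration count is governed by the condition number, which is what the preconditioning above addresses.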
%% Cell type:markdown id: tags:
#### Generate Mock Data
- Generate fields $s$ and $n$ with the given covariances.
- Calculate $d$.
%% Cell type:code id: tags:
``` python
s_space = ift.RGSpace(N_pixels)
h_space = s_space.get_default_codomain()
HT = ift.HarmonicTransformOperator(h_space, target=s_space)

# Operators
Sh = ift.create_power_operator(h_space, power_spectrum=pow_spec)
R = HT  # @ ift.create_harmonic_smoothing_operator((h_space,), 0, 0.02)

# Fields and data
sh = Sh.draw_sample_with_dtype(dtype=np.float64)
noiseless_data = R(sh)
noise_amplitude = np.sqrt(0.2)
N = ift.ScalingOperator(s_space, noise_amplitude**2)
n = ift.Field.from_random(domain=s_space, random_type='normal',
                          std=noise_amplitude, mean=0)
d = noiseless_data + n
j = R.adjoint_times(N.inverse_times(d))
curv = Curvature(R=R, N=N, Sh=Sh)
D = curv.inverse
```
%% Cell type:markdown id: tags:
#### Run Wiener Filter
%% Cell type:code id: tags:
``` python
m = D(j)
```
%% Cell type:markdown id: tags:
#### Results
%% Cell type:code id: tags:
``` python
# Get signal data and reconstruction data
s_data = HT(sh).val
m_data = HT(m).val
d_data = d.val

plt.plot(s_data, 'r', label="Signal", linewidth=2)
plt.plot(d_data, 'k.', label="Data")
plt.plot(m_data, 'k', label="Reconstruction", linewidth=2)
plt.title("Reconstruction")
plt.legend()
plt.show()
```
%% Cell type:code id: tags:
``` python
plt.plot(s_data - s_data, 'r', label="Signal", linewidth=2)
plt.plot(d_data - s_data, 'k.', label="Data")
plt.plot(m_data - s_data, 'k', label="Reconstruction", linewidth=2)
plt.axhspan(-noise_amplitude, noise_amplitude, facecolor='0.9', alpha=.5)
plt.title("Residuals")
plt.legend()
plt.show()
```
%% Cell type:markdown id: tags:
#### Power Spectrum
%% Cell type:code id: tags:
``` python
s_power_data = ift.power_analyze(sh).val
m_power_data = ift.power_analyze(m).val

plt.loglog()
plt.xlim(1, int(N_pixels/2))
ymin = min(m_power_data)
plt.ylim(ymin, 1)
xs = np.arange(1, int(N_pixels/2), .1)
plt.plot(xs, pow_spec(xs), label="True Power Spectrum", color='k', alpha=0.5)
plt.plot(s_power_data, 'r', label="Signal")
plt.plot(m_power_data, 'k', label="Reconstruction")
plt.axhline(noise_amplitude**2 / N_pixels, color="k", linestyle='--', label="Noise level", alpha=.5)
plt.axhspan(noise_amplitude**2 / N_pixels, ymin, facecolor='0.9', alpha=.5)
plt.title("Power Spectrum")
plt.legend()
plt.show()
```
%% Cell type:markdown id: tags:
## Wiener Filter on Incomplete Data
%% Cell type:code id: tags:
``` python
# Operators
Sh = ift.create_power_operator(h_space, power_spectrum=pow_spec)
N = ift.ScalingOperator(s_space, noise_amplitude**2)
# R is defined below

# Fields
sh = Sh.draw_sample_with_dtype(dtype=np.float64)
s = HT(sh)
n = ift.Field.from_random(domain=s_space, random_type='normal',
                          std=noise_amplitude, mean=0)
```
%% Cell type:markdown id: tags:
### Partially Lose Data
%% Cell type:code id: tags:
``` python
l = int(N_pixels * 0.2)
h = int(N_pixels * 0.2 * 2)
mask = np.full(s_space.shape, 1.)
mask[l:h] = 0
mask = ift.Field.from_raw(s_space, mask)

R = ift.DiagonalOperator(mask)(HT)

n = n.val_rw()
n[l:h] = 0
n = ift.Field.from_raw(s_space, n)

d = R(sh) + n
```
%% Cell type:code id: tags:
``` python
curv = Curvature(R=R, N=N, Sh=Sh)
D = curv.inverse
j = R.adjoint_times(N.inverse_times(d))
m = D(j)
```
%% Cell type:markdown id: tags:
### Compute Uncertainty
%% Cell type:code id: tags:
``` python
m_mean, m_var = ift.probe_with_posterior_samples(curv, HT, 200, np.float64)
```
%% Cell type:markdown id: tags:
### Get data
%% Cell type:code id: tags:
``` python
# Get signal data and reconstruction data
s_data = s.val
m_data = HT(m).val
m_var_data = m_var.val
uncertainty = np.sqrt(m_var_data)
d_data = d.val_rw()

# Set lost data to NaN for proper plotting
d_data[d_data == 0] = np.nan
```
%% Cell type:code id: tags:
``` python
plt.axvspan(l, h, facecolor='0.8', alpha=0.5)
plt.fill_between(range(N_pixels), m_data - uncertainty, m_data + uncertainty, facecolor='0.5', alpha=0.5)
plt.plot(s_data, 'r', label="Signal", alpha=1, linewidth=2)
plt.plot(d_data, 'k.', label="Data")
plt.plot(m_data, 'k', label="Reconstruction", linewidth=2)
plt.title("Reconstruction of incomplete data")
plt.legend()
```
%% Cell type:markdown id: tags:
## Wiener filter on two-dimensional field
%% Cell type:code id: tags:
``` python
N_pixels = 256  # Number of pixels
sigma2 = 2.     # Noise variance

def pow_spec(k):
    P0, k0, gamma = [.2, 2, 4]
    return P0 * (1. + (k/k0)**2)**(-gamma/2)

s_space = ift.RGSpace([N_pixels, N_pixels])
```
%% Cell type:code id: tags:
``` python
h_space = s_space.get_default_codomain()
HT = ift.HarmonicTransformOperator(h_space, s_space)

# Operators
Sh = ift.create_power_operator(h_space, power_spectrum=pow_spec)
N = ift.ScalingOperator(s_space, sigma2)

# Fields and data
sh = Sh.draw_sample_with_dtype(dtype=np.float64)
n = ift.Field.from_random(domain=s_space, random_type='normal',
                          std=np.sqrt(sigma2), mean=0)

# Lose some data
l = int(N_pixels * 0.33)
h = int(N_pixels * 0.33 * 2)
mask = np.full(s_space.shape, 1.)
mask[l:h, l:h] = 0.
mask = ift.Field.from_raw(s_space, mask)
R = ift.DiagonalOperator(mask)(HT)
n = n.val_rw()
n[l:h, l:h] = 0
n = ift.Field.from_raw(s_space, n)

curv = Curvature(R=R, N=N, Sh=Sh)
D = curv.inverse
d = R(sh) + n
j = R.adjoint_times(N.inverse_times(d))

# Run Wiener filter
m = D(j)

# Uncertainty
m_mean, m_var = ift.probe_with_posterior_samples(curv, HT, 20, np.float64)

# Get data
s_data = HT(sh).val
m_data = HT(m).val
m_var_data = m_var.val
d_data = d.val
uncertainty = np.sqrt(np.abs(m_var_data))
```
%% Cell type:code id: tags:
``` python
cmap = ['magma', 'inferno', 'plasma', 'viridis'][1]

mi = np.min(s_data)
ma = np.max(s_data)

fig, axes = plt.subplots(1, 2)
data = [s_data, d_data]
caption = ["Signal", "Data"]

for ax in axes.flat:
    im = ax.imshow(data.pop(0), interpolation='nearest', cmap=cmap, vmin=mi,
                   vmax=ma)
    ax.set_title(caption.pop(0))

fig.subplots_adjust(right=0.8)
cbar_ax = fig.add_axes([0.85, 0.15, 0.05, 0.7])
fig.colorbar(im, cax=cbar_ax)
```
%% Cell type:code id: tags:
``` python
mi = np.min(s_data)
ma = np.max(s_data)

fig, axes = plt.subplots(3, 2, figsize=(10, 15))
sample = HT(curv.draw_sample(from_inverse=True) + m).val
post_mean = (m_mean + HT(m)).val
data = [s_data, m_data, post_mean, sample, s_data - m_data, uncertainty]
caption = ["Signal", "Reconstruction", "Posterior mean", "Sample", "Residuals", "Uncertainty Map"]

for ax in axes.flat:
    im = ax.imshow(data.pop(0), interpolation='nearest', cmap=cmap, vmin=mi, vmax=ma)
    ax.set_title(caption.pop(0))

fig.subplots_adjust(right=0.8)
cbar_ax = fig.add_axes([.85, 0.15, 0.05, 0.7])
fig.colorbar(im, cax=cbar_ax)
```
%% Cell type:markdown id: tags:
### Is the uncertainty map reliable?
%% Cell type:code id: tags:
``` python
precise = (np.abs(s_data - m_data) < uncertainty)
print("Error within uncertainty map bounds: " + str(np.sum(precise) * 100 / N_pixels**2) + "%")
plt.imshow(precise.astype(float), cmap="brg")
plt.colorbar()
```
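As a reference value for the percentage printed above: if the residuals were exactly Gaussian with the estimated standard deviation, about 68.3% of them would lie within the one-sigma bounds. This benchmark can be checked quickly with NumPy (an illustrative aside, not part of the original notebook):

``` python
import numpy as np

# Fraction of standardized Gaussian residuals within one standard deviation
rng = np.random.default_rng(1)
standardized_residuals = rng.standard_normal(1_000_000)
frac = np.mean(np.abs(standardized_residuals) < 1.0)
print(f"Expected coverage: {100 * frac:.1f}%")  # close to 68.3%
```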
%% Cell type:markdown id: tags:
# Notebook showcasing the NIFTy 6 Correlated Field model
**Skip to `Parameter Showcases` for the meat/veggies ;)**
The field model roughly works like this:
`f = HT( A * zero_mode * xi ) + offset`
`A` is a spectral power field which is constructed from power spectra that hold on subdomains of the target domain.
It is scaled by a zero-mode operator and then pointwise multiplied by a Gaussian excitation field, yielding
a representation of the field in harmonic space.
It is then transformed into the target real space, and an offset is added.
The power spectra from which `A` is constructed are in turn modelled as the sum of a power-law component
and an integrated Wiener process whose amplitude and roughness can be set.
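The construction `f = HT( A * zero_mode * xi ) + offset` can be mimicked with plain NumPy to build intuition. The FFT normalization, the spectrum slope, and the scalar `zero_mode`/`offset` values below are illustrative choices, not the model's actual parametrization:

``` python
import numpy as np

rng = np.random.default_rng(42)
n_pix = 256

# Amplitude spectrum A(k): a simple power law, zero mode treated separately
k = np.fft.rfftfreq(n_pix, d=1.0 / n_pix)  # harmonic mode numbers 0 .. n_pix/2
A = np.zeros_like(k)
A[1:] = k[1:] ** -2.0                      # illustrative slope

# Gaussian excitation field in harmonic space
xi = rng.standard_normal(k.shape) + 1j * rng.standard_normal(k.shape)

zero_mode, offset = 0.5, 1.0
fh = A * zero_mode * xi                    # scaled harmonic representation
f = np.fft.irfft(fh, n=n_pix) + offset     # harmonic transform + offset
```

A steeper slope in `A` yields smoother field realizations; redrawing only `xi` yields new realizations with the same spectrum, which is exactly what the showcases below exploit.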
## Setup code
%% Cell type:code id: tags:
``` python
import numpy as np
import nifty8 as ift
import matplotlib.pyplot as plt

plt.rcParams['figure.dpi'] = 100
plt.style.use("seaborn-notebook")

ift.random.push_sseq_from_seed(43)

n_pix = 256
x_space = ift.RGSpace(n_pix)
```
%% Cell type:code id: tags:
``` python
# Plotting routine
def plot(fields, spectra, title=None):
    # Plotting preparation is normally handled by nifty8.Plot
    # It is done manually here to be able to tweak details
    # Fields are assumed to have identical domains
    fig = plt.figure(tight_layout=True, figsize=(10, 3))
    if title is not None:
        fig.suptitle(title, fontsize=14)

    # Field
    ax1 = fig.add_subplot(1, 2, 1)
    ax1.axhline(y=0., color='k', linestyle='--', alpha=0.25)
    for field in fields:
        dom = field.domain[0]
        xcoord = np.arange(dom.shape[0]) * dom.distances[0]
        ax1.plot(xcoord, field.val)
    ax1.set_xlim(xcoord[0], xcoord[-1])
    ax1.set_ylim(-5., 5.)
    ax1.set_xlabel('x')
    ax1.set_ylabel('f(x)')
    ax1.set_title('Field realizations')

    # Spectrum
    ax2 = fig.add_subplot(1, 2, 2)
    for spectrum in spectra:
        xcoord = spectrum.domain[0].k_lengths
        ycoord = spectrum.val_rw()
        ycoord[0] = ycoord[1]
        ax2.plot(xcoord, ycoord)
    ax2.set_ylim(1e-6, 10.)
    ax2.set_xscale('log')
    ax2.set_yscale('log')
    ax2.set_xlabel('k')
    ax2.set_ylabel('p(k)')
    ax2.set_title('Power Spectrum')
    fig.align_labels()
    plt.show()

# Helper: draw main sample
main_sample = None
def init_model(m_pars, fl_pars):
    global main_sample
    cf = ift.CorrelatedFieldMaker(m_pars["prefix"])
    cf.set_amplitude_total_offset(m_pars["offset_mean"], m_pars["offset_std"])
    cf.add_fluctuations(**fl_pars)
    field = cf.finalize(prior_info=0)
    main_sample = ift.from_random(field.domain)
    print("model domain keys:", field.domain.keys())

# Helper: field and spectrum from parameter dictionaries + plotting
def eval_model(m_pars, fl_pars, title=None, samples=None):
    cf = ift.CorrelatedFieldMaker(m_pars["prefix"])
    cf.set_amplitude_total_offset(m_pars["offset_mean"], m_pars["offset_std"])
    cf.add_fluctuations(**fl_pars)
    field = cf.finalize(prior_info=0)
    spectrum = cf.amplitude
    if samples is None:
        samples = [main_sample]
    field_realizations = [field(s) for s in samples]
    spectrum_realizations = [spectrum.force(s) for s in samples]
    plot(field_realizations, spectrum_realizations, title)

def gen_samples(key_to_vary):
    if key_to_vary is None:
        return [main_sample]
    dct = main_sample.to_dict()
    subdom_to_vary = dct.pop(key_to_vary).domain
    samples = []
    for i in range(8):
        d = dct.copy()
        d[key_to_vary] = ift.from_random(subdom_to_vary)
        samples.append(ift.MultiField.from_dict(d))
    return samples

def vary_parameter(parameter_key, values, samples_vary_in=None):
    s = gen_samples(samples_vary_in)
    for v in values:
        if parameter_key in cf_make_pars.keys():
            m_pars = {**cf_make_pars, parameter_key: v}
            eval_model(m_pars, cf_x_fluct_pars, f"{parameter_key} = {v}", s)
        else:
            fl_pars = {**cf_x_fluct_pars, parameter_key: v}
            eval_model(cf_make_pars, fl_pars, f"{parameter_key} = {v}", s)
```
%% Cell type:markdown id: tags:
## Before the Action: The Moment-Matched Log-Normal Distribution
Many properties of the correlated field are modelled as being log-normally distributed.
The distribution models are parametrized via their means and standard deviations (first and second position in the tuple).
To get a feeling for how the ratio of the `mean` and `stddev` parameters influences the distribution shape,
here are a few example histograms (observe the x-axis!):
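The moment matching behind these histograms follows the standard log-normal relations: given a desired mean $m$ and standard deviation $s$, the parameters of the underlying normal are $\sigma^2 = \ln(1 + s^2/m^2)$ and $\mu = \ln m - \sigma^2/2$. A plain-NumPy sketch of this mapping (NIFTy's `LognormalTransform` handles the equivalent internally):

``` python
import numpy as np

def lognormal_moments(mean, std):
    """Parameters (mu, sigma) of the underlying normal distribution such
    that exp(normal) has the requested mean and standard deviation."""
    sigma = np.sqrt(np.log1p((std / mean) ** 2))
    mu = np.log(mean) - sigma ** 2 / 2
    return mu, sigma

# Check by sampling: requested mean 1.0, requested std 0.5
rng = np.random.default_rng(0)
mu, sigma = lognormal_moments(1.0, 0.5)
samples = np.exp(mu + sigma * rng.standard_normal(1_000_000))
```

The larger `std` is relative to `mean`, the larger $\sigma$ becomes and the more skewed the resulting histogram, which is what the plots below show.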
%% Cell type:code id: tags:
``` python
fig = plt.figure(figsize=(13, 3.5))
mean = 1.0
sigmas = [1.0, 0.5, 0.1]

for i in range(3):
    op = ift.LognormalTransform(mean=mean, sigma=sigmas[i],
                                key='foo', N_copies=0)
    op_samples = np.array(
        [op(s).val for s in [ift.from_random(op.domain) for _ in range(10000)]])
    ax = fig.add_subplot(1, 3, i + 1)
    ax.hist(op_samples, bins=50)
    ax.set_title(f"mean = {mean}, sigma = {sigmas[i]}")
    ax.set_xlabel('x')
    del op_samples

plt.show()
```
%% Cell type:markdown id: tags:
## The Neutral Field
To demonstrate the effect of all parameters, a 'neutral' set of parameters is
first defined and then varied one by one, showing the effect of each variation
on the generated field realizations and on the underlying power spectrum from
which they were drawn.
As the neutral field, a model with a white power spectrum and vanishing spectral power was chosen.
%% Cell type:code id: tags:
``` python
# Neutral model parameters yielding a quasi-constant field
cf_make_pars = {
    'offset_mean': 0.,
    'offset_std': (1e-3, 1e-16),
    'prefix': ''
}

cf_x_fluct_pars = {
    'target_subdomain': x_space,
    'fluctuations': (1e-3, 1e-16),
    'flexibility': (1e-3, 1e-16),
    'asperity': (1e-3, 1e-16),
    'loglogavgslope': (0., 1e-16)
}

init_model(cf_make_pars, cf_x_fluct_pars)
```
%% Cell type:code id: tags: %% Cell type:code id: tags:
``` python ``` python
# Show neutral field # Show neutral field
eval_model(cf_make_pars, cf_x_fluct_pars, "Neutral Field") eval_model(cf_make_pars, cf_x_fluct_pars, "Neutral Field")
``` ```
%% Cell type:markdown id: tags: %% Cell type:markdown id: tags:
# Parameter Showcases # Parameter Showcases
## The `fluctuations` parameters of `add_fluctuations()` ## The `fluctuations` parameters of `add_fluctuations()`
determine the **amplitude of variations along the field dimension** determine the **amplitude of variations along the field dimension**
for which `add_fluctuations` is called. for which `add_fluctuations` is called.
`fluctuations[0]` sets the _average_ amplitude of the field's fluctuations along the given dimension,\
`fluctuations[1]` sets the width and shape of the amplitude distribution. `fluctuations[1]` sets the width and shape of the amplitude distribution.
The amplitude is modelled as being log-normally distributed, The amplitude is modelled as being log-normally distributed,
see `The Moment-Matched Log-Normal Distribution` above for details. see `The Moment-Matched Log-Normal Distribution` above for details.
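As an aside, the moment matching mentioned here can be sketched in plain numpy. This is an illustrative re-derivation, not NIFTy's internal implementation: given the desired mean and standard deviation of the log-normal variable, the parameters of the underlying normal distribution follow in closed form.

``` python
import numpy as np

def lognormal_moments(mean, std):
    # Underlying normal parameters such that exp(normal) has the
    # requested mean and standard deviation (moment matching)
    logsig2 = np.log1p((std / mean)**2)
    logmean = np.log(mean) - 0.5 * logsig2
    return logmean, np.sqrt(logsig2)

rng = np.random.default_rng(42)
mu, sig = lognormal_moments(1.0, 0.5)
samples = np.exp(rng.normal(mu, sig, size=100_000))
print(samples.mean(), samples.std())  # close to 1.0 and 0.5
```

The sample mean and standard deviation recover the requested moments up to Monte Carlo error.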
#### `fluctuations` mean: #### `fluctuations` mean:
%% Cell type:code id: tags: %% Cell type:code id: tags:
``` python ``` python
vary_parameter('fluctuations', [(0.05, 1e-16), (0.5, 1e-16), (2., 1e-16)], samples_vary_in='xi') vary_parameter('fluctuations', [(0.05, 1e-16), (0.5, 1e-16), (2., 1e-16)], samples_vary_in='xi')
``` ```
%% Cell type:markdown id: tags: %% Cell type:markdown id: tags:
#### `fluctuations` std: #### `fluctuations` std:
%% Cell type:code id: tags: %% Cell type:code id: tags:
``` python ``` python
vary_parameter('fluctuations', [(1., 0.01), (1., 0.1), (1., 1.)], samples_vary_in='fluctuations') vary_parameter('fluctuations', [(1., 0.01), (1., 0.1), (1., 1.)], samples_vary_in='fluctuations')
cf_x_fluct_pars['fluctuations'] = (1., 1e-16) cf_x_fluct_pars['fluctuations'] = (1., 1e-16)
``` ```
%% Cell type:markdown id: tags: %% Cell type:markdown id: tags:
## The `loglogavgslope` parameters of `add_fluctuations()` ## The `loglogavgslope` parameters of `add_fluctuations()`
determine **the slope of the loglog-linear (power law) component of the power spectrum**. determine **the slope of the loglog-linear (power law) component of the power spectrum**.
The slope is modelled as being normally distributed.
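The effect of the slope can be illustrated with a small FFT sketch (not NIFTy's parametrization; the normalization and mode layout here are simplified assumptions): a 1-D field is drawn with power spectrum proportional to $k^{\mathrm{slope}}$, and a steeper negative slope concentrates power on large scales, yielding smoother realizations.

``` python
import numpy as np

rng = np.random.default_rng(0)
n = 512
k = np.fft.rfftfreq(n, d=1.0 / n)  # harmonic modes 0 .. n/2

def sample_field(slope):
    # Draw a random 1-D field whose power spectrum follows k**slope
    amp = np.zeros_like(k)
    amp[1:] = k[1:]**(slope / 2)      # amplitude = sqrt(power); zero mode excluded
    xi = rng.normal(size=k.size) + 1j * rng.normal(size=k.size)
    return np.fft.irfft(amp * xi, n=n)

smooth = sample_field(-4.)  # steep slope: power on large scales
rough = sample_field(0.)    # flat spectrum: white noise
# mean squared increment as a simple roughness measure
print(np.mean(np.diff(smooth)**2) < np.mean(np.diff(rough)**2))  # True
```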
#### `loglogavgslope` mean: #### `loglogavgslope` mean:
%% Cell type:code id: tags: %% Cell type:code id: tags:
``` python ``` python
vary_parameter('loglogavgslope', [(-6., 1e-16), (-2., 1e-16), (2., 1e-16)], samples_vary_in='xi') vary_parameter('loglogavgslope', [(-6., 1e-16), (-2., 1e-16), (2., 1e-16)], samples_vary_in='xi')
``` ```
%% Cell type:markdown id: tags: %% Cell type:markdown id: tags:
#### `loglogavgslope` std: #### `loglogavgslope` std:
%% Cell type:code id: tags: %% Cell type:code id: tags:
``` python ``` python
vary_parameter('loglogavgslope', [(-2., 0.02), (-2., 0.2), (-2., 2.0)], samples_vary_in='loglogavgslope') vary_parameter('loglogavgslope', [(-2., 0.02), (-2., 0.2), (-2., 2.0)], samples_vary_in='loglogavgslope')
cf_x_fluct_pars['loglogavgslope'] = (-2., 1e-16) cf_x_fluct_pars['loglogavgslope'] = (-2., 1e-16)
``` ```
%% Cell type:markdown id: tags: %% Cell type:markdown id: tags:
## The `flexibility` parameters of `add_fluctuations()` ## The `flexibility` parameters of `add_fluctuations()`
determine **the amplitude of the integrated Wiener process component of the power spectrum**
(how strongly the power spectrum varies beyond the power law).
`flexibility[0]` sets the _average_ amplitude of the integrated Wiener process component,\
`flexibility[1]` sets how much the amplitude can vary.\ `flexibility[1]` sets how much the amplitude can vary.\
These two parameters feed into a moment-matched log-normal distribution model, These two parameters feed into a moment-matched log-normal distribution model,
see above for a demo of its behavior. see above for a demo of its behavior.
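A schematic numpy sketch of an integrated Wiener process (a Wiener process integrated once more) may help build intuition; in NIFTy this process models smooth deviations of the power spectrum on log-log axes, while the discretization and scaling below are simplified assumptions.

``` python
import numpy as np

rng = np.random.default_rng(1)
n_steps = 200

def integrated_wiener(flexibility):
    # flexibility scales the amplitude of the Gaussian increments
    increments = flexibility * rng.normal(size=n_steps)
    wiener = np.cumsum(increments)  # Wiener process (random walk)
    return np.cumsum(wiener)        # integrate once more -> smooth excursions

calm = integrated_wiener(0.4)
wild = integrated_wiener(12.0)
print(np.std(wild) > np.std(calm))  # True: larger flexibility, larger excursions
```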
#### `flexibility` mean: #### `flexibility` mean:
%% Cell type:code id: tags: %% Cell type:code id: tags:
``` python ``` python
vary_parameter('flexibility', [(0.4, 1e-16), (4.0, 1e-16), (12.0, 1e-16)], samples_vary_in='spectrum') vary_parameter('flexibility', [(0.4, 1e-16), (4.0, 1e-16), (12.0, 1e-16)], samples_vary_in='spectrum')
``` ```
%% Cell type:markdown id: tags: %% Cell type:markdown id: tags:
#### `flexibility` std: #### `flexibility` std:
%% Cell type:code id: tags: %% Cell type:code id: tags:
``` python ``` python
vary_parameter('flexibility', [(4., 0.02), (4., 0.2), (4., 2.)], samples_vary_in='flexibility') vary_parameter('flexibility', [(4., 0.02), (4., 0.2), (4., 2.)], samples_vary_in='flexibility')
cf_x_fluct_pars['flexibility'] = (4., 1e-16) cf_x_fluct_pars['flexibility'] = (4., 1e-16)
``` ```
%% Cell type:markdown id: tags: %% Cell type:markdown id: tags:
## The `asperity` parameters of `add_fluctuations()` ## The `asperity` parameters of `add_fluctuations()`
`asperity` determines **how rough the integrated Wiener process component of the power spectrum is**. `asperity` determines **how rough the integrated Wiener process component of the power spectrum is**.
`asperity[0]` sets the average roughness, `asperity[1]` sets how much the roughness can vary.\ `asperity[0]` sets the average roughness, `asperity[1]` sets how much the roughness can vary.\
These two parameters feed into a moment-matched log-normal distribution model, These two parameters feed into a moment-matched log-normal distribution model,
see above for a demo of its behavior. see above for a demo of its behavior.
#### `asperity` mean: #### `asperity` mean:
%% Cell type:code id: tags: %% Cell type:code id: tags:
``` python ``` python
vary_parameter('asperity', [(0.001, 1e-16), (1.0, 1e-16), (5., 1e-16)], samples_vary_in='spectrum') vary_parameter('asperity', [(0.001, 1e-16), (1.0, 1e-16), (5., 1e-16)], samples_vary_in='spectrum')
``` ```
%% Cell type:markdown id: tags: %% Cell type:markdown id: tags:
#### `asperity` std: #### `asperity` std:
%% Cell type:code id: tags: %% Cell type:code id: tags:
``` python ``` python
vary_parameter('asperity', [(1., 0.01), (1., 0.1), (1., 1.)], samples_vary_in='asperity') vary_parameter('asperity', [(1., 0.01), (1., 0.1), (1., 1.)], samples_vary_in='asperity')
cf_x_fluct_pars['asperity'] = (1., 1e-16) cf_x_fluct_pars['asperity'] = (1., 1e-16)
``` ```
%% Cell type:markdown id: tags: %% Cell type:markdown id: tags:
## The `offset_mean` parameter of `CorrelatedFieldMaker.make()` ## The `offset_mean` parameter of `CorrelatedFieldMaker.make()`
The `offset_mean` parameter defines a global additive offset on the field realizations. The `offset_mean` parameter defines a global additive offset on the field realizations.
If the field is used for a lognormal model `f = field.exp()`, this acts as a global signal magnitude offset. If the field is used for a lognormal model `f = field.exp()`, this acts as a global signal magnitude offset.
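The reason is elementary and can be checked numerically (a minimal sketch with a plain Gaussian array standing in for a field realization): an additive offset on the Gaussian field turns into a global multiplicative factor after exponentiation, since $\exp(s + m) = \exp(m)\,\exp(s)$.

``` python
import numpy as np

rng = np.random.default_rng(2)
field = rng.normal(size=1000)  # stand-in for a correlated field realization
offset_mean = 3.0

# additive offset on the Gaussian field ...
shifted = field + offset_mean
# ... becomes a global multiplicative factor after exponentiation
lognormal = np.exp(shifted)
factor = np.exp(offset_mean)
print(np.allclose(lognormal, factor * np.exp(field)))  # True
```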
%% Cell type:code id: tags: %% Cell type:code id: tags:
``` python ``` python
# Reset model to neutral # Reset model to neutral
cf_x_fluct_pars['fluctuations'] = (1e-3, 1e-16) cf_x_fluct_pars['fluctuations'] = (1e-3, 1e-16)
cf_x_fluct_pars['flexibility'] = (1e-3, 1e-16) cf_x_fluct_pars['flexibility'] = (1e-3, 1e-16)
cf_x_fluct_pars['asperity'] = (1e-3, 1e-16) cf_x_fluct_pars['asperity'] = (1e-3, 1e-16)
cf_x_fluct_pars['loglogavgslope'] = (1e-3, 1e-16) cf_x_fluct_pars['loglogavgslope'] = (1e-3, 1e-16)
``` ```
%% Cell type:code id: tags: %% Cell type:code id: tags:
``` python ``` python
vary_parameter('offset_mean', [3., 0., -2.]) vary_parameter('offset_mean', [3., 0., -2.])
``` ```
%% Cell type:markdown id: tags: %% Cell type:markdown id: tags:
## The `offset_std` parameters of `CorrelatedFieldMaker.make()` ## The `offset_std` parameters of `CorrelatedFieldMaker.make()`
Variations of the global offset of the field are modelled as being log-normally distributed.
See `The Moment-Matched Log-Normal Distribution` above for details. See `The Moment-Matched Log-Normal Distribution` above for details.
The `offset_std[0]` parameter sets how much NIFTy will vary the offset *on average*.\
The `offset_std[1]` parameter defines the width and shape of the offset variation distribution.
#### `offset_std` mean: #### `offset_std` mean:
%% Cell type:code id: tags: %% Cell type:code id: tags:
``` python ``` python
vary_parameter('offset_std', [(1e-16, 1e-16), (0.5, 1e-16), (2., 1e-16)], samples_vary_in='xi') vary_parameter('offset_std', [(1e-16, 1e-16), (0.5, 1e-16), (2., 1e-16)], samples_vary_in='xi')
``` ```
%% Cell type:markdown id: tags: %% Cell type:markdown id: tags:
#### `offset_std` std: #### `offset_std` std:
%% Cell type:code id: tags: %% Cell type:code id: tags:
``` python ``` python
vary_parameter('offset_std', [(1., 0.01), (1., 0.1), (1., 1.)], samples_vary_in='zeromode') vary_parameter('offset_std', [(1., 0.01), (1., 0.1), (1., 1.)], samples_vary_in='zeromode')
``` ```