Commit d83a042e authored by Philipp Arras

Make float64 default for WienerFilterCurvature

parent 00cc61c9
1 merge request: !458 Introduce SamplingDtypeEnabler
Pipeline #74863 passed
%% Cell type:markdown id: tags:

# A NIFTy demonstration

%% Cell type:markdown id: tags:

## IFT: Big Picture

IFT starting point:

$$d = Rs+n$$

Typically, $s$ is a continuous field and $d$ a discrete data vector. In particular, $R$ is not invertible.

IFT aims at **inverting** the above non-invertible problem in the **best possible way** using Bayesian statistics.

## NIFTy

NIFTy (Numerical Information Field Theory) is a Python framework in which IFT problems can be tackled easily.

Main interfaces:

- **Spaces**: Cartesian grids, 2-spheres (HEALPix, Gauss-Legendre) and their respective harmonic spaces.
- **Fields**: defined on spaces.
- **Operators**: acting on fields.
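%% Cell type:markdown id: tags:

A minimal sketch of these three interfaces, using the same `nifty6` calls that appear in the demo below (the grid size and the scaling factor are arbitrary illustration values):

%% Cell type:code id: tags:

``` python
import nifty6 as ift

# A one-dimensional regular grid with 128 pixels
space = ift.RGSpace(128)
# A Gaussian random field living on this space
field = ift.Field.from_random(domain=space, random_type='normal', std=1., mean=0.)
# A simple operator (pointwise scaling) acting on fields over this space
op = ift.ScalingOperator(space, 2.)
result = op(field)   # again an ift.Field defined on `space`
print(result.val[:5])
```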
%% Cell type:markdown id: tags:

## Wiener Filter: Formulae

### Assumptions

- $d=Rs+n$, with $R$ a linear operator.
- $\mathcal P (s) = \mathcal G (s,S)$, $\mathcal P (n) = \mathcal G (n,N)$, where $S, N$ are positive definite covariance matrices.

### Posterior

The posterior is given by:

$$\mathcal P (s|d) \propto P(s,d) = \mathcal G(d-Rs,N) \,\mathcal G(s,S) \propto \mathcal G (s-m,D),$$

where

$$\begin{align}
m &= Dj, \\
D^{-1} &= S^{-1} + R^\dagger N^{-1} R, \\
j &= R^\dagger N^{-1} d.
\end{align}$$

Let us implement this in NIFTy!
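%% Cell type:markdown id: tags:

As a quick cross-check of these formulae, here is a minimal plain-NumPy version for a tiny toy problem (explicit matrices with arbitrarily chosen toy covariances, which is only feasible for very small dimensions):

%% Cell type:code id: tags:

``` python
import numpy as np

rng = np.random.default_rng(42)
n_dim = 16                                     # toy problem size
R = np.eye(n_dim)                              # toy linear response
S = np.diag(1. / (1. + np.arange(n_dim))**2)   # toy signal covariance
N = 0.1 * np.eye(n_dim)                        # toy noise covariance

s = rng.multivariate_normal(np.zeros(n_dim), S)   # signal realization
n = rng.multivariate_normal(np.zeros(n_dim), N)   # noise realization
d = R @ s + n                                     # synthetic data

j = R.T @ np.linalg.inv(N) @ d                                   # j = R^T N^-1 d
D = np.linalg.inv(np.linalg.inv(S) + R.T @ np.linalg.inv(N) @ R)
m = D @ j                                                        # Wiener filter mean
```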
%% Cell type:markdown id: tags:

## Wiener Filter: Example

- We assume statistical homogeneity and isotropy. Therefore the signal covariance $S$ is diagonal in harmonic space and is described by a one-dimensional power spectrum, assumed here to be $$P(k) = P_0\,\left(1+\left(\frac{k}{k_0}\right)^2\right)^{-\gamma /2},$$
  with $P_0 = 0.2$, $k_0 = 5$, $\gamma = 4$.
- $N = 0.2 \cdot \mathbb{1}$.
- Number of data points $N_{pix} = 512$.
- Reconstruction in harmonic space.
- Response operator:
  $$R = FFT_{\text{harmonic} \rightarrow \text{position}}$$

%% Cell type:code id: tags:

``` python
N_pixels = 512    # Number of pixels

def pow_spec(k):
    P0, k0, gamma = [.2, 5, 4]
    return P0 / ((1. + (k/k0)**2)**(gamma / 2))
```
%% Cell type:markdown id: tags:

## Wiener Filter: Implementation

%% Cell type:markdown id: tags:

### Import Modules

%% Cell type:code id: tags:

``` python
import numpy as np
import nifty6 as ift
import matplotlib.pyplot as plt
%matplotlib inline
```

%% Cell type:markdown id: tags:

### Implement Propagator

%% Cell type:code id: tags:

``` python
def Curvature(R, N, Sh):
    IC = ift.GradientNormController(iteration_limit=50000,
                                    tol_abs_gradnorm=0.1)
    # WienerFilterCurvature is (R.adjoint*N.inverse*R + Sh.inverse) plus some handy
    # helper methods.
    return ift.WienerFilterCurvature(R, N, Sh, iteration_controller=IC,
                                     iteration_controller_sampling=IC)
```
%% Cell type:markdown id: tags:

### Conjugate Gradient Preconditioning

- $D$ is defined via:
$$D^{-1} = \mathcal S_h^{-1} + R^\dagger N^{-1} R.$$
In the end, we want to apply $D$ to $j$, i.e. we need the inverse action of $D^{-1}$. This is done numerically (algorithm: *Conjugate Gradient*).

<!--
- One can define the *condition number* of a non-singular and normal matrix $A$:
$$\kappa (A) := \frac{|\lambda_{\text{max}}|}{|\lambda_{\text{min}}|},$$
where $\lambda_{\text{max}}$ and $\lambda_{\text{min}}$ are the largest and smallest eigenvalues of $A$, respectively.
- The larger $\kappa$, the slower Conjugate Gradient converges.
- By default, Conjugate Gradient solves $D^{-1} m = j$ for $m$, where $D^{-1}$ can be badly conditioned. If one knows a non-singular matrix $T$ for which $TD^{-1}$ is better conditioned, one can solve the equivalent problem
$$\tilde A m = \tilde j,$$
where $\tilde A = T D^{-1}$ and $\tilde j = Tj$.
- In our case $S^{-1}$ is responsible for the bad conditioning of $D$, depending on the chosen power spectrum. Thus, we choose
$$T = \mathcal F^\dagger S_h^{-1} \mathcal F.$$
-->
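%% Cell type:markdown id: tags:

As an illustration of this step (independent of NIFTy), the inverse action of $D^{-1}$ can be obtained with SciPy's Conjugate Gradient on a small explicit toy matrix; in NIFTy, the analogous iteration is steered by the iteration controller passed to the curvature:

%% Cell type:code id: tags:

``` python
import numpy as np
from scipy.sparse.linalg import cg

rng = np.random.default_rng(0)
n = 64
Sinv = np.diag(1. + np.arange(n, dtype=float))   # toy S^{-1}: diagonal and positive
Ninv = (1. / 0.2) * np.eye(n)                    # toy N^{-1}
R = np.eye(n)                                    # toy response
Dinv = Sinv + R.T @ Ninv @ R                     # D^{-1} = S^{-1} + R^T N^{-1} R

j = rng.normal(size=n)
m, info = cg(Dinv, j)   # solve D^{-1} m = j iteratively
assert info == 0        # info == 0 signals convergence
```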
%% Cell type:markdown id: tags:

### Generate Mock Data

- Generate fields $s$ and $n$ with the given covariances.
- Calculate $d$.

%% Cell type:code id: tags:

``` python
s_space = ift.RGSpace(N_pixels)
h_space = s_space.get_default_codomain()
HT = ift.HarmonicTransformOperator(h_space, target=s_space)

# Operators
Sh = ift.create_power_operator(h_space, power_spectrum=pow_spec)
R = HT  # *ift.create_harmonic_smoothing_operator((h_space,), 0, 0.02)

# Fields and data
sh = Sh.draw_sample_with_dtype(dtype=np.float64)
noiseless_data = R(sh)
noise_amplitude = np.sqrt(0.2)
N = ift.ScalingOperator(s_space, noise_amplitude**2)

n = ift.Field.from_random(domain=s_space, random_type='normal',
                          std=noise_amplitude, mean=0)
d = noiseless_data + n
j = R.adjoint_times(N.inverse_times(d))
curv = Curvature(R=R, N=N, Sh=Sh)
D = curv.inverse
```
%% Cell type:markdown id: tags:

### Run Wiener Filter

%% Cell type:code id: tags:

``` python
m = D(j)
```

%% Cell type:markdown id: tags:

### Signal Reconstruction

%% Cell type:code id: tags:

``` python
# Get signal data and reconstruction data
s_data = HT(sh).val
m_data = HT(m).val
d_data = d.val

plt.figure(figsize=(15, 10))
plt.plot(s_data, 'r', label="Signal", linewidth=3)
plt.plot(d_data, 'k.', label="Data")
plt.plot(m_data, 'k', label="Reconstruction", linewidth=3)
plt.title("Reconstruction")
plt.legend()
plt.show()
```
%% Cell type:code id: tags:

``` python
plt.figure(figsize=(15, 10))
plt.plot(s_data - s_data, 'r', label="Signal", linewidth=3)
plt.plot(d_data - s_data, 'k.', label="Data")
plt.plot(m_data - s_data, 'k', label="Reconstruction", linewidth=3)
plt.axhspan(-noise_amplitude, noise_amplitude, facecolor='0.9', alpha=.5)
plt.title("Residuals")
plt.legend()
plt.show()
```

%% Cell type:markdown id: tags:

### Power Spectrum

%% Cell type:code id: tags:

``` python
s_power_data = ift.power_analyze(sh).val
m_power_data = ift.power_analyze(m).val

plt.figure(figsize=(15, 10))
plt.loglog()
plt.xlim(1, int(N_pixels/2))
ymin = min(m_power_data)
plt.ylim(ymin, 1)
xs = np.arange(1, int(N_pixels/2), .1)
plt.plot(xs, pow_spec(xs), label="True Power Spectrum", color='k', alpha=0.5)
plt.plot(s_power_data, 'r', label="Signal")
plt.plot(m_power_data, 'k', label="Reconstruction")
plt.axhline(noise_amplitude**2 / N_pixels, color="k", linestyle='--', label="Noise level", alpha=.5)
plt.axhspan(noise_amplitude**2 / N_pixels, ymin, facecolor='0.9', alpha=.5)
plt.title("Power Spectrum")
plt.legend()
plt.show()
```
%% Cell type:markdown id: tags:

## Wiener Filter on Incomplete Data

%% Cell type:code id: tags:

``` python
# Operators
Sh = ift.create_power_operator(h_space, power_spectrum=pow_spec)
N = ift.ScalingOperator(s_space, noise_amplitude**2)
# R is defined below

# Fields
sh = Sh.draw_sample_with_dtype(dtype=np.float64)
s = HT(sh)
n = ift.Field.from_random(domain=s_space, random_type='normal',
                          std=noise_amplitude, mean=0)
```

%% Cell type:markdown id: tags:

### Partially Lose Data

%% Cell type:code id: tags:

``` python
l = int(N_pixels * 0.2)
h = int(N_pixels * 0.2 * 2)
mask = np.full(s_space.shape, 1.)
mask[l:h] = 0
mask = ift.Field.from_raw(s_space, mask)

R = ift.DiagonalOperator(mask)(HT)
n = n.val_rw()
n[l:h] = 0
n = ift.Field.from_raw(s_space, n)

d = R(sh) + n
```

%% Cell type:code id: tags:

``` python
curv = Curvature(R=R, N=N, Sh=Sh)
D = curv.inverse
j = R.adjoint_times(N.inverse_times(d))
m = D(j)
```
%% Cell type:markdown id: tags:

### Compute Uncertainty

%% Cell type:code id: tags:

``` python
m_mean, m_var = ift.probe_with_posterior_samples(curv, HT, 200, np.float64)
```
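%% Cell type:markdown id: tags:

Conceptually, the uncertainty is estimated from samples drawn with the posterior covariance $D$ and their pointwise variance; a minimal NumPy sketch of the same idea with an arbitrary toy covariance:

%% Cell type:code id: tags:

``` python
import numpy as np

rng = np.random.default_rng(1)
n, n_samples = 32, 200
D_toy = np.diag(0.1 + 0.01 * np.arange(n))   # toy posterior covariance (diagonal)

# Draw samples with covariance D_toy and estimate the pointwise variance from them
samples = rng.multivariate_normal(np.zeros(n), D_toy, size=n_samples)
sample_mean = samples.mean(axis=0)
sample_var = samples.var(axis=0)             # approximates diag(D_toy)
```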
%% Cell type:markdown id: tags:

### Get Data

%% Cell type:code id: tags:

``` python
# Get signal data and reconstruction data
s_data = s.val
m_data = HT(m).val
m_var_data = m_var.val
uncertainty = np.sqrt(m_var_data)
d_data = d.val_rw()

# Set lost data to NaN for proper plotting
d_data[d_data == 0] = np.nan
```

%% Cell type:code id: tags:

``` python
fig = plt.figure(figsize=(15, 10))
plt.axvspan(l, h, facecolor='0.8', alpha=0.5)
plt.fill_between(range(N_pixels), m_data - uncertainty, m_data + uncertainty,
                 facecolor='0.5', alpha=0.5)
plt.plot(s_data, 'r', label="Signal", alpha=1, linewidth=3)
plt.plot(d_data, 'k.', label="Data")
plt.plot(m_data, 'k', label="Reconstruction", linewidth=3)
plt.title("Reconstruction of incomplete data")
plt.legend()
```
%% Cell type:markdown id: tags:

# 2d Example

%% Cell type:code id: tags:

``` python
N_pixels = 256   # Number of pixels
sigma2 = 2.      # Noise variance

def pow_spec(k):
    P0, k0, gamma = [.2, 2, 4]
    return P0 * (1. + (k/k0)**2)**(-gamma/2)

s_space = ift.RGSpace([N_pixels, N_pixels])
```
%% Cell type:code id: tags:

``` python
h_space = s_space.get_default_codomain()
HT = ift.HarmonicTransformOperator(h_space, s_space)

# Operators
Sh = ift.create_power_operator(h_space, power_spectrum=pow_spec)
N = ift.ScalingOperator(s_space, sigma2)

# Fields and data
sh = Sh.draw_sample_with_dtype(dtype=np.float64)
n = ift.Field.from_random(domain=s_space, random_type='normal',
                          std=np.sqrt(sigma2), mean=0)

# Lose some data
l = int(N_pixels * 0.33)
h = int(N_pixels * 0.33 * 2)
mask = np.full(s_space.shape, 1.)
mask[l:h, l:h] = 0.
mask = ift.Field.from_raw(s_space, mask)

R = ift.DiagonalOperator(mask)(HT)
n = n.val_rw()
n[l:h, l:h] = 0
n = ift.Field.from_raw(s_space, n)

curv = Curvature(R=R, N=N, Sh=Sh)
D = curv.inverse
d = R(sh) + n
j = R.adjoint_times(N.inverse_times(d))

# Run Wiener filter
m = D(j)

# Uncertainty
m_mean, m_var = ift.probe_with_posterior_samples(curv, HT, 20, np.float64)

# Get data
s_data = HT(sh).val
m_data = HT(m).val
m_var_data = m_var.val
d_data = d.val
uncertainty = np.sqrt(np.abs(m_var_data))
```
%% Cell type:code id: tags:

``` python
cm = ['magma', 'inferno', 'plasma', 'viridis'][1]

mi = np.min(s_data)
ma = np.max(s_data)

fig, axes = plt.subplots(1, 2, figsize=(15, 7))
data = [s_data, d_data]
caption = ["Signal", "Data"]

for ax in axes.flat:
    im = ax.imshow(data.pop(0), interpolation='nearest', cmap=cm, vmin=mi,
                   vmax=ma)
    ax.set_title(caption.pop(0))

fig.subplots_adjust(right=0.8)
cbar_ax = fig.add_axes([0.85, 0.15, 0.05, 0.7])
fig.colorbar(im, cax=cbar_ax)
```
%% Cell type:code id: tags:

``` python
mi = np.min(s_data)
ma = np.max(s_data)

fig, axes = plt.subplots(3, 2, figsize=(15, 22.5))

sample = HT(curv.draw_sample(from_inverse=True) + m).val
post_mean = (m_mean + HT(m)).val

data = [s_data, m_data, post_mean, sample, s_data - m_data, uncertainty]
caption = ["Signal", "Reconstruction", "Posterior mean", "Sample", "Residuals", "Uncertainty Map"]

for ax in axes.flat:
    im = ax.imshow(data.pop(0), interpolation='nearest', cmap=cm, vmin=mi, vmax=ma)
    ax.set_title(caption.pop(0))

fig.subplots_adjust(right=0.8)
cbar_ax = fig.add_axes([.85, 0.15, 0.05, 0.7])
fig.colorbar(im, cax=cbar_ax)
```
%% Cell type:markdown id: tags:

### Is the uncertainty map reliable?

For a Gaussian posterior, roughly 68% of the residuals should lie within one standard deviation given by the uncertainty map.

%% Cell type:code id: tags:

``` python
precise = (np.abs(s_data - m_data) < uncertainty)
print("Error within uncertainty map bounds: " + str(np.sum(precise) * 100 / N_pixels**2) + "%")
plt.figure(figsize=(15, 10))
plt.imshow(precise.astype(float), cmap="brg")
plt.colorbar()
```
%% Cell type:markdown id: tags:

# Start Coding

## NIFTy Repository + Installation Guide

https://gitlab.mpcdf.mpg.de/ift/NIFTy

NIFTy v6 is **more or less stable!**

%% Cell type:code id: tags:

``` python
```
@@ -15,6 +15,8 @@
 #
 # NIFTy is being developed at the Max-Planck-Institut fuer Astrophysik.
 
+import numpy as np
+
 from ..operators.inversion_enabler import InversionEnabler
 from ..operators.sampling_enabler import SamplingDtypeSetter, SamplingEnabler
 from ..operators.sandwich_operator import SandwichOperator
@@ -22,8 +24,8 @@ from ..operators.sandwich_operator import SandwichOperator
 def WienerFilterCurvature(R, N, S, iteration_controller=None,
                           iteration_controller_sampling=None,
-                          data_sampling_dtype=None,
-                          prior_sampling_dtype=None):
+                          data_sampling_dtype=np.float64,
+                          prior_sampling_dtype=np.float64):
     """The curvature of the WienerFilterEnergy.
 
     This operator implements the second derivative of the
@@ -44,6 +46,13 @@ def WienerFilterCurvature(R, N, S, iteration_controller=None,
         ConjugateGradient.
     iteration_controller_sampling : IterationController
         The iteration controller to use for sampling.
+    data_sampling_dtype : numpy.dtype or dict of numpy.dtype
+        Data type used for sampling from the likelihood. Coincides with the data
+        type of the data used in the inference problem. Default is float64.
+    prior_sampling_dtype : numpy.dtype or dict of numpy.dtype
+        Data type used for sampling from the prior. Coincides with the data
+        type of the parameters of the forward model used for the inference
+        problem. Default is float64.
     """
     Ninv = N.inverse
     Sinv = S.inverse
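Illustratively, with `R`, `N`, `Sh` and `IC` as in the demo notebook above, the new defaults mean the sampling dtypes no longer have to be spelled out:

``` python
# With float64 as the default, this call ...
curv = ift.WienerFilterCurvature(R, N, Sh,
                                 iteration_controller=IC,
                                 iteration_controller_sampling=IC)
# ... is equivalent to the previously explicit one:
curv = ift.WienerFilterCurvature(R, N, Sh,
                                 iteration_controller=IC,
                                 iteration_controller_sampling=IC,
                                 data_sampling_dtype=np.float64,
                                 prior_sampling_dtype=np.float64)
```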