Commit fccadfc5 authored by Martin Reinecke's avatar Martin Reinecke

first try

parent ad4c6cde
@@ -140,7 +140,6 @@
 "outputs": [],
 "source": [
 "import numpy as np\n",
-"np.random.seed(40)\n",
 "import nifty6 as ift\n",
 "import matplotlib.pyplot as plt\n",
 "%matplotlib inline"
...
%% Cell type:markdown id: tags:
# A NIFTy demonstration
%% Cell type:markdown id: tags:
## IFT: Big Picture
IFT starting point:
$$d = Rs+n$$
Typically, $s$ is a continuous field and $d$ a discrete data vector. In particular, $R$ is not invertible.
IFT aims at **inverting** the above non-invertible problem in the **best possible way** using Bayesian statistics.
## NIFTy
NIFTy (Numerical Information Field Theory) is a Python framework in which IFT problems can be tackled easily.
Main interfaces:
- **Spaces**: Cartesian, 2-spheres (HEALPix, Gauss-Legendre) and their respective harmonic spaces.
- **Fields**: Defined on spaces.
- **Operators**: Acting on fields.
%% Cell type:markdown id: tags:
## Wiener Filter: Formulae
### Assumptions
- $d=Rs+n$, with $R$ a linear operator.
- $\mathcal P (s) = \mathcal G (s,S)$, $\mathcal P (n) = \mathcal G (n,N)$, where $S, N$ are positive definite covariance matrices.
### Posterior
The posterior is given by:
$$\mathcal P (s|d) \propto P(s,d) = \mathcal G(d-Rs,N) \,\mathcal G(s,S) \propto \mathcal G (s-m,D),$$
where
$$\begin{align}
m &= Dj, \\
D^{-1} &= S^{-1} + R^\dagger N^{-1} R, \\
j &= R^\dagger N^{-1} d.
\end{align}$$
Let us implement this in NIFTy!
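Before turning to NIFTy, the formulae above can be sanity-checked on a toy problem in which every operator is an explicit matrix. A minimal plain-NumPy sketch (two pixels, trivial response; purely illustrative, not the NIFTy API):

```python
import numpy as np

# Toy Wiener filter in two pixels: S, N, R are explicit 2x2 matrices.
S = np.array([[2.0, 0.5], [0.5, 1.0]])   # signal covariance (positive definite)
N = 0.2 * np.eye(2)                      # noise covariance
R = np.eye(2)                            # trivial response

rng = np.random.default_rng(0)
s = rng.multivariate_normal(np.zeros(2), S)   # draw a signal realization
n = rng.multivariate_normal(np.zeros(2), N)   # draw noise
d = R @ s + n                                 # data

# Posterior covariance D, information source j, posterior mean m = D j
D = np.linalg.inv(np.linalg.inv(S) + R.T @ np.linalg.inv(N) @ R)
j = R.T @ np.linalg.inv(N) @ d
m = D @ j
```

In higher dimensions one never builds $D$ explicitly; that is exactly what the iterative machinery below avoids.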
%% Cell type:markdown id: tags:
## Wiener Filter: Example
- We assume statistical homogeneity and isotropy. Therefore the signal covariance $S$ is diagonal in harmonic space and is described by a one-dimensional power spectrum, assumed here to be $$P(k) = P_0\,\left(1+\left(\frac{k}{k_0}\right)^2\right)^{-\gamma /2},$$
with $P_0 = 0.2, k_0 = 5, \gamma = 4$.
- $N = 0.2 \cdot \mathbb{1}$.
- Number of data points $N_{pix} = 512$.
- Reconstruction in harmonic space.
- Response operator:
$$R = FFT_{\text{harmonic} \rightarrow \text{position}}$$
%% Cell type:code id: tags:
``` python
N_pixels = 512  # Number of pixels

def pow_spec(k):
    P0, k0, gamma = [.2, 5, 4]
    return P0 / ((1. + (k/k0)**2)**(gamma / 2))
```
%% Cell type:markdown id: tags:
## Wiener Filter: Implementation
%% Cell type:markdown id: tags:
### Import Modules
%% Cell type:code id: tags:
``` python
import numpy as np
import nifty6 as ift
import matplotlib.pyplot as plt
%matplotlib inline
```
%% Cell type:markdown id: tags:
### Implement Propagator
%% Cell type:code id: tags:
``` python
def Curvature(R, N, Sh):
    IC = ift.GradientNormController(iteration_limit=50000,
                                    tol_abs_gradnorm=0.1)
    # WienerFilterCurvature is (R.adjoint*N.inverse*R + Sh.inverse) plus some
    # handy helper methods.
    return ift.WienerFilterCurvature(R, N, Sh, iteration_controller=IC,
                                     iteration_controller_sampling=IC)
```
%% Cell type:markdown id: tags:
### Conjugate Gradient Preconditioning
- $D$ is defined via:
$$D^{-1} = \mathcal S_h^{-1} + R^\dagger N^{-1} R.$$
In the end, we want to apply $D$ to $j$, i.e. we need the inverse action of $D^{-1}$. This is done numerically (algorithm: *Conjugate Gradient*).
<!--
- One can define the *condition number* of a non-singular and normal matrix $A$:
$$\kappa (A) := \frac{|\lambda_{\text{max}}|}{|\lambda_{\text{min}}|},$$
where $\lambda_{\text{max}}$ and $\lambda_{\text{min}}$ are the largest and smallest eigenvalues of $A$, respectively.
- The larger $\kappa$, the slower Conjugate Gradient converges.
- By default, Conjugate Gradient solves $D^{-1} m = j$ for $m$, where $D^{-1}$ can be badly conditioned. If one knows a non-singular matrix $T$ for which $TD^{-1}$ is better conditioned, one can solve the equivalent problem
$$\tilde A m = \tilde j,$$
where $\tilde A = T D^{-1}$ and $\tilde j = Tj$.
- In our case, $S^{-1}$ is responsible for the bad conditioning of $D$, depending on the chosen power spectrum. Thus, we choose
$$T = \mathcal F^\dagger S_h^{-1} \mathcal F.$$
-->
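As an aside, the Conjugate Gradient algorithm used for the inverse action of $D^{-1}$ can be sketched in a few lines of plain NumPy for a dense symmetric positive definite system (illustrative only; NIFTy ships its own, operator-based implementation):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, maxiter=1000):
    """Solve A x = b for symmetric positive definite A (dense sketch)."""
    x = np.zeros_like(b)
    r = b - A @ x          # residual
    p = r.copy()           # search direction
    rs = r @ r
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p  # new direction, conjugate to previous ones
        rs = rs_new
    return x
```

The convergence rate depends on the condition number of $A$, which is why the preconditioning discussion above matters.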
%% Cell type:markdown id: tags:
### Generate Mock Data
- Generate fields $s$ and $n$ with the given covariances.
- Calculate $d$.
%% Cell type:code id: tags:
``` python
s_space = ift.RGSpace(N_pixels)
h_space = s_space.get_default_codomain()
HT = ift.HarmonicTransformOperator(h_space, target=s_space)

# Operators
Sh = ift.create_power_operator(h_space, power_spectrum=pow_spec)
R = HT  # *ift.create_harmonic_smoothing_operator((h_space,), 0, 0.02)

# Fields and data
sh = Sh.draw_sample()
noiseless_data = R(sh)
noise_amplitude = np.sqrt(0.2)
N = ift.ScalingOperator(s_space, noise_amplitude**2)

n = ift.Field.from_random(domain=s_space, random_type='normal',
                          std=noise_amplitude, mean=0)
d = noiseless_data + n
j = R.adjoint_times(N.inverse_times(d))
curv = Curvature(R=R, N=N, Sh=Sh)
D = curv.inverse
```
%% Cell type:markdown id: tags:
### Run Wiener Filter
%% Cell type:code id: tags:
``` python
m = D(j)
```
%% Cell type:markdown id: tags:
### Signal Reconstruction
%% Cell type:code id: tags:
``` python
# Get signal data and reconstruction data
s_data = HT(sh).val
m_data = HT(m).val
d_data = d.val

plt.figure(figsize=(15, 10))
plt.plot(s_data, 'r', label="Signal", linewidth=3)
plt.plot(d_data, 'k.', label="Data")
plt.plot(m_data, 'k', label="Reconstruction", linewidth=3)
plt.title("Reconstruction")
plt.legend()
plt.show()
```
%% Cell type:code id: tags:
``` python
plt.figure(figsize=(15, 10))
plt.plot(s_data - s_data, 'r', label="Signal", linewidth=3)
plt.plot(d_data - s_data, 'k.', label="Data")
plt.plot(m_data - s_data, 'k', label="Reconstruction", linewidth=3)
plt.axhspan(-noise_amplitude, noise_amplitude, facecolor='0.9', alpha=.5)
plt.title("Residuals")
plt.legend()
plt.show()
```
%% Cell type:markdown id: tags:
### Power Spectrum
%% Cell type:code id: tags:
``` python
s_power_data = ift.power_analyze(sh).val
m_power_data = ift.power_analyze(m).val
plt.figure(figsize=(15, 10))
plt.loglog()
plt.xlim(1, int(N_pixels/2))
ymin = min(m_power_data)
plt.ylim(ymin, 1)
xs = np.arange(1, int(N_pixels/2), .1)
plt.plot(xs, pow_spec(xs), label="True Power Spectrum", color='k', alpha=0.5)
plt.plot(s_power_data, 'r', label="Signal")
plt.plot(m_power_data, 'k', label="Reconstruction")
plt.axhline(noise_amplitude**2 / N_pixels, color="k", linestyle='--', label="Noise level", alpha=.5)
plt.axhspan(noise_amplitude**2 / N_pixels, ymin, facecolor='0.9', alpha=.5)
plt.title("Power Spectrum")
plt.legend()
plt.show()
```
%% Cell type:markdown id: tags:
## Wiener Filter on Incomplete Data
%% Cell type:code id: tags:
``` python
# Operators
Sh = ift.create_power_operator(h_space, power_spectrum=pow_spec)
N = ift.ScalingOperator(s_space, noise_amplitude**2)
# R is defined below

# Fields
sh = Sh.draw_sample()
s = HT(sh)
n = ift.Field.from_random(domain=s_space, random_type='normal',
                          std=noise_amplitude, mean=0)
```
%% Cell type:markdown id: tags:
### Partially Lose Data
%% Cell type:code id: tags:
``` python
l = int(N_pixels * 0.2)
h = int(N_pixels * 0.2 * 2)
mask = np.full(s_space.shape, 1.)
mask[l:h] = 0
mask = ift.Field.from_raw(s_space, mask)

R = ift.DiagonalOperator(mask)(HT)

n = n.val_rw()
n[l:h] = 0
n = ift.Field.from_raw(s_space, n)

d = R(sh) + n
```
%% Cell type:code id: tags:
``` python
curv = Curvature(R=R, N=N, Sh=Sh)
D = curv.inverse
j = R.adjoint_times(N.inverse_times(d))
m = D(j)
```
%% Cell type:markdown id: tags:
### Compute Uncertainty
%% Cell type:code id: tags:
``` python
m_mean, m_var = ift.probe_with_posterior_samples(curv, HT, 200)
```
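`probe_with_posterior_samples` estimates the posterior mean and per-pixel variance from repeated sample draws. The underlying idea can be sketched standalone in NumPy (the helper name and callable interface here are hypothetical, not NIFTy's):

```python
import numpy as np

def sample_mean_and_variance(draw_sample, n_samples=200):
    """Estimate mean and per-pixel variance from repeated sample draws.

    `draw_sample` is any callable returning one sample as a 1-D array;
    this mimics what posterior-sample probing does internally.
    """
    samples = np.array([draw_sample() for _ in range(n_samples)])
    return samples.mean(axis=0), samples.var(axis=0)
```

The square root of the estimated variance is what is plotted below as the uncertainty band.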
%% Cell type:markdown id: tags:
### Get Data
%% Cell type:code id: tags:
``` python
# Get signal data and reconstruction data
s_data = s.val
m_data = HT(m).val
m_var_data = m_var.val
uncertainty = np.sqrt(m_var_data)
d_data = d.val_rw()

# Set lost data to NaN for proper plotting
d_data[d_data == 0] = np.nan
```
%% Cell type:code id: tags:
``` python
fig = plt.figure(figsize=(15, 10))
plt.axvspan(l, h, facecolor='0.8', alpha=0.5)
plt.fill_between(range(N_pixels), m_data - uncertainty, m_data + uncertainty,
                 facecolor='0.5', alpha=0.5)
plt.plot(s_data, 'r', label="Signal", alpha=1, linewidth=3)
plt.plot(d_data, 'k.', label="Data")
plt.plot(m_data, 'k', label="Reconstruction", linewidth=3)
plt.title("Reconstruction of incomplete data")
plt.legend()
```
%% Cell type:markdown id: tags:
# 2d Example
%% Cell type:code id: tags:
``` python
N_pixels = 256  # Number of pixels
sigma2 = 2.     # Noise variance

def pow_spec(k):
    P0, k0, gamma = [.2, 2, 4]
    return P0 * (1. + (k/k0)**2)**(-gamma/2)

s_space = ift.RGSpace([N_pixels, N_pixels])
```
%% Cell type:code id: tags:
``` python
h_space = s_space.get_default_codomain()
HT = ift.HarmonicTransformOperator(h_space, s_space)

# Operators
Sh = ift.create_power_operator(h_space, power_spectrum=pow_spec)
N = ift.ScalingOperator(s_space, sigma2)

# Fields and data
sh = Sh.draw_sample()
n = ift.Field.from_random(domain=s_space, random_type='normal',
                          std=np.sqrt(sigma2), mean=0)

# Lose some data
l = int(N_pixels * 0.33)
h = int(N_pixels * 0.33 * 2)
mask = np.full(s_space.shape, 1.)
mask[l:h, l:h] = 0.
mask = ift.Field.from_raw(s_space, mask)

R = ift.DiagonalOperator(mask)(HT)

n = n.val_rw()
n[l:h, l:h] = 0
n = ift.Field.from_raw(s_space, n)

curv = Curvature(R=R, N=N, Sh=Sh)
D = curv.inverse
d = R(sh) + n
j = R.adjoint_times(N.inverse_times(d))

# Run Wiener filter
m = D(j)

# Uncertainty
m_mean, m_var = ift.probe_with_posterior_samples(curv, HT, 20)

# Get data
s_data = HT(sh).val
m_data = HT(m).val
m_var_data = m_var.val
d_data = d.val
uncertainty = np.sqrt(np.abs(m_var_data))
```
%% Cell type:code id: tags:
``` python
cm = ['magma', 'inferno', 'plasma', 'viridis'][1]
mi = np.min(s_data)
ma = np.max(s_data)

fig, axes = plt.subplots(1, 2, figsize=(15, 7))
data = [s_data, d_data]
caption = ["Signal", "Data"]

for ax in axes.flat:
    im = ax.imshow(data.pop(0), interpolation='nearest', cmap=cm, vmin=mi,
                   vmax=ma)
    ax.set_title(caption.pop(0))

fig.subplots_adjust(right=0.8)
cbar_ax = fig.add_axes([0.85, 0.15, 0.05, 0.7])
fig.colorbar(im, cax=cbar_ax)
```
%% Cell type:code id: tags:
``` python
mi = np.min(s_data)
ma = np.max(s_data)

fig, axes = plt.subplots(3, 2, figsize=(15, 22.5))
sample = HT(curv.draw_sample(from_inverse=True) + m).val
post_mean = (m_mean + HT(m)).val
data = [s_data, m_data, post_mean, sample, s_data - m_data, uncertainty]
caption = ["Signal", "Reconstruction", "Posterior mean", "Sample", "Residuals", "Uncertainty Map"]

for ax in axes.flat:
    im = ax.imshow(data.pop(0), interpolation='nearest', cmap=cm, vmin=mi, vmax=ma)
    ax.set_title(caption.pop(0))

fig.subplots_adjust(right=0.8)
cbar_ax = fig.add_axes([.85, 0.15, 0.05, 0.7])
fig.colorbar(im, cax=cbar_ax)
```
%% Cell type:markdown id: tags:
### Is the uncertainty map reliable?
%% Cell type:code id: tags:
``` python
precise = (np.abs(s_data - m_data) < uncertainty)
print("Error within uncertainty map bounds: " +
      str(np.sum(precise) * 100 / N_pixels**2) + "%")
plt.figure(figsize=(15, 10))
plt.imshow(precise.astype(float), cmap="brg")
plt.colorbar()
```
%% Cell type:markdown id: tags:
# Start Coding
## NIFTy Repository + Installation Guide
https://gitlab.mpcdf.mpg.de/ift/NIFTy
NIFTy v5 is **more or less stable**!
...
@@ -5,8 +5,6 @@ import numpy as np
 import nifty6 as ift
-np.random.seed(40)
 N0s, a0s, b0s, c0s = [], [], [], []
 for ii in range(10, 26):
@@ -15,15 +13,15 @@ for ii in range(10, 26):
     N = int(2**ii)
     print('N = {}'.format(N))
-    uv = np.random.rand(N, 2) - 0.5
-    vis = np.random.randn(N) + 1j*np.random.randn(N)
+    rng = ift.random.current_rng()
+    uv = rng.uniform(-.5, .5, (N, 2))
+    vis = rng.normal(0., 1., N) + 1j*rng.normal(0., 1., N)
     uvspace = ift.RGSpace((nu, nv))
     visspace = ift.UnstructuredDomain(N)
-    img = np.random.randn(nu*nv)
-    img = img.reshape((nu, nv))
+    img = rng.standard_normal((nu, nv))
     img = ift.makeField(uvspace, img)
     t0 = time()
...
@@ -27,8 +27,6 @@ import numpy as np
 import nifty6 as ift
 if __name__ == '__main__':
-    np.random.seed(41)
     # Set up the position space of the signal
     mode = 2
     if mode == 0:
@@ -62,7 +60,7 @@ if __name__ == '__main__':
     p = R(sky)
     mock_position = ift.from_random('normal', harmonic_space)
     tmp = p(mock_position).val.astype(np.float64)
-    data = np.random.binomial(1, tmp)
+    data = ift.random.current_rng().binomial(1, tmp)
     data = ift.Field.from_raw(R.target, data)
     # Compute likelihood and Hamiltonian
...
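The replacement relies on `numpy.random.Generator.binomial`, which takes the same arguments as the legacy `np.random.binomial` and with $n=1$ yields Bernoulli draws. A standalone sketch with mock probabilities (the probability values are made up for illustration):

```python
import numpy as np

# Generator.binomial(n, p) with n=1 produces Bernoulli (0/1) draws,
# one per entry of the probability array.
rng = np.random.default_rng(41)
p = np.clip(np.abs(rng.normal(0., 0.3, size=8)), 0., 1.)  # mock per-pixel probabilities
data = rng.binomial(1, p)
```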
@@ -46,8 +46,6 @@ def make_random_mask():
 if __name__ == '__main__':
-    np.random.seed(42)
     # Choose space on which the signal field is defined
     if len(sys.argv) == 2:
         mode = int(sys.argv[1])
...
@@ -44,8 +44,6 @@ def exposure_2d():
 if __name__ == '__main__':
-    np.random.seed(42)
     # Choose space on which the signal field is defined
     if len(sys.argv) == 2:
         mode = int(sys.argv[1])
@@ -94,7 +92,7 @@ if __name__ == '__main__':
     lamb = R(sky)
     mock_position = ift.from_random('normal', domain)
     data = lamb(mock_position)
-    data = np.random.poisson(data.val.astype(np.float64))
+    data = ift.random.current_rng().poisson(data.val.astype(np.float64))
     data = ift.Field.from_raw(d_space, data)
     likelihood = ift.PoissonianEnergy(data) @ lamb
...
@@ -33,20 +33,18 @@ import nifty6 as ift
 def random_los(n_los):
-    starts = list(np.random.uniform(0, 1, (n_los, 2)).T)
-    ends = list(np.random.uniform(0, 1, (n_los, 2)).T)
+    starts = list(ift.random.current_rng().random((n_los, 2)).T)
+    ends = list(ift.random.current_rng().random((n_los, 2)).T)
     return starts, ends
 def radial_los(n_los):
-    starts = list(np.random.uniform(0, 1, (n_los, 2)).T)
-    ends = list(0.5 + 0*np.random.uniform(0, 1, (n_los, 2)).T)
+    starts = list(ift.random.current_rng().random((n_los, 2)).T)
+    ends = list(0.5 + 0*ift.random.current_rng().random((n_los, 2)).T)
     return starts, ends
 if __name__ == '__main__':
-    np.random.seed(420)
     # Choose between random line-of-sight response (mode=0) and radial lines
     # of sight (mode=1)
     if len(sys.argv) == 2:
...
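`Generator.random` draws uniformly from $[0, 1)$, so `rng.random((n_los, 2)).T` matches the old `np.random.uniform(0, 1, (n_los, 2)).T` in distribution. The endpoint generation can be sketched standalone, passing the generator explicitly instead of going through NIFTy:

```python
import numpy as np

def random_los(n_los, rng):
    # Uniform start/end points in the unit square, returned as
    # [x-coordinate array, y-coordinate array] pairs via the transpose.
    starts = list(rng.random((n_los, 2)).T)
    ends = list(rng.random((n_los, 2)).T)
    return starts, ends

starts, ends = random_los(5, np.random.default_rng(420))
```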
@@ -44,20 +44,18 @@ class SingleDomain(ift.LinearOperator):
 def random_los(n_los):
-    starts = list(np.random.uniform(0, 1, (n_los, 2)).T)
-    ends = list(np.random.uniform(0, 1, (n_los, 2)).T)
+    starts = list(ift.random.current_rng().random((n_los, 2)).T)
+    ends = list(ift.random.current_rng().random((n_los, 2)).T)
     return starts, ends
 def radial_los(n_los):
-    starts = list(np.random.uniform(0, 1, (n_los, 2)).T)
-    ends = list(0.5 + 0*np.random.uniform(0, 1, (n_los, 2)).T)
+    starts = list(ift.random.current_rng().random((n_los, 2)).T)
+    ends = list(0.5 + 0*ift.random.current_rng().random((n_los, 2)).T)
     return starts, ends
 if __name__ == '__main__':
-    np.random.seed(43)
     # Choose between random line-of-sight response (mode=0) and radial lines
     # of sight (mode=1)
     if len(sys.argv) == 2:
...
@@ -19,7 +19,6 @@ import matplotlib.pyplot as plt
 import numpy as np
 import nifty6 as ift
-np.random.seed(12)
 def polynomial(coefficients, sampling_points):
@@ -86,7 +85,7 @@ if __name__ == '__main__':
     N_params = 10
     N_samples = 100
     size = (12,)
-    x = np.random.random(size) * 10
+    x = ift.random.current_rng().random(size) * 10
     y = np.sin(x**2) * x**3
     var = np.full_like(y, y.var() / 10)
     var[-2] *= 4
...
 from .version import __version__
+from . import random
 from .domains.domain import Domain
 from .domains.structured_domain import StructuredDomain
 from .domains.unstructured_domain import UnstructuredDomain
...
@@ -25,6 +25,7 @@ from ..field import Field
 from ..linearization import Linearization
 from ..operators.operator import Operator
 from ..sugar import makeOp
+from .. import random
 def _f_on_np(f, arr):
@@ -67,7 +68,8 @@ class _InterpolationOperator(Operator):
         if table_func is not None:
             if inv_table_func is None:
                 raise ValueError
-            a = func(np.random.randn(10))
+            # MR FIXME: not sure whether we should have this in production code
+            a = func(random.current_rng().random(10))
             a1 = _f_on_np(lambda x: inv_table_func(table_func(x)), a)
             np.testing.assert_allclose(a, a1)
             self._table = _f_on_np(table_func, self._table)
...
@@ -11,7 +11,7 @@
 # You should have received a copy of the GNU General Public License
 # along with this program. If not, see <http://www.gnu.org/licenses/>.
 #
-# Copyright(C) 2013-2019 Max-Planck-Society
+# Copyright(C) 2013-2020 Max-Planck-Society
 #
 # NIFTy is being developed at the Max-Planck-Institut fuer Astrophysik.
@@ -113,10 +113,6 @@ class MetricGaussianKL_MPI(Energy):
         preconditioning is done by default.
     _samples : None
         Only a parameter for internal uses. Typically not to be set by users.
-    seed_offset : int
-        A parameter with which one can control from which seed the samples
-        are drawn. Per default, the seed is different for MPI tasks, but the
-        same every time this class is initialized.
     Note
     ----
@@ -132,7 +128,7 @@ class MetricGaussianKL_MPI(Energy):
     def __init__(self, mean, hamiltonian, n_samples, constants=[],
                  point_estimates=[], mirror_samples=False,
-                 napprox=0, _samples=None, seed_offset=0,
+                 napprox=0, _samples=None,
                  lh_sampling_dtype=np.float64):
         super(MetricGaussianKL_MPI, self).__init__(mean)
@@ -159,10 +155,10 @@ class MetricGaussianKL_MPI(Energy):
         if napprox > 1:
             met._approximation = makeOp(approximation2endo(met, napprox))
         _samples = []
-        rand_state = np.random.get_state()
+        sseq = random.spawn_sseq(n_samples)
         for i in range(lo, hi):
             if mirror_samples:
-                np.random.seed(i//2+seed_offset)
+                random.push_sseq(sseq[i//2])
                 if (i % 2) and (i-1 >= lo):
                     _samples.append(-_samples[-1])
@@ -170,16 +166,16 @@ class MetricGaussianKL_MPI(Energy):
                     _samples.append(((i % 2)*2-1) *
                                     met.draw_sample(from_inverse=True,
                                                     dtype=lh_sampling_dtype))
+                random.pop_sseq()
             else:
-                np.random.seed(i+seed_offset)
+                random.push_sseq(sseq[i])
                 _samples.append(met.draw_sample(from_inverse=True,
                                                 dtype=lh_sampling_dtype))
-        np.random.set_state(rand_state)
+                random.pop_sseq()
         _samples = tuple(_samples)
         if mirror_samples:
             n_samples *= 2
         self._samples = _samples
-        self._seed_offset = seed_offset
         self._n_samples = n_samples
         self._lin = Linearization.make_partial_var(mean, constants)
         v, g = None, None
@@ -205,7 +201,7 @@ class MetricGaussianKL_MPI(Energy):
         return MetricGaussianKL_MPI(
             position, self._hamiltonian, self._n_samples, self._constants,
             self._point_estimates, _samples=self._samples,
-            seed_offset=self._seed_offset, lh_sampling_dtype=self._sampdt)
+            lh_sampling_dtype=self._sampdt)
     @property
     def value(self):
@@ -245,11 +241,13 @@ class MetricGaussianKL_MPI(Energy):
             raise NotImplementedError()
         lin = self._lin.with_want_metric()
         samp = full(self._hamiltonian.domain, 0.)
-        rand_state = np.random.get_state()
-        np.random.seed(rank+np.random.randint(99999))
-        for v in self._samples:
+        sseq = random.spawn_sseq(n_samples)
+        for i, v in enumerate(self._samples):
+            # FIXME: this is not yet correct. We need to use the _global_ sample
+            # index, not the local one!
+            random.push_sseq(sseq[i])
             samp = samp + self._hamiltonian(lin+v).metric.draw_sample(from_inverse=False, dtype=dtype)
-        np.random.set_state(rand_state)
+            random.pop_sseq()
         return allreduce_sum_field(samp)
     def metric_sample(self, from_inverse=False, dtype=np.float64):
...
@@ -11,21 +11,50 @@
 # You should have received a copy of the GNU General Public License
 # along with this program. If not, see <http://www.gnu.org/licenses/>.
 #
-# Copyright(C) 2013-2019 Max-Planck-Society
+# Copyright(C) 2013-2020 Max-Planck-Society
 #
 # NIFTy is being developed at the Max-Planck-Institut fuer Astrophysik.
 import numpy as np
+_sseq = [np.random.SeedSequence(42)]
+_rng = [np.random.default_rng(_sseq[-1])]
+def