Commit dd8e0f42 authored by Martin Reinecke

Merge branch 'move_getting_started_to_docs' into 'NIFTy_8'

Add getting started to docs

See merge request !666
parents 88258c6e b530bdb1
Pipeline #106640 passed with stages in 34 minutes and 30 seconds
# never store the git version file
git_version.py
+docs/source/user/getting_started_0.rst
# custom
*.txt
setup.cfg
...
@@ -8,7 +8,7 @@
}
},
"source": [
-"# A NIFTy demonstration"
+"# Code example: Wiener filter"
]
},
{
@@ -19,7 +19,7 @@
}
},
"source": [
-"## IFT: Big Picture\n",
+"## Introduction\n",
"IFT starting point:\n",
"\n",
"$$d = Rs+n$$\n",
@@ -28,9 +28,6 @@
"\n",
"IFT aims at **inverting** the above uninvertible problem in the **best possible way** using Bayesian statistics.\n",
"\n",
-"\n",
-"## NIFTy\n",
-"\n",
"NIFTy (Numerical Information Field Theory) is a Python framework in which IFT problems can be tackled easily.\n",
"\n",
"Main Interfaces:\n",
@@ -48,7 +45,7 @@
}
},
"source": [
-"## Wiener Filter: Formulae\n",
+"## Wiener filter on one-dimensional fields\n",
"\n",
"### Assumptions\n",
"\n",
@@ -61,11 +58,9 @@
"$$\mathcal P (s|d) \propto P(s,d) = \mathcal G(d-Rs,N) \,\mathcal G(s,S) \propto \mathcal G (s-m,D) $$\n",
"\n",
"where\n",
-"$$\begin{align}\n",
-"m &= Dj \\\n",
-"D^{-1}&= (S^{-1} +R^\dagger N^{-1} R )\\\n",
-"j &= R^\dagger N^{-1} d\n",
-"\end{align}$$\n",
+"$$m = Dj$$\n",
+"with\n",
+"$$D = (S^{-1} +R^\dagger N^{-1} R)^{-1} , \quad j = R^\dagger N^{-1} d.$$\n",
"\n",
"Let us implement this in NIFTy!"
]
@@ -78,7 +73,7 @@
}
},
"source": [
-"## Wiener Filter: Example\n",
+"### In NIFTy\n",
"\n",
"- We assume statistical homogeneity and isotropy. Therefore the signal covariance $S$ is diagonal in harmonic space, and is described by a one-dimensional power spectrum, assumed here as $$P(k) = P_0\,\left(1+\left(\frac{k}{k_0}\right)^2\right)^{-\gamma /2},$$\n",
"with $P_0 = 0.2, k_0 = 5, \gamma = 4$.\n",
@@ -114,7 +109,7 @@
}
},
"source": [
-"## Wiener Filter: Implementation"
+"### Implementation"
]
},
{
@@ -125,7 +120,7 @@
}
},
"source": [
-"### Import Modules"
+"#### Import Modules"
]
},
{
@@ -142,7 +137,8 @@
"import numpy as np\n",
"import nifty8 as ift\n",
"import matplotlib.pyplot as plt\n",
-"%matplotlib inline"
+"plt.rcParams['figure.dpi'] = 100\n",
+"plt.style.use(\"seaborn-notebook\")"
]
},
{
@@ -153,7 +149,7 @@
}
},
"source": [
-"### Implement Propagator"
+"#### Implement Propagator"
]
},
{
@@ -182,7 +178,7 @@
}
},
"source": [
-"### Conjugate Gradient Preconditioning\n",
+"#### Conjugate Gradient Preconditioning\n",
"\n",
"- $D$ is defined via:\n",
"$$D^{-1} = \mathcal S_h^{-1} + R^\dagger N^{-1} R.$$\n",
@@ -213,7 +209,7 @@
}
},
"source": [
-"### Generate Mock data\n",
+"#### Generate Mock data\n",
"\n",
"- Generate a field $s$ and $n$ with given covariances.\n",
"- Calculate $d$."
@@ -257,7 +253,7 @@
}
},
"source": [
-"### Run Wiener Filter"
+"#### Run Wiener Filter"
]
},
{
@@ -281,7 +277,7 @@
}
},
"source": [
-"### Signal Reconstruction"
+"#### Results"
]
},
{
@@ -299,10 +295,9 @@
"m_data = HT(m).val\n",
"d_data = d.val\n",
"\n",
-"plt.figure(figsize=(15,10))\n",
-"plt.plot(s_data, 'r', label=\"Signal\", linewidth=3)\n",
+"plt.plot(s_data, 'r', label=\"Signal\", linewidth=2)\n",
"plt.plot(d_data, 'k.', label=\"Data\")\n",
-"plt.plot(m_data, 'k', label=\"Reconstruction\",linewidth=3)\n",
+"plt.plot(m_data, 'k', label=\"Reconstruction\",linewidth=2)\n",
"plt.title(\"Reconstruction\")\n",
"plt.legend()\n",
"plt.show()"
@@ -318,10 +313,9 @@
},
"outputs": [],
"source": [
-"plt.figure(figsize=(15,10))\n",
-"plt.plot(s_data - s_data, 'r', label=\"Signal\", linewidth=3)\n",
+"plt.plot(s_data - s_data, 'r', label=\"Signal\", linewidth=2)\n",
"plt.plot(d_data - s_data, 'k.', label=\"Data\")\n",
-"plt.plot(m_data - s_data, 'k', label=\"Reconstruction\",linewidth=3)\n",
+"plt.plot(m_data - s_data, 'k', label=\"Reconstruction\",linewidth=2)\n",
"plt.axhspan(-noise_amplitude,noise_amplitude, facecolor='0.9', alpha=.5)\n",
"plt.title(\"Residuals\")\n",
"plt.legend()\n",
@@ -336,7 +330,7 @@
}
},
"source": [
-"### Power Spectrum"
+"#### Power Spectrum"
]
},
{
@@ -351,7 +345,6 @@
"source": [
"s_power_data = ift.power_analyze(sh).val\n",
"m_power_data = ift.power_analyze(m).val\n",
-"plt.figure(figsize=(15,10))\n",
"plt.loglog()\n",
"plt.xlim(1, int(N_pixels/2))\n",
"ymin = min(m_power_data)\n",
@@ -516,12 +509,11 @@
},
"outputs": [],
"source": [
-"fig = plt.figure(figsize=(15,10))\n",
"plt.axvspan(l, h, facecolor='0.8',alpha=0.5)\n",
"plt.fill_between(range(N_pixels), m_data - uncertainty, m_data + uncertainty, facecolor='0.5', alpha=0.5)\n",
-"plt.plot(s_data, 'r', label=\"Signal\", alpha=1, linewidth=3)\n",
+"plt.plot(s_data, 'r', label=\"Signal\", alpha=1, linewidth=2)\n",
"plt.plot(d_data, 'k.', label=\"Data\")\n",
-"plt.plot(m_data, 'k', label=\"Reconstruction\", linewidth=3)\n",
+"plt.plot(m_data, 'k', label=\"Reconstruction\", linewidth=2)\n",
"plt.title(\"Reconstruction of incomplete data\")\n",
"plt.legend()"
]
@@ -534,7 +526,7 @@
}
},
"source": [
-"# 2d Example"
+"## Wiener filter on two-dimensional field"
]
},
{
@@ -618,18 +610,18 @@
},
"outputs": [],
"source": [
-"cm = ['magma', 'inferno', 'plasma', 'viridis'][1]\n",
+"cmap = ['magma', 'inferno', 'plasma', 'viridis'][1]\n",
"\n",
"mi = np.min(s_data)\n",
"ma = np.max(s_data)\n",
"\n",
-"fig, axes = plt.subplots(1, 2, figsize=(15, 7))\n",
+"fig, axes = plt.subplots(1, 2)\n",
"\n",
"data = [s_data, d_data]\n",
"caption = [\"Signal\", \"Data\"]\n",
"\n",
"for ax in axes.flat:\n",
-" im = ax.imshow(data.pop(0), interpolation='nearest', cmap=cm, vmin=mi,\n",
+" im = ax.imshow(data.pop(0), interpolation='nearest', cmap=cmap, vmin=mi,\n",
" vmax=ma)\n",
" ax.set_title(caption.pop(0))\n",
"\n",
@@ -651,7 +643,7 @@
"mi = np.min(s_data)\n",
"ma = np.max(s_data)\n",
"\n",
-"fig, axes = plt.subplots(3, 2, figsize=(15, 22.5))\n",
+"fig, axes = plt.subplots(3, 2, figsize=(10, 15))\n",
"sample = HT(curv.draw_sample(from_inverse=True)+m).val\n",
"post_mean = (m_mean + HT(m)).val\n",
"\n",
@@ -659,7 +651,7 @@
"caption = [\"Signal\", \"Reconstruction\", \"Posterior mean\", \"Sample\", \"Residuals\", \"Uncertainty Map\"]\n",
"\n",
"for ax in axes.flat:\n",
-" im = ax.imshow(data.pop(0), interpolation='nearest', cmap=cm, vmin=mi, vmax=ma)\n",
+" im = ax.imshow(data.pop(0), interpolation='nearest', cmap=cmap, vmin=mi, vmax=ma)\n",
" ax.set_title(caption.pop(0))\n",
"\n",
"fig.subplots_adjust(right=0.8)\n",
@@ -691,31 +683,9 @@
"precise = (np.abs(s_data-m_data) < uncertainty)\n",
"print(\"Error within uncertainty map bounds: \" + str(np.sum(precise) * 100 / N_pixels**2) + \"%\")\n",
"\n",
-"plt.figure(figsize=(15,10))\n",
"plt.imshow(precise.astype(float), cmap=\"brg\")\n",
"plt.colorbar()"
]
-},
-{
-"cell_type": "markdown",
-"metadata": {
-"slideshow": {
-"slide_type": "slide"
-}
-},
-"source": [
-"# Start Coding\n",
-"## NIFTy Repository + Installation guide\n",
-"\n",
-"https://gitlab.mpcdf.mpg.de/ift/NIFTy\n"
-]
-},
-{
-"cell_type": "code",
-"execution_count": null,
-"metadata": {},
-"outputs": [],
-"source": []
}
],
"metadata": {
@@ -735,7 +705,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
-"version": "3.8.2"
+"version": "3.9.2"
}
},
"nbformat": 4,
...
%% Cell type:markdown id: tags:
-# A NIFTy demonstration
+# Code example: Wiener filter
%% Cell type:markdown id: tags:
-## IFT: Big Picture
+## Introduction
IFT starting point:
$$d = Rs+n$$
Typically, $s$ is a continuous field, $d$ a discrete data vector. Particularly, $R$ is not invertible.
IFT aims at **inverting** the above uninvertible problem in the **best possible way** using Bayesian statistics.
-## NIFTy
NIFTy (Numerical Information Field Theory) is a Python framework in which IFT problems can be tackled easily.
Main Interfaces:
- **Spaces**: Cartesian, 2-Spheres (Healpix, Gauss-Legendre) and their respective harmonic spaces.
- **Fields**: Defined on spaces.
- **Operators**: Acting on fields.
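
To make these three interfaces concrete before turning to the Wiener filter, here is a minimal sketch that uses only `nifty8` calls appearing later in this notebook (`ift.RGSpace`, `ift.Field.from_random`, `ift.ScalingOperator`); it is an illustration, not part of the demo itself.

``` python
import nifty8 as ift

space = ift.RGSpace(16)                          # a regular-grid space
field = ift.Field.from_random(domain=space,      # a field defined on that space
                              random_type='normal',
                              std=1., mean=0.)
op = ift.ScalingOperator(space, 2.)              # an operator acting on fields
print(op(field).val)                             # apply the operator and inspect the raw values
```
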
%% Cell type:markdown id: tags:
-## Wiener Filter: Formulae
+## Wiener filter on one-dimensional fields
### Assumptions
- $d=Rs+n$, $R$ linear operator.
- $\mathcal P (s) = \mathcal G (s,S)$, $\mathcal P (n) = \mathcal G (n,N)$ where $S, N$ are positive definite matrices.
### Posterior
The Posterior is given by:
$$\mathcal P (s|d) \propto P(s,d) = \mathcal G(d-Rs,N) \,\mathcal G(s,S) \propto \mathcal G (s-m,D) $$
where
-$$\begin{align}
-m &= Dj \\
-D^{-1}&= (S^{-1} +R^\dagger N^{-1} R )\\
-j &= R^\dagger N^{-1} d
-\end{align}$$
+$$m = Dj$$
+with
+$$D = (S^{-1} +R^\dagger N^{-1} R)^{-1} , \quad j = R^\dagger N^{-1} d.$$
Let us implement this in NIFTy!
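
Before the NIFTy implementation, the formulae can be checked on a tiny dense toy problem with plain NumPy; the matrices below are arbitrary stand-ins chosen purely for illustration.

``` python
import numpy as np

rng = np.random.default_rng(42)
R = rng.standard_normal((3, 4))            # toy linear response (3 data points, 4 signal pixels)
S = np.diag([2.0, 1.0, 0.5, 0.25])         # toy signal covariance
N = 0.2 * np.eye(3)                        # toy noise covariance

s = rng.multivariate_normal(np.zeros(4), S)
d = R @ s + rng.multivariate_normal(np.zeros(3), N)

D = np.linalg.inv(np.linalg.inv(S) + R.T @ np.linalg.inv(N) @ R)
j = R.T @ np.linalg.inv(N) @ d
m = D @ j                                  # posterior mean, m = Dj
print(m)
```
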
%% Cell type:markdown id: tags:
-## Wiener Filter: Example
+### In NIFTy
- We assume statistical homogeneity and isotropy. Therefore the signal covariance $S$ is diagonal in harmonic space, and is described by a one-dimensional power spectrum, assumed here as $$P(k) = P_0\,\left(1+\left(\frac{k}{k_0}\right)^2\right)^{-\gamma /2},$$
with $P_0 = 0.2, k_0 = 5, \gamma = 4$.
- $N = 0.2 \cdot \mathbb{1}$.
- Number of data points $N_{pix} = 512$.
- reconstruction in harmonic space.
- Response operator:
$$R = FFT_{\text{harmonic} \rightarrow \text{position}}$$
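
Because $S$ is diagonal in harmonic space, drawing a signal realization amounts to scaling white noise by $\sqrt{P(k)}$ and transforming back. The following plain-NumPy sketch illustrates the idea only; normalization conventions differ from NIFTy's and are glossed over here.

``` python
import numpy as np

N_pix = 512
k = np.fft.rfftfreq(N_pix, d=1.0 / N_pix)          # harmonic modes 0 ... N_pix/2
P = 0.2 / (1.0 + (k / 5.0)**2)**2                  # P(k) with P0=0.2, k0=5, gamma=4
xi = (np.random.standard_normal(k.size)
      + 1j * np.random.standard_normal(k.size))    # white noise in harmonic space
s = np.fft.irfft(np.sqrt(P) * xi, n=N_pix)         # sample with (approximately) that spectrum
```
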
%% Cell type:code id: tags:
``` python
N_pixels = 512 # Number of pixels
def pow_spec(k):
    P0, k0, gamma = [.2, 5, 4]
    return P0 / ((1. + (k/k0)**2)**(gamma / 2))
```
%% Cell type:markdown id: tags:
-## Wiener Filter: Implementation
+### Implementation
%% Cell type:markdown id: tags:
-### Import Modules
+#### Import Modules
%% Cell type:code id: tags:
``` python
import numpy as np
import nifty8 as ift
import matplotlib.pyplot as plt
-%matplotlib inline
+plt.rcParams['figure.dpi'] = 100
+plt.style.use("seaborn-notebook")
```
%% Cell type:markdown id: tags:
-### Implement Propagator
+#### Implement Propagator
%% Cell type:code id: tags:
``` python
def Curvature(R, N, Sh):
    IC = ift.GradientNormController(iteration_limit=50000,
                                    tol_abs_gradnorm=0.1)
    # WienerFilterCurvature is (R.adjoint*N.inverse*R + Sh.inverse) plus some handy
    # helper methods.
    return ift.WienerFilterCurvature(R,N,Sh,iteration_controller=IC,iteration_controller_sampling=IC)
```
%% Cell type:markdown id: tags:
-### Conjugate Gradient Preconditioning
+#### Conjugate Gradient Preconditioning
- $D$ is defined via:
$$D^{-1} = \mathcal S_h^{-1} + R^\dagger N^{-1} R.$$
In the end, we want to apply $D$ to $j$, i.e. we need the inverse action of $D^{-1}$. This is done numerically (algorithm: *Conjugate Gradient*).
<!--
- One can define the *condition number* of a non-singular and normal matrix $A$:
$$\kappa (A) := \frac{|\lambda_{\text{max}}|}{|\lambda_{\text{min}}|},$$
where $\lambda_{\text{max}}$ and $\lambda_{\text{min}}$ are the largest and smallest eigenvalue of $A$, respectively.
- The larger $\kappa$ the slower Conjugate Gradient.
- By default, conjugate gradient solves: $D^{-1} m = j$ for $m$, where $D^{-1}$ can be badly conditioned. If one knows a non-singular matrix $T$ for which $TD^{-1}$ is better conditioned, one can solve the equivalent problem:
$$\tilde A m = \tilde j,$$
where $\tilde A = T D^{-1}$ and $\tilde j = Tj$.
- In our case $S^{-1}$ is responsible for the bad conditioning of $D$ depending on the chosen power spectrum. Thus, we choose
$$T = \mathcal F^\dagger S_h^{-1} \mathcal F.$$
-->
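
The same idea can be sketched outside NIFTy: $D$ is never built explicitly; instead $D^{-1}m = j$ is solved iteratively with a solver that only needs the action of $D^{-1}$ on a vector. The toy example below uses SciPy's conjugate gradient (SciPy is an extra assumption here, purely for illustration; NIFTy ships its own minimizers) on small dense stand-ins for $S^{-1}$, $R$ and $N^{-1}$.

``` python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(0)
n_pix = 50
R = np.eye(n_pix)                                   # toy response
N_inv = (1.0 / 0.2) * np.eye(n_pix)                 # toy noise precision
k = np.arange(1, n_pix + 1)
S_inv = np.diag((1.0 + (k / 5.0)**2)**2 / 0.2)      # toy prior precision, 1/P(k)

def D_inv_times(x):
    # action of D^{-1} = S^{-1} + R^T N^{-1} R on a vector
    return S_inv @ x + R.T @ (N_inv @ (R @ x))

d = rng.standard_normal(n_pix)
j = R.T @ (N_inv @ d)
m, info = cg(LinearOperator((n_pix, n_pix), matvec=D_inv_times, dtype=np.float64), j)
print(info)                                         # 0 means the solver converged
```
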
%% Cell type:markdown id: tags:
-### Generate Mock data
+#### Generate Mock data
- Generate a field $s$ and $n$ with given covariances.
- Calculate $d$.
%% Cell type:code id: tags:
``` python
s_space = ift.RGSpace(N_pixels)
h_space = s_space.get_default_codomain()
HT = ift.HarmonicTransformOperator(h_space, target=s_space)
# Operators
Sh = ift.create_power_operator(h_space, power_spectrum=pow_spec)
R = HT # @ ift.create_harmonic_smoothing_operator((h_space,), 0, 0.02)
# Fields and data
sh = Sh.draw_sample_with_dtype(dtype=np.float64)
noiseless_data=R(sh)
noise_amplitude = np.sqrt(0.2)
N = ift.ScalingOperator(s_space, noise_amplitude**2)
n = ift.Field.from_random(domain=s_space, random_type='normal',
                          std=noise_amplitude, mean=0)
d = noiseless_data + n
j = R.adjoint_times(N.inverse_times(d))
curv = Curvature(R=R, N=N, Sh=Sh)
D = curv.inverse
```
%% Cell type:markdown id: tags:
-### Run Wiener Filter
+#### Run Wiener Filter
%% Cell type:code id: tags:
``` python
m = D(j)
```
%% Cell type:markdown id: tags:
-### Signal Reconstruction
+#### Results
%% Cell type:code id: tags:
``` python
# Get signal data and reconstruction data
s_data = HT(sh).val
m_data = HT(m).val
d_data = d.val
-plt.figure(figsize=(15,10))
-plt.plot(s_data, 'r', label="Signal", linewidth=3)
+plt.plot(s_data, 'r', label="Signal", linewidth=2)
plt.plot(d_data, 'k.', label="Data")
-plt.plot(m_data, 'k', label="Reconstruction",linewidth=3)
+plt.plot(m_data, 'k', label="Reconstruction",linewidth=2)
plt.title("Reconstruction")
plt.legend()
plt.show()
```
%% Cell type:code id: tags:
``` python
-plt.figure(figsize=(15,10))
-plt.plot(s_data - s_data, 'r', label="Signal", linewidth=3)
+plt.plot(s_data - s_data, 'r', label="Signal", linewidth=2)
plt.plot(d_data - s_data, 'k.', label="Data")
-plt.plot(m_data - s_data, 'k', label="Reconstruction",linewidth=3)
+plt.plot(m_data - s_data, 'k', label="Reconstruction",linewidth=2)
plt.axhspan(-noise_amplitude,noise_amplitude, facecolor='0.9', alpha=.5)
plt.title("Residuals")
plt.legend()
plt.show()
```
%% Cell type:markdown id: tags:
-### Power Spectrum
+#### Power Spectrum
%% Cell type:code id: tags:
``` python
s_power_data = ift.power_analyze(sh).val
m_power_data = ift.power_analyze(m).val
-plt.figure(figsize=(15,10))
plt.loglog()
plt.xlim(1, int(N_pixels/2))
ymin = min(m_power_data)
plt.ylim(ymin, 1)
xs = np.arange(1,int(N_pixels/2),.1)
plt.plot(xs, pow_spec(xs), label="True Power Spectrum", color='k',alpha=0.5)
plt.plot(s_power_data, 'r', label="Signal")
plt.plot(m_power_data, 'k', label="Reconstruction")
plt.axhline(noise_amplitude**2 / N_pixels, color="k", linestyle='--', label="Noise level", alpha=.5)
plt.axhspan(noise_amplitude**2 / N_pixels, ymin, facecolor='0.9', alpha=.5)
plt.title("Power Spectrum")
plt.legend()
plt.show()
```
%% Cell type:markdown id: tags:
## Wiener Filter on Incomplete Data
%% Cell type:code id: tags:
``` python
# Operators
Sh = ift.create_power_operator(h_space, power_spectrum=pow_spec)
N = ift.ScalingOperator(s_space, noise_amplitude**2)
# R is defined below
# Fields
sh = Sh.draw_sample_with_dtype(dtype=np.float64)
s = HT(sh)
n = ift.Field.from_random(domain=s_space, random_type='normal',
                          std=noise_amplitude, mean=0)
```
%% Cell type:markdown id: tags:
### Partially Lose Data
%% Cell type:code id: tags:
``` python
l = int(N_pixels * 0.2)
h = int(N_pixels * 0.2 * 2)
mask = np.full(s_space.shape, 1.)
mask[l:h] = 0
mask = ift.Field.from_raw(s_space, mask)
R = ift.DiagonalOperator(mask)(HT)
n = n.val_rw()
n[l:h] = 0
n = ift.Field.from_raw(s_space, n)
d = R(sh) + n
```
%% Cell type:code id: tags:
``` python
curv = Curvature(R=R, N=N, Sh=Sh)
D = curv.inverse
j = R.adjoint_times(N.inverse_times(d))
m = D(j)
```
%% Cell type:markdown id: tags:
### Compute Uncertainty
%% Cell type:code id: tags:
``` python
m_mean, m_var = ift.probe_with_posterior_samples(curv, HT, 200, np.float64)
```
%% Cell type:markdown id: tags:
### Get data
%% Cell type:code id: tags:
``` python
# Get signal data and reconstruction data
s_data = s.val
m_data = HT(m).val
m_var_data = m_var.val
uncertainty = np.sqrt(m_var_data)
d_data = d.val_rw()
# Set lost data to NaN for proper plotting
d_data[d_data == 0] = np.nan
```
%% Cell type:code id: tags:
``` python
-fig = plt.figure(figsize=(15,10))
plt.axvspan(l, h, facecolor='0.8',alpha=0.5)
plt.fill_between(range(N_pixels), m_data - uncertainty, m_data + uncertainty, facecolor='0.5', alpha=0.5)
-plt.plot(s_data, 'r', label="Signal", alpha=1, linewidth=3)
+plt.plot(s_data, 'r', label="Signal", alpha=1, linewidth=2)