- There, $j$ is the information source defined as $$ j = R^\dagger N^{-1} d.$$
Let us implement this in **NIFTy!** First, we import all the packages we need.
%% Cell type:code id:73573037 tags:
``` python
%matplotlib inline
import numpy as np
import nifty8 as ift
import matplotlib.pyplot as plt

plt.rcParams['figure.dpi'] = 100
plt.style.use("seaborn-notebook")
```
%% Cell type:markdown id:1b447738 tags:
### Implementation in NIFTy
We assume statistical **homogeneity** and **isotropy**, so the signal covariance $S$ is **translation invariant** and only depends on the **absolute value** of the distance.
According to the Wiener-Khinchin theorem, the signal covariance $S$ is diagonal in harmonic space, $$S_{kk^{\prime}} = 2 \pi \delta(k-k^{\prime}) P(k)= \text{diag}(S) \equiv \widehat{S_k},$$ and is fully described by a one-dimensional power spectrum.
We assume the power spectrum to follow a power law, $$P(k) = P_0\,\left(1+\left(\frac{k}{k_0}\right)^2\right)^{-\gamma /2},$$ with $P_0 = 2 \cdot 10^4, \ k_0 = 5, \ \gamma = 4$. Accordingly, the reconstruction starts in harmonic space.
%% Cell type:code id:c3a935c1 tags:
``` python
def pow_spec(k):
    P0, k0, gamma = 2e4, 5, 4
    return P0 / ((1. + (k/k0)**2)**(gamma/2))
```
%% Cell type:markdown id:5ce7c229 tags:
### Spaces and harmonic transformations
- We define our non-harmonic signal space to be Cartesian with $N_{pix} = 512$ being the number of grid cells.
- To connect harmonic and non-harmonic spaces we introduce the Hartley transform $H$ that is closely related to the Fourier transform but maps $\mathbb{R}\rightarrow\mathbb{R}$.
- The covariance $S$ in non-harmonic space is given by $$S = H^{\dagger}\widehat{S_k} H \ .$$
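As a plain-NumPy aside (an illustration, not NIFTy's own implementation), the discrete Hartley transform can be written via the FFT: its kernel is $\mathrm{cas} = \cos + \sin$, it maps real arrays to real arrays, and applying it twice recovers the input up to a factor $N_{pix}$:

``` python
import numpy as np

def dht(x):
    # Discrete Hartley transform via the FFT: with the forward FFT
    # convention, H(x) = Re(F(x)) - Im(F(x)) realises the cas kernel.
    X = np.fft.fft(x)
    return X.real - X.imag

rng = np.random.default_rng(0)
x = rng.standard_normal(16)
Hx = dht(x)
# Real input stays real, and H is its own inverse up to a factor N
assert np.isrealobj(Hx)
assert np.allclose(dht(Hx) / x.size, x)
```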
%% Cell type:code id:f754e1b4 tags:
``` python
# Signal space is a regular Cartesian grid space
N_pix = 512
s_space = ift.RGSpace(N_pix)
```
%% Cell type:code id:6c25ac41 tags:
``` python
# k_space is the harmonic conjugate space of s_space
k_space = s_space.get_default_codomain()
HT = ift.HartleyOperator(k_space, target=s_space)
# Diagonal covariance in harmonic space, defined by the power spectrum
S_k = ift.create_power_operator(k_space, power_spectrum=pow_spec)
# The Sandwich Operator implements S = HT.adjoint @ S_k @ HT and enables NIFTy to sample from S
S = ift.SandwichOperator.make(bun=HT, cheese=S_k)
```
%% Cell type:markdown id:a84f4b80 tags:
### Synthetic Data
- In order to demonstrate the Wiener filter, we use **synthetic data**. Therefore, we draw a sample $\tilde{s}$ from $S$ (see Sampling).
- For simplicity we define the response operator as the identity, $R = \mathbb{1}$.
- We assume the noise covariance to be uncorrelated and constant, $N = 0.2 \cdot \mathbb{1}$, and draw a noise sample $\tilde{n}$.
- Thus the synthetic data is $d = R(\tilde{s}) + \tilde{n}$.
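As a toy NumPy sketch of these bullet points (with hypothetical stand-in arrays, not the NIFTy fields used elsewhere in this notebook): with $R = \mathbb{1}$, the synthetic data is just the signal sample plus Gaussian noise of variance $0.2$:

``` python
import numpy as np

rng = np.random.default_rng(42)
npix = 512
s_tilde = rng.standard_normal(npix)             # stand-in for a sample drawn from S
n_tilde = rng.normal(0.0, np.sqrt(0.2), npix)   # noise sample, N = 0.2 * identity
d = s_tilde + n_tilde                           # R = identity, so d = R(s~) + n~
```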
%% Cell type:markdown id:387bd2c5 tags:
### Sampling
- Assume we have a distribution $\mathcal{G}(b,B)$ we can sample from, and we want to draw a sample from a distribution $\mathcal{G}(c,C)$ with covariance $C$. If the two covariances are connected via the relation $C = ABA^{\dagger}$, one can show that $c = Ab$ with $b \curvearrowleft \mathcal{G}(b,B)$ has a probability distribution with covariance $C$, as desired.
- This is also true for the case $B = \mathbb{1}$, i.e. sampling $b \curvearrowleft \mathcal{G}(b,\mathbb{1})$; then $C = AA^{\dagger}$.
- Note that, if $C$ is diagonal, $A$ is diagonal as well.
- It can be shown that if $C = A + B$, then $c = a + b$ with $a \curvearrowleft \mathcal{G}(a,A)$ and $b \curvearrowleft \mathcal{G}(b,B)$ has a probability distribution with covariance $C$, as desired.
- If we can draw samples from $\mathcal{G}(a,A)$ and we want to draw a sample with covariance $A^{-1}$, one can show that $c = A^{-1}a$ has a probability distribution with covariance $A^{-1}$.
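The rule $C = AA^{\dagger}$ can be checked numerically with a small NumPy sketch (illustrative values): take a Cholesky factor $A$ of a target covariance $C$, map white samples through it, and compare the empirical covariance of the results against $C$:

``` python
import numpy as np

rng = np.random.default_rng(0)
dim, n_samp = 4, 200_000

# Target covariance C and a factor A with C = A @ A.T (Cholesky)
C = np.array([[4., 1., 0., 0.],
              [1., 3., 1., 0.],
              [0., 1., 2., 1.],
              [0., 0., 1., 2.]])
A = np.linalg.cholesky(C)

# Draw white samples b ~ G(0, 1) and map c = A b;
# the c's then have covariance A A^T = C
b = rng.standard_normal((dim, n_samp))
c = A @ b
C_emp = (c @ c.T) / n_samp
assert np.allclose(C_emp, C, atol=0.1)
```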
``` python
# Enable .inverse to invert D via Conjugate Gradient
D_inv = ift.InversionEnabler(D_inv, ic)
D = D_inv.inverse
```
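As a dense-matrix cross-check of the propagator (plain NumPy with small explicit matrices, purely illustrative): for $R = \mathbb{1}$, applying $D = (S^{-1} + N^{-1})^{-1}$ to $j = N^{-1}d$ reproduces the closed-form Wiener filter solution $m = S(S+N)^{-1}d$:

``` python
import numpy as np

rng = np.random.default_rng(1)
n = 8
# Small explicit covariances: S symmetric positive definite, N = 0.2 * identity
M = rng.standard_normal((n, n))
S = M @ M.T + n * np.eye(n)
N = 0.2 * np.eye(n)
d = rng.standard_normal(n)

# Wiener filter with R = identity: D = (S^-1 + N^-1)^-1,  j = N^-1 d,  m = D j
D = np.linalg.inv(np.linalg.inv(S) + np.linalg.inv(N))
j = np.linalg.solve(N, d)
m = D @ j

# Equivalent closed form: m = S (S + N)^-1 d
assert np.allclose(m, S @ np.linalg.solve(S + N, d))
```

NIFTy never builds these matrices explicitly; it applies the operators implicitly and inverts via Conjugate Gradient, but the linear algebra is the same.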
%% Cell type:markdown id:c2ab51d4 tags:
### Apply Wiener Filter
After defining the information source and propagator, we can apply the Wiener filter to obtain the posterior mean $m = \langle s \rangle_{\mathcal{P}(s|d)}$, which is our reconstruction of the signal:
%% Cell type:code id:8e7616c8 tags:
``` python
m = D(j)
```
%% Cell type:markdown id:60b7922d tags:
### Results
%% Cell type:code id:3d4fdd46 tags:
``` python
# `.val` retrieves the underlying numpy array from a NIFTy Field.
```
Now we consider the case that the data is not complete.
This might happen in real situations, as the instrument might not be able to take data at all times.
In order to apply the Wiener filter to this case, we first need to build the response corresponding to the incomplete measurement in NIFTy!
%% Cell type:markdown id:dae020f7 tags:
### Incomplete Measuring / Masking
We need to build a mask operator which cuts out all the unobserved parts of the signal.
Let's assume that we first observe the signal for some time, but then something goes wrong with our instrument and we don't collect data for a while.
After fixing the instrument we can collect data again.
This means that the data lives on an unstructured domain, as there is data missing for the period of time $t_{\text{off}}$ when the instrument was offline.
In order to implement this incomplete measurement we need to define a new response operator $R$ which masks the signal for the time $t_{\text{off}}$.
%% Cell type:code id:14726428 tags:
``` python
# Whole observation time
npix = s_space.size
# Time when the instrument is turned off
l = int(npix * 0.2)
# Time when the instrument is turned on again
h = int(npix * 0.4)
# Initialise a new array for the whole time frame
mask = np.zeros(s_space.shape, bool)
# Define the mask
mask[l:h] = 1
# Turn the numpy array into a NIFTy field
mask = ift.makeField(s_space, mask)
# Define the response operator which masks the places where mask == 1
R = ift.MaskOperator(mask)
```
%% Cell type:markdown id:dd3e8af1 tags:
### Synthetic Data
As in the Wiener filter example with complete data, we now generate synthetic data by propagating the previously drawn prior sample through the incomplete measurement response and adding a noise sample.
Since we have an incomplete measurement, we want to know how uncertain we are about our Wiener filter solution. We can obtain both the mean and the standard deviation by sampling from $D$ and computing them directly from the drawn samples.
In order to enable NIFTy to sample from $D$ we need to use some helper functions.
%% Cell type:code id:53b10df1 tags:
``` python
# This implements the rule for sampling from a sum of covariances
# Number of samples to calculate the posterior standard deviation
n_samples = 200
# Helper function that calculates the mean and the variance from a set of samples efficiently
sc = ift.StatCalculator()
for _ in range(n_samples):
    # Draw a zero-mean sample with covariance D and shift it by m -> G(s-m,D)
    sample = m + D.draw_sample()
    # Add it to the StatCalculator
    sc.add(sample)
# Standard deviation from samples
samples_std = sc.var.sqrt()
# Mean from samples, which converges to m in the limit of infinitely many samples
samples_mean = sc.mean
```
%% Cell type:markdown id:7c1cb854 tags:
### Plots
Let us visualize the results of the Wiener filter $m$, the sampled standard deviation and mean, as well as the true signal (ground truth) and the data.
Since the data lives in data space, we first need to project it back into signal space via $R^{\dagger}d$.
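The back-projection $R^{\dagger}d$ can be sketched with a toy NumPy mask (illustrative only, not NIFTy's MaskOperator): $R$ keeps the observed entries, and its adjoint pads the masked entries with zeros:

``` python
import numpy as np

# Toy version of the mask response: R keeps only the observed entries,
# and its adjoint R^dagger pads the removed (masked) entries with zeros.
mask = np.zeros(10, bool)
mask[2:5] = True                      # entries 2..4 are masked (unobserved)

def R(x):
    return x[~mask]                   # data space: only observed entries

def R_adj(y):
    out = np.zeros(mask.size)
    out[~mask] = y                    # back-project; zeros where data is missing
    return out

x = np.arange(10.0)
back = R_adj(R(x))
assert np.allclose(back[~mask], x[~mask])   # observed entries survive
assert np.allclose(back[mask], 0.0)         # masked entries are zero-filled
```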
%% Cell type:code id:09c0d576 tags:
``` python
plt.axvspan(l, h, facecolor='0.8', alpha=0.3, label="masked area")  # Shade the masked interval