Commit 049e7fa9 authored by Philipp Arras

Merge remote-tracking branch 'origin/NIFTy_7' into various_pa

parents 5e53e253 06f6250f
Pipeline #75689 passed with stages in 13 minutes
@@ -37,7 +37,7 @@ build_docker_from_cache:
test_serial:
  stage: test
  script:
    - pytest-3 -q --cov=nifty7 test
    - >
      python3 -m coverage report --omit "*plot*" | tee coverage.txt
    - >
Changes since NIFTy 6
=====================

*None.*

Changes since NIFTy 5
=====================

Minimum Python version increased to 3.6
---------------------------------------

New operators
-------------

In addition to the below changes, the following operators were introduced:
@@ -20,14 +27,14 @@
* IntegrationOperator: Integrates over subspaces of fields

FFT convention adjusted
-----------------------

When going to harmonic space, NIFTy's FFT operator now uses a minus sign in the
exponent (and, consequently, a plus sign on the adjoint transform). This
convention is consistent with almost all other numerical FFT libraries.
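
Schematically, and only as a sketch of the sign convention (normalisation
factors and the placement of $2\pi$ are omitted here):

$$\tilde f(k) \propto \int f(x)\, e^{-ikx}\,\mathrm{d}x,
\qquad
f(x) \propto \int \tilde f(k)\, e^{+ikx}\,\mathrm{d}k.$$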

Interface change in EndomorphicOperator.draw_sample()
-----------------------------------------------------

Both complex-valued and real-valued Gaussian probability distributions have
Hermitian and positive endomorphisms as covariance. Just by looking at an
@@ -43,7 +50,7 @@ In order to dive into those subtleties I suggest running the following code and
playing around with the dtypes.

```
import nifty7 as ift
import numpy as np

dom = ift.UnstructuredDomain(5)
# @@ -59,29 +66,29 @@  (intermediate lines not shown in this diff view)
print(met.draw_sample())
```
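
Since the middle of that snippet is elided in this diff view, here is a
minimal, self-contained sketch in the same spirit. It assumes that a
`ScalingOperator` (standing in for the metric `met` above) supports the same
sampling interface as the diagonal operators used in the notebook further down:

```
import numpy as np
import nifty7 as ift

dom = ift.UnstructuredDomain(5)
# A trivial Hermitian and positive endomorphism: 2 * identity on `dom`.
met = ift.ScalingOperator(dom, 2.)

# Real-valued Gaussian sample with covariance `met`.
print(met.draw_sample_with_dtype(dtype=np.float64))
# Complex-valued Gaussian sample with the same covariance.
print(met.draw_sample_with_dtype(dtype=np.complex128))
# draw_sample() is the dtype-agnostic call shown in the snippet above.
print(met.draw_sample())
```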

MPI parallelisation over samples in MetricGaussianKL
----------------------------------------------------

The classes `MetricGaussianKL` and `MetricGaussianKL_MPI` have been unified
into one `MetricGaussianKL` class which has MPI support built in.

New approach for random number generation
-----------------------------------------

The code now uses `numpy`'s new `SeedSequence` and `Generator` classes for the
production of random numbers (introduced in numpy 1.17). This greatly simplifies
the generation of reproducible random numbers in the presence of MPI parallelism
and leads to cleaner code overall. Please see the documentation of
`nifty7.random` for details.
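
For background, this is the underlying `numpy` mechanism the paragraph refers
to; it is a plain `numpy` sketch, not the `nifty7.random` interface itself,
whose wrappers are documented in that module:

```
import numpy as np

# One master seed; spawned children give statistically independent but
# fully reproducible streams, e.g. one per MPI task.
sseq = np.random.SeedSequence(42)
children = sseq.spawn(4)
rngs = [np.random.default_rng(c) for c in children]

print(rngs[0].standard_normal(3))
print(rngs[1].standard_normal(3))
```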

Interface Change for from_random and OuterProduct
-------------------------------------------------

`sugar.from_random`, `Field.from_random` and `MultiField.from_random` now take
the domain as their first argument and default to 'normal' for the second
argument. Likewise, `OuterProduct` takes a domain as the first argument and a
field as the second.
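
A short sketch of the new calling convention (the domain and the 'uniform'
distribution below are just illustrative choices):

```
import nifty7 as ift

dom = ift.UnstructuredDomain(10)

f1 = ift.Field.from_random(dom)             # domain first, defaults to 'normal'
f2 = ift.Field.from_random(dom, 'uniform')  # distribution as the second argument
```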

Interface Change for non-linear Operators
-----------------------------------------

The method `Operator.apply()` takes a `Linearization` or a `Field` or a
`MultiField` as input. This has not changed. However, now each non-linear
@@ -98,7 +105,7 @@ behaviour since both `Operator._check_input()` and
fulfilled.

Special functions for complete Field reduction operations
---------------------------------------------------------

So far, reduction operations called on Fields (like `vdot`, `sum`, `integrate`,
`mean`, `var`, `std`, `prod` etc.) returned a scalar when the reduction was
@@ -110,7 +117,7 @@ operate over all subdomains and therefore don't take a `spaces` argument; they
are named `s_vdot`, `s_sum` etc. and always return a scalar.
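
As an illustration (the domains below are made up for the example; it assumes
that `Field.from_random` accepts a tuple of domains and that a partial
reduction returns a `Field` over the remaining subdomain):

```
import nifty7 as ift

dom = (ift.RGSpace(16), ift.RGSpace(8))
f = ift.Field.from_random(dom)

print(f.s_sum())        # scalar: reduction over all subdomains
print(f.sum(spaces=0))  # Field over RGSpace(8): reduction over the first subdomain only
```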

Updates regarding correlated fields
-----------------------------------

The most commonly used model for homogeneous and isotropic correlated fields in
nifty5 has been `SLAmplitude` combined with `CorrelatedField`. This model
@@ -127,7 +134,7 @@ via `napprox` breaks the inference scheme with the new model so `napprox` may no
be used here.

Removal of the standard MPI parallelization scheme:
---------------------------------------------------

When several MPI tasks are present, NIFTy5 distributes every Field over these
tasks by splitting it along the first axis. This approach to parallelism is not
NIFTy - Numerical Information Field Theory
==========================================

[![build status](https://gitlab.mpcdf.mpg.de/ift/NIFTy/badges/NIFTy_7/build.svg)](https://gitlab.mpcdf.mpg.de/ift/NIFTy/commits/NIFTy_7)
[![coverage report](https://gitlab.mpcdf.mpg.de/ift/NIFTy/badges/NIFTy_7/coverage.svg)](https://gitlab.mpcdf.mpg.de/ift/NIFTy/commits/NIFTy_7)

**NIFTy** project homepage:
[http://ift.pages.mpcdf.de/nifty](http://ift.pages.mpcdf.de/nifty)

@@ -59,8 +59,8 @@ Optional dependencies:

### Sources

The current version of NIFTy7 can be obtained by cloning the repository and
switching to the NIFTy_7 branch:

    git clone https://gitlab.mpcdf.mpg.de/ift/nifty.git

@@ -69,10 +69,10 @@ switching to the NIFTy_6 branch:

In the following, we assume a Debian-based distribution. For other
distributions, the "apt" lines will need slight changes.

NIFTy7 and its mandatory dependencies can be installed via:

    sudo apt-get install git python3 python3-pip python3-dev
    pip3 install --user git+https://gitlab.mpcdf.mpg.de/ift/nifty.git@NIFTy_7

Plotting support is added via:

@@ -107,7 +107,7 @@ To run the tests, additional packages are required:

Afterwards the tests (including a coverage report) can be run using the
following command in the repository root:

    pytest-3 --cov=nifty7 test

### First Steps
@@ -21,7 +21,7 @@ from time import time

import matplotlib.pyplot as plt
import numpy as np

import nifty7 as ift

N0s, a0s, b0s, c0s = [], [], [], []

@@ -24,7 +24,7 @@

import numpy as np

import nifty7 as ift

if __name__ == '__main__':
    # Set up the position space of the signal
%% Cell type:markdown id: tags:

# A NIFTy demonstration

%% Cell type:markdown id: tags:

## IFT: Big Picture

IFT starting point:

$$d = Rs+n$$
Typically, $s$ is a continuous field and $d$ a discrete data vector. In particular, $R$ is not invertible.

IFT aims at **inverting** the above non-invertible problem in the **best possible way** using Bayesian statistics.
## NIFTy

NIFTy (Numerical Information Field Theory) is a Python framework in which IFT problems can be tackled easily.

Main Interfaces:

- **Spaces**: Cartesian, 2-Spheres (Healpix, Gauss-Legendre) and their respective harmonic spaces.
- **Fields**: Defined on spaces.
- **Operators**: Acting on fields.

%% Cell type:markdown id: tags:
## Wiener Filter: Formulae

### Assumptions

- $d=Rs+n$, $R$ linear operator.
- $\mathcal P (s) = \mathcal G (s,S)$, $\mathcal P (n) = \mathcal G (n,N)$ where $S, N$ are positive definite matrices.

### Posterior

The Posterior is given by:

$$\mathcal P (s|d) \propto P(s,d) = \mathcal G(d-Rs,N) \,\mathcal G(s,S) \propto \mathcal G (s-m,D) $$

where

$$\begin{align}
m &= Dj \\
D^{-1}&= (S^{-1} +R^\dagger N^{-1} R )\\
j &= R^\dagger N^{-1} d
\end{align}$$

Let us implement this in NIFTy!

%% Cell type:markdown id: tags:
## Wiener Filter: Example

- We assume statistical homogeneity and isotropy. Therefore the signal covariance $S$ is diagonal in harmonic space, and is described by a one-dimensional power spectrum, assumed here as $$P(k) = P_0\,\left(1+\left(\frac{k}{k_0}\right)^2\right)^{-\gamma /2},$$
  with $P_0 = 0.2, k_0 = 5, \gamma = 4$.
- $N = 0.2 \cdot \mathbb{1}$.
- Number of data points $N_{pix} = 512$.
- Reconstruction in harmonic space.
- Response operator:
  $$R = FFT_{\text{harmonic} \rightarrow \text{position}}$$

%% Cell type:code id: tags:
``` python
N_pixels = 512  # Number of pixels

def pow_spec(k):
    P0, k0, gamma = [.2, 5, 4]
    return P0 / ((1. + (k/k0)**2)**(gamma / 2))
```
%% Cell type:markdown id: tags:

## Wiener Filter: Implementation

%% Cell type:markdown id: tags:

### Import Modules

%% Cell type:code id: tags:

``` python
import numpy as np
import nifty7 as ift
import matplotlib.pyplot as plt
%matplotlib inline
```
%% Cell type:markdown id: tags:

### Implement Propagator

%% Cell type:code id: tags:

``` python
def Curvature(R, N, Sh):
    IC = ift.GradientNormController(iteration_limit=50000,
                                    tol_abs_gradnorm=0.1)
    # WienerFilterCurvature is (R.adjoint*N.inverse*R + Sh.inverse) plus some
    # handy helper methods.
    return ift.WienerFilterCurvature(R, N, Sh, iteration_controller=IC,
                                     iteration_controller_sampling=IC)
```
%% Cell type:markdown id: tags:

### Conjugate Gradient Preconditioning

- $D$ is defined via:
$$D^{-1} = \mathcal S_h^{-1} + R^\dagger N^{-1} R.$$
In the end, we want to apply $D$ to $j$, i.e. we need the inverse action of $D^{-1}$. This is done numerically (algorithm: *Conjugate Gradient*).

<!--
- One can define the *condition number* of a non-singular and normal matrix $A$:
$$\kappa (A) := \frac{|\lambda_{\text{max}}|}{|\lambda_{\text{min}}|},$$
where $\lambda_{\text{max}}$ and $\lambda_{\text{min}}$ are the largest and smallest eigenvalue of $A$, respectively.
- The larger $\kappa$ the slower Conjugate Gradient.
- By default, conjugate gradient solves: $D^{-1} m = j$ for $m$, where $D^{-1}$ can be badly conditioned. If one knows a non-singular matrix $T$ for which $TD^{-1}$ is better conditioned, one can solve the equivalent problem:
$$\tilde A m = \tilde j,$$
where $\tilde A = T D^{-1}$ and $\tilde j = Tj$.
- In our case $S^{-1}$ is responsible for the bad conditioning of $D$ depending on the chosen power spectrum. Thus, we choose
$$T = \mathcal F^\dagger S_h^{-1} \mathcal F.$$
-->

%% Cell type:markdown id: tags:
### Generate Mock data

- Generate a field $s$ and $n$ with given covariances.
- Calculate $d$.

%% Cell type:code id: tags:

``` python
s_space = ift.RGSpace(N_pixels)
h_space = s_space.get_default_codomain()
HT = ift.HarmonicTransformOperator(h_space, target=s_space)

# Operators
Sh = ift.create_power_operator(h_space, power_spectrum=pow_spec)
R = HT  # *ift.create_harmonic_smoothing_operator((h_space,), 0, 0.02)

# Fields and data
sh = Sh.draw_sample_with_dtype(dtype=np.float64)
noiseless_data = R(sh)
noise_amplitude = np.sqrt(0.2)
N = ift.ScalingOperator(s_space, noise_amplitude**2)

n = ift.Field.from_random(domain=s_space, random_type='normal',
                          std=noise_amplitude, mean=0)
d = noiseless_data + n
j = R.adjoint_times(N.inverse_times(d))
curv = Curvature(R=R, N=N, Sh=Sh)
D = curv.inverse
```
%% Cell type:markdown id: tags:

### Run Wiener Filter

%% Cell type:code id: tags:

``` python
m = D(j)
```
%% Cell type:markdown id: tags:

### Signal Reconstruction

%% Cell type:code id: tags:

``` python
# Get signal data and reconstruction data
s_data = HT(sh).val
m_data = HT(m).val
d_data = d.val

plt.figure(figsize=(15, 10))
plt.plot(s_data, 'r', label="Signal", linewidth=3)
plt.plot(d_data, 'k.', label="Data")
plt.plot(m_data, 'k', label="Reconstruction", linewidth=3)
plt.title("Reconstruction")
plt.legend()
plt.show()
```
%% Cell type:code id: tags:

``` python
plt.figure(figsize=(15, 10))
plt.plot(s_data - s_data, 'r', label="Signal", linewidth=3)
plt.plot(d_data - s_data, 'k.', label="Data")
plt.plot(m_data - s_data, 'k', label="Reconstruction", linewidth=3)
plt.axhspan(-noise_amplitude, noise_amplitude, facecolor='0.9', alpha=.5)
plt.title("Residuals")
plt.legend()
plt.show()
```
%% Cell type:markdown id: tags:

### Power Spectrum