Commit fc6f5a7a authored by Lukas Platz's avatar Lukas Platz

Merge branch 'NIFTy_8' into flexible_FieldZeroPadder

parents 9d7ae618 80fd5456
# never store the git version file
git_version.py
docs/source/user/getting_started_0.rst
docs/source/user/custom_nonlinearities.rst
docs/source/user/getting_started_4_CorrelatedFields.rst
# custom
*.txt
setup.cfg
......
......@@ -46,7 +46,7 @@ build_docker_from_cache:
test_serial:
stage: test
script:
- pytest-3 -q --cov=nifty7 test
- pytest-3 -q --cov=nifty8 test
- >
python3 -m coverage report --omit "*plot*" | tee coverage.txt
- >
......@@ -69,7 +69,7 @@ pages:
paths:
- public
only:
- NIFTy_6
- NIFTy_7
before_script:
......@@ -147,3 +147,8 @@ run_visual_vi:
stage: demo_runs
script:
- python3 demos/variational_inference_visualized.py
run_meanfield:
stage: demo_runs
script:
- python3 demos/parametric_variational_inference.py
Changes since NIFTy 7
=====================
Minisanity
----------
Terminal colors can be disabled in order to make the output of
`ift.extra.minisanity` more readable when written to a file.
Jax interface
-------------
The classes `ift.JaxOperator` and `ift.JaxLikelihoodEnergyOperator` provide
an interface to jax.
Interface change for nthreads
-----------------------------
The number of threads used for FFTs and for ducc in general used to be set via
`ift.fft.set_nthreads(n)`. This has been moved to `ift.set_nthreads(n)`.
Similarly, `ift.fft.nthreads()` has become `ift.nthreads()`.
Changes since NIFTy 6
=====================
......@@ -82,14 +106,22 @@ comparison to MGVI can be found in `demos/variational_inference_visualized.py`.
For further details see (<https://arxiv.org/abs/2105.10470>).
LikelihoodEnergyOperator
------------------------
A new subclass of `EnergyOperator` was introduced and all `EnergyOperator`s that
are likelihoods are now `LikelihoodEnergyOperator`s. A
`LikelihoodEnergyOperator` has to implement the function `get_transformation`,
which returns a coordinate transformation in which the Fisher metric of the
likelihood becomes the identity matrix. This is needed for the `GeoMetricKL`
algorithm.
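For the Gaussian case this transformation can be written down directly (a sketch for intuition, not code from the release): for a likelihood $\mathcal{G}(d - f(s), N)$ with fixed noise covariance $N$, choosing the transformation $T(s) = N^{-1/2} f(s)$ gives

$$M = f'(s)^\dagger N^{-1} f'(s) = T'(s)^\dagger\, \mathbb{1}\, T'(s),$$

i.e. the Fisher metric becomes the pullback of the identity in the transformed coordinates.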
Remove gitversion interface
---------------------------
Since we provide proper nifty releases on PyPI now, the gitversion interface is
not supported any longer.
Changes since NIFTy 5
......@@ -138,7 +170,7 @@ In order to dive into those subtleties I suggest running the following code and
playing around with the dtypes.
```
import nifty7 as ift
import nifty8 as ift
import numpy as np
dom = ift.UnstructuredDomain(5)
......@@ -179,7 +211,7 @@ The code now uses `numpy`'s new `SeedSequence` and `Generator` classes for the
production of random numbers (introduced in NumPy 1.17). This greatly simplifies
the generation of reproducible random numbers in the presence of MPI parallelism
and leads to cleaner code overall. Please see the documentation of
`nifty7.random` for details.
`nifty8.random` for details.
Interface Change for from_random and OuterProduct
......
......@@ -6,15 +6,13 @@ RUN apt-get update && apt-get install -y \
# Packages needed for NIFTy
python3-scipy \
# Documentation build dependencies
python3-sphinx-rtd-theme dvipng texlive-latex-base texlive-latex-extra \
dvipng texlive-latex-base texlive-latex-extra \
# Testing dependencies
python3-pytest-cov jupyter \
# Optional NIFTy dependencies
python3-mpi4py python3-matplotlib \
# more optional NIFTy dependencies
&& pip3 install ducc0 \
&& pip3 install finufft \
&& pip3 install jupyter \
&& DUCC0_OPTIMIZATION=portable pip3 install ducc0 finufft jupyter jax jaxlib sphinx pydata-sphinx-theme \
&& rm -rf /var/lib/apt/lists/*
# Set matplotlib backend
......
include ChangeLog.md
include demos/*.py
graft tests
global-exclude *.py[cod]
NIFTy - Numerical Information Field Theory
==========================================
[![pipeline status](https://gitlab.mpcdf.mpg.de/ift/nifty/badges/NIFTy_7/pipeline.svg)](https://gitlab.mpcdf.mpg.de/ift/nifty/-/commits/NIFTy_7)
[![coverage report](https://gitlab.mpcdf.mpg.de/ift/nifty/badges/NIFTy_7/coverage.svg)](https://gitlab.mpcdf.mpg.de/ift/nifty/-/commits/NIFTy_7)
[![pipeline status](https://gitlab.mpcdf.mpg.de/ift/nifty/badges/NIFTy_8/pipeline.svg)](https://gitlab.mpcdf.mpg.de/ift/nifty/-/commits/NIFTy_8)
[![coverage report](https://gitlab.mpcdf.mpg.de/ift/nifty/badges/NIFTy_8/coverage.svg)](https://gitlab.mpcdf.mpg.de/ift/nifty/-/commits/NIFTy_8)
**NIFTy** project homepage:
[http://ift.pages.mpcdf.de/nifty](http://ift.pages.mpcdf.de/nifty)
[https://ift.pages.mpcdf.de/nifty](https://ift.pages.mpcdf.de/nifty)
Summary
-------
......@@ -49,27 +49,28 @@ Installation
- [SciPy](https://www.scipy.org/)
Optional dependencies:
- [DUCC0](https://gitlab.mpcdf.mpg.de/mtr/ducc) for faster FFTs, spherical
- [ducc0](https://gitlab.mpcdf.mpg.de/mtr/ducc) for faster FFTs, spherical
harmonic transforms, and radio interferometry gridding support
- [mpi4py](https://mpi4py.scipy.org) (for MPI-parallel execution)
- [matplotlib](https://matplotlib.org/) (for field plotting)
- [jax](https://github.com/google/jax) (for implementing operators with jax)
### Sources
The current version of NIFTy7 can be obtained by cloning the repository and
switching to the NIFTy_7 branch:
The current version of NIFTy8 can be obtained by cloning the repository and
switching to the NIFTy_8 branch:
git clone https://gitlab.mpcdf.mpg.de/ift/nifty.git
git clone -b NIFTy_8 https://gitlab.mpcdf.mpg.de/ift/nifty.git
### Installation
In the following, we assume a Debian-based distribution. For other
distributions, the "apt" lines will need slight changes.
NIFTy7 and its mandatory dependencies can be installed via:
NIFTy8 and its mandatory dependencies can be installed via:
sudo apt-get install git python3 python3-pip python3-dev
pip3 install --user git+https://gitlab.mpcdf.mpg.de/ift/nifty.git@NIFTy_7
pip3 install --user git+https://gitlab.mpcdf.mpg.de/ift/nifty.git@NIFTy_8
Plotting support is added via:
......@@ -79,6 +80,8 @@ The DUCC0 package is installed via:
pip3 install ducc0
For installing jax, refer to [google/jax:README#Installation](https://github.com/google/jax#installation).
If this library is present, NIFTy will detect it automatically and prefer
`ducc0.fft` over SciPy's FFT. The underlying code is actually the same, but
DUCC's FFT is compiled with optimizations for the host CPU and can provide
......@@ -88,7 +91,7 @@ MPI support is added via:
sudo apt-get install python3-mpi4py
### Running the tests
### Run the tests
To run the tests, additional packages are required:
......@@ -97,34 +100,25 @@ To run the tests, additional packages are required:
Afterwards the tests (including a coverage report) can be run using the
following command in the repository root:
pytest-3 --cov=nifty7 test
pytest-3 --cov=nifty8 test
### First Steps
For a quick start, you can browse through the [informal
introduction](http://ift.pages.mpcdf.de/nifty/code.html) or
introduction](https://ift.pages.mpcdf.de/nifty/code.html) or
dive into NIFTy by running one of the demonstrations, e.g.:
python3 demos/getting_started_1.py
### Building the documentation from source
To build the documentation from source, install
[sphinx](https://www.sphinx-doc.org/en/stable/index.html) and the
[Read The Docs Sphinx Theme](https://github.com/readthedocs/sphinx_rtd_theme)
on your system and run
sh docs/generate.sh
### Acknowledgements
Please acknowledge the use of NIFTy in your publication(s) by using a
phrase such as the following:
Please consider acknowledging NIFTy in your publication(s) by using a phrase
such as the following:
> "Some of the results in this publication have been derived using the
> NIFTy package [(https://gitlab.mpcdf.mpg.de/ift/NIFTy)](https://gitlab.mpcdf.mpg.de/ift/NIFTy)"
and a citation to one of the [publications](http://ift.pages.mpcdf.de/nifty/citations.html).
and a citation to one of the [publications](https://ift.pages.mpcdf.de/nifty/citations.html).
### Licensing terms
......@@ -134,17 +128,11 @@ The NIFTy package is licensed under the terms of the
*without any warranty*.
Contributing
Contributors
------------
Please note our convention not to use pure Python `assert` statements in
production code. They are not guaranteed to be executed by Python and can be
turned off by the user (`python -O` in CPython). As an alternative, use
`ift.myassert`.
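The difference can be demonstrated in pure Python (the function below is a toy stand-in; `ift.myassert`'s actual implementation may differ): under `python -O`, `__debug__` is `False` and `assert` statements are skipped entirely, whereas an explicit check always runs.

```python
# Toy stand-in for ift.myassert (hypothetical implementation sketch):
# unlike a bare `assert`, this check cannot be optimized away by `python -O`.
def myassert(condition, message=""):
    if not condition:
        raise AssertionError(message)

# A bare assert silently vanishes in optimized mode (python -O) ...
assert 2 + 2 == 4

# ... whereas the explicit check is always evaluated:
myassert(2 + 2 == 4)
try:
    myassert(False, "always checked, even under python -O")
except AssertionError as e:
    print(e)
```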
### NIFTy8
Contributors
------------
### NIFTy7
......
......@@ -11,7 +11,7 @@
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
# Copyright(C) 2013-2020 Max-Planck-Society
# Copyright(C) 2013-2021 Max-Planck-Society
#
# NIFTy is being developed at the Max-Planck-Institut fuer Astrophysik.
......@@ -24,7 +24,7 @@
import numpy as np
import nifty7 as ift
import nifty8 as ift
def main():
......@@ -64,16 +64,16 @@ def main():
data = ift.random.current_rng().binomial(1, tmp)
data = ift.Field.from_raw(R.target, data)
# Compute likelihood and Hamiltonian
# Compute likelihood energy and Hamiltonian
position = ift.from_random(harmonic_space, 'normal')
likelihood = ift.BernoulliEnergy(data) @ p
likelihood_energy = ift.BernoulliEnergy(data) @ p
ic_newton = ift.DeltaEnergyController(
name='Newton', iteration_limit=100, tol_rel_deltaE=1e-8)
minimizer = ift.NewtonCG(ic_newton)
ic_sampling = ift.GradientNormController(iteration_limit=100)
# Minimize the Hamiltonian
H = ift.StandardHamiltonian(likelihood, ic_sampling)
H = ift.StandardHamiltonian(likelihood_energy, ic_sampling)
H = ift.EnergyAdapter(position, H, want_metric=True)
# minimizer = ift.L_BFGS(ic_newton)
H, convergence = minimizer(H)
......
......@@ -8,7 +8,7 @@
}
},
"source": [
"# A NIFTy demonstration"
"# Code example: Wiener filter"
]
},
{
......@@ -19,7 +19,7 @@
}
},
"source": [
"## IFT: Big Picture\n",
"## Introduction\n",
"IFT starting point:\n",
"\n",
"$$d = Rs+n$$\n",
......@@ -28,9 +28,6 @@
"\n",
"IFT aims at **inverting** the above uninvertible problem in the **best possible way** using Bayesian statistics.\n",
"\n",
"\n",
"## NIFTy\n",
"\n",
"NIFTy (Numerical Information Field Theory) is a Python framework in which IFT problems can be tackled easily.\n",
"\n",
"Main Interfaces:\n",
......@@ -48,7 +45,7 @@
}
},
"source": [
"## Wiener Filter: Formulae\n",
"## Wiener filter on one-dimensional fields\n",
"\n",
"### Assumptions\n",
"\n",
......@@ -61,11 +58,9 @@
"$$\\mathcal P (s|d) \\propto P(s,d) = \\mathcal G(d-Rs,N) \\,\\mathcal G(s,S) \\propto \\mathcal G (s-m,D) $$\n",
"\n",
"where\n",
"$$\\begin{align}\n",
"m &= Dj \\\\\n",
"D^{-1}&= (S^{-1} +R^\\dagger N^{-1} R )\\\\\n",
"j &= R^\\dagger N^{-1} d\n",
"\\end{align}$$\n",
"$$m = Dj$$\n",
"with\n",
"$$D = (S^{-1} +R^\\dagger N^{-1} R)^{-1} , \\quad j = R^\\dagger N^{-1} d.$$\n",
"\n",
"Let us implement this in NIFTy!"
]
......@@ -78,7 +73,7 @@
}
},
"source": [
"## Wiener Filter: Example\n",
"### In NIFTy\n",
"\n",
"- We assume statistical homogeneity and isotropy. Therefore the signal covariance $S$ is diagonal in harmonic space, and is described by a one-dimensional power spectrum, assumed here as $$P(k) = P_0\\,\\left(1+\\left(\\frac{k}{k_0}\\right)^2\\right)^{-\\gamma /2},$$\n",
"with $P_0 = 0.2, k_0 = 5, \\gamma = 4$.\n",
......@@ -114,7 +109,7 @@
}
},
"source": [
"## Wiener Filter: Implementation"
"### Implementation"
]
},
{
......@@ -125,7 +120,7 @@
}
},
"source": [
"### Import Modules"
"#### Import Modules"
]
},
{
......@@ -139,10 +134,12 @@
},
"outputs": [],
"source": [
"%matplotlib inline\n",
"import numpy as np\n",
"import nifty7 as ift\n",
"import nifty8 as ift\n",
"import matplotlib.pyplot as plt\n",
"%matplotlib inline"
"plt.rcParams['figure.dpi'] = 100\n",
"plt.style.use(\"seaborn-notebook\")"
]
},
{
......@@ -153,7 +150,7 @@
}
},
"source": [
"### Implement Propagator"
"#### Implement Propagator"
]
},
{
......@@ -182,7 +179,7 @@
}
},
"source": [
"### Conjugate Gradient Preconditioning\n",
"#### Conjugate Gradient Preconditioning\n",
"\n",
"- $D$ is defined via:\n",
"$$D^{-1} = \\mathcal S_h^{-1} + R^\\dagger N^{-1} R.$$\n",
......@@ -213,7 +210,7 @@
}
},
"source": [
"### Generate Mock data\n",
"#### Generate Mock data\n",
"\n",
"- Generate a field $s$ and $n$ with given covariances.\n",
"- Calculate $d$."
......@@ -257,7 +254,7 @@
}
},
"source": [
"### Run Wiener Filter"
"#### Run Wiener Filter"
]
},
{
......@@ -281,7 +278,7 @@
}
},
"source": [
"### Signal Reconstruction"
"#### Results"
]
},
{
......@@ -299,10 +296,9 @@
"m_data = HT(m).val\n",
"d_data = d.val\n",
"\n",
"plt.figure(figsize=(15,10))\n",
"plt.plot(s_data, 'r', label=\"Signal\", linewidth=3)\n",
"plt.plot(s_data, 'r', label=\"Signal\", linewidth=2)\n",
"plt.plot(d_data, 'k.', label=\"Data\")\n",
"plt.plot(m_data, 'k', label=\"Reconstruction\",linewidth=3)\n",
"plt.plot(m_data, 'k', label=\"Reconstruction\",linewidth=2)\n",
"plt.title(\"Reconstruction\")\n",
"plt.legend()\n",
"plt.show()"
......@@ -318,10 +314,9 @@
},
"outputs": [],
"source": [
"plt.figure(figsize=(15,10))\n",
"plt.plot(s_data - s_data, 'r', label=\"Signal\", linewidth=3)\n",
"plt.plot(s_data - s_data, 'r', label=\"Signal\", linewidth=2)\n",
"plt.plot(d_data - s_data, 'k.', label=\"Data\")\n",
"plt.plot(m_data - s_data, 'k', label=\"Reconstruction\",linewidth=3)\n",
"plt.plot(m_data - s_data, 'k', label=\"Reconstruction\",linewidth=2)\n",
"plt.axhspan(-noise_amplitude,noise_amplitude, facecolor='0.9', alpha=.5)\n",
"plt.title(\"Residuals\")\n",
"plt.legend()\n",
......@@ -336,7 +331,7 @@
}
},
"source": [
"### Power Spectrum"
"#### Power Spectrum"
]
},
{
......@@ -351,7 +346,6 @@
"source": [
"s_power_data = ift.power_analyze(sh).val\n",
"m_power_data = ift.power_analyze(m).val\n",
"plt.figure(figsize=(15,10))\n",
"plt.loglog()\n",
"plt.xlim(1, int(N_pixels/2))\n",
"ymin = min(m_power_data)\n",
......@@ -516,12 +510,11 @@
},
"outputs": [],
"source": [
"fig = plt.figure(figsize=(15,10))\n",
"plt.axvspan(l, h, facecolor='0.8',alpha=0.5)\n",
"plt.fill_between(range(N_pixels), m_data - uncertainty, m_data + uncertainty, facecolor='0.5', alpha=0.5)\n",
"plt.plot(s_data, 'r', label=\"Signal\", alpha=1, linewidth=3)\n",
"plt.plot(s_data, 'r', label=\"Signal\", alpha=1, linewidth=2)\n",
"plt.plot(d_data, 'k.', label=\"Data\")\n",
"plt.plot(m_data, 'k', label=\"Reconstruction\", linewidth=3)\n",
"plt.plot(m_data, 'k', label=\"Reconstruction\", linewidth=2)\n",
"plt.title(\"Reconstruction of incomplete data\")\n",
"plt.legend()"
]
......@@ -534,7 +527,7 @@
}
},
"source": [
"# 2d Example"
"## Wiener filter on two-dimensional field"
]
},
{
......@@ -618,18 +611,18 @@
},
"outputs": [],
"source": [
"cm = ['magma', 'inferno', 'plasma', 'viridis'][1]\n",
"cmap = ['magma', 'inferno', 'plasma', 'viridis'][1]\n",
"\n",
"mi = np.min(s_data)\n",
"ma = np.max(s_data)\n",
"\n",
"fig, axes = plt.subplots(1, 2, figsize=(15, 7))\n",
"fig, axes = plt.subplots(1, 2)\n",
"\n",
"data = [s_data, d_data]\n",
"caption = [\"Signal\", \"Data\"]\n",
"\n",
"for ax in axes.flat:\n",
" im = ax.imshow(data.pop(0), interpolation='nearest', cmap=cm, vmin=mi,\n",
" im = ax.imshow(data.pop(0), interpolation='nearest', cmap=cmap, vmin=mi,\n",
" vmax=ma)\n",
" ax.set_title(caption.pop(0))\n",
"\n",
......@@ -651,7 +644,7 @@
"mi = np.min(s_data)\n",
"ma = np.max(s_data)\n",
"\n",
"fig, axes = plt.subplots(3, 2, figsize=(15, 22.5))\n",
"fig, axes = plt.subplots(3, 2, figsize=(10, 15))\n",
"sample = HT(curv.draw_sample(from_inverse=True)+m).val\n",
"post_mean = (m_mean + HT(m)).val\n",
"\n",
......@@ -659,7 +652,7 @@
"caption = [\"Signal\", \"Reconstruction\", \"Posterior mean\", \"Sample\", \"Residuals\", \"Uncertainty Map\"]\n",
"\n",
"for ax in axes.flat:\n",
" im = ax.imshow(data.pop(0), interpolation='nearest', cmap=cm, vmin=mi, vmax=ma)\n",
" im = ax.imshow(data.pop(0), interpolation='nearest', cmap=cmap, vmin=mi, vmax=ma)\n",
" ax.set_title(caption.pop(0))\n",
"\n",
"fig.subplots_adjust(right=0.8)\n",
......@@ -691,31 +684,9 @@
"precise = (np.abs(s_data-m_data) < uncertainty)\n",
"print(\"Error within uncertainty map bounds: \" + str(np.sum(precise) * 100 / N_pixels**2) + \"%\")\n",
"\n",
"plt.figure(figsize=(15,10))\n",
"plt.imshow(precise.astype(float), cmap=\"brg\")\n",
"plt.colorbar()"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"# Start Coding\n",
"## NIFTy Repository + Installation guide\n",
"\n",
"https://gitlab.mpcdf.mpg.de/ift/NIFTy\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
......@@ -735,7 +706,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.2"
"version": "3.9.2"
}
},
"nbformat": 4,
......
%% Cell type:markdown id: tags:
# A NIFTy demonstration
# Code example: Wiener filter
%% Cell type:markdown id: tags:
## IFT: Big Picture
## Introduction
IFT starting point:
$$d = Rs+n$$
Typically, $s$ is a continuous field and $d$ a discrete data vector. In particular, $R$ is not invertible.
IFT aims at **inverting** the above uninvertible problem in the **best possible way** using Bayesian statistics.
## NIFTy
NIFTy (Numerical Information Field Theory) is a Python framework in which IFT problems can be tackled easily.
Main Interfaces:
- **Spaces**: Cartesian, 2-Spheres (Healpix, Gauss-Legendre) and their respective harmonic spaces.
- **Fields**: Defined on spaces.
- **Operators**: Acting on fields.
%% Cell type:markdown id: tags:
## Wiener Filter: Formulae
## Wiener filter on one-dimensional fields
### Assumptions
- $d = Rs + n$, with $R$ a linear operator.
- $\mathcal P (s) = \mathcal G (s,S)$, $\mathcal P (n) = \mathcal G (n,N)$ where $S, N$ are positive definite matrices.
### Posterior
The posterior is given by:
$$\mathcal P (s|d) \propto P(s,d) = \mathcal G(d-Rs,N) \,\mathcal G(s,S) \propto \mathcal G (s-m,D) $$
where
$$\begin{align}
m &= Dj \\
D^{-1}&= (S^{-1} +R^\dagger N^{-1} R )\\
j &= R^\dagger N^{-1} d
\end{align}$$
$$m = Dj$$
with
$$D = (S^{-1} +R^\dagger N^{-1} R)^{-1} , \quad j = R^\dagger N^{-1} d.$$
Let us implement this in NIFTy!
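Before doing so in NIFTy, the formulas above can be checked with a few lines of plain numpy on a toy problem (explicit matrices and made-up covariances, not nifty8 code):

```python
import numpy as np

# Toy setup: explicit covariances and a trivial response R = 1.
rng = np.random.default_rng(0)
n_pix = 8
S = np.diag(1.0 / (1.0 + np.arange(n_pix)) ** 2)  # signal covariance
N = 0.1 * np.eye(n_pix)                           # noise covariance
R = np.eye(n_pix)                                 # response operator

s = rng.multivariate_normal(np.zeros(n_pix), S)   # draw a signal
d = R @ s + rng.multivariate_normal(np.zeros(n_pix), N)  # synthetic data

# Wiener filter: j = R^† N^{-1} d,  D = (S^{-1} + R^† N^{-1} R)^{-1},  m = D j
j = R.T @ np.linalg.solve(N, d)
D = np.linalg.inv(np.linalg.inv(S) + R.T @ np.linalg.inv(N) @ R)
m = D @ j
```

The posterior mean `m` obtained this way satisfies $D^{-1} m = j$ by construction.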
%% Cell type:markdown id: tags:
## Wiener Filter: Example
### In NIFTy
- We assume statistical homogeneity and isotropy. Therefore the signal covariance $S$ is diagonal in harmonic space, and is described by a one-dimensional power spectrum, assumed here as $$P(k) = P_0\,\left(1+\left(\frac{k}{k_0}\right)^2\right)^{-\gamma /2},$$
with $P_0 = 0.2, k_0 = 5, \gamma = 4$.
- $N = 0.2 \cdot \mathbb{1}$.
- Number of data points $N_{pix} = 512$.
- Reconstruction in harmonic space.
- Response operator:
$$R = FFT_{\text{harmonic} \rightarrow \text{position}}$$
%% Cell type:code id: tags:
``` python
N_pixels = 512 # Number of pixels
def pow_spec(k):
P0, k0, gamma = [.2, 5, 4]
return P0 / ((1. + (k/k0)**2)**(gamma / 2))
```
%% Cell type:markdown id: tags:
## Wiener Filter: Implementation
### Implementation
%% Cell type:markdown id: tags:
### Import Modules
#### Import Modules
%% Cell type:code id: tags:
``` python
%matplotlib inline
import numpy as np
import nifty7 as ift
import nifty8 as ift
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.dpi'] = 100
plt.style.use("seaborn-notebook")
```
%% Cell type:markdown id: tags:
### Implement Propagator
#### Implement Propagator
%% Cell type:code id: tags:
``` python
def Curvature(R, N, Sh):
IC = ift.GradientNormController(iteration_limit=50000,
tol_abs_gradnorm=0.1)
# WienerFilterCurvature is (R.adjoint*N.inverse*R + Sh.inverse) plus some handy
# helper methods.
return ift.WienerFilterCurvature(R, N, Sh, iteration_controller=IC, iteration_controller_sampling=IC)
```
%% Cell type:markdown id: tags:
### Conjugate Gradient Preconditioning
#### Conjugate Gradient Preconditioning
- $D$ is defined via:
$$D^{-1} = \mathcal S_h^{-1} + R^\dagger N^{-1} R.$$
In the end, we want to apply $D$ to $j$, i.e. we need the inverse action of $D^{-1}$. This is done numerically (algorithm: *Conjugate Gradient*).
<!--
- One can define the *condition number* of a non-singular and normal matrix $A$:
$$\kappa (A) := \frac{|\lambda_{\text{max}}|}{|\lambda_{\text{min}}|},$$
where $\lambda_{\text{max}}$ and $\lambda_{\text{min}}$ are the largest and smallest eigenvalue of $A$, respectively.
- The larger $\kappa$ the slower Conjugate Gradient.
- By default, conjugate gradient solves: $D^{-1} m = j$ for $m$, where $D^{-1}$ can be badly conditioned. If one knows a non-singular matrix $T$ for which $TD^{-1}$ is better conditioned, one can solve the equivalent problem:
$$\tilde A m = \tilde j,$$
where $\tilde A = T D^{-1}$ and $\tilde j = Tj$.
- In our case $S^{-1}$ is responsible for the bad conditioning of $D$ depending on the chosen power spectrum. Thus, we choose
$$T = \mathcal F^\dagger S_h^{-1} \mathcal F.$$
-->
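The effect of preconditioning can be tried out with scipy's conjugate gradient on a toy diagonal system (a sketch with made-up numbers, not the NIFTy machinery): the steep power spectrum makes $D^{-1}$ badly conditioned, and a cheap diagonal preconditioner approximating $S$ clusters the eigenvalues and restores fast convergence.

```python
import numpy as np
from scipy.sparse.linalg import cg, LinearOperator

n = 64
k = np.arange(n) + 1.0
S_diag = 1.0 / k**4                        # steep toy power spectrum
Dinv = np.diag(1.0 / S_diag + 1.0 / 0.2)   # S^{-1} + R^† N^{-1} R (toy, diagonal)
j = np.ones(n)

# Diagonal preconditioner T ≈ S: applying it clusters the spectrum of T D^{-1}.
M = LinearOperator((n, n), matvec=lambda x: S_diag * x)

iters = {"plain": 0, "prec": 0}
m_plain, info_plain = cg(Dinv, j, atol=1e-10,
                         callback=lambda xk: iters.update(plain=iters["plain"] + 1))
m_prec, info_prec = cg(Dinv, j, atol=1e-10, M=M,
                       callback=lambda xk: iters.update(prec=iters["prec"] + 1))
print(iters)  # the preconditioned run needs noticeably fewer iterations
```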
%% Cell type:markdown id: tags:
### Generate Mock data
#### Generate Mock data
- Generate a field $s$ and $n$ with given covarianc