diff --git a/demo_joint_cal_image.ipynb b/demo_joint_cal_image.ipynb
new file mode 100644
index 0000000000000000000000000000000000000000..21c07c467509ca17e5a53c533ff8d5493c0b9a2d
--- /dev/null
+++ b/demo_joint_cal_image.ipynb
@@ -0,0 +1,890 @@
+{
+ "cells": [
+  {
+   "cell_type": "markdown",
+   "id": "d8e9df44-064a-40e0-91a2-37f7b5f108ba",
+   "metadata": {},
+   "source": [
+    "# RESOLVE Tutorial"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "696d564a-fa4a-4488-a2a6-e885fe8818e4",
+   "metadata": {},
+   "source": [
+    "This RESOLVE tutorial demonstrates Bayesian joint imaging and calibration with RESOLVE. For this demo, we use VLA data of the supernova remnant 1006. Originally joint calibration and imaging was showcased with RESOLVE in https://arxiv.org/abs/1903.11169. In this demo script, we use the same data as in the publication. You can download the data from here: https://datashare.mpcdf.mpg.de/s/HNONketDVCR011V"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "fec18c2e-bc2c-4d8f-b257-dd2dfaf152b7",
+   "metadata": {},
+   "source": [
+    "## Import libaries"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "d9fbf37d-1cd1-4a00-b0dd-8fd2513f5939",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "import numpy as np\n",
+    "import resolve as rve\n",
+    "import nifty8 as ift\n",
+    "import ducc0\n",
+    "import matplotlib.pyplot as plt\n",
+    "from matplotlib.colors import LogNorm"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "8aafb16f-0c0f-4df1-921e-622feaec5e05",
+   "metadata": {},
+   "source": [
+    "NIFTy provides the ability to parallelize across MPI tasks to speed up inference. In this notebook, the reconstruction is only done on one task, but if you copy snippets from this notebook into your own script, you should also include this part to be able to run your code with MPI."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "a5aff266-32da-4f9f-870a-c5f243b02e9c",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "try:\n",
+    "    from mpi4py import MPI\n",
+    "    comm = MPI.COMM_WORLD\n",
+    "    master = comm.Get_rank() == 0\n",
+    "except ImportError:\n",
+    "    comm = None\n",
+    "    master = True"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "d57e145d-1ead-40a3-8112-475e7506bac3",
+   "metadata": {},
+   "source": [
+    "## Load and preporcess data"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "3c4ba7b2-880b-4491-a484-475536c42c8e",
+   "metadata": {},
+   "source": [
+    "Load data from measurement set to resolve observations:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "5a0b1cd7-c334-43df-b918-70b6a419ae6e",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "obs = rve.ms2observations(\n",
+    "    \"AM754_A030124_flagged.ms\", \"DATA\", True, 0, \"stokesi\"\n",
+    ")"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "4867ef66-0491-4622-adb7-ee4b7dfb4360",
+   "metadata": {},
+   "source": [
+    "The data contains three observations. Two calibration observations and the science observation of SN1006."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "53753fa5-62e4-4398-bf0e-0a02f7c8e3aa",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "print(len(obs)) # 3 observations"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "23216211-f4b7-4b83-b8a2-eec6371a9ff2",
+   "metadata": {},
+   "source": [
+    "For convinience we will shift the time stamps of the visibilities such that the observations start at $t=0$."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "36879301-9fb7-43f7-81ea-adef235689be",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "tmin, tmax = rve.tmin_tmax(*obs)\n",
+    "obs = [oo.move_time(-tmin) for oo in obs]"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "6d77480d-d4b5-4227-926f-de264f646b56",
+   "metadata": {},
+   "source": [
+    "For this demo, we will use only 10% of the data in order to speed up the calibration and imaging. To subsample the data, we randomly select 10% of the rows in the observation. When executing this demo with a bit more time or when using MPI parallelization (see above), you might use the full data and compare the results."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "54d9c24d-3c45-4823-8c93-c1dc431eb4d3",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "ocalib1, ocalib2, oscience = obs\n",
+    "\n",
+    "n_rows = oscience.vis.shape[1]\n",
+    "use_fac = 0.1\n",
+    "n_rows_use = int(use_fac * n_rows)\n",
+    "\n",
+    "np.random.seed(42)\n",
+    "row_inds = np.arange(n_rows)\n",
+    "selected_rows = np.random.choice(row_inds, n_rows_use, replace=False)\n",
+    "oscience = oscience[selected_rows]"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "2f0fe933-4154-4e50-a716-67dafc1fd487",
+   "metadata": {},
+   "source": [
+    "Print some info about the science observation and plot the uv-coverage."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "7d2374da-6d0d-4d4a-a286-ee6278412dee",
+   "metadata": {
+    "scrolled": true
+   },
+   "outputs": [],
+   "source": [
+    "print(f\"frequency: {oscience.freq} Hz\")\n",
+    "print(f\"number of visibilities: {oscience.vis.shape}\")\n",
+    "\n",
+    "plt.scatter(oscience.uvw[:,0], oscience.uvw[:,1])\n",
+    "plt.xlabel('u')\n",
+    "plt.ylabel('v')\n",
+    "plt.show()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "3ac65e7e-2e0b-4232-8907-50f59410baab",
+   "metadata": {},
+   "source": [
+    "Additionaly we plot the dirty image of the data. As the data is not calibrated we can not see any structures in the data."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "ecdef2d5-dd5f-4230-b0df-087f63861fc2",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "fov = np.array([1, 1]) * rve.DEG2RAD\n",
+    "npix = np.array([160, 160])\n",
+    "skydom = rve.default_sky_domain(sdom=ift.RGSpace(npix, fov / npix))\n",
+    "dirty = rve.dirty_image(oscience, 'natural', skydom, True, 1e-5)\n",
+    "ift.single_plot(dirty)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "f85ae046-faab-43e4-9fe6-cfbe6f3cfef9",
+   "metadata": {},
+   "source": [
+    "## Prior Models"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "8616e61e-8a73-492b-825d-c1a22149cefe",
+   "metadata": {},
+   "source": [
+    "In this demo, we want to perform direction-independent calibration and Stokes I imaging. For this, the radio interferometric measurement equation is given by:\n",
+    "\n",
+    "$$vis_{pqt} = \\int I(\\boldsymbol{l})G_p(t)G_{q}^{*}(t)  C(\\boldsymbol{l}, w_{pqt})e^{2\\pi i (\\boldsymbol{k}_{pqt}\\cdot \\boldsymbol{l})},$$\n",
+    "\n",
+    "with:\n",
+    "- $vis_{pqt}$ being the visibility of baseline $(p, q)$ at time $t$,\n",
+    "- $\\boldsymbol{l} = (l, m)$ the sky coordinate,\n",
+    "- $t$ the time,\n",
+    "- $\\boldsymbol{k}_{pqt}=(u_{pqt}, v_{pqt})$ the uv-coordinates,\n",
+    "- $I(\\boldsymbol{l})$ the sky brightness,\n",
+    "- $G_p(t)$ the gain of antenna $p$ at time $t$\n",
+    "- $G_q(t)$ the gain of antenna $q$ at time $t$\n",
+    "- $C(\\boldsymbol{l}, w_{pqt}) = e^{-2\\pi i (\\sqrt{1-\\boldsymbol{l}})} / (\\sqrt{1 - \\boldsymbol{l}})$ the $w-$effect"
+   ]
+  },
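+  {
+   "cell_type": "markdown",
+   "id": "1a2b3c4d-0001-4a1a-8a1a-1a2b3c4d0001",
+   "metadata": {},
+   "source": [
+    "To make the integral above more concrete, the following toy numpy sketch evaluates it as a direct sum over pixels for a small random test image and a handful of synthetic uv-points, ignoring the gains and the $w$-effect. This is purely illustrative: RESOLVE evaluates the measurement equation with the much faster w-gridder from ducc, and conventions such as the sign of the exponent may differ from this sketch."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "1a2b3c4d-0002-4a1a-8a1a-1a2b3c4d0002",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Toy illustration of the measurement equation above (no gains, no w-term).\n",
+    "# All quantities are synthetic stand-ins and are not taken from the data.\n",
+    "rng = np.random.default_rng(0)\n",
+    "npix_toy = 16\n",
+    "lm = (np.arange(npix_toy) - npix_toy / 2) * 1e-5  # pixel coordinates in radians\n",
+    "ll, mm = np.meshgrid(lm, lm, indexing=\"ij\")\n",
+    "I_toy = rng.random((npix_toy, npix_toy))  # toy sky brightness\n",
+    "u_toy = rng.uniform(-2e4, 2e4, 5)  # synthetic uv-points in wavelengths\n",
+    "v_toy = rng.uniform(-2e4, 2e4, 5)\n",
+    "vis_toy = np.array([np.sum(I_toy * np.exp(2j * np.pi * (u * ll + v * mm)))\n",
+    "                    for u, v in zip(u_toy, v_toy)])\n",
+    "print(vis_toy)"
+   ]
+  },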
+  {
+   "cell_type": "markdown",
+   "id": "8f679770-485d-4076-9e51-7f6e2dea7748",
+   "metadata": {},
+   "source": [
+    "For joint calibration and imaging, we want to infer $I(\\boldsymbol{l})$ and $G_{\\cdot}(t)$. As we want to do this in a Bayesian way, we need to build prior models for both of them."
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "ec1b4fba-4c33-4e8c-b480-0785502c2ba4",
+   "metadata": {},
+   "source": [
+    "## Calibration model"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "8999c93a-8495-486a-8526-d1366060d46d",
+   "metadata": {},
+   "source": [
+    "In this section, we will define the prior model for the antenna gains $G$. Our prior model should encode the physical property that the phases and amplitudes of the antenna gains fluctuate smoothly as a function of time. To do so, we will use the NIFTy correlated field model to define Gaussian processes for phase and log amplitude for each antenna. Expressing this model as a formula, the gain of antenna $p$ at time $t$ would be given by\n",
+    "\n",
+    "$$G_p(t) = e^{\\alpha_{p}(t) + i \\phi_{p}(t)},$$\n",
+    "\n",
+    "with $\\alpha_{p}(t)$ represnting the Gaussian process for the log amplitude and $\\phi_{p}(t)$ the Gaussian process for the phase."
+   ]
+  },
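+  {
+   "cell_type": "markdown",
+   "id": "1a2b3c4d-0003-4a1a-8a1a-1a2b3c4d0003",
+   "metadata": {},
+   "source": [
+    "As a quick illustration of this formula, the following small numpy sketch forms complex gains from a log-amplitude array and a phase array. The arrays are random stand-ins; the actual Gaussian processes are constructed with NIFTy below."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "1a2b3c4d-0004-4a1a-8a1a-1a2b3c4d0004",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Numpy sketch of G_p(t) = exp(alpha_p(t) + i*phi_p(t)) with random stand-in\n",
+    "# arrays (3 antennas, 5 time stamps); the real model uses NIFTy Gaussian processes.\n",
+    "rng = np.random.default_rng(0)\n",
+    "alpha_toy = 0.1 * rng.standard_normal((3, 5))  # log amplitudes\n",
+    "phi_toy = rng.standard_normal((3, 5))  # phases in radians\n",
+    "gains_toy = np.exp(alpha_toy + 1j * phi_toy)\n",
+    "print(np.abs(gains_toy))  # equals exp(alpha_toy)\n",
+    "print(np.angle(gains_toy))  # equals phi_toy (up to multiples of 2*pi)"
+   ]
+  },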
+  {
+   "cell_type": "markdown",
+   "id": "6b29de62-f4e9-4ba0-a87e-538384b30101",
+   "metadata": {},
+   "source": [
+    "Before actually setting up the Gaussian processes, we need some initial steps. First, we need to define the discretization of the time axis. Thus, we have to define the NIFTy space on which the Gaussian processes will live. Furthermore, since the Gaussian processes are based on FFTs, we need some zero padding to deal with the periodic boundary conditions."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "fd4878a8-c519-402a-ae0c-31405b1a19d7",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "dt = 20  # s\n",
+    "zero_padding_factor = 2\n",
+    "tmin, tmax = rve.tmin_tmax(*obs)\n",
+    "time_domain = ift.RGSpace(\n",
+    "    ducc0.fft.good_size(int(zero_padding_factor * (tmax - tmin) / dt)), dt\n",
+    ")"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "e1363f1d-d585-4e1e-9423-470db4d447e6",
+   "metadata": {},
+   "source": [
+    "Next we check the number of antennas to know how many Gaussian processes we need for modeling the phases/ log-amplitudes. As we calibrate the Lcp and Rcp channels separately, we need twice as many Gaussian processes for phase/ log-amplitude as there are antennas.  "
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "ddb57cbf-f831-42ce-a850-8dd9de9a4b16",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "uantennas = rve.unique_antennas(*obs)\n",
+    "antenna_dct = {aa: ii for ii, aa in enumerate(uantennas)}\n",
+    "total_N = 2 * len(uantennas) # factor of 2 because of Lcp and "
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "3bad54af-57ee-4a6e-bd8a-9bf2d25280cb",
+   "metadata": {},
+   "source": [
+    "Now we can define the Gaussian processes for the phase."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "236dd017-6297-42b7-bdce-178ca0bad428",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "dofdex = np.arange(total_N)\n",
+    "dct = {\"offset_mean\": 0, \"offset_std\": (1, 0.5), \"dofdex\": dofdex}\n",
+    "dct1 = {\n",
+    "    \"fluctuations\": (.2, .1),\n",
+    "    \"loglogavgslope\": (-4.0, 1),\n",
+    "    \"flexibility\": (1, .3),\n",
+    "    \"asperity\": None,\n",
+    "    \"dofdex\": dofdex\n",
+    "}\n",
+    "cfmph = ift.CorrelatedFieldMaker(prefix='phase', total_N=total_N)\n",
+    "cfmph.add_fluctuations(time_domain, **dct1)\n",
+    "cfmph.set_amplitude_total_offset(**dct)\n",
+    "phase = cfmph.finalize(0)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "06deabb9-cc9b-4e8d-a073-6c99e7546ba0",
+   "metadata": {},
+   "source": [
+    "And the same for the log-amplitude."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "39ca0784-574d-4a3f-a1a0-ef497a6a1131",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "dct = {\"offset_mean\": 0, \"offset_std\": (1, 0.5), \"dofdex\": dofdex}\n",
+    "dct1 = {\n",
+    "    \"fluctuations\": (.2, .1),\n",
+    "    \"loglogavgslope\": (-4.0, 1),\n",
+    "    \"flexibility\": (1, .3),\n",
+    "    \"asperity\": None,\n",
+    "    \"dofdex\": dofdex\n",
+    "}\n",
+    "cfmph = ift.CorrelatedFieldMaker(prefix='logamp', total_N=total_N)\n",
+    "cfmph.add_fluctuations(time_domain, **dct1)\n",
+    "cfmph.set_amplitude_total_offset(**dct)\n",
+    "logamp = cfmph.finalize(0)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "de8fe4e5-7465-4ee5-95b5-877fc3adc681",
+   "metadata": {},
+   "source": [
+    "Next we look at the target space of the phase operator."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "82b97c89-28b2-4a69-b520-856799659564",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "print(phase.target)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "af0dc681-6b85-42ef-87e0-ff37dccd9b72",
+   "metadata": {},
+   "source": [
+    "The first axis of the space has length 54 and contains the Gaussian processes for the Rcp and Lcp channels of the 27 VLA antennas. The second axis has length 1440 and is the time axis along which the Gaussian processes are defined. For inserting the phases and log amplitudes into RESOLVE, we need to reshape them. "
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "65ddc69f-4c26-406d-b4fc-fd9357670165",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "pdom, _, fdom = oscience.vis.domain\n",
+    "reshape = ift.DomainChangerAndReshaper(phase.target, [pdom, ift.UnstructuredDomain(len(uantennas)), time_domain, fdom])\n",
+    "\n",
+    "phase = reshape @ phase\n",
+    "logamp = reshape @ logamp"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "d37c1cfd-4fb2-4d8e-ac70-d20224dbec28",
+   "metadata": {},
+   "source": [
+    "Now the first axis corresponds to the Lcp and Rcp channels. The second axis has length 27 and corresponds to the 27 antennas. The third axis is the time axis of the Gaussian processes. The last axis is the frequency axis. In this demo, the last axis has length one as we have only one frequency channel."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "fd2fb45b-bacf-4b32-83c5-4edb3153bc91",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "print(phase.target)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "39707b9c-1516-472b-a58c-c9517aad01bc",
+   "metadata": {},
+   "source": [
+    "As a next step, we set up the operator computing the antenna gain factor $G_{p}(t)G_{q}^{*}(t)$ for each visibility. We name this operator `cop`. Thus expressed as a formula, we compute:\n",
+    "\n",
+    "$$cop = G_{p}(t)G_{q}^{*}(t)$$\n",
+    "\n",
+    "`cop_cal` contains the calibration terms of the visibilities in the calibration observation. `cop_sci` contains the calibration factors of the visibilities in the science observation (thus SN1006)."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "5ff26c0f-0a0d-4533-ba69-d4b8efcc013a",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "cop_cal = rve.calibration_distribution(ocalib2, phase, logamp, antenna_dct, None)\n",
+    "cop_sci = rve.calibration_distribution(oscience, phase, logamp, antenna_dct, None)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "cfbe00bd-87a1-490d-b686-9b7e0e6c82ba",
+   "metadata": {},
+   "source": [
+    "## Sky model"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "a7199f5d-dfd0-4763-ab5c-9860886b62d3",
+   "metadata": {},
+   "source": [
+    "In this section, we will define the prior model for the sky brightness. As we want to encode positivity of the flux as well as smoothness of the diffuse emission into the prior model, we will again use the correlated field to model the log brightness.\n",
+    "\n",
+    "Again we start by specifying the space on which the sky brightness model will be defined."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "11fd8a13-1297-41cc-a2e2-7a04938a73a2",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "fov = np.array([1, 1]) * rve.DEG2RAD\n",
+    "npix = np.array([160, 160])\n",
+    "skydom = ift.RGSpace(npix, fov / npix)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "f538667e-811a-4b7c-a9fe-8623d8f63eec",
+   "metadata": {},
+   "source": [
+    "Next we set up the Gaussian process with the corelated field model."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "95608b21-93fe-4cbd-8e3a-ab900da9dda8",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "dct = {\n",
+    "    \"target\": skydom,\n",
+    "    \"offset_mean\": 19,\n",
+    "    \"offset_std\": (1, 0.1),\n",
+    "    \"prefix\": \"logdiffuse\",\n",
+    "    \"fluctuations\": (5.0, 1.0),\n",
+    "    \"loglogavgslope\": (-2.0, 1),\n",
+    "    \"flexibility\": (1.0, 0.5),\n",
+    "    \"asperity\": None,\n",
+    "}\n",
+    "logsky = ift.SimpleCorrelatedField(**dct)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "10c88949-8b94-4310-bebd-3960f8846a37",
+   "metadata": {},
+   "source": [
+    "Now we convert the log brightness into the actual brightness by exponentiation."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "90b58b3f-a3c4-4ce2-b131-73921bc7a361",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "sky = logsky.exp()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "7f8a822d-d203-448f-a91a-823c678efb59",
+   "metadata": {},
+   "source": [
+    "Again we need to adapt the domain to work with RESOLVE."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "20aece44-119c-4651-9bbb-e1bb13e6c574",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "sky_dom = rve.default_sky_domain(sdom = skydom)\n",
+    "reshape_sky = ift.DomainChangerAndReshaper(sky.target, sky_dom)\n",
+    "sky = reshape_sky @ sky"
+   ]
+  },
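+  {
+   "cell_type": "markdown",
+   "id": "1a2b3c4d-0005-4a1a-8a1a-1a2b3c4d0005",
+   "metadata": {},
+   "source": [
+    "As an optional visual check of the prior, we can draw a random sample from the sky model and plot it to see what kind of images the correlated field prior considers plausible a priori."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "1a2b3c4d-0006-4a1a-8a1a-1a2b3c4d0006",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Draw one random latent position from the sky prior and plot the corresponding\n",
+    "# sky brightness sample (optional check, not needed for the reconstruction).\n",
+    "xi_sky = ift.from_random(sky.domain)\n",
+    "ift.single_plot(sky(xi_sky))"
+   ]
+  },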
+  {
+   "cell_type": "markdown",
+   "id": "d4e67110-5fa9-4675-874d-fa6579054ed7",
+   "metadata": {},
+   "source": [
+    "## Likelihoods"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "b0c8f422-262d-4296-b4d4-ee100d02eccf",
+   "metadata": {},
+   "source": [
+    "After defining the prior models for the sky and the calibration, we can define the likelihoods. RESOLVE already includes the functionality for imaging likelihoods and calibration likelihoods. To be precise, what RESOLVE actually returns is the log-likelihood."
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "2a68ccac-e0f0-428c-8fa7-2b65e3023aef",
+   "metadata": {},
+   "source": [
+    "For the imaging likelihood, you have to specify:\n",
+    "- the observation containing the data\n",
+    "- the sky model\n",
+    "- the desired numerical accuracy for evaluating the measuremnt equation\n",
+    "- if the $w$-term in the measurement equation should be accounted for\n",
+    "- the operator for the calibration factors for the visibilities in the observation\n",
+    "- the number of threads on which to evaluate the measurement equation"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "6e96148a-6510-4bf6-96f2-0f58a62c40fa",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "lh_imaging = rve.ImagingLikelihood(oscience, sky, 1e-5, True, calibration_operator=cop_sci, nthreads=2)"
+   ]
+  },
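+  {
+   "cell_type": "markdown",
+   "id": "1a2b3c4d-0007-4a1a-8a1a-1a2b3c4d0007",
+   "metadata": {},
+   "source": [
+    "The likelihood returned by RESOLVE is a NIFTy operator defined on the joint latent space of the sky model and the calibration model. As an optional sketch (not required for the reconstruction), we can list its domain keys to see which parameters it depends on and evaluate it at a random latent position."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "1a2b3c4d-0008-4a1a-8a1a-1a2b3c4d0008",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Optional check: list the latent parameters the imaging likelihood depends on\n",
+    "# (sky and calibration keys) and evaluate it at a random prior position.\n",
+    "print(sorted(lh_imaging.domain.keys()))\n",
+    "print(lh_imaging(ift.from_random(lh_imaging.domain)))"
+   ]
+  },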
+  {
+   "cell_type": "markdown",
+   "id": "b38d0747-0d70-414a-b20e-3596ff128328",
+   "metadata": {},
+   "source": [
+    "For the calibration likelihood, you have to specify:\n",
+    "- the observation containing the calibration data\n",
+    "- the operation for the calibration factors\n",
+    "- model data of the calibration target.\n",
+    "\n",
+    "For simplicity, we will assume here that the calibrator is a 1Jy source in the center of the field of view. Therefore the model visibilities of the calibrator are all equal to $1+0i$."
+   ]
+  },
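+  {
+   "cell_type": "markdown",
+   "id": "1a2b3c4d-0009-4a1a-8a1a-1a2b3c4d0009",
+   "metadata": {},
+   "source": [
+    "Why $1+0i$? Inserting a point source at the phase center, $I(\\boldsymbol{l}) = 1\\,\\mathrm{Jy}\\cdot\\delta^{(2)}(\\boldsymbol{l})$, into the measurement equation above collapses the integral at $\\boldsymbol{l}=0$, where both the Fourier kernel and the $w$-effect are equal to one:\n",
+    "\n",
+    "$$vis_{pqt} = G_{p}(t)\\,G_{q}^{*}(t)\\cdot 1\\,\\mathrm{Jy}.$$\n",
+    "\n",
+    "Hence the model data entering the calibration likelihood (everything except the gains) is simply $1+0i$."
+   ]
+  },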
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "289ed753-4ee9-4f3c-a1c8-50f7366148de",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "lh_calib = rve.CalibrationLikelihood(\n",
+    "    ocalib2, cop_cal, ift.full(ocalib2.vis.domain, 1 + 0.0j)\n",
+    ")"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "b3b79f07-2df7-4b4d-8f13-f537c98c46a6",
+   "metadata": {},
+   "source": [
+    "## Reconstruction"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "08b70a69-cc7d-4e48-b204-388b560aa3ea",
+   "metadata": {},
+   "source": [
+    "After defining the prior model and the likelihoods, we can compute the posterior distributions. We will perform the reconstruction in two steps. In the first step, we will infer only the antenna gains. Thus, in the first step, we will only work with the calibration observation and the calibration likelihood. In the second step, we will perform a joint calibration and imaging using the antenna gain solutions from the first step as a starting point."
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "423dbde7-03c6-4840-a806-c8bc3891f204",
+   "metadata": {},
+   "source": [
+    "### Only calibration"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "f7b15ba2-7a1b-46ad-805e-5a5bd96e7721",
+   "metadata": {},
+   "source": [
+    "To visualize the resulting antenna gains we define a plot function."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "f30282cb-2250-404f-8ff6-d3ec66225768",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "def callback_cal(samples, ii):\n",
+    "    phase_mean = samples.average(phase).val\n",
+    "    logamp_mean = samples.average(logamp).val\n",
+    "    nt = phase_mean.shape[2]\n",
+    "    phase_mean = phase_mean[:,:,0:-int(nt/2),:]\n",
+    "    logamp_mean = logamp_mean[:,:,0:-int(nt/2),:]\n",
+    "\n",
+    "    \n",
+    "    fig, axs = plt.subplots(nrows=2, ncols=2, figsize=(8, 8), constrained_layout=True)\n",
+    "    fig.suptitle(f\"Phases and amplitudes after iteration {ii}\")\n",
+    "    for ant in range(phase_mean.shape[1]):\n",
+    "        axs[0,0].set_title('Rcp phases')\n",
+    "        axs[0,0].plot(phase_mean[0, ant, :,0])\n",
+    "        axs[0,1].set_title('Lcp phases')\n",
+    "        axs[0,1].plot(phase_mean[1, ant, :,0])\n",
+    "        axs[1,0].set_title('Rcp amplitude')\n",
+    "        axs[1,0].plot(np.exp(logamp_mean[0, ant, :,0]))\n",
+    "        axs[1,1].set_title('Lcp amplitude')\n",
+    "        axs[1,1].plot(np.exp(logamp_mean[1, ant, :,0]))\n",
+    "    plt.show()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "d0fef7b5-0731-4a54-9926-2ce638683a12",
+   "metadata": {},
+   "source": [
+    "We define the minimizers for the inference"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "8db54481-433e-4237-a7b2-e0643f367a54",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "ic_sampling = ift.AbsDeltaEnergyController(name='Sampling', deltaE=0.01, iteration_limit=300)\n",
+    "ic_newton = ift.AbsDeltaEnergyController(name='Newton', deltaE=0.01, iteration_limit=15)\n",
+    "minimizer = ift.NewtonCG(ic_newton, enable_logging=True)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "97b5abe8-9948-4ccb-8d65-2193c277e1b2",
+   "metadata": {},
+   "source": [
+    "Similar to the previous demo scripts, we will run the inference with the `optimize_kl` function. To speed up the convergence, we only fit for the phases and keep the amplitudes fixed for the first two iterations. In the subsequent iterations, we reconstruct both phases and amplitudes."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "a439d663-0a54-4418-a304-06297e351b81",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "n_samples = 2\n",
+    "n_iterations = 4\n",
+    "constants = lambda iiter: logamp.domain.keys() if iiter < 2 else []\n",
+    "point_estimates = lambda iiter: logamp.domain.keys() if iiter < 2 else []\n",
+    "samples, mean = ift.optimize_kl(lh_calib, n_iterations, n_samples,\n",
+    "                                minimizer, ic_sampling, None,\n",
+    "                                output_directory=\"calibration\",\n",
+    "                                return_final_position=True,\n",
+    "                                resume=True, comm=comm, inspect_callback=callback_cal,\n",
+    "                                constants=constants, point_estimates=point_estimates)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "fe97b6bb-4165-43e7-8f93-bea5842e505b",
+   "metadata": {},
+   "source": [
+    "Plot the result:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "19e829c1-1c44-4834-9bae-fd362b82746c",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "callback_cal(samples, 4)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "0f895f76-8fd1-4f82-85b9-faf87b5f16b6",
+   "metadata": {},
+   "source": [
+    "### Joint calibration and imaging"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "966d0113-809c-4114-9125-6dd0c2f2d5f6",
+   "metadata": {},
+   "source": [
+    "In the second step of the reconstruction, we will perform joint calibration and imaging. To do this, we need to define a joint calibration and imaging likelihood. The joint likelihood is simply the product of the calibration and imaging likelihoods since we assume that the noise of the two observations is independent of each other. Because we work with log-likelihoods in NIFTy and RESOLVE, the new log-likelihood is the sum of the imaging log-likelihood and the calibration log-likelihood."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "e21a6cf2-3b4b-4a17-9342-217c43ba2e12",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "lh = lh_calib + lh_imaging"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "a9b9e650-c9c0-42b6-bf64-e521fbeedc16",
+   "metadata": {},
+   "source": [
+    "We want to initialize the joint reconstruction with the result of the previous calibration reconstruction. For this we create a NIFTy field containing the result of the previous reconstruction for the phases and a random starting position for the imaging."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "3f3ea209-88cb-4adf-8f14-37ee61395608",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "init_pos = ift.MultiField.union([0.1 * ift.from_random(lh_imaging.domain), mean])"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "8d3ab616-0ed6-49ef-a94c-5b98eb2a44eb",
+   "metadata": {},
+   "source": [
+    "Again we define a callback function for ploting the results"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "cb382d1b-b4f4-4090-b116-299ebfc93171",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "def callback(samples, ii):\n",
+    "    sky_mean, sky_var = samples.sample_stat(reshape_sky.adjoint(sky))\n",
+    "    fig, axs = plt.subplots(nrows=1, ncols=2, figsize=(9, 4), constrained_layout=True)\n",
+    "    im = axs[0].imshow(sky_mean.val.T, origin='lower', norm=LogNorm())\n",
+    "    if not np.sum(sky_var.val) == 0:\n",
+    "        im_std = axs[1].imshow(sky_var.val.T, origin='lower', norm=LogNorm())\n",
+    "    fig.colorbar(im, ax=axs[:], label=r'Jy/str$^2$', aspect=40, pad=0.01)\n",
+    "    plt.show()\n",
+    "    \n",
+    "    rve.field2fits(sky_mean, f\"sky_rct_{ii}.fits\", overwrite=True)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "c46a12b0-aa39-4832-a5c6-d2a204b8c008",
+   "metadata": {},
+   "source": [
+    "We perform the reconstruction using the `optimize_kl` function. To speed up the convergence, we keep the calibration solutions fixed for the first two iterations. Thus effectively, we perform only imaging in the first two iterations. Note: This reconstruction will run for some time. See the Outlook on how to speed up the reconstruction."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "ca3aeee0-52a9-489b-a913-ac155ac80897",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "n_iterations = 4\n",
+    "n_samples = lambda iiter: 0 if iiter < 1 else 2\n",
+    "constants = lambda iiter: lh_calib.domain.keys() if iiter < 2 else []\n",
+    "point_estimates = lambda iiter: lh_calib.domain.keys() if iiter < 2 else []\n",
+    "\n",
+    "samples, mean = ift.optimize_kl(lh, n_iterations, n_samples,\n",
+    "                                minimizer, ic_sampling, None,\n",
+    "                                output_directory=\"cal_and_image\",\n",
+    "                                return_final_position=True,\n",
+    "                                resume=True, initial_position=init_pos,\n",
+    "                                inspect_callback=callback, comm=comm,\n",
+    "                                constants=constants, point_estimates=point_estimates)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "718d33aa-f5c5-4744-ba9c-677e097fc4aa",
+   "metadata": {},
+   "source": [
+    "Plot the results:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "6913d770-190b-4a18-b7d0-459d34ed0eab",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "callback(samples, 7)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "6da5f1b5-b15c-4992-af99-32598c05d829",
+   "metadata": {},
+   "source": [
+    "## Speeding up the reconstruction"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "95432cce-c13c-4060-87f6-d5fc27dee047",
+   "metadata": {},
+   "source": [
+    "One obvious expansion of this demo is to use the full data set instead of using only 10%. When using more data, you will most likely want to reduce the computational time by exploiting parallelization. With NIFTy and RESOLVE, there are several ways to parallelize. Namely:\n",
+    "- Parallelize the radio response evaluation: RESOLVE computes the radio response with the w-gridder implemented in the ducc library. The number of CPU threads on which the w-gridder is running can be specified with the `nthreads` keyword in the imaging likelihood. Especially for large data sets, running on multiple threads can be beneficial.\n",
+    "- In NIFTy, it is possible to parallelize over the samples with MPI. As this demo uses two samples, the minimization could be parallelized with two MPI tasks. To do so, you just need to convert this notebook into a proper Python script and execute it with `mpirun -n 2 python demo.py`. Converting jupyter notebooks into proper python scripts can automatically be don with `jupytext demo.ipynb --to py`."
+   ]
+  }
+ ],
+ "metadata": {
+  "kernelspec": {
+   "display_name": "Python 3 (ipykernel)",
+   "language": "python",
+   "name": "python3"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.11.5"
+  }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}