Commit ca11700c authored by Martin Reinecke

Merge branch 'NIFTy_5' into more_tests

parents 96105fd5 bab4d8c1
 FROM debian:testing-slim
 RUN apt-get update && apt-get install -y \
-# Needed for gitlab tests
+# Needed for setup
-git \
+git python3-pip \
 # Packages needed for NIFTy
-libfftw3-dev \
+python3-scipy \
-python3 python3-pip python3-dev python3-scipy \
 # Documentation build dependencies
-python3-sphinx python3-sphinx-rtd-theme \
+python3-sphinx-rtd-theme \
 # Testing dependencies
-python3-coverage python3-pytest python3-pytest-cov \
+python3-pytest-cov jupyter \
 # Optional NIFTy dependencies
-openmpi-bin libopenmpi-dev python3-mpi4py \
+libfftw3-dev python3-mpi4py python3-matplotlib \
-# Packages needed for NIFTy
+# more optional NIFTy dependencies
 && pip3 install pyfftw \
-# Optional NIFTy dependencies
 && pip3 install git+https://gitlab.mpcdf.mpg.de/ift/pyHealpix.git \
-# Testing dependencies
+&& pip3 install jupyter \
-&& rm -rf /var/lib/apt/lists/*
-# Needed for demos to be running
-RUN apt-get update && apt-get install -y python3-matplotlib \
-&& python3 -m pip install --upgrade pip && python3 -m pip install jupyter \
 && rm -rf /var/lib/apt/lists/*
 # Set matplotlib backend
...
@@ -29,4 +29,4 @@ add_module_names = False
 html_theme = "sphinx_rtd_theme"
 html_logo = 'nifty_logo_black.png'
-exclude_patterns = ['mod/modules.rst']
+exclude_patterns = ['mod/modules.rst', 'mod/*version.rst']
@@ -219,14 +219,14 @@ Here, the uncertainty of the field and the power spectrum of its generating proc
 | **Output of tomography demo getting_started_3.py** |
 +----------------------------------------------------+
 | .. image:: images/getting_started_3_setup.png |
-|    :width: 50 % |
+| |
 +----------------------------------------------------+
 | Non-Gaussian signal field, |
 | data backprojected into the image domain, power |
 | spectrum of underlying Gaussian process. |
 +----------------------------------------------------+
 | .. image:: images/getting_started_3_results.png |
-|    :width: 50 % |
+| |
 +----------------------------------------------------+
 | Posterior mean field signal |
 | reconstruction, its uncertainty, and the power |
@@ -235,10 +235,10 @@ Here, the uncertainty of the field and the power spectrum of its generating proc
 | orange line). |
 +----------------------------------------------------+
 
-Maximim a Posteriori
+Maximum a Posteriori
 --------------------
 
-One popular field estimation method is Maximim a Posteriori (MAP).
+One popular field estimation method is Maximum a Posteriori (MAP).
 It only requires to minimize the information Hamiltonian, e.g by a gradient descent method that stops when
@@ -246,7 +246,7 @@ It only requires to minimize the information Hamiltonian, e.g by a gradient desc
 
     \frac{\partial \mathcal{H}(d,\xi)}{\partial \xi} = 0.
 
-NIFTy5 automatically calculates the necessary gradient from a generative model of the signal and the data and to minimize the Hamiltonian.
+NIFTy5 automatically calculates the necessary gradient from a generative model of the signal and the data and uses this to minimize the Hamiltonian.
 
 However, MAP often provides unsatisfactory results in cases of deep hierarchical Bayesian networks.
 The reason for this is that MAP ignores the volume factors in parameter space, which are not to be neglected in deciding whether a solution is reasonable or not.
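To make the MAP recipe above concrete in code: the following is only a schematic sketch. `lh` (a likelihood energy operator) and `position` (an initial latent-space field) are placeholders, and the exact signatures of EnergyAdapter, NewtonCG and GradientNormController should be checked against the NIFTy5 sources; the class names themselves appear in the imports touched by this commit.

    import nifty5 as ift

    # `lh` and `position` are hypothetical placeholders, not part of this commit.
    H = ift.StandardHamiltonian(lh)            # information Hamiltonian H(d, xi)
    energy = ift.EnergyAdapter(position, H)    # exposes value and gradient of H at `position`
    ic = ift.GradientNormController(iteration_limit=30, tol_abs_gradnorm=1e-5)
    minimizer = ift.NewtonCG(ic)               # descent stops once the gradient (nearly) vanishes
    energy, convergence = minimizer(energy)
    xi_map = energy.position                   # MAP estimate of the latent parameters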
@@ -270,7 +270,7 @@ As a compromise between being optimal and being computationally affordable, the
 
     \int \mathcal{D}\xi \,\mathcal{Q}(\xi) \log \left( \frac{\mathcal{Q}(\xi)}{\mathcal{P}(\xi)} \right)
 
 Minimizing this with respect to all entries of the covariance :math:`D` is unfeasible for fields.
-Therefore, Metric Gaussian Variational Inference (MGVI) approximates the precision matrix at the location of the current mean :math:`M=D^{-1}` by the Bayesian Fisher information metric,
+Therefore, Metric Gaussian Variational Inference (MGVI) approximates the posterior precision matrix :math:`D^{-1}` at the location of the current mean :math:`m` by the Bayesian Fisher information metric,
 
 .. math::
...
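The MGVI scheme sketched above alternates between drawing samples from the metric Gaussian and optimizing the KL with respect to the mean. A rough sketch only, reusing the hypothetical `H` and `minimizer` from the MAP example; MetricGaussianKL is among the imports this commit touches, but the exact call signature is an assumption.

    import nifty5 as ift

    mean = ift.full(H.domain, 0.)              # hypothetical starting point in latent space
    for _ in range(5):                         # a few MGVI iterations
        KL = ift.MetricGaussianKL(mean, H, 5)  # stochastic KL estimate from 5 samples
        KL, _ = minimizer(KL)                  # update the mean; the covariance stays fixed
        mean = KL.position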
@@ -64,7 +64,7 @@ from .minimization.nonlinear_cg import NonlinearCG
 from .minimization.descent_minimizers import (
     DescentMinimizer, SteepestDescent, VL_BFGS, L_BFGS, RelaxedNewton,
     NewtonCG)
-from .minimization.scipy_minimizer import (ScipyMinimizer, L_BFGS_B, ScipyCG)
+from .minimization.scipy_minimizer import L_BFGS_B
 from .minimization.energy import Energy
 from .minimization.quadratic_energy import QuadraticEnergy
 from .minimization.energy_adapter import EnergyAdapter
@@ -73,7 +73,7 @@ from .minimization.metric_gaussian_kl import MetricGaussianKL
 from .sugar import *
 from .plot import Plot
-from .library.smooth_linear_amplitude import (SLAmplitude, CepstrumOperator)
+from .library.smooth_linear_amplitude import SLAmplitude, CepstrumOperator
 from .library.inverse_gamma_operator import InverseGammaOperator
 from .library.los_response import LOSResponse
 from .library.dynamic_operator import (dynamic_operator,
...
@@ -19,6 +19,7 @@ import numpy as np
 from .field import Field
 from .linearization import Linearization
+from .operators.linear_operator import LinearOperator
 from .sugar import from_random
 
 __all__ = ["consistency_check", "check_value_gradient_consistency",
@@ -62,8 +63,40 @@ def _full_implementation(op, domain_dtype, target_dtype, atol, rtol):
     _inverse_implementation(op, domain_dtype, target_dtype, atol, rtol)
 
 
+def _check_linearity(op, domain_dtype, atol, rtol):
+    fld1 = from_random("normal", op.domain, dtype=domain_dtype)
+    fld2 = from_random("normal", op.domain, dtype=domain_dtype)
+    alpha = np.random.random()
+    val1 = op(alpha*fld1+fld2)
+    val2 = alpha*op(fld1)+op(fld2)
+    _assert_allclose(val1, val2, atol=atol, rtol=rtol)
+
+
 def consistency_check(op, domain_dtype=np.float64, target_dtype=np.float64,
                       atol=0, rtol=1e-7):
+    """Checks whether times(), adjoint_times(), inverse_times() and
+    adjoint_inverse_times() (if in capability list) is implemented
+    consistently. Additionally, it checks whether the operator is linear
+    actually.
+
+    Parameters
+    ----------
+    op : LinearOperator
+        Operator which shall be checked.
+    domain_dtype : FIXME
+        The data type of the random vectors in the operator's domain. Default
+        is `np.float64`.
+    target_dtype : FIXME
+        The data type of the random vectors in the operator's target. Default
+        is `np.float64`.
+    atol : float
+        FIXME. Default is 0.
+    rtol : float
+        FIXME. Default is 0.
+    """
+    if not isinstance(op, LinearOperator):
+        raise TypeError('This test tests only linear operators.')
+    _check_linearity(op, domain_dtype, atol, rtol)
     _full_implementation(op, domain_dtype, target_dtype, atol, rtol)
     _full_implementation(op.adjoint, target_dtype, domain_dtype, atol, rtol)
     _full_implementation(op.inverse, target_dtype, domain_dtype, atol, rtol)
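For context, the check added above would typically be driven from a small test like the following sketch; the domain and operator are arbitrary placeholders, and the import path of `consistency_check` is an assumption.

    import numpy as np
    import nifty5 as ift
    from nifty5.extra import consistency_check   # import path assumed

    dom = ift.RGSpace((8, 8))                    # hypothetical test domain
    op = ift.ScalingOperator(2., dom)            # a simple, trivially linear operator
    consistency_check(op, domain_dtype=np.float64, target_dtype=np.float64)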
@@ -124,8 +157,10 @@ def _check_consistency(op, loc, tol, ntries, do_metric):
 def check_value_gradient_consistency(op, loc, tol=1e-8, ntries=100):
+    """FIXME"""
     _check_consistency(op, loc, tol, ntries, False)
 
 
 def check_value_gradient_metric_consistency(op, loc, tol=1e-8, ntries=100):
+    """FIXME"""
     _check_consistency(op, loc, tol, ntries, True)
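The two gradient checks are invoked in the same spirit; in this sketch `nonlinear_op` stands for any NIFTy Operator with a value and a Jacobian, and the import path is again an assumption.

    import nifty5 as ift
    from nifty5.extra import check_value_gradient_consistency   # import path assumed

    loc = ift.from_random('normal', nonlinear_op.domain)   # random expansion point
    check_value_gradient_consistency(nonlinear_op, loc, tol=1e-8, ntries=20)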
@@ -31,13 +31,13 @@ class Field(object):
     Parameters
     ----------
     domain : DomainTuple
-        the domain of the new Field
+        The domain of the new Field.
     val : data_object
         This object's global shape must match the domain shape
         After construction, the object will no longer be writeable!
 
-    Notes
-    -----
+    Note
+    ----
     If possible, do not invoke the constructor directly, but use one of the
     many convenience functions for instantiation!
     """
@@ -76,14 +76,14 @@ class Field(object):
         Parameters
         ----------
         domain : Domain, tuple of Domain, or DomainTuple
-            domain of the new Field
+            Domain of the new Field.
         val : float/complex/int scalar
-            fill value. Data type of the field is inferred from val.
+            Fill value. Data type of the field is inferred from val.
 
         Returns
         -------
         Field
-            the newly created field
+            The newly created Field.
         """
         if not np.isscalar(val):
             raise TypeError("val must be a scalar")
@@ -99,7 +99,7 @@ class Field(object):
         Parameters
         ----------
         domain : DomainTuple, tuple of Domain, or Domain
-            the domain of the new Field
+            The domain of the new Field.
         arr : numpy.ndarray
             The data content to be used for the new Field.
             Its shape must match the shape of `domain`.
@@ -132,8 +132,9 @@ class Field(object):
         Returns
         -------
-        numpy.ndarray : array containing all field entries, which can be
-            modified. Its shape is identical to `self.shape`.
+        numpy.ndarray
+            Array containing all field entries, which can be modified. Its
+            shape is identical to `self.shape`.
         """
         return dobj.to_global_data_rw(self._val)
@@ -171,9 +172,9 @@ class Field(object):
         random_type : 'pm1', 'normal', or 'uniform'
             The random distribution to use.
         domain : DomainTuple
-            The domain of the output random field
+            The domain of the output random Field.
         dtype : type
-            The datatype of the output random field
+            The datatype of the output random Field.
 
         Returns
         -------
@@ -187,10 +188,10 @@ class Field(object):
     @property
     def val(self):
-        """dobj.data_object : the data object storing the field's entries
+        """dobj.data_object : the data object storing the field's entries.
 
-        Notes
-        -----
+        Note
+        ----
         This property is intended for low-level, internal use only. Do not use
         from outside of NIFTy's core; there should be better alternatives.
         """
@@ -236,13 +237,13 @@ class Field(object):
         Parameters
         ----------
         spaces : int, tuple of int or None
-            indices of the sub-domains of the field's domain to be considered.
+            Indices of the sub-domains of the field's domain to be considered.
            If `None`, the entire domain is used.
 
         Returns
         -------
         float or None
-            if the requested sub-domain has a uniform volume element, it is
+            If the requested sub-domain has a uniform volume element, it is
             returned. Otherwise, `None` is returned.
         """
         if np.isscalar(spaces):
@@ -264,7 +265,7 @@ class Field(object):
         Parameters
         ----------
         spaces : int, tuple of int or None
-            indices of the sub-domains of the field's domain to be considered.
+            Indices of the sub-domains of the field's domain to be considered.
            If `None`, the entire domain is used.
 
         Returns
@@ -331,8 +332,9 @@ class Field(object):
         x : Field
 
         Returns
-        ----------
+        -------
-        Field, defined on the product space of self.domain and x.domain
+        Field
+            Defined on the product space of self.domain and x.domain.
         """
         if not isinstance(x, Field):
             raise TypeError("The multiplier must be an instance of " +
...
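The constructors whose docstrings are polished above are used roughly as follows; a sketch only, with a placeholder domain and default dtypes assumed.

    import nifty5 as ift

    dom = ift.RGSpace((16, 16))                # hypothetical domain
    f1 = ift.Field.full(dom, 3.)               # constant field, dtype inferred from the value
    f2 = ift.Field.from_random('normal', dom)  # 'pm1', 'normal' or 'uniform'
    arr = f2.to_global_data_rw()               # writeable numpy copy, shape == f2.shape
    print(f2.scalar_weight())                  # uniform volume element of the RGSpace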
@@ -18,7 +18,8 @@
 from ..minimization.energy_adapter import EnergyAdapter
 from ..multi_field import MultiField
 from ..operators.distributors import PowerDistributor
-from ..operators.energy_operators import StandardHamiltonian, InverseGammaLikelihood
+from ..operators.energy_operators import (StandardHamiltonian,
+                                          InverseGammaLikelihood)
 from ..operators.scaling_operator import ScalingOperator
 from ..operators.simple_linear_operators import ducktape
@@ -72,7 +73,8 @@ def make_adjust_variances(a,
     if scaling is not None:
         x = ScalingOperator(scaling, x.target)(x)
 
-    return StandardHamiltonian(InverseGammaLikelihood(d_eval)(x), ic_samp=ic_samp)
+    return StandardHamiltonian(InverseGammaLikelihood(d_eval/2.)(x),
+                               ic_samp=ic_samp)
 
 
 def do_adjust_variances(position,
...
@@ -16,6 +16,7 @@
 # NIFTy is being developed at the Max-Planck-Institut fuer Astrophysik.
 
 from functools import reduce
+from operator import mul
 
 from ..domain_tuple import DomainTuple
 from ..operators.contraction_operator import ContractionOperator
@@ -61,6 +62,10 @@ def CorrelatedField(target, amplitude_operator, name='xi', codomain=None):
     power_distributor = PowerDistributor(h_space, p_space)
     A = power_distributor(amplitude_operator)
     vol = h_space.scalar_dvol**-0.5
+    # When doubling the resolution of `tgt` the value of the highest k-mode
+    # will scale with a square root. `vol` cancels this effect such that the
+    # same power spectrum can be used for the spaces with the same volume,
+    # different resolutions and the same object in them.
     return ht(vol*A*ducktape(h_space, None, name))
@@ -107,7 +112,8 @@ def MfCorrelatedField(target, amplitudes, name='xi'):
     d = [dd0, dd1]
     a = [dd @ amplitudes[ii] for ii, dd in enumerate(d)]
-    a = reduce(lambda x, y: x*y, a)
+    a = reduce(mul, a)
     A = pd @ a
-    vol = reduce(lambda x, y: x*y, [sp.scalar_dvol**-0.5 for sp in hsp])
+    # For `vol` see comment in `CorrelatedField`
+    vol = reduce(mul, [sp.scalar_dvol**-0.5 for sp in hsp])
     return ht(vol*A*ducktape(hsp, None, name))
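For orientation, the amplitude model and the correlated field defined in this file are typically wired together as in the following sketch; the spaces and all SLAmplitude hyperparameters are illustrative placeholders, not values taken from this commit.

    import nifty5 as ift

    position_space = ift.RGSpace((128, 128))          # hypothetical signal domain
    harmonic_space = position_space.get_default_codomain()
    power_space = ift.PowerSpace(harmonic_space)

    A = ift.SLAmplitude(target=power_space, n_pix=64, a=3., k0=.4,
                        sm=-5., sv=.5, im=0., iv=.3)  # placeholder hyperparameters
    correlated_field = ift.CorrelatedField(position_space, A)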
@@ -26,12 +26,13 @@ from ..sugar import makeOp
 class InverseGammaOperator(Operator):
-    """Operator which transforms a Gaussian into an inverse gamma distribution.
+    """Transforms a Gaussian into an inverse gamma distribution.
 
     The pdf of the inverse gamma distribution is defined as follows:
 
-    .. math ::
+    .. math::
-        \\frac{q^\\alpha}{\\Gamma(\\alpha)}x^{-\\alpha -1}\\exp \\left(-\\frac{q}{x}\\right)
+        \\frac{q^\\alpha}{\\Gamma(\\alpha)}x^{-\\alpha -1}
+        \\exp \\left(-\\frac{q}{x}\\right)
 
     That means that for large x the pdf falls off like :math:`x^(-\\alpha -1)`.
     The mean of the pdf is at :math:`q / (\\alpha - 1)` if :math:`\\alpha > 1`.
@@ -54,7 +55,8 @@ class InverseGammaOperator(Operator):
     """
     def __init__(self, domain, alpha, q, delta=0.001):
         self._domain = self._target = DomainTuple.make(domain)
-        self._alpha, self._q, self._delta = float(alpha), float(q), float(delta)
+        self._alpha, self._q, self._delta = \
+            float(alpha), float(q), float(delta)
         self._xmin, self._xmax = -8.2, 8.2
         # Precompute
         xs = np.arange(self._xmin, self._xmax+2*delta, delta)
...
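The density quoted in the docstring above is the standard inverse gamma pdf; purely as an independent sanity check (using scipy, not NIFTy), the formula and the quoted mean can be verified numerically:

    import numpy as np
    from scipy.special import gamma
    from scipy.stats import invgamma

    alpha, q = 3.0, 2.0
    x = np.linspace(0.1, 10., 1000)
    # q^alpha / Gamma(alpha) * x^(-alpha-1) * exp(-q/x), as in the docstring
    pdf = q**alpha / gamma(alpha) * x**(-alpha - 1.) * np.exp(-q / x)
    assert np.allclose(pdf, invgamma.pdf(x, a=alpha, scale=q))
    print(q / (alpha - 1.), invgamma.mean(a=alpha, scale=q))   # both give 1.0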
@@ -138,7 +138,8 @@ def SLAmplitude(*, target, n_pix, a, k0, sm, sv, im, iv, keys=['tau', 'phi']):
     of the linear function.
 
     The prior on the smooth component is parametrized by two real numbers: the
-    strength and the cutoff of the smoothness prior (see :class:`CepstrumOperator`).
+    strength and the cutoff of the smoothness prior
+    (see :class:`CepstrumOperator`).
 
     Parameters
     ----------
...
@@ -130,4 +130,4 @@ class MetricGaussianKL(Energy):
 
     def __repr__(self):
         return 'KL ({} samples):\n'.format(len(
-            self._samples)) + utilities.indent(self._ham.__repr__())
+            self._samples)) + utilities.indent(self._hamiltonian.__repr__())
@@ -93,7 +93,7 @@ class _MinHelper(object):
         return _toArray_rw(res)
 
 
-class ScipyMinimizer(Minimizer):
+class _ScipyMinimizer(Minimizer):
     """Scipy-based minimizer
 
     Parameters
@@ -136,19 +136,19 @@ class ScipyMinimizer(Minimizer):
 def L_BFGS_B(ftol, gtol, maxiter, maxcor=10, disp=False, bounds=None):
-    """Returns a ScipyMinimizer object carrying out the L-BFGS-B algorithm.
+    """Returns a _ScipyMinimizer object carrying out the L-BFGS-B algorithm.
 
     See Also
     --------
-    ScipyMinimizer
+    _ScipyMinimizer
     """
     options = {"ftol": ftol, "gtol": gtol, "maxiter": maxiter,
                "maxcor": maxcor, "disp": disp}
-    return ScipyMinimizer("L-BFGS-B", options, False, bounds)
+    return _ScipyMinimizer("L-BFGS-B", options, False, bounds)
 
 
-class ScipyCG(Minimizer):
-    """Returns a ScipyMinimizer object carrying out the conjugate gradient
+class _ScipyCG(Minimizer):
+    """Returns a _ScipyMinimizer object carrying out the conjugate gradient
     algorithm as implemented by SciPy.
 
     This class is only intended for double-checking NIFTy's own conjugate
...
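Since ScipyMinimizer and ScipyCG become private here, user code is expected to go through factories such as L_BFGS_B. A usage sketch: the controller values are arbitrary, `some_energy` is a placeholder for any NIFTy Energy, and the calling convention is assumed to match the other minimizers.

    import nifty5 as ift

    minimizer = ift.L_BFGS_B(ftol=1e-10, gtol=1e-5, maxiter=100)
    energy, convergence = minimizer(some_energy)   # same protocol as the descent minimizers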
@@ -24,7 +24,8 @@ from .linear_operator import LinearOperator
 class ContractionOperator(LinearOperator):
-    """A :class:`LinearOperator` which sums up fields into the direction of subspaces.
+    """A :class:`LinearOperator` which sums up fields into the direction of
+    subspaces.
 
     This Operator sums up a field which is defined on a :class:`DomainTuple`
     to a :class:`DomainTuple` which contains the former as a subset.
...
@@ -23,31 +23,38 @@ from .linear_operator import LinearOperator
 class DomainTupleFieldInserter(LinearOperator):
-    """Writes the content of a :class:`Field` into one slice of a :class:`DomainTuple`.
+    """Writes the content of a :class:`Field` into one slice of a
+    :class:`DomainTuple`.
 
     Parameters
     ----------
-    domain : Domain, tuple of Domain or DomainTuple
-    new_space : Domain, tuple of Domain or DomainTuple
-    index : Integer
-        Index at which new_space shall be added to domain.
-    position : tuple
-        Slice in new_space in which the input field shall be written into.
+    target : Domain, tuple of Domain or DomainTuple
+    space : int
+        The index of the sub-domain which is inserted.
+    index : tuple
+        Slice in new sub-domain in which the input field shall be written into.
     """
-    def __init__(self, domain, new_space, index, position):