Commit 29a722fd authored by Lukas Platz's avatar Lukas Platz

introduce nifty6.mpi namespace

parent 07ef6674
Pipeline #65082 passed with stages in 8 minutes and 9 seconds
@@ -17,8 +17,8 @@ onto each other exactly. We experienced that preconditioning in the `MetricGauss
via `napprox` breaks the inference scheme with the new model, so `napprox` may not
be used here.

Removal of the standard MPI parallelization scheme:
===================================================
When several MPI tasks are present, NIFTy5 distributes every Field over these
tasks by splitting it along the first axis. This approach to parallelism is not
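To illustrate what splitting a Field along the first axis means, here is a small NumPy sketch. It is purely illustrative and not NIFTy5's actual implementation; the helper name `local_slice` and the chunking formula are assumptions.

```python
import numpy as np

def local_slice(n_global, ntask, rank):
    # Contiguous share of the first axis owned by MPI task `rank`
    # (illustrative only; name and formula are assumptions).
    lo = (n_global * rank) // ntask
    hi = (n_global * (rank + 1)) // ntask
    return slice(lo, hi)

field = np.arange(12.0).reshape(6, 2)  # stand-in for global Field data
ntask = 3                              # pretend three MPI tasks exist
chunks = [field[local_slice(field.shape[0], ntask, r)]
          for r in range(ntask)]

# Concatenating the per-task chunks recovers the global Field exactly.
assert np.array_equal(np.concatenate(chunks, axis=0), field)
```

Every task stores only its own chunk, which is why shape-related properties such as `local_shape` existed on domains in the first place.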
@@ -37,3 +37,11 @@ User-visible changes:
- the property `local_shape` has been removed from `Domain` (and subclasses)
  and `DomainTuple`.
Transfer of MPI parallelization into operators:
===============================================
As was already the case with `MetricGaussianKL_MPI` in NIFTy5, MPI
parallelization in NIFTy6 is handled by specialized MPI-enabled operators.
They are accessible via the `nifty6.mpi` namespace, from which they can be
imported directly: `from nifty6.mpi import MPIenabledOperator`.
from .minimization.metric_gaussian_kl_mpi import MetricGaussianKL_MPI
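The line above re-exports `MetricGaussianKL_MPI` through the new `nifty6.mpi` namespace. A minimal usage sketch of the resulting import pattern follows; only the module path and class name come from this changelog, and the fallback logic is an illustrative assumption, not part of NIFTy's API.

```python
# Hedged sketch: pull the MPI-enabled KL operator from the new
# nifty6.mpi namespace, degrading gracefully when NIFTy6 (or its MPI
# support) is not installed. The fallback is illustrative only.
try:
    from nifty6.mpi import MetricGaussianKL_MPI
except ImportError:
    MetricGaussianKL_MPI = None  # NIFTy6 or mpi4py not available

if MetricGaussianKL_MPI is None:
    print("nifty6.mpi not available; running without MPI parallelization")
```

Keeping all MPI-enabled operators behind one namespace means a serial run never has to import `mpi4py` at all.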