ift issues
https://gitlab.mpcdf.mpg.de/groups/ift/-/issues

RWDatabase has bad performance
https://gitlab.mpcdf.mpg.de/ift/public/dtwin/-/issues/1
2021-11-25T22:25:29Z, Max-Niklas Newrzella

The database interface implementation `RWDatabase` slows down the simulation considerably, if used.
This is, at least in part, due to the usage of SQLAlchemy's ORM.

Try to replace FITS I/O where possible
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/342
2021-12-06T08:51:12Z, Martin Reinecke

Just a general remark for future I/O changes: please let's stop using FITS (and maybe even replace the FITS format in places where we don't need to be interoperable with anything else). This format really is incredibly old and dusty. Anything with better metadata support should be fine, probably HDF5 (which we depend on anyway).

`Variational inference visualized` is too slow
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/343
2021-12-08T22:22:56Z, Philipp Arras

Running the file `https://gitlab.mpcdf.mpg.de/ift/nifty/-/blob/NIFTy_8/demos/variational_inference_visualized.py` is so slow that it is not very instructive anymore.

Unexpected behaviour of `MultiField.scale`
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/344
2022-01-13T14:10:36Z, Philipp Arras

The setup:
```python
import nifty8 as ift
import numpy as np
dom = ift.UnstructuredDomain([1])
mdom = {"a": dom, "b": dom}
fld = ift.from_random(mdom)
fld2 = ift.from_random(mdom)
```
I would expect the following line to fail, since the argument of `scale` is not a scalar.
```python
fld3 = fld.scale(fld2)
```
But it doesn't.
Does it at least compute something sensible?
```python
for kk in mdom.keys():
    assert np.all(fld[kk].val * fld2[kk].val == fld3[kk].val)
```
No.

sample list average crashes for mpi run with map estimator
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/345
2022-01-16T10:21:53Z, Jakob Roth

When computing a map estimate with multiple MPI tasks, the sample list average method crashes.
Here is example code that works when executed normally with Python, but crashes in an MPI run:
```python
import numpy as np
import nifty8 as ift
ift.random.push_sseq_from_seed(27)
try:
    from mpi4py import MPI
    comm = MPI.COMM_WORLD
    master = comm.Get_rank() == 0
except ImportError:
    comm = None
    master = True
position_space = ift.RGSpace([128, 128])
op = ift.makeOp(ift.full(position_space, 10.))
noise = 0.1
N = ift.ScalingOperator(position_space, noise, np.float64)
mock_position = ift.from_random(op.domain)
data = op(mock_position) + N.draw_sample()
lh = ift.GaussianEnergy(mean=data, inverse_covariance=N.inverse) @ op
ic_sampling = ift.AbsDeltaEnergyController(
    name="Sampling (linear)", deltaE=0.05, iteration_limit=10
)
ic_newton = ift.AbsDeltaEnergyController(
    name="Newton", deltaE=0.5, convergence_level=2, iteration_limit=5
)
minimizer = ift.NewtonCG(ic_newton)
def callback(samples, i):
    plot = ift.Plot()
    mean = samples.average(op)
    plot.add(mean, title="Reconstruction", zmin=0, zmax=1)
    if master:
        plot.output()
n_iterations = 3
n_samples = lambda iiter: 0 if iiter < 1 else 2
samples = ift.optimize_kl(
    lh,
    n_iterations,
    n_samples,
    minimizer,
    ic_sampling,
    None,
    overwrite=True,
    comm=comm,
    callback=callback,
)
```

More MPI Bugs
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/347
2022-01-14T18:21:02Z, Jakob Roth

When I execute the getting_started_3.py script with MPI (`mpiexec -n 2 python getting_started_3.py`), I get the following error message:
```
Traceback (most recent call last):
File "/home/jakob/nifty/demos/getting_started_3.py", line 163, in <module>
main()
File "/home/jakob/nifty/demos/getting_started_3.py", line 153, in main
[pspec.force(mock_position), samples.average(logspec).exp()],
File "/home/jakob/nifty/nifty8/minimization/sample_list.py", line 306, in average
return utilities.allreduce_sum(res, self.comm) / n
File "/home/jakob/nifty/nifty8/utilities.py", line 371, in allreduce_sum
vals[j] = vals[j] + comm.recv(source=who[j+step])
File "/home/jakob/nifty/nifty8/field.py", line 726, in func2
return self._binary_op(other, op)
File "/home/jakob/nifty/nifty8/field.py", line 689, in _binary_op
utilities.check_object_identity(other._domain, self._domain)
File "/home/jakob/nifty/nifty8/utilities.py", line 419, in check_object_identity
raise ValueError(f"Mismatch:\n{obj0}\n{obj1}")
ValueError: Mismatch:
DomainTuple, len: 1
* PowerSpace(harmonic_partner=RGSpace(shape=(128, 128), distances=(1.0, 1.0), harmonic=True), binbounds=None)
DomainTuple, len: 1
* PowerSpace(harmonic_partner=RGSpace(shape=(128, 128), distances=(1.0, 1.0), harmonic=True), binbounds=None)
```
When executing without MPI, I get no error.

Implement MPI-parallel cumsum
https://gitlab.mpcdf.mpg.de/ift/resolve/-/issues/5
2022-01-19T08:08:59Z, Martin Reinecke

Given: every task has an array `a(:,freq)`, where the last axis is distributed over tasks.
Wanted: `suma`, where `suma(:,i) = sum(a[:,:i+1])` with `a` the full concatenated array, i.e. an inclusive cumulative sum along the distributed axis.
Possible approach:
- every task computes `suma_loc = np.cumsum(a, axis=-1)`
- the "last" entries are gathered on all tasks: `tmp = np.array(comm.allgather(suma_loc[:,-1]))`
- compute the local offset to add to `suma_loc`: `tmp = np.sum(tmp[:rank], axis=0)` (the gathered array has shape `(ntasks, n)`, so the sum runs over all earlier ranks)
- add to `suma_loc`: `suma = suma_loc+tmp`
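The steps above can be sketched by simulating the per-task work serially (a sketch, not MPI-tested: the loop over `rank` stands in for the MPI tasks, and building `tmp` from all chunks stands in for `comm.allgather`; all names are illustrative):

```python
import numpy as np

def distributed_cumsum(chunks):
    """Serially simulate the proposed MPI-parallel cumsum.

    chunks[rank] plays the role of the local array `a` on each task;
    the last axis is the one distributed over tasks.
    """
    # Step 1: every task computes its local cumulative sum.
    suma_loc = [np.cumsum(a, axis=-1) for a in chunks]
    # Step 2: the "last" entries are gathered on all tasks
    # (stands in for np.array(comm.allgather(suma_loc[:, -1]))).
    tmp = np.array([s[:, -1] for s in suma_loc])  # shape (ntasks, n)
    # Steps 3+4: each task adds the summed totals of all earlier ranks.
    return [s + np.sum(tmp[:rank], axis=0)[:, None]
            for rank, s in enumerate(suma_loc)]

# Check against a plain cumsum over the concatenated array.
rng = np.random.default_rng(0)
chunks = [rng.standard_normal((3, k)) for k in (2, 4, 3)]
result = np.concatenate(distributed_cumsum(chunks), axis=-1)
reference = np.cumsum(np.concatenate(chunks, axis=-1), axis=-1)
assert np.allclose(result, reference)
```

In a real MPI run, only the allgather step communicates, and it moves a single column per task, so the extra cost over a purely local cumsum stays small.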