ift issues (https://gitlab.mpcdf.mpg.de/groups/ift/-/issues, 2022-11-09T13:10:23Z)

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/353
Expensive sanity checks in optimize_kl.py? (2022-11-09T13:10:23Z, Martin Reinecke)

The file `optimize_kl.py` contains several calls to `check_MPI_equality`, which ensure that certain objects are identical on all MPI tasks. While this is definitely useful, some of these calls compare objects that can become huge (which costs a lot of additional memory and time) and will in extreme cases exceed 2 GB, causing MPI failures.
Would it be possible to either remove these cases or simplify the comparison somehow, e.g. by hashing the data and comparing the hash across MPI tasks?
@pfrank, @gedenhof, @kjako

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/352
Minisanity history broken if likelihoods with different names are used (2022-05-13T14:06:13Z, Philipp Arras <parras@mpa-garching.mpg.de>)

Reproducer:
```python
import nifty8 as ift
dom = ift.RGSpace(2)
lh0 = ift.GaussianEnergy(domain=dom).ducktape("inp")
lh1 = ift.GaussianEnergy(domain=dom).ducktape("inp")
lh0.name = "First lh"
lh1.name = "Second lh"
ic = ift.GradientNormController(iteration_limit=2)
mini = ift.SteepestDescent(ic)
def lhs(iglobal):
    if iglobal == 0:
        return lh0
    return lh1
ift.optimize_kl(lhs, 2, 0, mini, ic, None, overwrite=True)
```
Output:
```
Iteration limit reached. Assuming convergence
========================================================
reduced χ² mean # dof
--------------------------------------------------------
Data residuals
First lh 0.0 0.0 2
Latent space
inp 0.0 0.0 2
========================================================
/home/philipp/git/nifty/nifty8/plot.py:342: UserWarning: Attempting to set identical left == right == 0.0 results in singular transformations; automatically expanding.
ax.set_xlim((mi-delta, ma+delta))
========================================================
reduced χ² mean # dof
--------------------------------------------------------
Data residuals
Second lh 0.0 0.0 2
Latent space
inp 0.0 0.0 2
========================================================
Traceback (most recent call last):
File "/home/philipp/asdf.py", line 18, in <module>
ift.optimize_kl(lhs, 2, 0, mini, ic, None, overwrite=True)
File "/home/philipp/git/nifty/nifty8/minimization/optimize_kl.py", line 392, in optimize_kl
_minisanity(likelihood_energy, iglobal, sl, comm, plot_minisanity_history)
File "/home/philipp/git/nifty/nifty8/minimization/optimize_kl.py", line 627, in _minisanity
v = ms_val[k1][k2][k3]
KeyError: 'First lh'
```

Assignee: Lukas Platz

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/351
correlated Field model linearization adjoint very slow if total_N != 0 (2022-03-29T18:52:49Z, Jakob Roth)

As noticed by @parras, @pfrank and me, the adjoint of the linearization of the correlated field model is very slow if total_N != 0.
Here is a demo:
```python
import nifty8 as ift
import numpy as np
sp1 = ift.RGSpace((4000, 4000))
cfmaker = ift.CorrelatedFieldMaker('')
cfmaker.add_fluctuations(sp1, (0.1, 1e-2), (2, .2), (.01, .5), (-4, 2.),
                         'amp1')
cfmaker.set_amplitude_total_offset(0., (1e-2, 1e-6))
cf0 = cfmaker.finalize(0)
n_tot=1
cfmaker = ift.CorrelatedFieldMaker('', total_N=n_tot)
cfmaker.add_fluctuations(sp1, (0.1, 1e-2), (2, .2), (.01, .5), (-4, 2.),
                         'amp1', dofdex=np.arange(n_tot))
cfmaker.set_amplitude_total_offset(0., (1e-2, 1e-6), dofdex=np.arange(n_tot))
cf1 = cfmaker.finalize(0)
print("benchmark for total_N = 0")
ift.exec_time(cf0)
print("benchmark for total_N = 1")
ift.exec_time(cf1)
```
I had a quick look at the issue. The problem seems to be the _Distributor operator in the correlated field model file: https://gitlab.mpcdf.mpg.de/ift/nifty/-/blob/NIFTy_8/src/library/correlated_fields.py#L214
For the case total_N != 0 this _Distributor is called multiple times. The call that is very slow in the adjoint direction is at https://gitlab.mpcdf.mpg.de/ift/nifty/-/blob/NIFTy_8/src/library/correlated_fields.py#L362 and the line below it.
Note that this _Distributor operator is only used in the case total_N != 0; therefore the simple case with total_N = 0 is reasonably fast.

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/350
Optimize_kl: if save_strategy="last" and the number of samples decreases the superfluous old samples are not deleted (2022-02-25T17:34:40Z, Philipp Arras <parras@mpa-garching.mpg.de>)

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/349
Do not track ipynb and instead use jupytext (2022-05-28T08:50:24Z, Gordian Edenhofer)

See https://gitlab.mpcdf.mpg.de/ift/nifty/-/merge_requests/702#note_111326.

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/348
Pipeline broken? (2022-01-27T11:26:59Z, Jakob Roth)

It seems to me that our pipeline is broken. The check_no_assert step fails for !734, although no assert was introduced.

https://gitlab.mpcdf.mpg.de/ift/resolve/-/issues/5
Implement MPI-parallel cumsum (2022-01-19T08:08:59Z, Martin Reinecke)

Given: every task has an array `a(:,freq)`, where the last axis is distributed over tasks.
Wanted: `suma`, where `suma(:,i) = sum(a[:,:i])`.
Possible approach:
- every task computes `suma_loc = np.cumsum(a, axis=-1)`
- the "last" entries are gathered on all tasks: `tmp = np.array(comm.allgather(suma_loc[:,-1]))`
- compute the local offset to add to `suma_loc`: `tmp = np.sum(tmp[:,:rank], axis=-1)`
- add it to `suma_loc`: `suma = suma_loc + tmp`

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/347
More MPI Bugs (2022-02-01T08:47:37Z, Jakob Roth)

When I execute the getting_started_3.py script with MPI (`mpiexec -n 2 python getting_started_3.py`), I get the following error message:
```
Traceback (most recent call last):
File "/home/jakob/nifty/demos/getting_started_3.py", line 163, in <module>
main()
File "/home/jakob/nifty/demos/getting_started_3.py", line 153, in main
[pspec.force(mock_position), samples.average(logspec).exp()],
File "/home/jakob/nifty/nifty8/minimization/sample_list.py", line 306, in average
return utilities.allreduce_sum(res, self.comm) / n
File "/home/jakob/nifty/nifty8/utilities.py", line 371, in allreduce_sum
vals[j] = vals[j] + comm.recv(source=who[j+step])
File "/home/jakob/nifty/nifty8/field.py", line 726, in func2
return self._binary_op(other, op)
File "/home/jakob/nifty/nifty8/field.py", line 689, in _binary_op
utilities.check_object_identity(other._domain, self._domain)
File "/home/jakob/nifty/nifty8/utilities.py", line 419, in check_object_identity
raise ValueError(f"Mismatch:\n{obj0}\n{obj1}")
ValueError: Mismatch:
DomainTuple, len: 1
* PowerSpace(harmonic_partner=RGSpace(shape=(128, 128), distances=(1.0, 1.0), harmonic=True), binbounds=None)
DomainTuple, len: 1
* PowerSpace(harmonic_partner=RGSpace(shape=(128, 128), distances=(1.0, 1.0), harmonic=True), binbounds=None)
```
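A guess at what is going on (not verified): `allreduce_sum` combines fields received via `comm.recv`, and unpickling produces a new domain object that compares equal to the local one but is not identical, so the strict `check_object_identity` fails. A minimal illustration of equality vs. identity after a pickle round-trip (plain Python, `Domain` is a stand-in, not the NIFTy class):

```python
import pickle

class Domain:
    # Stand-in for a NIFTy domain: value equality, no identity caching.
    def __init__(self, shape):
        self.shape = shape
    def __eq__(self, other):
        return isinstance(other, Domain) and self.shape == other.shape
    def __hash__(self):
        return hash(self.shape)

a = Domain((128, 128))
b = pickle.loads(pickle.dumps(a))  # what comm.recv effectively hands back

print(a == b)  # True: the domains compare equal...
print(a is b)  # False: ...but they are not the identical object
```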
When executing without MPI, I get no error.

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/346
Nifty installation (2022-11-25T17:58:16Z, Vincent Eberle)

Installation Issues
-------------------
Jakob Roth and I found a concerning issue.
Recently it has not been possible to run `pip install -e .`; this returns the error shown below [1].
The installation works with `pip install .`, however.
You can try to reproduce it by uninstalling nifty and reinstalling it.
Does someone know why this is happening?
@jroth @parras @mtr @gedenhof
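Not a diagnosis, but two generic things worth trying for this class of editable-install failures (a sketch, untested for this particular setup; the `KeyError: 'unix_user'` below comes from the `easy_install`-based `develop` command running inside pip's isolated build environment):

```shell
# 1) Upgrade the build tooling (commented out: modifies your environment):
#    python3 -m pip install --upgrade pip setuptools

# 2) Do the editable install without the isolated build environment:
#    python3 -m pip install --no-build-isolation -e .

# Check that your pip version knows the flag before trying option 2:
python3 -m pip install --help | grep -- --no-build-isolation
```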
[1]
ERROR: Command errored out with exit status 1:
command: /usr/bin/python3 -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/home/veberle/MPA/nifty/setup.py'"'"'; __file__='"'"'/home/veberle/MPA/nifty/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' develop --no-deps --user --prefix=
cwd: /home/veberle/MPA/nifty/
Complete output (28 lines):
running develop
/tmp/pip-build-env-9z5pmolp/overlay/lib/python3.8/site-packages/setuptools/command/easy_install.py:156: EasyInstallDeprecationWarning: easy_install command is deprecated. Use build and pip and other standards-based tools.
warnings.warn(
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/veberle/MPA/nifty/setup.py", line 33, in <module>
setup(name="nifty8",
File "/tmp/pip-build-env-9z5pmolp/overlay/lib/python3.8/site-packages/setuptools/__init__.py", line 153, in setup
return distutils.core.setup(**attrs)
File "/tmp/pip-build-env-9z5pmolp/overlay/lib/python3.8/site-packages/setuptools/_distutils/core.py", line 148, in setup
return run_commands(dist)
File "/tmp/pip-build-env-9z5pmolp/overlay/lib/python3.8/site-packages/setuptools/_distutils/core.py", line 163, in run_commands
dist.run_commands()
File "/tmp/pip-build-env-9z5pmolp/overlay/lib/python3.8/site-packages/setuptools/_distutils/dist.py", line 967, in run_commands
self.run_command(cmd)
File "/tmp/pip-build-env-9z5pmolp/overlay/lib/python3.8/site-packages/setuptools/_distutils/dist.py", line 985, in run_command
cmd_obj.ensure_finalized()
File "/tmp/pip-build-env-9z5pmolp/overlay/lib/python3.8/site-packages/setuptools/_distutils/cmd.py", line 107, in ensure_finalized
self.finalize_options()
File "/tmp/pip-build-env-9z5pmolp/overlay/lib/python3.8/site-packages/setuptools/command/develop.py", line 52, in finalize_options
easy_install.finalize_options(self)
File "/tmp/pip-build-env-9z5pmolp/overlay/lib/python3.8/site-packages/setuptools/command/easy_install.py", line 263, in finalize_options
self._fix_install_dir_for_user_site()
File "/tmp/pip-build-env-9z5pmolp/overlay/lib/python3.8/site-packages/setuptools/command/easy_install.py", line 375, in _fix_install_dir_for_user_site
self.select_scheme(scheme_name)
File "/tmp/pip-build-env-9z5pmolp/overlay/lib/python3.8/site-packages/setuptools/command/easy_install.py", line 716, in select_scheme
scheme = INSTALL_SCHEMES[name]
KeyError: 'unix_user'
----------------------------------------
ERROR: Can't roll back nifty8; was not uninstalled
ERROR: Command errored out with exit status 1: /usr/bin/python3 -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/home/veberle/MPA/nifty/setup.py'"'"'; __file__='"'"'/home/veberle/MPA/nifty/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' develop --no-deps --user --prefix= Check the logs for full command output.
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/345
sample list average crashes for mpi run with map estimator (2022-01-27T11:18:26Z, Jakob Roth)

When computing a MAP estimate with multiple MPI tasks, the sample list average method crashes.
Here is example code that works if executed normally with Python but crashes in an MPI run:
```python
import numpy as np
import nifty8 as ift
ift.random.push_sseq_from_seed(27)
try:
    from mpi4py import MPI
    comm = MPI.COMM_WORLD
    master = comm.Get_rank() == 0
except ImportError:
    comm = None
    master = True
position_space = ift.RGSpace([128, 128])
op = ift.makeOp(ift.full(position_space, 10.))
noise = 0.1
N = ift.ScalingOperator(position_space, noise, np.float64)
mock_position = ift.from_random(op.domain)
data = op(mock_position) + N.draw_sample()
lh = ift.GaussianEnergy(mean=data, inverse_covariance=N.inverse) @ op
ic_sampling = ift.AbsDeltaEnergyController(
    name="Sampling (linear)", deltaE=0.05, iteration_limit=10
)
ic_newton = ift.AbsDeltaEnergyController(
    name="Newton", deltaE=0.5, convergence_level=2, iteration_limit=5
)
minimizer = ift.NewtonCG(ic_newton)
def callback(samples, i):
    plot = ift.Plot()
    mean = samples.average(op)
    plot.add(mean, title="Reconstruction", zmin=0, zmax=1)
    if master:
        plot.output()
n_iterations = 3
n_samples = lambda iiter: 0 if iiter < 1 else 2
samples = ift.optimize_kl(
    lh,
    n_iterations,
    n_samples,
    minimizer,
    ic_sampling,
    None,
    overwrite=True,
    comm=comm,
    callback=callback,
)
```

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/344
Unexpected behaviour of `MultiField.scale` (2022-01-31T18:02:56Z, Philipp Arras <parras@mpa-garching.mpg.de>)

The setup:
```python
import nifty8 as ift
import numpy as np
dom = ift.UnstructuredDomain([1])
mdom = {"a": dom, "b": dom}
fld = ift.from_random(mdom)
fld2 = ift.from_random(mdom)
```
I would expect the following line to fail since the argument of scale is not a scalar.
```python
fld3 = fld.scale(fld2)
```
But it doesn't.
Does it at least compute something sensible?
```python
for kk in mdom.keys():
    assert np.all(fld[kk].val * fld2[kk].val == fld3[kk].val)
```
No.

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/343
`Variational inference visualized` is too slow (2021-12-08T22:22:56Z, Philipp Arras <parras@mpa-garching.mpg.de>)

Running the file https://gitlab.mpcdf.mpg.de/ift/nifty/-/blob/NIFTy_8/demos/variational_inference_visualized.py is so slow that it is not very instructive anymore.

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/342
Try to replace FITS I/O where possible (2021-12-06T08:51:12Z, Martin Reinecke)

Just a general remark for future I/O changes: please let's stop using FITS (and maybe even replace the FITS format in places where we don't need to be interoperable with anything else). This format really is incredibly old and dusty. Anything with better metadata support should be fine, probably HDF5 (which we depend on anyway).

https://gitlab.mpcdf.mpg.de/ift/resolve/-/issues/4
Calibration Operator in Imaging Likelihood (2021-12-01T08:35:37Z, Jakob Roth)

The resolve imaging likelihood accepts a calibration operator as a keyword argument (https://gitlab.mpcdf.mpg.de/ift/resolve/-/blob/devel/resolve/likelihood.py#L107), but as far as I can see this calibration operator is never used.
I think what should be there is something like:
```
model_data = cal_op * R @ sky
res = data - model_data
lh = 0.5 * res^dagger @ inv_cov @ res
```
@parras, do you see a mistake here? If not, I could implement this fix.

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/341
Correlated Field model, `total_N != 0`: Sample statistic is not correct (2021-12-08T10:00:54Z, Philipp Arras <parras@mpa-garching.mpg.de>)

If I compare prior samples generated by `SimpleCorrelatedField` and `CorrelatedFieldMaker` with `total_N > 0`, the statistics look different.
```python
import nifty8 as ift
offset_mean = 0.
offset_std = None
fluctuations = (0.1, 0.05)
loglogslope = (-4, 1)
dom = ift.RGSpace(2625, 20)
op = ift.SimpleCorrelatedField(dom, offset_mean, offset_std, fluctuations, None, None, loglogslope)
ift.single_plot([op(ift.from_random(op.domain)) for _ in range(20)], name="cfm_single.png")
N = 1
cfm = ift.CorrelatedFieldMaker("", N)
cfm.add_fluctuations(dom, fluctuations, None, None, loglogslope)
cfm.set_amplitude_total_offset(offset_mean, offset_std)
op = cfm.finalize(0)
prior_samples = [op(ift.from_random(op.domain)) for _ in range(10)]
p = ift.Plot()
for ii in range(N):
    p.add([ift.DomainTupleFieldInserter(pp.domain, 0, (ii,)).adjoint(pp) for pp in prior_samples])
p.output(name="cfm_multi.png")
```
Prior samples for SimpleCorrelatedField:
![cfm_single](/uploads/3f9db21b292bccab96ec48c94e3296c6/cfm_single.png)
Prior samples for non-trivial CorrelatedFieldMaker with `total_N = 1`:
![cfm_multi](/uploads/564ca8466f3e955ab7218bfe8ff8149f/cfm_multi.png)
ping: @jroth

https://gitlab.mpcdf.mpg.de/ift/public/dtwin/-/issues/1
RWDatabase has bad performance (2021-11-25T22:25:29Z, Max-Niklas Newrzella)

The database interface implementation `RWDatabase` slows down the simulation considerably, if used.
This is, at least in part, due to the usage of SQLAlchemy's ORM.
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/339
optimize_kl: Improvement list (2021-11-26T11:52:41Z, Philipp Arras <parras@mpa-garching.mpg.de>)

- [x] Fix sample plot with ground truth
- [x] Option for storing all intermediate latent positions
- [x] String of model and domain to hdf5
- [x] Save nifty random state
- [x] Pseudo code of optimize_kl into doc string
- [x] plottable spelling?
- [x] Check documentation on callback (iglobal)

https://gitlab.mpcdf.mpg.de/ift/resolve/-/issues/3
setup.py: multi-line descriptions are not allowed since 59.1.0 (2021-12-01T08:37:32Z, Philipp Arras <parras@mpa-garching.mpg.de>)

This has to be fixed in line 104 of `setup.py`. I suggest that you, @mtr, do this yourself, but if you like I can make a proposal.

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/338
[New feature] Add tensorflow operator (2021-10-29T15:38:45Z, Philipp Arras <parras@mpa-garching.mpg.de>)

https://gitlab.mpcdf.mpg.de/ift/deepreasoning/-/blob/master/operators/tensorflow_operator.py

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/337
Possible error in assert_equal and assert_allclose for MultiFields (2021-09-24T08:03:42Z, Philipp Frank)

In their current form, both methods `assert_equal` and `assert_allclose` perform only a one-sided test for `MultiField`s. This leads to unexpected behaviour when comparing two fields which are equal only on some sub-domain. In particular, the following test
```python
import nifty8 as ift
d = ift.RGSpace(10)
d = ift.MultiDomain.make({'a' : d, 'b' : d})
f = ift.from_random(d)
fp = f.extract_by_keys(['a',])
ift.extra.assert_equal(fp, f)
```
will pass, but
```python
ift.extra.assert_equal(f, fp)
```
produces an error, as only the keys of the first input are looped over. The same goes for `assert_allclose`.
This does not look like intended behaviour to me. @mtr, @parras, am I missing something here?
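A two-sided variant would loop over the union of keys. Sketched here with plain dicts of NumPy arrays standing in for `MultiField`s (the function name is made up):

```python
import numpy as np

def assert_equal_symmetric(f1, f2):
    # Loop over the union of keys, so a key missing on either side is
    # reported regardless of the argument order.
    for key in sorted(set(f1) | set(f2)):
        if key not in f1 or key not in f2:
            raise AssertionError(f"key {key!r} present on one side only")
        np.testing.assert_array_equal(f1[key], f2[key])

a = {"a": np.arange(10), "b": np.ones(10)}
b = {"a": np.arange(10)}  # equal to `a` on the sub-domain {"a"}

assert_equal_symmetric(a, a)  # passes
for args in [(a, b), (b, a)]:
    try:
        assert_equal_symmetric(*args)
    except AssertionError:
        pass  # fails in both argument orders, as it should
```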