ift issues (https://gitlab.mpcdf.mpg.de/groups/ift/-/issues)

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/345
sample list average crashes for MPI run with MAP estimator (2022-01-27, Jakob Roth)

When computing a MAP estimate with multiple MPI tasks, the sample list average method crashes.
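For context, here is a plain-Python sketch (all names hypothetical, no nifty or mpi4py required) of the kind of distributed averaging the sample list has to support; a MAP run produces zero samples on some or all tasks, which is exactly the corner a naive local mean would crash on:

```python
def allreduce_average(local_samples, comm=None):
    """Average a list of samples that may be spread over MPI tasks.

    With comm=None this degenerates to a plain average; with a
    communicator, partial sums and counts are combined via allreduce.
    A MAP run can leave local_samples empty on some tasks, so the
    count must be reduced globally before dividing.
    """
    local_sum = sum(local_samples) if local_samples else 0.0
    local_cnt = len(local_samples)
    if comm is not None:
        local_sum = comm.allreduce(local_sum)
        local_cnt = comm.allreduce(local_cnt)
    if local_cnt == 0:
        raise ValueError("no samples on any task")
    return local_sum / local_cnt

print(allreduce_average([1.0, 2.0, 3.0]))  # → 2.0
```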
Here is example code that runs fine under plain Python but crashes when executed as an MPI run:
```python
import numpy as np
import nifty8 as ift
ift.random.push_sseq_from_seed(27)
try:
    from mpi4py import MPI
    comm = MPI.COMM_WORLD
    master = comm.Get_rank() == 0
except ImportError:
    comm = None
    master = True
position_space = ift.RGSpace([128, 128])
op = ift.makeOp(ift.full(position_space, 10.))
noise = 0.1
N = ift.ScalingOperator(position_space, noise, np.float64)
mock_position = ift.from_random(op.domain)
data = op(mock_position) + N.draw_sample()
lh = ift.GaussianEnergy(mean=data, inverse_covariance=N.inverse) @ op
ic_sampling = ift.AbsDeltaEnergyController(
    name="Sampling (linear)", deltaE=0.05, iteration_limit=10
)
ic_newton = ift.AbsDeltaEnergyController(
    name="Newton", deltaE=0.5, convergence_level=2, iteration_limit=5
)
minimizer = ift.NewtonCG(ic_newton)
def callback(samples, i):
    plot = ift.Plot()
    mean = samples.average(op)
    plot.add(mean, title="Reconstruction", zmin=0, zmax=1)
    if master:
        plot.output()
n_iterations = 3
n_samples = lambda iiter: 0 if iiter < 1 else 2
samples = ift.optimize_kl(
    lh,
    n_iterations,
    n_samples,
    minimizer,
    ic_sampling,
    None,
    overwrite=True,
    comm=comm,
    callback=callback,
)
```

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/344
Unexpected behaviour of `MultiField.scale` (2022-01-31, Philipp Arras)

The setup:
```python
import nifty8 as ift
import numpy as np
dom = ift.UnstructuredDomain([1])
mdom = {"a": dom, "b": dom}
fld = ift.from_random(mdom)
fld2 = ift.from_random(mdom)
```
I would expect the following line to fail, since the argument of `scale` is not a scalar.
```python
fld3 = fld.scale(fld2)
```
But it doesn't.
Does it at least compute something sensible?
```python
for kk in mdom.keys():
    assert np.all(fld[kk].val * fld2[kk].val == fld3[kk].val)
```
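Independent of what `fld3` contains, a plain-numpy sketch of the scalar-only behaviour one might expect from `scale` (hypothetical helper, not the nifty API):

```python
import numpy as np
from numbers import Number

def scale(fields: dict, factor) -> dict:
    # Hypothetical sketch: scale a dict of arrays by a scalar,
    # rejecting non-scalar factors instead of silently accepting them.
    if not isinstance(factor, Number):
        raise TypeError(f"scale expects a scalar, got {type(factor).__name__}")
    return {k: v * factor for k, v in fields.items()}

flds = {"a": np.ones(1), "b": 2 * np.ones(1)}
print(scale(flds, 3.0)["b"])  # → [6.]
```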
No.

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/343
`Variational inference visualized` is too slow (2021-12-08, Philipp Arras)

Running the file `https://gitlab.mpcdf.mpg.de/ift/nifty/-/blob/NIFTy_8/demos/variational_inference_visualized.py` is so slow that it is no longer very instructive.

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/342
Try to replace FITS I/O where possible (2021-12-06, Martin Reinecke)

Just a general remark for future I/O changes: please let's stop using FITS (and maybe even replace the FITS format in places where we don't need to be interoperable with anything else). This format really is incredibly old and dusty. Anything with better metadata support should be fine, probably HDF5 (which we depend on anyway).

https://gitlab.mpcdf.mpg.de/ift/resolve/-/issues/4
Calibration Operator in Imaging Likelihood (2021-12-01, Jakob Roth)

The resolve imaging likelihood accepts a calibration operator as a keyword argument ([https://gitlab.mpcdf.mpg.de/ift/resolve/-/blob/devel/resolve/likelihood.py#L107](https://gitlab.mpcdf.mpg.de/ift/resolve/-/blob/devel/resolve/likelihood.py#L107)), but as far as I can see this calibration operator is never used.
I think what should be there is something like:
```
model_data = cal_op * R @ sky
res = data - model_data
lh = 0.5 * res^dagger @ inv_cov @ res
```
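As a sanity check of the formula above (not resolve code — `R`, `cal`, `inv_cov` are stand-in numpy arrays), the proposed likelihood with the calibration applied to the model data:

```python
import numpy as np

rng = np.random.default_rng(0)
npix, ndata = 16, 8
R = rng.normal(size=(ndata, npix))        # stand-in response matrix
cal = rng.normal(size=ndata) ** 2 + 0.5   # stand-in calibration gains
inv_cov = np.eye(ndata) / 0.1             # stand-in inverse noise covariance
sky = rng.normal(size=npix)
data = cal * (R @ sky) + 0.1 * rng.normal(size=ndata)

model_data = cal * (R @ sky)              # calibration applied to model data
res = data - model_data
lh = 0.5 * res @ inv_cov @ res            # Gaussian log-likelihood (up to a constant)
print(lh >= 0.0)  # → True (inv_cov is positive definite)
```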
@parras, do you see a mistake here? If not, I could implement this fix.

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/341
Correlated Field model, `total_N != 0`: Sample statistic is not correct (2021-12-08, Philipp Arras)

If I compare prior samples generated by `SimpleCorrelatedField` and `CorrelatedFieldMaker` with `total_N > 0`, the statistics look different.
```
import nifty8 as ift
offset_mean = 0.
offset_std = None
fluctuations = (0.1, 0.05)
loglogslope = (-4, 1)
dom = ift.RGSpace(2625, 20)
op = ift.SimpleCorrelatedField(dom, offset_mean, offset_std, fluctuations, None, None, loglogslope)
ift.single_plot([op(ift.from_random(op.domain)) for _ in range(20)], name="cfm_single.png")
N = 1
cfm = ift.CorrelatedFieldMaker("", N)
cfm.add_fluctuations(dom, fluctuations, None, None, loglogslope)
cfm.set_amplitude_total_offset(offset_mean, offset_std)
op = cfm.finalize(0)
prior_samples = [op(ift.from_random(op.domain)) for _ in range(10)]
p = ift.Plot()
for ii in range(N):
    p.add([ift.DomainTupleFieldInserter(pp.domain, 0, (ii,)).adjoint(pp) for pp in prior_samples])
p.output(name="cfm_multi.png")
```
Prior samples for SimpleCorrelatedField:
![cfm_single](/uploads/3f9db21b292bccab96ec48c94e3296c6/cfm_single.png)
Prior samples for non-trivial CorrelatedFieldMaker with `N_total = 1`:
![cfm_multi](/uploads/564ca8466f3e955ab7218bfe8ff8149f/cfm_multi.png)
ping: @jroth

https://gitlab.mpcdf.mpg.de/ift/public/dtwin/-/issues/1
RWDatabase has bad performance (2021-11-25, Max-Niklas Newrzella)

The database interface implementation `RWDatabase` slows down the simulation considerably, if used. This is, at least in part, due to the usage of SQLAlchemy's ORM.

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/339
optimize_kl: Improvement list (2021-11-26, Philipp Arras)

- [x] Fix sample plot with ground truth
- [x] Option for storing all intermediate latent positions
- [x] String of model and domain to hdf5
- [x] Save nifty random state
- [x] Pseudo code of optimize_kl into doc string
- [x] plottable spelling?
- [x] Check documentation on callback (iglobal)

https://gitlab.mpcdf.mpg.de/ift/resolve/-/issues/3
setup.py: multi-line descriptions are not allowed since 59.1.0 (2021-12-01, Philipp Arras)

This has to be fixed in line 104 of `setup.py`. I suggest that you, @mtr, do this yourself, but if you like I can make a proposal.

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/338
[New feature] Add tensorflow operator (2021-10-29, Philipp Arras)

https://gitlab.mpcdf.mpg.de/ift/deepreasoning/-/blob/master/operators/tensorflow_operator.py

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/337
Possible error in assert_equal and assert_allclose for MultiFields (2021-09-24, Philipp Frank)

In their current form, both methods `assert_equal` and `assert_allclose` only perform a one-sided test for `MultiField`s. This leads to some unexpected behaviour when comparing two fields which are only equal on some sub-domain. In particular, the following test
```
import nifty8 as ift
d = ift.RGSpace(10)
d = ift.MultiDomain.make({'a' : d, 'b' : d})
f = ift.from_random(d)
fp = f.extract_by_keys(['a',])
ift.extra.assert_equal(fp, f)
```
will pass, but
```
ift.extra.assert_equal(f, fp)
```
produces an error, as only the keys of the first input are looped over. The same goes for `assert_allclose`.
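A sketch of the two-sided check this suggests, in plain Python/numpy with dicts standing in for `MultiField`s (`assert_equal_symmetric` is a hypothetical name, not the nifty API):

```python
import numpy as np

def assert_equal_symmetric(a: dict, b: dict):
    # Compare key sets first, so a strict subset fails in either
    # argument order, then compare the values key by key.
    if set(a.keys()) != set(b.keys()):
        raise AssertionError(f"domain mismatch: {set(a)} vs {set(b)}")
    for k in a:
        np.testing.assert_array_equal(a[k], b[k])

full = {"a": np.zeros(10), "b": np.zeros(10)}
sub = {"a": np.zeros(10)}
for x, y in ((sub, full), (full, sub)):
    try:
        assert_equal_symmetric(x, y)
    except AssertionError:
        print("mismatch detected")  # raised in both orders
```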
This does not look like intended behaviour to me. @mtr, @parras, am I missing something here?

https://gitlab.mpcdf.mpg.de/ift/resolve/-/issues/2
Bug: calibrator flux is probably computed incorrectly (log10 vs ln vs log2) (2021-09-08, Philipp Arras)

Connected to https://github.com/caracal-pipeline/caracal/issues/1354

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/336
Sampling dtypes in sandwich operators (2021-09-22, Philipp Arras)

I think (but I am not sure) that it is relatively important to fix the FIXME that @pfrank has put here:
https://gitlab.mpcdf.mpg.de/ift/nifty/-/blob/NIFTy_8/src/operators/sandwich_operator.py#L77

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/335
CorrelatedFieldMaker amplitude normalization breaks fluctuation amplitude (2021-08-24, Lukas Platz)

If one adds more than one amplitude operator to a `CorrelatedFieldMaker` and sets a very small/large offset standard deviation, the field fluctuation amplitude gets distorted.
This is because in `get_normalized_amplitudes()` every amplitude operator gets divided by `self.azm`, and the results of this function are then multiplied with an outer product to create the joint amplitude operator.
In CFs with more than one amplitude operator, this results in the joint amplitude operator being divided by multiple powers of the zero-mode operator, and thus in incorrect output scaling behaviour.
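The described error can be reproduced with a toy numpy sketch (illustrative stand-in numbers, not the CFM internals): dividing each of two amplitude factors by the zero mode and then taking the outer product divides the joint amplitude by the zero mode twice instead of once.

```python
import numpy as np

azm = 0.01                    # stand-in zero-mode value
a1 = np.array([2.0, 4.0])     # stand-in amplitude spectrum 1
a2 = np.array([3.0, 5.0])     # stand-in amplitude spectrum 2

joint_once = np.outer(a1, a2) / azm          # zero mode divided out once
joint_twice = np.outer(a1 / azm, a2 / azm)   # divided out once per factor

# The doubly normalized joint amplitude is off by an extra factor 1/azm:
assert np.allclose(joint_twice / joint_once, 1 / azm)
```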
To demonstrate this effect, I created correlated field operators for identical 2d fields, but one with a single amplitude operator for `ift.RGSpace((N, N))` and one with two amplitude operators on `ift.RGSpace(N)`. From top to bottom, I varied the `offset_std` setting passed to `set_amplitude_total_offset()` and in each case plotted a histogram of the sampled fields' standard deviations.
![output](/uploads/6779601e231148c36a77a91aa8e7fd62/bug.png)
One can clearly see that the output fluctuation amplitude is independent of `offset_std` for the 'single' case, as it should be, but not for the 'dual' case.
A fix for this is proposed in merge requests !669 and !670.
To reproduce, run the following code:
```
import nifty7 as ift
import matplotlib.pyplot as plt
fluct_pars = {
    'fluctuations': (1.0, 0.1),
    'flexibility': None,
    'asperity': None,
    'loglogavgslope': (-2.0, 0.2),
    'prefix': 'flucts_',
    'dofdex': None
}
n_offsets = 5
n_samples = 1000
dom_single = ift.RGSpace((25, 25))
dom_dual = ift.RGSpace(25)
fig, axs = plt.subplots(nrows=n_offsets, ncols=2, figsize=(12, 4 * n_offsets))
for i in range(n_offsets):
    cfmaker_single = ift.CorrelatedFieldMaker(prefix=f'{i}_')
    cfmaker_dual = ift.CorrelatedFieldMaker(prefix=f'{i}_')
    cfmaker_single.add_fluctuations(dom_single, **fluct_pars)
    cfmaker_dual.add_fluctuations(dom_dual, **fluct_pars)
    cfmaker_dual.add_fluctuations(dom_dual, **fluct_pars)
    offset_std = 10 ** -i
    zm_pars = {'offset_mean': 0.,
               'offset_std': (offset_std, offset_std / 10.)}
    cfmaker_single.set_amplitude_total_offset(**zm_pars)
    cfmaker_dual.set_amplitude_total_offset(**zm_pars)
    cf_single = cfmaker_single.finalize(prior_info=0)
    cf_dual = cfmaker_dual.finalize(prior_info=0)
    samples_single = [cf_single(ift.from_random(cf_single.domain)).val.std() for _ in range(n_samples)]
    samples_dual = [cf_dual(ift.from_random(cf_dual.domain)).val.std() for _ in range(n_samples)]
    _ = axs[i, 0].hist(samples_single, bins=20, density=True)
    _ = axs[i, 1].hist(samples_dual, bins=20, density=True)
    axs[i, 0].set_title(f'offset_std = {offset_std:1.0e}, single')
    axs[i, 1].set_title(f'offset_std = {offset_std:1.0e}, dual')
    axs[i, 0].set_xlabel('sample std')
    axs[i, 1].set_xlabel('sample std')
plt.tight_layout()
plt.show()
```
Any comments?

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/334
`MetricGaussianKL` changed its interface (there is no `.make`) but there is no corresponding changelog entry (2021-07-01, Gordian Edenhofer)

https://gitlab.mpcdf.mpg.de/ift/resolve/-/issues/1
Implement uniform weighting (2021-06-30, Philipp Arras)

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/333
New feature: Add flag to minisanity that disables terminal colors (2021-08-05, Philipp Arras)

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/332
Rename `Likelihood` -> `LogLikelihood` (2021-06-10, Philipp Arras)

Milestone: NIFTy7 release

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/331
Build html coverage report of both release and develop branch (2021-06-09, Philipp Arras)

Milestone: NIFTy7 release

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/330
Write guide how to define custom non-linearities (2021-06-10, Philipp Arras)

Milestone: NIFTy7 release