ift issues (GitLab issue feed of the ift group, updated 2020-05-19T08:02:40Z)
https://gitlab.mpcdf.mpg.de/groups/ift/-/issues

---

Issue #298: Tests are taking too long (Martin Reinecke, updated 2020-05-19)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/298

On my laptop, the NIFTy regression tests take almost 10 minutes at the moment, which starts to get really problematic during development. Is there a chance of reducing test sizes and/or avoiding redundancies in the current tests?

The biggest problem by far is test_correlated_fields.py, which accounts for roughly 40% of the testing time. Next is test_jacobian.py with 20%, followed by test_energy_gradients.py.
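One way to cut redundancy is to run a fixed, seeded random subset of a parametrized test's Cartesian product instead of the full grid. A sketch of the general idea (not code from the NIFTy test suite; all names are illustrative):

```python
import itertools
import random

# Hypothetical parameter grid of a heavy test.
spaces = ["rg1", "rg2", "hp", "gl"]
dtypes = ["float64", "complex128"]
seeds = range(5)

full_grid = list(itertools.product(spaces, dtypes, seeds))

# Deterministically keep ~25% of the combinations; a fixed seed keeps
# the selection reproducible between CI runs.
rng = random.Random(0)
subset = rng.sample(full_grid, k=len(full_grid) // 4)
print(len(full_grid), len(subset))  # 40 10
```

The full grid can still be exercised in a nightly job, while the sampled subset keeps the developer-facing run fast.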
cc @parras, @pfrank, @phaim

---

Issue #299: Add possibility to log energies (Philipp Arras <parras@mpa-garching.mpg.de>, updated 2020-05-19)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/299

I have implemented a method to log energies inside an IterationController so that they can be analyzed afterwards. @mtr, how do you like this implementation? Do you think we could/should make this part of NIFTy? Or do you have other ideas for how to approach the problem?
https://gitlab.mpcdf.mpg.de/ift/papers/rickandresolve/-/commit/8b0e15dc4a533dfba17f29361345309e336dfb8a
(Since this is part of an unpublished project, this link is only visible for ift group members. Sorry!)
cc @veberle @mtr

---

Issue #300: Streamline argument positions (Gordian Edenhofer, updated 2020-05-19)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/300

Switch positional arguments around to get a consistent position of e.g. `domain` within the positional arguments of all operators.

Affected operators are:

* `ift.from_random`
* `ift.OuterProduct`

@lerou You said that you are keeping a list of operators that annoy you by their inconsistency. Could you please amend that list with such cases?

---

Issue #301: Broken GaussianEnergy (Gordian Edenhofer, updated 2020-05-19)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/301

Sampling with a Gaussian energy and no mean was broken in one of the recent merge requests.
Applying the following patch:
```patch
diff --git a/demos/getting_started_3.py b/demos/getting_started_3.py
index 2dcffa17..c053e0a0 100644
--- a/demos/getting_started_3.py
+++ b/demos/getting_started_3.py
@@ -109,8 +109,7 @@ if __name__ == '__main__':
minimizer = ift.NewtonCG(ic_newton)
# Set up likelihood and information Hamiltonian
- likelihood = (ift.GaussianEnergy(mean=data, inverse_covariance=N.inverse) @
- signal_response)
+ likelihood = ift.GaussianEnergy(inverse_covariance=N.inverse) @ ift.Adder(data, neg=True) @ signal_response
H = ift.StandardHamiltonian(likelihood, ic_sampling)
initial_mean = ift.MultiField.full(H.domain, 0.)
```
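For reference, the two formulations should agree algebraically: `GaussianEnergy(mean=d)` evaluates `0.5 * (Rx - d)^T N^{-1} (Rx - d)` (up to x-independent constants), and so does a zero-mean `GaussianEnergy` chained with `Adder(d, neg=True)`. A pure-numpy sketch of that equivalence (stand-in arrays, not NIFTy objects):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 7
data = rng.normal(size=n)
icov = np.diag(rng.uniform(1.0, 2.0, size=n))  # stand-in for N.inverse
resp = rng.normal(size=(n, n))                 # stand-in for signal_response
x = rng.normal(size=n)

def gaussian_energy(residual, icov):
    # 0.5 * r^T N^{-1} r, the Gaussian energy up to constants
    return 0.5 * residual @ icov @ residual

# Old formulation: GaussianEnergy(mean=data) @ signal_response
e_old = gaussian_energy(resp @ x - data, icov)
# New formulation: GaussianEnergy() @ Adder(data, neg=True) @ signal_response
e_new = gaussian_energy((resp @ x) + (-data), icov)

print(np.isclose(e_old, e_new))  # True
```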
The patched demo then fails, even though it should do exactly the same thing as before.

---

Issue #302: Invalid call to inverse (Gordian Edenhofer, updated 2020-06-24)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/302

Why is NIFTy calling `.inverse` of an operator that definitely does not support it?
```ipython
In [40]: n2f_jac
Out[40]:
ChainOperator:
DiagonalOperator
OuterProduct
In [41]: n2f_jac.domain
Out[41]:
DomainTuple:
HPSpace(nside=16)
In [42]: n2f_jac.target
Out[42]:
DomainTuple:
UnstructuredDomain(shape=(30,))
HPSpace(nside=16)
In [43]: n2f_jac.capability
Out[43]: 3
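# (Aside: capability == 3 presumably decodes to TIMES (1) | ADJOINT_TIMES (2);
# the INVERSE_TIMES (4) and ADJOINT_INVERSE_TIMES (8) bits are not set, which
# is exactly why the inverse requested below cannot be provided. Bit values
# are my assumption based on LinearOperator's capability flags.)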
In [44]: ift.extra.consistency_check(n2f_jac)
~/Projects/nifty/nifty6/extra.py in consistency_check(op, domain_dtype, target_dtype, atol, rtol, only_r_linear)
201 _actual_domain_check_linear(op, domain_dtype)
202 _actual_domain_check_linear(op.adjoint, target_dtype)
--> 203 _actual_domain_check_linear(op.inverse, target_dtype)
~/Projects/nifty/nifty6/operators/linear_operator.py in inverse(self)
86 Returns a LinearOperator object which behaves as if it were
87 the inverse of this operator."""
---> 88 return self._flip_modes(self.INVERSE_BIT)
~/Projects/nifty/nifty6/operators/chain_operator.py in _flip_modes(self, trafo)
123 if trafo == ADJ or trafo == INV:
124 return self.make(
--> 125 [op._flip_modes(trafo) for op in reversed(self._ops)])
~/Projects/nifty/nifty6/operators/chain_operator.py in <listcomp>(.0)
123 if trafo == ADJ or trafo == INV:
124 return self.make(
--> 125 [op._flip_modes(trafo) for op in reversed(self._ops)])
~/Projects/nifty/nifty6/operators/diagonal_operator.py in _flip_modes(self, trafo)
149 if trafo & self.INVERSE_BIT:
--> 150             xdiag = 1./xdiag  # This operator contains zeros on one axis
FloatingPointError: divide by zero encountered in true_divide
In [45]: n2f_jac(r).val
Out[45]:
array([[ 2.87943447e-02, 2.84809183e-02, 2.86276722e-02, ...,
2.85864172e-02, 2.88285874e-02, 2.88791209e-02],
...,
[ 0.00000000e+00, 1.46037080e-16, 0.00000000e+00, ...,
1.43154658e-16, -1.34704751e-16, 0.00000000e+00], # zero except for numerical fluctuations
[ 3.18092631e-02, 3.11583354e-02, 3.14395536e-02, ...,
3.13576202e-02, 3.18985295e-02, 3.20487145e-02]])
```

---

Issue #303: Jacobian consistency check broken for CorrelatedField with N > 1 (Philipp Arras, updated 2020-05-22; assignee: Philipp Haim)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/303

If the return statement here
https://gitlab.mpcdf.mpg.de/ift/nifty/-/blob/NIFTy_6/test/test_operators/test_correlated_fields.py#L92
is removed, the Jacobian tests fail for the individual amplitudes and for the whole model.

---

Issue #304: Broken Jacobian (Philipp Arras, updated 2020-06-02)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/304

I have discovered an example in which the Jacobian test breaks. I do not see the reason (yet?) why it should not work.
```
import numpy as np
import nifty6 as ift
dom = ift.UnstructuredDomain(10)
op = (1.j*(ift.ScalingOperator(dom, 1.).log()).imag).exp()
loc = ift.from_random(op.domain, dtype=np.complex128) + 5
ift.extra.check_jacobian_consistency(op, loc)
```
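A plausible reading (my own analysis, not from the thread): the chain computes `exp(1j*imag(log(z)))`, i.e. z ↦ z/|z|, which is not holomorphic, so no complex-linear Jacobian can be consistent with finite differences taken in every direction. A pure-numpy check:

```python
import numpy as np

def f(z):
    # the same map as the NIFTy chain above: exp(1j*imag(log(z))) == z/|z|
    return np.exp(1j * np.imag(np.log(z)))

z0 = 5.0 + 0.0j
h = 1e-6
d_real = (f(z0 + h) - f(z0)) / h              # derivative along the real axis
d_imag = (f(z0 + 1j * h) - f(z0)) / (1j * h)  # derivative along the imaginary axis
print(d_real, d_imag)  # ~0 versus ~0.2: the directional derivatives disagree
```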
Note the `+ 5`, which makes sure that the branch cut of the logarithm is avoided.

---

Issue #305: MPI summation routine in global NIFTy namespace? (Lukas Platz, updated 2020-05-30)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/305

Currently, `nifty6.minimization.metric_gaussian_kl` contains valuable MPI routines like the summation routine of !512 and `_shareRange`.
I am currently working on an MPI operator that uses MPI allreduce summation, and I have been importing `_allreduce_sum_field` and `_shareRange` directly from the KL file for development purposes.
Now that !512 removed the former, should we make the new summation routine general again and include it in the global namespace?
@mtr: would that be feasible without a large effort?

---

Issue #306: Complex samples are inconsistent (Reimar H. Leike, updated 2020-06-19; assignee: Philipp Frank)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/306

When we draw samples with complex `dtype`, we multiply their magnitude by `sqrt(0.5)`.
This is done because then a sample from a variance of 1 also has standard deviation of 1 on average.
However, this is inconsistent with its Hamiltonian.
If we instead regard a complex number as a pair of real numbers, then in order to reproduce the sample variance, we would need the vector (real, imag) to have covariance
```
/ 0.5   0  \
\  0   0.5 /
```
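The mismatch can be checked numerically; a pure-numpy sketch (the quadratic forms below are my paraphrase of the two Hamiltonians, written without the usual 1/2):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
# Samples as drawn for complex dtype: real and imaginary part each with variance 0.5.
z = np.sqrt(0.5) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
print(z.var())  # ~1.0: the intended unit sample variance

# Quadratic form of the (real, imag) pair under covariance diag(0.5, 0.5) ...
h_pair = 2.0 * (z.real**2 + z.imag**2)
# ... versus the complex quadratic form with covariance 1:
h_complex = np.abs(z)**2
print((h_pair / h_complex).mean())  # ≈ 2: the pair Hamiltonian doubles the complex one
```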
However, this covariance would lead to the Hamiltonian `(real**2 + imag**2)*2`, which is double what we would get when using the complex likelihood with covariance 1.

---

Issue #62: FFTOperator fails for zerocenter=True spaces (Jait Dixit, updated 2018-04-02)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/62

Both scenarios fail:
```
from nifty import *
x = RGSpace((16,), zerocenter=True)
f = Field((x,x), val=1)
fft = FFTOperator(x)
fft(f, spaces=(1,))
```
```
from nifty import *
x = RGSpace((16,), zerocenter=True)
f = Field((x,), val=1)
fft = FFTOperator(x)
fft(f)
```

---

Issue #60: Configuration for FFT during (de)briefing (Theo Steininger, updated 2016-10-25)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/60

Make it optional whether the briefing should try to (FFT-)transform the input. This would let the user ensure that no unwanted FFT takes place anywhere.

---

Issue #8: Profile the d2o.bincount method (Theo Steininger, updated 2016-05-26)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/8

The d2o.bincount method scales well with MPI parallelization but, compared to single-core np.bincount, has a rather big overhead.

---

Issue #3: d2o: _contraction_helper does not work when using numpy keyword arguments (Theo Steininger, updated 2017-09-20)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/3

The `_contraction_helper` passes keyword arguments to the underlying numpy functions (`axis=`, `keepdims=`). The result of the `_contraction_helper`'s local computation is then an array and not a scalar, so the dtype check fails.
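The underlying numpy behaviour, as a plain-numpy illustration (no d2o needed):

```python
import numpy as np

a = np.arange(12).reshape(3, 4)

full = a.sum()                       # 0-d scalar result
col = a.sum(axis=0)                  # 1-d array: one partial sum per column
kept = a.sum(axis=0, keepdims=True)  # keeps the reduced axis with length 1
print(full, col, kept.shape)  # 66 [12 15 18 21] (1, 4)
```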
Fix: after solving theos/NIFTy#2, adapt to the case that the local run's result object is an array, and distinguish further cases, e.g. for something like `axis=0` with the slicing_distributor.

---

Issue #14: The d2o_librarian will fail when mixing different MPI comms (Theo Steininger, updated 2016-05-26)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/14

Every local librarian instance on a node of an MPI cluster just increments its internal counter by one when a new d2o is registered. This gets out of sync when only a part of the full cluster is covered by a special comm.
Possible solution: the individual librarians store the id of 'their' d2o and communicate a common id for their dictionary.

Con: involves MPI communication.

---

Issue #13: Add support for `from array` indexing (Theo Steininger, updated 2016-05-26)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/13

When building the kdict from pindex and kindex, something of the following form must be done (a == kindex, b == pindex):
```
a = np.arange(16)*2
b = np.array([[3,2],[1,0]])

In [1]: a[b]
Out[1]:
array([[6, 4],
       [2, 0]])
```

Currently, this is solved using a hack:

```
p.apply_scalar_function(lambda z: obj[z])
```
This functionality could easily be added to the get_data interface.
---

Issue #10: Semi-advanced indexing is not recognized (Theo Steininger, updated 2016-05-26)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/10

```
a = np.arange(24).reshape((3, 4, 2))
obj = distributed_data_object(a)
```

Semi-advanced indexing

```
a[(2,1,1), 1]
```

yields

```
array([[18, 19],
       [10, 11],
       [10, 11]])
```

The 'indexinglist' scheme in d2o expects either scalars or numpy arrays as tuple elements, and therefore:
```
obj[(2,1,1), 1]   # -> AttributeError
```

However,

```
obj[np.array((2,1,1)), 1]
```

works.
Solution: parse the elements and, when in doubt, cast them to numpy arrays.
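The proposed parsing could look like this (a sketch; `normalize_index` is my name, not d2o's):

```python
import numpy as np

def normalize_index(idx):
    # Cast non-scalar elements of a tuple index to numpy arrays,
    # leaving scalars untouched.
    if not isinstance(idx, tuple):
        idx = (idx,)
    return tuple(e if np.isscalar(e) else np.asarray(e) for e in idx)

a = np.arange(24).reshape((3, 4, 2))
print(a[normalize_index(((2, 1, 1), 1))])
# matches a[np.array((2, 1, 1)), 1]:
# [[18 19]
#  [10 11]
#  [10 11]]
```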
---

Issue #22: Add .all() and .any() methods to field (Theo Steininger, updated 2016-07-04)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/22

Pass through to d2o's methods.

---

Issue #36: No user access to files in ~/.nifty (Theo Steininger, updated 2016-07-22)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/36

If setuptools is run with sudo privileges, the files are not changeable by the user.
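One generic way to make such files user-accessible again, as an illustrative sketch (not NIFTy's actual setup code; the file name is made up):

```python
import os
import stat
import tempfile

# Write a stand-in config file, then set its mode to 0644 (user read/write,
# group/other read), regardless of the umask in effect when it was created.
path = os.path.join(tempfile.mkdtemp(), "nifty_config")
with open(path, "w") as fh:
    fh.write("# settings\n")
os.chmod(path, stat.S_IRUSR | stat.S_IWUSR | stat.S_IRGRP | stat.S_IROTH)
print(oct(os.stat(path).st_mode & 0o777))  # 0o644
```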
Fix: make chmod during setup.

---

Issue #34: Add fft_machine keyword to rg_space.copy() (Theo Steininger, updated 2018-04-02)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/34

---

Issue #11: space casting fails for lower dimensional arrays (Theo Steininger, updated 2016-06-30)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/11
```
In [26]: x = rg_space((6,6))

In [27]: x.cast(np.arange(6))
Out[27]:
<distributed_data_object>
array([ 0.,  1.,  2.,  3.,  4.,  5.])
```

See line 1167 in nifty_core.py.