ift issues — https://gitlab.mpcdf.mpg.de/groups/ift/-/issues (feed updated 2017-01-20)

Issue #16: 32 bit linux crashes tests because of float128
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/16 (Ghost User, updated 2017-01-20)
The tests should include a check for the system version and not run the float128 tests on 32-bit Linux.

Issue #15: d2o cumsum and flatten rely on certain features of distribution strategy
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/15 (Theo Steininger, updated 2016-05-26)
cumsum and flatten assume that if the shape of the d2o changes through flattening, the distribution strategy was "slicing".

Issue #14: The d2o_librarian will fail when mixing different MPI comms
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/14 (Theo Steininger, updated 2016-05-26)
Every local librarian instance on a node of an MPI cluster increments its internal counter by one when a new d2o is registered. This gets out of sync when only a part of the full cluster is covered by a special comm.
Possible solution: the individual librarians store the id of 'their' d2o and communicate a common id for their dictionary.
Con: involves MPI communication.

Issue #13: Add support for `from array` indexing
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/13 (Theo Steininger, updated 2016-05-26)
When building the kdict from pindex and kindex, something of the following form must be done (a == kindex, b == pindex):
    a = np.arange(16)*2
    b = np.array([[3,2],[1,0]])

    In [1]: a[b]
    Out[1]:
    array([[6, 4],
           [2, 0]])

Currently, this is solved using a hack:

    p.apply_scalar_function(lambda z: obj[z])

This functionality could easily be added to the get_data interface.
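As a sketch of what such an interface could look like (the `get_data` name comes from the issue, but the signature below is hypothetical and operates on plain numpy arrays rather than a real distributed_data_object):

```python
import numpy as np

def get_data(data, key):
    # Hypothetical sketch of the proposed extension: accept lists/tuples
    # as index arrays (mirroring numpy's a[b]) in addition to scalars
    # and slices. The real d2o get_data signature may differ.
    if isinstance(key, (list, tuple)):
        key = np.asarray(key)
    return data[key]

kindex = np.arange(16) * 2            # plays the role of a
pindex = np.array([[3, 2], [1, 0]])   # plays the role of b
kdict = get_data(kindex, pindex)      # equivalent to kindex[pindex]
```

With this, the `apply_scalar_function` hack above would reduce to a single indexed lookup.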
Issue #12: Remove np support in point_space
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/12 (Theo Steininger, updated 2017-06-21)

Issue #11: space casting fails for lower dimensional arrays
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/11 (Theo Steininger, updated 2016-06-30)
Casting a shape-(6,) array to an rg_space((6,6)) does not yield a shape-(6,6) result:

    In [26]: x = rg_space((6,6))
    In [27]: x.cast(np.arange(6))
    Out[27]:
    <distributed_data_object>
    array([ 0., 1., 2., 3., 4., 5.])

See line 1167 in nifty_core.py.
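A minimal sketch of the missing validation (`checked_cast` is a made-up name; the real fix would live inside the space's cast machinery), assuming the desired behaviour is to reject shape mismatches instead of passing the array through:

```python
import numpy as np

def checked_cast(data, target_shape):
    # Hypothetical sketch: refuse input whose shape does not match the
    # target space's shape, rather than silently returning the
    # lower-dimensional array as in the issue above.
    data = np.asarray(data, dtype=np.float64)
    if data.shape != tuple(target_shape):
        raise ValueError("cannot cast shape %s to space shape %s"
                         % (data.shape, tuple(target_shape)))
    return data

checked_cast(np.zeros((6, 6)), (6, 6))    # accepted
# checked_cast(np.arange(6), (6, 6))      # would raise ValueError
```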
Issue #10: Semi-advanced indexing is not recognized
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/10 (Theo Steininger, updated 2016-05-26)

    a = np.arange(24).reshape((3, 4, 2))
    obj = distributed_data_object(a)

Semi-advanced indexing

    a[(2,1,1), 1]

yields

    array([[18, 19],
           [10, 11],
           [10, 11]])

The "indexinglist" scheme in d2o expects either scalars or numpy arrays as tuple elements, and therefore

    obj[(2,1,1), 1]  ->  AttributeError

However,

    obj[np.array((2,1,1)), 1]

works.

Solution: parse the elements and, when in doubt, cast them to numpy arrays.
Issue #9: Add "S_inv" keyword to propagator_operator
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/9 (Torsten Ensslin, updated 2018-01-18, due 2018-01-19)
If S_inv is set to an inverse operator, it should be used instead of S.inverse_multiply/times in

    D^-1 = S_inv + M

Use case: sometimes only a non-invertible smoothness-enforcing operator S_inv \propto k^2 or k^4 should be used, or an explicitly coded pixel-space operator.

Issue #8: Profile the d2o.bincount method
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/8 (Theo Steininger, updated 2016-05-26)
The d2o.bincount method scales well with MPI parallelization but, compared to single-core np.bincount, it has a rather big overhead.

Issue #7: Move `flatten` method into the distributor.
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/7 (Theo Steininger, updated 2016-05-26)
At the moment `flatten` is performed by the distributed_data_object itself, which assumes that flattening the local arrays produces the right result. In general, with arbitrary distribution strategies, this is wrong.

Issue #6: Add `source_rank` parameter to d2o.set_full_data()
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/6 (Theo Steininger, updated 2016-05-26)
A source_rank parameter should be added in order to specify on which node the source data array resides.
Issue #5: Add `copy` parameter to d2o.get_data()
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/5 (Theo Steininger, updated 2016-04-06)
A `copy` parameter should be added to d2o.get_data in order to control whether the resulting d2o contains a view on, or a copy of, the old data.

Issue #4: Add `axis` keyword functionality to unary methods.
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/4 (Theo Steininger, updated 2016-05-26)
Many numpy functions support the `axis` keyword in order to perform an operation only along certain directions of the array. The current implementation of d2o does not support this, e.g. for `all`, `any`, `sum`, etc.
Related to: theos/NIFTy#3

Issue #3: d2o: _contraction_helper does not work when using numpy keyword arguments
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/3 (Theo Steininger, updated 2017-09-20)
The `_contraction_helper` passes keyword arguments (axis=, keepdims=) on to the underlying numpy functions. The result of the local computation is then an array and not a scalar, so the dtype check fails.
Fix: After solving theos/NIFTy#2, adapt to the case that the local run's result object is an array, and make a further distinction of cases, e.g. for something like axis=0 with the slicing_distributor.

Issue #2: d2o: Contraction functions rely on non-degeneracy of distribution strategy
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/2 (Theo Steininger, updated 2017-09-20)
Several methods of the distributed_data_object rely on the distribution strategy behaving as if the local data were non-degenerate. Currently the non-distributor fixes this by returning trivial (local) results in the _allgather and _Allreduce_sum methods.
Affected d2o methods are at least: _contraction_helper, mean.
Fix: Move the functionality for sum, prod, etc. into the distributor.

Issue #1: Add out-array parameter to numerical d2o operations.
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/1 (Theo Steininger, updated 2016-05-26)
Numpy supports specifying an out array in order to avoid memory reallocation:
    a = np.array([1,2,3,4])
    b = np.array([5,6,7,8])
    # slow:
    a = a + b
    # fast:
    np.add(a,b,out=a)
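A sketch of how such an out parameter could look on a d2o-style operation, assuming a minimal stand-in class (`ToyD2O` and its `add` method are illustrative, not d2o's real API; the real class distributes its data over MPI nodes):

```python
import numpy as np

class ToyD2O:
    # Minimal stand-in for a distributed_data_object holding one
    # local numpy chunk.
    def __init__(self, data):
        self.data = np.asarray(data)

    def add(self, other, out=None):
        # Sketch of the requested interface: when out= is given, write
        # the result into out's existing buffer via numpy's own out=
        # support instead of allocating a fresh array.
        if out is None:
            return ToyD2O(self.data + other.data)
        np.add(self.data, other.data, out=out.data)
        return out

a = ToyD2O([1, 2, 3, 4])
b = ToyD2O([5, 6, 7, 8])
a.add(b, out=a)   # reuses a's buffer, mirroring np.add(a, b, out=a)
```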