ift issues
https://gitlab.mpcdf.mpg.de/groups/ift/-/issues

https://gitlab.mpcdf.mpg.de/ift/nifty/issues/250
[post Nifty6] new design for the `Field` class? (2019-12-02, Martin Reinecke)

I'm wondering whether we can solve most of our current problems regarding fields and multifields by introducing a single new `Field` class that
- unifies the current `Field` and `MultiField` classes (a classic field would have a single dictionary entry with an empty string as name)
- is immutable
- can contain scalars or array-like objects to describe field content (maybe even functions?)
If we have such a class we can again demand that all operations on fields require identical domains, and we can still avoid writing huge zero-filled arrays by writing scalar zeros instead.
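A class along these lines might look like the following minimal sketch (this is purely illustrative, not NIFTy API; immutability is only by convention here). A classic field is the special case of one component with the empty string as its name, and a scalar value stands in for a constant array:

```python
import numpy as np

class Field:
    """Unified Field/MultiField sketch: a dict of named components."""
    def __init__(self, domain, val):
        # domain: dict mapping component names to shapes
        # val: dict mapping component names to scalars or arrays
        self._domain = dict(domain)
        self._val = {k: (v if np.isscalar(v) else np.asarray(v))
                     for k, v in val.items()}

    def __add__(self, other):
        # demand identical domains for all operations
        if self._domain != other._domain:
            raise ValueError("domain mismatch")
        return Field(self._domain,
                     {k: self._val[k] + other._val[k] for k in self._val})

    @classmethod
    def zeros(cls, domain):
        # scalar zeros instead of huge zero-filled arrays
        return cls(domain, {k: 0.0 for k in domain})

f = Field({"": (3,)}, {"": np.array([1., 2., 3.])})  # a "classic" field
z = Field.zeros({"": (3,)})
print((f + z)._val[""])  # broadcasting keeps the scalar zero cheap
```

Adding a scalar-zero field never materializes an array; numpy broadcasting does the right thing only when the component is actually combined with data.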
@reimar, @kjako, @parras please let me know what you think.

https://gitlab.mpcdf.mpg.de/ift/nifty/issues/276
testing differentiability for complex operators (2019-11-19, Reimar Heinrich Leike)

The routine extra.check_jacobian_consistency only checks derivatives in the real direction. There are two other interesting cases: differentiability in the real and imaginary directions, and complex differentiability (which is the former with the additional requirement that df/d(Imag) = i*df/d(Re)).

https://gitlab.mpcdf.mpg.de/ift/nifty/issues/274
Restructure DOFDistributor (2019-10-17, Jakob Knollmueller)

Hi,
I just want to document the thought of changing the DOFDistributor to a BinDistributor, built analogously to the adjoint of numpy's bincount function. This should make things clearer and allow operations on fields over any one-dimensional domain. We could get rid of the DOFSpace and change the default to an UnstructuredDomain.
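The bincount analogy could be sketched as follows (class and method names are illustrative, not NIFTy API): distributing bin values onto field positions is plain fancy indexing, and its adjoint is exactly np.bincount with weights.

```python
import numpy as np

class BinDistributor:
    """Sketch: spreads per-bin values onto field positions via an
    index array; the adjoint sums field values back into bins."""
    def __init__(self, bin_index, nbins):
        self._idx = bin_index      # one bin id per field position
        self._nbins = nbins

    def times(self, bin_values):
        # distribute: field[i] = bin_values[idx[i]]
        return bin_values[self._idx]

    def adjoint_times(self, field):
        # the adjoint of indexing is np.bincount with weights
        return np.bincount(self._idx, weights=field,
                           minlength=self._nbins)

idx = np.array([0, 2, 1, 2, 0])
op = BinDistributor(idx, nbins=3)
x = np.array([1.0, 2.0, 3.0])
y = np.arange(5, dtype=float)
# adjointness check: <R x, y> == <x, R^T y>
print(np.dot(op.times(x), y), np.dot(x, op.adjoint_times(y)))
```

Since np.bincount only needs a flat index array, this construction works for any one-dimensional domain, as the issue suggests.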
Jakob

https://gitlab.mpcdf.mpg.de/ift/nifty/issues/254
Better support for partial inference (2019-05-21, Martin Reinecke)

I have added the `partial-const` branch for tweaks that improve NIFTy's support for partial inference.
The current idea is that one first builds the full Hamiltonian (as before), and then obtains a simplified version for partially constant inputs by calling its method `simplify_for_constant_input()` with the constant input components.
All that needs to be done (I hope) is to specialize this method for all composed operators, i.e. those that call other operators internally.
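A toy version of the mechanism might look like this (the operator classes are made up; only the method name follows the issue): an operator on dict-valued inputs returns a simplified version in which the given constant components are baked in, so later calls only need the remaining components.

```python
class SumOperator:
    """Toy operator: adds up all named input components."""
    def __init__(self, keys, offset=0.0):
        self._keys, self._offset = tuple(keys), offset

    def apply(self, x):
        return self._offset + sum(x[k] for k in self._keys)

    def simplify_for_constant_input(self, const):
        # fold the constant components into a precomputed offset
        offset = self._offset + sum(const[k] for k in const)
        keys = [k for k in self._keys if k not in const]
        return SumOperator(keys, offset)

ham = SumOperator(["a", "b", "c"])
partial = ham.simplify_for_constant_input({"b": 10.0})
print(partial.apply({"a": 1.0, "c": 2.0}))   # 13.0
```

A composed operator would specialize the method by asking its internal operators to simplify themselves and re-composing the results, which is presumably what needs to be done case by case.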
@kjako, @reimar, @parras, @pfrank: does the principal idea look sound to you?

https://gitlab.mpcdf.mpg.de/ift/nifty/issues/258
Branch cleanup (2019-05-21, Martin Reinecke)

I'd like to get a better understanding of which branches are still used and which ones can be deleted:
- new_los (I'll adjust the code for NIFTy 5)
- new_sampling (@reimar is this still relevant?)
- symbolicDifferentiation (@parras do you want to keep this?)
- yango_minimizer (@reimar ?)
- addUnits (@parras ?)
- theo_master (I guess I'll convert this into a tag)
- nifty2go (is anyone still using this? Otherwise I'd convert it into a tag as well)
@ensslint, any comments?

https://gitlab.mpcdf.mpg.de/ift/IMAGINE/issues/1
Extend carrier-mapper to partial -np.inf <-> np.inf (2018-05-17, Theo Steininger)

If a and/or b are (-)np.inf, just shift x by the value of x if it is smaller/greater than m.

https://gitlab.mpcdf.mpg.de/ift/IMAGINE/issues/3
Long-term plans? (2018-05-14, Martin Reinecke)

I expect that IMAGINE will continue to be used in the future.
Is the goal to adjust it to NIFTy 4 or to stick to NIFTy 3?
I can help with the migration if necessary.
In its current state the package is not installable, since setup.py tries to clone the "master" branch of NIFTy, which does not exist.

https://gitlab.mpcdf.mpg.de/ift/nifty/issues/235
Improved LOS response? (2018-04-30, Martin Reinecke)

@kjako mentioned some time ago that the current LOSResponse exhibits some artifacts. I suspect that they are caused by our method of computing the "influence" of lines of sight on individual data points.
Currently, the influence of a line of sight on a data point is simply proportional to the length of its intersection with the cell around the data point. It doesn't matter whether the LOS cuts the cell near the edges or goes straight through the center. This also implies that tiny shifts of a line of sight may lead to dramatic changes of its influence on data points, which is probably not desirable.
I propose to use an approach that is more similar to SPH methods:
- each data point has a sphere of influence with a given R_max (similar to the cell distances)
- it interacts with all lines of sight that intersect its sphere of influence
- the interaction strength scales with the integral along the line of sight over a function f(r) around the data point, which falls to zero at R_max.
My expectation is that this will reduce grid-related artifacts.
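The proposed weighting can be sketched numerically as follows (the linear kernel and step size are illustrative choices, not a concrete proposal): the weight is the integral along the line of sight of a kernel f(r) centred on the data point, with f falling to zero at R_max.

```python
import numpy as np

def los_weight(start, end, point, r_max, n_steps=1000):
    """Integral along the segment start->end of a kernel f(r)
    around `point` that falls to zero at r_max (Riemann sum)."""
    start, end, point = map(np.asarray, (start, end, point))
    t = np.linspace(0.0, 1.0, n_steps)
    samples = start + t[:, None] * (end - start)   # points on the LOS
    r = np.linalg.norm(samples - point, axis=1)
    f = np.clip(1.0 - r / r_max, 0.0, None)        # zero beyond R_max
    seg = np.linalg.norm(end - start) / (n_steps - 1)
    return f.sum() * seg

# a LOS through the centre gets more weight than one grazing the edge
w_center = los_weight([0.0, -1.0], [0.0, 1.0], point=[0.0, 0.0], r_max=0.5)
w_edge = los_weight([0.4, -1.0], [0.4, 1.0], point=[0.0, 0.0], r_max=0.5)
print(w_center > w_edge > 0.0)   # True
```

In contrast to the current intersection-length scheme, the weight here varies smoothly as the line of sight is shifted, which is exactly the property that should suppress grid-related artifacts.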
@ensslint, @kjako: does this sound worthwhile to try?

https://gitlab.mpcdf.mpg.de/ift/IMAGINE/issues/2
Add `Galaxy` class which carries constituents (e-, dust, B) (2017-10-04, Theo Steininger)

https://gitlab.mpcdf.mpg.de/ift/D2O/issues/4
Add unit tests for `copy=True/False` functionality (2017-07-06, Theo Steininger)

https://gitlab.mpcdf.mpg.de/ift/D2O/issues/5
Add function d2o.arange (2017-07-06, Theo Steininger)

https://gitlab.mpcdf.mpg.de/ift/D2O/issues/6
d2o cumsum and flatten rely on certain features of distribution strategy (2017-07-06, Theo Steininger)

cumsum and flatten assume that if the shape of the d2o changes through flattening, the distribution strategy was "slicing".

https://gitlab.mpcdf.mpg.de/ift/D2O/issues/7
The d2o_librarian will fail when mixing different MPI comms (2017-07-06, Theo Steininger)

Every local librarian instance on a node of an MPI cluster just increments its internal counter by one when a new d2o is registered. This gets out of sync when only a part of the full cluster is covered by a special comm.
Possible solution: the individual librarians store the id of 'their' d2o and communicate a common id for their dictionary.
Con: involves MPI communication.

https://gitlab.mpcdf.mpg.de/ift/D2O/issues/8
Add support for `from array` indexing (2017-07-06, Theo Steininger)

When building the kdict from pindex and kindex, something of the following form must be done (a == kindex, b == pindex):
    a = np.arange(16) * 2
    b = np.array([[3, 2], [1, 0]])

    In [1]: a[b]
    Out[1]:
    array([[6, 4],
           [2, 0]])

Currently, this is solved using a hack:

    p.apply_scalar_function(lambda z: obj[z])

This functionality could easily be added to the get_data interface.
https://gitlab.mpcdf.mpg.de/ift/D2O/issues/9
Semi-advanced indexing is not recognized (2017-07-06, Theo Steininger)

    a = np.arange(24).reshape((3, 4, 2))
    obj = distributed_data_object(a)

Semi-advanced indexing

    a[(2, 1, 1), 1]

yields

    array([[18, 19],
           [10, 11],
           [10, 11]])

The "indexinglist" scheme in d2o expects either scalars or numpy arrays as tuple elements; therefore

    obj[(2, 1, 1), 1]

raises an AttributeError. However,

    obj[np.array((2, 1, 1)), 1]

works.

Solution: parse the elements and, if in doubt, cast them to numpy arrays.
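The proposed parsing step could be sketched as follows (the function name is hypothetical; d2o's real dispatch is more involved): normalize the index tuple so that sequence entries become numpy arrays before indexing.

```python
import numpy as np

def normalize_key(key):
    """Cast list/tuple entries of an index tuple to numpy arrays,
    leaving scalars and existing arrays untouched."""
    if not isinstance(key, tuple):
        key = (key,)
    return tuple(np.asarray(k) if isinstance(k, (list, tuple)) else k
                 for k in key)

a = np.arange(24).reshape((3, 4, 2))
key = normalize_key(((2, 1, 1), 1))
print(a[key])   # same result as a[(2, 1, 1), 1]
```

After normalization, the semi-advanced key from the issue hits the existing array-index code path instead of failing.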
https://gitlab.mpcdf.mpg.de/ift/D2O/issues/10
Profile the d2o.bincount method (2017-07-06, Theo Steininger)

The d2o.bincount method scales well with MPI parallelization but, compared to single-core np.bincount, has a rather big overhead.

https://gitlab.mpcdf.mpg.de/ift/D2O/issues/11
Move `flatten` method into the distributor (2017-07-06, Theo Steininger)

At the moment `flatten` is performed by the distributed_data_object itself, which assumes that flattening the local arrays produces the right result. In general, with arbitrary distribution strategies, this is wrong.

https://gitlab.mpcdf.mpg.de/ift/D2O/issues/12
Add `source_rank` parameter to d2o.set_full_data() (2017-07-06, Theo Steininger)

A source_rank parameter should be added to distributed_data_object in order to specify which node the source data array resides on.

https://gitlab.mpcdf.mpg.de/ift/D2O/issues/13
Add `axis` keyword functionality to unary methods (2017-07-06, Theo Steininger)

Many numpy functions support the `axis` keyword in order to perform an operation only along certain directions of the array. The current implementation of d2o does not support this, e.g. for `all`, `any`, `sum`, etc.
Related to: theos/NIFTy#3

https://gitlab.mpcdf.mpg.de/ift/D2O/issues/14
d2o: _contraction_helper does not work when using numpy keyword arguments (2017-07-06, Theo Steininger)

The `_contraction_helper` passes keyword arguments to the underlying numpy functions (axis=, keepdims=). The result of the _contraction_helper's local computation is then an array and not a scalar; therefore the dtype check fails.
Fix: after solving theos/NIFTy#2, adapt to the case that the local run's result object is an array and make a further distinction of cases, e.g. for something like axis=0 for the slicing_distributor.
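The distinction of cases could be sketched like this (pure numpy, simulating two ranks without real MPI; names are illustrative, not d2o API): a full reduction yields scalars to combine, a reduction along the distributed axis yields arrays that must be combined element-wise, and a reduction along a non-distributed axis stays purely local.

```python
import numpy as np

def contraction_sum(local_chunks, axis=None):
    """Sum-contraction for chunks sliced along axis 0."""
    if axis is None:
        # full reduction: each rank produces a scalar, then combine
        return sum(chunk.sum() for chunk in local_chunks)
    if axis == 0:
        # reduction along the distributed axis: local results are
        # arrays and must be summed element-wise across ranks
        return sum(chunk.sum(axis=0) for chunk in local_chunks)
    # reduction along a non-distributed axis: purely local; the
    # result stays distributed (here simply concatenated)
    return np.concatenate([chunk.sum(axis=axis) for chunk in local_chunks])

full = np.arange(12).reshape(4, 3)
chunks = [full[:2], full[2:]]   # two "ranks", sliced along axis 0
print(contraction_sum(chunks))                # == full.sum()
print(contraction_sum(chunks, axis=0))        # == full.sum(axis=0)
print(contraction_sum(chunks, axis=1))        # == full.sum(axis=1)
```

In real d2o the element-wise combination would be an MPI reduction, but the case analysis is the same.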