ift issues
https://gitlab.mpcdf.mpg.de/groups/ift/-/issues

https://gitlab.mpcdf.mpg.de/ift/nifty/issues/274
Restructure DOFDistributor (Jakob Knollmueller, 2019-10-17)

Hi,

I just want to document the idea of changing the DOFDistributor into a BinDistributor, built analogously to the adjoint of numpy's bincount function. This would make things clearer and allow operations on fields over any one-dimensional domain. We could then get rid of the DOFSpace and make UnstructuredDomain the default.

Jakob
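The relationship to bincount can be sketched with plain numpy (a hypothetical illustration; `distribute` and `bin_adjoint` are made-up names, not existing NIFTy code): distributing copies each bin value onto every position assigned to that bin, and the adjoint of this map is exactly a weighted `np.bincount`.

```python
import numpy as np

def distribute(bin_values, assignment):
    """Broadcast bin values onto positions (adjoint of a weighted bincount)."""
    return bin_values[assignment]

def bin_adjoint(position_values, assignment, nbins):
    """Sum position values into their bins via np.bincount."""
    return np.bincount(assignment, weights=position_values, minlength=nbins)

# Adjointness check: <distribute(x), y> == <x, bin_adjoint(y)>
rng = np.random.default_rng(0)
nbins, npos = 4, 10
assignment = rng.integers(0, nbins, size=npos)
x = rng.normal(size=nbins)
y = rng.normal(size=npos)
assert np.isclose(np.vdot(distribute(x, assignment), y),
                  np.vdot(x, bin_adjoint(y, assignment, nbins)))
```

Since the assignment array can come from any one-dimensional domain, no dedicated DOFSpace is needed for this formulation.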
https://gitlab.mpcdf.mpg.de/ift/nifty/issues/266
Amplitude model (Philipp Arras, 2019-04-18)

I am back at considering what kind of default amplitude model is suitable for NIFTy. I think the current `SLAmplitude` operator does not provide a sensible prior on the zero mode, so I think we should change it.

What I do for lognormal problems is the following: set the zero mode of the amplitude operator A constantly to one and use the sky model exp(ht @ U @ (A*xi)), where U is an operator that turns a standard normal distribution into a flat distribution with a lower and an upper bound.

My problem is that `demos/getting_started_3.py` does not show how to properly set up a prior on the zero mode.

@reimar, @pfrank, do you have a suggestion how to deal with this?
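Such an operator U can be sketched as follows (a hedged illustration, not NIFTy's actual implementation): if xi is standard-normally distributed, Phi(xi) is uniform on (0, 1), so rescaling Phi(xi) yields a flat distribution on [lower, upper].

```python
import math
import numpy as np

def std_normal_cdf(x):
    # Phi(x) = (1 + erf(x / sqrt(2))) / 2
    return 0.5 * (1.0 + np.vectorize(math.erf)(x / math.sqrt(2.0)))

def uniform_transform(xi, lower, upper):
    """Map a standard-normal sample to Uniform(lower, upper) via the CDF."""
    return lower + (upper - lower) * std_normal_cdf(xi)

xi = np.random.default_rng(0).normal(size=100_000)
u = uniform_transform(xi, 0.5, 2.0)
assert 0.5 <= u.min() and u.max() <= 2.0
assert abs(u.mean() - 1.25) < 0.02  # mean of Uniform(0.5, 2.0) is 1.25
```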
https://gitlab.mpcdf.mpg.de/ift/nifty/issues/258
Branch cleanup (Martin Reinecke, 2019-05-21)

I'd like to get a better understanding of which branches are still used and which ones can be deleted:

- new_los (I'll adjust the code for NIFTy 5)
- new_sampling (@reimar, is this still relevant?)
- symbolicDifferentiation (@parras, do you want to keep this?)
- yango_minimizer (@reimar?)
- addUnits (@parras?)
- theo_master (I guess I'll convert this into a tag)
- nifty2go (is anyone still using this? Otherwise I'd convert it into a tag as well)

@ensslint, any comments?
https://gitlab.mpcdf.mpg.de/ift/nifty/issues/255
Unify SlopeOperator, OffsetOperator, Polynom-fitting demo into one ParametricOperator (Philipp Arras, 2019-01-31)

@pfrank, just to keep track of this idea.

https://gitlab.mpcdf.mpg.de/ift/nifty/issues/254
Better support for partial inference (Martin Reinecke, 2019-05-21)

I have added the `partial-const` branch for tweaks that improve NIFTy's support for partial inference.

The current idea is that one first builds the full Hamiltonian (as before) and then obtains a simplified version for partially constant inputs by calling its method `simplify_for_constant_input()` with the constant input components.

All that needs to be done (I hope) is to specialize this method for all composed operators, i.e. those that call other operators internally.

@kjako, @reimar, @parras, @pfrank: does the principal idea look sound to you?
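The idea can be sketched in miniature (class and method bodies here are assumptions for illustration, not NIFTy's actual code): a composed operator over a dict-valued input precomputes the contribution of the constant components and returns a cheaper operator over the remaining ones.

```python
class SumOperator:
    """Toy composed operator: sums all components of a dict-valued input."""

    def __call__(self, x):
        return sum(x.values())

    def simplify_for_constant_input(self, const):
        const_part = sum(const.values())  # computed once, reused in every call

        class Simplified:
            def __call__(self, x):
                return const_part + sum(x.values())

        return Simplified()

op = SumOperator()
simplified = op.simplify_for_constant_input({"a": 1.0})
# The simplified operator agrees with the full one on the remaining inputs.
assert simplified({"b": 2.0}) == op({"a": 1.0, "b": 2.0})
```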
https://gitlab.mpcdf.mpg.de/ift/nifty/issues/250
[post Nifty5] new design for the `Field` class? (Martin Reinecke, 2018-06-26)

I'm wondering whether we can solve most of our current problems regarding fields and multifields by introducing a single new `Field` class that

- unifies the current `Field` and `MultiField` classes (a classic field would have a single dictionary entry with an empty string as name)
- is immutable
- can contain scalars or array-like objects to describe field content (maybe even functions?)

If we have such a class, we can again demand that all operations on fields require identical domains, and we can still avoid writing huge zero-filled arrays by writing scalar zeros instead.

@reimar, @kjako, @parras please let me know what you think.
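A minimal sketch of such a unified class (an illustration of the idea only, not a NIFTy design proposal; all names are assumptions): a dict of entries where a classic field is the special case of one entry named "", values may be scalars or arrays, operations demand identical domains, and a scalar zero stands in for a zero-filled array until materialized.

```python
import numpy as np

class Field:
    """Toy unified field: dict of scalar-or-array values over named domains."""

    def __init__(self, domain, values):
        self._domain = dict(domain)   # name -> shape
        self._values = dict(values)   # name -> scalar or ndarray

    def __add__(self, other):
        # Demand identical domains for all operations.
        if self._domain != other._domain:
            raise ValueError("incompatible domains")
        return Field(self._domain,
                     {k: self._values[k] + other._values[k] for k in self._values})

    def to_array(self, name=""):
        # Scalars broadcast lazily; a full array is built only on request.
        return np.broadcast_to(self._values[name], self._domain[name])

a = Field({"": (3,)}, {"": 0.0})  # scalar zero, no large array is stored
b = Field({"": (3,)}, {"": np.array([1.0, 2.0, 3.0])})
assert (a + b).to_array().tolist() == [1.0, 2.0, 3.0]
```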
https://gitlab.mpcdf.mpg.de/ift/IMAGINE/issues/3
Long-term plans? (Martin Reinecke, 2018-05-14)

I expect that IMAGINE will continue to be used in the future.
Is the goal to adjust it to NIFTy 4 or to stick with NIFTy 3?
I can help with the migration if necessary.
In the current state the package is not installable, since setup.py tries to clone the "master" branch of NIFTy, which does not exist.
https://gitlab.mpcdf.mpg.de/ift/nifty/issues/235
Improved LOS response? (Martin Reinecke, 2018-04-30)

@kjako mentioned some time ago that the current LOSResponse exhibits some artifacts. I suspect that they are caused by our method of computing the "influence" of lines of sight on individual data points.

Currently, the influence of a line of sight on a data point is simply proportional to the length of its intersection with the cell around the data point. It doesn't matter whether the LOS cuts the cell near the edges or goes straight through the center. This also implies that tiny shifts of a line of sight may lead to dramatic changes in its influence on data points, which is probably not desirable.

I propose to use an approach that is more similar to SPH methods:

- each data point has a sphere of influence with a given R_max (similar to the cell distances)
- it interacts with all lines of sight that intersect its sphere of influence
- the interaction strength scales with the integral along the line of sight over a function f(r) around the data point, which falls to zero at R_max

My expectation is that this will reduce grid-related artifacts.

@ensslint, @kjako: does this sound worthwhile to try?
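The proposed weighting can be sketched numerically (an illustration, not existing NIFTy code; `influence_weight` and the triangular kernel are assumptions made for this sketch):

```python
import numpy as np

def influence_weight(start, end, point, r_max, nsamples=1000):
    """Integral of a kernel f(r) along the line of sight from start to end,
    where r is the distance to the data point and f(r) = max(0, 1 - r/r_max)
    is one simple choice of kernel falling to zero at r_max."""
    start, end, point = (np.asarray(v, dtype=float) for v in (start, end, point))
    t = np.linspace(0.0, 1.0, nsamples)
    samples = start + t[:, None] * (end - start)   # points along the LOS
    r = np.linalg.norm(samples - point, axis=1)    # distance to the data point
    f = np.clip(1.0 - r / r_max, 0.0, None)
    seg = np.linalg.norm(end - start) / (nsamples - 1)
    return f.sum() * seg                           # simple Riemann sum

# A line passing straight through the data point gets more weight than one
# merely grazing its sphere of influence, and weights vary smoothly with shifts.
w_center = influence_weight([0.0, -1.0], [0.0, 1.0], [0.0, 0.0], r_max=0.5)
w_edge = influence_weight([0.45, -1.0], [0.45, 1.0], [0.0, 0.0], r_max=0.5)
assert w_center > w_edge > 0.0
```

In contrast to the intersection-length rule, a small shift of the line changes r, and hence the weight, continuously.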
https://gitlab.mpcdf.mpg.de/ift/IMAGINE/issues/2
Add `Galaxy` class which carries constituents (e-, dust, B) (Theo Steininger, 2017-10-04)

https://gitlab.mpcdf.mpg.de/ift/D2O/issues/23
"float128"-error on Windows 10 64bit (Christoph Lienhard, 2017-06-22)

I get an error when using D2O-related functions in NIFTy on my Windows machine:

    site-packages\d2o-1.1.0-py2.7.egg\d2o\dtype_converter.py", line 54, in __init__
        [np.dtype('float128'), MPI.LONG_DOUBLE],
    TypeError: data type "float128" not understood

As far as I understand, numpy.float128 does not exist on every system (for some reason).

Edit: same problem with "complex256":

    \site-packages\d2o-1.1.0-py2.7.egg\d2o\distributed_data_object.py", line 1898, in _to_hdf5
        if self.dtype is np.dtype(np.complex256):
    AttributeError: 'module' object has no attribute 'complex256'
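A possible guard can be sketched like this (a hedged illustration, not D2O's actual fix): np.float128 and np.complex256 exist only on platforms whose C long double is an extended-precision type, so any dtype table should test for them before use.

```python
import numpy as np

extended_dtypes = []
if hasattr(np, "float128"):
    extended_dtypes.append(np.dtype(np.float128))
if hasattr(np, "complex256"):
    extended_dtypes.append(np.dtype(np.complex256))

# On Windows builds of numpy these attributes are typically absent,
# in which case the list simply stays empty and no TypeError is raised.
assert all(isinstance(dt, np.dtype) for dt in extended_dtypes)
```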
https://gitlab.mpcdf.mpg.de/ift/D2O/issues/22
`imag` and `real` break memory view (Theo Steininger, 2017-07-06)

    import numpy as np
    from d2o import *
    a = np.array([1, 2, 3, 4], dtype=np.complex)
    obj = distributed_data_object(a)
    obj.imag[0] = 1234
    obj
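For contrast, plain numpy semantics (an illustration of the behaviour the title says is broken): `ndarray.imag` returns a writable view for complex arrays, so assigning through it modifies the original array, whereas the snippet above suggests d2o loses the write.

```python
import numpy as np

a = np.array([1, 2, 3, 4], dtype=complex)
a.imag[0] = 1234  # writes through the view into the original array
assert a[0] == 1 + 1234j
```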
https://gitlab.mpcdf.mpg.de/ift/IMAGINE/issues/1
Extend carrier-mapper to partial -np.inf <-> np.inf (Theo Steininger, 2018-05-17)

If a and/or b are (-)np.inf, just shift x by the value of x if smaller/greater than m.

https://gitlab.mpcdf.mpg.de/ift/D2O/issues/21
Create dedicated object for 'distribution_strategy' (Theo Steininger, 2017-07-06)

Only global-type distribution strategies are comparable by their name directly. In order to compare local-type distribution strategies as well, implement an object which represents the distribution strategy. For 'freeform' this includes the individual slice lengths.

https://gitlab.mpcdf.mpg.de/ift/D2O/issues/20
Initialization: prefer global_shape/local_shape over shape of data (Theo Steininger, 2017-07-06)

Right now, during initialization, if some init-data is provided, the shape of this data is preferred over an explicitly given shape.

-> Change the behavior in the Distributor-Factory.
-> Make a disperse_data instead of a distribute_data at the end of init.
https://gitlab.mpcdf.mpg.de/ift/D2O/issues/19
reshaping -> enfold/defold (Theo Steininger, 2017-07-06)

enfold in the slicing distributor relies on the fact that the global axes correspond to the local array axes, as it operates on local data from `get_local_data()`.
In general this is not true for generic distribution strategies.
https://gitlab.mpcdf.mpg.de/ift/D2O/issues/18
Improve obj.searchsorted such that it supports scalar, numpy arrays and distributed_data_objects as input (Theo Steininger, 2017-07-06)

...and return a distributed_data_object if the input was distributed.
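As background, the numpy semantics the issue asks d2o to mirror (plain numpy, for illustration): searchsorted accepts scalar or array-valued search inputs and returns insertion indices that keep the array sorted.

```python
import numpy as np

a = np.array([1, 3, 5, 7])
assert np.searchsorted(a, 4) == 2                            # scalar in, scalar index out
assert np.searchsorted(a, [0, 6, 8]).tolist() == [0, 3, 4]   # array in, index array out
```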
https://gitlab.mpcdf.mpg.de/ift/D2O/issues/17
Add axis keyword to obj.vdot (Theo Steininger, 2017-07-06)

https://gitlab.mpcdf.mpg.de/ift/D2O/issues/16
Add out-array parameter to numerical d2o operations. (Theo Steininger, 2017-07-06)

Numpy supports specifying an out array in order to avoid memory reallocation:

    import numpy as np
    a = np.array([1, 2, 3, 4])
    b = np.array([5, 6, 7, 8])
    # slow: allocates a new result array
    a = a + b
    # fast: writes the result directly into a
    np.add(a, b, out=a)
https://gitlab.mpcdf.mpg.de/ift/D2O/issues/15
d2o: Contraction functions rely on non-degeneracy of distribution strategy (Theo Steininger, 2017-07-06)

Several methods of the distributed_data_object rely on the fact that the distribution strategy behaves as if the local data were non-degenerate. Currently the non-distributor fixes this by returning trivial (local) results in the _allgather and _Allreduce_sum methods.

Affected d2o methods are at least: _contraction_helper, mean

Fix: Move the functionality for sum, prod, etc. into the distributor.
https://gitlab.mpcdf.mpg.de/ift/D2O/issues/14
d2o: _contraction_helper does not work when using numpy keyword arguments (Theo Steininger, 2017-07-06)

The `_contraction_helper` passes keyword arguments (axis=, keepdims=) to the underlying numpy functions. The result of the _contraction_helper's local computation is then an array rather than a scalar, so the dtype check fails.

Fix: After solving theos/NIFTy#2, adapt to the case that the local run's result object is an array and make a further distinction of cases, e.g. for something like axis=0 in the slicing_distributor.