ift issues
https://gitlab.mpcdf.mpg.de/groups/ift/-/issues

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/312
Unexpected amplification of fluctuations in the correlated field model by adding axes (Gordian Edenhofer, 2020-10-20)

I am working on a problem in which I might dynamically add axes and as such was relying on the normalization of the zero-mode in the correlated field model. I was surprised to notice that with each added axis the fluctuations of the correlated field get amplified by the inverse of `offset_std_mean`. I am using NIFTy6 @ afc3e2df5e6b6d5f90eb414f07beeeefec2a085d.
The following code snippet reproduces the described amplification:
```python
cf_axes = {
"offset_mean": 0.,
"offset_std_mean": 1e-3,
"offset_std_std": 1e-4,
"prefix": ''
}
temporal_axis_fluctuations = {
'fluctuations_mean': 1.,
'fluctuations_stddev': 0.1,
'loglogavgslope_mean': -1.0,
'loglogavgslope_stddev': 0.5,
'flexibility_mean': 2.5,
'flexibility_stddev': 1.0,
'asperity_mean': 0.5,
'asperity_stddev': 0.5,
'prefix': 'temporal_axis'
}
fish_axis_fluctuations = {
'fluctuations_mean': 1.,
'fluctuations_stddev': 0.1,
'loglogavgslope_mean': -1.5,
'loglogavgslope_stddev': 0.5,
'flexibility_mean': 2.5,
'flexibility_stddev': 1.0,
'asperity_mean': 0.5,
'asperity_stddev': 0.3,
'prefix': 'fish_axis'
}
cfmaker = ift.CorrelatedFieldMaker.make(**cf_axes)
cfmaker.add_fluctuations(ift.RGSpace(7638), **temporal_axis_fluctuations)
# cfmaker.add_fluctuations(ift.RGSpace(8), **fish_axis_fluctuations)
```
yields
```python
cf = cfmaker.finalize()
np.mean([cf(ift.from_random(cf.domain)).s_std() for _ in range(100)]) # \approx 1
```
which is what I would expect. However, if I uncomment the last line in the second cell above, the result becomes
```python
cf = cfmaker.finalize()
np.mean([cf(ift.from_random(cf.domain)).s_std() for _ in range(100)]) # \approx 1e+3
```
This does not make sense to me; I was expecting the same result as in the previous cell.
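For what it's worth, the reported numbers are consistent with exactly one surplus factor of `1/offset_std_mean` per axis beyond the first. A back-of-the-envelope check of that hypothesis (plain Python, not NIFTy; `expected_std` is an illustrative function of mine, not library API):

```python
# Hypothesis (not a confirmed mechanism): each call to add_fluctuations
# applies the zero-mode normalization once more, so every axis beyond the
# first amplifies the sample std by 1/offset_std_mean.
offset_std_mean = 1e-3

def expected_std(n_axes, offset_std_mean):
    """Sample std predicted by the one-surplus-factor-per-axis hypothesis."""
    return (1.0 / offset_std_mean) ** (n_axes - 1)

print(expected_std(1, offset_std_mean))  # one axis: 1.0, matching the ~1 above
print(expected_std(2, offset_std_mean))  # two axes: ~1e3, matching the report
```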
In short: Am I missing something, or is there a bug in the correlated field model?
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/310
Unexpected behaviour of ducktape and ducktape_left (Philipp Frank, 2020-07-01)

I've noticed unexpected behaviour of `ducktape_left` and `ducktape` in combination with `Linearization`.
The following example code:
```python
a = ift.Linearization.make_var(ift.from_random(ift.RGSpace(3))).ducktape_left('a')
```
does not produce an error, but returns something that is no longer an instance of `Linearization`: it returns an `_OpChain` instead. The same is true if we replace `ducktape_left` with `ducktape`.
I suspect this is due to the fact that `Linearization` is now an instance of `Operator` but does not implement `ducktape` itself.
In contrast, if we try this with a `Field`:
```python
a = ift.from_random(ift.RGSpace(3)).ducktape_left('a')
```
we get an error.
I see two possible solutions for this: either we disable the (currently unintended?) support of `Linearization` in `ducktape` and `ducktape_left` or we implement a proper version of them for `Field` and `Linearization`.
@all does somebody know what is going on here?
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/308
Fisher test for VariableCovarianceGaussianEnergy not sensitive (Philipp Arras, 2020-06-23)

On the branch `metric_tests` I have introduced a breaking factor (949578182c660faee1ea7344d79993d9d1b35310) and the test does not break. I am not sure how to fix it. Is it even possible to fix it in this case? Do we need two test cases, one where the mean is sampled and one where the variance is sampled?
Assignee: Reimar Heinrich Leike

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/307
Inverse Gamma excitations: distribution too centered (Julia Stadler, 2020-08-14)

Dear NIFTy gang,
I have noticed strange behavior with the InverseGammaOperator and was wondering if somebody knows where it might be coming from. In cases where the data imposes few constraints on the field generated by the InverseGammaOperator, the distribution of excitations (after the inference) is very narrowly centered around zero, while naively I would expect a Gaussian with a standard deviation of unity. This can happen either because parts of the data are masked or because I upscaled my error bars. In principle I don't care much about this, but I am worried that it might be caused by a bug I have somewhere in my inference. Attached is an example, where the left side shows the excitations of a field which is constrained by the data over the full FOV, and on the right side most of the data is masked.
Cheers, Julia
![2020-06-02_analysis20200528set01_invGexcitations](/uploads/3d38eb80bf4097c6dff9bb980459359a/2020-06-02_analysis20200528set01_invGexcitations.png)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/291
Problematic MultiField methods (Martin Reinecke, 2020-04-02)

Currently, `MultiField` has methods like `s_sum`, `clip`, and many transcendental functions. I'm wondering whether they make any sense: what's the point of computing the sum over all values in several fields, or computing the sine of all field entries?
We currently use `MultiField`'s `norm` property in minimization, but given that the components of the `MultiField` may have vastly different scales, is this actually a clever thing to do? This basically means that we stop minimizing once the field component with the largest values has converged ... not necessarily what we want.
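A small numpy illustration of the concern (made-up magnitudes): once one component dominates, the joint norm is numerically blind to the others, so a norm-based convergence criterion effectively ignores them.

```python
import numpy as np

big = np.full(100, 1e6)     # component with large-scale values
small = np.full(100, 1e-3)  # component with much smaller values

# joint norm over both components, as a MultiField norm would compute it
joint_norm = np.sqrt(np.sum(big**2) + np.sum(small**2))

# relative contribution of the small component to the squared joint norm
rel_contrib = np.sum(small**2) / joint_norm**2
assert rel_contrib < 1e-15  # the small component is invisible to the norm
```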
@parras, @pfrank, @reimar, @kjako
@parras, @pfrank, @reimar, @kjako

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/290
Data types for the Grand Unification (Martin Reinecke, 2020-04-08)

```
Space:
    structured or unstructured set of points (current NIFTy's "Domain")

SpaceTuple:
    external product of zero or more Spaces (current NIFTy's "DomainTuple")
    SpaceTuple = (Space, Space, ...)

SpaceTupleDict:
    dictionary (with string keys) of one or more SpaceTuples
    SpaceTupleDict = {name1: SpaceTuple1, name2: SpaceTuple2, ...}
    (current NIFTy's "MultiDomain")

Domain:
    This is an abstract concept. Can currently be represented by SpaceTuple or SpaceTupleDict.

Special domains:
    ScalarDomain: empty SpaceTuple

Operator:
    - Operators take an Operand defined on an input domain, transform it in some way,
      and return a new Operand defined on a (possibly different) target domain.
    - Operators can be concatenated, as long as the domains at the interface are
      identical. The result is another Operator.

Operand:
    - Operand objects represent fields and potentially their Jacobians and metrics.
    - An Operand object can be asked for its "value" and (if configured accordingly)
      its Jacobian. If the target domain of an Operand object is scalar, a metric
      may also be available.
    - Applying an Operator to an Operand object will always return another Operand object.

class Operator(object):
    @property
    def domain(self):
        # return input domain
    @property
    def target(self):
        # return output domain
    def __call__(self, other):
        # if isinstance(other, Operand) return an Operand object
        # else return an Operator object
    def __matmul__(self, other):
        return self(other)

class LinearOperator(Operator):
    # more or less analogous to the current LinearOperator

class Operand(object):
    # this unifies current NIFTy's Field, MultiField, and Linearization classes
    @property
    def domain(self):
        # if no Jacobian is present, return None, else the Jacobian's domain
    @property
    def target(self):
        # return the domain on which the value of the Operand is defined. This is
        # also the Jacobian's target (if a Jacobian is defined)
    @property
    def val(self):
        # return a low-level data structure holding the actual values (currently
        # numpy.ndarray or a dictionary of numpy.ndarrays). Read-only.
    def val_rw(self):
        # return a writeable copy of `val`
    def fld(self):
        # return an Operand that only contains the value content of this object.
        # Its Jacobian and potential higher derivatives will be `None`.
    @property
    def jac(self):
        # return a Jacobian LinearOperator if possible, else None
    @property
    def want_metric(self):
        # return True or False
    @property
    def metric(self):
        # if self.jac is None, raise an exception
        # if self.target is not ScalarDomain, raise an exception
        # if not self.want_metric, raise an exception
        # if the metric cannot be computed, raise an exception
        # return metric
```
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/288
Make convergence tests less fragile (Martin Reinecke, 2020-03-22)

The switch to `numpy`'s new RNG interface has shown that some of our convergence and consistency tests are not very robust: in principle these tests should succeed for any random seed we use during the problem setup, but this is apparently not the case. We should have a closer look at the problematic tests and fix them accordingly.

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/286
NIFTy grand unification: unify MultiFields and Fields (Martin Reinecke, 2020-04-07)

- all new fields have the internal structure of a MultiField
- a classic "standard" field is represented by a new field with a single key
that is the empty string
- Many of our operators work on part of a DomainTuple (e.g. FFTOperator).
  Typically this is specified by passing the domain and additionally a "spaces"
  argument, which is None, an int, or a tuple of ints.
  Since in the future every domain is a "multi-domain", this is no longer
  sufficient: the partial domain must now contain an additional string defining
  the name of the required field component. This requires an update
  (and renaming) of "parse_spaces", "infer_space", etc.
Maybe it's good to introduce a new "PartialDomain" class which contains
* a string containing the desired field component, and
* an integer tuple containing the desired subspaces of that component
- "MultiField" will be renamed to "Field"; "Field" will probably be renamed
to some internal helper class or completely implemented within the new "Field".
- "MultiDomain" will be renamed to ???; "DomainTuple" will probably become
"_DomainTuple", i.e. it should not be directly accessed by external users.
- "makeField" and "makeDomain" become static "make" members of "Field" and
"Domain"
"Domain"
Assignee: Martin Reinecke

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/276
testing differentiability for complex operators (Reimar Heinrich Leike, 2019-11-19)

The routine extra.check_jacobian_consistency only checks derivatives in the real direction. There are two other interesting cases: differentiability in the real and imaginary directions, and complex differentiability (which is the former with the additional requirement that df/d(Im) = i*df/d(Re)).
Assignee: Reimar Heinrich Leike

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/274
Restructure DOFDistributor (Jakob Knollmueller, 2019-10-17)

Hi,
I just want to document the thought of changing the DOFDistributor to a BinDistributor and building it analogously to the adjoint of numpy's bincount function. This should make things clearer and allow for operations on fields of any one-dimensional domain. We could get rid of the DOFSpace and change the default to an UnstructuredDomain.
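The adjointness this refers to can be sketched in plain numpy (the names `distribute`/`distribute_adjoint` are illustrative, not proposed API): distributing bin values onto points is the adjoint of summing point weights into bins via `np.bincount`.

```python
import numpy as np

rng = np.random.default_rng(0)
nbins, npoints = 5, 20
idx = rng.integers(0, nbins, size=npoints)  # bin assignment for each point

def distribute(x, idx):
    """Spread bin values onto points: R^nbins -> R^npoints."""
    return x[idx]

def distribute_adjoint(y, idx, nbins):
    """Sum point weights back into their bins: R^npoints -> R^nbins."""
    return np.bincount(idx, weights=y, minlength=nbins)

x = rng.normal(size=nbins)
y = rng.normal(size=npoints)
# adjointness check: <distribute(x), y> == <x, distribute_adjoint(y)>
assert np.isclose(distribute(x, idx) @ y, x @ distribute_adjoint(y, idx, nbins))
```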
Jakob
Jakob

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/258
Branch cleanup (Martin Reinecke, 2019-05-21)

I'd like to get a better understanding which branches are still used and which ones can be deleted:
- new_los (I'll adjust the code for NIFTy 5)
- new_sampling (@reimar is this still relevant?)
- symbolicDifferentiation (@parras do you want to keep this?)
- yango_minimizer (@reimar ?)
- addUnits (@parras ?)
- theo_master (I guess I'll convert this into a tag)
- nifty2go (is anyone still using this? Otherwise I'd convert it into a tag as well)
@ensslint, any comments?
@ensslint, any comments?
Assignee: Martin Reinecke

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/250
[post Nifty6] new design for the `Field` class? (Martin Reinecke, 2019-12-02)

I'm wondering whether we can solve most of our current problems regarding fields and multifields by introducing a single new `Field` class that
- unifies the current `Field` and `MultiField` classes (a classic field would have a single dictionary entry with an empty string as name)
- is immutable
- can contain scalars or array-like objects to describe field content (maybe even functions?)
If we have such a class we can again demand that all operations on fields require identical domains, and we can still avoid writing huge zero-filled arrays by writing scalar zeros instead.
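A minimal sketch of that idea (class and method names are hypothetical, not NIFTy code): a classic field is a single-entry dict keyed by the empty string, and a scalar entry stands in for a constant-filled array.

```python
import numpy as np

class UnifiedField:
    """Toy sketch of the proposed unified Field class."""
    def __init__(self, data):
        self.data = dict(data)  # {component name: scalar or ndarray}

    def __add__(self, other):
        # demand identical domains (same component names)
        assert self.data.keys() == other.data.keys()
        return UnifiedField({k: self.data[k] + other.data[k] for k in self.data})

a = UnifiedField({"": np.arange(3.0)})  # classic field: single empty-string key
b = UnifiedField({"": 0.0})             # scalar zero instead of a zero-filled array
c = a + b
assert np.array_equal(c.data[""], np.arange(3.0))
```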
@reimar, @kjako, @parras please let me know what you think.
@reimar, @kjako, @parras please let me know what you think.

https://gitlab.mpcdf.mpg.de/ift/IMAGINE/-/issues/3
Long-term plans? (Martin Reinecke, 2018-05-14)

I expect that IMAGINE will continue to be used in the future.
Is the goal to adjust it to NIFTy 4 or to stick to NIFTy 3?
I can help with the migration if necessary.
In the current state the package is not installable, since setup.py tries to clone the "master" branch of NIFTy, which does not exist.
In the current state the package is not installable, since setup.py tries to clone the "master" branch of NIFTy, which does not exist.
Assignee: Theo Steininger

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/235
Improved LOS response? (Martin Reinecke, 2018-04-30)

@kjako mentioned some time ago that the current LOSResponse exhibits some artifacts. I suspect that they are caused by our method of computing the "influence" of lines of sight on individual data points.
Currently, the influence of a line of sight on a data point is simply proportional to the length of its intersection with the cell around the data point. It doesn't matter whether the LOS cuts the cell near the edges or goes straight through the center. This also implies that tiny shifts of a line of sight may lead to dramatic changes of its influences on data points, which is probably not desirable.
I propose to use an approach that is more similar to SPH methods:
- each data point has a sphere of influence with a given R_max (similar to the cell distances)
- it interacts with all lines of sight that intersect its sphere of influence
- the interaction strength scales with the integral along the line of sight over a function f(r) around the data point, which falls to zero at R_max.
My expectation is that this will reduce grid-related artifacts.
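A toy numerical sketch of the proposed weighting (the kernel choice and all names are illustrative, not an implementation plan): the weight of a line of sight on a data point becomes the integral of a compact kernel f(r) along the line, so it varies smoothly with the impact parameter instead of jumping with the intersection length.

```python
import numpy as np

def kernel(r, r_max):
    # simple compact kernel, falling to zero at r_max (an assumed choice)
    return np.clip(1.0 - r / r_max, 0.0, None)

def los_weight(point, start, direction, r_max, n=2000):
    """Integrate kernel(|point - line(t)|) along a unit-speed line of sight."""
    t = np.linspace(0.0, 10.0, n)
    pos = start[None, :] + t[:, None] * direction[None, :]
    r = np.linalg.norm(pos - point[None, :], axis=1)
    return np.sum(kernel(r, r_max)) * (t[1] - t[0])

p = np.array([5.0, 0.0])                  # data point
d = np.array([1.0, 0.0])                  # LOS direction
w_center = los_weight(p, np.array([0.0, 0.0]), d, r_max=1.0)  # straight through
w_edge = los_weight(p, np.array([0.0, 0.9]), d, r_max=1.0)    # grazing pass
assert w_center > w_edge > 0.0            # influence decays smoothly with distance
```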
@ensslint, @kjako: does this sound worthwhile to try?
@ensslint, @kjako: does this sound worthwhile to try?

https://gitlab.mpcdf.mpg.de/ift/IMAGINE/-/issues/2
Add `Galaxy` class which carries constituents (e-, dust, B) (Theo Steininger, 2017-10-04)
Assignee: Theo Steininger

https://gitlab.mpcdf.mpg.de/ift/D2O/-/issues/23
"float128"-error on Windows 10 64bit (Christoph Lienhard, 2017-06-22)

I get an error when using D2O-related functions in NIFTy on my Windows machine:
```
site-packages\d2o-1.1.0-py2.7.egg\d2o\dtype_converter.py", line 54, in __init__
    [np.dtype('float128'), MPI.LONG_DOUBLE],
TypeError: data type "float128" not understood
```
As far as I understand, numpy.float128 does not exist on every system (for some reason).
Edit:
The same problem occurs with "complex256":
```
\site-packages\d2o-1.1.0-py2.7.egg\d2o\distributed_data_object.py", line 1898, in _to_hdf5
    if self.dtype is np.dtype(np.complex256):
AttributeError: 'module' object has no attribute 'complex256'
```
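A defensive pattern that would sidestep this class of failure (a sketch, not the actual d2o fix): build the dtype table only from types the running numpy actually provides, instead of referencing `np.float128`/`np.complex256` unconditionally.

```python
import numpy as np

# Register extended-precision dtypes only where the platform defines them;
# on Windows, numpy typically lacks float128 and complex256 entirely.
supported = []
for name in ('float64', 'complex128', 'float128', 'complex256'):
    if hasattr(np, name):
        supported.append(np.dtype(name))

# float64/complex128 exist everywhere; the extended types are optional extras
assert np.dtype('float64') in supported
assert np.dtype('complex128') in supported
```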
https://gitlab.mpcdf.mpg.de/ift/D2O/-/issues/22
`imag` and `real` break memory view (Theo Steininger, 2017-07-06)

```python
from d2o import *
a = np.array([1,2,3,4], dtype=np.complex)
obj = distributed_data_object(a)
obj.imag[0] = 1234
obj
```
Assignee: Theo Steininger

https://gitlab.mpcdf.mpg.de/ift/IMAGINE/-/issues/1
Extend carrier-mapper to partial -np.inf <-> np.inf (Theo Steininger, 2018-05-17)

If a and/or b are (-)np.inf, just shift x by the value of x if smaller/greater than m.

https://gitlab.mpcdf.mpg.de/ift/D2O/-/issues/21
Create dedicated object for 'distribution_strategy' (Theo Steininger, 2017-07-06)

Only global-type distribution strategies are comparable by their name directly. In order to compare local-type distribution strategies as well, implement an object which represents the distribution strategy. For 'freeform' this includes the individual slice lengths.
Assignee: Theo Steininger

https://gitlab.mpcdf.mpg.de/ift/D2O/-/issues/20
Initialization: prefer global_shape/local_shape over shape of data (Theo Steininger, 2017-07-06)

Right now, during initialization, if some init data is provided, the shape of this data is preferred over an explicitly given shape.
-> Change the behavior in Distributor-Factory.
-> Make a disperse_data instead of a distribute_data at the end of init.
Assignee: Theo Steininger