ift issues
https://gitlab.mpcdf.mpg.de/groups/ift/-/issues

[GU] Simplify point-wise Field operations
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/293
Martin Reinecke, 2020-04-14

Currently, point-wise operations on NIFTy objects are not really implemented in a beautiful way: they are injected via some nasty Python tricks into `Field`, `MultiField`, `Operator` and the `sugar` module, and a slightly more complicated variant is added to `Linearization`.
My plan is to have a single method in `Field`, `MultiField`, `Linearization` and `Operator`, which is called `ptw` (or `pointwise`), which takes a string describing the requested pointwise operation (like "sin", "exp", "one_over" etc.). Implementation of the actual operations would be fairly trivial via a dictionary mapping these names to their `numpy` equivalents, and also describing their derivatives via `numpy` or custom functions where necessary.
If we like, we can leave the global pointwise functions injected into `sugar.py` unchanged for aesthetic reasons :)
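As a rough illustration of the proposal, a dispatch table mapping operation names to their `numpy` implementations and derivatives could look like the following. The names and layout here are my own sketch, not NIFTy code:

```python
import numpy as np

# Hypothetical sketch of the proposed dispatch table: each entry maps an
# operation name to a (function, derivative) pair of numpy-level callables.
_ptw_table = {
    "sin": (np.sin, np.cos),
    "exp": (np.exp, np.exp),
    "one_over": (lambda x: 1.0 / x, lambda x: -1.0 / x**2),
}

def ptw(values, op):
    """Apply the named pointwise operation to raw values."""
    return _ptw_table[op][0](values)

def ptw_with_deriv(values, op):
    """Return value and pointwise derivative, as a Linearization would need."""
    f, df = _ptw_table[op]
    return f(values), df(values)

print(ptw(np.array([0.0]), "sin"))                  # [0.]
print(ptw_with_deriv(np.array([2.0]), "one_over"))  # (array([0.5]), array([-0.25]))
```

A single `ptw(name)` method on `Field`, `MultiField`, `Operator` and `Linearization` could then look the operation up in one shared table instead of injecting one method per function.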
Opinions?
Many Linearization methods explicitly expect Fields
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/292
Martin Reinecke, 2020-04-03

While working on Field/MultiField unification, I noticed that most `Linearization` methods do not accept `MultiField`s; a clear example is `sinc`, which explicitly creates a `Field` for its output, but many other methods break in more obscure ways.
If we want unification, we need all `Linearization` methods to work on `MultiField`s, which is quite a bit of tedious work. Any volunteers?
Problematic MultiField methods
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/291
Martin Reinecke, 2020-04-02

Currently, `MultiField` has methods like `s_sum`, `clip`, and many transcendental functions. I'm wondering whether they make any sense: what's the point of computing the sum over all values in several fields, or computing the sine of all field entries?
We currently use `MultiField`'s `norm` property in minimization, but given that the components of the `MultiField` may have vastly different scales, is this actually a clever thing to do? This basically means that we stop minimizing once the field component with the largest values has converged ... not necessarily what we want.
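A toy illustration of the scale problem (the numbers are made up, not NIFTy code): a joint norm over components with vastly different scales is completely dominated by the largest component, so progress on the small one is invisible to the convergence criterion.

```python
import numpy as np

# Two components of very different typical magnitude, as might occur in a
# MultiField (hypothetical values for illustration only).
big = np.full(10, 1e6)     # component with large typical values
small = np.full(10, 1e-6)  # component with tiny typical values

# A single norm over both components is numerically indistinguishable from
# the norm of the large component alone.
total_norm = np.sqrt(np.sum(big**2) + np.sum(small**2))
print(np.isclose(total_norm, np.sqrt(np.sum(big**2))))  # True
```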
@parras, @pfrank, @reimar, @kjako
Data types for the Grand Unification
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/290
Martin Reinecke, 2020-04-08

```
Space:
structured or unstructured set of points (current NIFTy's "Domain")
SpaceTuple:
external product of zero or more Spaces (current NIFTy's "DomainTuple")
SpaceTuple = (Space, Space, ...)
SpaceTupleDict:
dictionary (with string keys) of one or more SpaceTuples
SpaceTupleDict = {name1: SpaceTuple1, name2: SpaceTuple2, ...}
(current NIFTy's "MultiDomain")
Domain:
This is an abstract concept. Can currently be represented by SpaceTuple or SpaceTupleDict
Special domains:
ScalarDomain: empty SpaceTuple
Operator:
- Operators take an Operand defined on an input domain, transform it in some way
and return a new Operand defined on a (possibly different) target domain.
- Operators can be concatenated, as long as the domains at the interface are
identical. The result is another Operator.
Operand:
- Operand objects represent fields and potentially their Jacobians and metrics.
- an Operand object can be asked for its "value" and (if configured accordingly)
its Jacobian. If the target domain of an Operand object is scalar, a metric
may also be available.
- Applying an Operator to an Operand object will always return another Operand object.
class Operator(object):
    @property
    def domain(self):
        # return input domain
    @property
    def target(self):
        # return output domain
    def __call__(self, other):
        # if isinstance(other, Operand) return an Operand object
        # else return an Operator object
    def __matmul__(self, other):
        return self(other)

class LinearOperator(Operator):
    # more or less analogous to the current LinearOperator

class Operand(object):
    # this unifies current NIFTy's Field, MultiField, and Linearization classes
    @property
    def domain(self):
        # if no Jacobian is present, return None, else the Jacobian's domain
    @property
    def target(self):
        # return the domain on which the value of the Operand is defined. This is
        # also the Jacobian's target (if a Jacobian is defined)
    @property
    def val(self):
        # return a low-level data structure holding the actual values (currently a
        # numpy.ndarray or a dictionary of numpy.ndarrays). Read-only.
    def val_rw(self):
        # return a writeable copy of `val`
    def fld(self):
        # return an Operand that only contains the value content of this object. Its
        # Jacobian and potential higher derivatives will be `None`.
    @property
    def jac(self):
        # return a Jacobian LinearOperator if possible, else None
    @property
    def want_metric(self):
        # return True or False
    @property
    def metric(self):
        # if self.jac is None, raise an exception
        # if self.target is not ScalarDomain, raise an exception
        # if not self.want_metric, raise an exception
        # if the metric cannot be computed, raise an exception
        # return the metric
```
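The `__call__` dispatch rule sketched above can be made concrete in a few lines. This is a toy model under my own naming, not the proposed NIFTy implementation: applying an Operator to an Operand yields an Operand, applying it to another Operator yields a composed Operator.

```python
# Minimal runnable sketch of the Operator/Operand dispatch (hypothetical names).
class Operand:
    def __init__(self, val):
        self.val = val

class Operator:
    def __call__(self, other):
        if isinstance(other, Operand):
            return self.apply(other)   # Operator applied to Operand -> Operand
        return _OpChain(self, other)   # Operator applied to Operator -> Operator

    def __matmul__(self, other):
        return self(other)

class _OpChain(Operator):
    def __init__(self, outer, inner):
        self._outer, self._inner = outer, inner

    def apply(self, x):
        return self._outer(self._inner(x))

class Scale(Operator):
    """Example concrete operator: multiply the value by a constant factor."""
    def __init__(self, factor):
        self._factor = factor

    def apply(self, x):
        return Operand(self._factor * x.val)

op = Scale(2.0) @ Scale(3.0)   # Operator @ Operator -> composed Operator
print(op(Operand(1.0)).val)    # 6.0
```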
Overflows after updating NIFTy
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/289
Philipp Arras, 2020-03-27

When updating to the new interpolation scheme (https://gitlab.mpcdf.mpg.de/ift/nifty/commit/37a5691e3013ca36303ee4c3e78c74485f516793), I get overflows in the minimization where I did not get any before. I am confused, since both gradient tests work better than before and the accuracy of the approximation is better. Maybe the operator works better now, so that an inconsistency in my model is actually triggered. I have the feeling that it is not the operator's fault... ;)
@gedenhof can you share your observations about the overflows? If it is the same commit as for me, then I will go ahead and try to assemble a minimal demonstration case but I think this might be a lot of work so maybe we can find the problem without that.
@gedenhof, @lerou

Make convergence tests less fragile
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/288
Martin Reinecke, 2020-03-22

The switch to `numpy`'s new RNG interface has shown that some of our convergence and consistency tests are not very robust: in principle these tests should succeed for any random seed we use during the problem setup, but this is apparently not the case. We should have a closer look at the problematic tests and fix them accordingly.

Switch to new random interface of numpy
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/287
Philipp Arras, 2020-03-24

`np.random.seed` is legacy. The numpy docs say:
> The best practice is to **not** reseed a BitGenerator, rather to recreate a new one. This method is here for legacy reasons.
> This example demonstrates best practice.
```python
from numpy.random import MT19937
from numpy.random import RandomState, SeedSequence
rs = RandomState(MT19937(SeedSequence(123456789)))
# Later, you want to restart the stream
rs = RandomState(MT19937(SeedSequence(987654321)))
```
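Beyond `RandomState`, NumPy's current recommendation (since 1.17) is the `Generator` interface obtained via `default_rng`; a sketch of the same restart pattern:

```python
import numpy as np

# Create a fresh Generator from a seed instead of reseeding global state.
rng = np.random.default_rng(123456789)
x = rng.standard_normal(5)

# "Restarting the stream" means constructing a new Generator from the same seed.
rng2 = np.random.default_rng(123456789)
assert np.array_equal(x, rng2.standard_normal(5))
```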
We might want to migrate eventually.

Assignee: Martin Reinecke

NIFTy grand unification: unify MultiFields and Fields
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/286
Martin Reinecke, 2020-04-07

- all new fields have the internal structure of a MultiField
- a classic "standard" field is represented by a new field with a single key
that is the empty string
- Many of our operators work on part of a DomainTuple (e.g. FFTOperator).
Typically this is specified by passing the domain and additionally a "spaces"
argument, which is None, an int, or a tuple of ints.
Since in the future every domain is a "multi-domain", this is no longer
sufficient: the partial domain must now contain an additional string defining
the name of the required field component. This requires an update
(and renaming) of "parse_spaces", "infer_space" etc.
Maybe it's good to introduce a new "PartialDomain" class which contains
* a string containing the desired field component, and
* an integer tuple containing the desired subspaces of that component
- "MultiField" will be renamed to "Field"; "Field" will probably be renamed
to some internal helper class or completely implemented within the new "Field".
- "MultiDomain" will be renamed to ???; "DomainTuple" will probably become
"_DomainTuple", i.e. it should not be directly accessed by external users.
- "makeField" and "makeDomain" become static "make" members of "Field" and
"Domain"- all new fields have the internal structure of a MultiField
Assignee: Martin Reinecke

Performance of Operator Jacobians
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/285
Reimar H Leike, 2020-03-10

Many of our operators call the Jacobian of Linearizations more than once, leading to exponential performance losses in chains of operators, as uncovered by the test_for_performance_issues branch.
TODOS:
- [x] extend the tests to not only cover energy operators
- [x] fix all uncovered performance issues

Inconsistency of .sum() for Fields and Linearizations
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/284
Philipp Arras, 2020-03-11

Is it intended that `Linearization.sum()` returns a scalar Field but `Field.sum()` returns an actual scalar, i.e. in most situations a `float`? `MultiField.sum()` inherits this behaviour from `Field`.
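A generic sketch of the two conventions at issue; the `ScalarField` class here is a stand-in container of my own, not NIFTy's API:

```python
import numpy as np

class ScalarField:
    """Stand-in for a field on a scalar domain."""
    def __init__(self, val):
        self.val = np.asarray(val)

def sum_as_scalar(arr):
    # Field.sum()-like behaviour: a plain Python float
    return float(np.sum(arr))

def sum_as_scalar_field(arr):
    # Linearization.sum()-like behaviour: a scalar-domain container
    return ScalarField(np.sum(arr))

a = np.array([1.0, 2.0, 3.0])
print(type(sum_as_scalar(a)).__name__)        # float
print(type(sum_as_scalar_field(a)).__name__)  # ScalarField
```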
Linearizations: https://gitlab.mpcdf.mpg.de/ift/nifty/blob/NIFTy_6/nifty6/linearization.py#L237
Fields: https://gitlab.mpcdf.mpg.de/ift/nifty/blob/NIFTy_6/nifty6/field.py#L382
MultiFields: https://gitlab.mpcdf.mpg.de/ift/nifty/blob/NIFTy_6/nifty6/multi_field.py#L169

Reduce licensing boilerplate
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/283
Gordian Edenhofer, 2020-05-13

If we really feel the need to put licensing boilerplate in every source file, let's at least make it concise, e.g. by using https://spdx.org/ids.

FieldZeroPadder
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/282
Vincent Eberle, 2020-02-19

FieldZeroPadder.adjoint doesn't work in 2 dimensions.

Assignee: Philipp Frank

[NIFTy6] Boost performance of Correlated Fields
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/281
Philipp Arras, 2019-12-06

Do the PowerDistributor only on one slice and copy it with ContractionOperator.adjoint afterwards. The power distributor currently appears to be the bottleneck of correlated field evaluations.

[NIFTy6] Implement partial insert operator for multi fields
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/280
Philipp Arras, 2019-12-06

Consistent ordering of domain and non-domain arguments for standard operators
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/279
Gordian Edenhofer, 2019-12-06

Currently, the order in which arguments shall be provided differs between operators, e.g.
`ift.from_global_data` and `ift.full` both take the domain as the first argument, while `ift.ScalingOperator` requires the factor with which to scale a field as the first argument and the domain as the second. Moving to NIFTy 6 might be the perfect opportunity to fix this, either by consistently requiring the domain to be e.g. the first argument, or by making `ift.ScalingOperator` more flexible by swapping arguments in a suitable manner during initialization.

Getting rid of standard parallelization?
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/278
Martin Reinecke, 2019-12-09

The default way of parallelizing large tasks with MPI is currently to distribute fields over MPI tasks along their first axis. While it appears to work (our tests run fine with MPI), it has not been used during the last few years, and other parallelization approaches seem more promising.
If there is general agreement, I propose to remove this parallelization from the code, which would make NIFTy much smaller and easier to use and maintain.The default way of parallelizing large tasks with MPI is currently to distribute fields over MPI tasks along their first axis. While it appears to work (our tests run fine with MPI), it has not been used during the last few years, and other parallelization approaches seem more promising.
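The distribution scheme under discussion can be sketched as block slicing along the first axis. This is a plain `numpy` illustration of my own, without actual MPI:

```python
import numpy as np

def local_slice(shape, ntask, rank):
    """First-axis slice owned by `rank` out of `ntask` tasks (hypothetical helper)."""
    n = shape[0]
    return slice((n * rank) // ntask, (n * (rank + 1)) // ntask)

# A small global field, split into one contiguous block per task.
glob = np.arange(10.0).reshape(5, 2)
parts = [glob[local_slice(glob.shape, 3, r)] for r in range(3)]
print(np.array_equal(np.concatenate(parts), glob))  # True: the blocks tile the axis
```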
Changelog for NIFTy_6
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/277
Lukas Platz, 2019-12-06

Dear @parras, @pfrank, @phaim and @mtr,
should we introduce a changelog for the NIFTy_6 branch?
Currently, there are 107 files changed w.r.t. NIFTy 5, but no indication which of these changes affect the user interface.
If there was a list of (compatibility breaking) changes to the UI, NIFTy 5 users who want to update their code (now or in the distant future) could do so without testing by trial and error which changes they need to make.
Currently, your memory of the changes is probably still fresh enough so you could create such a changelog without excessive trouble, and it would be a very valuable feature to everyone else (and spare you explaining it over and over again).
What do you think? Would you be willing to write up a short summary of your UI changes?
And how about a policy "No merges into NIFTy_6 without a UI changelog entry (if applicable)" for the future?
Thank you all for building the multi-frequency capabilities and for considering this,
Lukas

testing differentiability for complex operators
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/276
Reimar H Leike, 2019-11-19

The routine extra.check_jacobian_consistency only checks derivatives in the real direction. There are two other interesting cases: differentiability in the real and imaginary directions, and complex differentiability (which is the former with the additional requirement that df/d(Imag) = i*df/d(Re)).

Assignee: Reimar H Leike

Segfault when epsilon >= 1e-1 (roughly)
https://gitlab.mpcdf.mpg.de/ift/nifty_gridder/-/issues/3
Philipp Arras, 2019-10-02

I have edited the tests in order to trigger the bug. See branch `bugreport` and https://gitlab.mpcdf.mpg.de/ift/nifty_gridder/-/jobs/938200

MPI broken master branch
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/275
Reimar H Leike, 2019-10-04

The recent merge of operator spectra to master broke the functionality of the MPI KL, since NewtonCG now requires the metric of the KL as an operator.

Assignee: Reimar H Leike