ift issues
Feed: https://gitlab.mpcdf.mpg.de/groups/ift/-/issues (updated 2017-08-15T08:04:24Z)

Issue #95: Tests: Space classes
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/95
Updated 2017-08-15T08:04:24Z by Theo Steininger

Check if they are complete.

Issue #152: Clarification request for ComposedOperator
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/152
Updated 2017-08-15T08:04:01Z by Martin Reinecke

I think that there are two different ways to interpret the meaning of
"ComposedOperator", and I'm not sure if both of them are supported by Nifty at
the moment:
Scenario 1:
Say you have an input field F living on an RGSpace. If you want to
apply an FFT, then some scaling, and then an inverse FFT again to this field,
is it possible to combine these three operators into a single ComposedOperator?
It doesn't seem that this works at the moment, because the "domain" property of
such a ComposedOperator returns a triple of spaces, and the input field only
lives on a single space.
Scenario 2:
You have a field living on a product of spaces, and you want to run several
operators on that field, each operating on an individual space of the field.
This seems to be the anticipated scenario for ComposedOperator at the moment.
Is scenario 1 also needed in the real world? I have a feeling that it is; if
true, we should provide another "composed operator" for this use case and make
sure (by means of good naming, documentation and tests in the operator
constructors) that they are not confused.
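Scenario 1, a chain whose composite operator has a single-space domain and target, could be sketched roughly as follows. All names here (`LinearOp`, `ChainOperator`, the string "spaces") are hypothetical stand-ins for illustration, not NIFTy's actual API:

```python
from functools import reduce

# Sketch of "scenario 1" composition: a chain where each operator's target
# space must equal the next operator's domain space, so the composite has a
# single-space domain and target (unlike a product-space composition, whose
# domain is a tuple of spaces).

class LinearOp:
    def __init__(self, domain, target, func):
        self.domain, self.target, self.func = domain, target, func

    def times(self, x):
        return self.func(x)

class ChainOperator(LinearOp):
    """Applies ops left to right; `domain` is the first op's single space."""
    def __init__(self, *ops):
        for a, b in zip(ops, ops[1:]):
            if a.target != b.domain:   # consistency check on construction
                raise ValueError("chain mismatch: %s != %s" % (a.target, b.domain))
        super().__init__(ops[0].domain, ops[-1].target,
                         lambda x: reduce(lambda acc, op: op.times(acc), ops, x))

# FFT -> scaling -> inverse FFT, returning to the original space:
fft   = LinearOp("rgspace", "harmonic", lambda x: x)   # stand-in transforms
scale = LinearOp("harmonic", "harmonic", lambda x: 2 * x)
ifft  = LinearOp("harmonic", "rgspace", lambda x: x)
chain = ChainOperator(fft, scale, ifft)
print(chain.domain, chain.target, chain.times(3.0))  # rgspace rgspace 6.0
```

The point of the sketch is the constructor check: chaining is only well-defined when adjacent domains and targets match, which is exactly the property a product-space composition does not require.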
(I'm mentioning this because until yesterday I was convinced that
ComposedOperator also worked for scenario 1...)

Issue #162: Fix class-docstrings for new operators
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/162
Updated 2017-08-15T08:03:20Z by Theo Steininger (assignee: Jakob Knollmueller, due 2017-07-12)

The docstrings for the following operators must be refactored (80-chars limit, complete missing entries, etc...)
- LaplaceOperator
- SmoothnessOperator

Issue #161: Rework methods taking an "axes" keyword
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/161
Updated 2017-08-15T08:03:13Z by Martin Reinecke

At the moment, there are a lot of routines which take an "axes" keyword, and in many of these, it could be avoided altogether or reduced to an integer.
Example 1: the "weight" method of DomainObject and derived classes
Here, `axes` is mostly used to determine which particular axes in a `Field` belong to the space. In principle, all of this could be avoided if this method simply returned an array with one weight per pixel (or a scalar if all weights are equal), leaving all the juggling with array indices to Field.weight().
As things are, there are various alternative implementations of the re-shaping task (look for example at the `weight` methods of `PowerSpace` and `GLSpace`).
My suggestion is to simplify `DomainObject`'s `weight` method to:
```
def volfactor(self):
    """Returns an array containing the object's volume factors.

    The array must be broadcastable to the space's dimensions
    (useful if the volume factors are uniform in a direction, e.g. for GLSpace).
    Alternatively, if all volume factors are equal, returns a scalar.
    """
```
... and leave all the operations involving powers, reshaping etc. to the calling `Field.weight` method.
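In plain NumPy terms, the proposed division of labour might look like the following standalone sketch. The helper names (`rg_volfactor`, `gl_volfactor`, `field_weight`) and the concrete shapes are illustrative assumptions, not NIFTy's actual code:

```python
import numpy as np

# Sketch: each space returns volume factors broadcastable to its own shape
# (or a scalar if all weights are equal); the Field-level weight() does the
# axis juggling once, centrally.

def rg_volfactor():
    return 0.25                          # uniform pixel volume -> scalar

def gl_volfactor():
    # GL-like space: weights vary only along the ring axis; shape (3, 1)
    # broadcasts over the longitude axis without storing the full grid.
    return np.array([[0.2], [0.6], [0.2]])

def field_weight(val, vol, first_axis, ndim):
    """Multiply `val` by the volume factors of a space that occupies
    `ndim` consecutive axes of `val` starting at `first_axis`."""
    vol = np.asarray(vol)
    if vol.ndim == 0:
        return val * vol                 # scalar: no reshaping needed
    # pad with length-1 axes so numpy broadcasting lines everything up
    shape = (1,) * first_axis + vol.shape + (1,) * (val.ndim - first_axis - ndim)
    return val * vol.reshape(shape)

val = np.ones((2, 3, 4))                 # axes 1..2 belong to the GL-like space
out = field_weight(val, gl_volfactor(), first_axis=1, ndim=2)
print(out.shape)        # (2, 3, 4)
print(out[0, 1, 0])     # 0.6
```

The reshaping logic then lives in exactly one place, instead of being reimplemented per space class.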
Example 2: `Space.hermitianize_decomposition()`:
Here the `axes` variable always contains a tuple with as many entries as the space has dimensions, and the entries take the form `i`, `i+1`, `i+2`, ... I suggest passing a scalar integer named `first_axis` instead, which should be less error-prone.
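Since the tuple is always contiguous, translating between the two conventions is a one-liner; a sketch (the helper name is hypothetical):

```python
# The `axes` tuple always has the contiguous form (i, i+1, ..., i+ndim-1),
# so it can be reconstructed on demand from a single `first_axis` integer:
def axes_from_first(first_axis, ndim):
    return tuple(range(first_axis, first_axis + ndim))

print(axes_from_first(2, 3))  # (2, 3, 4)
```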
If there is agreement to go into this direction, there are more cases, but we can discuss them later :)

Issue #174: More zerocenter weirdness...
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/174
Updated 2017-08-15T00:50:33Z by Martin Reinecke

```
from nifty import *
s=RGSpace(3,zerocenter=True)
f=Field(s,val=[0.,1.,2.])
op=FFTOperator(s)
print "op domain:", op.domain[0]
print "op target:", op.target[0]
print "op times real field", op.times(f).val
```
This prints
```
op domain: RGSpace(shape=(3,), zerocenter=(True,), distances=(0.33333333333333331,), harmonic=False)
op target: RGSpace(shape=(3,), zerocenter=(True,), distances=(1.0,), harmonic=True)
op times real field [-0.33333333+0.j -0.16666667+0.8660254j 0.16666667+0.8660254j]
```
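For comparison, here is what plain NumPy does with the same input (a standalone sketch, not NIFTy's transform, so any zerocenter handling is absent): for a real input the real-valued DC bin sits at index 0 of the unshifted spectrum and only moves to the middle after an `fftshift`, which may be related to the ordering observed above.

```python
import numpy as np

# Plain-numpy analogue of the transform above (no zerocenter handling).
f = np.array([0., 1., 2.])
spec = np.fft.fft(f)
print(spec)                     # DC bin (real for real input) at index 0
shifted = np.fft.fftshift(spec)
print(shifted)                  # after shifting, the middle entry is real
print(shifted[1].imag == 0.0)   # True
```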
What I don't understand: if the target domain of the operator is zero-centered, why is the first value of the result real-only, and not the middle one?

Issue #172: Overflow using lognormal
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/172
Updated 2017-08-14T22:05:30Z by Matevz Sraml

In my project I constantly use the lognormal model, and since the normalization of the gradient in SteepestDescent was removed I get an overflow. I fix that by normalizing my gradient.
The overflow can be reproduced with this:
[test.py](/uploads/5f5ce380ae65b699b5e350fd54ba8b18/test.py)

Issue #176: mpi crashes with power_analyze
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/176
Updated 2017-08-14T22:04:56Z by Jakob Knollmueller

Calling power_analyze on an fftw-distributed field crashes the mpirun. I attached a minimal example:
[minimal_example.py](/uploads/3253d1e327fb18e6ec1c6326779cea6f/minimal_example.py)
I run it with:
mpirun -n 2 python minimal_example.py
and I get:
```
[MainThread][ERROR ] root Uncaught exception
Traceback (most recent call last):
File "minimal_example.py", line 22, in <module>
pp=sh.power_analyze()
File "/usr/local/lib/python2.7/dist-packages/ift_nifty-3.0.4-py2.7.egg/nifty/field.py", line 377, in power_analyze
for part in parts]
File "/usr/local/lib/python2.7/dist-packages/ift_nifty-3.0.4-py2.7.egg/nifty/field.py", line 413, in _single_power_analyze
axes=work_field.domain_axes[space_index])
File "/usr/local/lib/python2.7/dist-packages/ift_nifty-3.0.4-py2.7.egg/nifty/field.py", line 439, in _calculate_power_spectrum
axes=axes)
File "/usr/local/lib/python2.7/dist-packages/ift_nifty-3.0.4-py2.7.egg/nifty/field.py", line 469, in _shape_up_pindex
semiscaled_local_data = local_data.reshape(semiscaled_shape)
ValueError: cannot reshape array of size 131072 into shape (64,64,64)
-------------------------------------------------------
Primary job terminated normally, but 1 process returned
a non-zero exit code.. Per user-direction, the job has been aborted.
-------------------------------------------------------
--------------------------------------------------------------------------
mpirun detected that one or more processes exited with non-zero status, thus causing
the job to be terminated. The first process to do so was:
Process name: [[12289,1],0]
Exit code: 1
--------------------------------------------------------------------------
```

Issue #136: Multiprocessing to process multiple probes for diagonal probing in parallel
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/136
Updated 2017-08-10T07:12:09Z by Daniel Pumpe

Hi,
I've tried to upgrade the Prober class, making it capable of processing multiple probes at the same time on different cpus. Due to various mysterious pickling errors I failed to do so.
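For reference, the basic `multiprocessing` pattern requires everything handed to the pool to be picklable, which is where bound methods, lambdas and closures typically fail. A generic sketch (this is not the actual Prober code; `process_probe` is a made-up stand-in):

```python
import multiprocessing as mp

# Module-level functions are picklable; lambdas, closures and many bound
# methods are not, which is a common source of "mysterious" pickling errors.
# The __main__ guard is required on platforms that spawn fresh interpreters.
def process_probe(seed):
    return seed * seed   # stand-in for applying the operator to one probe

if __name__ == "__main__":
    with mp.Pool(processes=4) as pool:
        results = pool.map(process_probe, range(8))
    print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```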
Does anybody have deeper experience in parallelising a simple for-loop? I think this would speed up our codes by multiple factors, as most of them run on prelude.

Issue #105: Wiener Filter with physical units
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/105
Updated 2017-08-02T21:21:10Z by Daniel Pumpe

Issue #165: Critical power energy doesn't work for product spaces
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/165
Updated 2017-08-02T21:20:47Z by Theo Steininger

In principle it is no problem to port the existing code to product-space functionality. The biggest thing is that we would then need an n-dimensional SmoothnessOperator. Any volunteers?

Issue #139: Broken demo?
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/139
Updated 2017-07-29T11:50:49Z by Philipp Arras

When I try to run *demos/wiener_filter_harmonic.py*, the program does not terminate after a sensible amount of time.
I get the following warning:
>>>
[MainThread][WARNING ] Field The distribution_stragey of pindex does not fit the slice_local distribution strategy of the synthesized field.
/home/philipp/.local/lib/python2.7/site-packages/d2o-1.1.0-py2.7.egg/d2o/distributor_factory.py:339: ComplexWarning:
>>>
I am looking forward to joining your group and working on NIFTy :)

Issue #166: MPI-enabled tests
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/166
Updated 2017-07-29T11:50:06Z by Martin Reinecke

Mostly for fun, I enabled MPI nosetests in our continuous integration, and it seems that many tests actually continue to work!
But unfortunately I think I managed to freeze the test VM, so the pipeline is stuck now :(

Issue #146: Reuse curvature and gradient for gradient and energy in Energy classes
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/146
Updated 2017-07-29T11:34:37Z by Theo Steininger

Especially in `nifty/library/energy_library/critical_power_energy.py`
the gradient information is not reused for the computation of the energy itself.
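A generic sketch of how an expensive intermediate could be computed once and shared between gradient and energy (an illustrative toy, not NIFTy's code; the actual `memo` decorator lives in nifty.energies):

```python
import functools

class QuadraticEnergy:
    """Toy energy E(x) = x^2 whose value reuses the cached gradient 2x."""
    def __init__(self, x):
        self._x = x

    @functools.cached_property
    def gradient(self):
        # expensive computation: runs at most once per instance
        return 2.0 * self._x

    @property
    def value(self):
        # reuses the cached gradient instead of recomputing it
        return 0.5 * self._x * self.gradient

e = QuadraticEnergy(3.0)
print(e.value, e.gradient)  # 9.0 6.0
```

Because Energy objects are immutable per position, per-instance caching like this is safe: a new position means a new instance and a fresh cache.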
Additionally, use memoization/caching. For this the memo decorator from nifty.energies can be used.
(Assignee: Jakob Knollmueller)

Issue #94: Update Wiener filter demo
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/94
Updated 2017-07-29T11:34:09Z by Theo Steininger

Include all the convenience features, like dynamic parametrization, logging, versioning, etc...

Issue #83: Create Wiener Filter demo which utilizes full keepers functionality.
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/83
Updated 2017-07-29T11:33:55Z by Theo Steininger

* Explicitly configure (MPI-)logging.
* Use keepers.versioning for restartable reconstruction.
* Use dynamic parametrization for setting `iteration_limit`, etc...

Issue #169: destroying distribution_strategy
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/169
Updated 2017-07-26T08:27:00Z by Jakob Knollmueller

Hi,
I am currently trying to get my code running using the full capabilities of d2o, but some things in nifty seem to destroy the distribution_strategy, namely the InvertibleOperatorMixin and power_analyze (I tracked it down to the pindex.bincount, see the example)
[minimal_example.py](/uploads/95bab8b0159517babf98330ce2a9ae5c/minimal_example.py)
Another problem is in the InvertibleOperatorMixin:
If no initial guess x0 is specified it is set to
x0 = Field(self.target, val=0., dtype=x.dtype)
which sets the distribution_strategy to the default of "not", so the result also has this distribution. I would suggest setting instead:
x0 = Field(self.target, val=0., dtype=x.dtype, distribution_strategy=x.distribution_strategy)
Jakob

Issue #137: Limit critical plotting functions to MPI.rank==0.
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/137
Updated 2017-07-22T23:30:09Z by Theo Steininger

Issue #170: CriticalPowerEnergy has incomplete at() method
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/170
Updated 2017-07-19T11:57:02Z by Martin Reinecke

The `at()` method in `CriticalPowerEnergy` does not pass the `logarithmic` property to the newly built object, which results in a (potentially) different T inside that object.
What's the best solution to fix this? Store `logarithmic` explicitly to have it available in the `at()` method?
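A minimal sketch of the "store it explicitly" option (a simplified, made-up stand-in class, not the actual NIFTy code):

```python
# Minimal sketch: every constructor argument that influences the computation
# must be forwarded by at(), otherwise the clone silently falls back to a
# default (here: logarithmic=False).

class CriticalPowerEnergyLike:
    def __init__(self, position, logarithmic=False):
        self.position = position
        self._logarithmic = logarithmic   # stored so at() can forward it

    def at(self, position):
        # forwarding `logarithmic`; omitting it would reproduce the bug
        return self.__class__(position, logarithmic=self._logarithmic)

e = CriticalPowerEnergyLike(1.0, logarithmic=True)
e2 = e.at(2.0)
print(e2.position, e2._logarithmic)  # 2.0 True
```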
@kjako, any preferences?

Issue #127: Find good convergence criterion for DescentMinimizer
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/127
Updated 2017-07-11T08:26:57Z by Theo Steininger

At the moment the `convergence number` is defined as `delta = abs(gradient).max() * (step_length/gradient_norm)`
Are there better measures? For a simple quadratic potential, this definition of `delta` causes a steepest descent to not trust the (actual true) minimum.
A good measure should be independent of scale and initial conditions.
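To make the scale question concrete, the current criterion and a few conceivable alternative measures can be compared on a toy quadratic potential (a standalone numpy sketch; the normalized step mimics a steepest-descent update):

```python
import numpy as np

# Toy quadratic potential E(x) = 0.5 * x @ A @ x with gradient A @ x,
# used to compare the current criterion with some alternative measures.
A = np.diag([4.0, 1.0])
x_old = np.array([1.0, 2.0])
step_length = 0.05
grad = A @ x_old
x_new = x_old - step_length * grad / np.linalg.norm(grad)  # normalized step

candidates = {
    "current delta":       np.abs(grad).max() * step_length / np.linalg.norm(grad),
    "abs position change": np.linalg.norm(x_new - x_old),
    "rel position change": np.linalg.norm((x_new - x_old) / (1 + x_old)),
    "step_length":         step_length,
    "gradient norm":       np.linalg.norm(grad),
}
for name, value in candidates.items():
    print(f"{name}: {value:.4g}")

# Rescaling A (the potential's curvature) changes `gradient norm` but not
# the position-change measures, which only see the step actually taken.
```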
Some possible candidates are:
* `(new_energy.position - energy.position).norm()`
* `(new_energy.position - energy.position).max()`
* `((new_energy.position - energy.position)/(1+energy.position)).norm()`
* `((new_energy.position - energy.position)/(1+energy.position)).max()`
* `step_length`
* `step_length / energy.position.norm()`
* `gradient.norm()`
(Assignee: Martin Reinecke)

Issue #164: Another Field.vdot() bug...
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/164
Updated 2017-07-11T08:18:02Z by Martin Reinecke

```
from nifty import *
import numpy as np
s=RGSpace((10,))
f1=Field.from_random("normal",domain=s,dtype=np.complex128)
f2=Field.from_random("normal",domain=s,dtype=np.complex128)
print f1.vdot(f2)
print f1.vdot(f2,spaces=0)
```
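For reference, NumPy's `vdot` conjugates its first argument; whichever convention `Field.vdot` adopts, the two call forms above should agree with each other. A plain-numpy sketch of the distinction that is presumably going wrong in one code path:

```python
import numpy as np

# np.vdot conjugates its first argument: vdot(a, b) == sum(conj(a) * b).
# A missing conjugation in one code path would explain two different
# results for the same pair of complex fields.
rng = np.random.default_rng(0)
a = rng.normal(size=10) + 1j * rng.normal(size=10)
b = rng.normal(size=10) + 1j * rng.normal(size=10)

with_conj = np.vdot(a, b)         # sum(conj(a) * b)
without_conj = np.sum(a * b)      # what a missing conjugation would give
print(np.isclose(with_conj, np.sum(np.conj(a) * b)))  # True
print(np.isclose(with_conj, without_conj))            # False for generic data
```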