ift issues: https://gitlab.mpcdf.mpg.de/groups/ift/-/issues

Issue 179: Field.from_random crashes mpi (Jakob Knollmueller, 2018-02-15)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/179

Hi,
I found another bug with mpi:
[minimal_example.py](/uploads/3863c7e25ac0661caab65a64bd964264/minimal_example.py)
the error I get is:
```
[MainThread][ERROR ] root Uncaught exception
Traceback (most recent call last):
File "minimal_example-3.py", line 16, in <module>
std=some_diagonal.val)
File "/usr/local/lib/python2.7/dist-packages/ift_nifty-3.1.0-py2.7.egg/nifty/field.py", line 251, in from_random
lambda shape: generator_function(dtype=f.dtype,
File "/usr/local/lib/python2.7/dist-packages/d2o/distributed_data_object.py", line 469, in apply_generator
self.set_local_data(generator(self.distributor.local_shape), copy=copy)
File "/usr/local/lib/python2.7/dist-packages/ift_nifty-3.1.0-py2.7.egg/nifty/field.py", line 253, in <lambda>
**random_arguments))
File "/usr/local/lib/python2.7/dist-packages/ift_nifty-3.1.0-py2.7.egg/nifty/random.py", line 50, in normal
x *= dtype.type(std)
ValueError: operands could not be broadcast together with shapes (16,64,64) (64,64,64) (16,64,64)
-------------------------------------------------------
Primary job terminated normally, but 1 process returned
a non-zero exit code.. Per user-direction, the job has been aborted.
-------------------------------------------------------
--------------------------------------------------------------------------
mpirun detected that one or more processes exited with non-zero status, thus causing
the job to be terminated. The first process to do so was:
Process name: [[29272,1],1]
Exit code: 1
--------------------------------------------------------------------------
```
I only get the error if the `std` passed to `from_random` is an array; for scalar numbers it seems to work.
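The shapes in the traceback hint at the cause: under MPI each task holds only a (16, 64, 64) slab of the (64, 64, 64) field, while the `std` array is passed with its full global shape. The same mismatch reproduces in plain NumPy (an illustration of the broadcasting failure only, not the NIFTy code path):

```python
import numpy as np

local = np.zeros((16, 64, 64))   # one MPI task's local slab of the field
std = np.ones((64, 64, 64))      # the std array, passed with its global shape

broadcast_failed = False
try:
    local *= std                 # raises the same ValueError as the traceback
except ValueError:
    broadcast_failed = True

local *= std[:16]                # restricting std to the local slab works
```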
Jakob

Issue 180: Use MPCDF's runners for continuous integration? (Martin Reinecke, 2017-11-13)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/180

It seems that MPCDF now offers shared runners (http://www.mpcdf.mpg.de/about-mpcdf/publications/bits-n-bytes?BB-View=196&BB-Doc=187).
Should we try to use these instead of Theo's machine?

Issue 181: Logarithmic smoothing on non-PowerSpaces? (Martin Reinecke, 2017-09-15)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/181

Currently, NIFTy allows constructing a `SmoothingOperator` like this:
```
import nifty as ift
op=ift.SmoothingOperator(ift.RGSpace(10),sigma=1.,log_distances=True)
```
I do not think this makes sense, as the RGSpace is periodic and there are no absolute distances.
This combination of parameters also leads to overflows because the code in FFTSmoothingOperator tries to compute log(0.).
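A constructor-time guard would make the failure explicit instead of letting the overflow happen later (a sketch with illustrative names, not NIFTy's actual code):

```python
class FFTSmoothingOperator(object):
    """Sketch of a guard at construction time; names are illustrative."""

    def __init__(self, domain, sigma, log_distances=False):
        if log_distances:
            raise ValueError("logarithmic distances are not meaningful on a "
                             "periodic space and lead to log(0.) overflows")
        self.domain = domain
        self.sigma = sigma

# Constructing with log_distances=True is now rejected up front:
try:
    FFTSmoothingOperator("dummy_domain", sigma=1., log_distances=True)
    guard_triggered = False
except ValueError:
    guard_triggered = True
```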
Therefore I propose to ensure that `log_distances==False` whenever an FFTSmoothingOperator is constructed, and to raise an exception otherwise.

Issue 182: Completely remove inverse smoothing (Martin Reinecke, 2017-09-15)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/182

The inverse_times() of both `SmoothingOperator`s is currently broken.
`DirectSmoothingOperator` considers the inverse operation of "smoothing with sigma" to be "smoothing with 1/sigma", which is simply wrong.
`FFTSmoothingOperator` does in principle perform the correct operations, but does not check for division by zero, which occurs for most sigma values and indicates that the smoothing is in fact non-invertible in finite-precision arithmetic.
Unfortunately the test `test_inverse_adjoint_times` does not recognize this, because the results it compares are both NaN and therefore the same...
Since this operation appears to be unused, I'd suggest removing it ... doing it properly may be possible (at least for FFT smoothing), but is only worth the effort if someone really needs it.

Issue 183: Harmonize ConjugateGradient with other minimizers? (Martin Reinecke, 2017-09-28)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/183

In principle, ConjugateGradient is not fundamentally different from our descent minimizers, but its interface (including the form of the callbacks) is very different.
Wouldn't it be helpful to adjust it in a way that it also takes an `Energy` object for minimization and passes its current energy to the callback in the same way the descent minimizers do? Then we could, for example, replace `ConjugateGradient` objects with, say, `VL_BFGS` ones and experiment much more flexibly with different minimizers.
Of course, `ConjugateGradient` would need a special subclass of `Energy` to work properly: it would, for example, need to invoke the linear operator within this energy directly. But that should be no fundamental problem ... after all, we already have some energy classes that provide `curvature` and some that don't.
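As a hedged sketch of what such an `Energy` subclass might provide (plain NumPy, illustrative names only, not NIFTy's actual interface): for a linear system A x = b, CG minimizes the quadratic energy E(x) = 0.5 xᵀA x − bᵀx, whose gradient A x − b and curvature A are exactly the pieces a descent-style interface asks for.

```python
import numpy as np

class QuadraticEnergy(object):
    """Illustrative energy for the system A x = b; hypothetical names."""

    def __init__(self, A, b, position):
        self._A, self._b, self.position = A, b, position

    @property
    def value(self):
        # E(x) = 0.5 x^T A x - b^T x
        return 0.5 * self.position @ (self._A @ self.position) \
            - self._b @ self.position

    @property
    def gradient(self):
        return self._A @ self.position - self._b

    @property
    def curvature(self):
        return self._A

A = np.array([[4., 1.], [1., 3.]])
b = np.array([1., 2.])
e = QuadraticEnergy(A, b, np.linalg.solve(A, b))  # energy at the minimum
```

At the minimum the gradient vanishes, which is the stopping criterion a generic minimizer would check.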
I would be very interested to hear feedback!

Issue 184: PowerSpace probing calls sugar.create_composed_fft_operator and dies (Philipp Arras, 2017-08-29)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/184

I narrowed the problem down to commit a09c6ed by @theos. The attached file works with code up to this commit and is broken afterwards.
Apparently, the prober tries to FFT something although the probing takes place in a PowerSpace, which does not admit an FFT.
[Critical+Filter.py](/uploads/7b0e6c37d18c6cdf932457da0ba682d1/Critical+Filter.py)

Issue 185: Binbounds calculation in PowerSpace needs fixing (Martin Reinecke, 2017-09-15)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/185

Currently, the initialization of a `PowerSpace` has several problems:
- when specifying `logarithmic=False`, the code behaves differently from the code before the `PowerSpace` reorganization two months ago (this is because I'm now interpreting the parameter `logarithmic=False` to mean "please use linear binning", instead of "please use natural binning", which was its original meaning).
- the "magic" formulae for computing nbins and binbounds are buggy; this was not really obvious before the reorganization, because they were not used in most circumstances.
- the algorithm tries to derive binbounds from insufficient user information (e.g. if the user specifies the number of bins, the code simply guesses the position of the first and last binbounds, and the guess is not necessarily intuitive).
I would strongly favor an approach where the user has to specify a set of parameters that defines the binbounds absolutely unambiguously. E.g.:
- by providing a binbounds array
- by providing a type of spacing (linear or logarithmic), a number of bins, and the first and last binbound
- by providing absolutely nothing, which leads to natural binning
There are very few cases where it could make sense for the code to guess the binbounds; the only situations that I can imagine are 1D RGSpaces and LMSpaces with linear binning. Here, there is an obvious choice for the number of bins and the placing of the binbounds.
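The second option (spacing type, number of bins, first and last binbound) already determines the binbounds uniquely; that computation could be sketched as follows (an illustrative helper, not a proposed NIFTy API):

```python
import numpy as np

def make_binbounds(spacing, nbins, first, last):
    """Return the nbins+1 bin boundaries implied unambiguously by the input.

    spacing: "linear" or "logarithmic"; first/last: outermost binbounds.
    Illustrative helper, not part of NIFTy.
    """
    if spacing == "linear":
        return np.linspace(first, last, nbins + 1)
    if spacing == "logarithmic":
        return np.exp(np.linspace(np.log(first), np.log(last), nbins + 1))
    raise ValueError("unknown spacing: %r" % spacing)

lin = make_binbounds("linear", 4, 0., 8.)        # -> [0., 2., 4., 6., 8.]
log = make_binbounds("logarithmic", 3, 1., 8.)   # -> [1., 2., 4., 8.]
```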
Would it be OK if I introduced the necessary additional constraints?

Issue 186: TransformationCache re-uses FFTs in situations where it shouldn't (Martin Reinecke, 2017-09-28)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/186

(This is mostly a reminder ... the problem should go away automatically once the `byebye_zerocenter` branch is merged.)
Currently the TransformationCache for FFTs caches and re-uses transforms even when the harmonic base has changed in the meantime, which leads to wrong results. This could be avoided by simply never changing the harmonic base during a run, but that approach seems impractical, since we already violate this constraint in test_fft_operator.py.
Apparently, the whole test environment is not torn down and set up again between individual tests. I verified this by printing the size of the transformation cache whenever its create() method is called (line 27 in transformation_cache.py) and then ran
`nosetests test/test_operators/test_fft_operator.py -s`
I would have expected the length of the cache to be 1 all the time, but it went up beyond 100.
This is possibly also the cause for the strange behaviour in `test_power_synthesize_analyze()`.
To verify this, I suggest running the tests on master and on `byebye_zerocenter` and comparing the convergence behaviour.

Issue 187: Field.dim returns 0 for fields without domain (Martin Reinecke, 2017-09-28)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/187

```
import nifty as ift
f=ift.Field((),val=42)
print f.val
print f.dim
```
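For comparison, NumPy's own behaviour for the zero-dimensional case, which `dim` arguably should mirror:

```python
import numpy as np

a = np.array(42)     # zero-dimensional array
print(a.ndim)        # 0 dimensions ...
print(a.size)        # ... but size 1, not 0
```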
As expected, the Field contains a single value, but its `dim` property is 0, which seems incorrect and is also inconsistent with `numpy.ndarray`'s behavior for zero-dimensional arrays.

Issue 188: Field.real/imag copy arrays, contrary to documentation (Martin Reinecke, 2018-01-11)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/188

```
import nifty as ift
a=ift.RGSpace(10)
f=ift.Field(a,val=1.+1j)
f.real.val+=2
f.imag.val+=2
print f.val
```
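For reference, the analogous in-place manipulation on a plain NumPy array does propagate, because `ndarray.real` and `ndarray.imag` return views into the original data:

```python
import numpy as np

x = np.full(10, 1. + 1j)   # complex array, all entries 1+1j
x.real += 2                # .real is a view, so this modifies x in place
x.imag += 2
print(x[0])                # (3+3j)
```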
This prints '1+1j' as the field value, i.e. the manipulations of the real and imaginary parts did not affect the original field. This seems to contradict the documentation, which states that the data is not copied.

Issue 189: PowerSpace volume factors (Martin Reinecke, 2017-12-01)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/189

As mentioned in the NIFTy paper, there are many different ways to define volume factors for the PowerSpace.
If I remember correctly, the agreement that we reached some time ago was that `PowerSpace.weight()` should use a weight of 1, i.e. it should not do anything. However, the current implementation uses `rho` as weights.
I suggest changing this to 1 as discussed. The advantage of this weight over the others is that a user who needs a different weighting does not have to undo anything the space did by default; they can simply apply their own volume factors.

Issue 190: DiagonalOperator calls nonexistent attribute adjoint() (Martin Reinecke, 2017-09-15)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/190

`DiagonalOperator` contains the following code:
```
def _adjoint_times(self, x, spaces):
    return self._times_helper(x, spaces,
                              operation=lambda z: z.adjoint().__mul__)

def _adjoint_inverse_times(self, x, spaces):
    return self._times_helper(x, spaces,
                              operation=lambda z: z.adjoint().__rtruediv__)
```
The `z.adjoint()` expression does not work, since nothing in NIFTy has an `.adjoint()` attribute. So far we have not noticed this, apparently because all of our `DiagonalOperator`s were self-adjoint, so these methods were never called.
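For what it's worth, for an ordinary diagonal matrix the adjoint acts by the complex conjugate of the diagonal, so a fix along the lines of `z.conjugate()` seems plausible. A NumPy check of the underlying identity ⟨D x, y⟩ = ⟨x, Dᴴ y⟩ (illustration only, not NIFTy code):

```python
import numpy as np

rng = np.random.RandomState(0)
d = rng.randn(5) + 1j * rng.randn(5)   # complex diagonal of D
x = rng.randn(5) + 1j * rng.randn(5)
y = rng.randn(5) + 1j * rng.randn(5)

# <D x, y> == <x, D^H y>, where D^H acts by the conjugated diagonal
lhs = np.vdot(d * x, y)
rhs = np.vdot(x, np.conj(d) * y)
```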
I don't know what the correct version is. @theos?

Issue 191: Field.power_synthesize(): some clarifications needed (Martin Reinecke, 2017-12-09)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/191

I'm trying to understand the intricacies of `Field.power_synthesize()` and am encountering a few points that are unclear to me:
- which combinations of `real_signal` and `real_power` are allowed? Specifically, is it allowed/sensible to have `real_signal==True` and `real_power==False`?
- if `self.dtype==float`, does it make sense to have `real_power==False`? In that case, `local_rescaler.imag` will become zero.
Once I understand all of this better, I'd volunteer to extend the docstring and (if necessary) add a few sanity checks to the code.

Issue 192: Drawing gaussian random fields over multiple domains (Daniel Pumpe, 2017-09-23)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/192

Hi,
I am currently facing some inconsistencies when drawing Gaussian random fields over multiple domains, and I have not yet found their root cause. In the minimal test example below, `s_1` and `s_2` match in neither mean nor variance, even though they should be drawn from the same Gaussian, just in two different ways. Furthermore, neither of the drawn fields `s_1`, `s_2` obeys the desired spectra.
What am I doing wrong or missing?
```
from nifty import *

x_0 = RGSpace(50, zerocenter=False, distances=.5)
k_0 = RGRGTransformation.get_codomain(x_0)
p_0 = PowerSpace(harmonic_partner=k_0)

x_1 = RGSpace(75, zerocenter=False, distances=.25)
k_1 = RGRGTransformation.get_codomain(x_1)
p_1 = PowerSpace(harmonic_partner=k_1)

fft_0 = FFTOperator(domain=x_0, target=k_0)
fft_1 = FFTOperator(domain=x_1, target=k_1)

p_spec_0 = Field(p_0, val=lambda l: 1/(1+l)**3)
p_spec_1 = Field(p_1, val=lambda k: 10/(1+k)**5)

def random_field_method_1():
    w = Field((x_0, x_1), val=0.)
    for ii in xrange(100):
        outer = Field((p_0, p_1), val=np.outer(p_spec_0.val, p_spec_1.val))
        sk = outer.power_synthesize()
        w += fft_0.inverse_times(fft_1.inverse_times(sk, spaces=1), spaces=0)
    return w/100.

def random_field_method_2():
    w = Field((x_0, x_1), val=0.)
    S_0 = create_power_operator(k_0, sqrt(p_spec_0))
    S_1 = create_power_operator(k_1, sqrt(p_spec_1))
    for ii in xrange(100):
        rand = Field.from_random('normal', domain=(x_0, x_1))
        rand_k = fft_0.times(fft_1.times(rand, spaces=1), spaces=0)
        sk = S_0.times(S_1.times(rand_k, spaces=1), spaces=0)
        w += fft_0.inverse_times(fft_1.inverse_times(sk, spaces=1), spaces=0)
    return w/100.

def analyze_power(signal):
    sk = fft_0.times(fft_1.times(signal, spaces=1), spaces=0)
    power = sk.power_analyze()
    p_drawn_0 = power.sum(spaces=1)/p_spec_1.sum()
    p_drawn_1 = power.sum(spaces=0)/p_spec_0.sum()
    return p_drawn_0, p_drawn_1

s_1 = random_field_method_1()
s_2 = random_field_method_2()
method_1_power_0, method_1_power_1 = analyze_power(s_1)
method_2_power_0, method_2_power_1 = analyze_power(s_2)
```

Issue 193: Field method; Field.integrate (Daniel Pumpe, 2018-01-11)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/193

Martin, Sebastian and I think that it would be advantageous to have a Field method `.integrate(spaces)` for cases where one has to integrate over one or multiple dimensions of a field. `.integrate` would take care of all the necessary volume factors as they appear in integrals.
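In NumPy terms the idea amounts to weighting by the pixel volumes of the chosen subdomains before summing over their axes (an illustrative sketch, not the proposed NIFTy signature):

```python
import numpy as np

def integrate(values, volumes, axes):
    """Integrate an array over the given axes.

    volumes: one scalar pixel volume per axis in `axes`.
    Illustrative helper, not part of NIFTy.
    """
    for ax, vol in zip(axes, volumes):
        values = values * vol     # apply the volume factor of this axis
    return values.sum(axis=tuple(axes))

# A constant field of value 1 on a 10 x 20 grid, pixel volumes 0.5 and 0.25:
field = np.ones((10, 20))
total = integrate(field, volumes=[0.5, 0.25], axes=[0, 1])  # -> 25.0
```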
What do you think about it (@reimar, @kjako, @theos)?

Issue 194: ResponseOperator._add_attributes_to_copy crashes (Christoph Lienhard, 2017-10-12)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/194

When trying to run the log_normal_wiener_filter demo, an error occurs:
```
line 103, in _add_attributes_to_copy copy = super(DiagonalOperator, self)._add_attributes_to_copy(copy,
TypeError: super(type, obj): obj must be an instance or subtype of type
```
I guess the line should read:
`copy = super(ResponseOperator, self)._add_attributes_to_copy(copy, **kwargs)`

Issue 195: Announce merge dates from nightly branch to master (Martin Reinecke, 2018-01-11)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/195

The tentative plan is to merge new (potentially disruptive) changes from nightly to master after they have been on the nightly branch for about two weeks.
It might still be helpful to announce a precise date for when the next merge of this kind is planned. @theos, what's your schedule?

Issue 196: What does Field.vdot() do if the spaces keyword is present? (Martin Reinecke, 2018-01-11)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/196

What is the expected result if we call
`a.vdot(b,spaces=0)`
where `a` is a Field living on one space and `b` is a Field living on two spaces?
I'm asking because the documentation says that the result should be float or complex, but the code actually returns a Field object.
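In NumPy terms, contracting only over the first space leaves the second space's axes uncontracted, so a Field rather than a scalar is arguably the natural result (illustration only, not NIFTy's `vdot` implementation):

```python
import numpy as np

a = np.arange(3.)      # "Field" on one space, shape (3,)
b = np.ones((3, 4))    # "Field" on two spaces, shape (3, 4)

# vdot over spaces=0: contract only the shared first axis
partial = np.tensordot(np.conj(a), b, axes=([0], [0]))
print(partial.shape)   # (4,) -- a "Field" on the second space, not a scalar
```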
Is there an undocumented constraint that both Fields must have the same number of domains?

Issue 197: Error found by dimensional analysis (Martin Reinecke, 2018-01-26)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/197

The test case below was distilled from `wiener_filter_via_curvature.py` by Torsten and me.
It indicates that there is a unit error in the way we compute the mock signal.
```
import numpy as np
import nifty as ift
import numericalunits as nu

np.random.seed(43)

correlation_length = 0.05*nu.m
field_sigma = 2.*nu.K
response_sigma = 0.01*nu.m

# stupid power spectrum, but we only care about units now
def power_spectrum(k):
    a = 4 * correlation_length * field_sigma**2
    return a / (1 + k * correlation_length) ** 4

# Total side-length of the domain
L = 2.*nu.m
# Grid resolution (pixels per axis)
N_pixels = 512

signal_space = ift.RGSpace([N_pixels], distances=L/N_pixels)
harmonic_space = ift.FFTOperator.get_default_codomain(signal_space)
fft = ift.FFTOperator(harmonic_space, target=signal_space)
power_space = ift.PowerSpace(harmonic_space)

mock_power = ift.Field(power_space, val=power_spectrum(power_space.kindex))
mock_harmonic = mock_power.power_synthesize(real_signal=True)
mock_signal = fft(mock_harmonic)

print "signal mean (measured in K):", mock_signal.mean()/nu.K
print "signal mean (measured in K/sqrt(m)):", mock_signal.mean()*np.sqrt(nu.m)/nu.K
```
If this code is run several times (I tested with the `nightly` branch), the first printed number changes, but the second one does not, which means that the unit of the mock_signal field is actually K/sqrt(m) rather than the intended K.
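The numericalunits trick can be mimicked without the package: give each base unit a random numerical value; an expression is dimensionally consistent exactly when it is stable under re-drawing the units. A self-contained sketch of why the stable second print identifies the unit as K/sqrt(m) (the quantity `q` and its unit are assumptions for illustration):

```python
import random

def signal_checks(seed):
    """Re-draw the unit values and return the two 'printed' quantities."""
    rng = random.Random(seed)
    m = rng.uniform(0.5, 2.0)   # randomized numerical value of the metre
    K = rng.uniform(0.5, 2.0)   # randomized numerical value of the kelvin
    q = 3.0 * K / m ** 0.5      # a quantity carrying unit K/sqrt(m)
    return q / K, q * m ** 0.5 / K

a1, b1 = signal_checks(1)
a2, b2 = signal_checks(2)
# q/K changes when the units are re-drawn; q*sqrt(m)/K stays fixed,
# so q is dimensionally K/sqrt(m), not K.
```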
My guess is that we either have a dimensional error in `power_spectrum()` or in `Field.power_synthesize()`.

Issue 99: Tests: Energy class (Theo Steininger, 2018-01-30)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/99