ift issues: https://gitlab.mpcdf.mpg.de/groups/ift/-/issues

### nifty#201: Add preconditioning to all our minimizers? (Martin Reinecke, 2018-01-26)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/201

As far as I understand, it should be possible to speed up all our minimization algorithms (not only ConjugateGradient) by using an adequate preconditioner if available.
Unfortunately this is not discussed in the Nocedal&Wright book, unless I have missed something. So I'm looking for implementation hints.
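For the linear (CG) case the standard recipe from the linear-solver literature looks like the sketch below; whether it transfers cleanly to our descent minimizers is exactly the open question. This is a plain numpy sketch, not NIFTy API, and the Jacobi preconditioner is just the simplest choice:

```python
import numpy as np

def preconditioned_cg(A, b, M_inv, x0=None, tol=1e-8, maxiter=500):
    """Conjugate gradients for A x = b, preconditioned with M_inv ~ A^{-1}."""
    x = np.zeros_like(b) if x0 is None else x0.copy()
    r = b - A @ x                      # residual
    z = M_inv @ r                      # preconditioned residual
    p = z.copy()
    rz = r @ z
    for it in range(1, maxiter + 1):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            return x, it
        z = M_inv @ r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxiter

# A badly scaled but diagonally dominant SPD test matrix.
rng = np.random.default_rng(0)
n = 50
A = np.diag(np.logspace(0, 5, n))
P = 0.01 * rng.normal(size=(n, n))
A += P + P.T                          # small perturbation keeps A SPD
b = rng.normal(size=n)

jacobi = np.diag(1.0 / np.diag(A))    # the simplest possible preconditioner
x_pc, n_pc = preconditioned_cg(A, b, jacobi)
x_plain, n_plain = preconditioned_cg(A, b, np.eye(n))
```

On this kind of matrix the Jacobi-preconditioned run needs far fewer iterations than the unpreconditioned one, which is the effect I would hope to get for the other minimizers as well.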
@theos, @kjako, @reimar, any tips, links to papers etc.?

### nifty#200: Make generate_posterior_sample() an attribute of WienerFilterCurvature? (Martin Reinecke, 2018-02-06)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/200

The method `generate_posterior_sample()` in `sugar.py` requires its second argument to be of type `WienerFilterCurvature` and is quite dependent on `WienerFilterCurvature`'s internal design (e.g. that it has members called S, R and N, and what their exact types are).
I think it might be cleaner to move this method into the class.
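For reference, the sampling identity the method relies on can be checked with dense stand-ins for S, R and N (numpy sketch; all names here are hypothetical, this is not the `sugar.py` code):

```python
import numpy as np

rng = np.random.default_rng(1)

# Dense stand-ins for prior covariance S, response R and noise covariance N.
n_s, n_d = 6, 4
L = rng.normal(size=(n_s, n_s))
S = L @ L.T + n_s * np.eye(n_s)          # SPD prior covariance
R = rng.normal(size=(n_d, n_s))          # linear response
N = np.diag(rng.uniform(0.5, 2.0, n_d))  # diagonal noise covariance

# Posterior covariance D = (S^-1 + R^T N^-1 R)^-1, i.e. the inverse curvature.
D = np.linalg.inv(np.linalg.inv(S) + R.T @ np.linalg.inv(N) @ R)

# The sampler draws s ~ N(0, S), n ~ N(0, N) and returns
#   m + s - D R^T N^-1 (R s + n),
# which scatters around the mean m with covariance D.  Check the covariance
# algebra deterministically: delta = (I - B R) s - B n with B = D R^T N^-1.
B = D @ R.T @ np.linalg.inv(N)
I = np.eye(n_s)
cov_delta = (I - B @ R) @ S @ (I - B @ R).T + B @ N @ B.T
assert np.allclose(cov_delta, D)
```

Since the recipe only needs S, R, N and the curvature's inverse application, a method on the curvature class would have everything it needs locally.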
@kjako, @theos, what is your opinion?

### nifty#199: Fairly urgent: decide on convention for diagonal in DiagonalOperator (Martin Reinecke, 2017-11-16)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/199

Recently I removed the last remaining uses of the `bare` keyword in some methods of `DiagonalOperator`, and this change has now been merged into the `nightly` branch. The code now behaves as if `bare=False`, which was the default before.
However Torsten argues that it would be more natural to behave as if `bare=True`.
Either is fine with me, but we need to decide very quickly, because the first people have started adapting to `nightly` and will be unhappy if they have to change their code again!

### nifty#198: Design problem in InvertibleOperatorMixin (Martin Reinecke, 2018-01-26)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/198

The constructor for `InvertibleOperatorMixin` currently allows specifying iteration starting values via `forward_x0` and `backward_x0`. This is problematic for several reasons:
- these vectors have to live over the full domain of a later operator application call, and thereby restrict the general applicability of the operator, which is one of NIFTy's current design goals.
- `forward_x0` is used in `times()` and `adjoint_inverse_times()` although it is most likely not a good guess for at least one of the two. The same holds analogously for `backward_x0`.
- A starting guess is directly coupled to a concrete argument for which the operator is invoked; it is not a property of the operator, but of the operator/argument combination. So if one sets up an `InvertibleOperatorMixin` using the currently available tools, it will only work well for a single specific field.
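A per-call starting guess might look like the following sketch (the `x0` keyword and the class are hypothetical; a plain numpy CG stands in for the mixin's internal solver):

```python
import numpy as np

def cg_solve(A, b, x0=None, tol=1e-10, maxiter=1000):
    """Plain conjugate gradients, returning (solution, iteration count)."""
    x = np.zeros_like(b) if x0 is None else x0.copy()
    r = b - A @ x
    p = r.copy()
    rr = r @ r
    for it in range(maxiter):
        if np.sqrt(rr) < tol:
            return x, it
        Ap = A @ p
        alpha = rr / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rr_new = r @ r
        p = r + (rr_new / rr) * p
        rr = rr_new
    return x, maxiter

class InvertibleOperatorSketch:
    """Hypothetical interface: the starting guess travels with each call
    instead of being fixed in the constructor."""
    def __init__(self, matrix):
        self._A = matrix
        self.last_iterations = 0
    def times(self, x):
        return self._A @ x
    def inverse_times(self, x, x0=None):          # per-call starting guess
        sol, self.last_iterations = cg_solve(self._A, x, x0=x0)
        return sol

rng = np.random.default_rng(2)
op = InvertibleOperatorSketch(np.diag(rng.uniform(1.0, 2.0, 32)))
b1 = rng.normal(size=32)
x1 = op.inverse_times(b1)                  # cold start
n_cold = op.last_iterations
b2 = b1 + 1e-6 * rng.normal(size=32)       # a slightly different rhs ...
x2 = op.inverse_times(b2, x0=x1)           # ... warm-started from the old solution
n_warm = op.last_iterations
```

The warm-started solve converges in noticeably fewer iterations, which is exactly the benefit a per-call guess would give without tying the operator to one field.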
I would much prefer being able to supply a starting guess with every `times()` (or `adjoint_times()` etc.) call; I'm thinking about a clean approach for this. Suggestions are welcome!

### nifty#197: Error found by dimensional analysis (Martin Reinecke, 2018-01-26)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/197

The test case below was distilled from `wiener_filter_via_curvature.py` by Torsten and me.
It indicates that there is a unit error in the way we compute the mock signal.
```
import numpy as np
import nifty as ift
import numericalunits as nu
np.random.seed(43)
correlation_length = 0.05*nu.m
field_sigma = 2.* nu.K
response_sigma = 0.01*nu.m
# stupid power spectrum, but we only care about units now
def power_spectrum(k):
    a = 4 * correlation_length * field_sigma**2
    return a / (1 + k * correlation_length) ** 4
# Total side-length of the domain
L = 2.*nu.m
# Grid resolution (pixels per axis)
N_pixels = 512
signal_space = ift.RGSpace([N_pixels], distances=L/N_pixels)
harmonic_space = ift.FFTOperator.get_default_codomain(signal_space)
fft = ift.FFTOperator(harmonic_space, target=signal_space)
power_space = ift.PowerSpace(harmonic_space)
mock_power = ift.Field(power_space, val=power_spectrum(power_space.kindex))
mock_harmonic = mock_power.power_synthesize(real_signal=True)
mock_signal = fft(mock_harmonic)
print "signal mean (measured in K):", mock_signal.mean()/nu.K
print "signal mean (measured in K/sqrt(m)): ", mock_signal.mean()*np.sqrt(nu.m)/nu.K
```
If this code is run several times (I tested with the `nightly` branch), the first printed number changes, but the second one does not, which means that the unit of the `mock_signal` field is actually K/sqrt(m) rather than the intended K.
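(The trick behind this check: `numericalunits`-style packages assign a random magnitude to each base unit per run, so only dimensionally consistent quantities are reproducible across runs. A self-contained imitation of the mechanism, using only the standard library:)

```python
import random

def run(seed):
    """One 'run' with randomly chosen unit magnitudes, as numericalunits does."""
    rnd = random.Random(seed)
    m = rnd.uniform(0.1, 10.0)   # 1 metre, in arbitrary internal units
    K = rnd.uniform(0.1, 10.0)   # 1 kelvin, in arbitrary internal units
    signal = 3.0 * K / m**0.5    # a quantity whose true unit is K/sqrt(m)
    return signal / K, signal * m**0.5 / K

a1, b1 = run(1)
a2, b2 = run(2)
# Interpreting the signal with the wrong unit (plain K) is seed-dependent ...
assert abs(a1 - a2) > 1e-6
# ... while the dimensionally correct interpretation is reproducible.
assert abs(b1 - b2) < 1e-12
```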
My guess is that we have a dimensional error either in `power_spectrum()` or in `Field.power_synthesize()`.

### nifty#196: What does Field.vdot() do if the spaces keyword is present? (Martin Reinecke, 2018-01-11)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/196

What is the expected result if we call
`a.vdot(b,spaces=0)`
where `a` is a Field living on one space and `b` is a Field living on two spaces?
I'm asking because the documentation says that the result should be float or complex, but the code actually returns a Field object.
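In numpy terms, the behaviour I observe corresponds to a partial contraction, which naturally yields an array over the remaining axes rather than a scalar:

```python
import numpy as np

a = np.array([1. + 1j, 2., 3.])          # lives on one space (3 pixels)
b = np.arange(12.).reshape(3, 4)         # lives on two spaces (3 x 4)

# A full vdot would be sum(conj(a) * b) over everything; contracting only
# axis 0 (i.e. spaces=0) leaves one axis over -> a length-4 "Field".
partial = np.tensordot(a.conj(), b, axes=([0], [0]))
assert partial.shape == (4,)
```

So the returned Field is arguably the natural generalisation; it is the documented scalar return type that seems wrong.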
Is there an undocumented constraint that both Fields must have the same number of domains?

### nifty#195: Announce merge dates from nightly branch to master (Martin Reinecke, 2018-01-11)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/195

The tentative plan is to merge new (potentially disruptive) changes from nightly to master after they have been on the nightly branch for about two weeks.
It might still be helpful to announce a precise date when the next merge of this kind is planned. @theos, what's your schedule?

### nifty#194: ResponseOperator._add_attributes_to_copy crashes (Christoph Lienhard, 2017-10-12)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/194

When trying to run the log_normal_wiener_filter demo, an error occurs:
```
line 103, in _add_attributes_to_copy
    copy = super(DiagonalOperator, self)._add_attributes_to_copy(copy,
TypeError: super(type, obj): obj must be an instance or subtype of type
```
I guess the line should read:
```
copy = super(ResponseOperator, self)._add_attributes_to_copy(copy, **kwargs)
```

### nifty#193: Field method; Field.integrate (Daniel Pumpe, 2018-01-11)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/193

Martin, Sebastian and I think that it would be advantageous to have a Field method, `.integrate(spaces)`, in case one has to integrate over one or multiple dimensions of a field. `.integrate` would thereby take care of all necessary volume factors as they appear in integrals.
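A minimal sketch of what such a method could do on a regular grid (plain numpy, hypothetical helper, not NIFTy code): multiply by the volume element of the integrated spaces and sum over their axes.

```python
import numpy as np

def integrate(values, distances, spaces):
    """Riemann-sum integral over the given axes of a pixelised field.
    distances[i] is the pixel size along axis i."""
    vol = np.prod([distances[i] for i in spaces])
    return values.sum(axis=tuple(spaces)) * vol

# f(x, y) = x on [0,1] x [0,2]; integrating over both axes gives 1/2 * 2 = 1.
nx, ny = 400, 300
dx, dy = 1.0 / nx, 2.0 / ny
x = (np.arange(nx) + 0.5) * dx
f = np.broadcast_to(x[:, None], (nx, ny))
total = integrate(f, (dx, dy), spaces=(0, 1))
# Integrating only over axis 1 leaves a field over axis 0, with value 2*x.
marginal = integrate(f, (dx, dy), spaces=(1,))
```

In NIFTy terms the volume factors would of course come from the spaces' weights rather than from an explicit `distances` tuple.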
What do you think about it (@reimar, @kjako, @theos)?

### nifty#192: Drawing Gaussian random fields over multiple domains (Daniel Pumpe, 2017-09-23)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/192

Hi,
currently I am facing some inconsistencies when drawing Gaussian random fields over multiple domains, and I have not yet found their root. In the minimal test example below, `s_1` and `s_2` match in neither mean nor variance, even though they should be drawn from the same Gaussian, just in two different ways. Furthermore, neither of the drawn fields `s_1`, `s_2` obeys the desired spectra.
What am I doing wrong or missing?
```
import numpy as np
from nifty import *
x_0 = RGSpace(50, zerocenter=False, distances=.5)
k_0 = RGRGTransformation.get_codomain(x_0)
p_0 = PowerSpace(harmonic_partner=k_0)
x_1 = RGSpace(75, zerocenter=False, distances=.25)
k_1 = RGRGTransformation.get_codomain(x_1)
p_1 = PowerSpace(harmonic_partner=k_1)
fft_0 = FFTOperator(domain=x_0, target=k_0)
fft_1 = FFTOperator(domain=x_1, target=k_1)
p_spec_0 = Field(p_0, val=lambda l: 1/(1+l)**3)
p_spec_1 = Field(p_1, val=lambda k: 10/(1+k)**5)
def random_field_method_1():
    w = Field((x_0, x_1), val=0.)
    for ii in xrange(100):
        outer = Field((p_0, p_1), val=np.outer(p_spec_0.val, p_spec_1.val))
        sk = outer.power_synthesize()
        w += fft_0.inverse_times(fft_1.inverse_times(sk, spaces=1), spaces=0)
    return w/100.
def random_field_method_2():
    w = Field((x_0, x_1), val=0.)
    S_0 = create_power_operator(k_0, sqrt(p_spec_0))
    S_1 = create_power_operator(k_1, sqrt(p_spec_1))
    for ii in xrange(100):
        rand = Field.from_random('normal', domain=(x_0, x_1))
        rand_k = fft_0.times(fft_1.times(rand, spaces=1), spaces=0)
        sk = S_0.times(S_1.times(rand_k, spaces=1), spaces=0)
        w += fft_0.inverse_times(fft_1.inverse_times(sk, spaces=1), spaces=0)
    return w/100.
def analyze_power(signal):
    sk = fft_0.times(fft_1.times(signal, spaces=1), spaces=0)
    power = sk.power_analyze()
    p_drawn_0 = power.sum(spaces=1)/p_spec_1.sum()
    p_drawn_1 = power.sum(spaces=0)/p_spec_0.sum()
    return p_drawn_0, p_drawn_1
s_1 = random_field_method_1()
s_2 = random_field_method_2()
method_1_power_0, method_1_power_1 = analyze_power(s_1)
method_2_power_0, method_2_power_1 = analyze_power(s_2)
```

### nifty#191: Field.power_synthesize(): some clarifications needed (Martin Reinecke, 2017-12-09)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/191

I'm trying to understand the intricacies of `Field.power_synthesize()` and am encountering a few points that are unclear to me:
- which combinations of `real_signal` and `real_power` are allowed? Specifically, is it allowed/sensible to have `real_signal==True` and `real_power==False`?
- if `self.dtype==float`, does it make sense to have `real_power==False`? In that case, `local_rescaler.imag` will become zero.
Once I understand all of this better, I'd volunteer to extend the docstring, and (if necessary) add a few sanity checks to the code.

### nifty#190: DiagonalOperator calls nonexistent attribute adjoint() (Martin Reinecke, 2017-09-15)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/190

`DiagonalOperator` contains the following code:
```
    def _adjoint_times(self, x, spaces):
        return self._times_helper(x, spaces,
                                  operation=lambda z: z.adjoint().__mul__)

    def _adjoint_inverse_times(self, x, spaces):
        return self._times_helper(x, spaces,
                                  operation=lambda z: z.adjoint().__rtruediv__)
```
The `z.adjoint()` expression does not work, since nothing in NIFTy has an `.adjoint()` attribute. So far we have not noticed this, apparently because all of our `DiagonalOperator`s were self-adjoint and these methods were never called.
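For what it's worth, the adjoint of a diagonal operator acts by elementwise complex conjugation of the diagonal, so a plausible fix (a sketch, not a tested patch, and assuming the diagonal object supports a `conjugate()` method) would be `operation=lambda z: z.conjugate().__mul__` and `__rtruediv__` respectively. The numpy analogue of the identity:

```python
import numpy as np

rng = np.random.default_rng(3)
d = rng.normal(size=5) + 1j * rng.normal(size=5)   # complex diagonal
D = np.diag(d)
x = rng.normal(size=5) + 1j * rng.normal(size=5)

# adjoint_times should apply D^H, i.e. multiply by the conjugated diagonal:
assert np.allclose(D.conj().T @ x, d.conj() * x)
# adjoint_inverse_times should divide by the conjugated diagonal:
assert np.allclose(np.linalg.inv(D.conj().T) @ x, x / d.conj())
```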
I don't know what the correct version is. @theos?

### nifty#189: PowerSpace volume factors (Martin Reinecke, 2017-12-01)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/189

As mentioned in the NIFTy paper, there are many different ways to define volume factors for the PowerSpace.
If I remember correctly, the agreement that we reached some time ago was that `PowerSpace.weight()` should use a weight of 1, i.e. it should not do anything. However, the current implementation uses `rho` as weights.
I suggest changing this to 1, as discussed. The advantage of using this weight over the others is that the user does not need to do any additional corrections when they have to use a different weight; they can just apply their own volume factors without needing to undo anything the space did by default.

### nifty#188: Field.real/imag copy arrays, contrary to documentation (Martin Reinecke, 2018-01-11)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/188

```
import nifty as ift
a=ift.RGSpace(10)
f=ift.Field(a,val=1.+1j)
f.real.val+=2
f.imag.val+=2
print f.val
```
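For comparison, `numpy.ndarray.real` and `.imag` on a complex array are writable views, which is presumably the behaviour the documentation promises:

```python
import numpy as np

arr = np.full(10, 1. + 1j)
arr.real += 2          # .real is a view, so this modifies arr in place
arr.imag += 2
assert np.all(arr == 3. + 3j)
```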
This prints '1+1j' as the field value, i.e. the manipulations of the real and imaginary parts did not affect the original field. This seems to contradict the documentation, which states that the data is not copied.

### nifty#187: Field.dim returns 0 for fields without domain (Martin Reinecke, 2017-09-28)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/187

```
import nifty as ift
f=ift.Field((),val=42)
print f.val
print f.dim
```
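For comparison, the corresponding numpy behaviour:

```python
import numpy as np

f = np.array(42)       # zero-dimensional array, analogous to a domain-less Field
assert f.ndim == 0     # no axes ...
assert f.size == 1     # ... but still exactly one element
assert f.item() == 42
```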
As expected, the Field contains a single value, but its `dim` property is 0, which seems incorrect and is also inconsistent with `numpy.ndarray`'s behavior for zero-dimensional arrays.

### D2O#24: Missing file in last commit? (Martin Reinecke, 2018-03-18)
https://gitlab.mpcdf.mpg.de/ift/D2O/-/issues/24

Is it possible that you forgot to add a file `random.py` in your last commit?
As things are, D2O now imports Python's default `random` module, but this probably doesn't address the problems with MPI-parallel seeding you mentioned.

### nifty#186: TransformationCache re-uses FFTs in situations where it shouldn't (Martin Reinecke, 2017-09-28)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/186

(This is mostly a reminder ... the problem should go away automatically once the `byebye_zerocenter` branch is merged.)
Currently the TransformationCache for FFTs caches and re-uses transforms even when the harmonic base has changed in the meantime, which leads to wrong results. This can be avoided by simply not changing the harmonic base during a run, but this approach seems impractical, since we already violate this constraint in test_fft_operator.py.
Apparently, the whole test environment is not torn down and set up again between individual tests. I verified this by printing the size of the transformation cache whenever its create() method is called (line 27 in transformation_cache.py) and then running
`nosetests test/test_operators/test_fft_operator.py -s`
I would have expected the length of the cache to be 1 all the time, but it went up beyond 100.
This is possibly also the cause for the strange behaviour in `test_power_synthesize_analyze()`.
To verify this, I suggest running the tests on both master and `byebye_zerocenter` and comparing the convergence behaviour.

### nifty#185: Binbounds calculation in PowerSpace needs fixing (Martin Reinecke, 2017-09-15)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/185

Currently, the initialization of a `PowerSpace` has several problems:
- when specifying `logarithmic=False`, the code behaves differently from the code before the `PowerSpace` reorganization two months ago (this is because I'm now interpreting the parameter `logarithmic=False` to mean "please use linear binning", instead of "please use natural binning", which was its original meaning).
- the "magic" formulae for computing nbins and binbounds are buggy; this was not really obvious before the reorganization, because they were not used in most circumstances).
- the algorithm tries to derive binbounds from insufficient user information (e.g. if the user specifies the number of bins, the code simply guesses the position of the first and last binbounds, and the guess is not necessarily intuitive).
I would strongly favor an approach where the user has to specify a set of parameters that defines the binbounds absolutely unambiguously. E.g.:
- by providing a binbounds array
- by providing a type of spacing (linear or logarithmic), a number of bins, and the first and last binbound
- by providing absolutely nothing, which leads to natural binning
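The second option could be as simple as the following sketch (hypothetical helper; parameter names and the "nbins + 1 boundaries" convention are made up for illustration):

```python
import numpy as np

def make_binbounds(spacing, nbins, first, last):
    """Return nbins + 1 bin boundaries, fully determined by the caller."""
    if spacing == "linear":
        return np.linspace(first, last, nbins + 1)
    if spacing == "logarithmic":
        return np.geomspace(first, last, nbins + 1)
    raise ValueError("unknown spacing: %s" % spacing)

lin = make_binbounds("linear", 4, 0.0, 1.0)          # 0, 0.25, 0.5, 0.75, 1
log = make_binbounds("logarithmic", 3, 1.0, 1000.0)  # 1, 10, 100, 1000
```

The point is that nothing here is guessed; every boundary follows unambiguously from what the user passed in.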
There are very few cases where it could make sense for the code to guess the binbounds; the only situations that I can imagine are 1D RGSpaces and LMSpaces with linear binning. Here, there is an obvious choice for the number of bins and the placing of the binbounds.
Would it be OK if I introduced the necessary additional constraints?

### nifty#184: PowerSpace probing calls sugar.create_composed_fft_operator and dies (Philipp Arras, 2017-08-29)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/184

I narrowed the problem down to commit a09c6ed by @theos. The attached file executes until this commit and is broken afterwards.
Apparently, the prober tries to FFT something although the probing takes place in a PowerSpace, which does not admit an FFT.
[Critical+Filter.py](/uploads/7b0e6c37d18c6cdf932457da0ba682d1/Critical+Filter.py)

### nifty#183: Harmonize ConjugateGradient with other minimizers? (Martin Reinecke, 2017-09-28)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/183

In principle, ConjugateGradient is not fundamentally different from our descent minimizers, but its interface (including the form of the callbacks) is very different.
Wouldn't it be helpful to adjust it in a way that it also takes an `Energy` object for minimization and passes its current energy to the callback in the same way the descent minimizers do? Then we could, for example, replace `ConjugateGradient` objects with, say, `VL_BFGS` ones and experiment much more flexibly with different minimizers.
Of course, `ConjugateGradient` would need a special subclass of `Energy` to work properly: it would, for example, need to invoke the linear operator within this energy directly. But that should be no fundamental problem ... after all, we already have some energy classes that provide `curvature` and some that don't.
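Such an energy subclass could look roughly like this sketch (all names hypothetical): for the linear problem A x = b it would represent f(x) = 0.5 x^T A x - b^T x, whose gradient A x - b is what CG actually drives to zero, and whose `curvature` is the operator A itself.

```python
import numpy as np

class QuadraticEnergySketch:
    """Energy for f(x) = 0.5 x^T A x - b^T x; minimised where A x = b."""
    def __init__(self, A, b, position):
        self._A, self._b, self.position = A, b, position
    @property
    def value(self):
        return 0.5 * self.position @ self._A @ self.position \
            - self._b @ self.position
    @property
    def gradient(self):
        return self._A @ self.position - self._b
    @property
    def curvature(self):
        return self._A          # the linear operator CG needs to apply
    def at(self, position):
        return QuadraticEnergySketch(self._A, self._b, position)

rng = np.random.default_rng(4)
M = rng.normal(size=(5, 5))
A = M @ M.T + 5 * np.eye(5)     # SPD system matrix
b = rng.normal(size=5)
e = QuadraticEnergySketch(A, b, np.zeros(5))
e_min = e.at(np.linalg.solve(A, b))
assert np.allclose(e_min.gradient, 0)   # gradient vanishes at the minimum
assert e_min.value < e.value            # and the energy decreased
```

With something like this, a CG run and a VL_BFGS run could be driven through the identical Energy interface and compared directly.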
I would be very interested to hear feedback!