# ift issues
https://gitlab.mpcdf.mpg.de/groups/ift/-/issues (updated 2021-05-30T17:30:29Z)

## [#326](https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/326) Check NIFTy_7 documentation generation for error messages
*Philipp Arras, updated 2021-05-30T17:30:29Z. Milestone: NIFTy7 release. Assignee: Philipp Arras.*

## [#325](https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/325) Update general documentation to GeoVI
*Philipp Arras, updated 2021-05-31T08:43:03Z. Milestone: NIFTy7 release.*

At the end of https://gitlab.mpcdf.mpg.de/ift/nifty/-/blob/NIFTy_7/docs/source/ift.rst MGVI is explicitly discussed. Let's add a discussion of GeoVI there, or alternatively add a new file to the documentation for GeoVI alone.

## [#324](https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/324) Update `demos/mgvi_visualized.py` to GeoVI
*Philipp Arras, updated 2021-05-30T14:49:38Z. Milestone: NIFTy7 release. Assignee: Philipp Arras.*

## [#323](https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/323) Final steps for release
*Philipp Arras, updated 2021-06-11T19:59:51Z. Milestone: NIFTy7 release. Assignee: Philipp Arras.*

- [x] Check documentation generation for warnings and errors -> !636
- [x] Check pytest output for warnings -> #329 + !635, !634
- [x] Run `isort` on whole repository
```sh
find . -type f -name '*.py' | xargs isort
```
- [x] Update author list again
```sh
git shortlog -s origin/NIFTy_6..origin/NIFTy_7 | cut -f2-
```
- [x] Fast-forward `NIFTy_8` to `origin/NIFTy_7` branch
- [x] Rename nifty7 -> nifty8 on NIFTy_8 branch
```sh
rm nifty7
ln -s src nifty8
find . -type f -exec sed -i 's/nifty7/nifty8/g' {} \;
find . -type f -exec sed -i 's/NIFTy_7/NIFTy_8/g' {} \;
find . -type f -exec sed -i 's/NIFTy7/NIFTy8/g' {} \; # Only partially!
```
- [x] Rename NIFTy_6 -> NIFTy_7 in `.gitlab-ci.yml` (for doc generation)
- [x] NIFTy_8 subsection in `README.md` in contributors section
- [x] NIFTy_7 section in `ChangeLog.md`
- [x] Make NIFTy_7 the default branch of the gitlab repository
- [x] Add daily pipeline schedule for `NIFTy_8` branch and remove the one for `NIFTy_6`

## [#322](https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/322) Update contributors list
*Philipp Arras, updated 2021-05-30T17:28:48Z. Milestone: NIFTy7 release. Assignee: Philipp Arras.*

## [#321](https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/321) Windows compatibility
*Lukas Platz, updated 2021-05-05T17:08:08Z*

A collaborator of mine just tried to install NIFTy on a Windows machine in an Anaconda environment and found that the symlink from `nifty7` to `src` was apparently breaking the setup. Once she removed it and renamed `src` to `nifty7`, the setup worked.
I have not checked whether this is the general behavior under Windows or just her setup, but I assume it is the former.
Does anybody else have experience with this and can weigh in?
What was the rationale behind changing the source location and introducing the symlink?
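For what it's worth, here is a sketch of a symlink-free alternative. This is hypothetical and not taken from NIFTy's actual `setup.py`: setuptools can map the import name to the `src` directory directly via `package_dir`, which sidesteps symlink support on Windows entirely.

```python
# Hypothetical sketch, not NIFTy's actual setup.py: map the import name
# "nifty7" to the "src" directory so no nifty7 -> src symlink is needed.
package_dir = {"nifty7": "src"}   # import name -> source directory
packages = ["nifty7"]             # subpackages would be listed here too

# These would be passed to setuptools.setup(package_dir=..., packages=...),
# which resolves imports without any filesystem symlink.
```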
Cheers,
Lukas

## [#320](https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/320) Is `ift.operators.operator._FunctionApplier` exposed to the NIFTy namespace? If not, why?
*Lukas Platz, updated 2021-04-07T10:02:09Z*

I just had a case where I wanted to prepend a pointwise operator to a given operator. For *ap*pending a pointwise operator, we have the syntax `op.ptw()`, but do we also have a direct way to *pre*pend an operator?
What I came up with on a hunch was `op @ ift.ScalingOperator(op.domain, 1.).abs()`, but that is just horrible.
Is there a simple way to do this that I forgot? If not, why don't we expose `operator._FunctionApplier` as `ift.FunctionApplier`?
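To make the append/prepend distinction concrete, here is a plain-Python stand-in (not the NIFTy API): `op.ptw(...)` corresponds to applying the pointwise function after `op`, while the requested `FunctionApplier`-style helper would apply it before `op`.

```python
# Plain-Python illustration; compose(outer, inner) mirrors NIFTy's `outer @ inner`.
def compose(outer, inner):
    return lambda x: outer(inner(x))

op = lambda x: x + 1          # some fixed operator
f = abs                       # a pointwise function

append_f = compose(f, op)     # like op.ptw(...): first op, then f  -> |x + 1|
prepend_f = compose(op, f)    # the missing "prepend": first f, then op -> |x| + 1

assert append_f(-3) == 2      # |-3 + 1| = 2
assert prepend_f(-3) == 4     # |-3| + 1 = 4
```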
Cheers!

## [#319](https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/319) AttributeError: 'OperatorAdapter' object has no attribute 'duckape'
*Gordian Edenhofer, updated 2021-03-07T22:57:53Z*

`OperatorAdapter`, e.g. the adjoint of an operator, should support ducktaping.

## [#318](https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/318) Ducc dependency
*Jakob Roth, updated 2021-03-07T13:33:04Z*

So far DUCC was an optional dependency of nifty7. Now it is strictly required because of the import in https://gitlab.mpcdf.mpg.de/ift/nifty/-/blob/NIFTy_7/src/library/nft.py#L20
Is this intended?

## [#317](https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/317) MaternKernel implementation
*Vincent Eberle, updated 2021-03-24T09:12:11Z*

# Matern Kernel
I thought it would be a good idea to create this issue to keep everyone up to date who is involved in the development of the Matern kernel or already uses it.
https://gitlab.mpcdf.mpg.de/ift/nifty/-/tree/matern_kernel
@matteani @jroth @parras @sding @gedenhof @vkainz
If I forgot somebody, just mention them to make sure that they are notified.
# TODO
- [x] Implementation of `a**b` by Simon and Philipp
- [x] Tests
- [x] Cosmetics
- [x] Demo
- [x] Make `statistics_summary` work for Matern kernel fluctuations

## [#316](https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/316) Possibly unexpected behavior of correlated_field
*Jakob Roth, updated 2021-01-13T11:13:02Z*

@gedenhof and I stumbled over the following behavior of the correlated field. We expected the correlation structure, in physical units, to stay unchanged when changing the target space's distances. In other words, if we increase the pixel distance we just zoom out, and if we decrease it we zoom in. However, this doesn't seem to be the case, as you can see here: in the right plot, the pixel distance is a factor of 10 larger.
![correlated_field](/uploads/bd5c783c172b52b23a347e2e70b7d87d/correlated_field.png)
When we create fields with a fixed power spectrum, everything works as expected. Again, in the right plot, the pixel distance is a factor of 10 larger; here the right plot is a zoomed-out version of the left.
![fixed_power_spec](/uploads/b3299fd32f0100ff9ea306f6939ab805/fixed_power_spec.png)
To us, this seems unintended, but we could not figure out how to change the correlated_field operator. @parras, @pfrank could one of you help us fix this, or explain why it is intended to be like this?
[pix_dist.py](/uploads/496ad8a62ee83f1c81a8fa39ba09a843/pix_dist.py)

## [#315](https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/315) Unexpected get_sqrt behaviour
*Sebastian Hutschenreuter, updated 2020-12-02T18:48:49Z*

The following code raises a ValueError claiming the operator is not positive definite:
```python
import nifty7 as ift
ds = ift.makeDomain(ift.RGSpace(100))
dd = ift.DiagonalOperator(ds, 4).get_sqrt()
```
while
```python
sd = ift.ScalingOperator(ift.full(ds, 4)).get_sqrt()
```
is fine.
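For reference, the mathematically expected behaviour, sketched with a plain NumPy array rather than a NIFTy operator: for a diagonal operator with strictly positive entries, the square root exists and is just the elementwise square root, so the `ValueError` above looks spurious.

```python
import numpy as np

diag = np.full(100, 4.0)       # the constant diagonal from the snippet above

assert np.all(diag > 0)        # positive definiteness clearly holds
sqrt_diag = np.sqrt(diag)      # elementwise square root of the diagonal
assert np.allclose(sqrt_diag * sqrt_diag, diag)   # (sqrt(D))^2 == D
```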
Presumably that's a bug, and the `not` in the following code line in
[1](https://gitlab.mpcdf.mpg.de/ift/nifty/-/blob/NIFTy_7/src/operators/diagonal_operator.py#L170) is superfluous?

## [#314](https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/314) Always mirror samples
*Philipp Arras, updated 2020-11-28T14:12:32Z*

@reimar @kjako @ensslint I just realized that `mirror_samples=True` is not the default for `MetricGaussianKL`. What is your opinion on this? I have the feeling that virtually all applications benefit from mirrored samples.

## [#313](https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/313) Better adjointness tests
*Martin Reinecke, updated 2020-11-18T19:24:15Z*

For the gridder paper, @parras and I have been thinking about better criteria for measuring the adjointness of two operators.
In NIFTy, we test the adjointness of the operators `Op1` and `Op2` by drawing random fields `a` (living on `Op1.target`) and `b` (living on `Op1.domain`) and testing whether `abs(vdot(a, Op1(b)) - vdot(Op2(a), b))` is "small". But we don't really have a good definition of what "small" means.
Our idea was to compare this expression to `min(|a|*|Op1(b)|, |Op2(a)|*|b|)`. So our reference value is basically the smaller (to be pessimistic) of the two dot products, but with their cosine terms removed, so that we do not run into problems when, for example, `a` is orthogonal to `Op1(b)`.
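The proposed criterion can be sketched with plain NumPy matrices standing in for `Op1`/`Op2` (an exact adjoint pair, so the relative mismatch should be at machine-precision level):

```python
import numpy as np

rng = np.random.default_rng(42)
A = rng.normal(size=(7, 5))      # stand-in for Op1; its exact adjoint is A.T
a = rng.normal(size=7)           # lives on "Op1.target"
b = rng.normal(size=5)           # lives on "Op1.domain"

mismatch = abs(np.vdot(a, A @ b) - np.vdot(A.T @ a, b))

# Reference scale: the smaller (pessimistic) product of norms, i.e. the two
# dot products with their cosine terms removed, so orthogonality is harmless.
ref = min(np.linalg.norm(a) * np.linalg.norm(A @ b),
          np.linalg.norm(A.T @ a) * np.linalg.norm(b))

assert mismatch < 1e-12 * ref    # exact adjoints: tiny relative mismatch
```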
@reimar, @pfrank, do you think this makes sense?

## [#312](https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/312) Unexpected amplification of fluctuations in the correlated field model by adding axes
*Gordian Edenhofer, updated 2021-01-13T18:02:56Z*

I am working on a problem in which I might dynamically add axes, and as such was relying on the normalization of the zero-mode in the correlated field model. I was surprised to notice that with each added axis the fluctuations of the correlated field get amplified by the inverse of `offset_std_mean`. I am using NIFTy6 @ afc3e2df5e6b6d5f90eb414f07beeeefec2a085d.
The following code snippet reproduces the described amplification:
```python
import numpy as np
import nifty6 as ift

cf_axes = {
"offset_mean": 0.,
"offset_std_mean": 1e-3,
"offset_std_std": 1e-4,
"prefix": ''
}
temporal_axis_fluctuations = {
'fluctuations_mean': 1.,
'fluctuations_stddev': 0.1,
'loglogavgslope_mean': -1.0,
'loglogavgslope_stddev': 0.5,
'flexibility_mean': 2.5,
'flexibility_stddev': 1.0,
'asperity_mean': 0.5,
'asperity_stddev': 0.5,
'prefix': 'temporal_axis'
}
fish_axis_fluctuations = {
'fluctuations_mean': 1.,
'fluctuations_stddev': 0.1,
'loglogavgslope_mean': -1.5,
'loglogavgslope_stddev': 0.5,
'flexibility_mean': 2.5,
'flexibility_stddev': 1.0,
'asperity_mean': 0.5,
'asperity_stddev': 0.3,
'prefix': 'fish_axis'
}
cfmaker = ift.CorrelatedFieldMaker.make(**cf_axes)
cfmaker.add_fluctuations(ift.RGSpace(7638), **temporal_axis_fluctuations)
# cfmaker.add_fluctuations(ift.RGSpace(8), **fish_axis_fluctuations)
```
yields
```
cf = cfmaker.finalize()
np.mean([cf(ift.from_random(cf.domain)).s_std() for _ in range(100)]) # \approx 1
```
which is what I would expect. However, if I uncomment the last line in the second cell above, the result becomes
```
cf = cfmaker.finalize()
np.mean([cf(ift.from_random(cf.domain)).s_std() for _ in range(100)]) # \approx 1e+3
```
This does not make sense to me and I was expecting the same result as in the previous cell.
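For what it's worth, the reported numbers are consistent with the claimed amplification by the inverse of `offset_std_mean`; a trivial arithmetic check using the values from the snippet above:

```python
# Values taken from the cf_axes dict and the two measurements above.
offset_std_mean = 1e-3
one_axis_std = 1.0                              # measured std with one fluctuation axis
two_axes_std = one_axis_std / offset_std_mean   # amplification by 1/offset_std_mean

assert two_axes_std == 1e3                      # matches the observed ~1e+3
```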
In short: am I missing something, or is there a bug in the correlated field model?

## [#311](https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/311) Partial Constant EnergyAdapter
*Philipp Frank, updated 2020-07-18T17:21:28Z. Assignee: Philipp Arras.*

The following minimal example raises an unexpected error on initialization of the `EnergyAdapter` object:
```python
import nifty7 as ift

d = ift.UnstructuredDomain(10)
a = ift.FieldAdapter(d, 'hi') + ift.FieldAdapter(d, 'ho')
lh = ift.GaussianEnergy(domain=a.target) @ a
x = ift.from_random(lh.domain)
H = ift.EnergyAdapter(x, lh, constants=['hi'])
```

## [#310](https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/310) Unexpected behaviour of ducktape and ducktape_left
*Philipp Frank, updated 2021-01-22T14:13:41Z*

I've noticed an unexpected behaviour of `ducktape_left` and `ducktape` in combination with `Linearization`.
The following example code:
```python
a = ift.Linearization.make_var(ift.from_random(ift.RGSpace(3))).ducktape_left('a')
```
does not produce an error but produces something that is not an instance of `Linearization` any more. Instead it returns an `_OpChain`. The same is true if we replace `ducktape_left` with `ducktape`.
I suspect this is due to the fact that `Linearization` is now an instance of `Operator` but does not implement `ducktape` itself.
In contrast, if we try this with a `Field`:
```python
a = ift.from_random(ift.RGSpace(3)).ducktape_left('a')
```
we get an error.
I see two possible solutions for this: either we disable the (currently unintended?) support of `Linearization` in `ducktape` and `ducktape_left`, or we implement a proper version of them for `Field` and `Linearization`.
@all does somebody know what is going on here?

## [#309](https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/309) Inconsistency in interface
*Philipp Arras, updated 2020-06-29T12:27:54Z*

`ift.StructuredDomain.total_volume` is a property, while `ift.DomainTuple.total_volume` is a method. Shall we unify that?

## [#308](https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/308) Fisher test for VariableCovarianceGaussianEnergy not sensitive
*Philipp Arras, updated 2021-04-09T12:56:45Z. Assignee: Reimar H Leike.*

On the branch `metric_tests` I have introduced a breaking factor (949578182c660faee1ea7344d79993d9d1b35310) and the test does not break. I am not sure how to fix it. Is it even possible to fix it in this case? Do we need two test cases, one where the mean is sampled and one where the variance is sampled?

## [#306](https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/306) Complex samples are inconsistent
*Reimar H Leike, updated 2020-06-19T12:31:22Z*

When we draw samples with complex `dtype`, we multiply their magnitude with `sqrt(0.5)`.
This is done because then a sample from a variance of 1 also has standard deviation of 1 on average.
However, this is inconsistent with its Hamiltonian.
If we instead regard a complex number as a pair of real numbers, then in order to reproduce the sample variance, we would need the vector (real, imag) to have covariance
```
/ 0.5   0  \
|          |
\  0   0.5 /
```
However, this covariance would lead to the Hamiltonian `(real**2 + imag**2)*2`, which is double what we would get when using the complex likelihood with covariance 1.

*Assignee: Philipp Frank*
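The inconsistency can be checked numerically with plain NumPy (no NIFTy involved): the `sqrt(0.5)` scaling does give unit sample variance, while the (real, imag)-pair Hamiltonian with covariance `diag(0.5, 0.5)` comes out exactly twice the complex one with covariance 1.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Sample as described above: complex standard normal scaled by sqrt(0.5).
z = np.sqrt(0.5) * (rng.normal(size=n) + 1j * rng.normal(size=n))

sample_var = np.mean(np.abs(z) ** 2)       # ~ 1, as intended by the scaling

h_pair = 2 * (z.real ** 2 + z.imag ** 2)   # (real, imag) Hamiltonian, C = diag(0.5, 0.5)
h_complex = np.abs(z) ** 2                 # complex Hamiltonian with covariance 1

assert abs(sample_var - 1.0) < 0.02        # unit sample variance holds
assert np.allclose(h_pair, 2 * h_complex)  # the factor-of-two inconsistency
```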