ift issues — https://gitlab.mpcdf.mpg.de/groups/ift/-/issues

**Inconsistency of .sum() for Fields and Linearizations** (Philipp Arras, 2020-03-11)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/284

Is it wanted that `Linearization.sum()` returns a scalar Field but `Field.sum()` returns an actual scalar, i.e. in most situations a `float`? `MultiField.sum()` inherits the behaviour from `Field`.
Linearizations: https://gitlab.mpcdf.mpg.de/ift/nifty/blob/NIFTy_6/nifty6/linearization.py#L237
Fields: https://gitlab.mpcdf.mpg.de/ift/nifty/blob/NIFTy_6/nifty6/field.py#L382
MultiFields: https://gitlab.mpcdf.mpg.de/ift/nifty/blob/NIFTy_6/nifty6/multi_field.py#L169

**Reduce licensing boilerplate** (Gordian Edenhofer, 2020-05-13)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/283

If we really feel the need to put licensing boilerplate in every source file, let's at least make it concise, e.g. by using https://spdx.org/ids.

**FieldZeroPadder** (Vincent Eberle, 2020-02-19; assignee: Philipp Frank)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/282

`FieldZeroPadder.adjoint` doesn't work in 2 dimensions.

**[NIFTy6] Boost performance of Correlated Fields** (Philipp Arras, 2019-12-06)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/281

Do the PowerDistributor only on one slice and copy it with ContractionOperator.adjoint afterwards. The power distributor currently appears to be the bottleneck of correlated field evaluations.

**[NIFTy6] Implement partial insert operator for multi fields** (Philipp Arras, 2019-12-06)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/280

**Consistent ordering of domain and non-domain arguments for standard operators** (Gordian Edenhofer, 2019-12-06)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/279

Currently, the order in which arguments shall be provided differs between operators, e.g.
`ift.from_global_data` and `ift.full` both take the domain as the first argument, while `ift.ScalingOperator` requires the factor with which to scale a field as the first argument and the domain as the second. Moving to NIFTy 6 might be the perfect opportunity to fix this, either by consistently requiring the domain to be e.g. the first argument or by making `ift.ScalingOperator` more flexible by swapping arguments in a suitable manner during initialization.

**Getting rid of standard parallelization?** (Martin Reinecke, 2019-12-09)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/278

The default way of parallelizing large tasks with MPI is currently to distribute fields over MPI tasks along their first axis. While it appears to work (our tests run fine with MPI), it has not been used during the last few years, and other parallelization approaches seem more promising.
If there is general agreement, I propose to remove this parallelization from the code, which would make NIFTy much smaller and easier to use and maintain.

**Changelog for NIFTy_6** (Lukas Platz, 2019-12-06)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/277

Dear @parras, @pfrank, @phaim and @mtr,
should we introduce a changelog for the NIFTy_6 branch?
Currently, there are 107 files changed w.r.t. NIFTy 5, but no indication which of these changes affect the user interface.
If there was a list of (compatibility-breaking) changes to the UI, NIFTy 5 users who want to update their code (now or in the distant future) could do so without having to find out by trial and error which changes they need to make.
Currently, your memory of the changes is probably still fresh enough so you could create such a changelog without excessive trouble, and it would be a very valuable feature to everyone else (and spare you explaining it over and over again).
What do you think? Would you be willing to write up a short summary of your UI changes?
And how about a policy of "no merges into NIFTy_6 without a UI changelog entry (if applicable)" for the future?
Thank you all for building the multi frequency capabilities and for considering this,
Lukas

**Segfault when epsilon >= 1e-1 (roughly)** (Philipp Arras, 2019-10-02)
https://gitlab.mpcdf.mpg.de/ift/nifty_gridder/-/issues/3

I have edited the tests in order to trigger the bug. See branch `bugreport` and https://gitlab.mpcdf.mpg.de/ift/nifty_gridder/-/jobs/938200

**MPI broken master branch** (Reimar H Leike, 2019-10-04; assignee: Reimar H Leike)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/275

The recent merge of operator spectra into master broke the functionality of the MPI KL, since NewtonCG now requires the metric of the KL as an operator.

**Outline for improved w-stacking** (Martin Reinecke, 2019-08-30)
https://gitlab.mpcdf.mpg.de/ift/nifty_gridder/-/issues/2

I'm thinking of adding a (more or less) automated method that performs gridding/degridding following the "improved w-stacking" approach. The interface would be extremely simple:
`vis2dirty(baselines, gconf, idx, vis, [distance between w planes]) -> dirty`
The code would internally do the following:
- determine min and max w values of the provided data
- determine number and locations of the required w planes
- for every w plane:
  - grid the relevant subset of visibilities onto the plane
  - run grid2dirty on the result
  - apply wscreen to the result
  - add result to accumulated result
- apply correction function for gridding in w direction to accumulated result
- return accumulated result
Adjoint operation will follow once we are sure that this works.
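The loop above can be sketched in a toy form. Everything below apart from the `vis2dirty` name and the w-plane bookkeeping is a stand-in under stated assumptions (nearest-neighbour "gridding", toy direction cosines, and the w-direction correction omitted), not the nifty_gridder implementation:

```python
import numpy as np

# Toy sketch of the proposed vis2dirty loop. The nearest-neighbour
# "gridding", the sky coordinates and the FFT convention are stand-ins;
# only the w-plane bookkeeping follows the outline above.
def vis2dirty(u, v, w, vis, npix, dw):
    # determine min and max w values and the required w planes
    wmin = w.min()
    nplanes = max(1, int(np.ceil((w.max() - wmin) / dw)) + 1)
    wvals = wmin + dw * np.arange(nplanes)
    plane_idx = np.rint((w - wmin) / dw).astype(int)  # nearest w plane per vis

    x = np.fft.fftfreq(npix)  # toy direction cosines l, m
    l, m = np.meshgrid(x, x, indexing="ij")
    n_minus_1 = np.sqrt(np.maximum(0.0, 1.0 - l**2 - m**2)) - 1.0

    dirty = np.zeros((npix, npix), dtype=complex)
    for i, wp in enumerate(wvals):
        sel = plane_idx == i  # relevant subset of visibilities
        if not sel.any():
            continue
        grid = np.zeros((npix, npix), dtype=complex)
        iu = np.rint(u[sel] * npix).astype(int) % npix
        iv = np.rint(v[sel] * npix).astype(int) % npix
        np.add.at(grid, (iu, iv), vis[sel])            # grid the subset
        plane = np.fft.ifft2(grid)                     # grid2dirty stand-in
        plane *= np.exp(2j * np.pi * wp * n_minus_1)   # apply the w screen
        dirty += plane                                 # accumulate
    # (the correction function for gridding in w direction is omitted here)
    return dirty.real
```

Gridding kernels and correction functions aside, this is just the accumulation structure of the list above.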
Is there anything I'm missing, @parras?
So far it seems as if a (somewhat sub-optimal) prototype could be written pretty quickly.

**Feature wishlist** (Philipp Arras, 2019-09-24)
https://gitlab.mpcdf.mpg.de/ift/nifty_gridder/-/issues/1

- [x] Add single precision mode
- [x] Compute holographic matrix as sparse matrix
- [x] Set number of threads for fft from outside
- [x] Compute w-screen on the fly and apply it in dirty2grid and grid2dirty
- [x] Exploit symmetries in w-screen
- [x] Add primary beam to gridder? No, at least not now.
- [x] Add weighted versions for vis2grid, grid2vis, apply_holo
- [x] Test w-stacking vs modified DFT

**Negative distances** (Philipp Arras, 2019-07-30)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/273

Is it intended that distances of RGSpaces can be negative? I can imagine that this leads to unexpected behaviour...

**Representation of ScalingOperator.ducktape()** (Lukas Platz, 2019-06-25)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/272

The representation of `ift.ScalingOperator(20., domain).ducktape('test')` is generated with swapped operators:
> ChainOperator:
> FieldAdapter <- ('test',)
> ScalingOperator (20.0)
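Based on the issue's description, the listing one would expect prints the outermost factor first. A toy sketch (plain Python classes, not NIFTy internals) of that ordering:

```python
# Toy sketch, not NIFTy internals: a chain repr that lists the outermost
# factor (ScalingOperator) first and the innermost (FieldAdapter) last,
# matching the order reported for other operators.
class Op:
    def __init__(self, name):
        self.name = name

    def __repr__(self):
        return self.name


class Chain:
    def __init__(self, *factors):  # factors[0] is the outermost operator
        self.factors = factors

    def __repr__(self):
        return "ChainOperator:\n" + "\n".join(f"  {f!r}" for f in self.factors)


chain = Chain(Op("ScalingOperator (20.0)"), Op("FieldAdapter <- ('test',)"))
print(chain)
# ChainOperator:
#   ScalingOperator (20.0)
#   FieldAdapter <- ('test',)
```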
Other operators do not exhibit this behavior (tested DiagonalOperator, InverseGammaPrior). The domain and target of the resulting ChainOperator are correct, so I think this is just a quirk in the representation generation.

**Should we "fix" exact zeros in PoissonianEnergy?** (Martin Reinecke, 2019-06-25)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/271

Several people seem to encounter problems when applying `PoissonianEnergy` to a field that contains exact zeros.
Would it be acceptable to replace these zeros by `np.finfo(x.dtype).tiny` in `PoissonianEnergy.apply()` in order to address the problem, or should this be done somewhere else?
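A minimal sketch of the proposed clamping; `poissonian_energy` below is a hypothetical stand-alone version of the likelihood term, not the actual `PoissonianEnergy.apply()` implementation:

```python
import numpy as np

# Hypothetical stand-alone version of the Poissonian likelihood term
# lam - d*log(lam), illustrating the proposed fix (not the NIFTy code).
def poissonian_energy(lam, d):
    # Replace exact zeros by the smallest positive normal float so that
    # log(lam) stays finite; without this, 0 - 0*log(0) evaluates to nan.
    lam = np.maximum(lam, np.finfo(lam.dtype).tiny)
    return float(np.sum(lam - d * np.log(lam)))

lam = np.array([0.0, 1.0, 2.0])   # field with an exact zero
d = np.array([0.0, 1.0, 3.0])     # Poissonian count data
print(np.isfinite(poissonian_energy(lam, d)))  # True
```

The clamp changes the result only by `tiny` (about 2e-308 in double precision), so it is numerically invisible wherever the field is nonzero.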
@lplatz, @hutsch, @reimar, @parras

**Accuracy of nfft** (Philipp Arras, 2019-05-21)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/270

I have investigated the accuracy of our nfft implementation for 1 and for 100 data points:
## Current implementation
### 1 data point
![nfft_accuracy_old1](/uploads/e59d21c7e258153ef95ce0d5585a8f1a/nfft_accuracy_old1.png)
### 100 data points
![nfft_accuracy_old](/uploads/771abb3bb5fd94ba5efb0e27114e06fc/nfft_accuracy_old.png)
---
## Alternative
We use the following criterion to decide on the oversampling factor:
```python
rat = 3 if eps < 1e-11 else 2
```
If one always takes an oversampling factor of 2, we arrive at the following results:
### 1 data point
![nfft_accuracy_new1](/uploads/6b88362864b0ef494b893647b53a46c7/nfft_accuracy_new1.png)
### 100 data points
![nfft_accuracy_new](/uploads/31055329393f439ea64dd8fdfa00d9db/nfft_accuracy_new.png)
Therefore, my suggestion is to drop oversampling with a factor of three since it does not add relevant accuracy.
Here is the original paper for the code. Chapter 4 describes the algorithm which we use.
[nufft_paper.pdf](/uploads/85e154c34d7be2d7c7f0c6a71acfde1f/nufft_paper.pdf)
This is the code which produces the plots.
[gridding_accuracy.py](/uploads/72a8ebab21724318d786821f07948bff/gridding_accuracy.py)
@mtr, @phaim

**Unneeded branches?** (Martin Reinecke, 2019-10-08)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/269

What is the status of the branches `mf_plus_add` and `mfcorrelatedfield_withzeromodeprior`? They appear to be superseded by `multi_freq_plus_gridder`. Can they be removed, @jruestig?

**Do we still need NFFT?** (Martin Reinecke, 2019-04-29)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/268

I'd like to remove the `nifty5.library.NFFT` class, since it will most likely not be used in the future and has a fairly inconvenient dependency on `pynfft` and the nfft package.
Any objections? @parras?

**SumOperator is not imported with nifty** (Philipp Haim, 2019-04-26)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/267

When importing nifty5, SumOperator is not included. Is that intentional?

**Amplitude model** (Philipp Arras, 2019-11-29)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/266

I am back at considering what kind of default amplitude model is suitable for NIFTy. I think the current `SLAmplitude` operator does not provide a sensible prior on the zeromode. Therefore, I think we should change it.
What I do for lognormal problems is the following: set the zeromode of the amplitude operator A constantly to one and use as sky model exp(ht @ U @ (A*xi)), where U is an operator which turns a standard normal distribution into a flat distribution with a lower and an upper bound.
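One way to realize such a U is via the CDF of the standard normal distribution. The sketch below is illustrative only (the name `uniform_prior` is hypothetical, not NIFTy API): it maps a standard normal variable to a flat distribution on [lo, hi].

```python
import math

import numpy as np

# Hedged sketch of the operator U described above: a standard normal
# variable xi is pushed through the normal CDF (uniform on [0, 1]) and
# then rescaled to [lo, hi]. `uniform_prior` is an illustrative name.
def uniform_prior(xi, lo, hi):
    cdf = 0.5 * (1.0 + np.vectorize(math.erf)(xi / math.sqrt(2.0)))
    return lo + (hi - lo) * cdf

xi = np.array([-2.0, 0.0, 2.0])
print(uniform_prior(xi, 1.0, 3.0))  # values in (1, 3), 2.0 at xi = 0
```

The mapping is monotone and keeps the latent variable standard-normal, which is what makes it usable as a bounded prior inside a standardized model.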
My problem is that `demos/getting_started_3.py` does not show how to properly set up a prior on the zero mode.
@reimar, @pfrank, do you have a suggestion how to deal with this?