# ift issues (https://gitlab.mpcdf.mpg.de/groups/ift/-/issues)

## nifty#269: Unneeded branches? (Martin Reinecke, 2019-10-08)
What is the status of the branches `mf_plus_add` and `mfcorrelatedfield_withzeromodeprior`? They appear to be superseded by `multi_freq_plus_gridder`. Can they be removed, @jruestig?

## nifty#275: MPI broken master branch (Reimar Heinrich Leike, 2019-10-04)
The recent merge of operator spectra to master broke the functionality of the MPI KL, since NewtonCG now requires the metric of the KL as an operator.

## nifty_gridder#3: Segfault when epsilon >= 1e-1 (roughly) (Philipp Arras, 2019-10-02)
I have edited the tests in order to trigger the bug. See branch `bugreport` and https://gitlab.mpcdf.mpg.de/ift/nifty_gridder/-/jobs/938200

## nifty_gridder#1: Feature wishlist (Philipp Arras, 2019-09-24)
- [x] Add single precision mode
- [x] Compute holographic matrix as sparse matrix
- [x] Set number of threads for fft from outside
- [x] Compute w-screen on the fly and apply it in dirty2grid and grid2dirty
- [x] Exploit symmetries in w-screen
- [x] Add primary beam to gridder? No, at least not now.
- [x] Add weighted versions for vis2grid, grid2vis, apply_holo
- [x] Test wstacking vs modified DFT

## nifty_gridder#2: Outline for improved w-stacking (Martin Reinecke, 2019-08-30)
I'm thinking of adding a (more or less) automated method that performs gridding/degridding following the "improved w-stacking" approach. The interface would be extremely simple:
`vis2dirty(baselines, gconf, idx, vis, [distance between w planes]) -> dirty`
The code would internally do the following:
- determine min and max w values of the provided data
- determine number and locations of the required w planes
- for every w plane:
  - grid the relevant subset of visibilities onto the plane
  - run grid2dirty on the result
  - apply wscreen to the result
  - add result to accumulated result
- apply correction function for gridding in w direction to accumulated result
- return accumulated result
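The loop above can be sketched in plain numpy. All names, the one-cell "gridding", and the w-screen expression here are illustrative stand-ins, not the actual nifty_gridder API:

```python
import numpy as np

def vis2dirty_sketch(w_vals, vis, npix, dw):
    """Illustrative w-stacking loop; real gridding kernels are omitted."""
    wmin, wmax = w_vals.min(), w_vals.max()              # min/max w values
    nplanes = int(np.ceil((wmax - wmin) / dw)) + 1       # number of w planes
    acc = np.zeros((npix, npix), dtype=np.complex128)    # accumulated result
    # direction cosines for the w screen (standard w-term approximation)
    l = np.fft.fftfreq(npix)[:, None]
    m = np.fft.fftfreq(npix)[None, :]
    n = np.sqrt(np.clip(1.0 - l**2 - m**2, 0.0, None))
    for p in range(nplanes):                             # for every w plane
        w0 = wmin + p * dw
        sel = np.abs(w_vals - w0) <= dw                  # relevant subset
        grid = np.zeros((npix, npix), dtype=np.complex128)
        grid[0, 0] = vis[sel].sum()                      # placeholder gridding
        dirty = np.fft.ifft2(grid)                       # stand-in for grid2dirty
        acc += dirty * np.exp(2j * np.pi * w0 * (n - 1.0))  # apply w screen
    # the correction function for gridding in w direction is omitted here
    return acc
```

The real implementation would of course reuse the existing gridder kernels for the per-plane gridding step.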
Adjoint operation will follow once we are sure that this works.
Is there anything I'm missing, @parras?
So far it seems as if a (somewhat sub-optimal) prototype could be written pretty quickly.

## nifty#273: Negative distances (Philipp Arras, 2019-07-30)
Is it intended that distances of RGSpaces can be negative? I can imagine that this leads to unexpected behaviour...

## nifty#271: Should we "fix" exact zeros in PoissonianEnergy? (Martin Reinecke, 2019-06-25)
Several people seem to encounter problems when applying `PoissonianEnergy` to a field that contains exact zeros.
Would it be acceptable to replace these zeros by `np.finfo(x.dtype).tiny` in `PoissonianEnergy.apply()` in order to address the problem, or should this be done somewhere else?
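A minimal numpy sketch of the proposed replacement (`fix_zeros` is a hypothetical helper for illustration, not NIFTy API):

```python
import numpy as np

def fix_zeros(x):
    """Replace exact zeros with the smallest positive normal number of
    x's dtype, so that e.g. log(x) inside the energy stays finite."""
    return np.where(x == 0, np.finfo(x.dtype).tiny, x)

vals = np.array([0.0, 1e-3, 2.0])
fixed = fix_zeros(vals)   # zeros replaced, nonzero entries untouched
```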
@lplatz, @hutsch, @reimar, @parras

## nifty#272: Representation of ScalingOperator.ducktape() (Lukas Platz, 2019-06-25)
The representation of `ift.ScalingOperator(20., domain).ducktape('test')` is generated with swapped operators:
> ChainOperator:
> FieldAdapter <- ('test',)
> ScalingOperator (20.0)
Other operators do not exhibit this behavior (tested DiagonalOperator, InverseGammaPrior). The domain and target of the resulting ChainOperator are correct, so I think this is just a quirk in the representation generation.

## nifty#270: Accuracy of nfft (Philipp Arras, 2019-05-21)
I have investigated the accuracy of our nfft implementation for 1 and for 100 data points:
## Current implementation
### 1 data point
![nfft_accuracy_old1](/uploads/e59d21c7e258153ef95ce0d5585a8f1a/nfft_accuracy_old1.png)
### 100 data points
![nfft_accuracy_old](/uploads/771abb3bb5fd94ba5efb0e27114e06fc/nfft_accuracy_old.png)
---
## Alternative
We use the following criterion to decide on the oversampling factor:
```python
rat = 3 if eps < 1e-11 else 2
```
If one always takes an oversampling factor of 2, we arrive at the following results:
### 1 data point
![nfft_accuracy_new1](/uploads/6b88362864b0ef494b893647b53a46c7/nfft_accuracy_new1.png)
### 100 data points
![nfft_accuracy_new](/uploads/31055329393f439ea64dd8fdfa00d9db/nfft_accuracy_new.png)
Therefore, my suggestion is to drop oversampling with a factor of three since it does not add relevant accuracy.
Here is the original paper for the code. Chapter 4 describes the algorithm which we use.
[nufft_paper.pdf](/uploads/85e154c34d7be2d7c7f0c6a71acfde1f/nufft_paper.pdf)
This is the code which produces the plots.
[gridding_accuracy.py](/uploads/72a8ebab21724318d786821f07948bff/gridding_accuracy.py)
@mtr, @phaim

## nifty#262: Nifty citation (Philipp Arras, 2019-05-21)
When citing NIFTy with A&A style I get in the references:
```
Martin Reinecke, Theo Steininger, Marco Selig. 2019, NIFTy – Numerical Information Field TheorY
```
without any link. Can this be fixed? Also: the year is missing in https://gitlab.mpcdf.mpg.de/ift/nifty/blob/NIFTy_5/docs/source/citations.rst

## nifty#268: Do we still need NFFT? (Martin Reinecke, 2019-04-29)
I'd like to remove the `nifty5.library.NFFT` class, since it will most likely not be used in the future and has a fairly inconvenient dependency on `pynfft` and the nfft package.
Any objections? @parras?

## nifty#267: SumOperator is not imported with nifty (Philipp Haim, 2019-04-26)
When importing nifty5, SumOperator is not included. Is that intentional?

## nifty#264: Bug in metric (Philipp Arras, 2019-04-16)
The following script crashes in the current version of NIFTy, and even on the branch `opsumdomains` the last example crashes. Does anyone have ideas how to fix this?
```python
import nifty5 as ift

dom = ift.RGSpace(10)
diffuse = ift.ducktape(dom, None, 'xi')
diffuse2 = ift.ducktape(dom, None, 'xi2')
e1 = ift.GaussianEnergy(domain=diffuse.target)
ops = []
ops.append((e1 + e1) @ diffuse)
ops.append(e1 @ diffuse)
ops.append(e1 @ diffuse + e1 @ diffuse2)
for op in ops:
    pos = ift.full(op.domain, 0)
    lin = ift.Linearization.make_var(pos, True)
    print(op(lin).metric)
    assert isinstance(op(lin).metric.domain, ift.MultiDomain)
    assert op(lin).metric.domain is op.domain
```

## nifty#265: MaskOperator in getting_started_1 (Julia Stadler, 2019-04-16)
The demo getting_started_1 uses a DiagonalOperator to implement the chessboard mask. I think replacing it with a MaskOperator would be helpful for people new to the code (I checked the demos first for how to implement masking ...)

## nifty#263: Properly implementing a convolution (Julia Stadler, 2019-03-14)
My response convolves the signal with a non-spherical PSF, which is specified by a field with the same domain as the signal. To avoid issues with periodic boundaries I apply zero padding, so the exact steps are:
* zero-pad the PSF from the center, apply a Fourier transform and construct a DiagonalOperator from the result
* zero-pad the signal from the right/top and apply a Fourier transform
* apply the PSF-operator to the signal field followed by an inverse Fourier transform
* apply the adjoint signal zero padding operation
* take the real value
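The steps above can be sketched in one dimension with plain numpy (NIFTy operators replaced by arrays; `convolve_padded` is an illustrative helper, not part of the codebase):

```python
import numpy as np

def convolve_padded(signal, psf):
    """1D sketch of the padded FFT convolution described above."""
    n = signal.shape[0]
    npad = 2 * n                        # pad to twice the size
    half = n // 2
    # zero-pad the PSF from the center, so its peak stays at index 0
    psf_pad = np.zeros(npad)
    psf_pad[:half] = psf[:half]
    psf_pad[-(n - half):] = psf[half:]
    kernel = np.fft.fft(psf_pad)        # plays the role of the DiagonalOperator
    # zero-pad the signal from the right
    sig_pad = np.zeros(npad)
    sig_pad[:n] = signal
    conv = np.fft.ifft(np.fft.fft(sig_pad) * kernel)
    # adjoint zero padding = crop back to n pixels, then take the real part
    return conv.real[:n]
```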
The problem with this procedure is that it reduces the flux at the borders of the image, where the PSF smears the additional zeros out to the pixels I'm interested in. I attached two plots to illustrate the problem: one shows how point sources are smeared out to the opposite side of the image without zero padding, the other how the zero padding leaks into the image at the border. The signal contains two point sources which get brighter with bin number, and equally the PSF gets wider.
Has anyone encountered a similar problem before? Or are there any NIFTy tools to tackle this that I'm not aware of? My approach would be to extend the signal field beyond the data region and to include a mask in the response to account for that. Or is there a smarter approach?

![data_no-zero-padding](/uploads/90b1f2e428d6316b0f4886ec40535fe4/data_no-zero-padding.png)
![data_outer-zero-padding](/uploads/75851eacc3174eccb9be766b57bdad28/data_outer-zero-padding.png)
## nifty#260: extending clip (Reimar Heinrich Leike, 2019-02-25)
Currently clip clips all values by the same clipping value. However, in numpy clip can also take an array, such that every value gets clipped by a different clipping value. Field.clip can in principle take a dobj and then clips differently for every entry; however, this does not seem intended.
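For reference, numpy's clip already accepts array-valued bounds, which is the behaviour proposed here for Field/MultiField:

```python
import numpy as np

x = np.array([0.5, 2.0, -3.0])
lo = np.array([0.0, 0.0, -1.0])   # per-entry lower bounds
hi = np.array([1.0, 1.5, 1.0])    # per-entry upper bounds
clipped = np.clip(x, lo, hi)      # -> [0.5, 1.5, -1.0]
```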
It would be advantageous to support passing a Field/MultiField instead of a scalar for clip.

## nifty#259: bug in linearization with multifield as target (Reimar Heinrich Leike, 2019-02-07)
It is possible to construct sensible operators out of NIFTy routines that fail if called with Linearizations. [minimal_example.py](/uploads/064781a8a4c0a0af2872a52efb5c0814/minimal_example.py)

## nifty#257: Demos do not work for python 3.5.5 (Jakob Knollmueller, 2019-02-01)
Hi,
all demos fail to run with Python 3.5.5 because the `f` string prefix (f-strings, introduced in Python 3.6) is not supported.
Jakob

## nifty#256: Tag release commit! (Philipp Arras, 2019-02-01)
Also:
- First demo runs, then build pages

## nifty#226: Minimizers don't work reliably on nonlinear problems (Martin Reinecke, 2019-02-01)
We are seeing quite a lot of strange problems in our minimizations, typically when nonlinearities are involved.
So far I have not found any real bugs in our minimizers, so my suspicion is that we have to add constraints to our fields of unknowns, to prevent them wandering into regions where energy values are too high, gradients too small etc. Our current attempts to clip the nonlinearities or to manually reset the positions whenever they move out of the accepted area do not work well, so I suggest trying algorithms like SciPy's 'L-BFGS-B', 'COBYLA', etc., which have native support for constraints.
If that turns out to work well, we can try to add constraints also to our own solvers.
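As an illustration of the suggested direction, SciPy's L-BFGS-B accepts box constraints directly (toy quadratic energy for demonstration, not a NIFTy energy):

```python
import numpy as np
from scipy.optimize import minimize

def energy(x):
    return np.sum((x - 2.0) ** 2)   # minimum at x = 2, outside the box

def gradient(x):
    return 2.0 * (x - 2.0)

x0 = np.zeros(3)
bounds = [(-1.0, 1.0)] * 3          # keep the unknowns in an allowed region
res = minimize(energy, x0, jac=gradient, method='L-BFGS-B', bounds=bounds)
# the minimizer stops on the boundary instead of wandering off: res.x == [1, 1, 1]
```

COBYLA handles more general inequality constraints via the `constraints` argument, at the price of being derivative-free.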