# ift issues
https://gitlab.mpcdf.mpg.de/groups/ift/-/issues

## Feature wishlist
https://gitlab.mpcdf.mpg.de/ift/nifty_gridder/-/issues/1
Philipp Arras (parras@mpa-garching.mpg.de), 2019-09-24

- [x] Add single precision mode
- [x] Compute holographic matrix as sparse matrix
- [x] Set number of threads for FFT from outside
- [x] Compute w-screen on the fly and apply it in dirty2grid and grid2dirty
- [x] Exploit symmetries in w-screen
- [x] Add primary beam to gridder? No, at least not now.
- [x] Add weighted versions for vis2grid, grid2vis, apply_holo
- [x] Test w-stacking vs. modified DFT

## README: Add flags to configure
https://gitlab.mpcdf.mpg.de/ift/pyHealpix/-/issues/1
Theo Steininger, 2017-05-24

One could add the flags `--enable-openmp --enable-native-optimizations` to the install instructions.

Assignee: Martin Reinecke

## Use sphinx for automated build of documentation
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/124
Theo Steininger, 2018-02-15

In the past I was successful with sphinx + napoleon:
http://www.sphinx-doc.org/en/stable/ext/napoleon.html

Assignee: Philipp Frank

## tests: only make h5py test if h5py is available.
https://gitlab.mpcdf.mpg.de/ift/D2O/-/issues/1
Theo Steininger, 2017-05-15

Assignee: Theo Steininger

## Outline for improved w-stacking
https://gitlab.mpcdf.mpg.de/ift/nifty_gridder/-/issues/2
Martin Reinecke, 2019-08-30

I'm thinking of adding a (more or less) automated method that performs gridding/degridding following the "improved w-stacking" approach. The interface would be extremely simple:
`vis2dirty(baselines, gconf, idx, vis, [distance between w planes]) -> dirty`
The code would internally do the following:
- determine min and max w values of the provided data
- determine number and locations of the required w planes
- for every w plane:
  - grid the relevant subset of visibilities onto the plane
  - run grid2dirty on the result
  - apply wscreen to the result
  - add result to accumulated result
- apply correction function for gridding in w direction to accumulated result
- return accumulated result
Adjoint operation will follow once we are sure that this works.
Is there anything I'm missing, @parras?
So far it seems as if a (somewhat sub-optimal) prototype could be written pretty quickly.
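The per-plane loop described above can be sketched in a few lines. This is only an illustration of the accumulation structure, not the gridder's actual code: `grid_plane`, `grid2dirty`, `wscreen` and `wcorrection` are hypothetical stand-ins for the real kernels, and the plane spacing/selection logic is deliberately simplistic.

```python
import numpy as np

def vis2dirty_wstacked(w_values, vis, nplanes,
                       grid_plane, grid2dirty, wscreen, wcorrection):
    """Sketch of the improved-w-stacking accumulation loop (nplanes >= 2)."""
    wmin, wmax = w_values.min(), w_values.max()        # min/max w of the data
    wplanes = np.linspace(wmin, wmax, nplanes)         # locations of the w planes
    dw = wplanes[1] - wplanes[0]
    dirty = 0.
    for w0 in wplanes:
        sel = np.abs(w_values - w0) <= dw              # visibilities near this plane
        grid = grid_plane(vis[sel], w_values[sel], w0) # grid subset onto the plane
        dirty = dirty + wscreen(grid2dirty(grid), w0)  # FFT, w-screen, accumulate
    return wcorrection(dirty)                          # gridding correction in w
```

With trivial stubs for the four callables the function just accumulates per-plane partial sums, which makes the control flow easy to verify in isolation.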
## pybind11 must already be installed
https://gitlab.mpcdf.mpg.de/ift/pyHealpix/-/issues/2
Theo Steininger, 2018-01-19

Without pybind11 being already installed, the installation terminates with
```
./pyHealpix.cc:30:31: fatal error: pybind11/pybind11.h: No such file or directory
#include <pybind11/pybind11.h>
```
Doing a ``pip install pybind11`` beforehand solves the problem.

## pyfftw tests are not being run
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/125
Martin Reinecke, 2017-05-16

I just noticed that continuous integration does not run the pyfftw tests, even when the package should be available. This accounts for the 2400 tests that are currently skipped in the pipelines.
I'm not sure why this happens. The relevant tests in `test_fft_operator.py` are guarded by
```
if module == "fftw" and "pyfftw" not in di:
raise SkipTest
```
which looks correct to me.
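One way to rule out an environment problem is to check, inside the CI job itself, whether pyfftw is importable at all. A small diagnostic sketch (the helper name is hypothetical, not part of the test suite) that mirrors the `SkipTest` guard without importing the package:

```python
import importlib.util

def available_fft_modules():
    # Backends worth parametrizing the FFT tests over; "fftw" is only
    # included when pyfftw can actually be imported, i.e. exactly the
    # condition the SkipTest guard in test_fft_operator.py checks.
    mods = ["numpy"]
    if importlib.util.find_spec("pyfftw") is not None:
        mods.append("fftw")
    return mods
```

Printing the result of this helper in the CI log would immediately show whether the image sees pyfftw or whether the skip logic itself is at fault.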
## Segfault when epsilon >= 1e-1 (roughly)
https://gitlab.mpcdf.mpg.de/ift/nifty_gridder/-/issues/3
Philipp Arras (parras@mpa-garching.mpg.de), 2019-10-02

I have edited the tests in order to trigger the bug. See branch `bugreport` and https://gitlab.mpcdf.mpg.de/ift/nifty_gridder/-/jobs/938200

## Trouble with SteepestDescent.
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/126
Matevz, Sraml (sraml), 2017-07-08
```
np.random.seed(0)
N_dim = 500
x_space = RGSpace(N_dim)
x = Field(x_space, val=np.random.rand(N_dim))
N = DiagonalOperator(x_space, diagonal=1.)

class QuadraticPot(Energy):
    def __init__(self, position, N):
        super(QuadraticPot, self).__init__(position)
        self.N = N

    def at(self, position):
        return self.__class__(position, N=self.N)

    @property
    def value(self):
        H = 0.5 * self.position.dot(self.N.inverse_times(self.position))
        return H.real

    @property
    def gradient(self):
        g = self.N.inverse_times(self.position)
        return_g = g.copy_empty(dtype=np.float)
        return_g.val = g.val.real
        return return_g

    @property
    def curvature(self):
        return self.N

minimizer = SteepestDescent(iteration_limit=1000,
                            convergence_tolerance=1E-4,
                            convergence_level=3)
energy = QuadraticPot(position=x, N=N)
(energy, convergence) = minimizer(energy)
```
I'm feeding the SteepestDescent method a quadratic potential with zero mean. If you run it, it converges after around 5 iterations, but then it gets trapped in a loop until it reaches `iteration_limit=1000`. The produced result is still correct.
## Find good convergence criterion for DescentMinimizer
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/127
Theo Steininger, 2017-07-11

At the moment the `convergence number` is defined as `delta = abs(gradient).max() * (step_length/gradient_norm)`.
Are there better measures? For a simple quadratic potential, this definition of `delta` causes a steepest descent not to trust the (actually true) minimum.
A good measure should be independent of scale and initial conditions.
Some possible candidates are:
* `(new_energy.position - energy.position).norm()`
* `(new_energy.position - energy.position).max()`
* `((new_energy.position - energy.position)/(1+energy.position)).norm()`
* `((new_energy.position - energy.position)/(1+energy.position)).max()`
* `step_length`
* `step_length / energy.position.norm()`
* `gradient.norm()`
Assignee: Martin Reinecke

## Powerspectrum consistency
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/128
Reimar Heinrich Leike, 2017-05-22

At the current state of nifty it is not properly documented, and not intuitive, when to give a method a power spectrum and when to pass its root. Different conventions are used for power_synthesize and the Power Operator.

Assignee: Theo Steininger

## Bug in plotting of GLSpace
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/129
Theo Steininger, 2017-05-24

```
from nifty import *
x = GLSpace(15)
f = Field.from_random(domain=x, random_type="normal")
my_plotter = plotting.GLPlotter(title='test')
my_plotter(f)
```
yields
![newplot](/uploads/0df31a3f1e97de7d9e20dba05d797f19/newplot.png)
Plotting of HPSpace works fine.
Assignee: Martin Reinecke

## What is the correct data type for field norms?
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/130
Martin Reinecke, 2017-05-23

While running the nonlinear Wiener filter demo, I observed warnings about casting complex values to float. This can be traced back to the norm() method of the Field class returning complex numbers if the field is complex. At least for the 2-norm this should probably be replaced by a float value, but I'm not absolutely sure about other possible norms.

## SmoothingOperator: do we really need inverse_times() and friends?
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/131
Martin Reinecke, 2017-08-18

Some of the current regression tests show divisions by zero when calling inverse_times(), because exp(-x*x) for x around 20-30 becomes zero.
If there is no need for this functionality in real-world code, I'd suggest disabling the problematic methods. If that's not possible, I'd encourage testing for underflow and raising an exception if it occurs.
## DiagonalOperator: trace_log() and log_determinant()
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/132
Martin Reinecke, 2017-06-21

trace_log() and log_determinant() should throw exceptions if the trace/determinant is <= 0.

## SmoothingOperator causes log(0.) for logarithmic distances
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/133
Theo Steininger, 2017-06-21

https://gitlab.mpcdf.mpg.de/ift/NIFTy/blob/master/nifty/operators/smoothing_operator/fft_smoothing_operator.py#L27
Smoothing should be translationally invariant. Couldn't we just add 1.0 to the distance array?
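A minimal illustration of both the problem and the proposed shift, assuming the operator takes the logarithm of a regular-grid distance array whose first entry is zero:

```python
import numpy as np

# The first entry of a regular-grid distance array is 0., so its logarithm
# is -inf; shifting every distance by 1.0 keeps all entries finite.
distances = np.arange(5, dtype=np.float64)
with np.errstate(divide="ignore"):
    unshifted = np.log(distances)        # unshifted[0] is -inf
shifted = np.log(distances + 1.0)        # finite everywhere
```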
Assignee: Martin Reinecke

## Direct smoothing with logarithmic distances doesn't conserve the norm.
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/134
Theo Steininger, 2017-06-12

The direct smoothing doesn't conserve the norm of an input field and also distributes the weight differently depending on the scale.
```
from nifty import *
x = RGSpace((2048), harmonic=True)
xp = PowerSpace(x)
f = Field(xp, val = 1.)
f.val[100] = 10
f.val[500] = 10
f.val[50] = 10
sm = SmoothingOperator(xp, sigma=0.1, log_distances=True)
g = sm(f)
f.dot(f)
g.dot(g)
plo = plotting.PowerPlotter(title='power.html')
plo(g)
```
![image](/uploads/cb6ec21b1814083df3a09594367cae14/image.png)
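For reference, a direct-space smoother can be made to conserve the *total* of the input field by normalizing the Gaussian kernel per input pixel (column-normalized), so each pixel distributes exactly weight 1. This is a NumPy-only sketch of that reference property, not NIFTy's implementation; note that the quadratic norm `f.dot(f)` is still reduced by smoothing, so it is the total weight, not the 2-norm, that a normalized kernel preserves.

```python
import numpy as np

def direct_smooth(f, sigma, distances):
    # Gaussian kernel between all pairs of sample positions; normalizing
    # the columns makes every input pixel contribute total weight 1,
    # so sum(f) is conserved exactly.
    k = np.exp(-0.5 * (distances[:, None] - distances[None, :])**2 / sigma**2)
    k /= k.sum(axis=0, keepdims=True)
    return k @ f

x = np.linspace(0., 1., 128)
f = np.ones(128)
f[30] = 10.                       # a spike, as in the report above
g = direct_smooth(f, 0.05, x)
```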
Assignee: Martin Reinecke

## Drawing random fields, problem on monopol and highest frequency
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/135
Pumpe, Daniel (dpumpe), 2017-05-23

Jakob and I encountered a problem with `.power_synthesize`.
```
from nifty import *
import numpy as np
x1 = RGSpace(200, zerocenter=False, distances=.1)
k1 = RGRGTransformation.get_codomain(x1)
# setting spaces
npix = np.array([200]) # number of pixels
total_volume = 1. # total length
# setting signal parameters
lambda_s = .2 # signal correlation length
sigma_s = 1.5 # signal variance
# calculating parameters
k_0 = 4./(2*np.pi*lambda_s)
a_s = sigma_s**2.*lambda_s*total_volume
# creation of spaces
x1 = RGSpace(npix, distances=total_volume/npix,
zerocenter=False)
k1 = RGRGTransformation.get_codomain(x1)
p1 = PowerSpace(harmonic_partner=k1, logarithmic=False)
# # creating Power Operator with given spectrum
spec = (lambda k: a_s/(1+(k/k_0)**2)**2)
p_field = Field(p1, val=spec, dtype=np.float64)
# creating FFT-Operator and Response-Operator with Gaussian convolution
FFTOp = FFTOperator(domain=x1, target=k1,
domain_dtype=np.float64,
target_dtype=np.complex128)
# drawing a random field
sp = Field(p1, val=lambda z: spec(z) ** (1. / 2))
no_spec= 10000
sps = Field(p1, val=0.)
for ii in xrange(no_spec):
sk = sp.power_synthesize(real_signal=True, mean=0.)
sps += Field(p1, val=sk.power_analyze()**2, dtype=np.float64)
mean_power = sps/10000.
```
First and last mode of mean_power are wrong by a factor of 2.
## Multiprocessing to process multiple probes for diagonal probing in parallel
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/136
Pumpe, Daniel (dpumpe), 2017-08-10

Hi,
I've tried to upgrade the Prober class, making it capable of processing multiple probes at the same time on different cpus. Due to various mysterious pickling errors I failed to do so.
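Bound methods, lambdas and locally defined functions cannot be pickled, which is the usual cause of such errors: `multiprocessing` (and `concurrent.futures`) can only ship the worker to another process if it is a plain module-level function. A minimal sketch of the pattern, with hypothetical stand-ins for the Prober internals (the real probing would apply the operator inside `probe_once`):

```python
from concurrent.futures import ProcessPoolExecutor

import numpy as np

def probe_once(seed):
    # Stand-in for one probe; must live at module level so it can be
    # pickled and sent to a worker process.
    rng = np.random.RandomState(seed)
    xi = rng.choice([-1., 1.], size=100)   # random probe vector
    return xi * xi                         # placeholder for xi * A(xi)

def probe_parallel(n_probes, n_workers=4):
    # Each probe gets its own seed, so results are independent of the
    # number of workers and of scheduling order.
    with ProcessPoolExecutor(max_workers=n_workers) as ex:
        results = list(ex.map(probe_once, range(n_probes)))
    return sum(results) / n_probes         # mean over probes ~ diag(A)
```

Whether this actually helps depends on how expensive a single probe is: for cheap probes the pickling and process start-up overhead can dominate.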
Does anybody have deeper experience in parallelising a simple for-loop? I think this would speed up our codes by multiple factors, as most of them are run on prelude.

## Limit critical plotting functions to MPI.rank==0.
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/137
Theo Steininger, 2017-07-22