NIFTy issues
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues

Issue #124: Use sphinx for automated build of documentation (Theo Steininger, 2018-02-15)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/124
In the past I was successful with sphinx + napoleon:
http://www.sphinx-doc.org/en/stable/ext/napoleon.html
Assignee: Philipp Frank

Issue #125: pyfftw tests are not being run (Martin Reinecke, 2017-05-16)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/125
I just noticed that continuous integration does not run pyfftw tests, even when the package should be available. This accounts for the 2400 tests that are currently skipped in the pipelines.
I'm not sure why this happens. The relevant tests in `test_fft_operator.py` are guarded by
```
if module == "fftw" and "pyfftw" not in di:
raise SkipTest
```
which looks correct to me.

Issue #126: Trouble with SteepestDescent. (Matevz Sraml, 2017-07-08)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/126
```
np.random.seed(0)

N_dim = 500
x_space = RGSpace(N_dim)
x = Field(x_space, val=np.random.rand(N_dim))
N = DiagonalOperator(x_space, diagonal=1.)

class QuadraticPot(Energy):
    def __init__(self, position, N):
        super(QuadraticPot, self).__init__(position)
        self.N = N

    def at(self, position):
        return self.__class__(position, N=self.N)

    @property
    def value(self):
        H = 0.5 * self.position.dot(self.N.inverse_times(self.position))
        return H.real

    @property
    def gradient(self):
        g = self.N.inverse_times(self.position)
        return_g = g.copy_empty(dtype=np.float)
        return_g.val = g.val.real
        return return_g

    @property
    def curvature(self):
        return self.N

minimizer = SteepestDescent(iteration_limit=1000,
                            convergence_tolerance=1E-4,
                            convergence_level=3)
energy = QuadraticPot(position=x, N=N)
(energy, convergence) = minimizer(energy)
```
I'm feeding the SteepestDescent method a quadratic potential with zero mean. If you run it, it converges after around 5 iterations, but then it enters a loop and stays trapped in it until it reaches `iteration_limit=1000`. The produced result is still correct.

Issue #127: Find good convergence criterion for DescentMinimizer (Theo Steininger, 2017-07-11)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/127
At the moment the convergence number is defined as `delta = abs(gradient).max() * (step_length/gradient_norm)`.
Are there better measures? For a simple quadratic potential, this definition of `delta` causes a steepest descent to not trust the (actual true) minimum.
A good measure should be independent of scale and initial conditions.
Some possible candidates are:
* `(new_energy.position - energy.position).norm()`
* `(new_energy.position - energy.position).max()`
* `((new_energy.position - energy.position)/(1+energy.position)).norm()`
* `((new_energy.position - energy.position)/(1+energy.position)).max()`
* `step_length`
* `step_length / energy.position.norm()`
* `gradient.norm()`

Assignee: Martin Reinecke

Issue #128: Powerspectrum consistency (Reimar H Leike, 2017-05-22)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/128
In the current state of NIFTy it is not properly documented, and not intuitive, when to pass a method a power spectrum and when to pass its root. Different conventions are used for power_synthesize and the PowerOperator.
Assignee: Theo Steininger

Issue #129: Bug in plotting of GLSpace (Theo Steininger, 2017-05-24)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/129
```
from nifty import *
x = GLSpace(15)
f = Field.from_random(domain=x, random_type="normal")
my_plotter = plotting.GLPlotter(title='test')
my_plotter(f)
```
yields
![newplot](/uploads/0df31a3f1e97de7d9e20dba05d797f19/newplot.png)
Plotting of HPSpace works fine.
Assignee: Martin Reinecke

Issue #130: What is the correct data type for field norms? (Martin Reinecke, 2017-05-23)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/130
While running the nonlinear Wiener filter demo, I observed warnings about casting complex values to float. This can be traced back to the norm() method of the Field class returning complex numbers if the field is complex. At least for the 2-norm this should probably be replaced by a float value, but I'm not absolutely sure about other possible norms.

Issue #131: SmoothingOperator: do we really need inverse_times() and friends? (Martin Reinecke, 2017-08-18)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/131
Some of the current regression tests show divisions by zero when calling inverse_times(), because exp(-x*x) becomes zero for x around 20-30.
If there is no need for this functionality in real-world code, I'd suggest disabling the problematic methods. If that's not possible, I'd encourage testing for underflow and raising an exception if it occurs.

Issue #132: DiagonalOperator: trace_log() and log_determinant() (Martin Reinecke, 2017-06-21)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/132
trace_log() and log_determinant() should throw exceptions if the trace/determinant is <= 0.

Issue #133: SmoothingOperator causes log(0.) for logarithmic distances (Theo Steininger, 2017-06-21)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/133
https://gitlab.mpcdf.mpg.de/ift/NIFTy/blob/master/nifty/operators/smoothing_operator/fft_smoothing_operator.py#L27
Smoothing should be translationally invariant. Couldn't we just add 1.0 to the distance array?
Assignee: Martin Reinecke

Issue #134: Direct smoothing with logarithmic distances doesn't conserve the norm. (Theo Steininger, 2017-06-12)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/134
The direct smoothing doesn't conserve the norm of an input field and also distributes the weight differently depending on the scale.
```
from nifty import *
x = RGSpace((2048), harmonic=True)
xp = PowerSpace(x)
f = Field(xp, val = 1.)
f.val[100] = 10
f.val[500] = 10
f.val[50] = 10
sm = SmoothingOperator(xp, sigma=0.1, log_distances=True)
g = sm(f)
f.dot(f)
g.dot(g)
plo = plotting.PowerPlotter(title='power.html')
plo(g)
```
![image](/uploads/cb6ec21b1814083df3a09594367cae14/image.png)
Assignee: Martin Reinecke

Issue #135: Drawing random fields, problem on monopole and highest frequency (Pumpe, Daniel, 2017-05-23)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/135
Jakob and I encountered a problem with .power_synthesize.
```
from nifty import *
import numpy as np
x1 = RGSpace(200, zerocenter=False, distances=.1)
k1 = RGRGTransformation.get_codomain(x1)
# setting spaces
npix = np.array([200]) # number of pixels
total_volume = 1. # total length
# setting signal parameters
lambda_s = .2 # signal correlation length
sigma_s = 1.5 # signal variance
# calculating parameters
k_0 = 4./(2*np.pi*lambda_s)
a_s = sigma_s**2.*lambda_s*total_volume
# creation of spaces
x1 = RGSpace(npix, distances=total_volume/npix,
zerocenter=False)
k1 = RGRGTransformation.get_codomain(x1)
p1 = PowerSpace(harmonic_partner=k1, logarithmic=False)
# # creating Power Operator with given spectrum
spec = (lambda k: a_s/(1+(k/k_0)**2)**2)
p_field = Field(p1, val=spec, dtype=np.float64)
# creating FFT-Operator and Response-Operator with Gaussian convolution
FFTOp = FFTOperator(domain=x1, target=k1,
domain_dtype=np.float64,
target_dtype=np.complex128)
# drawing a random field
sp = Field(p1, val=lambda z: spec(z) ** (1. / 2))
no_spec= 10000
sps = Field(p1, val=0.)
for ii in xrange(no_spec):
sk = sp.power_synthesize(real_signal=True, mean=0.)
sps += Field(p1, val=sk.power_analyze()**2, dtype=np.float64)
mean_power = sps/10000.
```
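For a real field, the k=0 and Nyquist modes of an even-length FFT are self-conjugate and carry only one real degree of freedom. If the synthesis treats them like ordinary complex modes, exactly these two bins come out wrong by a factor of 2. A plain-NumPy sketch of this suspected mechanism (my construction and my guess at the cause, not NIFTy code and not confirmed in this thread):

```python
import numpy as np

# Draw every FFT mode as a complex Gaussian with E|X_k|^2 = 1, then enforce a
# real signal by taking the real part of the self-conjugate bins (k = 0 and
# k = n/2). Those two bins then carry only half the intended power, so a
# synthesis ignoring this is off by a factor of 2 exactly there.
rng = np.random.RandomState(0)
trials, n = 100000, 8
re = rng.standard_normal((trials, n)) / np.sqrt(2)
im = rng.standard_normal((trials, n)) / np.sqrt(2)
modes = re + 1j * im                                # E|X_k|^2 == 1 everywhere
modes[:, [0, n // 2]] = modes[:, [0, n // 2]].real  # reality condition
power = (np.abs(modes) ** 2).mean(axis=0)
print(power.round(2))  # ~0.5 at bins 0 and n/2, ~1.0 at all other bins
```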
The first and last modes of mean_power are wrong by a factor of 2.

Issue #136: Multiprocessing to process multiple probes for diagonal probing in parallel (Pumpe, Daniel, 2017-08-10)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/136
Hi,
I've tried to upgrade the Prober class, making it capable of processing multiple probes at the same time on different cpus. Due to various mysterious pickling errors I failed to do so.
Does anybody have deeper experience with parallelising a simple for-loop? I think this would speed up our codes by multiple factors, as most of them run on prelude.

Issue #137: Limit critical plotting functions to MPI.rank==0. (Theo Steininger, 2017-07-22)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/137

Issue #123: SmoothingOperator: Correct units for sigma if `log_distances=True`? (Theo Steininger, 2017-05-28)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/123
Should the sigma be in log-units, too, if `log_distances` is set to True? If yes, how?
Assignee: Martin Reinecke

Issue #122: DirectSmoothingOperator shows unwanted behaviour on edges (Theo Steininger, 2017-05-28)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/122
By fixing the normalization of the direct/brute-force smoother, the edges now show some instabilities:
```
from nifty import *
x = RGSpace(64, harmonic=True)
p = PowerSpace(x)
sm = SmoothingOperator(p, sigma = 1.)
sm.domain
f = Field(p, val=1.)
f.val[16] = 2
plotter = plotting.PowerPlotter()
plotter.title = 'test'
plotter.plot(sm(f))
```
-> The contributions to the edge pixels don't add up anymore. This is somehow the 'adjoint problem' of what has been solved by Martin's fix.
Is there any chance to fix this?
![image](/uploads/ebaac233e4a170769656d7be751dcaa2/image.png)
Assignee: Martin Reinecke

Issue #121: Questions on FieldType and FieldArray (Martin Reinecke, 2017-05-28)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/121
It seems we forgot to discuss (and document) FieldType and FieldArray...
- could they be merged into a single class? FieldType seems unused, except as a parent of FieldArray.
- FieldType::process() seems unused and seems rather careless about errors. Do we need to keep this?

Issue #120: Import on OS X fails (Pumpe, Daniel, 2017-05-28)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/120
On the latest master branch, `from nifty import *` fails as:
```
In [1]: from nifty import *
---------------------------------------------------------------------------
ImportError                               Traceback (most recent call last)
<ipython-input-1-5ba8763d8b3e> in <module>()
----> 1 from nifty import *

/Users/danielpumpe/CloudStation/dpumpe/construction/NIFTy_3/lib/python2.7/site-packages/ift_nifty-3.0.4-py2.7-macosx-10.12-x86_64.egg/nifty/__init__.py in <module>()
     55 from sugar import *
     56
---> 57 import plotting

/Users/danielpumpe/CloudStation/dpumpe/construction/NIFTy_3/lib/python2.7/site-packages/ift_nifty-3.0.4-py2.7-macosx-10.12-x86_64.egg/nifty/plotting/__init__.py in <module>()
      3 from figures import *
      4 from colormap import *
----> 5 from plotter import *
      6

ImportError: No module named plotter
```
As the latest master branch works on Martin's machine, this is likely due to subtle differences between the filesystems of OS X and Linux. Hence, please rename plotter.py or the plotter directory so the two names no longer clash, to avoid confusion and import errors.

Issue #118: Physical Units for demos (Reimar H Leike, 2017-05-28; due 2017-05-19)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/118
Verify that the demo correctly applies units, such that if the resolution is doubled the physical problem stays the same.
Assignee: Reimar H Leike

Issue #117: Make "master" the default branch of the repository (Martin Reinecke, 2017-05-28)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/117
The Nifty_1 branch is (or at least should be) more or less unused by now. I suggest making "master" the default branch.

Issue #116: Allow plots of the sphere without depending on healpy (Martin Reinecke, 2017-05-28)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/116
It should not be a big effort to provide basic plotting functionality for functions on the sphere without having to import healpy. I will take care of this.
Assignee: Martin Reinecke

Issue #115: Remove "Makefile" (Martin Reinecke, 2017-05-28)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/115
Is this file needed for anything? It has not been updated for two years and seems disconnected from the rest of the package.

Issue #114: implement PowerSpace nbins>dim check (Reimar H Leike, 2017-05-28)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/114
PowerSpace should raise an exception if nbins is bigger than the number of dimensions.
Assignee: Reimar H Leike

Issue #113: Simplify direct smoothing code (Martin Reinecke, 2017-05-28)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/113
If I understood correctly, this functionality performs Gaussian smoothing on a (non-equidistant) array of data.
I can implement this efficiently and without using Cython; I just need some help in the case that the array is distributed over several MPI tasks...
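A minimal sketch of the functionality as I understand it (my illustration, not the NIFTy implementation): Gaussian smoothing on a non-equidistant grid, where each output sample is a normalized, Gaussian-weighted average of all input samples.

```python
import numpy as np

# Brute-force Gaussian smoothing on irregularly spaced samples.
def gauss_smooth(pos, val, sigma):
    w = np.exp(-0.5 * ((pos[:, None] - pos[None, :]) / sigma) ** 2)
    w /= w.sum(axis=1, keepdims=True)  # normalize: constants stay constant
    return w @ val

pos = np.array([0.0, 0.1, 0.15, 0.4, 1.0])  # irregular sample positions
val = np.ones(5)
print(gauss_smooth(pos, val, sigma=0.2))    # a constant field is preserved
```

The row normalization is what makes a constant field map to itself; dropping it is one way norm problems like those in issues 122 and 134 can arise.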
Implementing this would get rid of NIFTy's Cython dependence.
Assignee: Martin Reinecke

Issue #112: adjoint_inverse_times vs. inverse_adjoint_times (Theo Steininger, 2017-05-28)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/112
If a (left & right) inverse of an operator exists, both methods are the same.

Issue #111: Unitary vs. Isometric&Coisometric (Theo Steininger, 2017-05-28)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/111
SHTs are in general not unitary, but they can be (co)isometric. In this case there exist left- and right-inverses. Do we want to distinguish unitary -> (co)isometric in order to automatically enable e.g. `inverse_times` for certain isometric SHTs?

Issue #110: RGSpace hermitian decomposition bug for zerocentered & odd number of pixels (Theo Steininger, 2017-05-28)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/110
Assignee: Theo Steininger

Issue #109: Discuss default dtype problems for FFT/SHT Operator (Theo Steininger, 2017-05-28)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/109

Issue #107: Add NIFTy 3 virtualenv to prelude. (Theo Steininger, 2017-05-28)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/107
Assignee: Pumpe, Daniel

Issue #138: Support pyfftw even if MPI is not present (Martin Reinecke, 2017-06-05)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/138
This will allow using multi-threaded FFTs, which can be a big help in large calculations.

Issue #139: Broken demo? (Philipp Arras, 2017-07-29)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/139
When I try to run *demos/wiener_filter_harmonic.py* the program does not terminate after a sensible amount of time.
I get the following warning:
>>>
[MainThread][WARNING ] Field The distribution_stragey of pindex does not fit the slice_local distribution strategy of the synthesized field.
/home/philipp/.local/lib/python2.7/site-packages/d2o-1.1.0-py2.7.egg/d2o/distributor_factory.py:339: ComplexWarning:
>>>
I am looking forward to joining your group and working on NIFTy :)

Issue #140: Dependencies in README file (Philipp Arras, 2017-06-05)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/140
During install I noticed that the PyPI packages `mpi4py`, `nose_parameterized` and `plotly`, which I needed to get NIFTy to run, are not mentioned in the README.

Issue #141: Many tests are apparently skipped (Martin Reinecke, 2017-05-31)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/141
The @expand decorator use in our tests doesn't seem to work as we expect...
For example, the `test_pipundexInversion` method in test_power_space_test.py should be instantiated for the 10 different `RGSpace`s and the 2 `LMSpace`s in `HARMONIC_SPACES`. In fact, there are only test names like
`test_pipundexInversion_RGSpace_not_None_3_False`
and
`test_pipundexInversion_LMSpace_fftw_None_3_True`
It seems that the constructor arguments to the spaces in HARMONIC_SPACES are not reflected in the names of the tests, and so the same test name is created over and over, and only run once in the end.
I have no idea how to fix this.

Issue #142: ProductSpaces drawing random fields and checking their spectra (Pumpe, Daniel, 2017-06-21)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/142
Hi, currently I am testing ProductSpace reconstructions (PLN over RGSpace (x) RGSpace). I am facing a problem concerning the spectra. For a simple test scenario I would like to check the power of a field drawn from two spectra, as in the following:
```
from nifty import *
np.random.seed(42)
#########################################################################
# setting space one
#########################################################################
npix_0 = np.array([200]) # number of pixels
total_volume_0 = 1. # total length
# setting signal parameters
lambda_s_0 = .5 # signal correlation length
sigma_s_0 = 1. # signal variance
#setting length smoothing kernel
kernel_0 = 0.
# calculating parameters
k0_0 = 4./(2*np.pi*lambda_s_0)
a_s_0 = sigma_s_0**2.*lambda_s_0*total_volume_0
# creation of spaces
x_0 = RGSpace(npix_0, distances=total_volume_0/npix_0,
zerocenter=False)
k_0 = RGRGTransformation.get_codomain(x_0)
p_0 = PowerSpace(harmonic_partner=k_0, logarithmic=False)
# creating Power Operator with given spectrum
spec_0 = (lambda k: a_s_0/(1+(k/k0_0)**2)**2)
p_field_0 = Field(p_0, val=spec_0, dtype=np.float64)
# creating FFT-Operator and Response-Operator with Gaussian convolution
FFTOp_0 = FFTOperator(domain=x_0, target=k_0)
#########################################################################
# setting space two
#########################################################################
npix_1 = np.array([100]) # number of pixels
total_volume_1 = 1. # total length
# setting signal parameters
lambda_s_1 = .05 # signal correlation length
sigma_s_1 = 2. # signal variance
#setting length smoothing kernel
kernel_1 = 0.
# calculating parameters
k0_1 = 4./(2*np.pi*lambda_s_1)
a_s_1 = sigma_s_1**2.*lambda_s_1*total_volume_1
# creation of spaces
x_1 = RGSpace(npix_1, distances=total_volume_1/npix_1,
zerocenter=False)
k_1 = RGRGTransformation.get_codomain(x_1)
p_1 = PowerSpace(harmonic_partner=k_1, logarithmic=False)
# creating Power Operator with given spectrum
spec_1 = (lambda k: a_s_0/(1+(k/k0_1)**2)**2)
p_field_1 = Field(p_1, val=spec_1, dtype=np.float64)
# creating FFT-Operator and Response-Operator with Gaussian convolution
FFTOp_1 = FFTOperator(domain=x_1, target=k_1)
#########################################################################
# drawing a random field with above defined correlation structure
#########################################################################
fp_0 = Field(p_0, val=(lambda k: spec_0(k)**1/2.))
fp_1 = Field(p_1, val=(lambda k: spec_1(k)**1/2.))
outer = np.outer(fp_0.val.data, fp_1.val.data)
fp = Field((p_0, p_1), val=outer)
sk = fp.power_synthesize(spaces=(0,1), real_signal=True)
```
The question is, how do I recover fp_0 and fp_1 from sk (i.e. simply the power of the drawn field).
In my mind this should be:
```
def test_product_space_power(x):
samples = 100
ps_0 = Field(p_0, val=0.)
ps_1 = Field(p_1, val=1.)
for ii in xrange(samples):
sk = x.power_synthesize(spaces=(0,1), real_signal=True)
p0, p1 = analyze_sk(sk)
ps_1 += p1
ps_0 += p0
p_0_test_result = ps_0/samples
p_1_test_result = ps_1/samples
return p_0_test_result, p_1_test_result
def analyze_sk(x):
sp = x.power_analyze(spaces=0, decompose_power=False)\
.power_analyze(spaces=1, decompose_power=False)
p0= sp.sum(spaces=1)/fp_1.sum()
p1= sp.sum(spaces=0)/fp_0.sum()
return p0, p1
def analyze_fp(x):
p0= x.sum(spaces=1)/fp_1.sum()
p1= x.sum(spaces=0)/fp_0.sum()
return p0, p1
```
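One detail worth double-checking in the snippet above (my observation, not raised in the issue): `spec_0(k)**1/2.` parses as `(spec_0(k)**1)/2.`, i.e. half the spectrum rather than its square root, because `**` binds tighter than `/`:

```python
# ** binds tighter than /, so x ** 1/2. is (x ** 1) / 2., not a square root.
x = 16.0
print(x ** 1 / 2.)    # 8.0
print(x ** (1 / 2.))  # 4.0 -- the intended square root
```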
However, it seems not to be correct. Analysing the power of sk with ```test_product_space_power(fp)```, the power in the direction of space 0 seems to have a constant offset, while the power in the direction of space 1 seems to have the wrong shape. ```analyze_fp(fp)``` only serves as a double check that the covariance from which sk is actually drawn is the appropriate one for fp_0 and fp_1.
Am I using ```.power_analyze``` wrong, or are we facing an issue in how we draw the random fields?
Thanks a lot for your help!

Issue #143: Try to avoid the exponentiation operator where possible (Martin Reinecke, 2017-06-05)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/143
Loosely related to issue 142, I have started a branch trying to get rid of the exponentiation expressions (i.e. `**`) in NIFTy where they are not really needed.
If you can avoid `**`, please do. It has a very strange precedence. While it is clear why the expressions mentioned in issue 142 need brackets (once one thinks about it long enough), there are *really* bizarre cases, like the following from https://gitlab.mpcdf.mpg.de/ift/NIFTy/blob/master/nifty/minimization/line_searching/line_search_strong_wolfe.py#L347
`d1[0, 1] = -db ** 2`
In Python, this is equivalent to "-(db*db)", although the spaces and general experience suggest otherwise.
In Fortran, the value of this expression would be "db*db".
[Edit: sorry, this is incorrect; I blindly trusted information on the internet :(]
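The precedence trap can be checked directly (a small illustration of the Python behaviour described above):

```python
# Unary minus binds more loosely than **, so -db ** 2 is -(db ** 2).
db = 3.0
assert -db ** 2 == -(db ** 2) == -9.0
assert (-db) ** 2 == 9.0  # brackets make the other reading explicit
```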
So please, whenever you have to use `**`, be _very_ explicit with brackets!

Issue #144: Arrays/D2Os as DomainObject attributes (Martin Reinecke, 2017-07-06)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/144
In an experimental branch I made "binbounds" an attribute of PowerSpace, and for efficiency reasons decided to store it as a NumPy array instead of a tuple.
This causes problems with the equality comparison operators of DomainObject, because Numpy arrays need to be compared via
`np.all(arr1==arr2)`
instead of
`arr1==arr2`
The situation for D2Os may be similar.
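The problem can be seen with plain NumPy (a small illustration, assuming nothing beyond what the issue states):

```python
import numpy as np

# Elementwise == returns an array; using it where a single bool is expected
# (as in an __eq__ comparison) raises, or silently misbehaves.
a = np.array([1., 2., 3.])
b = np.array([1., 2., 4.])
print(a == b)          # [ True  True False] -- an array, not a bool
print(np.all(a == b))  # False -- a single boolean, safe for __eq__
try:
    bool(a == b)       # what a plain `arr1 == arr2` truth test would attempt
except ValueError as err:
    print("ValueError:", err)
```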
Should I extend the comparisons in DomainObject accordingly, or don't we want attributes of these types in general?

Issue #145: Add documentation to classes in library (Theo Steininger, 2018-06-27)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/145
The `nifty2go` branch contains a great set of operator and energy objects. Please add proper docstrings to the classes and a reference to a paper/book where the respective algorithm is described. If there is no such paper where the implemented formulae can be found 1:1, please add inline comments on what is happening.
Assignee: Jakob Knollmueller

Issue #146: Reuse curvature and gradient for gradient and energy in Energy classes (Theo Steininger, 2017-07-29)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/146
Especially in `nifty/library/energy_library/critical_power_energy.py`,
the gradient information is not reused for the computation of the energy itself.
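Since an Energy instance is tied to a fixed position, such shared quantities can be cached on first access; a generic sketch of property memoization (my illustration, not the actual memo decorator from nifty.energies):

```python
# Cache a property's value on the instance the first time it is computed.
def memo(f):
    attr = "_cached_" + f.__name__

    def wrapped(self):
        if not hasattr(self, attr):
            setattr(self, attr, f(self))
        return getattr(self, attr)
    return wrapped

class DemoEnergy(object):
    def __init__(self):
        self.n_evals = 0

    @property
    @memo
    def gradient(self):
        self.n_evals += 1  # an expensive computation would go here
        return 42

e = DemoEnergy()
print(e.gradient, e.gradient, e.n_evals)  # 42 42 1 -- computed only once
```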
Additionally, use memoization/caching. For this the memo decorator from nifty.energies can be used.
Assignee: Jakob Knollmueller

Issue #147: Probing harmonic domain, `.generate_probe` (Pumpe, Daniel, 2017-06-08)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/147
Hi,
for a classical critical filter run inferring a power spectrum, one has to probe the ```PropagatorOperator``` in its harmonic domain. Consequently this requires harmonic probes. These probes have to fulfil a few conditions, i.e. random_type='pm1' or 'normal', std=1, mean=0, real_signal=True. For Gauss-distributed random numbers this is easy to get; for 'pm1', however, I currently do not see a straightforward way.
For the 1-d case, getting 'pm1' random numbers in the ```Prober``` could look like this:
```
def generate_probe(self):
    """ a random-probe generator """
    if self.domain[0].harmonic:
        f = Field(self.domain, val=0., dtype=np.complex)
        # note: np.random.randint(-1, 2) draws from {-1, 0, 1}, not only +/-1
        for ii in xrange(int(self.domain[0].shape[0]/2. - 1.)):
            real = np.random.randint(-1, 2)
            img = np.random.randint(-1, 2)
            f.val[ii+1] = np.complex(real, img)
            f.val[-(ii+1)] = f.val[ii+1].conjugate()
        mono = np.random.randint(-1, 2)
        f.val[0] = np.complex(mono, 0.)
    else:
        f = Field.from_random(random_type=self.random_type,
                              domain=self.domain,
                              distribution_strategy=self.distribution_strategy)
    uid = np.random.randint(1e18)
    return (uid, f)
```

Issue #148: Which data types do we allow for Field.dtype? (Martin Reinecke, 2017-06-09)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/148
It seems that defining a Field with an integer dtype can produce quite unexpected results:
```
In [1]: from nifty import *
In [2]: import numpy as np
In [3]: s=RGSpace((4,))
In [4]: f=Field(s,val=np.arange(4))
In [5]: f.dot(f,bare=False)
Out[5]: 0
```
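One plausible mechanism (my own numpy reconstruction, not the actual NIFTy code path) is that a volume factor smaller than one gets truncated to zero when cast to the field's integer dtype:

```python
import numpy as np

# Hypothetical illustration: casting a sub-unity volume factor to an
# integer dtype silently truncates it to zero.
w = 0.25                       # e.g. the pixel volume of RGSpace((4,))
f = np.arange(4)               # integer-valued field
print(np.int64(w))             # 0 -- the weight vanishes
print(f.dot(f) * np.int64(w))  # 0 -- matching the surprising Out[5] above
```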
I suggest that we limit the allowed data types for Nifty fields to floating point and complex floating point. This could be enforced in Field._infer_dtype().

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/149: Behaviour of the Field class on empty domains (2017-06-12T23:24:23Z, Sebastian Hutschenreuter)
Field seems to do odd things when the domain is set empty under variation of the dtype keyword.
I know that the example is rather artificial; nonetheless I thought I should mention it.
In [2]: ff = Field(domain=(),dtype=np.float)
In [3]: ff.val.get_full_data()
[MainThread][WARNING ] _distributor_factory Distribution strategy was set to 'not' because of global_shape == ()
Out[3]: array(0.0)
In [4]: ff = Field(domain=(),dtype=np.complex)
In [5]: ff.val.get_full_data()
[MainThread][WARNING ] _distributor_factory Distribution strategy was set to 'not' because of global_shape == ()
Out[5]: array((8.772981689857213+0j))

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/150: The dot product of a field with itself (2017-06-12T23:31:34Z, Sebastian Hutschenreuter)
The dot product only considers the volume factors of the first field in case bare=False. This leads to the (in my mind) odd result noted below.
In [2]: np.random.seed(123)
In [3]: rgf = Field(domain=RGSpace(4),val=np.random.randn(4))
In [4]: rgf.dot(rgf)
Out[4]: 1.1305730855452751
This is equal to
In [5]: (rgf.weight(power=1) * rgf).sum()
Out[5]: 1.1305730855452751
Wouldn't the two following cases be more appropriate, depending on the context?
In [6]: (rgf.weight(power=1) * rgf.weight(power=1)).sum()
Out[6]: 0.28264327138631878
In [7]: (rgf*rgf).sum()
Out[7]: 4.5222923421811005

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/151: Potential name confusion for Field.dot() (2018-04-02T09:42:38Z, Martin Reinecke)
I just noticed the hard way that the behaviour of Field.dot() and numpy.dot() is subtly different ...
While Field.dot() takes the complex conjugate of its second argument, numpy.dot() doesn't; the correct numpy function to call would be numpy.vdot().
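To illustrate the numpy half of the comparison with toy data (values chosen here purely for illustration):

```python
import numpy as np

# np.dot sums a[i]*b[i] with no conjugation; np.vdot conjugates its
# first argument before summing.
a = np.array([1 + 2j, 3 - 1j])
b = np.array([2 - 1j, 1 + 1j])
no_conj = np.dot(a, b)     # (1+2j)(2-1j) + (3-1j)(1+1j) = (8+5j)
with_conj = np.vdot(a, b)  # conj(1+2j)(2-1j) + conj(3-1j)(1+1j) = (2-1j)
```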
The question is whether we should rename Field.dot() to Field.vdot() for the sake of consistency with numpy. Otherwise we might risk confusing people with numpy background.

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/152: Clarification request for ComposedOperator (2017-08-15T08:04:01Z, Martin Reinecke)
I think that there are two different ways to interpret the meaning of
"ComposedOperator", and I'm not sure if both of them are supported by Nifty at
the moment:
Scenario 1:
Say you have an input field F living on an RGSpace. If you want...I think that there are two different ways to interpret the meaning of
"ComposedOperator", and I'm not sure if both of them are supported by Nifty at
the moment:
Scenario 1:
Say you have an input field F living on an RGSpace. If you want to
apply an FFT, then some scaling, and then an inverse FFT again to this field,
is it possible to combine these three operators into a single ComposedOperator?
It doesn't seem that this works at the moment, because the "domain" property of
such a ComposedOperator returns a triple of spaces, and the input field only
lives on a single space.
Scenario 2:
You have a field living on a product of spaces, and you want to run several
operators on that field, every one operating on an individual space of the field.
This seems to be the anticipated scenario for ComposedOperator at the moment.
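A minimal sketch of what a "scenario 1" operator could look like, using a hypothetical `ChainOperator` (not NIFTy's actual API): the composed operator's domain is the domain of the first operator, and application simply threads the field through.

```python
class ChainOperator:
    """Hypothetical chain composition: ChainOperator(A, B).times(x) == B(A(x))."""
    def __init__(self, *ops):
        self.ops = ops
        self.domain = ops[0].domain    # a single space, unlike ComposedOperator
        self.target = ops[-1].target

    def times(self, x):
        for op in self.ops:
            x = op.times(x)
        return x


class Scale:
    """Toy stand-in for an operator mapping a space onto itself."""
    def __init__(self, factor, domain="RGSpace"):
        self.factor, self.domain, self.target = factor, domain, domain

    def times(self, x):
        return self.factor * x


chain = ChainOperator(Scale(2.0), Scale(3.0))
```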
Is scenario 1 also needed in the real world? I have a feeling that it is; if
true, we should provide another "composed operator" for this use case and make
sure (by means of good naming, documentation and tests in the operator
constructors) that they are not confused.
(I'm mentioning this because until yesterday I was convinced that
ComposedOperator also worked for scenario 1...)

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/153: log.log file too large (2017-09-15T23:26:04Z, Christoph Lienhard)
When executing a program using NIFTy a log.log file is created. This file tends to get very large.
Could you provide a tutorial explaining how to keep the size of the log.log file to a minimum or how to prevent NIFTy from even creating one?

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/154: RGSpace smoothing width (2017-06-23T23:54:59Z, Theo Steininger)
I've got the impression that the smoothing width for RGSpace is currently not correct in master.
Please try this code:
```
from nifty import *
x = RGSpace((256,), distances=1.)
f = Field(x, val=0)
f.val[100] = 1
sm = SmoothingOperator(x, sigma=50.)
plo = plotting.RG1DPlotter(title='test.html')
plo(sm(f))
g = sm(f)
g.sum()
```
The Gaussian bell is only half as broad as it should be. A fix is given in dc16a255d1b6d63db837278dc35d1cc8409b70b6.
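As a NIFTy-independent cross-check (parameters chosen here for illustration), smoothing a delta peak with a Gaussian kernel of width sigma in Fourier space should yield a bell curve whose measured standard deviation is sigma, not sigma/2:

```python
import numpy as np

n, sigma = 256, 5.0
f = np.zeros(n)
f[100] = 1.0
k = np.fft.fftfreq(n, d=1.0)                        # frequencies in cycles/pixel
kernel = np.exp(-2.0 * np.pi**2 * sigma**2 * k**2)  # FT of a Gaussian of std sigma
g = np.fft.ifft(np.fft.fft(f) * kernel).real
x = np.arange(n)
mean = (x * g).sum() / g.sum()
std = np.sqrt(((x - mean)**2 * g).sum() / g.sum())  # should come out close to sigma
```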
@dpumpe Could you please check, if this makes sense?
@mtr Could you please check if the formula for smoothing on the sphere is correct? https://gitlab.mpcdf.mpg.de/ift/NIFTy/blob/master/nifty/spaces/lm_space/lm_space.py#L169
Assignee: Pumpe, Daniel (dpumpe)

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/155: New storage format for harmonic RGSpaces? (2017-08-15T08:16:51Z, Martin Reinecke)
I'm wondering whether we can apply the hermitian/anti-hermitian separation trick that we employ for spherical harmonic coefficients in the LMSpace to harmonic RGSpaces as well.
The symmetries are nearly the same: on the sphere we have
`a_l,m = (-1)**m *conj(a_l,-m)`
in a (multidimensional) RGSpace it is
`f_k = conj(f_-k)`
So in principle we could store the Fourier coefficients of a real-valued field in another real-valued field with exactly the same dimensions.
A 1D example:
Instead of storing the complex values
`[ f0 f1 f2 f(-2) f(-1) ]`
we would simply store
`[ f0.re x*f1.re x*f1.im x*f2.re x*f2.im ]`, where x=sqrt(2.)
The coefficients at negative frequencies can be recovered trivially, and f0.im is 0 by symmetry.
In analogy to the spherical case, sqrt(2.) is needed to conserve the dot products.
For arrays of even length:
`[ f0 f1 f(-2) f(-1) ]`
becomes
`[ f0.re x*f1.re x*f1.im f2.re ]` (no factor x on the Nyquist coefficient f2.re, which appears only once in the spectrum)
(f2.im is zero in this case)
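A numpy sketch of the 1-D packing described above (function name invented here), using a unitary FFT so the Euclidean dot product of the packed real vector equals that of the original field:

```python
import numpy as np

def pack_hermitian(f):
    """Pack the rfft of a real 1-D array into a real array of equal length."""
    n = len(f)
    c = np.fft.rfft(f) / np.sqrt(n)        # unitary normalization
    x = np.sqrt(2.0)
    out = np.empty(n)
    out[0] = c[0].real                     # f0 is real for real input
    k = np.arange(1, (n - 1) // 2 + 1)     # "interior" frequencies
    out[2*k - 1] = x * c[k].real
    out[2*k] = x * c[k].imag
    if n % 2 == 0:                         # Nyquist coefficient is real and
        out[-1] = c[n // 2].real           # appears only once: no factor x
    return out
```

With this convention `out @ out` reproduces `f @ f` for both even and odd lengths.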
In multi-D, this treatment has to be applied to exactly one dimension; in analogy with FFTW, I suggest to use the last one.
Adopting such a scheme would have several significant advantages:
- memory requirements go down
- FFTs of real data will be faster
- the input data type of an FFToperator can always be the same as the output data type
- the transform of a real field will by construction always be exactly hermitian, and operators cannot destroy the hermitianity by accident
- minimizers won't be able to explore degrees of freedom that do not really exist in a given problem
Of course there will be drawbacks as well:
- whenever Fourier coefficients in the traditional format are required, conversions need to be run, which are quick but not absolutely free. (I don't think that this will be needed often.)
- the get_distance_array method of RGSpace will have to be adjusted.
- probably more stuff I'm missing at the moment.
I know that this is a crazy-looking idea, but especially in the light of the numerical problems we are currently encountering with the reconstruction of radio data, I think it should be investigated.

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/156: fixed points in harmonic RGSpace (2017-07-08T16:23:01Z, Martin Reinecke)
When looking at _hermitianize_correct_variance in rg_space.py, I wondered whether the determination of fixed points is correct. Assuming a 1D case, the code determines two fixed points: one at 0, and one at N/2. But as far as I understand, the one at N/2 is only a fixed point if N is an even number.
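A quick check of the claim (plain Python, standard FFT index ordering): k is a fixed point of the reflection k -> (-k) mod N exactly when 2k is divisible by N, i.e. k = 0 always, and k = N/2 only for even N.

```python
def fixed_points(n):
    # indices that map onto themselves under k -> (-k) mod n
    return [k for k in range(n) if (-k) % n == k]

print(fixed_points(8))  # [0, 4]
print(fixed_points(7))  # [0]
```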
Please let me know if I'm confusing different things here!

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/157: RGSpace.hermitian_decomposition() is incorrect for odd grid lengths (2017-07-08T21:05:30Z, Martin Reinecke)
```
from nifty import *
import numpy as np
def test_hermitianity(length):
    arr = np.random.random(length) + np.random.random(length)*1j
    s = RGSpace(length)
    x1, x2 = s.hermitian_decomposition(arr)
    print x1

test_hermitianity(3)
```
The result should be three numbers; the first should only have a real part, and the second and third should be mutually complex conjugate.
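For comparison, a decomposition with the expected properties can be written in plain numpy (standard, non-zerocentered FFT ordering, where index k pairs with (-k) mod n):

```python
import numpy as np

def hermitian_decomposition(x):
    """Split x into h (h[k] == conj(h[-k])) and a (a[k] == -conj(a[-k]))."""
    flip = np.roll(x[::-1], 1)       # flip[k] = x[(-k) % n]
    h = 0.5 * (x + np.conj(flip))
    a = 0.5 * (x - np.conj(flip))
    return h, a
```

For length 3 this yields a purely real first entry and mutually conjugate second and third entries, matching the expectation stated above.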
This looks like the code implicitly (and incorrectly) assumes `zerocenter=True`.

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/158: Change or rename hermitian_decomposition() (2017-07-11T08:00:28Z, Martin Reinecke)
I'm opening this as a separate issue, because it is design-related, not code-related.
In my opinion, the name "hermitian_decomposition" is misleading for the operation that the Space and Field methods currently perform. Of any data array there is exactly one hermitian decomposition, and it therefore cannot depend on the value of additional flags like "preserve_gaussian_variance". I suggest to move the necessary manipulations for variance preservation into a separate routine.
Alternatively the method name could be changed, but I can't think of anything convincing.

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/159: Bug in Field._hermitian_decomposition? (2017-07-08T16:23:19Z, Martin Reinecke)
The beginning of this method looks like this:
```
@staticmethod
def _hermitian_decomposition(domain, val, spaces, domain_axes,
preserve_gaussian_variance=False):
# hermitianize for the first space
(h, a) = domain[spaces[0]].hermitian_decomposition(
val,
domain_axes[spaces[0]],
preserve_gaussian_variance=preserve_gaussian_variance)
# hermitianize all remaining spaces using the iterative formula
for space in xrange(1, len(spaces)):
(hh, ha) = domain[space].hermitian_decomposition(
h,
domain_axes[space],
preserve_gaussian_variance=False)
(ah, aa) = domain[space].hermitian_decomposition(
a,
domain_axes[space],
preserve_gaussian_variance=False)
```
Why is `preserve_gaussian_variance` hardwired to `False` for all spaces other than the first?

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/160: VL_BFGS minimizer doesn't like HPSpace and LMSpace (2017-07-09T23:04:22Z, Matevz, Sraml (sraml))
The VL_BFGS minimizer has problems with the minimization of HPSpace and LMSpace. If we take a quadratic potential
```
np.random.seed(0)
N_dim = 500
#x_space = HPSpace(N_dim)
x_space = LMSpace(N_dim)
x = Field(x_space, val=np.random.rand(N_dim))
N = DiagonalOperator(x_space, diagonal = 1.)
class QuadraticPot(Energy):
    def __init__(self, position, N):
        super(QuadraticPot, self).__init__(position)
        self.N = N

    def at(self, position):
        return self.__class__(position, N=self.N)

    @property
    def value(self):
        H = 0.5 * self.position.dot(self.N.inverse_times(self.position))
        return H.real

    @property
    def gradient(self):
        g = self.N.inverse_times(self.position)
        return_g = g.copy_empty(dtype=np.float)
        return_g.val = g.val.real
        return return_g

    @property
    def curvature(self):
        return self.N
minimizer = VL_BFGS(iteration_limit=1000,convergence_tolerance=1E-4, convergence_level=3)
energy = QuadraticPot(position=x , N=N)
print energy.value
(energy, convergence) = minimizer(energy)
print energy.value
```
it doesn't converge to 0. In the example above, the energy of the system is only reduced to half of the starting energy. I think I'm lacking a piece of knowledge to understand this problem.

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/161: Rework methods taking an "axes" keyword (2017-08-15T08:03:13Z, Martin Reinecke)
At the moment, there are a lot of routines which take an "axes" keyword, and in many of these, it could be avoided altogether or reduced to an integer.
Example 1: the "weight" method of DomainObject and derived classes
Here, `axes` is ...At the moment, there are a lot of routines which take an "axes" keyword, and in many of these, it could be avoided altogether or reduced to an integer.
Example 1: the "weight" method of DomainObject and derived classes
Here, `axes` is mostly used to determine, which particular axes in a `Field` belong to the space. In principle, all of this could be avoided if this method simply returned an array with one weight per pixel (or a scalar if all weights are equal), and leave all the juggling with array indices to Field.weight().
As things are, there are various alternative implementations of the re-shaping task (look for example at the `weight` methods of `PowerSpace` and `GLSpace`).
My suggestion is to simplify `DomainObject`'s `weight` method to:
```
def volfactor(self):
    """ Returns an array containing the object's volume factors.
    The array must be broadcastable to the space's dimensions
    (useful if the volume factors are uniform in a direction, e.g. for GLSpace).
    Alternatively, if all volume factors are equal, returns a scalar.
    """
```
... and leave all the operations involving powers, reshaping etc. to the calling `Field.weight` method.
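A sketch of the proposed division of labour (function name and signature invented here): the space supplies a broadcastable weight array or a scalar, and the Field-side code does all the axis juggling:

```python
import numpy as np

def apply_weights(field_vals, volfactor, axes, power=1):
    """Multiply field_vals by volfactor**power along the given axes."""
    w = np.asarray(volfactor, dtype=float) ** power
    if w.ndim == 0:
        return field_vals * w              # uniform weight: nothing to reshape
    shape = [1] * field_vals.ndim
    for ax, s in zip(axes, w.shape):
        shape[ax] = s                      # place weights on the space's axes
    return field_vals * w.reshape(shape)
```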
Example 2: `Space.hermitianize_decomposition`():
Here the `axes` variable always contains a tuple with as many entries as the space has dimensions, and the entries take the form `i`, `i+1`, `i+2`, ... I suggest to pass a scalar integer with the name `first_axis` instead, which should be less error-prone.
If there is agreement to go into this direction, there are more cases, but we can discuss them later :)

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/162: Fix class-docstrings for new operators (2017-08-15T08:03:20Z, Theo Steininger)
The docstrings for the following operators must be refactored (80-chars limit, complete missing entries, etc...)
- LaplaceOperator
- SmoothnessOperator
Assignee: Jakob Knollmueller (2017-07-12)

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/163: Descent minimizers: stopping criterion is broken (2017-09-28T01:18:15Z, Martin Reinecke)
Currently, descent_minimizer.py contains the following lines:
```
elif delta < self.convergence_tolerance:
    convergence += 1
    self.logger.info("Updated convergence level to: %u" %
                     convergence)
```
The problem is that `delta` never goes down far enough for the algorithms to stop in there, even though the energy is tiny compared to the starting energy (example below).
This can't be a valid stopping criterion.
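One conceivable alternative (a sketch, not a patch against the actual minimizer): measure the energy decrease against the starting energy scale, so that the criterion eventually fires once the energy has shrunk by many orders of magnitude:

```python
def converged(start_energy, old_energy, new_energy, tol=1e-12):
    # absolute decrease, normalized by the *initial* energy scale, instead of
    # a relative "delta" that stays O(1) while the energy keeps shrinking
    return abs(old_energy - new_energy) <= tol * abs(start_energy)
```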
```
VL_BFGS: DEBUG: Iteration:00000001 step_length=3.7E+01 delta=1.4E+00 energy=5.1E+01
VL_BFGS: DEBUG: Iteration:00000002 step_length=7.5E+00 delta=1.5E+00 energy=4.6E+00
VL_BFGS: DEBUG: Iteration:00000003 step_length=2.6E+00 delta=1.9E+00 energy=6.5E-01
VL_BFGS: DEBUG: Iteration:00000004 step_length=1.4E+00 delta=3.4E+00 energy=2.3E-02
VL_BFGS: DEBUG: Iteration:00000005 step_length=3.1E-01 delta=5.2E+00 energy=8.8E-03
VL_BFGS: DEBUG: Iteration:00000006 step_length=1.1E-01 delta=2.9E+00 energy=2.3E-04
VL_BFGS: DEBUG: Iteration:00000007 step_length=1.5E-02 delta=2.5E+00 energy=5.0E-05
VL_BFGS: DEBUG: Iteration:00000008 step_length=1.3E-02 delta=6.7E+00 energy=1.2E-06
VL_BFGS: DEBUG: Iteration:00000009 step_length=2.4E-03 delta=6.4E+00 energy=6.2E-07
VL_BFGS: DEBUG: Iteration:00000010 step_length=9.0E-04 delta=3.0E+00 energy=1.5E-08
VL_BFGS: DEBUG: Iteration:00000011 step_length=1.2E-04 delta=2.7E+00 energy=2.7E-09
VL_BFGS: DEBUG: Iteration:00000012 step_length=8.7E-05 delta=7.7E+00 energy=9.6E-11
VL_BFGS: DEBUG: Iteration:00000013 step_length=1.9E-05 delta=6.2E+00 energy=1.4E-11
VL_BFGS: DEBUG: Iteration:00000014 step_length=5.5E-06 delta=3.9E+00 energy=1.3E-12
VL_BFGS: DEBUG: Iteration:00000015 step_length=1.4E-06 delta=3.2E+00 energy=6.4E-14
VL_BFGS: DEBUG: Iteration:00000016 step_length=2.5E-07 delta=4.3E+00 energy=1.4E-14
VL_BFGS: DEBUG: Iteration:00000017 step_length=2.2E-07 delta=1.0E+01 energy=1.8E-16
VL_BFGS: DEBUG: Iteration:00000018 step_length=1.6E-08 delta=3.6E+00 energy=1.4E-17
VL_BFGS: DEBUG: Iteration:00000019 step_length=3.3E-09 delta=1.7E+00 energy=3.5E-18
VL_BFGS: DEBUG: Iteration:00000020 step_length=3.2E-09 delta=4.8E+00 energy=9.6E-20
VL_BFGS: DEBUG: Iteration:00000021 step_length=6.2E-10 delta=4.6E+00 energy=3.0E-20
VL_BFGS: DEBUG: Iteration:00000022 step_length=2.0E-10 delta=3.6E+00 energy=1.0E-21
VL_BFGS: DEBUG: Iteration:00000023 step_length=3.3E-11 delta=2.6E+00 energy=2.0E-22
VL_BFGS: DEBUG: Iteration:00000024 step_length=2.6E-11 delta=6.8E+00 energy=6.3E-24
VL_BFGS: DEBUG: Iteration:00000025 step_length=5.8E-12 delta=8.6E+00 energy=4.5E-24
VL_BFGS: DEBUG: Iteration:00000026 step_length=1.0E-12 delta=1.5E+00 energy=1.7E-24
VL_BFGS: DEBUG: Iteration:00000027 step_length=1.6E-12 delta=3.6E+00 energy=2.4E-26
VL_BFGS: DEBUG: Iteration:00000028 step_length=1.5E-13 delta=4.8E+00 energy=5.9E-27
VL_BFGS: DEBUG: Iteration:00000029 step_length=1.5E-13 delta=1.1E+01 energy=3.0E-29
VL_BFGS: DEBUG: Iteration:00000030 step_length=7.0E-15 delta=3.2E+00 energy=4.3E-30
VL_BFGS: WARNING: Reached iteration limit. Stopping.
```

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/164: Another Field.vdot() bug... (2017-07-11T08:18:02Z, Martin Reinecke)
```
from nifty import *
import numpy as np
s=RGSpace((10,))
f1=Field.from_random("normal",domain=s,dtype=np.complex128)
f2=Field.from_random("normal",domain=s,dtype=np.complex128)
print f1.vdot(f2)
print f1.vdot(f2,spaces=0)
```

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/165: Critical power energy doesn't work for product spaces (2017-08-02T21:20:47Z, Theo Steininger)
In principle it is no problem to port the existing code to product-space functionality. The biggest thing is that we then would need an n-dimensional SmoothnessOperator. Any volunteers?

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/166: MPI-enabled tests (2017-07-29T11:50:06Z, Martin Reinecke)
Mostly for fun, I enabled MPI nosetests in our continuous integration, and it seems that many tests actually continue to work!
But unfortunately I think I managed to freeze the test VM, so the pipeline is stuck now :(

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/167: Field data types must be enforced consistently (2017-08-15T08:05:09Z, Martin Reinecke)
I'm trying to debug the line searching algorithm and noticed that some of the input quantities have complex type.
This is not really acceptable, because there are no useful comparison operators defined for complex numbers, and the code does lots of comparisons. I don't understand why no exceptions are thrown; pure Python immediately complains if complex numbers are compared via <,>, etc., but numpy somehow doesn't.
We really need strict enforcement that search positions, gradients and energy values have real type.

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/168: Bugs in line_search_strong_wolfe (2017-08-16T12:25:13Z, Martin Reinecke)
There are several strange "features" in the line search algorithm:
- if the _zoom procedure does not converge, the outer minimization loop repeats the (failing) _zoom search in identical fashion until it reaches its own iteration limit. This can't be intentional.
- The Wolfe conditions look a bit different to the ones mentioned in Wikipedia.
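For reference, the strong Wolfe conditions as usually stated (e.g. in Nocedal & Wright), for a step length alpha along a descent direction p_k with constants 0 < c_1 < c_2 < 1:

```latex
% sufficient decrease (Armijo) condition:
f(x_k + \alpha p_k) \le f(x_k) + c_1 \, \alpha \, \nabla f(x_k)^T p_k
% strong curvature condition:
\left| \nabla f(x_k + \alpha p_k)^T p_k \right| \le c_2 \left| \nabla f(x_k)^T p_k \right|
```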
Do we have a reference paper for this implementation?

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/169: destroying distribution_strategy (2017-07-26T08:27:00Z, Jakob Knollmueller)
Hi,
I am currently trying to get my code running using the full capabilities of d2o, but some things in nifty seem to destroy the distribution_strategy, namely the InvertibleOperatorMixin and power_analyze (I tracked it down to the pindex.bincount, see the example)
[minimal_example.py](/uploads/95bab8b0159517babf98330ce2a9ae5c/minimal_example.py)
Another problem is in the InvertibleOperatorMixin:
If no initial guess x0 is specified it is set to
x0 = Field(self.target, val=0., dtype=x.dtype)
which sets the distribution strategy to the default of "not", so the result also has this distribution. I would suggest setting
x0 = Field(self.target, val=0., dtype=x.dtype, distribution_strategy=x.distribution_strategy)
Jakob

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/170: CriticalPowerEnergy has incomplete at() method (2017-07-19T11:57:02Z, Martin Reinecke)
The `at()` method in `CriticalPowerEnergy` does not pass the `logarithmic` property to the newly built object, which results in a (potentially) different T inside that object.
What's the best solution to fix this? Store `logarithmic` explicitly to have it available in the `at()` method?
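A sketch of the fix suggested above (class and attribute names invented here, not NIFTy's actual implementation): store the flag at construction time and forward it from at():

```python
class EnergySketch:
    """Hypothetical stand-in for CriticalPowerEnergy."""
    def __init__(self, position, logarithmic=False):
        self.position = position
        self._logarithmic = logarithmic    # remember the flag...

    def at(self, position):
        # ...and forward it, so the rebuilt energy uses the same T
        return self.__class__(position, logarithmic=self._logarithmic)
```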
@kjako, any preferences?

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/171: create_power_operator() returns operator with incorrect data type (2017-08-15T08:17:02Z, Martin Reinecke)
If I understood the purpose of `create_power_operator()` correctly, it should produce a `DiagonalOperator` which imprints the supplied power spectrum onto a given field in harmonic space.
I would expect that this returned operator contains values of real type, but its data type is complex, even if I explicitly ask for real numbers:
```
from nifty import *
import numpy as np
s=RGSpace(10,harmonic=True)
def ps(x): return x
x=create_power_operator(s,ps,dtype=np.float64)
print x.diagonal().dtype
```
What am I missing?

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/172: Overflow using lognormal (2017-08-14T22:05:30Z, Matevz, Sraml (sraml))
In my project I constantly use a lognormal model, and since the normalization of the gradient in SteepestDescent was removed I get overflows. I fix that by normalizing my gradient.
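To illustrate the failure mode generically (numbers invented for illustration): in a lognormal model the energy involves exp(position), so float64 overflows once a step pushes a component past roughly 709, while a normalized step direction stays bounded:

```python
import numpy as np

g = np.array([800.0, -3.0])        # an unnormalized gradient with a huge entry
with np.errstate(over='ignore'):
    blown_up = np.exp(g)           # inf in the first component
unit_step = g / np.linalg.norm(g)  # normalized direction: norm 1 by construction
```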
The overflow can be reproduced with this:
[test.py](/uploads/5f5ce380ae65b699b5e350fd54ba8b18/test.py)

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/173: Remove one of the WienerFilter demos (2017-08-15T08:09:44Z, Philipp Arras, parras@mpa-garching.mpg.de)
What is the difference between the two Wiener filter demos?
`demos/wiener_filter_via_curvature.py` does the following:
```
wiener_curvature = WienerFilterCurvature(S=S, N=N, R=R_harmonic)
m = wiener_curvature.inverse_times(j)
```
`demos/wiener_filter_via_hamiltonian.py` does:
```
m0 = Field(h_space, val=.0)
energy = WienerFilterEnergy(position=m0, d=d, R=R, N=N, S=S, inverter=inverter)
D0 = energy.curvature
m0 = D0.inverse_times(j)
```
As far as I understand, the only difference between the two demos is that the second one defines a position and an inverter which are then not used at all (because `energy.curvature` only needs R, N and S, and no position).

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/174: More zerocenter weirdness... (2017-08-15T00:50:33Z, Martin Reinecke)
```
from nifty import *
s=RGSpace(3,zerocenter=True)
f=Field(s,val=[0.,1.,2.])
op=FFTOperator(s)
print "op domain:", op.domain[0]
print "op target:", op.target[0]
print "op times real field", op.times(f).val
```
This prints
```
op domain: RGSpace(shape=(3,), zerocenter=(True,), distances=(0.33333333333333331,), harmonic=False)
op target: RGSpace(shape=(3,), zerocenter=(True,), distances=(1.0,), harmonic=True)
op times real field [-0.33333333+0.j -0.16666667+0.8660254j 0.16666667+0.8660254j]
```
What I don't understand: if the target domain of the operator is zero-centered, why is the first value of the result real-only, and not the middle one?

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/175: Indexing bug in Laplace operator (2017-08-17T23:22:06Z, Martin Reinecke)
I'm running the following test case on current master:
```
from nifty import *
s_space = RGSpace([256,256])
FFT = FFTOperator(s_space)
h_space = FFT.target[0]
p_space = PowerSpace(h_space, logarithmic=True, distribution_strategy='not')
t0 = Field(p_space, val=log(0.1/(1+p_space.kindex)**3))
sop = SmoothnessOperator(p_space)
#print t0.val.get_full_data()
print t0.vdot(sop(t0))
#print t0.val.get_full_data()
print t0.vdot(sop(t0))
```
This produces the output:
```
0.645594411887
139746.578501
```
which is definitely incorrect, since the same number should be reported twice.
Things become even weirder. If I remove the commenting "#" from the first print statement, I get:
```
[ -2.30258509 -4.38202663 -4.94670585 -5.75181152 -6.49001612
-7.17883085 -8.05624052 -8.99425111 -9.93125436 -10.87947582
-11.85503513 -12.85994974 -16.24599361]
2.93684941121
139746.578501
```
So the print statement changes the result of the next statement?!
Removing the next "#" as well produces:
```
[ -2.30258509 -4.38202663 -4.94670585 -5.75181152 -6.49001612
-7.17883085 -8.05624052 -8.99425111 -9.93125436 -10.87947582
-11.85503513 -12.85994974 -16.24599361]
2.93684941121
[ -2.30258509 -4.38202663 -4.94670585 -5.75181152 -6.49001612
-7.17883085 -8.05624052 -8.99425111 -9.93125436 -10.87947582
-11.85503513 -12.85994974 -16.24599361]
2.93684941121
```
First question: can anyone reproduce this, or is it a bug in my local Python installation?
If this is reproducible, it definitely needs to be fixed quickly, but I'm completely at my wits' end ...

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/176: mpi crashes with power_analyze (2017-08-14T22:04:56Z, Jakob Knollmueller)
Calling power_analyze of a fftw distributed field crashes the mpirun. I attached a minimal example:
[minimal_example.py](/uploads/3253d1e327fb18e6ec1c6326779cea6f/minimal_example.py)
I run it with:
mpirun -n 2 python minimal_example.py
and I get:
```
[MainThread][ERROR ] root Uncaught exception
Traceback (most recent call last):
File "minimal_example.py", line 22, in <module>
pp=sh.power_analyze()
File "/usr/local/lib/python2.7/dist-packages/ift_nifty-3.0.4-py2.7.egg/nifty/field.py", line 377, in power_analyze
for part in parts]
File "/usr/local/lib/python2.7/dist-packages/ift_nifty-3.0.4-py2.7.egg/nifty/field.py", line 413, in _single_power_analyze
axes=work_field.domain_axes[space_index])
File "/usr/local/lib/python2.7/dist-packages/ift_nifty-3.0.4-py2.7.egg/nifty/field.py", line 439, in _calculate_power_spectrum
axes=axes)
File "/usr/local/lib/python2.7/dist-packages/ift_nifty-3.0.4-py2.7.egg/nifty/field.py", line 469, in _shape_up_pindex
semiscaled_local_data = local_data.reshape(semiscaled_shape)
ValueError: cannot reshape array of size 131072 into shape (64,64,64)
-------------------------------------------------------
Primary job terminated normally, but 1 process returned
a non-zero exit code.. Per user-direction, the job has been aborted.
-------------------------------------------------------
--------------------------------------------------------------------------
mpirun detected that one or more processes exited with non-zero status, thus causing
the job to be terminated. The first process to do so was:
Process name: [[12289,1],0]
Exit code: 1
--------------------------------------------------------------------------
```https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/119tests for spaces2017-08-17T23:25:49ZReimar H Leiketests for spacesCreate more tests for all spaces. Especially tests for content are needed, that tests if the methods of spaces do what they should do on a numerical level.Create more tests for all spaces. Especially tests for content are needed, that tests if the methods of spaces do what they should do on a numerical level.https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/100Tests: NIFTy config & logging2018-01-19T08:55:21ZTheo SteiningerTests: NIFTy config & logging2018-01-19https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/177Documentation for FFTSmoothingOperator and DirectSmoothingOperator missing2018-02-21T21:18:52ZPhilipp Arrasparras@mpa-garching.mpg.deDocumentation for FFTSmoothingOperator and DirectSmoothingOperator missinghttps://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/178Replace the current implementation of MPI FFTs2018-01-11T14:25:44ZMartin ReineckeReplace the current implementation of MPI FFTsThe current way of performing MPI-parallel FFTs has several problems:
- it relies on a fork of `pyfftw` that may or may not be merged with the official version in the future, causing potential confusion for users
- it does not work for all array sizes due to FFTW limitations.
The second point already makes broad regression testing of many different FFT sizes quite tricky.
My suggestion to overcome both these drawbacks is based on the fact that MPI communication during an FFT is _only_ needed if the first field dimension needs to be transformed, and that a multi-D FFT can be separated into 1-D FFTs along the individual axes.
If an FFT along the first axis is required, NIFTy (or D2O) could just do the following:
- MPI-transpose the field in such a way that the first dimension is no longer distributed across CPUs
- perform the FFT along this dimension (no MPI required)
- revert the transpose again
- (perform FFTs along the other requested axes)
Internally this is exactly how FFTW handles this problem as well.
The transposition algorithm is not trivial, but certainly implementable without too many difficulties.
Doing this would also get rid of the special "fftw" distribution strategy.
Additional bonus: MPI FFTs would then also work with numpy FFT. The dependence on MPI-enabled FFTW would vanish, making NIFTy configuration simpler.https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/179Field.from_random crashes mpi2018-02-15T14:31:18ZJakob KnollmuellerField.from_random crashes mpiHi,
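The separability argument can be checked with plain numpy (no MPI involved): a multi-dimensional FFT agrees with chained 1-D FFTs along the individual axes, which is what makes the proposed transpose-based scheme valid.

```python
import numpy as np

rng = np.random.RandomState(42)
a = rng.randn(8, 16) + 1j * rng.randn(8, 16)

# FFT along axis 1 (local, no communication needed), then along axis 0.
# In the MPI version, the axis-0 step would be preceded and followed by
# a global transpose so that axis 0 is no longer distributed.
step1 = np.fft.fft(a, axis=1)
step2 = np.fft.fft(step1, axis=0)

assert np.allclose(step2, np.fft.fftn(a))
```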
I found another bug with mpi:
[minimal_example.py](/uploads/3863c7e25ac0661caab65a64bd964264/minimal_example.py)
the error I get is:
```
[MainThread][ERROR ] root Uncaught exception
Traceback (most recent call last):
File "minimal_example-3.py", line 16, in <module>
std=some_diagonal.val)
File "/usr/local/lib/python2.7/dist-packages/ift_nifty-3.1.0-py2.7.egg/nifty/field.py", line 251, in from_random
lambda shape: generator_function(dtype=f.dtype,
File "/usr/local/lib/python2.7/dist-packages/d2o/distributed_data_object.py", line 469, in apply_generator
self.set_local_data(generator(self.distributor.local_shape), copy=copy)
File "/usr/local/lib/python2.7/dist-packages/ift_nifty-3.1.0-py2.7.egg/nifty/field.py", line 253, in <lambda>
**random_arguments))
File "/usr/local/lib/python2.7/dist-packages/ift_nifty-3.1.0-py2.7.egg/nifty/random.py", line 50, in normal
x *= dtype.type(std)
ValueError: operands could not be broadcast together with shapes (16,64,64) (64,64,64) (16,64,64)
-------------------------------------------------------
Primary job terminated normally, but 1 process returned
a non-zero exit code.. Per user-direction, the job has been aborted.
-------------------------------------------------------
--------------------------------------------------------------------------
mpirun detected that one or more processes exited with non-zero status, thus causing
the job to be terminated. The first process to do so was:
Process name: [[29272,1],1]
Exit code: 1
--------------------------------------------------------------------------
```
I only get the error if the `std` passed to `from_random` is an array; for scalar numbers it seems to work.
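A plain-numpy sketch of the underlying failure (array shapes taken from the traceback above): each MPI task holds only a local slab of the field, while the supplied `std` still has the global shape, so the in-place multiplication cannot broadcast.

```python
import numpy as np

local_data = np.ones((16, 64, 64))   # local slab on one task
std = np.ones((64, 64, 64))          # std array with the GLOBAL shape

try:
    local_data *= std                # cannot broadcast (16,...) *= (64,...)
    broadcast_failed = False
except ValueError:
    broadcast_failed = True

assert broadcast_failed
```

A fix would presumably have to slice the user-supplied `std` down to the local shape before handing it to the generator.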
Jakobhttps://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/180Use MPCDF's runners for continuous integration?2017-11-13T09:39:16ZMartin ReineckeUse MPCDF's runners for continuous integration?It seems that MPCDF now offers shared runners (http://www.mpcdf.mpg.de/about-mpcdf/publications/bits-n-bytes?BB-View=196&BB-Doc=187).
Should we try to use these instead of Theo's machine?https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/181Logarithmic smoothing on non-PowerSpaces?2017-09-15T23:25:48ZMartin ReineckeLogarithmic smoothing on non-PowerSpaces?Currently, NIFTy allows to construct a `SmoothingOperator` like this:
```
import nifty as ift
op=ift.SmoothingOperator(ift.RGSpace(10),sigma=1.,log_distances=True)
```
I do not think this makes sense, as the RGSpace is periodic and there are no absolute distances.
This combination of parameters also leads to overflows because the code in FFTSmoothingOperator tries to compute log(0.).
Therefore I propose to ensure that `log_distances==False` whenever an FFTSmoothingOperator is constructed and to raise an exception otherwise.
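A minimal sketch of such a constructor guard (stub class, not the actual NIFTy implementation): reject `log_distances=True` at construction time instead of overflowing later on log(0.).

```python
# Sketch only: class body reduced to a stub to show the proposed check.
class FFTSmoothingOperatorSketch:
    def __init__(self, domain, sigma, log_distances=False):
        if log_distances:
            raise ValueError("log_distances=True is not supported: "
                             "the domain is periodic and log(0.) occurs")
        self.domain = domain
        self.sigma = sigma

try:
    FFTSmoothingOperatorSketch("RGSpace(10)", sigma=1., log_distances=True)
    raised = False
except ValueError:
    raised = True

assert raised
```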
`DirectSmoothingOperator` considers the inverse operation of "smoothing with sigma" to be "smoothing with 1/sigma", which is simply wrong.
`FFTSmoothingOperator` does in principle perform the correct operations, but does not check for division by zero, which occurs for most sigma values and indicates that the smoothing is in fact non-invertible in finite-precision arithmetic.
Unfortunately the test `test_inverse_adjoint_times` does not recognize this, because the results it compares are both NaN and therefore the same...
Since this operation appears to be unused, I'd suggest removing it ... doing it properly may be possible (at least for FFT smoothing), but it is only worth the effort if someone really needs it.
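To illustrate the finite-precision problem (plain numpy; the sigma value is arbitrary): the Gaussian kernel that FFT smoothing multiplies onto each Fourier mode underflows to exactly zero for large k, so dividing by it during inversion produces inf/NaN for those modes.

```python
import numpy as np

sigma = 2.0
k = np.arange(64.0)
kernel = np.exp(-0.5 * (k * sigma) ** 2)   # smoothing kernel per mode

# Some modes underflow to exactly 0 in double precision, so the
# "inverse smoothing" division by the kernel cannot work for them.
assert kernel[0] == 1.0
assert np.any(kernel == 0.0)
```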
Wouldn't it be helpful to adjust it in a way that it also takes an `Energy` object for minimization and passes its current energy to the callback in the same way the descent minimizers do? Then we could, for example, replace `ConjugateGradient` objects with, say, `VL_BFGS` ones and experiment much more flexibly with different minimizers.
Of course, `ConjugateGradient` would need a special subclass of `Energy` to work properly: it would, for example, need to invoke the linear operator within this energy directly. But that should be no fundamental problem ... after all, we already have some energy classes that provide `curvature` and some that don't.
I would be very interested to hear feedback!https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/184PowerSpace probing calls sugar.create_composed_fft_operator and dies2017-08-29T08:26:05ZPhilipp Arrasparras@mpa-garching.mpg.dePowerSpace probing calls sugar.create_composed_fft_operator and diesI narrowed the problem down to commit a09c6ed by @theos. The attached file executes until this commit and is broken afterwards.
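To make the idea concrete, here is a sketch (plain numpy, all names assumed) of the special `Energy` subclass such a harmonized `ConjugateGradient` could consume: for the quadratic energy E(x) = 0.5 x^T A x - b^T x, the gradient is A x - b and the curvature is the linear operator A itself, which CG can invoke directly.

```python
import numpy as np

class QuadraticEnergy:
    """Sketch of an Energy for E(x) = 0.5 x^T A x - b^T x."""
    def __init__(self, position, A, b):
        self.position, self.A, self.b = position, A, b

    def at(self, position):
        return QuadraticEnergy(position, self.A, self.b)

    @property
    def value(self):
        x = self.position
        return 0.5 * x @ self.A @ x - self.b @ x

    @property
    def gradient(self):
        return self.A @ self.position - self.b

    @property
    def curvature(self):
        return self.A          # the linear operator CG needs

A = np.eye(2)
b = np.array([1.0, -2.0])
e = QuadraticEnergy(np.zeros(2), A, b)
# the gradient vanishes at the minimum x = A^{-1} b
assert np.allclose(e.at(b).gradient, 0.0)
```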
Apparently, the prober tries to FFT something although the probing takes place in a PowerSpace, which does not admit an FFT.
[Critical+Filter.py](/uploads/7b0e6c37d18c6cdf932457da0ba682d1/Critical+Filter.py)https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/185Binbounds calculation in PowerSpace needs fixing2017-09-15T23:26:29ZMartin ReineckeBinbounds calculation in PowerSpace needs fixingCurrently, the initialization of a `PowerSpace` has several problems:
- when specifying `logarithmic=False`, the code behaves differently from the code before the `PowerSpace` reorganization two months ago (this is because I'm now interpreting the parameter `logarithmic=False` to mean "please use linear binning", instead of "please use natural binning", which was its original meaning).
- the "magic" formulae for computing nbins and binbounds are buggy (this was not really obvious before the reorganization, because they were not used in most circumstances).
- the algorithm tries to derive binbounds from insufficient user information (e.g. if the user specifies the number of bins, the code simply guesses the position of the first and last binbounds, and the guess is not necessarily intuitive).
I would strongly favor an approach where the user has to specify a set of parameters that defines the binbounds absolutely unambiguously. E.g.:
- by providing a binbounds array
- by providing a type of spacing (linear or logarithmic), a number of bins, and the first and last binbound
- by providing absolutely nothing, which leads to natural binning
There are very few cases where it could make sense for the code to guess the binbounds; the only situations that I can imagine are 1D RGSpaces and LMSpaces with linear binning. Here, there is an obvious choice for the number of bins and the placing of the binbounds.
Would it be OK if I introduced the necessary additional constraints?https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/186TransformationCache re-uses FFTs in situations where it shouldn't2017-09-28T01:59:35ZMartin ReineckeTransformationCache re-uses FFTs in situations where it shouldn't(This is mostly a reminder ... the problem should go away automatically once the `byebye_zerocenter` branch is merged.)
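A sketch of what an unambiguous constructor could compute (the function name and the nbins+1-bounds convention are assumptions for illustration, not NIFTy API): spacing type, number of bins, and first/last binbound together fix the bounds completely, with no guessing.

```python
import numpy as np

def make_binbounds(spacing, nbins, first, last):
    """Hypothetical helper: nbins bins delimited by nbins+1 bounds."""
    if spacing == "linear":
        return np.linspace(first, last, nbins + 1)
    if spacing == "logarithmic":
        return np.exp(np.linspace(np.log(first), np.log(last), nbins + 1))
    raise ValueError("spacing must be 'linear' or 'logarithmic'")

lin = make_binbounds("linear", 10, 0.0, 1.0)
log = make_binbounds("logarithmic", 10, 1e-3, 1e2)
assert len(lin) == 11 and len(log) == 11
assert np.allclose(np.diff(lin), 0.1)                    # equal spacing
assert np.allclose(log[1:] / log[:-1], log[1] / log[0])  # equal ratios
```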
Currently the TransformationCache for FFTs caches and re-uses transforms even when the harmonic base has changed in the meantime, which leads to wrong results. This can be avoided by simply not changing the harmonic base during a run, but this approach seems impractical, since we violate this constraint already in test_fft_operator.py.
Apparently, the whole test environment is not torn down and set up again between individual tests. I verified this by printing the size of the transformation cache whenever its create() method is called (line 27 in transformation_cache.py) and then ran
`nosetests test/test_operators/test_fft_operator.py -s`
I would have expected the length of the cache to be 1 all the time, but it went up beyond 100.
This is possibly also the cause for the strange behaviour in `test_power_synthesize_analyze()`.
To verify this, I suggest running the tests on master and on `byebye_zerocenter` and comparing the convergence behaviour.
import nifty as ift
f=ift.Field((),val=42)
print f.val
print f.dim
```
As expected, the Field contains a single value, but its `dim` property is 0, which seems incorrect and is also inconsistent with `numpy.ndarray`'s behavior for zero-dimensional arrays.https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/188Field.real/imag copy arrays, contrary to documentation2018-01-11T14:25:17ZMartin ReineckeField.real/imag copy arrays, contrary to documentation```
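For comparison, numpy's behavior for a zero-dimensional array: it has `ndim == 0` but still contains one value, so its `size` is 1, not 0.

```python
import numpy as np

a = np.array(42)     # zero-dimensional array holding a single value
assert a.ndim == 0
assert a.size == 1   # one element, not zero
assert a.item() == 42
```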
import nifty as ift
a=ift.RGSpace(10)
f=ift.Field(a,val=1.+1j)
f.real.val+=2
f.imag.val+=2
print f.val
```
This prints '1+1j' as the field value, i.e. the manipulations of real and imaginary parts did not affect the original field. This seems to contradict the documentation, which states that the data is not copied.https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/189PowerSpace volume factors2017-12-01T20:18:08ZMartin ReineckePowerSpace volume factorsAs mentioned in the NIFTy paper, there are many different ways to define volume factors for the PowerSpace.
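For comparison, numpy's `.real`/`.imag` on a complex array are views, so in-place modification through them does affect the original array, which is presumably the behavior the Field documentation promises.

```python
import numpy as np

a = np.full(10, 1.0 + 1.0j)
a.real += 2          # .real is a view into the complex array ...
a.imag += 2          # ... and so is .imag
assert np.all(a == 3.0 + 3.0j)
```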
If I remember correctly, the agreement that we reached some time ago was that `PowerSpace.weight()` should use a weight of 1, i.e. it should not do anything. However, the current implementation uses `rho` as weights.
I suggest to change this to 1 as discussed. The advantage of using this weight over the others is that the user does not need to do any additional corrections when they have to use a different weight; they can just apply their own volume factors without needing to undo anything the space did by default.https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/190DiagonalOperator calls nonexistent attribute adjoint()2017-09-15T23:36:07ZMartin ReineckeDiagonalOperator calls nonexistent attribute adjoint()`DiagonalOperator` contains the following code:
```
def _adjoint_times(self, x, spaces):
return self._times_helper(x, spaces,
operation=lambda z: z.adjoint().__mul__)
def _adjoint_inverse_times(self, x, spaces):
return self._times_helper(x, spaces,
operation=lambda z: z.adjoint().__rtruediv__)
```
The `z.adjoint()` expression does not work, since nothing in NIFTy has an .adjoint() attribute. So far we have not noticed this, apparently because all of our `DiagonalOperator`s were self-adjoint and these methods were not called.
I don't know what the correct version is. @theos?https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/191Field.power_synthesize(): some clarifications needed2017-12-09T14:10:49ZMartin ReineckeField.power_synthesize(): some clarifications neededI'm trying to understand the intricacies of `Field.power_synthesize()` and am encountering a few points that are unclear to me:
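For reference, a plain-numpy statement of what the adjoint should compute (this is not a patch): for D = diag(d), the adjoint multiplies by the complex conjugate of the diagonal, so a plausible fix would use `z.conjugate()` in place of the nonexistent `z.adjoint()`.

```python
import numpy as np

d = np.array([1 + 2j, 3 - 1j, 0.5 + 0j])   # diagonal of the operator
x = np.array([1j, 2.0, -1.0 + 1j])
D = np.diag(d)

# D^H x == conj(d) * x for a diagonal operator
assert np.allclose(D.conj().T @ x, np.conj(d) * x)
```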
- which combinations of `real_signal` and `real_power` are allowed? Specifically, is it allowed/sensible to have `real_signal==True` and `real_power==False`?
- if `self.dtype==float`, does it make sense to have `real_power==False`? In that case, `local_rescaler.imag` will become zero.
Once I understand all of this better, I'd volunteer to extend the docstring, and (if necessary) add a few sanity checks to the code.https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/192Drawing gaussian random fields over multiple domains2017-09-23T07:14:41ZPumpe, Daniel (dpumpe)Drawing gaussian random fields over multiple domainsHi,
currently I am facing some inconsistencies when drawing Gaussian random fields over multiple domains. I have not yet found the root of these inconsistencies. However, in the minimal test example below, `s_1` and `s_2` match in neither mean nor variance, even though they should be drawn from the same Gaussian, just in two different ways. Furthermore, none of the drawn fields (`s_1`, `s_2`) obeys the desired spectra.
What am I doing wrong or missing?
```
from nifty import *
x_0 = RGSpace(50, zerocenter=False, distances=.5)
k_0 = RGRGTransformation.get_codomain(x_0)
p_0 = PowerSpace(harmonic_partner=k_0)
x_1 = RGSpace(75, zerocenter=False, distances=.25)
k_1 = RGRGTransformation.get_codomain(x_1)
p_1 = PowerSpace(harmonic_partner=k_1)
fft_0 = FFTOperator(domain=x_0, target=k_0)
fft_1 = FFTOperator(domain=x_1, target=k_1)
p_spec_0 = Field(p_0, val=lambda l: 1/(1+l)**3)
p_spec_1 = Field(p_1, val=lambda k: 10/(1+k)**5)
def random_field_method_1():
w = Field((x_0, x_1), val=0.)
for ii in xrange(100):
outer = Field((p_0, p_1), val=np.outer(p_spec_0.val, p_spec_1.val))
sk = outer.power_synthesize()
w += fft_0.inverse_times(fft_1.inverse_times(sk, spaces=1), spaces=0)
return w/100.
def random_field_method_2():
w = Field((x_0, x_1), val=0.)
S_0 = create_power_operator(k_0, sqrt(p_spec_0))
S_1 = create_power_operator(k_1, sqrt(p_spec_1))
for ii in xrange(100):
rand = Field.from_random('normal', domain=(x_0, x_1))
rand_k = fft_0.times(fft_1.times(rand, spaces=1), spaces=0)
sk = S_0.times(S_1.times(rand_k, spaces=1), spaces=0)
w += fft_0.inverse_times(fft_1.inverse_times(sk, spaces=1), spaces=0)
return w/100.
def analyze_power(signal):
sk = fft_0.times(fft_1.times(signal,spaces=1), spaces=0)
power = sk.power_analyze()
p_drawn_0 = power.sum(spaces=1)/p_spec_1.sum()
p_drawn_1 = power.sum(spaces=0)/p_spec_0.sum()
return p_drawn_0, p_drawn_1
s_1 = random_field_method_1()
s_2 = random_field_method_2()
method_1_power_0, method_1_power_1 = analyze_power(s_1)
method_2_power_0, method_2_power_1 = analyze_power(s_2)
```https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/193Field method; Field.integrate2018-01-11T13:31:00ZPumpe, Daniel (dpumpe)Field method; Field.integrateMartin, Sebastian and I think that it would be advantageous to have a Field method, `.integrate(spaces)` in case one has to integrate over one or multiple dimensions of a field. `.integrate` would thereby take care of all necessary volum...Martin, Sebastian and I think that it would be advantageous to have a Field method, `.integrate(spaces)` in case one has to integrate over one or multiple dimensions of a field. `.integrate` would thereby take care of all necessary volume factors as they appear in integrals.
What do you think about it (@reimar, @kjako, @theos)?
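A minimal sketch of what such a method could do (function name and conventions are assumptions for illustration, not a proposed NIFTy signature): a sum over the integrated axis, weighted by the pixel volume of that dimension.

```python
import numpy as np

def integrate(values, distances, axis):
    """Hypothetical helper: sum over `axis`, weighted by pixel size."""
    return values.sum(axis=axis) * distances

# a constant field of 1 on a 4x5 grid with pixel size 0.5 along axis 0
vals = np.ones((4, 5))
result = integrate(vals, 0.5, axis=0)   # integral over length 4*0.5 = 2
assert result.shape == (5,)
assert np.allclose(result, 2.0)
```

On product domains the real method would of course have to pick the correct `distances` per integrated space, which is exactly the bookkeeping that makes a built-in `.integrate(spaces)` attractive.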
line 103, in _add_attributes_to_copy copy = super(DiagonalOperator, self)._add_attributes_to_copy(copy,
TypeError: super(type, obj): obj must be an instance or subtype of type
I guess the line should read:
copy = super(ResponseOperator, self)._add_attributes_to_copy(copy, **kwargs)https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/195Announce merge dates from nightly branch to master2018-01-11T14:24:22ZMartin ReineckeAnnounce merge dates from nightly branch to masterThe tentative plan is to merge new (potentially disruptive) changes from nightly to master after they have been on the nightly branch for about two weeks.
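The error can be reproduced with toy classes: `super(T, obj)` requires `obj` to be an instance (or instance of a subtype) of `T`, which fails when the wrong class is named, as in the `ResponseOperator` code above.

```python
class A(object):
    pass

class B(object):
    pass

try:
    super(A, B())   # B() is unrelated to A -> TypeError
    raised = False
except TypeError:
    raised = True

assert raised
```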
It might still be helpful to announce a precise date when the next merge of this kind is planned. @Theos, what's your schedule?https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/196What does Field.vdot() do if the spaces keyword is present?2018-01-11T13:30:42ZMartin ReineckeWhat does Field.vdot() do if the spaces keyword is present?What is the expected result if we call
`a.vdot(b,spaces=0)`
where a is a Field living on one space and b is a Field living on two spaces?
I'm asking because the documentation says that the result should be float or complex, but the code actually returns a Field object.
Is there an undocumented constraint that both Fields must have the same number of domains?https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/197Error found by dimensional analysis2018-01-26T14:55:37ZMartin ReineckeError found by dimensional analysisThe test case below was distilled from `wiener_filter_via_curvature.py` by Torsten and me.
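The plain-numpy analogue (a sketch, not NIFTy semantics): contracting over only the first axis of `b` leaves a residual array rather than a scalar, which would be consistent with the code returning a Field.

```python
import numpy as np

a = np.arange(3.0)                  # "field" on one space
b = np.arange(12.0).reshape(3, 4)   # "field" on two spaces

# contract only over the shared first axis (with conjugation, as vdot does)
partial = np.tensordot(np.conj(a), b, axes=([0], [0]))
assert partial.shape == (4,)        # an array remains, not a scalar
```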
It indicates that there is a unit error in the way we compute the mock signal.
```
import numpy as np
import nifty as ift
import numericalunits as nu
np.random.seed(43)
correlation_length = 0.05*nu.m
field_sigma = 2.* nu.K
response_sigma = 0.01*nu.m
# stupid power spectrum, but we only care about units now
def power_spectrum(k):
a = 4 * correlation_length * field_sigma**2
return a / (1 + k * correlation_length) ** 4
# Total side-length of the domain
L = 2.*nu.m
# Grid resolution (pixels per axis)
N_pixels = 512
signal_space = ift.RGSpace([N_pixels], distances=L/N_pixels)
harmonic_space = ift.FFTOperator.get_default_codomain(signal_space)
fft = ift.FFTOperator(harmonic_space, target=signal_space)
power_space = ift.PowerSpace(harmonic_space)
mock_power = ift.Field(power_space, val=power_spectrum(power_space.kindex))
mock_harmonic = mock_power.power_synthesize(real_signal=True)
mock_signal = fft(mock_harmonic)
print "signal mean (measured in K):", mock_signal.mean()/nu.K
print "signal mean (measured in K/sqrt(m)): ", mock_signal.mean()*np.sqrt(nu.m)/nu.K
```
If this code is run several times (I tested with the `nightly` branch), the first printed number changes, but the second one does not, which means that the unit of the mock_signal field is actually K/sqrt(m), which is not the intended one.
My guess is that we either have a dimensional error in `power_spectrum()` or in `Field.power_synthesize()`.https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/99Tests: Energy class2018-01-30T20:40:21ZTheo SteiningerTests: Energy classhttps://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/97Tests: Operators2018-01-18T15:55:29ZTheo SteiningerTests: Operatorshttps://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/96Tests: Field class2018-01-19T08:55:09ZTheo SteiningerTests: Field class2018-01-19https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/93Docstring: Probing2018-02-14T15:13:33ZTheo SteiningerDocstring: Probinghttps://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/91Docstring: NIFTy config & logging2018-01-19T08:54:57ZTheo SteiningerDocstring: NIFTy config & logging2018-01-19https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/85Docstring: NIFTy2018-02-21T21:15:54ZTheo SteiningerDocstring: NIFTyhttps://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/198Design problem in InvertibleOperatorMixin2018-01-26T15:02:39ZMartin ReineckeDesign problem in InvertibleOperatorMixinThe constructor for `InvertibleOperatorMixin` currently allows specifying iteration starting values via `forward_x0` and `backward_x0`. This is problematic for several reasons:
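For readers unfamiliar with the trick: numericalunits assigns a random positive number to each unit per run, so a dimensionally consistent quantity becomes run-invariant once its unit is divided out, while an inconsistent one changes between runs. A self-contained sketch of the mechanism (not using the package itself):

```python
import numpy as np

def run(seed):
    rng = np.random.RandomState(seed)
    m = rng.uniform(0.5, 2.0)    # "metre" for this run
    K = rng.uniform(0.5, 2.0)    # "kelvin" for this run
    length = 3.0 * m             # a quantity that IS a length
    return length / m, length / K

(a1, b1), (a2, b2) = run(1), run(2)
assert np.isclose(a1, a2)        # dividing by the right unit: invariant
assert not np.isclose(b1, b2)    # dividing by the wrong unit: changes
```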
- these vectors have to live over the full domain of a later operator application call, and thereby restrict the general applicability of the operator which is one of Nifty's current design goals.
- `forward_x0` is used in `times()` and `adjoint_inverse_times()` although it is most likely not a good guess for at least one of both. Analogously for `backward_x0`.
- A starting guess is directly coupled to a concrete argument for which the operator is invoked; it is not a property of the operator, but of the operator/argument combination. So if one sets up an `InvertibleOperatorMixin` using the currently available tools, it will only work well for a single specific field.
I would much prefer being able to supply a starting guess with every `times()` (or `adjoint_times()` etc.) call; I'm thinking about a clean approach for this. Suggestions are welcome!https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/199Fairly urgent: decide on convention for diagonal in DiagonalOperator2017-11-16T10:02:14ZMartin ReineckeFairly urgent: decide on convention for diagonal in DiagonalOperatorRecently I removed the last remaining uses of the `bare` keyword in some methods of `DiagonalOperator`, and this change has now been merged into the `nightly` branch. The code now behaves as if `bare=False`, which was the default before....Recently I removed the last remaining uses of the `bare` keyword in some methods of `DiagonalOperator`, and this change has now been merged into the `nightly` branch. The code now behaves as if `bare=False`, which was the default before.
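A sketch of the per-call interface I have in mind (all names are assumptions, and the "inversion" below is a trivial diagonal stand-in for the real iterative scheme): the starting guess travels with the argument it belongs to, instead of being fixed at construction time.

```python
import numpy as np

class InvertibleOperatorSketch:
    """Stand-in operator: multiplication by a diagonal."""
    def __init__(self, diag):
        self._diag = diag

    def inverse_times(self, x, x0=None):
        # x0: optional starting guess for THIS argument; a real
        # implementation would hand it to the iterative inverter.
        start = np.zeros_like(x) if x0 is None else x0
        # one Jacobi step, which is exact for a diagonal operator
        return start + (x - self._diag * start) / self._diag

op = InvertibleOperatorSketch(np.array([2.0, 4.0]))
y = np.array([2.0, 8.0])
assert np.allclose(op.inverse_times(y), [1.0, 2.0])
assert np.allclose(op.inverse_times(y, x0=np.array([1.0, 2.0])), [1.0, 2.0])
```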
However Torsten argues that it would be more natural to behave as if `bare=True`.
Either is fine with me, but we need to decide very quickly, because the first people have started adapting to `nightly` and will be unhappy if they have to change their code again!