ift issues (https://gitlab.mpcdf.mpg.de/groups/ift/-/issues)

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/73
creation of distributed_data_object without dtype (Martin Reinecke, 2017-06-13T07:44:11Z)

In rg_space.py and power_space.py there are calls to the constructor of "distributed_data_object" which do not specify a "dtype" argument.
This is accepted by D2O, but an annoying message is printed to the console.
I would suggest producing a hard error whenever the user tries to create a D2O object without specifying a type. However, for the time being: is it correct to specify "dtype=np.float64" in the two cases I mentioned?
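The suggested hard error could look like the following plain-numpy sketch (`make_d2o` is a hypothetical helper, not D2O's actual constructor):

```python
import numpy as np

def make_d2o(global_shape, dtype=None):
    """Sketch: refuse to construct a distributed array without an explicit dtype."""
    if dtype is None:
        raise ValueError("distributed_data_object requires an explicit dtype")
    return np.zeros(global_shape, dtype=np.dtype(dtype))

arr = make_d2o((4, 4), dtype=np.float64)  # explicit dtype: fine
```

Calling `make_d2o((4, 4))` without a dtype then fails loudly instead of printing a warning.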
A related question: inside the distance_array-related functions of RGSpace and LMSpace, the data type np.float128 is used. Is that deliberate or an oversight?

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/37
Define NIFTy-wide comm in nifty_configuration. (Theo Steininger, 2017-06-13T07:44:57Z)

Use this communicator globally for all NIFTy objects. Inherit to all D2O arrays. Pass forward to pyfftw.
Assignee: Theo Steininger

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/31
Spaces & Operators: Protect attributes that should not be altered after init (Theo Steininger, 2017-06-21T04:52:39Z)

The user is able to change attributes that produce inconsistent results:
```
from nifty import *
x = rg_space((100, 100))
x.dtype = np.complex
```
This conflicts with `x.complexity`.
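A read-only `property` would turn such assignments into hard errors; a minimal plain-Python sketch (`ProtectedSpace` is a hypothetical stand-in, not NIFTy's actual rg_space):

```python
import numpy as np

class ProtectedSpace(object):
    def __init__(self, shape, dtype=np.float64):
        self._shape = tuple(shape)
        self._dtype = np.dtype(dtype)

    @property
    def dtype(self):
        # no setter defined: assigning to x.dtype raises AttributeError
        return self._dtype

x = ProtectedSpace((100, 100))
print(x.dtype)  # reading still works
```

With this pattern, `x.dtype = np.complex128` raises an AttributeError instead of silently making the object inconsistent.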
Solution: Protect all critical attributes with a `property` decorator.

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/12
Remove np support in point_space (Theo Steininger, 2017-06-21T04:52:39Z)

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/142
ProductSpaces drawing random fields and checking their spectra (Pumpe, Daniel (dpumpe), 2017-06-21T07:57:40Z)

Hi, currently I am testing ProductSpace reconstructions (PLN over RGSpace (x) RGSpace). I am facing a problem concerning the spectra. For a simple test scenario I would like to check the power of a field drawn from two spectra, as in the following:
```
from nifty import *
np.random.seed(42)
#########################################################################
# setting space one
#########################################################################
npix_0 = np.array([200]) # number of pixels
total_volume_0 = 1. # total length
# setting signal parameters
lambda_s_0 = .5 # signal correlation length
sigma_s_0 = 1. # signal variance
#setting length smoothing kernel
kernel_0 = 0.
# calculating parameters
k0_0 = 4./(2*np.pi*lambda_s_0)
a_s_0 = sigma_s_0**2.*lambda_s_0*total_volume_0
# creation of spaces
x_0 = RGSpace(npix_0, distances=total_volume_0/npix_0,
zerocenter=False)
k_0 = RGRGTransformation.get_codomain(x_0)
p_0 = PowerSpace(harmonic_partner=k_0, logarithmic=False)
# creating Power Operator with given spectrum
spec_0 = (lambda k: a_s_0/(1+(k/k0_0)**2)**2)
p_field_0 = Field(p_0, val=spec_0, dtype=np.float64)
# creating FFT-Operator and Response-Operator with Gaussian convolution
FFTOp_0 = FFTOperator(domain=x_0, target=k_0)
#########################################################################
# setting space two
#########################################################################
npix_1 = np.array([100]) # number of pixels
total_volume_1 = 1. # total length
# setting signal parameters
lambda_s_1 = .05 # signal correlation length
sigma_s_1 = 2. # signal variance
#setting length smoothing kernel
kernel_1 = 0.
# calculating parameters
k0_1 = 4./(2*np.pi*lambda_s_1)
a_s_1 = sigma_s_1**2.*lambda_s_1*total_volume_1
# creation of spaces
x_1 = RGSpace(npix_1, distances=total_volume_1/npix_1,
zerocenter=False)
k_1 = RGRGTransformation.get_codomain(x_1)
p_1 = PowerSpace(harmonic_partner=k_1, logarithmic=False)
# creating Power Operator with given spectrum
spec_1 = (lambda k: a_s_0/(1+(k/k0_1)**2)**2)
p_field_1 = Field(p_1, val=spec_1, dtype=np.float64)
# creating FFT-Operator and Response-Operator with Gaussian convolution
FFTOp_1 = FFTOperator(domain=x_1, target=k_1)
#########################################################################
# drawing a random field with above defined correlation structure
#########################################################################
fp_0 = Field(p_0, val=(lambda k: spec_0(k)**1/2.))
fp_1 = Field(p_1, val=(lambda k: spec_1(k)**1/2.))
outer = np.outer(fp_0.val.data, fp_1.val.data)
fp = Field((p_0, p_1), val=outer)
sk = fp.power_synthesize(spaces=(0,1), real_signal=True)
```
The question is: how do I recover fp_0 and fp_1 from sk (i.e. simply the power of the drawn field)?
In my mind this should be:
```
def test_product_space_power(x):
    samples = 100
    ps_0 = Field(p_0, val=0.)
    ps_1 = Field(p_1, val=1.)
    for ii in xrange(samples):
        sk = x.power_synthesize(spaces=(0, 1), real_signal=True)
        p0, p1 = analyze_sk(sk)
        ps_1 += p1
        ps_0 += p0
    p_0_test_result = ps_0/samples
    p_1_test_result = ps_1/samples
    return p_0_test_result, p_1_test_result

def analyze_sk(x):
    sp = x.power_analyze(spaces=0, decompose_power=False)\
         .power_analyze(spaces=1, decompose_power=False)
    p0 = sp.sum(spaces=1)/fp_1.sum()
    p1 = sp.sum(spaces=0)/fp_0.sum()
    return p0, p1

def analyze_fp(x):
    p0 = x.sum(spaces=1)/fp_1.sum()
    p1 = x.sum(spaces=0)/fp_0.sum()
    return p0, p1
```
However, it does not seem to be correct. Analysing the power of sk with `test_product_space_power(fp)`, the power in the direction of space 0 seems to have a constant offset, while the power in the direction of space 1 seems to have the wrong shape. `analyze_fp(fp)` only serves as a double check that the covariance from which sk is actually drawn is the appropriate one for fp_0 and fp_1.
Am I using `.power_analyze` wrong, or are we facing an issue when we draw the random fields?
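As a sanity check on the normalization alone: for an exactly separable array, the strategy of analyze_fp (sum out one axis, divide by the other factor's sum) does recover each factor, as this plain-numpy sketch with hypothetical stand-in spectra shows:

```python
import numpy as np

a = 1.0/(1.0 + np.arange(200))**2   # stand-in for fp_0
b = 2.0/(1.0 + np.arange(100))      # stand-in for fp_1
fp = np.outer(a, b)                 # exactly separable product field

# outer(a, b).sum(axis=1) == a * b.sum(), so dividing by b.sum()
# recovers a exactly (and analogously for b)
p0 = fp.sum(axis=1)/b.sum()
p1 = fp.sum(axis=0)/a.sum()
print(np.allclose(p0, a), np.allclose(p1, b))
```

So any discrepancy seen for the drawn samples must come from the synthesize/analyze round trip, not from this normalization identity itself.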
Thanks a lot for your help!

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/133
SmoothingOperator causes log(0.) for logarithmic distances (Theo Steininger, 2017-06-21T07:58:05Z)

https://gitlab.mpcdf.mpg.de/ift/NIFTy/blob/master/nifty/operators/smoothing_operator/fft_smoothing_operator.py#L27
Smoothing should be translationally invariant. Couldn't we just add 1.0 to the distance array?
Assignee: Martin Reinecke

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/132
DiagonalOperator: trace_log() and log_determinant() (Martin Reinecke, 2017-06-21T08:02:59Z)

trace_log() and log_determinant() should throw exceptions if the trace/determinant is <= 0.

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/154
RGSpace smoothing width (Theo Steininger, 2017-06-23T23:54:59Z)

I've got the impression that the smoothing width for RGSpace is currently not correct in master.
Please try this code:
```
from nifty import *
x = RGSpace((256,), distances=1.)
f = Field(x, val=0)
f.val[100] = 1
sm = SmoothingOperator(x, sigma=50.)
plo = plotting.RG1DPlotter(title='test.html')
plo(sm(f))
g = sm(f)
g.sum()
```
The Gaussian bell is only half as broad as it should be. A fix is given in dc16a255d1b6d63db837278dc35d1cc8409b70b6.
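Independently of NIFTy, the expected width can be checked with plain numpy: FFT-smoothing a delta peak with the kernel exp(-2·pi^2·sigma^2·k^2) must produce a Gaussian of standard deviation sigma in real space.

```python
import numpy as np

npix, sigma = 256, 5.0
f = np.zeros(npix)
f[128] = 1.0                           # delta peak

k = np.fft.fftfreq(npix, d=1.0)        # frequencies for distances=1.
kernel = np.exp(-2.0*np.pi**2*sigma**2*k**2)
g = np.fft.ifft(np.fft.fft(f)*kernel).real

# measure the standard deviation of the smoothed peak
x = np.arange(npix)
mean = (g*x).sum()/g.sum()
std = np.sqrt((g*(x - mean)**2).sum()/g.sum())
print(std)  # should be close to sigma
```

A smoothing operator whose output delta has std != sigma (e.g. sigma/2, as reported here) would fail this check.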
@dpumpe Could you please check if this makes sense?
@mtr Could you please check if the formula for smoothing on the sphere is correct? https://gitlab.mpcdf.mpg.de/ift/NIFTy/blob/master/nifty/spaces/lm_space/lm_space.py#L169
Assignee: Pumpe, Daniel (dpumpe)

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/108
Docstring: FFTOperator (Theo Steininger, 2017-07-01T11:06:18Z)
Assignee: Martin Reinecke

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/144
Arrays/D2Os as DomainObject attributes (Martin Reinecke, 2017-07-06T10:17:14Z)
In an experimental branch I made "binbounds" an attribute of PowerSpace, and for efficiency reasons decided to store it as a Numpy array instead of a tuple.
This causes problems with the equality comparison operators of DomainObject, because Numpy arrays need to be compared via
`np.all(arr1==arr2)`
instead of
`arr1==arr2`
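A plain-numpy illustration of the failure mode, plus one possible comparison helper (`attrs_equal` is a hypothetical sketch, not the actual DomainObject code):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([1.0, 2.0, 3.0])

# the elementwise result cannot be used directly in a boolean context
try:
    if a == b:
        pass
except ValueError:
    print("ambiguous truth value")

def attrs_equal(x, y):
    """Compare attributes that may be tuples or numpy arrays."""
    if isinstance(x, np.ndarray) or isinstance(y, np.ndarray):
        return np.array_equal(x, y)   # also handles shape mismatches
    return x == y

print(attrs_equal(a, b), attrs_equal((1, 2), (1, 2)))
```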
The situation for D2Os may be similar.
Should I extend the comparisons in DomainObject accordingly, or do we not want attributes of these types in general?

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/126
Trouble with SteepestDescent. (Matevz, Sraml (sraml), 2017-07-08T11:37:42Z)
```
np.random.seed(0)
N_dim = 500
x_space = RGSpace(N_dim)
x = Field(x_space, val=np.random.rand(N_dim))
N = DiagonalOperator(x_space, diagonal=1.)

class QuadraticPot(Energy):
    def __init__(self, position, N):
        super(QuadraticPot, self).__init__(position)
        self.N = N

    def at(self, position):
        return self.__class__(position, N=self.N)

    @property
    def value(self):
        H = 0.5*self.position.dot(self.N.inverse_times(self.position))
        return H.real

    @property
    def gradient(self):
        g = self.N.inverse_times(self.position)
        return_g = g.copy_empty(dtype=np.float)
        return_g.val = g.val.real
        return return_g

    @property
    def curvature(self):
        return self.N

minimizer = SteepestDescent(iteration_limit=1000, convergence_tolerance=1E-4, convergence_level=3)
energy = QuadraticPot(position=x, N=N)
(energy, convergence) = minimizer(energy)
```
I'm feeding the SteepestDescent method a quadratic function with zero mean. If you run it, it converges; it needs around 5 iterations, but after that it enters a loop and is trapped in it until it reaches the iteration_limit=1000. The produced result is still correct.

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/156
fixed points in harmonic RGSpace (Martin Reinecke, 2017-07-08T16:23:01Z)

When looking at _hermitianize_correct_variance in rg_space.py, I wondered whether the determination of fixed points is correct. Assuming a 1D case, the code determines two fixed points: one at 0 and one at N/2. But as far as I understand, the one at N/2 is only a fixed point if N is an even number.
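This matches the plain-FFT picture: for real input of length N, mode k is conjugate to mode N-k (mod N), so a mode is "fixed" (forced to be real) exactly when 2k = 0 (mod N), i.e. k = 0 always, and k = N/2 only when N is even. A numpy check, independent of NIFTy:

```python
import numpy as np

def fixed_modes(N):
    """Modes k with k == -k (mod N), i.e. forced real for real input."""
    return [k for k in range(N) if (2*k) % N == 0]

print(fixed_modes(8))  # 0 and N/2 for even N
print(fixed_modes(7))  # only 0 for odd N

# cross-check: the imaginary part of the FFT of a real signal vanishes there
spec = np.fft.fft(np.random.rand(7))
assert abs(spec[0].imag) < 1e-12
```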
Please let me know if I'm confusing different things here!

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/159
Bug in Field._hermitian_decomposition? (Martin Reinecke, 2017-07-08T16:23:19Z)
The beginning of this method looks like this:
```
@staticmethod
def _hermitian_decomposition(domain, val, spaces, domain_axes,
preserve_gaussian_variance=False):
# hermitianize for the first space
(h, a) = domain[spaces[0]].hermitian_decomposition(
val,
domain_axes[spaces[0]],
preserve_gaussian_variance=preserve_gaussian_variance)
# hermitianize all remaining spaces using the iterative formula
for space in xrange(1, len(spaces)):
(hh, ha) = domain[space].hermitian_decomposition(
h,
domain_axes[space],
preserve_gaussian_variance=False)
(ah, aa) = domain[space].hermitian_decomposition(
a,
domain_axes[space],
preserve_gaussian_variance=False)
```
Why is `preserve_gaussian_variance` hardwired to `False` for all spaces other than the first?

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/157
RGSpace.hermitian_decomposition() is incorrect for odd grid lengths (Martin Reinecke, 2017-07-08T21:05:30Z)
```
from nifty import *
import numpy as np

def test_hermitianity(length):
    arr = np.random.random(length) + np.random.random(length)*1j
    s = RGSpace(length)
    x1, x2 = s.hermitian_decomposition(arr)
    print x1

test_hermitianity(3)
```
The result should be three numbers; the first should only have a real part, and the second and third should be mutually complex conjugate.
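For zerocenter=False the conjugate partner of index i is index (-i) mod N, so the expected hermitian part can be computed with plain numpy (this is the defining formula, not NIFTy's implementation):

```python
import numpy as np

np.random.seed(0)
N = 3
arr = np.random.random(N) + 1j*np.random.random(N)

# hermitian part: h[i] = (arr[i] + conj(arr[(-i) % N]))/2;
# np.roll(arr[::-1], 1) realizes the index map i -> (-i) % N
h = (arr + np.roll(arr[::-1], 1).conj())/2.0

print(h[0].imag)             # ~0: index 0 is its own partner
print(np.conj(h[1]) - h[2])  # ~0: indices 1 and 2 are partners
```

For odd N this gives exactly one purely real entry (index 0) and conjugate pairs everywhere else, which is the behavior the test above should show.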
This looks like the code implicitly (and incorrectly) assumes `zerocenter=True`.

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/160
VL_BFGS minimizer doesn't like HPSpace and LMSpace (Matevz, Sraml (sraml), 2017-07-09T23:04:22Z)
The VL_BFGS minimizer has problems with the minimization of HPSpace and LMSpace. If we take a quadratic potential

```
np.random.seed(0)
N_dim = 500
# x_space = HPSpace(N_dim)
x_space = LMSpace(N_dim)
x = Field(x_space, val=np.random.rand(N_dim))
N = DiagonalOperator(x_space, diagonal=1.)

class QuadraticPot(Energy):
    def __init__(self, position, N):
        super(QuadraticPot, self).__init__(position)
        self.N = N

    def at(self, position):
        return self.__class__(position, N=self.N)

    @property
    def value(self):
        H = 0.5*self.position.dot(self.N.inverse_times(self.position))
        return H.real

    @property
    def gradient(self):
        g = self.N.inverse_times(self.position)
        return_g = g.copy_empty(dtype=np.float)
        return_g.val = g.val.real
        return return_g

    @property
    def curvature(self):
        return self.N

minimizer = VL_BFGS(iteration_limit=1000, convergence_tolerance=1E-4, convergence_level=3)
energy = QuadraticPot(position=x, N=N)
print energy.value
(energy, convergence) = minimizer(energy)
print energy.value
```
it doesn't converge to 0. In the example above, the energy of the system is only reduced to half of the starting energy. I think I'm lacking a piece of knowledge to understand this problem.

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/158
Change or rename hermitian_decomposition() (Martin Reinecke, 2017-07-11T08:00:28Z)
I'm opening this as a separate issue, because it is design-related, not code-related.
In my opinion, the name "hermitian_decomposition" is misleading for the operation that the Space and Field methods currently perform. Of any data array there is exactly one hermitian decomposition, so it cannot depend on the value of additional flags like "preserve_gaussian_variance". I suggest moving the necessary manipulations for variance preservation into a separate routine.
Alternatively the method name could be changed, but I can't think of anything convincing.

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/164
Another Field.vdot() bug... (Martin Reinecke, 2017-07-11T08:18:02Z)
```
from nifty import *
import numpy as np

s = RGSpace((10,))
f1 = Field.from_random("normal", domain=s, dtype=np.complex128)
f2 = Field.from_random("normal", domain=s, dtype=np.complex128)
print f1.vdot(f2)
print f1.vdot(f2, spaces=0)
```

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/127
Find good convergence criterion for DescentMinimizer (Theo Steininger, 2017-07-11T08:26:57Z)
At the moment the `convergence number` is defined as `delta = abs(gradient).max() * (step_length/gradient_norm)`.
Are there better measures? For a simple quadratic potential, this definition of `delta` causes a steepest descent to not trust the (actual true) minimum.
A good measure should be independent of scale and initial conditions.
Some possible candidates are:
* `(new_energy.position - energy.position).norm()`
* `(new_energy.position - energy.position).max()`
* `((new_energy.position - energy.position)/(1+energy.position)).norm()`
* `((new_energy.position - energy.position)/(1+energy.position)).max()`
* `step_length`
* `step_length / energy.position.norm()`
* `gradient.norm()`

Assignee: Martin Reinecke

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/170
CriticalPowerEnergy has incomplete at() method (Martin Reinecke, 2017-07-19T11:57:02Z)
The `at()` method in `CriticalPowerEnergy` does not pass the `logarithmic` property to the newly built object, which results in a (potentially) different T inside that object.
What's the best solution to fix this? Store `logarithmic` explicitly to have it available in the `at()` method?
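Storing the flag and forwarding it in at() could look like this sketch (hypothetical constructor arguments, not the actual CriticalPowerEnergy signature):

```python
class CriticalPowerEnergySketch(object):
    def __init__(self, position, logarithmic=False):
        self.position = position
        self.logarithmic = logarithmic  # keep the flag around ...

    def at(self, position):
        # ... so it survives the jump to a new position
        return self.__class__(position, logarithmic=self.logarithmic)

e = CriticalPowerEnergySketch(position=0.0, logarithmic=True)
e2 = e.at(1.0)
print(e2.logarithmic)  # flag is preserved across at()
```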
@kjako, any preferences?

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/137
Limit critical plotting functions to MPI.rank==0. (Theo Steininger, 2017-07-22T23:30:09Z)