ift issues
https://gitlab.mpcdf.mpg.de/groups/ift/-/issues

Issue 154: RGSpace smoothing width
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/154
Theo Steininger, 2017-06-23T23:54:59Z

I've got the impression that the smoothing width for RGSpace is currently not correct in master.
Please try this code:
```
from nifty import *
x = RGSpace((256,), distances=1.)
f = Field(x, val=0)
f.val[100] = 1
sm = SmoothingOperator(x, sigma=50.)
plo = plotting.RG1DPlotter(title='test.html')
plo(sm(f))
g = sm(f)
g.sum()
```
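Independent of NIFTy, the expected behaviour can be checked by convolving a delta spike with a Gaussian kernel in harmonic space and measuring the standard deviation of the result. This is a plain-numpy sketch (not NIFTy's code path), with sigma shrunk to 5 so that boundary effects are negligible:

```python
import numpy as np

n, sigma = 256, 5.0
f = np.zeros(n)
f[n // 2] = 1.0

# Gaussian smoothing in harmonic space: multiply by exp(-2 pi^2 sigma^2 k^2)
k = np.fft.fftfreq(n, d=1.0)
g = np.fft.ifft(np.fft.fft(f) * np.exp(-2. * np.pi**2 * sigma**2 * k**2)).real

# the smoothed spike should be a Gaussian bell of standard deviation sigma
x = np.arange(n)
mean = np.sum(x * g) / np.sum(g)
std = np.sqrt(np.sum((x - mean)**2 * g) / np.sum(g))
print(round(std, 3))
```

A correct smoother recovers the input sigma; a smoother that is off by a factor of two shows up immediately in the measured width.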
The Gaussian bell is only half as broad as it should be. A fix is given in dc16a255d1b6d63db837278dc35d1cc8409b70b6.
@dpumpe Could you please check whether this makes sense?
@mtr Could you please check whether the formula for smoothing on the sphere is correct? https://gitlab.mpcdf.mpg.de/ift/NIFTy/blob/master/nifty/spaces/lm_space/lm_space.py#L169
Assignee: Pumpe, Daniel (dpumpe)

Issue 153: log.log file too large
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/153
Christoph Lienhard, 2017-09-15T23:26:04Z

When executing a program using NIFTy a log.log file is created. This file tends to get very large.
Could you provide a tutorial explaining how to keep the size of the log.log file to a minimum, or how to prevent NIFTy from even creating one?

Issue 152: Clarification request for ComposedOperator
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/152
Martin Reinecke, 2017-08-15T08:04:01Z

I think that there are two different ways to interpret the meaning of "ComposedOperator", and I'm not sure whether both of them are supported by Nifty at the moment:
Scenario 1:
Say you have an input field F living on an RGSpace. If you want to
apply an FFT, then some scaling, and then an inverse FFT again to this field,
is it possible to combine these three operators into a single ComposedOperator?
It doesn't seem that this works at the moment, because the "domain" property of
such a ComposedOperator returns a triple of spaces, and the input field only
lives on a single space.
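Scenario 1 is ordinary function composition: each operator's target must equal the next operator's domain, and the chain as a whole keeps the first domain and the last target. A minimal sketch with hypothetical operator objects (not NIFTy's actual classes):

```python
class ChainOperator:
    """Applies op_n(...op_2(op_1(x))...)."""
    def __init__(self, *ops):
        # each operator's output space must equal the next operator's input space
        for a, b in zip(ops[:-1], ops[1:]):
            assert a.target == b.domain
        self.ops = ops
        self.domain = ops[0].domain    # a single space, unlike ComposedOperator
        self.target = ops[-1].target

    def __call__(self, x):
        for op in self.ops:
            x = op(x)
        return x

class Scale:
    """Toy endomorphic operator standing in for FFT/scaling/inverse FFT."""
    def __init__(self, space, factor):
        self.domain = self.target = space
        self.factor = factor
    def __call__(self, x):
        return self.factor * x

chain = ChainOperator(Scale("rg", 2.0), Scale("rg", 3.0))
print(chain(1.0))   # 6.0
```

The point of the sketch is the `domain` property: it stays a single space, whereas the current ComposedOperator reports the tuple of all constituent domains.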
Scenario 2:
You have a field living on a product of spaces, and you want to run several
operators on that field, every one operating on an individual space of the field.
This seems to be the anticipated scenario for ComposedOperator at the moment.
Is scenario 1 also needed in the real world? I have a feeling that it is; if
true, we should provide another "composed operator" for this use case and make
sure (by means of good naming, documentation and tests in the operator
constructors) that they are not confused.
(I'm mentioning this because until yesterday I was convinced that ComposedOperator also worked for scenario 1...)

Issue 151: Potential name confusion for Field.dot()
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/151
Martin Reinecke, 2018-04-02T09:42:38Z

I just noticed the hard way that the behaviour of Field.dot() and numpy.dot() is subtly different ...
While Field.dot() takes the complex conjugate of its second argument, numpy.dot() doesn't; the correct numpy function to call would be numpy.vdot().
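The difference is easy to demonstrate in plain numpy, independent of NIFTy:

```python
import numpy as np

a = np.array([1j, 2.0])

print(np.dot(a, a))    # (3+0j): no complex conjugation anywhere
print(np.vdot(a, a))   # (5+0j): conjugation included, i.e. |a|^2 of a with itself
```

For real-valued fields the two agree, which is why the discrepancy is easy to miss until complex fields enter the picture.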
The question is whether we should rename Field.dot() to Field.vdot() for the sake of consistency with numpy. Otherwise we risk confusing people with a numpy background.

Issue 150: The dot product of a field with itself
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/150
Sebastian Hutschenreuter, 2017-06-12T23:31:34Z

The dot product only considers the volume factors of the first field in the case bare=False. This leads to the (in my mind) odd result noted below.
```
In [2]: np.random.seed(123)
In [3]: rgf = Field(domain=RGSpace(4), val=np.random.randn(4))
In [4]: rgf.dot(rgf)
Out[4]: 1.1305730855452751
```
This is equal to
```
In [5]: (rgf.weight(power=1) * rgf).sum()
Out[5]: 1.1305730855452751
```
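The value can be reproduced with plain numpy; the factor 1/4 here is an assumption about the default pixel volume of RGSpace(4), inferred from the numbers rather than from the NIFTy source:

```python
import numpy as np

np.random.seed(123)
x = np.random.randn(4)
w = 1. / 4   # assumed pixel volume of RGSpace(4) with default distances

# one volume factor applied, matching the rgf.dot(rgf) result above
print(np.sum(w * x * x))
```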
Wouldn't the two following cases be more appropriate, depending on the context?
```
In [6]: (rgf.weight(power=1) * rgf.weight(power=1)).sum()
Out[6]: 0.28264327138631878
In [7]: (rgf * rgf).sum()
Out[7]: 4.5222923421811005
```

Issue 149: Behaviour of the Field class on empty domains
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/149
Sebastian Hutschenreuter, 2017-06-12T23:24:23Z

Field seems to do odd things when the domain is set empty under variation of the dtype keyword.
I know that the example is rather artificial; nonetheless I thought I should mention it.
```
In [2]: ff = Field(domain=(), dtype=np.float)
In [3]: ff.val.get_full_data()
[MainThread][WARNING ] _distributor_factory Distribution strategy was set to 'not' because of global_shape == ()
Out[3]: array(0.0)
In [4]: ff = Field(domain=(), dtype=np.complex)
In [5]: ff.val.get_full_data()
[MainThread][WARNING ] _distributor_factory Distribution strategy was set to 'not' because of global_shape == ()
Out[5]: array((8.772981689857213+0j))
```

Issue 148: Which data types do we allow for Field.dtype?
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/148
Martin Reinecke, 2017-06-09T10:18:39Z

It seems that defining a Field with an integer dtype can produce quite unexpected results:
```
In [1]: from nifty import *
In [2]: import numpy as np
In [3]: s=RGSpace((4,))
In [4]: f=Field(s,val=np.arange(4))
In [5]: f.dot(f,bare=False)
Out[5]: 0
```
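One plausible mechanism (a guess, not traced through the NIFTy source): the volume weighting produces fractional values that are cast back to the field's integer dtype, truncating everything to zero. In plain numpy:

```python
import numpy as np

f = np.arange(4)                      # integer dtype, values 0..3
w = 1. / 4                            # pixel volume

weighted = (w * f).astype(f.dtype)    # [0., 0.25, 0.5, 0.75] truncated back to ints
print(np.sum(weighted * f))           # 0
```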
I suggest that we limit the allowed data types for Nifty fields to floating point and complex floating point. This could be enforced in Field._infer_dtype().

Issue 147: Probing harmonic domain, ```.generate_probe```
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/147
Pumpe, Daniel (dpumpe), 2017-06-08T14:33:07Z

Hi,
for a classical critical filter run inferring a power spectrum, one has to probe the ```PropagatorOperator``` in its harmonic domain. Consequently this requires harmonic probes. These probes have to fulfil a few conditions, i.e. random_type='pm1' or 'normal', std=1, mean=0, real_signal=True. For Gauss-distributed random numbers this is easy to get; however, for 'pm1' I currently do not see a straightforward way.
For the 1-d case getting 'pm1' random numbers in the ```Prober``` could look like this,
```
def generate_probe(self):
    """ a random-probe generator """
    if self.domain[0].harmonic:
        f = Field(self.domain, val=0., dtype=np.complex)
        for ii in xrange(int(self.domain[0].shape[0]/2. - 1.)):
            real = np.random.randint(-1, 2)
            img = np.random.randint(-1, 2)
            f.val[ii+1] = np.complex(real, img)
            f.val[-(ii+1)] = f.val[ii+1].conjugate()
        mono = np.random.randint(-1, 2)
        f.val[0] = np.complex(mono, 0.)
    else:
        f = Field.from_random(random_type=self.random_type,
                              domain=self.domain,
                              distribution_strategy=self.distribution_strategy)
    uid = np.random.randint(1e18)
    return (uid, f)
```

Issue 146: Reuse curvature and gradient for gradient and energy in Energy classes
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/146
Theo Steininger, 2017-07-29T11:34:37Z

Especially in `nifty/library/energy_library/critical_power_energy.py`
the gradient information is not reused for the computation of the energy itself.
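The pattern being asked for, in schematic form (hypothetical names, not the actual CriticalPowerEnergy code): compute the shared intermediate once, cache it, and derive both the energy value and the gradient from it:

```python
import numpy as np

class QuadraticEnergy:
    """Toy energy E(x) = 0.5 x^T A x - b^T x with gradient A x - b.
    The product A x is needed by both, so it is computed once and cached."""
    def __init__(self, x, A, b):
        self.x, self.A, self.b = x, A, b
        self._Ax = None

    @property
    def Ax(self):
        if self._Ax is None:          # poor man's memo decorator
            self._Ax = self.A @ self.x
        return self._Ax

    @property
    def value(self):
        return 0.5 * self.x @ self.Ax - self.b @ self.x

    @property
    def gradient(self):
        return self.Ax - self.b

e = QuadraticEnergy(np.ones(2), 2.0 * np.eye(2), np.zeros(2))
print(e.value)      # 2.0
print(e.gradient)   # [2. 2.]
```

Accessing `value` and `gradient` on the same instance performs the expensive matrix-vector product only once.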
Additionally, use memoization/caching. For this the memo decorator from nifty.energies can be used.
Assignee: Jakob Knollmueller

Issue 145: Add documentation to classes in library
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/145
Theo Steininger, 2018-06-27T15:53:10Z

The `nifty2go` branch contains a great set of operator and energy objects. Please add proper docstrings to the classes and a reference to a paper/book where the respective algorithm is described. If there is no such paper where the implemented formulae can be found 1:1, please add inline comments on what is happening.
Assignee: Jakob Knollmueller

Issue 144: Arrays/D2Os as DomainObject attributes
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/144
Martin Reinecke, 2017-07-06T10:17:14Z

In an experimental branch I made "binbounds" an attribute of PowerSpace, and for efficiency reasons decided to store it as a Numpy array instead of a tuple.
This causes problems with the equality comparison operators of DomainObject, because Numpy arrays need to be compared via
`np.all(arr1==arr2)`
instead of
`arr1==arr2`
The situation for D2Os may be similar.
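The problem in plain numpy: `==` on arrays is elementwise, so an `__eq__` implementation that compares such attributes directly yields an array where a single bool is expected:

```python
import numpy as np

a = np.array([1., 2.])
b = np.array([1., 2.])

print(a == b)                 # elementwise: [ True  True], not a single bool
print(bool(np.all(a == b)))   # True: the reduction an __eq__ method would need

# a plain `if a == b:` would even raise ValueError
# ("The truth value of an array with more than one element is ambiguous")
```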
Should I extend the comparisons in DomainObject accordingly, or don't we want attributes of these types in general?

Issue 143: Try to avoid the exponentiation operator where possible
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/143
Martin Reinecke, 2017-06-05T00:47:12Z

Loosely related to issue 142, I have started a branch trying to get rid of the exponentiation expressions (i.e. `**`) in Nifty where they are not really needed.
If you can avoid `**`, please do. It has a very strange precedence. While it is clear why the expressions mentioned in issue 142 need brackets (once one thinks about it long enough), there are *really* bizarre cases, like the following from https://gitlab.mpcdf.mpg.de/ift/NIFTy/blob/master/nifty/minimization/line_searching/line_search_strong_wolfe.py#L347
`d1[0, 1] = -db ** 2`
In Python, this is equivalent to "-(db*db)", although the spaces and general experience suggest otherwise.
In Fortran, the value of this expression would be "db*db".
[Edit: sorry, this is incorrect; I blindly trusted information on the internet :(]
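A quick check of the precedence claim:

```python
db = 3.0

print(-db ** 2)     # -9.0: parsed as -(db ** 2), unary minus binds more loosely than **
print((-db) ** 2)   # 9.0: what the spacing in the original line suggests
```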
So please, whenever you have to use `**`, be _very_ explicit with brackets!

Issue 142: ProductSpaces drawing random fields and checking their spectra
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/142
Pumpe, Daniel (dpumpe), 2017-06-21T07:57:40Z

Hi, currently I am testing ProductSpace reconstructions (PLN over RGSpace (x) RGSpace). I am facing a problem concerning the spectra. For a simple test scenario I would like to check the power of a field drawn from two spectra, as in the following:
```
from nifty import *
np.random.seed(42)
#########################################################################
# setting space one
#########################################################################
npix_0 = np.array([200]) # number of pixels
total_volume_0 = 1. # total length
# setting signal parameters
lambda_s_0 = .5 # signal correlation length
sigma_s_0 = 1. # signal variance
#setting length smoothing kernel
kernel_0 = 0.
# calculating parameters
k0_0 = 4./(2*np.pi*lambda_s_0)
a_s_0 = sigma_s_0**2.*lambda_s_0*total_volume_0
# creation of spaces
x_0 = RGSpace(npix_0, distances=total_volume_0/npix_0,
zerocenter=False)
k_0 = RGRGTransformation.get_codomain(x_0)
p_0 = PowerSpace(harmonic_partner=k_0, logarithmic=False)
# creating Power Operator with given spectrum
spec_0 = (lambda k: a_s_0/(1+(k/k0_0)**2)**2)
p_field_0 = Field(p_0, val=spec_0, dtype=np.float64)
# creating FFT-Operator and Response-Operator with Gaussian convolution
FFTOp_0 = FFTOperator(domain=x_0, target=k_0)
#########################################################################
# setting space two
#########################################################################
npix_1 = np.array([100]) # number of pixels
total_volume_1 = 1. # total length
# setting signal parameters
lambda_s_1 = .05 # signal correlation length
sigma_s_1 = 2. # signal variance
#setting length smoothing kernel
kernel_1 = 0.
# calculating parameters
k0_1 = 4./(2*np.pi*lambda_s_1)
a_s_1 = sigma_s_1**2.*lambda_s_1*total_volume_1
# creation of spaces
x_1 = RGSpace(npix_1, distances=total_volume_1/npix_1,
zerocenter=False)
k_1 = RGRGTransformation.get_codomain(x_1)
p_1 = PowerSpace(harmonic_partner=k_1, logarithmic=False)
# creating Power Operator with given spectrum
spec_1 = (lambda k: a_s_0/(1+(k/k0_1)**2)**2)
p_field_1 = Field(p_1, val=spec_1, dtype=np.float64)
# creating FFT-Operator and Response-Operator with Gaussian convolution
FFTOp_1 = FFTOperator(domain=x_1, target=k_1)
#########################################################################
# drawing a random field with above defined correlation structure
#########################################################################
fp_0 = Field(p_0, val=(lambda k: spec_0(k)**1/2.))
fp_1 = Field(p_1, val=(lambda k: spec_1(k)**1/2.))
outer = np.outer(fp_0.val.data, fp_1.val.data)
fp = Field((p_0, p_1), val=outer)
sk = fp.power_synthesize(spaces=(0,1), real_signal=True)
```
The question is, how do I recover fp_0 and fp_1 from sk (i.e. simply the power of the drawn field).
In my mind this should be:
```
def test_product_space_power(x):
    samples = 100
    ps_0 = Field(p_0, val=0.)
    ps_1 = Field(p_1, val=1.)
    for ii in xrange(samples):
        sk = x.power_synthesize(spaces=(0, 1), real_signal=True)
        p0, p1 = analyze_sk(sk)
        ps_1 += p1
        ps_0 += p0
    p_0_test_result = ps_0/samples
    p_1_test_result = ps_1/samples
    return p_0_test_result, p_1_test_result

def analyze_sk(x):
    sp = x.power_analyze(spaces=0, decompose_power=False)\
          .power_analyze(spaces=1, decompose_power=False)
    p0 = sp.sum(spaces=1)/fp_1.sum()
    p1 = sp.sum(spaces=0)/fp_0.sum()
    return p0, p1

def analyze_fp(x):
    p0 = x.sum(spaces=1)/fp_1.sum()
    p1 = x.sum(spaces=0)/fp_0.sum()
    return p0, p1
```
However, this seems not to be correct. Analysing the power of sk with ```test_product_space_power(fp)``` in the direction of space 0 seems to have a constant offset, while the power in the direction of space 1 seems to have the wrong shape. ```analyze_fp(fp)``` only serves as a double check to see whether the covariance from which sk is actually drawn is the appropriate one for fp_0 and fp_1.
Am I using ```.power_analyze``` wrong, or are we facing an issue when we draw the random fields?
Thanks a lot for your help!

Issue 141: Many tests are apparently skipped
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/141
Martin Reinecke, 2017-05-31T07:24:21Z

The @expand decorator use in our tests doesn't seem to work as we expect...
For example, the `test_pipundexInversion` method in test_power_space_test.py should be instantiated for the 10 different `RGSpace`s and the 2 `LMSpace`s in `HARMONIC_SPACES`. In fact, there are only test names like
`test_pipundexInversion_RGSpace_not_None_3_False`
and
`test_pipundexInversion_LMSpace_fftw_None_3_True`
It seems that the constructor arguments to the spaces in HARMONIC_SPACES are not reflected in the names of the tests, and so the same test name is created over and over, and only run once in the end.
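The suspected mechanism can be illustrated without nose: generated test methods are attached to a class by name, so identical names silently overwrite each other and only one test survives. This is a hypothetical sketch of the naming logic, not the actual @expand code:

```python
# build names the way a parameterizing decorator might, from the type name only
spaces = ["RGSpace(4)", "RGSpace(8)", "LMSpace(5)"]

def make_name(space_repr):
    type_name = space_repr.split("(")[0]   # constructor args dropped -> collisions
    return "test_pipundexInversion_" + type_name

generated = {}
for s in spaces:
    generated[make_name(s)] = s   # same key overwrites earlier entries

print(sorted(generated))   # only two distinct test names survive out of three cases
```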
I have no idea how to fix this.

Issue 140: Dependencies in README file
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/140
Philipp Arras, 2017-06-05T11:29:49Z

During install I noticed that the PyPI packages `mpi4py`, `nose_parameterized` and `plotly`, which I needed to get NIFTy to run, are not mentioned in the README.

Issue 139: Broken demo?
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/139
Philipp Arras, 2017-07-29T11:50:49Z

When I try to run *demos/wiener_filter_harmonic.py* the program does not terminate after a sensible amount of time.
I get the following warning:
>>>
[MainThread][WARNING ] Field The distribution_stragey of pindex does not fit the slice_local distribution strategy of the synthesized field.
/home/philipp/.local/lib/python2.7/site-packages/d2o-1.1.0-py2.7.egg/d2o/distributor_factory.py:339: ComplexWarning:
>>>
I am looking forward to joining your group and working on NIFTy :)

Issue 138: Support pyfftw even if MPI is not present
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/138
Martin Reinecke, 2017-06-05T02:15:59Z

This will allow using multi-threaded FFTs, which can be a big help in large calculations.

Issue 137: Limit critical plotting functions to MPI.rank==0.
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/137
Theo Steininger, 2017-07-22T23:30:09Z

pyHealpix issue 1: README: Add flags to configure
https://gitlab.mpcdf.mpg.de/ift/pyHealpix/-/issues/1
Theo Steininger, 2017-05-24T10:11:22Z

One could add the flags `--enable-openmp --enable-native-optimizations` to the install instructions.
Assignee: Martin Reinecke

Issue 136: Multiprocessing to process multiple probes for diagonal probing in parallel
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/136
Pumpe, Daniel (dpumpe), 2017-08-10T07:12:09Z

Hi,
I've tried to upgrade the Prober class, making it capable of processing multiple probes at the same time on different cpus. Due to various mysterious pickling errors I failed to do so.
Does anybody have deeper experience in parallelising a simple for-loop? I think this would speed up our codes by multiple factors, as most of them are run on prelude.
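For reference, the standard way to parallelise an independent for-loop is multiprocessing.Pool. The pickling errors mentioned above typically come from passing bound methods or closures to the pool; the worker must be a module-level function. A generic sketch, not integrated with the Prober:

```python
import multiprocessing

def process_probe(seed):
    # stand-in for applying the operator to one probe; must be picklable,
    # i.e. a module-level function, not a bound method or closure
    return seed * seed

if __name__ == '__main__':
    pool = multiprocessing.Pool(processes=2)
    results = pool.map(process_probe, range(8))   # probes processed in parallel
    pool.close()
    pool.join()
    print(results)   # [0, 1, 4, 9, 16, 25, 36, 49]
```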