https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/161
Rework methods taking an "axes" keyword (Martin Reinecke, 2017-08-15)

At the moment, there are a lot of routines that take an "axes" keyword, and in many of them it could be avoided altogether or reduced to a single integer.

Example 1: the "weight" method of DomainObject and derived classes

Here, `axes` is mostly used to determine which particular axes in a `Field` belong to the space. In principle, all of this could be avoided if this method simply returned an array with one weight per pixel (or a scalar if all weights are equal) and left all the juggling with array indices to `Field.weight()`.
As things are, there are various alternative implementations of the re-shaping task (look for example at the `weight` methods of `PowerSpace` and `GLSpace`).
My suggestion is to simplify `DomainObject`'s `weight` method to:
```
def volfactor(self):
    """ Returns an array containing the object's volume factors.

    The array must be broadcastable to the space's dimensions
    (useful if the volume factors are uniform in a direction, e.g. for GLSpace).
    Alternatively, if all volume factors are equal, returns a scalar.
    """
```
... and leave all the operations involving powers, reshaping etc. to the calling `Field.weight` method.
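The proposed division of labour could look like this minimal numpy sketch (the function name `apply_weight` and its signature are hypothetical, not NIFTy API): the space supplies only a scalar or broadcastable array of volume factors, and the field-side code aligns it with the correct axes of the full array.

```python
import numpy as np

def apply_weight(field_val, volfactor, axes):
    # `volfactor` is what the proposed space method would return: a scalar or
    # an array broadcastable to the space's shape. The caller expands it so
    # that its dimensions line up with `axes` of the full field array.
    w = np.broadcast_to(volfactor, [field_val.shape[a] for a in axes])
    shape = [1] * field_val.ndim
    for ax, s in zip(axes, w.shape):
        shape[ax] = s
    return field_val * w.reshape(shape)

f = np.ones((2, 3, 4))
# per-pixel weights living on axis 1 of the field
weighted = apply_weight(f, np.array([1., 2., 3.]), axes=(1,))
assert np.allclose(weighted[:, 1, :], 2.)
# a scalar volume factor (all weights equal) works unchanged
assert np.allclose(apply_weight(f, 0.5, axes=(2,)), 0.5)
```

With this split, `PowerSpace` and `GLSpace` would no longer need their own reshaping code.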
Example 2: `Space.hermitianize_decomposition()`:
Here the `axes` variable always contains a tuple with as many entries as the space has dimensions, and the entries take the form `i`, `i+1`, `i+2`, ... I suggest passing a scalar integer named `first_axis` instead, which should be less error-prone.
If there is agreement to go in this direction, there are more cases, but we can discuss them later :)

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/160
VL_BFGS minimizer doesn't like HPSpace and LMSpace (Matevz Sraml (sraml), 2017-07-09)

The VL_BFGS minimizer has problems with the minimization over HPSpace and LMSpace. If we take a quadratic potential
```
np.random.seed(0)
N_dim = 500
#x_space = HPSpace(N_dim)
x_space = LMSpace(N_dim)
x = Field(x_space, val=np.random.rand(N_dim))
N = DiagonalOperator(x_space, diagonal=1.)

class QuadraticPot(Energy):
    def __init__(self, position, N):
        super(QuadraticPot, self).__init__(position)
        self.N = N

    def at(self, position):
        return self.__class__(position, N=self.N)

    @property
    def value(self):
        H = 0.5 * self.position.dot(self.N.inverse_times(self.position))
        return H.real

    @property
    def gradient(self):
        g = self.N.inverse_times(self.position)
        return_g = g.copy_empty(dtype=np.float)
        return_g.val = g.val.real
        return return_g

    @property
    def curvature(self):
        return self.N

minimizer = VL_BFGS(iteration_limit=1000, convergence_tolerance=1E-4,
                    convergence_level=3)
energy = QuadraticPot(position=x, N=N)
print energy.value
(energy, convergence) = minimizer(energy)
print energy.value
```
it doesn't converge to 0. In the example above, the energy of the system is only reduced to about half of the starting energy. I think I'm lacking a piece of knowledge to understand this problem.

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/159
Bug in Field._hermitian_decomposition? (Martin Reinecke, 2017-07-08)

The beginning of this method looks like this:
```
@staticmethod
def _hermitian_decomposition(domain, val, spaces, domain_axes,
                             preserve_gaussian_variance=False):
    # hermitianize for the first space
    (h, a) = domain[spaces[0]].hermitian_decomposition(
        val,
        domain_axes[spaces[0]],
        preserve_gaussian_variance=preserve_gaussian_variance)
    # hermitianize all remaining spaces using the iterative formula
    for space in xrange(1, len(spaces)):
        (hh, ha) = domain[space].hermitian_decomposition(
            h,
            domain_axes[space],
            preserve_gaussian_variance=False)
        (ah, aa) = domain[space].hermitian_decomposition(
            a,
            domain_axes[space],
            preserve_gaussian_variance=False)
```
Why is `preserve_gaussian_variance` hardwired to `False` for all spaces other than the first?

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/158
Change or rename hermitian_decomposition() (Martin Reinecke, 2017-07-11)

I'm opening this as a separate issue, because it is design-related, not code-related.
In my opinion, the name "hermitian_decomposition" is misleading for the operation that the Space and Field methods currently perform. For any data array there is exactly one hermitian decomposition, so the result cannot depend on the value of additional flags like "preserve_gaussian_variance". I suggest moving the manipulations needed for variance preservation into a separate routine.
Alternatively the method name could be changed, but I can't think of anything convincing.

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/157
RGSpace.hermitian_decomposition() is incorrect for odd grid lengths (Martin Reinecke, 2017-07-08)

```
from nifty import *
import numpy as np

def test_hermitianity(length):
    arr = np.random.random(length) + np.random.random(length)*1j
    s = RGSpace(length)
    x1, x2 = s.hermitian_decomposition(arr)
    print x1

test_hermitianity(3)
```
The result should be three numbers; the first should have only a real part, and the second and third should be complex conjugates of each other.
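For reference, this is what I would expect a correct decomposition to do for a 1D array without zero-centering (a plain-numpy sketch of the intended semantics, not NIFTy's implementation; the function name is mine):

```python
import numpy as np

def herm_decomp(arr):
    # arr[-k] (indices taken mod len(arr), no zerocenter), conjugated
    flipped = np.conj(np.roll(arr[::-1], 1))
    h = 0.5 * (arr + flipped)   # hermitian part: h[k] == conj(h[-k])
    return h, arr - h           # remainder is anti-hermitian

np.random.seed(0)
arr = np.random.random(3) + np.random.random(3)*1j
x1, x2 = herm_decomp(arr)
assert np.isclose(x1[0].imag, 0.)          # first entry purely real
assert np.isclose(x1[1], np.conj(x1[2]))   # second and third conjugate pair
```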
This looks like the code implicitly (and incorrectly) assumes `zerocenter=True`.

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/156
fixed points in harmonic RGSpace (Martin Reinecke, 2017-07-08)

When looking at _hermitianize_correct_variance in rg_space.py, I wondered whether the determination of fixed points is correct. Assuming a 1D case, the code determines two fixed points: one at 0 and one at N/2. But as far as I understand, the one at N/2 is only a fixed point if N is an even number.
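The claim can be checked with a two-liner, independent of the rg_space code: an index k is a fixed point of the reflection k -> -k (mod N) exactly when 2k is divisible by N.

```python
def fixed_points(N):
    # indices invariant under k -> -k (mod N)
    return [k for k in range(N) if (-k) % N == k]

assert fixed_points(4) == [0, 2]   # even N: 0 and N/2
assert fixed_points(5) == [0]      # odd N: only 0
```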
Please let me know if I'm confusing different things here!

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/155
New storage format for harmonic RGSpaces? (Martin Reinecke, 2017-08-15)

I'm wondering whether we can apply the hermitian/anti-hermitian separation trick that we employ for spherical harmonic coefficients in the LMSpace to harmonic RGSpaces as well.
The symmetries are nearly the same: on the sphere we have
`a_l,m = (-1)**m * conj(a_l,-m)`
in a (multidimensional) RGSpace it is
`f_k = conj(f_-k)`
So in principle we could store the Fourier coefficients of a real-valued field in another real-valued field with exactly the same dimensions.
A 1D example:
Instead of storing the complex values
`[ f0 f1 f2 f(-2) f(-1) ]`
we would simply store
`[ f0.re x*f1.re x*f1.im x*f2.re x*f2.im ]`, where x=sqrt(2.)
The coefficients at negative frequencies can be recovered trivially, and f0.im is 0 by symmetry.
In analogy to the spherical case, sqrt(2.) is needed to conserve the dot products.
For arrays of even length:
`[ f0 f1 f(-2) f(-1) ]`
becomes
`[ f0.re x*f1.re x*f1.im f2.re ]`
(f2 is its own conjugate partner here, so f2.im is zero and no factor x is needed to conserve the dot product)
In multi-D, this treatment has to be applied to exactly one dimension; in analogy with FFTW, I suggest to use the last one.
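A quick numpy check that the proposed packing conserves dot products for both even and odd lengths (the function name `pack_real` is mine, not proposed API):

```python
import numpy as np

def pack_real(x):
    """Pack the FFT of a real 1D array into a real array of the same length."""
    n = len(x)
    F = np.fft.fft(x)
    y = np.empty(n)
    y[0] = F[0].real                 # f0.im is zero by symmetry
    s = np.sqrt(2.)                  # doubly-counted coefficients get sqrt(2)
    for k in range(1, (n - 1) // 2 + 1):
        y[2*k - 1] = s * F[k].real
        y[2*k] = s * F[k].imag
    if n % 2 == 0:
        y[n - 1] = F[n // 2].real    # Nyquist coefficient: real, counted once
    return y

np.random.seed(1)
for n in (4, 5):
    x = np.random.randn(n)
    F = np.fft.fft(x)
    y = pack_real(x)
    # sum |F_k|^2 equals sum y_k^2
    assert np.allclose(np.vdot(F, F).real, np.dot(y, y))
```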
Adopting such a scheme would have several significant advantages:
- memory requirements go down
- FFTs of real data will be faster
- the input data type of an FFToperator can always be the same as the output data type
- the transform of a real field will by construction always be exactly hermitian, and operators cannot destroy the hermitianity by accident
- minimizers won't be able to explore degrees of freedom that do not really exist in a given problem
Of course there will be drawbacks as well:
- whenever Fourier coefficients in the traditional format are required, conversions need to be run, which are quick but not absolutely free. (I don't think that this will be needed often.)
- the get_distance_array method of RGSpace will have to be adjusted.
- probably more stuff I'm missing at the moment.
I know that this is a crazy-looking idea, but especially in the light of the numerical problems we are currently encountering with the reconstruction of radio data, I think it should be investigated.

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/154
RGSpace smoothing width (Theo Steininger, 2017-06-23)

I've got the impression that the smoothing width for RGSpace is currently not correct in master.
Please try this code:
```
from nifty import *
x = RGSpace((256,), distances=1.)
f = Field(x, val=0)
f.val[100] = 1
sm = SmoothingOperator(x, sigma=50.)
plo = plotting.RG1DPlotter(title='test.html')
plo(sm(f))
g = sm(f)
g.sum()
```
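The check described above can also be done with plain numpy, without SmoothingOperator (this is a hypothetical stand-in for the NIFTy code, not a trace through it): smoothing a delta peak with a correctly normalized Gaussian of sigma=50 must yield a bell whose measured standard deviation is ~50 pixels.

```python
import numpy as np

n, sigma = 2048, 50.
x = np.arange(n)
delta = np.zeros(n)
delta[n // 2] = 1.

# normalized Gaussian kernel, then FFT convolution with the delta peak
kernel = np.exp(-0.5 * ((x - n // 2) / sigma) ** 2)
kernel /= kernel.sum()
smoothed = np.real(np.fft.ifft(
    np.fft.fft(delta) * np.fft.fft(np.fft.ifftshift(kernel))))

# second moment about the peak position recovers the smoothing width
measured_sigma = np.sqrt((smoothed * (x - n // 2) ** 2).sum())
assert abs(measured_sigma - sigma) < 1.
```

A kernel that is off by a factor of two in width would fail this check immediately.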
The Gaussian bell is only half as broad as it should be. A fix is given in dc16a255d1b6d63db837278dc35d1cc8409b70b6.
@dpumpe Could you please check if this makes sense?
@mtr Could you please check if the formula for smoothing on the sphere is correct? https://gitlab.mpcdf.mpg.de/ift/NIFTy/blob/master/nifty/spaces/lm_space/lm_space.py#L169

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/153
log.log file too large (Christoph Lienhard, 2017-09-15)

When executing a program using NIFTy, a log.log file is created. This file tends to get very large.
Could you provide a tutorial explaining how to keep the size of the log.log file to a minimum, or how to prevent NIFTy from even creating one?

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/152
Clarification request for ComposedOperator (Martin Reinecke, 2017-08-15)

I think that there are two different ways to interpret the meaning of "ComposedOperator", and I'm not sure if both of them are supported by Nifty at the moment:
Scenario 1:
Say you have an input field F living on an RGSpace. If you want to apply an FFT, then some scaling, and then an inverse FFT to this field, is it possible to combine these three operators into a single ComposedOperator? It doesn't seem that this works at the moment, because the "domain" property of such a ComposedOperator returns a triple of spaces, while the input field only lives on a single space.
Scenario 2:
You have a field living on a product of spaces, and you want to run several operators on that field, each one operating on an individual space of the field. This seems to be the anticipated scenario for ComposedOperator at the moment.
Is scenario 1 also needed in the real world? I have a feeling that it is; if true, we should provide another "composed operator" for this use case and make sure (by means of good naming, documentation and tests in the operator constructors) that the two are not confused.
(I'm mentioning this because until yesterday I was convinced that ComposedOperator also worked for scenario 1...)

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/151
Potential name confusion for Field.dot() (Martin Reinecke, 2018-04-02)

I just noticed the hard way that the behaviour of Field.dot() and numpy.dot() is subtly different ...
While Field.dot() takes the complex conjugate of its second argument, numpy.dot() doesn't; the correct numpy function to call would be numpy.vdot().
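A minimal numpy illustration of the difference:

```python
import numpy as np

a = np.array([1 + 2j, 3 - 1j])

# np.dot performs no conjugation ...
assert np.dot(a, a) == (5 - 2j)

# ... while np.vdot conjugates one argument (numpy conjugates the FIRST
# argument, Field.dot reportedly the second; for x against itself the
# result is the same: the real squared norm)
assert np.vdot(a, a) == (15 + 0j)
```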
The question is whether we should rename Field.dot() to Field.vdot() for the sake of consistency with numpy. Otherwise we risk confusing people with a numpy background.

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/150
The dot product of a field with itself (Sebastian Hutschenreuter, 2017-06-12)

The dot product only considers the volume factors of the first field in case bare=False. This leads to the (in my mind) odd result noted below.
```
In [2]: np.random.seed(123)

In [3]: rgf = Field(domain=RGSpace(4), val=np.random.randn(4))

In [4]: rgf.dot(rgf)
Out[4]: 1.1305730855452751
```
This is equal to
```
In [5]: (rgf.weight(power=1) * rgf).sum()
Out[5]: 1.1305730855452751
```
Wouldn't the two following cases be more appropriate, depending on the context?
```
In [6]: (rgf.weight(power=1) * rgf.weight(power=1)).sum()
Out[6]: 0.28264327138631878

In [7]: (rgf*rgf).sum()
Out[7]: 4.5222923421811005
```

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/149
Behaviour of the Field class on empty domains (Sebastian Hutschenreuter, 2017-06-12)

Field seems to do odd things when the domain is set empty, under variation of the dtype keyword.
I know that the example is rather artificial; nonetheless I thought I should mention it.
```
In [2]: ff = Field(domain=(), dtype=np.float)

In [3]: ff.val.get_full_data()
[MainThread][WARNING ] _distributor_factory Distribution strategy was set to 'not' because of global_shape == ()
Out[3]: array(0.0)

In [4]: ff = Field(domain=(), dtype=np.complex)

In [5]: ff.val.get_full_data()
[MainThread][WARNING ] _distributor_factory Distribution strategy was set to 'not' because of global_shape == ()
Out[5]: array((8.772981689857213+0j))
```

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/148
Which data types do we allow for Field.dtype? (Martin Reinecke, 2017-06-09)

It seems that defining a Field with an integer dtype can produce quite unexpected results:
```
In [1]: from nifty import *
In [2]: import numpy as np
In [3]: s=RGSpace((4,))
In [4]: f=Field(s,val=np.arange(4))
In [5]: f.dot(f,bare=False)
Out[5]: 0
```
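One plausible mechanism (an assumption about the internals, not a trace through the NIFTy code): RGSpace((4,)) has a per-pixel volume factor of 0.25, and if that weight is applied in integer arithmetic it truncates to zero, killing the whole dot product.

```python
import numpy as np

val = np.arange(4)        # integer dtype, as in the report above
weight = 1. / 4           # hypothetical per-pixel volume of RGSpace((4,))

# float arithmetic gives the expected weighted dot product
assert (val * val * weight).sum() == 3.5

# the same weight truncated to an integer annihilates the result
assert (val * val).sum() * int(weight) == 0
```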
I suggest that we limit the allowed data types for Nifty fields to floating point and complex floating point. This could be enforced in Field._infer_dtype().

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/147
Probing harmonic domain, `.generate_probe` (Pumpe, Daniel (dpumpe), 2017-06-08)

Hi,
for a classical critical filter run, inferring a power spectrum, one has to probe the `PropagatorOperator` in its harmonic domain. Consequently this requires harmonic probes. These probes have to fulfil a few conditions, i.e. random_type='pm1' or 'normal', std=1, mean=0, real_signal=True. For Gauss-distributed random numbers this is easy to get; however, for 'pm1' I currently do not see a straightforward way.
For the 1-d case, getting 'pm1' random numbers in the `Prober` could look like this:
```
def generate_probe(self):
    """ a random-probe generator """
    if self.domain[0].harmonic:
        f = Field(self.domain, val=0., dtype=np.complex)
        for ii in xrange(int(self.domain[0].shape[0]/2. - 1.)):
            real = np.random.randint(-1, 2)
            img = np.random.randint(-1, 2)
            f.val[ii+1] = np.complex(real, img)
            f.val[-(ii+1)] = f.val[ii+1].conjugate()
        mono = np.random.randint(-1, 2)
        f.val[0] = np.complex(mono, 0.)
    else:
        f = Field.from_random(random_type=self.random_type,
                              domain=self.domain,
                              distribution_strategy=self.distribution_strategy)
    uid = np.random.randint(1e18)
    return (uid, f)
```

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/146
Reuse curvature and gradient for gradient and energy in Energy classes (Theo Steininger, 2017-07-29)

Especially in `nifty/library/energy_library/critical_power_energy.py`
the gradient information is not reused for the computation of the energy itself.
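A caching decorator along these lines could do the job (a hypothetical sketch standing in for the memo decorator from nifty.energies; the actual implementation there may differ):

```python
import functools

def memo(f):
    # cache the result of a no-argument method on the instance itself
    attr = '_memo_' + f.__name__
    @functools.wraps(f)
    def wrapper(self):
        if not hasattr(self, attr):
            setattr(self, attr, f(self))
        return getattr(self, attr)
    return wrapper

class Energy(object):
    calls = 0

    @property
    @memo
    def gradient(self):
        Energy.calls += 1   # count how often the expensive part runs
        return 42           # placeholder for the real gradient computation

e = Energy()
assert e.gradient == 42
assert e.gradient == 42
assert Energy.calls == 1    # computed only once, then reused
```

Since Energy objects are immutable with respect to their position, such per-instance caching is safe, and value/gradient/curvature can share cached intermediates.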
Additionally, use memoization/caching. For this, the memo decorator from nifty.energies can be used.

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/145
Add documentation to classes in library (Theo Steininger, 2018-06-27)

The `nifty2go` branch contains a great set of operator and energy objects. Please add proper docstrings to the classes and a reference to a paper/book where the respective algorithm is described. If there is no such paper where the implemented formulae can be found 1:1, please add inline comments on what is happening.

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/144
Arrays/D2Os as DomainObject attributes (Martin Reinecke, 2017-07-06)

In an experimental branch I made "binbounds" an attribute of PowerSpace, and for efficiency reasons decided to store it as a Numpy array instead of a tuple.
This causes problems with the equality comparison operators of DomainObject, because Numpy arrays need to be compared via
`np.all(arr1==arr2)`
instead of
`arr1==arr2`
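This is easy to demonstrate: with array-valued attributes, `==` returns an element-wise array, which cannot be used directly in a boolean context.

```python
import numpy as np

a = np.array([1, 2, 3])
b = np.array([1, 2, 3])

eq = (a == b)                   # element-wise result, not a single bool
assert isinstance(eq, np.ndarray)
assert bool(np.all(eq))         # what an equality operator would need

try:
    bool(eq)                    # "truth value of an array is ambiguous"
    raised = False
except ValueError:
    raised = True
assert raised
```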
The situation for D2Os may be similar.
Should I extend the comparisons in DomainObject accordingly, or don't we want attributes of these types in general?

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/143
Try to avoid the exponentiation operator where possible (Martin Reinecke, 2017-06-05)

Loosely related to issue 142, I have started a branch trying to get rid of the exponentiation expressions (i.e. `**`) in Nifty where they are not really needed.
If you can avoid `**`, please do. It has a very strange precedence. While it is clear why the expressions mentioned in issue 142 need brackets (once one thinks about it long enough), there are *really* bizarre cases, like the following from https://gitlab.mpcdf.mpg.de/ift/NIFTy/blob/master/nifty/minimization/line_searching/line_search_strong_wolfe.py#L347
`d1[0, 1] = -db ** 2`
In Python, this is equivalent to "-(db*db)", although the spaces and general experience suggest otherwise.
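The precedence rules can be verified directly:

```python
db = 3.0

# unary minus binds looser than ** ...
assert -db ** 2 == -9.0        # parsed as -(db ** 2)
assert (-db) ** 2 == 9.0

# ... and ** is right-associative on top of that
assert 2 ** 3 ** 2 == 512      # parsed as 2 ** (3 ** 2), not (2 ** 3) ** 2
```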
In Fortran, the value of this expression would be "db*db".
[Edit: sorry, this is incorrect; I blindly trusted information on the internet :(]
So please, whenever you have to use `**`, be _very_ explicit with brackets!

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/142
ProductSpaces drawing random fields and checking their spectra (Pumpe, Daniel (dpumpe), 2017-06-21)

Hi, currently I am testing ProductSpace reconstructions (PLN over RGSpace (x) RGSpace). I am facing a problem concerning the spectra. For a simple test scenario, I would like to check the power of a field drawn from two spectra, as follows:
```
from nifty import *
np.random.seed(42)
#########################################################################
# setting space one
#########################################################################
npix_0 = np.array([200]) # number of pixels
total_volume_0 = 1. # total length
# setting signal parameters
lambda_s_0 = .5 # signal correlation length
sigma_s_0 = 1. # signal variance
#setting length smoothing kernel
kernel_0 = 0.
# calculating parameters
k0_0 = 4./(2*np.pi*lambda_s_0)
a_s_0 = sigma_s_0**2.*lambda_s_0*total_volume_0
# creation of spaces
x_0 = RGSpace(npix_0, distances=total_volume_0/npix_0,
zerocenter=False)
k_0 = RGRGTransformation.get_codomain(x_0)
p_0 = PowerSpace(harmonic_partner=k_0, logarithmic=False)
# creating Power Operator with given spectrum
spec_0 = (lambda k: a_s_0/(1+(k/k0_0)**2)**2)
p_field_0 = Field(p_0, val=spec_0, dtype=np.float64)
# creating FFT-Operator and Response-Operator with Gaussian convolution
FFTOp_0 = FFTOperator(domain=x_0, target=k_0)
#########################################################################
# setting space two
#########################################################################
npix_1 = np.array([100]) # number of pixels
total_volume_1 = 1. # total length
# setting signal parameters
lambda_s_1 = .05 # signal correlation length
sigma_s_1 = 2. # signal variance
#setting length smoothing kernel
kernel_1 = 0.
# calculating parameters
k0_1 = 4./(2*np.pi*lambda_s_1)
a_s_1 = sigma_s_1**2.*lambda_s_1*total_volume_1
# creation of spaces
x_1 = RGSpace(npix_1, distances=total_volume_1/npix_1,
zerocenter=False)
k_1 = RGRGTransformation.get_codomain(x_1)
p_1 = PowerSpace(harmonic_partner=k_1, logarithmic=False)
# creating Power Operator with given spectrum
spec_1 = (lambda k: a_s_0/(1+(k/k0_1)**2)**2)
p_field_1 = Field(p_1, val=spec_1, dtype=np.float64)
# creating FFT-Operator and Response-Operator with Gaussian convolution
FFTOp_1 = FFTOperator(domain=x_1, target=k_1)
#########################################################################
# drawing a random field with above defined correlation structure
#########################################################################
fp_0 = Field(p_0, val=(lambda k: spec_0(k)**1/2.))
fp_1 = Field(p_1, val=(lambda k: spec_1(k)**1/2.))
outer = np.outer(fp_0.val.data, fp_1.val.data)
fp = Field((p_0, p_1), val=outer)
sk = fp.power_synthesize(spaces=(0,1), real_signal=True)
```
The question is, how do I recover fp_0 and fp_1 from sk (i.e. simply the power of the drawn field).
In my mind this should be:
```
def test_product_space_power(x):
    samples = 100
    ps_0 = Field(p_0, val=0.)
    ps_1 = Field(p_1, val=1.)
    for ii in xrange(samples):
        sk = x.power_synthesize(spaces=(0, 1), real_signal=True)
        p0, p1 = analyze_sk(sk)
        ps_1 += p1
        ps_0 += p0
    p_0_test_result = ps_0/samples
    p_1_test_result = ps_1/samples
    return p_0_test_result, p_1_test_result

def analyze_sk(x):
    sp = x.power_analyze(spaces=0, decompose_power=False)\
          .power_analyze(spaces=1, decompose_power=False)
    p0 = sp.sum(spaces=1)/fp_1.sum()
    p1 = sp.sum(spaces=0)/fp_0.sum()
    return p0, p1

def analyze_fp(x):
    p0 = x.sum(spaces=1)/fp_1.sum()
    p1 = x.sum(spaces=0)/fp_0.sum()
    return p0, p1
```
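For what it's worth, the normalization logic itself is sound for a separable power spectrum, as a toy numpy version shows (assuming, as in my setup, that the 2D power is the outer product of the two 1D spectra):

```python
import numpy as np

# two arbitrary 1D spectra and their separable 2D power
p0 = 1. / (1. + np.arange(5.))
p1 = np.exp(-np.arange(4.))
P = np.outer(p0, p1)

# analogue of sp.sum(spaces=1)/fp_1.sum() and sp.sum(spaces=0)/fp_0.sum()
r0 = P.sum(axis=1) / p1.sum()
r1 = P.sum(axis=0) / p0.sum()

assert np.allclose(r0, p0)
assert np.allclose(r1, p1)
```

So if the recovered spectra are off, the discrepancy must enter in power_synthesize/power_analyze (or in my setup code), not in the marginalization itself.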
However, it seems not to be correct. Analysing the power of sk with `test_product_space_power(fp)` in the direction of space 0 seems to show a constant offset, while the power in the direction of space 1 seems to have the wrong shape. `analyze_fp(fp)` only serves as a double check to see whether the covariance from which sk is actually drawn is the appropriate one for fp_0 and fp_1.
Am I using `.power_analyze` wrong, or are we facing an issue when we draw the random fields?
Thanks a lot for your help!