ift issues: https://gitlab.mpcdf.mpg.de/groups/ift/-/issues

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/60
Configuration for FFT during (de)briefing (Theo Steininger, 2016-10-25)

Make it optional whether the briefing should try to (FFT-)transform the input. This way the user could prevent an FFT from taking place somewhere it is not wanted.

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/279
Consistent ordering of domain and non-domain arguments for standard operators (Gordian Edenhofer, 2019-12-06)

Currently, the order in which arguments must be provided differs between operators: `ift.from_global_data` and `ift.full` both take the domain as the first argument, while `ift.ScalingOperator` takes the factor with which to scale a field first and the domain second. Moving to NIFTy6 might be the perfect opportunity to fix this, either by consistently requiring the domain to be, e.g., the first argument or by making `ift.ScalingOperator` more flexible by swapping arguments in a suitable manner during initialization.

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/252
Constant model jacobian buggy (Reimar H Leike, 2018-07-11)

The jacobian of the ConstantModel is defined as
```
self._jacobian = 0.
```
This already looks suspiciously simplified and, as one would expect, it easily causes trouble, e.g. as shown in this example:
```
import nifty5 as ift
space = ift.RGSpace(4)
a = ift.full(space, 1.)
hspace = space.get_default_codomain()
ht = ift.HarmonicTransformOperator(hspace, target = space)
b = ift.full(hspace, 1.)
multia = ift.MultiField.from_dict({'a':a})
var_a = ift.Variable(multia)['a']
const_b = ift.Constant(multia, b)
sumabvalue = var_a.value + ht(const_b).value
# Here it crashes due to domain mismatch, even though you can add the values just fine
sumab = var_a + ht(const_b)
```
There are at least two possible ways to fix this issue. One possibility is to make the ConstantModel use the position again, so that it can return zeros in the correct domain. Another possibility is to have an operator that maps anything to an empty MultiDomain (which can then be added to other fields).
I favor the latter; it is, however, incompatible with the way Martin wants to unify Field and MultiField.
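The second option (an operator whose output lives on an empty MultiDomain) can be sketched in plain Python, with dicts standing in for MultiFields; `EmptyMultiField` is an illustrative name, not actual NIFTy API:

```python
# Illustrative sketch only: dicts stand in for MultiFields, and
# EmptyMultiField is a hypothetical name, not actual NIFTy API.
class EmptyMultiField:
    """A field over an empty MultiDomain; the neutral element of +."""
    def __add__(self, other):
        # the union of an empty key set with other's keys is just other
        return dict(other)
    __radd__ = __add__

jac = EmptyMultiField()   # jacobian of a ConstantModel
print(jac + {'a': 1.0})   # addition works regardless of the other domain
```

Because the empty field is a neutral element of addition, no domain mismatch can occur, which is exactly the property the crashing example above lacks.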
Any thoughts @kjako @mtr @parras ?

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/335
CorrelatedFieldMaker amplitude normalization breaks fluctuation amplitude (Lukas Platz, 2021-08-24)

If one adds more than one amplitude operator to a `CorrelatedFieldMaker` and sets a very small/large offset standard deviation, the field fluctuation amplitude gets distorted.
This is because in `get_normalized_amplitudes()` every amplitude operator is divided by `self.azm`, and the results of this function are then combined via an outer product to create the joint amplitude operator.
In CFs with more than one amplitude operator, this results in the joint amplitude operator being divided by multiple factors of the zero-mode operator and thus in incorrect output scaling behavior.
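The scaling argument can be checked with a minimal NumPy sketch (plain arrays standing in for amplitude operators; `azm`, `a1`, `a2` are illustrative values, not NIFTy objects): dividing each factor by the zero mode before the outer product divides the joint amplitude by the zero mode once per factor instead of once overall.

```python
# Numeric sketch (plain NumPy, not NIFTy) of the normalization bug:
# dividing each of two amplitude spectra by the zero mode "azm" and then
# taking their outer product divides the joint amplitude by azm**2,
# whereas the intended joint amplitude is divided by azm only once.
import numpy as np

azm = 10.0                      # zero-mode amplitude (offset std)
a1 = np.array([3.0, 2.0, 1.0])  # amplitude spectrum, subdomain 1
a2 = np.array([4.0, 2.0])       # amplitude spectrum, subdomain 2

joint_buggy = np.outer(a1 / azm, a2 / azm)   # scales like 1/azm**2
joint_intended = np.outer(a1, a2) / azm      # scales like 1/azm

assert np.allclose(joint_intended / joint_buggy, azm)  # off by one factor of azm
```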
To demonstrate this effect, I created correlated field operators for identical 2d fields, but one with a single amplitude operator on `ift.RGSpace((N, N))` and one with two amplitude operators on `ift.RGSpace(N)`. From top to bottom, I varied the `offset_std` setting passed to `set_amplitude_total_offset()` and in each case plotted a histogram of the sampled fields' standard deviations.
![output](/uploads/6779601e231148c36a77a91aa8e7fd62/bug.png)
One can clearly see that the output fluctuation amplitude is independent of `offset_std` for the 'single' case, as it should be, but not for the 'dual' case.
A fix for this is proposed in merge requests !669 and !670.
To reproduce, run the following code:
```
import nifty7 as ift
import matplotlib.pyplot as plt

fluct_pars = {
    'fluctuations': (1.0, 0.1),
    'flexibility': None,
    'asperity': None,
    'loglogavgslope': (-2.0, 0.2),
    'prefix': 'flucts_',
    'dofdex': None
}

n_offsets = 5
n_samples = 1000
dom_single = ift.RGSpace((25, 25))
dom_dual = ift.RGSpace(25)

fig, axs = plt.subplots(nrows=n_offsets, ncols=2, figsize=(12, 4 * n_offsets))
for i in range(n_offsets):
    cfmaker_single = ift.CorrelatedFieldMaker(prefix=f'{i}_')
    cfmaker_dual = ift.CorrelatedFieldMaker(prefix=f'{i}_')
    cfmaker_single.add_fluctuations(dom_single, **fluct_pars)
    cfmaker_dual.add_fluctuations(dom_dual, **fluct_pars)
    cfmaker_dual.add_fluctuations(dom_dual, **fluct_pars)
    offset_std = 10 ** -i
    zm_pars = {'offset_mean': 0.,
               'offset_std': (offset_std, offset_std / 10.)}
    cfmaker_single.set_amplitude_total_offset(**zm_pars)
    cfmaker_dual.set_amplitude_total_offset(**zm_pars)
    cf_single = cfmaker_single.finalize(prior_info=0)
    cf_dual = cfmaker_dual.finalize(prior_info=0)
    samples_single = [cf_single(ift.from_random(cf_single.domain)).val.std()
                      for _ in range(n_samples)]
    samples_dual = [cf_dual(ift.from_random(cf_dual.domain)).val.std()
                    for _ in range(n_samples)]
    _ = axs[i, 0].hist(samples_single, bins=20, density=True)
    _ = axs[i, 1].hist(samples_dual, bins=20, density=True)
    axs[i, 0].set_title(f'offset_std = {offset_std:1.0e}, single')
    axs[i, 1].set_title(f'offset_std = {offset_std:1.0e}, dual')
    axs[i, 0].set_xlabel('sample std')
    axs[i, 1].set_xlabel('sample std')
plt.tight_layout()
plt.show()
```
Any comments?

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/351
Correlated field model linearization adjoint very slow if total_N != 0 (Jakob Roth, 2022-03-29)

As noticed by @parras, @pfrank and me, the adjoint of the linearization of the correlated field model is very slow if total_N != 0.
Here is a demo:
```
import nifty8 as ift
import numpy as np
sp1 = ift.RGSpace((4000, 4000))
cfmaker = ift.CorrelatedFieldMaker('')
cfmaker.add_fluctuations(sp1, (0.1, 1e-2), (2, .2), (.01, .5), (-4, 2.),
                         'amp1')
cfmaker.set_amplitude_total_offset(0., (1e-2, 1e-6))
cf0 = cfmaker.finalize(0)
n_tot = 1
cfmaker = ift.CorrelatedFieldMaker('', total_N=n_tot)
cfmaker.add_fluctuations(sp1, (0.1, 1e-2), (2, .2), (.01, .5), (-4, 2.),
                         'amp1', dofdex=np.arange(n_tot))
cfmaker.set_amplitude_total_offset(0., (1e-2, 1e-6), dofdex=np.arange(n_tot))
cf1 = cfmaker.finalize(0)
print("benchmark for total_N = 0")
ift.exec_time(cf0)
print("benchmark for total_N = 1")
ift.exec_time(cf1)
```
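For intuition, the operator in question can be sketched in NumPy, under the assumption that a distributor broadcasts one field to `total_N` copies (`distribute` and `distribute_adjoint` are illustrative names, not the NIFTy implementation). The adjoint of a broadcast is a sum over the copies, and an inefficient implementation of that reduction would show up exactly in the adjoint timing:

```python
# Sketch (plain NumPy, not the NIFTy implementation) of what a
# "_Distributor"-style operator does, assuming it broadcasts one field
# to total_N copies.
import numpy as np

def distribute(x, n):
    # forward: broadcast a field to n copies
    return np.broadcast_to(x, (n,) + x.shape)

def distribute_adjoint(y):
    # adjoint of a broadcast: sum over the copies
    return y.sum(axis=0)

x = np.arange(6.0).reshape(2, 3)
y = np.arange(24.0).reshape(4, 2, 3)
# adjoint consistency: <distribute(x), y> == <x, distribute_adjoint(y)>
lhs = (distribute(x, 4) * y).sum()
rhs = (x * distribute_adjoint(y)).sum()
assert np.allclose(lhs, rhs)
```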
I had a quick look at the issue. The problem seems to be the `_Distributor` operator in the correlated field model file: https://gitlab.mpcdf.mpg.de/ift/nifty/-/blob/NIFTy_8/src/library/correlated_fields.py#L214
For the case total_N != 0 this `_Distributor` is called multiple times. The call that is very slow in the adjoint direction is in https://gitlab.mpcdf.mpg.de/ift/nifty/-/blob/NIFTy_8/src/library/correlated_fields.py#L362 and the line below.
Note that this `_Distributor` operator is only used for total_N != 0; the simple case with total_N = 0 is therefore reasonably fast.

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/341
Correlated Field model, `total_N != 0`: Sample statistic is not correct (Philipp Arras, 2021-12-08)

If I compare prior samples generated by `SimpleCorrelatedField` and `CorrelatedFieldMaker` with `total_N > 0`, the statistics look different.
```
import nifty8 as ift
offset_mean = 0.
offset_std = None
fluctuations = (0.1, 0.05)
loglogslope = (-4, 1)
dom = ift.RGSpace(2625, 20)
op = ift.SimpleCorrelatedField(dom, offset_mean, offset_std, fluctuations, None, None, loglogslope)
ift.single_plot([op(ift.from_random(op.domain)) for _ in range(20)], name="cfm_single.png")
N = 1
cfm = ift.CorrelatedFieldMaker("", N)
cfm.add_fluctuations(dom, fluctuations, None, None, loglogslope)
cfm.set_amplitude_total_offset(offset_mean, offset_std)
op = cfm.finalize(0)
prior_samples = [op(ift.from_random(op.domain)) for _ in range(10)]
p = ift.Plot()
for ii in range(N):
    p.add([ift.DomainTupleFieldInserter(pp.domain, 0, (ii,)).adjoint(pp)
           for pp in prior_samples])
p.output(name="cfm_multi.png")
```
Prior samples for SimpleCorrelatedField:
![cfm_single](/uploads/3f9db21b292bccab96ec48c94e3296c6/cfm_single.png)
Prior samples for a non-trivial CorrelatedFieldMaker with `total_N = 1`:
![cfm_multi](/uploads/564ca8466f3e955ab7218bfe8ff8149f/cfm_multi.png)
ping: @jroth

https://gitlab.mpcdf.mpg.de/ift/D2O/-/issues/21
Create dedicated object for 'distribution_strategy' (Theo Steininger, 2017-07-06)

Only global-type distribution strategies are directly comparable by their name. In order to compare local-type distribution strategies as well, implement an object which represents the distribution strategy. For 'freeform' this includes the individual slice lengths.

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/82
create_power_operator (Daniel Pumpe, 2017-04-29)

Hi,
While testing my code I came across a potential problem concerning volume factors in the result of applying an operator created by `create_power_operator`.
A minimal example showing the differences between NIFTy_1 and NIFTy_3:
[tt.py](/uploads/ffe6ef7ca3fb4aae2b9a2ab3e01001c7/tt.py)
Since `res_1 != res_2`, either NIFTy_1 or NIFTy_3 calculates something wrong.
In my view NIFTy_1 does the calculation properly. An easy fix in NIFTy_3's `create_power_operator` could be to set the `bare` keyword of the `DiagonalOperator` in line 43 to `True`.
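For intuition, the distinction at stake can be illustrated with a plain-NumPy sketch, under the assumption that `bare` means "without volume weights" (this is my reading of the report, not the actual NIFTy code; all values below are illustrative):

```python
# Illustrative sketch (NumPy), assuming 'bare' means "without volume
# weights"; not the actual NIFTy implementation.
import numpy as np

V = 0.1                                   # cell volume (dist=0.1)
spec = 42.0 / (1.0 + np.arange(5.0)**2)   # sampled power spectrum

diag_bare = spec          # bare=True: plain spectrum values
diag_weighted = spec * V  # bare=False: volume-weighted values

x = np.ones(5)
# the two diagonals act differently on the same field, by a factor of V
assert not np.allclose(diag_bare * x, diag_weighted * x)
```

Which of the two conventions is applied therefore directly changes the operator's action, which would explain the NIFTy_1 vs. NIFTy_3 discrepancy.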
Cheers,
Daniel

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/70
create_power_operator (Daniel Pumpe, 2017-04-10)

Hi Theo, while checking volume factors in my code I stumbled over a potential bug in sugar.py's `create_power_operator`. In line 27 the `DiagonalOperator` should be constructed with `bare=True` (the default is `False`). Otherwise the application of such a "PowerOperator" does not match the results the same code would produce in NIFTy 1. Or am I missing something? Please correct me if so.
```
# Power spectrum differences

# NIFTy_1
x1 = rg_space(200, zerocenter=False, dist=0.1)
k1 = x1.get_codomain()
spec1 = (lambda k: 42 / (1 + k**2))
S1 = power_operator(k1, spec=spec1)
test1 = field(x1, val=3)
result1 = S1.inverse_times(test1)

# NIFTy_3
x3 = RGSpace(200, zerocenter=False, distances=0.1)
k3 = RGRGTransformation.get_codomain(x3)
spec3 = (lambda k: 42 / (1 + k**2))
S3 = create_power_operator(k3, spec3)
FFT3 = FFTOperator(domain=x3, target=k3,
                   domain_dtype=np.float64,
                   target_dtype=np.complex64)
test3 = Field(x3, val=3)
result3 = FFT3.inverse_times(S3.inverse_times(FFT3.times(test3)))
```

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/171
create_power_operator() returns operator with incorrect data type (Martin Reinecke, 2017-08-15)

If I understood the purpose of `create_power_operator()` correctly, it should produce a `DiagonalOperator` which imprints the supplied power spectrum onto a given field in harmonic space.
I would expect that this returned operator contains values of real type, but its data type is complex, even if I explicitly ask for real numbers:
```
from nifty import *
import numpy as np

s = RGSpace(10, harmonic=True)
def ps(x): return x
x = create_power_operator(s, ps, dtype=np.float64)
print x.diagonal().dtype
```
What am I missing?

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/83
Create Wiener Filter demo which utilizes full keepers functionality (Theo Steininger, 2017-07-29)

* Explicitly configure (MPI-)logging.
* Use keepers.versioning for restartable reconstruction.
* Use dynamic parametrization for setting `iteration_limit`, etc.

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/73
creation of distributed_data_object without dtype (Martin Reinecke, 2017-06-13)

In rg_space.py and power_space.py there are calls to the constructor of `distributed_data_object` which do not specify a `dtype` argument.
This is accepted by D2O, but an annoying message is printed to the console.
I would suggest producing a hard error whenever the user tries to create a D2O object without specifying a type. However, for the time being: is it correct to specify `dtype=np.float64` in the two cases I mentioned?
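The suggested hard-error behaviour could look like the following sketch (`make_d2o` is a hypothetical stand-in, not the actual D2O constructor):

```python
# Illustrative sketch: turn a missing dtype into a hard error instead of
# a console message. 'make_d2o' is a hypothetical helper, not D2O API.
import numpy as np

def make_d2o(shape, dtype=None):
    if dtype is None:
        raise TypeError("distributed_data_object requires an explicit dtype")
    return np.zeros(shape, dtype=dtype)

try:
    make_d2o((4,))                 # raises TypeError
except TypeError as exc:
    print(exc)

arr = make_d2o((4,), dtype=np.float64)  # fine
```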
A related question: inside the distance_array-related functions of RGSpace and LMSpace, the data type np.float128 is used. Is that deliberate or an oversight?

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/165
Critical power energy doesn't work for product spaces (Theo Steininger, 2017-08-02)

In principle it is no problem to port the existing code to product-space functionality. The biggest missing piece is that we would then need an n-dimensional SmoothnessOperator. Any volunteers?

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/170
CriticalPowerEnergy has incomplete at() method (Martin Reinecke, 2017-07-19)

The `at()` method in `CriticalPowerEnergy` does not pass the `logarithmic` property to the newly built object, which results in a (potentially) different T inside that object.
What's the best solution to fix this? Store `logarithmic` explicitly to have it available in the `at()` method?
@kjako, any preferences?

https://gitlab.mpcdf.mpg.de/ift/D2O/-/issues/15
d2o: Contraction functions rely on non-degeneracy of distribution strategy (Theo Steininger, 2017-07-06)

Several methods of the distributed_data_object rely on the fact that the distribution strategy behaves as if the local data were non-degenerate. Currently the non-distributor fixes this by returning trivial (local) results in the `_allgather` and `_Allreduce_sum` methods.
Affected d2o methods are at least: _contraction_helper, mean
Fix: Move the functionality for sum, prod, etc. into the distributor.

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/2
d2o: Contraction functions rely on non-degeneracy of distribution strategy (Theo Steininger, 2017-09-20)

Several methods of the distributed_data_object rely on the fact that the distribution strategy behaves as if the local data were non-degenerate. Currently the non-distributor fixes this by returning trivial (local) results in the `_allgather` and `_Allreduce_sum` methods.
Affected d2o methods are at least: _contraction_helper, mean
Fix: Move the functionality for sum, prod, etc. into the distributor.

https://gitlab.mpcdf.mpg.de/ift/D2O/-/issues/14
d2o: _contraction_helper does not work when using numpy keyword arguments (Theo Steininger, 2017-07-06)

The `_contraction_helper` passes keyword arguments to the underlying numpy functions (`axis=`, `keepdims=`). The result of the `_contraction_helper`'s local computation is then an array, not a scalar, so the dtype check fails.
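The failure mode can be reproduced with plain NumPy (a minimal sketch of the reductions `_contraction_helper` delegates to; no D2O involved):

```python
# Minimal NumPy sketch: with axis= or keepdims=, reductions return
# arrays rather than scalars, so code that assumes a scalar result
# (e.g. a dtype check on a single number) breaks.
import numpy as np

data = np.arange(6.0).reshape(2, 3)

full_sum = data.sum()                       # 0-d result, scalar-like
axis_sum = data.sum(axis=0)                 # shape (3,) array
keep_sum = data.sum(axis=0, keepdims=True)  # shape (1, 3) array

assert np.ndim(full_sum) == 0
assert axis_sum.shape == (3,)
assert keep_sum.shape == (1, 3)
```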
Fix: After solving theos/NIFTy#2, adapt to the case that the local run's result object is an array and make a further distinction of cases, e.g. for something like `axis=0` with the slicing_distributor.

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/3
d2o: _contraction_helper does not work when using numpy keyword arguments (Theo Steininger, 2017-09-20)

The `_contraction_helper` passes keyword arguments to the underlying numpy functions (`axis=`, `keepdims=`). The result of the `_contraction_helper`'s local computation is then an array, not a scalar, so the dtype check fails.
Fix: After solving theos/NIFTy#2, adapt to the case that the local run's result object is an array and make a further distinction of cases, e.g. for something like `axis=0` with the slicing_distributor.

https://gitlab.mpcdf.mpg.de/ift/D2O/-/issues/6
d2o cumsum and flatten rely on certain features of distribution strategy (Theo Steininger, 2017-07-06)

cumsum and flatten assume: if the shape of the d2o changes through flattening, the distribution strategy was "slicing".

https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/15
d2o cumsum and flatten rely on certain features of distribution strategy (Theo Steininger, 2016-05-26)

cumsum and flatten assume: if the shape of the d2o changes through flattening, the distribution strategy was "slicing".