Regarding impact: this bug probably affects many of our reconstructions, as many multi-frequency models should trigger this case.

If one adds more than one amplitude operator to a `CorrelatedFieldMaker` and sets a very small or large offset standard deviation, the field fluctuation amplitude gets distorted.

This is because in `get_normalized_amplitudes()`, every amplitude operator gets divided by `self.azm`, and the results of this function are then multiplied via an outer product to create the joint amplitude operator. In CFs with more than one amplitude operator, this results in the joint amplitude operator being divided by multiple powers of the zero-mode operator and thus an incorrect output scaling behavior.
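To make the scaling issue concrete, here is a minimal numerical sketch in plain NumPy (not actual NIFTy code; `azm`, `a1`, `a2` are stand-ins for the zero-mode value and two amplitude spectra): since each amplitude is divided by the zero mode before the outer product, the joint amplitude picks up one factor of `1/azm` per amplitude instead of a single overall `1/azm`.

```python
import numpy as np

# Hypothetical sketch of the scaling issue (not NIFTy code): each of two
# amplitude spectra is divided by the zero-mode value `azm`, so their
# outer product scales as 1/azm**2 instead of the intended 1/azm.
azm = 0.01                            # small offset std -> small zero mode
a1 = np.ones(5)
a2 = np.ones(5)
norm1, norm2 = a1 / azm, a2 / azm
joint = np.outer(norm1, norm2)        # carries a factor 1/azm**2
intended = np.outer(a1, a2) / azm     # intended: a single factor 1/azm
print(joint[0, 0] / intended[0, 0])   # ratio equals 1/azm
```

The smaller `offset_std` (and hence `azm`) gets, the larger the spurious extra factor, which matches the trend visible in the histograms below.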

To demonstrate this effect, I created correlated field operators for identical 2d fields: one with a single amplitude operator on `ift.RGSpace((N, N))` and one with two amplitude operators on `ift.RGSpace(N)`. From top to bottom, I varied the `offset_std` setting passed to `set_amplitude_total_offset()` and in each case plotted a histogram of the sampled fields' standard deviations. One can clearly see that the output fluctuation amplitude is independent of the `offset_std` for the 'single' case, as it should be, but not for the 'dual' case.

A fix for this is proposed in merge requests !669 and !670.

To reproduce, run the following code:

```python
import nifty7 as ift
import matplotlib.pyplot as plt

fluct_pars = {
    'fluctuations': (1.0, 0.1),
    'flexibility': None,
    'asperity': None,
    'loglogavgslope': (-2.0, 0.2),
    'prefix': 'flucts_',
    'dofdex': None
}
n_offsets = 5
n_samples = 1000
dom_single = ift.RGSpace((25, 25))
dom_dual = ift.RGSpace(25)

fig, axs = plt.subplots(nrows=n_offsets, ncols=2, figsize=(12, 4 * n_offsets))
for i in range(n_offsets):
    cfmaker_single = ift.CorrelatedFieldMaker(prefix=f'{i}_')
    cfmaker_dual = ift.CorrelatedFieldMaker(prefix=f'{i}_')
    cfmaker_single.add_fluctuations(dom_single, **fluct_pars)
    cfmaker_dual.add_fluctuations(dom_dual, **fluct_pars)
    cfmaker_dual.add_fluctuations(dom_dual, **fluct_pars)
    offset_std = 10**-i
    zm_pars = {'offset_mean': 0.,
               'offset_std': (offset_std, offset_std / 10.)}
    cfmaker_single.set_amplitude_total_offset(**zm_pars)
    cfmaker_dual.set_amplitude_total_offset(**zm_pars)
    cf_single = cfmaker_single.finalize(prior_info=0)
    cf_dual = cfmaker_dual.finalize(prior_info=0)
    samples_single = [cf_single(ift.from_random(cf_single.domain)).val.std()
                      for _ in range(n_samples)]
    samples_dual = [cf_dual(ift.from_random(cf_dual.domain)).val.std()
                    for _ in range(n_samples)]
    _ = axs[i, 0].hist(samples_single, bins=20, density=True)
    _ = axs[i, 1].hist(samples_dual, bins=20, density=True)
    axs[i, 0].set_title(f'offset_std = {offset_std:1.0e}, single')
    axs[i, 1].set_title(f'offset_std = {offset_std:1.0e}, dual')
    axs[i, 0].set_xlabel('sample std')
    axs[i, 1].set_xlabel('sample std')
plt.tight_layout()
plt.show()
```

Any comments?

**Lukas Platz** (24333e4d) at 30 Jul 17:31

fix CorrelatedFieldMaker amplitude normalization

**Lukas Platz** (ff365138) at 30 Jul 17:20

fix fluctuation normalization for more than one amplitude

In my experience, using sparse matrices in practice is much more involved than using dense ones - there are many more ways to shoot yourself in the foot. Having to switch application operators is probably not a problem for most users.

Yes, I see your point.

The only pain I have with it is that it forces us to keep the operator code complex and hard to understand and maintain. Separate code paths for dense and sparse matrices would allow a much clearer code logic.

Looking back on the old implementation, I think it was a mistake to join the sparse matrix application and the general matrix application in one operator. It makes the `__init__` function much more involved than necessary and also hard to understand. If you two don't protest, I vote for splitting it into two operators.

Should we merge this? I don't have the permissions to do it.

At this point, one general remark: the original implementation did not use `np.tensordot` in the `flatten=True` case because that mode is mostly used for applying `scipy.sparse` matrices. For d>1 input fields, flattening before application is needed because sparse matrices are two-dimensional by design.

In the `flatten=True` branch, `m.dot` was used for the application because sparse matrices implement their own `.dot` method that makes use of the sparsity. Using `np.dot` or `np.tensordot` in this case destroys all the efficiency that sparse matrices can provide and either makes the application *very* inefficient or could even crash the code if it tries to cast the sparse matrix to a dense NumPy array for `tensordot`.
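As a generic illustration of why the flattening step exists (a plain scipy sketch, not the NIFTy operator itself): a sparse matrix can only act on flat vectors, so a d>1 array is raveled, multiplied via the sparse-aware `.dot`, and reshaped back.

```python
import numpy as np
from scipy import sparse

# Generic sketch (not the NIFTy operator): a scipy.sparse matrix is
# two-dimensional by design, so a d>1 input must be flattened first.
x = np.arange(12.0).reshape(3, 4)            # a 2d "field"
m = 2.0 * sparse.identity(12, format='csr')  # sparse operator on the flat space
y = m.dot(x.ravel()).reshape(x.shape)        # sparse-aware .dot, reshape back
assert np.allclose(y, 2.0 * x)
```

Replacing `m.dot` with `np.tensordot` here would require a dense version of `m`, which is exactly the inefficiency described above.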

This means: the operator in its current state is definitely *not* sparse matrix compatible, and we should remove the respective hints from the docstring. If we want to keep sparse matrix compatibility in NIFTy, we should re-add the old operator, trim it down to just applying sparse matrices, and name it something like `SparseMatrixProductOperator`.

@gedenhof, what do you say?

I would second that - either we rename the operator to something like `TensorDotOperator`, or we stick to simple matrix-vector/matrix-matrix products.

For general tensor computations there are also the `Einsum` operators, although I have to admit I don't find their interface straightforward to use.

Hi Neel, this is precisely why the original hint is given: as you say, a DomainTuple can contain more than one subspace, but the operator implementation relies(/relied?) on the assumption that users only pass in fields whose DomainTuples contain at most one subspace. The line is there to warn users of this fact, so please revert the change. Unless your operator no longer relies on this assumption - then you can just delete the line ;)

Looks good!

Hi @g-neelshah,

nice to see someone is upgrading and bugfixing the operator!

While you are at it, you might want to replace the calls to `.flatten()` with `np.ravel`, which does not create a copy of the data unless necessary - that alone could increase the performance of the operator quite a bit for large inputs.

Cheers, Lukas

It is very nice - thanks for implementing it! :)

**Lukas Platz** (b7246142) at 22 Jul 17:54

fix typo