# IFT issues
https://gitlab.mpcdf.mpg.de/groups/ift/-/issues

## Issue #294: Potential confusion in the direction of FFTs
Reported by Martin Reinecke (updated 2020-05-13) at https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/294

In the current implementation of `FFTOperator` (https://gitlab.mpcdf.mpg.de/ift/nifty/-/blob/NIFTy_6/nifty6/operators/harmonic_operators.py#L76) we have the code:
```python
if x.domain[self._space].harmonic:  # harmonic -> position
    func = fft.fftn
    fct = 1.
else:
    func = fft.ifftn
    fct = ncells
```
where `x` is the input field.
This looks exactly the wrong way around.
@parras, @pfrank, @kjako, @reimar, @veberle, what do you think?
Switching the FFT direction breaks almost none of our tests (only one, which already has a FIXME :).
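For reference, numpy's conventions put the normalization factor on `ifftn`, which is what the `fct = ncells` on that branch compensates for. A quick sketch of the plain numpy behaviour (this only illustrates numpy's normalization, not NIFTy's volume factors):

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.standard_normal((8, 8))   # stand-in for a real-valued field
ncells = x.size

# numpy applies no factor in fftn and divides by ncells in ifftn,
# so multiplying by ncells yields the unnormalized inverse transform:
unnormalized_inverse = np.fft.ifftn(x) * ncells

# For real input this equals the forward transform with the exponent
# sign flipped, i.e. the conjugate of fftn:
np.testing.assert_allclose(unnormalized_inverse, np.conj(np.fft.fftn(x)),
                           atol=1e-12)
```

So the two branches differ only in the sign of the exponent and where the `1/ncells` factor lands, which is exactly what makes a swapped direction easy to miss in tests.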
## Issue #296: Releasing NIFTy 6?
Reported by Martin Reinecke (updated 2020-05-22) at https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/296

I have the impression that we have enough new features to justify a NIFTy 6 release.
Does anyone have feature suggestions which should go into the code before the release?

## Issue #283: Reduce licensing boilerplate
Reported by Gordian Edenhofer (updated 2020-05-13) at https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/283

If we really feel the need to put licensing boilerplate in every source file, let's at least make it concise, e.g. by using https://spdx.org/ids.

## Issue #299: Add possibility to log energies
Reported by Philipp Arras (updated 2020-05-19) at https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/299

I have implemented a method to log energies inside an IterationController so that they can be analyzed afterwards. @mtr, how do you like this implementation? Do you think we could/should make this part of NIFTy? Or do you have other ideas how to approach the problem?
https://gitlab.mpcdf.mpg.de/ift/papers/rickandresolve/-/commit/8b0e15dc4a533dfba17f29361345309e336dfb8a
(Since this is part of an unpublished project, this link is only visible for ift group members. Sorry!)
@veberle @mtr
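For readers without access to that repository, the basic idea can be sketched without any NIFTy machinery (class name and stopping logic below are hypothetical, not the actual `IterationController` API): the convergence check simply records every energy value it sees, so the full trace is available afterwards.

```python
# Hypothetical sketch of an energy-logging convergence controller;
# names and stopping criteria are illustrative, not the nifty6 API.
class LoggingController:
    def __init__(self, max_iter=100, tol=1e-7):
        self.max_iter = max_iter
        self.tol = tol
        self.history = []        # energies, appended on every check

    def check(self, energy_value):
        self.history.append(energy_value)
        if len(self.history) >= self.max_iter:
            return "stop"
        if len(self.history) > 1 and abs(self.history[-2] - energy_value) < self.tol:
            return "converged"
        return "continue"

ctl = LoggingController(max_iter=10)
for e in [10.0, 4.0, 2.5, 2.5 + 1e-9]:   # mock minimizer energies
    status = ctl.check(e)
print(status)        # converged
print(ctl.history)   # full energy trace, ready for plotting
```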
## Issue #278: Getting rid of standard parallelization?
Reported by Martin Reinecke (updated 2019-12-09) at https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/278

The default way of parallelizing large tasks with MPI is currently to distribute fields over MPI tasks along their first axis. While it appears to work (our tests run fine with MPI), it has not been used during the last few years, and other parallelization approaches seem more promising.
If there is general agreement, I propose to remove this parallelization from the code, which would make NIFTy much smaller and easier to use and maintain.

## Issue #311: Partial Constant EnergyAdapter
Reported by Philipp Frank (updated 2020-07-18) at https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/311

The following minimal example raises an unexpected error on initialization of the `EnergyAdapter` object:
```python
import nifty7 as ift

d = ift.UnstructuredDomain(10)
a = ift.FieldAdapter(d, 'hi') + ift.FieldAdapter(d, 'ho')
lh = ift.GaussianEnergy(domain=a.target) @ a
x = ift.from_random(lh.domain)
H = ift.EnergyAdapter(x, lh, constants=['hi'])
```
Assignee: Philipp Arras

## Issue #309: Inconsistency in interface
Reported by Philipp Arras (updated 2020-06-29) at https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/309

`ift.StructuredDomain.total_volume` is a property, `ift.DomainTuple.total_volume` is a method. Shall we unify that?

## Issue #306: Complex samples are inconsistent
Reported by Reimar Heinrich Leike (updated 2020-06-19) at https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/306

When we draw samples with complex `dtype`, we multiply their magnitude by `sqrt(0.5)`.
This is done so that a sample drawn for a variance of 1 also has a standard deviation of 1 on average.
However, this is inconsistent with its Hamiltonian.
If we instead regard a complex number as a pair of real numbers, then in order to reproduce the sample variance, we would need the vector (real, imag) to have covariance
```
/ 0.5 0 \
| |
\ 0 0.5 /
```
However, this covariance would lead to the Hamiltonian `(real**2 + imag**2)*2`, which is double what we would get when using the complex likelihood with covariance 1.
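The mismatch can be checked numerically in plain numpy. The sketch below follows the `x^T Sigma^{-1} x` energy convention used in the text (assumption: no additional factor 1/2), so it mirrors the issue's arithmetic rather than any particular NIFTy class:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Samples drawn as described: magnitude scaled by sqrt(0.5), so the
# real and imaginary parts each have variance 0.5.
z = np.sqrt(0.5) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

# E[|z|^2] is indeed ~1, as intended:
print(np.mean(np.abs(z)**2))

# Energy of the (real, imag) pair under covariance diag(0.5, 0.5) ...
energy_pair = (z.real**2 + z.imag**2) / 0.5
# ... versus the complex likelihood with covariance 1:
energy_complex = np.abs(z)**2

# The pair formulation gives exactly twice the complex one:
np.testing.assert_allclose(energy_pair, 2 * energy_complex)
```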
Assignee: Philipp Frank

## Issue #305: MPI Summation Routine in global NIFTy namespace?
Reported by Lukas Platz (updated 2020-05-30) at https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/305

Currently, `nifty6.minimization.metric_gaussian_kl` contains valuable MPI routines like the summation routine of !512 and `_shareRange`.
Currently I am working on an MPI operator using MPI allreduce summation and used to import `_allreduce_sum_field` and `_shareRange` directly from the KL file for development purposes.
Now that !512 removed the former, should we make the new summation routine general again and include it in the global namespace?
@mtr: would that be feasible without a large effort?
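For reference, the even-split logic such a range-sharing helper typically implements can be sketched in a few lines (`share_range` below is a hypothetical stand-in; the actual `_shareRange` signature may differ):

```python
# Hypothetical stand-in for a _shareRange-style helper: split n items
# as evenly as possible over ntask workers, returning task `rank`'s
# half-open index range along the first axis.
def share_range(n, ntask, rank):
    base, extra = divmod(n, ntask)
    lo = rank * base + min(rank, extra)
    hi = lo + base + (1 if rank < extra else 0)
    return lo, hi

# 10 items over 3 tasks: the leftover element goes to the first task.
print([share_range(10, 3, r) for r in range(3)])  # [(0, 4), (4, 7), (7, 10)]
```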
## Issue #304: Broken jacobian
Reported by Philipp Arras (updated 2020-06-02) at https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/304

I have discovered an example in which the Jacobian test breaks. I do not (yet?) see why it should not work.
```python
import numpy as np
import nifty6 as ift
dom = ift.UnstructuredDomain(10)
op = (1.j*(ift.ScalingOperator(dom, 1.).log()).imag).exp()
loc = ift.from_random(op.domain, dtype=np.complex128) + 5
ift.extra.check_jacobian_consistency(op, loc)
```
Note the `+ 5`, which is meant to ensure that the branch cut of the logarithm is avoided.
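For anyone reproducing this without NIFTy: the core of such a consistency check is a directional finite difference compared against the Jacobian's action, which can be sketched in plain numpy (toy helper, not the `ift.extra` implementation):

```python
import numpy as np

def fd_jacobian_check(f, jac, x, eps=1e-6, rtol=1e-5):
    """Compare jac(x) @ direction against a central finite difference of f."""
    rng = np.random.default_rng(1)
    direction = rng.standard_normal(x.shape)
    fd = (f(x + eps * direction) - f(x - eps * direction)) / (2 * eps)
    np.testing.assert_allclose(jac(x) @ direction, fd, rtol=rtol, atol=1e-8)

# Toy check: f(x) = x**2 elementwise has the diagonal Jacobian 2x.
fd_jacobian_check(lambda v: v**2, lambda v: np.diag(2 * v),
                  np.array([1.0, 2.0, 3.0]))
```

A correct operator passes silently; a broken Jacobian, as in the example above, makes the finite difference and the linearization disagree beyond tolerance.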
## Issue #303: Jacobian consistency check broken for CorrelatedField with N > 1
Reported by Philipp Arras (updated 2020-05-22) at https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/303

If the return statement here
https://gitlab.mpcdf.mpg.de/ift/nifty/-/blob/NIFTy_6/test/test_operators/test_correlated_fields.py#L92
is removed, the jacobian tests fail for the individual amplitudes and the whole model.If the return statement here
https://gitlab.mpcdf.mpg.de/ift/nifty/-/blob/NIFTy_6/test/test_operators/test_correlated_fields.py#L92
is removed, the jacobian tests fail for the individual amplitudes and the whole model.Philipp HaimPhilipp Haimhttps://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/302Invalid call to inverse2020-06-24T15:04:05ZGordian EdenhoferInvalid call to inverseWhy is nifty calling `.inverse` of an operator that definitely does not support it?
```ipython
In [40]: n2f_jac
Out[40]:
ChainOperator:
DiagonalOperator
OuterProduct
In [41]: n2f_jac.domain
Out[41]:
DomainTuple:
HPSpace(nside=16)
In [42]: n2f_jac.target
Out[42]:
DomainTuple:
UnstructuredDomain(shape=(30,))
HPSpace(nside=16)
In [43]: n2f_jac.capability
Out[43]: 3
In [44]: ift.extra.consistency_check(n2f_jac)
~/Projects/nifty/nifty6/extra.py in consistency_check(op, domain_dtype, target_dtype, atol, rtol, only_r_linear)
201 _actual_domain_check_linear(op, domain_dtype)
202 _actual_domain_check_linear(op.adjoint, target_dtype)
--> 203 _actual_domain_check_linear(op.inverse, target_dtype)
~/Projects/nifty/nifty6/operators/linear_operator.py in inverse(self)
86 Returns a LinearOperator object which behaves as if it were
87 the inverse of this operator."""
---> 88 return self._flip_modes(self.INVERSE_BIT)
~/Projects/nifty/nifty6/operators/chain_operator.py in _flip_modes(self, trafo)
123 if trafo == ADJ or trafo == INV:
124 return self.make(
--> 125 [op._flip_modes(trafo) for op in reversed(self._ops)])
~/Projects/nifty/nifty6/operators/chain_operator.py in <listcomp>(.0)
123 if trafo == ADJ or trafo == INV:
124 return self.make(
--> 125 [op._flip_modes(trafo) for op in reversed(self._ops)])
~/Projects/nifty/nifty6/operators/diagonal_operator.py in _flip_modes(self, trafo)
149 if trafo & self.INVERSE_BIT:
--> 150 xdiag = 1./xdiag # This operator contains zeros zeros on one axis
FloatingPointError: divide by zero encountered in true_divide
In [45]: n2f_jac(r).val
Out[45]:
array([[ 2.87943447e-02, 2.84809183e-02, 2.86276722e-02, ...,
2.85864172e-02, 2.88285874e-02, 2.88791209e-02],
...,
[ 0.00000000e+00, 1.46037080e-16, 0.00000000e+00, ...,
1.43154658e-16, -1.34704751e-16, 0.00000000e+00], # zero except for numerical fluctuations
[ 3.18092631e-02, 3.11583354e-02, 3.14395536e-02, ...,
3.13576202e-02, 3.18985295e-02, 3.20487145e-02]])
```
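The reported `capability` of 3 already encodes the answer. Assuming the usual `LinearOperator` flag values (`TIMES = 1`, `ADJOINT_TIMES = 2`, `INVERSE_TIMES = 4`, `ADJOINT_INVERSE_TIMES = 8`; worth double-checking against the class), the consistency check should be skipping the inverse branch for this operator:

```python
# Capability bit flags in the style of nifty6's LinearOperator
# (flag values assumed here; verify against the actual class).
TIMES, ADJOINT_TIMES, INVERSE_TIMES, ADJOINT_INVERSE_TIMES = 1, 2, 4, 8

capability = 3  # as reported for n2f_jac in the session above

print(bool(capability & TIMES))          # forward application supported
print(bool(capability & ADJOINT_TIMES))  # adjoint supported
print(bool(capability & INVERSE_TIMES))  # inverse NOT supported: .inverse must not be called
```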
## Issue #301: Broken GaussianEnergy
Reported by Gordian Edenhofer (updated 2020-05-19) at https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/301

Sampling with a Gaussian energy and no mean was broken in one of the recent merge requests.
Applying the following patch
```patch
diff --git a/demos/getting_started_3.py b/demos/getting_started_3.py
index 2dcffa17..c053e0a0 100644
--- a/demos/getting_started_3.py
+++ b/demos/getting_started_3.py
@@ -109,8 +109,7 @@ if __name__ == '__main__':
minimizer = ift.NewtonCG(ic_newton)
# Set up likelihood and information Hamiltonian
- likelihood = (ift.GaussianEnergy(mean=data, inverse_covariance=N.inverse) @
- signal_response)
+ likelihood = ift.GaussianEnergy(inverse_covariance=N.inverse) @ ift.Adder(data, neg=True) @ signal_response
H = ift.StandardHamiltonian(likelihood, ic_sampling)
initial_mean = ift.MultiField.full(H.domain, 0.)
```
results in the demo failing, though it should do exactly the same thing as before.
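The algebraic claim itself is easy to verify in isolation. With the convention `E(s) = (s - d)^T N^{-1} (s - d) / 2` (a standalone numpy sketch, not the NIFTy operators), passing the mean to the energy and subtracting the data beforehand are identical:

```python
import numpy as np

rng = np.random.default_rng(3)
d = rng.standard_normal(5)                  # data
s = rng.standard_normal(5)                  # signal response output
Ninv = np.diag(rng.uniform(0.5, 2.0, 5))    # inverse noise covariance

# Analogue of GaussianEnergy(mean=data, ...) applied to s:
e_mean = 0.5 * (s - d) @ Ninv @ (s - d)

# Analogue of GaussianEnergy(...) @ Adder(data, neg=True) applied to s:
residual = s - d
e_adder = 0.5 * residual @ Ninv @ residual

assert np.isclose(e_mean, e_adder)   # identical by construction
```

So if the demo fails after the patch, the difference must lie in how sampling handles the two formulations, not in the energy values themselves.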
## Issue #300: Streamline arguments position
Reported by Gordian Edenhofer (updated 2020-05-19) at https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/300

Switch positional arguments to get a consistent position of e.g. `domain` within the positional arguments of all operators.
Affected operators are:
* `ift.from_random`
* `ift.OuterProduct`
@lerou You said that you are keeping a list of operators that annoy you by their inconsistency. Can you amend the list with such cases, please?
## Issue #298: Tests are taking too long
Reported by Martin Reinecke (updated 2020-05-19) at https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/298

On my laptop, the NIFTy regression tests are taking almost 10 minutes at the moment, which starts to get really problematic during development. Is there a chance of reducing test sizes and/or avoiding redundancies in the current tests?
The biggest problem by far is test_correlated_fields.py, which accounts for roughly 40% of the testing time. Next is test_jacobian.py with 20%; next is test_energy_gradients.py.
@parras, @pfrank, @phaim
## Issue #297: Really obsolete NIFTy docs are still online
Reported by Martin Reinecke (updated 2020-05-14) at https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/297

I searched online for "nifty documentation" today, and the second hit was
https://wwwmpa.mpa-garching.mpg.de/ift/nifty/start.html
Can we please remove these completely obsolete files from the public internet? Our potential users might be grateful :)
Assignee: Torsten Ensslin

## Issue #295: Suggestion: Add dtype property to DomainTuple
Reported by Philipp Arras (updated 2020-04-26) at https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/295

I suggest adding a property called `dtype` to `DomainTuple`. When wrapping a numpy array as a `Field`, the constructor would check that the dtypes of the array and the `DomainTuple` coincide. Functions like `from_random` would no longer need to take a dtype. In particular, sampling from e.g. `DiagonalOperator`s becomes unambiguous in terms of drawing either a real or a complex field. This in turn would simplify the KL because the `lh_sampling_dtype` would disappear.
I am not 100% sure how to treat `RGSpace.get_default_codomain()`. Would the default codomain of a real space be real again or complex? My suggestion is to add an optional argument to that function and take as default the same dtype as the original space.
Also, I am not sure how to tell the `DomainTuple` its dtype; I suggest an additional argument to `DomainTuple.make()`.
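A toy sketch of the proposal (all names below are hypothetical stand-ins, not NIFTy code): the domain carries the dtype, the field constructor enforces it, and `from_random` no longer needs a dtype argument because the domain alone decides real vs. complex:

```python
import numpy as np

class ToyDomain:
    """Stand-in for a dtype-carrying DomainTuple."""
    def __init__(self, shape, dtype=np.float64):
        self.shape = shape
        self.dtype = np.dtype(dtype)

class ToyField:
    """Stand-in for Field: rejects arrays whose dtype mismatches the domain."""
    def __init__(self, domain, arr):
        if arr.dtype != domain.dtype:
            raise TypeError(f"dtype mismatch: {arr.dtype} vs {domain.dtype}")
        self.domain, self.arr = domain, arr

def from_random(domain, seed=0):
    # No dtype argument: the domain decides what gets drawn.
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(domain.shape)
    if np.issubdtype(domain.dtype, np.complexfloating):
        x = x + 1j * rng.standard_normal(domain.shape)
    return ToyField(domain, x.astype(domain.dtype))

f = from_random(ToyDomain((4,), np.complex128))
print(f.arr.dtype)  # complex128
```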
What do you think about that, @mtr @pfrank @reimar?
## Issue #293: [GU] Simplify point-wise Field operations
Reported by Martin Reinecke (updated 2020-04-14) at https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/293

Currently, point-wise operations on NIFTy objects are not implemented in a beautiful way: they are injected via some nasty Python tricks into `Field`, `MultiField`, `Operator` and the `sugar` module, and a slightly more complicated variant is added to `Linearization`.
My plan is to have a single method `ptw` (or `pointwise`) in `Field`, `MultiField`, `Linearization` and `Operator`, which takes a string naming the requested pointwise operation (like "sin", "exp", "one_over" etc.). The actual operations could then be implemented trivially via a dictionary mapping these names to their numpy equivalents, which would also describe their derivatives via numpy or custom functions where necessary.
If we like, we can leave the global pointwise functions injected into `sugar.py` unchanged for aesthetic reasons :)
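A minimal sketch of the dispatch-table idea (toy classes, not the eventual NIFTy implementation): one dictionary maps the operation name to a (function, derivative) pair, and the linearized variant applies the chain rule using the derivative entry:

```python
import numpy as np

# One shared table: name -> (pointwise function, its derivative).
PTW_TABLE = {
    "sin": (np.sin, np.cos),
    "exp": (np.exp, np.exp),
    "one_over": (lambda x: 1.0 / x, lambda x: -1.0 / x**2),
}

class ToyField:
    def __init__(self, val):
        self.val = np.asarray(val, dtype=float)

    def ptw(self, op):
        func, _ = PTW_TABLE[op]
        return ToyField(func(self.val))

class ToyLinearization(ToyField):
    def __init__(self, val, jac_diag=None):
        super().__init__(val)
        self.jac_diag = np.ones_like(self.val) if jac_diag is None else jac_diag

    def ptw(self, op):
        func, deriv = PTW_TABLE[op]
        # Chain rule: scale the (diagonal) Jacobian by the local derivative.
        return ToyLinearization(func(self.val), self.jac_diag * deriv(self.val))

lin = ToyLinearization([0.0, 1.0]).ptw("exp")
print(lin.val)       # [1.0, e]
print(lin.jac_diag)  # the same, since d/dx exp(x) = exp(x)
```

Adding a new pointwise operation then means adding one table entry instead of touching four classes.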
Opinions?
## Issue #292: Many Linearization methods explicitly expect Fields
Reported by Martin Reinecke (updated 2020-04-03) at https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/292

While working on Field/MultiField unification, I noticed that most `Linearization` members do not accept `MultiField`s; a clear example is `sinc`, which explicitly creates a `Field` for its output, but many other methods break in more obscure ways.
If we want unification, we need all `Linearization` methods to work on `MultiField`s, which is quite a bit of tedious work. Any volunteers?
## Issue #289: Overflows after updating NIFTy
Reported by Philipp Arras (updated 2020-03-27) at https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/289

When updating to the new interpolation scheme (https://gitlab.mpcdf.mpg.de/ift/nifty/commit/37a5691e3013ca36303ee4c3e78c74485f516793), I get overflows in the minimization where I did not get any before. I am confused, since both gradient tests work better than before and the accuracy of the approximation is better. Maybe the operator works better now, so that an actual inconsistency in my model is triggered. I have the feeling that it is not the operator's fault... ;)
@gedenhof, can you share your observations about the overflows? If it is the same commit as for me, then I will go ahead and try to assemble a minimal demonstration case, but that might be a lot of work, so maybe we can find the problem without it.
@gedenhof, @lerou