# ift issues

https://gitlab.mpcdf.mpg.de/groups/ift/-/issues

## [Nifty citation](https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/262) (Philipp Arras, 2019-05-21)

When citing NIFTy with A&A style I get in the references:
```
Martin Reinecke, Theo Steininger, Marco Selig. 2019, NIFTy – Numerical Information Field TheorY
```
without any link. Can this be fixed? Also: the year is missing in https://gitlab.mpcdf.mpg.de/ift/nifty/blob/NIFTy_5/docs/source/citations.rst

## [Do we still need NFFT?](https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/268) (Martin Reinecke, 2019-04-29)

I'd like to remove the `nifty5.library.NFFT` class, since it will most likely not be used in the future and has a fairly inconvenient dependency on `pynfft` and the nfft package.
Any objections? @parras?

## [SumOperator is not imported with nifty](https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/267) (Philipp Haim, 2019-04-26)

When importing nifty5, SumOperator is not included. Is that intentional?

## [Bug in metric](https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/264) (Philipp Arras, 2019-04-16)

The following script crashes in the current version of nifty, and even on the branch `opsumdomains` the last example crashes. Anyone any ideas how to fix this?
```
import nifty5 as ift
dom = ift.RGSpace(10)
diffuse = ift.ducktape(dom, None, 'xi')
diffuse2 = ift.ducktape(dom, None, 'xi2')
e1 = ift.GaussianEnergy(domain=diffuse.target)
ops = []
ops.append((e1 + e1) @ diffuse)
ops.append(e1 @ diffuse)
ops.append(e1 @ diffuse + e1 @ diffuse2)
for op in ops:
    pos = ift.full(op.domain, 0)
    lin = ift.Linearization.make_var(pos, True)
    print(op(lin).metric)
    assert isinstance(op(lin).metric.domain, ift.MultiDomain)
    assert op(lin).metric.domain is op.domain
```

## [MaskOperator in getting_started_1](https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/265) (Julia Stadler, 2019-04-16)

The demo getting_started_1 uses a DiagonalOperator to implement the chessboard mask. I think replacing it by a MaskOperator would be helpful for people new to the code (I checked the demos first for how to implement masking ...)

## [Properly implementing a convolution](https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/263) (Julia Stadler, 2019-03-14)

My response convolves the signal with a non-spherical PSF, which is specified by a field with the same domain as the signal. To avoid issues with periodic boundaries I apply a zero padding, so the exact steps are:
* zero-pad the PSF from the center, apply a Fourier transform and construct a DiagonalOperator from the result
* zero-pad the signal from the right/top and apply a Fourier transform
* apply the PSF-operator to the signal field followed by an inverse Fourier transform
* apply the adjoint signal zero padding operation
* take the real value
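The steps above can be sketched in plain 1-D numpy (a toy stand-in, not nifty operators; `padded_fft_convolve` and its padding convention are my own illustration):

```python
import numpy as np

def padded_fft_convolve(signal, psf, pad):
    """Toy 1-D sketch of the padded FFT convolution described above.

    `pad` extra zeros are appended so the periodic wrap-around of the
    FFT happens in the padding region instead of inside the image.
    """
    n = signal.size
    m = n + pad
    # zero-pad the signal from the right
    sig_p = np.zeros(m)
    sig_p[:n] = signal
    # zero-pad the PSF "from the center": split it so its peak sits at pixel 0
    psf_p = np.zeros(m)
    half = psf.size // 2
    psf_p[:psf.size - half] = psf[half:]
    if half:
        psf_p[-half:] = psf[:half]
    # multiply in harmonic space (the DiagonalOperator step)
    conv = np.fft.ifft(np.fft.fft(sig_p) * np.fft.fft(psf_p))
    # adjoint of the signal zero padding = crop, then take the real part
    return conv.real[:n]
```

With a delta-function PSF this reproduces the input signal exactly, which is a quick sanity check of the padding and centering conventions.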
The problem with this procedure is that it reduces the flux at the borders of the image, where the PSF smears the additional zeros out to the pixels I'm interested in. I attached two plots to illustrate the problem: one shows how point sources are smeared out to the opposite side of the image without zero padding, the other how the zero padding leaks into the image at the border. The signal contains two point sources which get brighter with bin number, and equally the PSF gets wider.
Has anyone encountered a similar problem before? Or are there any nifty tools to tackle this that I'm not aware of? My approach would be to extend the signal field beyond the data region and to include a mask in the response to account for that. Or is there a smarter approach?
![data_no-zero-padding](/uploads/90b1f2e428d6316b0f4886ec40535fe4/data_no-zero-padding.png)
![data_outer-zero-padding](/uploads/75851eacc3174eccb9be766b57bdad28/data_outer-zero-padding.png)

## [extending clip](https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/260) (Reimar Heinrich Leike, 2019-02-25)

Currently clip clips all values by the same clipping value. However, in numpy clip can also take an array, such that every value gets clipped by a different clipping value. Field.clip can in principle take a dobj and then clip differently for every entry; however, this does not seem intended.
It would be advantageous to support passing a Field/MultiField instead of a scalar for clip.
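For comparison, this is numpy's behavior with per-entry bounds, which is the semantics being requested for Field/MultiField:

```python
import numpy as np

x = np.array([-2.0, 0.5, 3.0, 10.0])

# scalar bounds: every entry is clipped by the same values
print(np.clip(x, 0.0, 4.0))   # [0.  0.5 3.  4. ]

# array bounds: every entry gets its own clipping value,
# which is what Field.clip could support as well
lo = np.array([0.0, 1.0, -1.0, 5.0])
hi = np.array([1.0, 2.0,  2.0, 8.0])
print(np.clip(x, lo, hi))     # [0. 1. 2. 8.]
```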
## [bug in linearization with multifield as target](https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/259) (Reimar Heinrich Leike, 2019-02-07)

It is possible to construct sensible operators out of Nifty routines that fail if called with Linearizations. [minimal_example.py](/uploads/064781a8a4c0a0af2872a52efb5c0814/minimal_example.py)

## [Demos do not work for python 3.5.5](https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/257) (Jakob Knollmueller, 2019-02-01)

Hi,
all demos do not run with python 3.5.5 because the f prefix before strings is not supported.
Jakob

## [Tag release commit!](https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/256) (Philipp Arras, 2019-02-01)

Also:
- First demo runs, then build pages

## [Minimizers don't work reliably on nonlinear problems](https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/226) (Martin Reinecke, 2019-02-01)

We are seeing quite a lot of strange problems in our minimizations, typically when nonlinearities are involved.
So far I have not found any real bugs in our minimizers, so my suspicion is that we have to add constraints to our fields of unknowns, to prevent them from wandering into regions where energy values are too high, gradients too small, etc. Our current attempts to clip the nonlinearities or to manually reset the positions whenever they move out of the accepted area do not work well, so I suggest trying algorithms like SciPy's 'L-BFGS-B', 'COBYLA', etc., which have native support for constraints.
If that turns out to work well, we can try to add constraints to our own solvers as well.
## [state of branches](https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/248) (Martin Reinecke, 2019-02-01)

With NIFTy 5 around the corner, I'd like to get an overview of which of the current branches we still need...
- try_different_sampling: @reimar, should we merge this?
- new_sampling: still has "krylov" files, can this be removed?
- complex_fixes: changes seem very small. @parras, should this be merged?
- symbolicDifferentiation: we should probably keep this, so that we don't forget how it is done if we ever need it again ;)
- yango_minimizer: @reimar, what's the status?
- krylov_probing: @reimar, ditto
- addUnits: will be kept since it might be useful in the future
- new_los: @kjako, I still need your opinion on this

## [SmoothnessOperator ill-posedness](https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/241) (Sebastian Kehl, 2019-02-01)

Visualizing nifty's SmoothnessOperator [smoothingoperator.pdf](/uploads/34b3a1a3e4326d00dc2695b100224f9f/smoothingoperator.pdf) over some PowerSpace, I was wondering whether having no penalty on the highest-frequency entry (indicated by the empty last row and column in the attached picture) is the desired behavior?
I understand that this might be desired for the zero mode (and maybe the next one too), since the data should be informative enough to render the overall problem well-posed. But for the highest-frequency mode, I would expect heavy problems in optimization from leaving this degree of freedom basically free.
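A generic numpy illustration (not nifty's actual SmoothnessOperator discretization) of why an empty last row and column is worrying: the corresponding mode becomes a flat direction of the penalty, so the curvature is singular along it.

```python
import numpy as np

n = 5
# second-difference smoothness penalty that never touches the last mode,
# mimicking an operator with an empty last row and column
D = np.zeros((n - 3, n))
for i in range(n - 3):
    D[i, i:i + 3] = [1.0, -2.0, 1.0]
C = D.T @ D          # curvature of the penalty; last row/column are zero
print(C[-1])         # all zeros

# the last mode is a flat direction: the penalty cannot constrain it,
# so an optimizer can push it arbitrarily far at no cost
e_last = np.eye(n)[-1]
print(C @ e_last)    # zero vector, i.e. C is singular
```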
## [[post Nifty5] make Fields and MultiFields immutable?](https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/247) (Martin Reinecke, 2018-10-17)

Currently, `Field`s and `MultiField`s are the only NIFTy classes whose value can change after they have been constructed. It may be worthwhile to change this and make them immutable as well.
Pros:
- code will get quite a bit shorter (e.g. no more explicit `lock()` calls)
- less potential for subtle usage errors
Cons:
- slight performance hit by additional copying of data (operators like `+=` will not be as efficient any more)
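One cheap way to get the immutability, sketched with plain numpy under the assumption that the underlying storage is a numpy array (`make_immutable_field` is a hypothetical helper, not nifty API):

```python
import numpy as np

def make_immutable_field(values):
    """Return a read-only copy of `values`; any in-place write raises."""
    arr = np.array(values, dtype=float)   # copy once at construction
    arr.flags.writeable = False           # no explicit lock() bookkeeping needed
    return arr

f = make_immutable_field([1.0, 2.0, 3.0])
try:
    f[0] = 42.0                           # in-place mutation is rejected
except ValueError as e:
    print("write rejected:", e)

# operators like += then transparently allocate a fresh array
g = f
g = g + 1.0                               # fine: new object, f is untouched
print(g)
```

This shows both sides of the trade-off listed above: subtle in-place mutation bugs become impossible, at the cost of an extra allocation per arithmetic operation.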
## [inverter=](https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/253) (Christoph Lienhard, 2018-09-11)

## [Constant model jacobian buggy](https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/252) (Reimar Heinrich Leike, 2018-07-11)

The jacobian of the ConstantModel is defined as
```
self._jacobian = 0.
```
This already looks quite simplified and, as one would expect, it easily causes trouble, e.g. as shown in this example:
```
import nifty5 as ift
space = ift.RGSpace(4)
a = ift.full(space, 1.)
hspace = space.get_default_codomain()
ht = ift.HarmonicTransformOperator(hspace, target = space)
b = ift.full(hspace, 1.)
multia = ift.MultiField.from_dict({'a':a})
var_a = ift.Variable(multia)['a']
const_b = ift.Constant(multia, b)
sumabvalue = var_a.value + ht(const_b).value
# Here it crashes due to domain mismatch, even though you can add the values just fine
sumab = var_a + ht(const_b)
```
There are at least two possible ways to fix this issue. One possibility is to make the ConstantModel use the position again, such that it can return zeros in the correct domain. Another possibility is to have an operator that maps anything to an empty MultiDomain (which can then be added to other fields).
I favor the latter; it is however incompatible with the way that Martin wants to unify Field and MultiField.
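The second option can be sketched with plain dicts standing in for MultiFields and MultiDomains (`add_jacobians` and the dict representation are my own illustration, not nifty API): a constant model's jacobian maps everything to an empty multi-domain, so sums of jacobians stay well defined without a bare `0.`.

```python
# toy sketch: a "multi-field jacobian" is a dict {key: derivative},
# and its domain is simply its set of keys
def add_jacobians(jac1, jac2):
    """Sum two jacobians; keys missing from one side count as zero."""
    keys = set(jac1) | set(jac2)
    return {k: jac1.get(k, 0.0) + jac2.get(k, 0.0) for k in keys}

var_jac = {'a': 1.0}   # jacobian of a variable model w.r.t. 'a'
const_jac = {}         # constant model: jacobian over an empty domain

# summing works and the domain information survives,
# whereas a bare scalar 0. loses the domain entirely
print(add_jacobians(var_jac, const_jac))   # {'a': 1.0}
```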
Any thoughts @kjako @mtr @parras ?

## [Linear Operator-MultiFields bug](https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/249) (Reimar Heinrich Leike, 2018-07-09)

The new feature that one can add multifields that do not have exactly the same domain causes trouble with the check_input function of linear operators. One fix would be to allow ANY linear operator to operate on ANY MultiField, which is well defined since a linear operator applied to zero is always zero. Thoughts?
(minimal code example for NIFTy crashing below)
```
import nifty5 as ift
space = ift.RGSpace(4)
a = ift.full(space, 1.)
space2 = ift.RGSpace(5)
b = ift.full(space, 1.)
multi = ift.MultiField({'a':a, 'b':b})
var_a = ift.Variable(multi)['a']*2 # Multiplying this by 2 causes NIFTy to crash later
# Here it crashes due to domain mismatch
var_a.gradient.adjoint_times(a)
```
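The proposed fix, letting any linear operator act on any MultiField by treating missing keys as zero, can be sketched with dicts (`apply_blockwise` is a hypothetical helper, not nifty API):

```python
def apply_blockwise(op_blocks, multifield):
    """Apply a block-diagonal linear operator (a dict of callables, one
    per key) to a multifield (a dict of values).

    Keys missing from the input count as zero, and a linear operator
    maps zero to zero, so they are simply omitted from the result;
    no domain-mismatch check is needed.
    """
    return {k: op_blocks[k](v) for k, v in multifield.items() if k in op_blocks}

ops = {'a': lambda v: 2.0 * v, 'b': lambda v: -v}
print(apply_blockwise(ops, {'a': 3.0}))            # {'a': 6.0}
print(apply_blockwise(ops, {'a': 1.0, 'b': 4.0}))  # {'a': 2.0, 'b': -4.0}
```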
## [Versioning problems when using Nifty in dependencies for other projects](https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/251) (Christoph Lienhard, 2018-07-03)

I am currently trying to set up a working setup.py for the HMCF project and have problems making pip install NIFTy on the fly.
I am not an expert and therefore the reason for my problem may lie somewhere else, but so far I think the source of it all is that the version is not specified directly in setup.py.
The problem arises when I specify a dependency link to NIFTy on git in my HMCF [setup.py](https://gitlab.mpcdf.mpg.de/ift/HMCF/blob/master/setup.py):
```
setup(name=...,
      dependency_links=["git+https://gitlab.mpcdf.mpg.de/ift/NIFTy@NIFTy4#egg=nifty4-4.2"],
      install_requires=["numpy>=1.10", "nifty4>=4.2", "scipy>=0.17", "h5py>=2.5.0"],
      ...)
```
This does not work (I also tried fiddling around with the egg argument, leaving it out, not specifying a version, etc., but the problem remains).
> Could not find a version that satisfies the requirement nifty4 (from hmcf==1.0) (from versions: )
> No matching distribution found for nifty4 (from hmcf==1.0)
As far as I understand the problem: pip wants to have a version number.
In several stack overflow questions and related corners of the internet people state that pip looks for a version in the setup.py file of the dependency.
Since NIFTy does not specify the version in an obvious way, pip will not treat NIFTy as a valid package.
Possible Solutions:
===================
1. Reinstate NIFTy on python servers as done with [NIFTy1](https://pypi.org/project/ift_nifty)
2. Specify a version (hard-coded if you want to call it like this) and skip the `__version__` stuff in the setup.py file of NIFTy.
I know, this is handy, but I could not find a solution where pip dependencies work while NIFTy is only available via git
I tried the latter solution on a new nifty branch called [setup_version_test](https://gitlab.mpcdf.mpg.de/ift/NIFTy/tree/setup_version_test) and it works using
```
setup(name=...,
      dependency_links=["git+https://gitlab.mpcdf.mpg.de/ift/NIFTy@setup_version_test#egg=nifty4-4.2"],
      ...)
```
in the HMCF setup.py file.
Since we have a number of projects which depend on Nifty and simulate easy installation by issuing a setup.py file (talking about you, [starblade](https://gitlab.mpcdf.mpg.de/ift/starblade/blob/master/setup.py)),
I find it very important that it is actually possible to achieve an on-the-fly installation of Nifty.
Everything in here was done with MPA resources, python 3.5.1 and pip 9.0.1.

## [Slight re-design of the `Energy` class](https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/246) (Martin Reinecke, 2018-06-27)

Currently, most energy classes take constructor arguments which allow their curvatures to be inverted. This only makes sense as long as we are not working very much with sums of energies. Once we do sum energies, it doesn't matter whether the individual curvatures are invertible or not; we just need to make sure that the summed-up curvature is invertible.
I'd like to propose the following changes:
- All inversion-related functionality is removed from the energy classes we currently have
- the `Energy` base class gets a new method `make_curvature_invertible()`, which takes an `IterationController` and optionally another `Energy` that serves as a preconditioner (we need an `Energy` here and not simply a `LinearOperator`, since the preconditioning might depend on position). This method returns a new `Energy` object, with invertible curvature.
- we add a new class `SamplingEnabledEnergy`, whose constructor takes two energies `prior` and `likelihood` and an `IterationController`. It returns an energy object that behaves like the sum of prior and likelihood, and returns a sampling-enabled curvature.
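A pure-Python sketch of the proposed interface; the class and method names come from the proposal above, but the bodies are purely illustrative (curvatures are stand-in floats, not linear operators, and the inversion machinery is stubbed out):

```python
class Energy:
    def __init__(self, value, curvature):
        self.value = value
        self.curvature = curvature      # stand-in for a LinearOperator

    def __add__(self, other):
        # individual curvatures need not be invertible, only the sum
        return Energy(self.value + other.value,
                      self.curvature + other.curvature)

    def make_curvature_invertible(self, iteration_controller,
                                  preconditioner=None):
        # in the proposal this wraps the curvature in an inversion scheme
        # driven by `iteration_controller`, optionally preconditioned by
        # another Energy; here it is just a stub returning a new Energy
        return Energy(self.value, self.curvature)


class SamplingEnabledEnergy(Energy):
    """Behaves like prior + likelihood, with a sampling-enabled curvature."""
    def __init__(self, prior, likelihood, iteration_controller):
        total = prior + likelihood
        super().__init__(total.value, total.curvature)


e = SamplingEnabledEnergy(Energy(1.0, 2.0), Energy(3.0, 4.0), None)
print(e.value, e.curvature)   # 4.0 6.0
```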
## [dtypes in UnitLogGauss](https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/245) (Philipp Arras, 2018-06-27)

```
import numpy as np
import nifty5 as ift
space = ift.RGSpace(2)
f = ift.from_random('normal', space)
var_f = ift.Variable(f)
g = ift.from_random('normal', space, dtype=np.complex)
var_g = ift.Variable(g)
multi = ift.MultiField({'f': f, 'g': g})
var = ift.Variable(multi)
op = ift.ScalingOperator(1j, space)
op_multi = ift.ScalingOperator(1j, var.value.domain)
lh_f = ift.library.UnitLogGauss(op(var_f))
lh_g = ift.library.UnitLogGauss(op(var_g))
lh = ift.library.UnitLogGauss(op_multi(var))
print(lh_f.value)
print(lh_f.gradient)
print()
print(lh_g.value)
print(lh_g.gradient)
print()
print(lh.value)
print(lh.gradient['f'])
print(lh.gradient['g'])
```
Results in:
```
2.230929182659062
nifty5.Field instance
- domain = DomainTuple, len: 1
RGSpace(shape=(2,), distances=(0.5,), harmonic=False)
- val = array([-1.52732309+0.j, -1.45915817+0.j])
0.30445727594179295
nifty5.Field instance
- domain = DomainTuple, len: 1
RGSpace(shape=(2,), distances=(0.5,), harmonic=False)
- val = array([0.25117854+0.27192986j, 0.45978435-0.51036889j])
2.535386458600855
nifty5.Field instance
- domain = DomainTuple, len: 1
RGSpace(shape=(2,), distances=(0.5,), harmonic=False)
- val = array([-1.52732309+0.j, -1.45915817+0.j])
nifty5.Field instance
- domain = DomainTuple, len: 1
RGSpace(shape=(2,), distances=(0.5,), harmonic=False)
- val = array([0.25117854+0.27192986j, 0.45978435-0.51036889j])
```
I would expect that the gradient has the same `dtype` as the input.