NIFTy issues (https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues), updated 2023-05-19

Issue #365: Update PyPi minor release (Matteo Guardiani, 2023-05-19)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/365

The last minor NIFTy8 release on PyPI (https://pypi.org/project/nifty8/#history) is 8.4 (from 2022-09-29).
Since many commits (including many bugfixes) have been merged since then, it would be good to release a new minor stable version to keep the PyPI version more or less up to date.
@pfrank @gedenhof @mtr

Issue #364: Encoding error in _report_to_logger_and_file, when using Windows (Andreas Popp, 2023-04-20)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/364

A `UnicodeEncodeError: 'charmap' codec can't encode character '\u03c7'` is raised when executing my own code, but also when executing NIFTy demo files, where results are written to separate files.
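For context: this class of error typically occurs because `open()` on Windows defaults to the legacy locale code page (e.g. cp1252), which cannot encode characters such as the Greek chi. A minimal sketch of the usual fix, assuming the report file is opened via the built-in `open` (illustrative snippet, not NIFTy's actual `_report_to_logger_and_file` code):

```python
# Sketch: on Windows, open() without an explicit encoding uses the locale
# code page (e.g. cp1252), which cannot represent U+03C7 (chi) and raises
# UnicodeEncodeError. Passing encoding="utf-8" avoids this.
report_line = "reduced \u03c7^2 = 1.02"  # contains the chi character

with open("report.txt", "w", encoding="utf-8") as f:  # explicit UTF-8
    f.write(report_line)

with open("report.txt", "r", encoding="utf-8") as f:
    print(f.read())
```

The same `encoding="utf-8"` argument would have to be passed everywhere the report file is opened, for both writing and reading.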
Below is the error raised when executing my own file, for further clarification:
![charmap_error_for_chi](/uploads/6cf13fcf7770e6fbc44527e198b0f227/charmap_error_for_chi.png)

Issue #362: Optimize KL Bug [initial pos=None, dry_run=True, transition] (Vincent Eberle, 2023-03-06)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/362

# Initial Position, dry_run, transition
The mean is set according to the initial position.
If `initial_position` is `None`, it is first generated as something living on an empty domain. @pfrank, do you know why? As far as I understand, it is empty because it will later be populated by `_normal_initialize`:
```python
if initial_position is None:
    mean = full(makeDomain({}), 0.)
else:
    mean = initial_position
del initial_position
```
This mean is used to generate a `single_value_sample_list`, which is then also EMPTY:
`sl = _single_value_sample_list(mean, comm=comm(initial_index))`
Then the mean is set to the mean of the previous iteration, or to `_transition(sl)`. The mean keys missing in the initial multifield (here: all of them) get populated by standard Gaussian distributed variables (0.1 std):
```python
t = transitions(iglobal)
mean = mean if t is None else t(sl)
mean = _normal_initialize(mean, lh.domain)
```
Later, in the minimization, the `sample_list` gets overwritten.
## Now the Problem:
If `dry_run==True` is used:
- Sampling is skipped
- Minimization is skipped
- overwriting the `sample_list` is skipped
This means we keep the old empty sample_list.
So the problem occurs as soon as
- `initial_position == None` (an empty sample list is created),
- `dry_run == True` (which means the `sample_list` stays empty),
- and any transition that tries to do something with the sample is called (e.g. accessing a key of the dictionary/MultiField).
ADDITION:
It also fails if the transition starts at global iteration 0 and no initial position is given.
You get an error (missing key).
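The missing-key failure mode can be illustrated independently of NIFTy, with a plain dict standing in for the MultiField (all names here are purely illustrative, not NIFTy API):

```python
# Sketch: a dict stands in for the (Multi)Field holding the latent variables.
# With dry_run=True the sample list is never populated, so a transition that
# accesses a key fails, analogous to the missing-key error described above.
import numpy as np

empty_sample = {}  # what the transition sees when the sample list stays empty

def transition(sample):
    # hypothetical transition that reads one latent component
    return {"xi": sample["xi"] * 2.0}

try:
    transition(empty_sample)
except KeyError as e:
    print("transition failed:", e)  # KeyError: 'xi'

# After a _normal_initialize-like step, the missing keys exist and it works:
rng = np.random.default_rng(0)
initialized = {"xi": rng.standard_normal(3)}
print(transition(initialized)["xi"].shape)
```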
My suggestion is to call the function `_normal_initialize` right after setting the mean on the empty domain (or to not do this at all), so that the sample is not empty.
Something like this:
```python
if initial_position is None:
    mean = full(makeDomain({}), 0.)
    mean = _normal_initialize(mean, likelihood_energy(iglobal=0).domain)
else:
    mean = initial_position
del initial_position
sl = _single_value_sample_list(mean, comm=comm(initial_index))
```
On the other hand, this can easily be worked around by the user by setting the initial position.
But we should address this in the near future. (I can do it next week, but let me know what you think.)
@pfrank @gedenhof @jroth @mtr
(Assignee: Philipp Frank)

Issue #361: Failing docs pipeline (Philipp Frank, 2023-02-02)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/361

Recently, the docs stage of the pipeline has been failing due to some ownership issues regarding the docs folder (see https://gitlab.mpcdf.mpg.de/ift/nifty/-/jobs/2023129#L497).
I am not entirely sure what is going on here; I suspect it may be related to using a cached Docker image.
@veberle, @gedenhof: Do you have any thoughts on this?

Issue #360: Trying to fix the Hartley convention in ducc0 (Martin Reinecke, 2023-02-09)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/360

I have a bit of a philosophical problem and would be happy about your opinion:
NIFTy makes heavy use of `ducc0`'s Hartley transform. Unfortunately, this implementation of the transform has the bug (or convention issue) that it computes
`Hartley(x) = FFT(x).real + FFT(x).imag`
instead of the canonical
`Hartley(x) = FFT(x).real - FFT(x).imag`
which is mentioned, e.g., at https://en.wikipedia.org/wiki/Hartley_transform.
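For illustration, both conventions can be written down with NumPy's FFT (a sketch; `ducc0` computes its transform natively, this only mirrors the two formulas above):

```python
import numpy as np

def hartley_canonical(x):
    # canonical convention: Re(FFT) - Im(FFT)
    f = np.fft.fft(x)
    return f.real - f.imag

def hartley_ducc0_style(x):
    # ducc0's current convention: Re(FFT) + Im(FFT)
    f = np.fft.fft(x)
    return f.real + f.imag

x = np.random.default_rng(42).standard_normal(16)
n = x.size

# Both conventions are involutions up to a factor of N ...
assert np.allclose(hartley_canonical(hartley_canonical(x)) / n, x)
assert np.allclose(hartley_ducc0_style(hartley_ducc0_style(x)) / n, x)

# ... and for real input they differ only by a frequency reversal
# (k -> -k mod N), which is why most "normal" uses keep working
# under either convention:
hc, hd = hartley_canonical(x), hartley_ducc0_style(x)
assert np.allclose(hc, np.roll(hd[::-1], 1))
```

The frequency-reversal relation holds because, for real input, the real part of the FFT is even in k and the imaginary part is odd.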
I would like to fix that at some point, but this could lead to subtle breakage in Nifty (e.g. if you store a Hartley-transformed field with a version of Nifty that uses an old `ducc` version and load it again with a Nifty that uses a newer one). Practically all "normal" uses of the Hartley transform will continue to work.
Do you have suggestions on how to deal with this?
Alternatives are:
- just change the behavior, calling it a bug fix (which it technically is), and deal with potential breakage as it appears (it should be minimal)
- change the behavior and rename `ducc0` to `ducc1`; this is probably the cleanest approach, but maybe too much trouble for this change
- add new function names for the correct Hartley transforms and keep the old ones around for some time
- anything else?
@pfrank, @gedenhof, @jroth, @veberle

Issue #359: ambiguous wording [save_strategy: "last"] (Vincent Eberle, 2023-02-02)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/359

# Suggestions for rewording
As this docstring can easily be misunderstood, I'd propose to change "last" into "latest".
Otherwise one could think that the samples are only saved at the very end.
```python
save_strategy : str
    If "last", only the samples of the last global iteration are stored. If
    "all", all intermediate samples are written to disk. `save_strategy` is
    only applicable if `output_directory` is not None. Default: "last".
```
@gedenhof, @jroth, @pfrank what do you think?
(Assignee: Vincent Eberle)

Issue #350: Optimize_kl: if save_strategy="last" and the number of samples decreases, the superfluous old samples are not deleted (Philipp Arras, 2022-02-25)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/350
(Assignee: Philipp Arras)

Issue #343: `Variational inference visualized` is too slow (Philipp Arras, 2021-12-08)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/343

Running the file https://gitlab.mpcdf.mpg.de/ift/nifty/-/blob/NIFTy_8/demos/variational_inference_visualized.py is so slow that it is not very instructive anymore.

Issue #342: Try to replace FITS I/O where possible (Martin Reinecke, 2021-12-06)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/342

Just a general remark for future I/O changes: please let's stop using FITS (and maybe even replace the FITS format in places where we don't need to be interoperable with anything else). This format really is incredibly old and dusty.
Anything with better metadata support should be fine, probably HDF5 (which we depend on anyway).

Issue #338: [New feature] Add tensorflow operator (Philipp Arras, 2021-10-29)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/338

See https://gitlab.mpcdf.mpg.de/ift/deepreasoning/-/blob/master/operators/tensorflow_operator.py

Issue #328: What is the correct gradient at zero for `pointwise.unit_step`? (Lukas Platz, 2021-08-16)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/328

I just noticed that the gradient of the pointwise function `unit_step` [is set to be always zero](https://gitlab.mpcdf.mpg.de/ift/nifty/-/blob/4772fce8b032135e39702ffb4b52b33f5deefe20/src/pointwise.py#L91).
Is that correct, or should we return `np.nan` if the input is exactly zero, like for example `sign` [does](https://gitlab.mpcdf.mpg.de/ift/nifty/-/blob/4772fce8b032135e39702ffb4b52b33f5deefe20/src/pointwise.py#L67)?

Issue #327: Improvement: Simplify for const could be more potent (Philipp Arras, 2021-05-30)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/327

The following script demonstrates that the `simplify_for_const` machinery is not able to efficiently deal with partially constant `MultiField`s.
```python
import nifty7 as ift
import numpy as np
dom = ift.RGSpace(10)
e = ift.VariableCovarianceGaussianEnergy(dom, "a", "b", np.float64)
fa = ift.FieldAdapter(dom, "a")
fb = ift.FieldAdapter(dom, "b")
e1 = e(fa.adjoint@fa + fb.adjoint@fb)
print(e1)
inp = ift.from_random(fa.domain)
print(e1.simplify_for_constant_input(inp))
```

Issue #313: Better adjointness tests (Martin Reinecke, 2020-11-18)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/313

For the gridder paper, @parras and I have been thinking about better criteria for measuring the adjointness of two operators.
In Nifty, we test the adjointness of the operators `Op1` and `Op2` by drawing random fields `a` (living on `Op1.target`) and `b` (living on `Op1.domain`) and testing whether `abs(vdot(a, Op1(b)) - vdot(Op2(a), b))` is "small". But we don't really have a good definition of what "small" means.
Our idea was to compare this expression to `min(|a|*|Op1(b)|, |Op2(a)|*|b|)`. So our reference value is basically the smaller (to be pessimistic) of the two dot products, but with their cosine terms removed, so that we do not run into problems when, for example, `a` is orthogonal to `Op1(b)`.
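A sketch of this criterion with NumPy, using a dense matrix and its transpose as stand-ins for `Op1` and `Op2` (illustrative only, not NIFTy's actual test code):

```python
import numpy as np

def adjointness_error(op1, op2, a, b):
    # |<a, Op1 b> - <Op2 a, b>| measured relative to the pessimistic
    # reference min(|a|*|Op1 b|, |Op2 a|*|b|): the two dot products with
    # their cosine terms removed.
    lhs = np.vdot(a, op1 @ b)
    rhs = np.vdot(op2 @ a, b)
    ref = min(np.linalg.norm(a) * np.linalg.norm(op1 @ b),
              np.linalg.norm(op2 @ a) * np.linalg.norm(b))
    return abs(lhs - rhs) / ref

rng = np.random.default_rng(0)
op1 = rng.standard_normal((7, 5))
a = rng.standard_normal(7)
b = rng.standard_normal(5)

op2_good = op1.T                          # exact adjoint (real case)
op2_bad = op1.T + 1e-2 * np.outer(b, a)   # deliberately broken adjoint

print(adjointness_error(op1, op2_good, a, b))  # close to machine precision
print(adjointness_error(op1, op2_bad, a, b))   # clearly larger
```

With this normalization, a correct adjoint pair yields an error near machine epsilon regardless of the angle between the test vectors.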
@reimar, @pfrank, do you think this makes sense?

Issue #291: Problematic MultiField methods (Martin Reinecke, 2021-03-24)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/291

Currently, `MultiField` has methods like `s_sum`, `clip`, and many transcendental functions. I'm wondering whether they make any sense: what's the point of computing the sum over all values in several fields ... or computing the sine of all field entries?
We currently use `MultiField`'s `norm` property in minimization, but given that the components of the `MultiField` may have vastly different scales, is this actually a clever thing to do? This basically means that we stop minimizing once the field component with the largest values has converged ... not necessarily what we want.
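The scale problem can be demonstrated with plain NumPy arrays standing in for the MultiField components (a sketch, not NIFTy code):

```python
import numpy as np

# Two components with vastly different scales, as in a typical MultiField.
big = np.full(10, 1.0e6)      # e.g. an amplitude-like component
small = np.full(10, 1.0e-3)   # e.g. a nuisance component

def joint_norm(components):
    # MultiField-style norm: sqrt of the summed squared norms of all components
    return np.sqrt(sum(np.linalg.norm(c)**2 for c in components))

n0 = joint_norm([big, small])
# Changing the small component by 100% barely moves the joint norm, so a
# norm-based convergence criterion would declare convergence even though the
# small component is still far from its optimum:
n1 = joint_norm([big, 2.0 * small])
print((n1 - n0) / n0)  # essentially zero
```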
@parras, @pfrank, @reimar, @kjako

Issue #290: Data types for the Grand Unification (Martin Reinecke, 2020-04-08)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/290

```
Space:
    structured or unstructured set of points (current NIFTy's "Domain")

SpaceTuple:
    external product of zero or more Spaces (current NIFTy's "DomainTuple")
    SpaceTuple = (Space, Space, ...)

SpaceTupleDict:
    dictionary (with string keys) of one or more SpaceTuples
    SpaceTupleDict = {name1: SpaceTuple1, name2: SpaceTuple2, ...}
    (current NIFTy's "MultiDomain")

Domain:
    This is an abstract concept. Can currently be represented by SpaceTuple
    or SpaceTupleDict.

Special domains:
    ScalarDomain: empty SpaceTuple

Operator:
- Operators take an Operand defined on an input domain, transform it in some way
  and return a new Operand defined on a (possibly different) target domain.
- Operators can be concatenated, as long as the domains at the interface are
  identical. The result is another Operator.

Operand:
- Operand objects represent fields and potentially their Jacobians and metrics.
- An Operand object can be asked for its "value" and (if configured accordingly)
  its Jacobian. If the target domain of an Operand object is scalar, a metric
  may also be available.
- Applying an Operator to an Operand object will always return another Operand object.

class Operator(object):
    @property
    def domain(self):
        # return input domain

    @property
    def target(self):
        # return output domain

    def __call__(self, other):
        # if isinstance(other, Operand), return an Operand object,
        # else return an Operator object

    def __matmul__(self, other):
        return self(other)


class LinearOperator(Operator):
    # more or less analogous to the current LinearOperator


class Operand(object):
    # this unifies current NIFTy's Field, MultiField, and Linearization classes

    @property
    def domain(self):
        # if no Jacobian is present, return None, else the Jacobian's domain

    @property
    def target(self):
        # return the domain on which the value of the Operand is defined; this is
        # also the Jacobian's target (if a Jacobian is defined)

    @property
    def val(self):
        # return a low-level data structure holding the actual values (currently
        # numpy.ndarray or a dictionary of numpy.ndarrays). Read-only.

    def val_rw(self):
        # return a writeable copy of `val`

    def fld(self):
        # return an Operand that only contains the value content of this object.
        # Its Jacobian and potential higher derivatives will be `None`.

    @property
    def jac(self):
        # return a Jacobian LinearOperator if possible, else None

    @property
    def want_metric(self):
        # return True or False

    @property
    def metric(self):
        # if self.jac is None, raise an exception
        # if self.target is not ScalarDomain, raise an exception
        # if not self.want_metric, raise an exception
        # if the metric cannot be computed, raise an exception
        # return metric
```
Issue #288: Make convergence tests less fragile (Martin Reinecke, 2020-03-22)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/288

The switch to `numpy`'s new RNG interface has shown that some of our convergence and consistency tests are not very robust: in principle these tests should succeed for any random seed we use during the problem setup, but this is apparently not the case. We should have a closer look at the problematic tests and fix them accordingly.

Issue #286: NIFTy grand unification: unify MultiFields and Fields (Martin Reinecke, 2020-04-07)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/286

- all new fields have the internal structure of a MultiField
- a classic "standard" field is represented by a new field with a single key
  that is the empty string
- Many of our operators work on part of a DomainTuple (e.g. FFTOperator).
  Typically this is specified by passing the domain and additionally a "spaces"
  argument, which is None, an int, or a tuple of ints.
  Since in the future every domain is a "multi-domain", this is no longer
  sufficient: the partial domain must now contain an additional string defining
  the name of the required field component. This requires an update
  (and renaming) of "parse_spaces", "infer_space" etc.
  Maybe it's good to introduce a new "PartialDomain" class which contains
  * a string containing the desired field component, and
  * an integer tuple containing the desired subspaces of that component
- "MultiField" will be renamed to "Field"; "Field" will probably be renamed
  to some internal helper class or completely implemented within the new "Field".
- "MultiDomain" will be renamed to ???; "DomainTuple" will probably become
  "_DomainTuple", i.e. it should not be directly accessed by external users.
- "makeField" and "makeDomain" become static "make" members of "Field" and
  "Domain".
(Assignee: Martin Reinecke)

Issue #276: testing differentiability for complex operators (Reimar H Leike, 2021-03-08)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/276

The routine `extra.check_jacobian_consistency` only checks derivatives in the real direction. There are two other interesting cases: differentiability in the real and imaginary directions, and complex differentiability (which is the former with the additional requirement that df/d(Im) = i*df/d(Re)).
(Assignee: Reimar H Leike)

Issue #274: Restructure DOFDistributor (Jakob Knollmueller, 2019-10-17)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/274

Hi,
I just want to document the thought of changing the DOFDistributor to a BinDistributor and building it analogously to the adjoint of numpy's `bincount` function. This should make things clearer and allow for operations on fields over any one-dimensional domain. We could get rid of the DOFSpace and change the default to an UnstructuredDomain.
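The duality mentioned here can be sketched with NumPy (illustrative; the function names are hypothetical, not the proposed NIFTy API): distributing one value per bin onto points, and `np.bincount` with weights, form an adjoint pair of linear operations.

```python
import numpy as np

bins = np.array([0, 2, 2, 1, 0])  # bin index for each data point

def distribute(x, bins):
    # "DOFDistributor"-style forward: broadcast one value per bin onto points
    return x[bins]

def distribute_adjoint(y, bins, nbins):
    # adjoint of the above: sum point values back into their bins
    return np.bincount(bins, weights=y, minlength=nbins)

rng = np.random.default_rng(1)
x = rng.standard_normal(3)  # one value per bin
y = rng.standard_normal(5)  # one value per point

# adjointness: <y, distribute(x)> == <distribute_adjoint(y), x>
lhs = np.vdot(y, distribute(x, bins))
rhs = np.vdot(distribute_adjoint(y, bins, 3), x)
assert np.allclose(lhs, rhs)
```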
Jakob

Issue #258: Branch cleanup (Martin Reinecke, 2019-05-21)
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/258

I'd like to get a better understanding of which branches are still used and which ones can be deleted:
- new_los (I'll adjust the code for NIFTy 5)
- new_sampling (@reimar is this still relevant?)
- symbolicDifferentiation (@parras do you want to keep this?)
- yango_minimizer (@reimar ?)
- addUnits (@parras ?)
- theo_master (I guess I'll convert this into a tag)
- nifty2go (is anyone still using this? Otherwise I'd convert it into a tag as well)
@ensslint, any comments?
(Assignee: Martin Reinecke)