# NIFTy issues (https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues)

## Issue #294: Potential confusion in the direction of FFTs (Martin Reinecke, 2020-05-13)

In the current implementation of FFTOperator (https://gitlab.mpcdf.mpg.de/ift/nifty/-/blob/NIFTy_6/nifty6/operators/harmonic_operators.py#L76) we have the code:
```python
if x.domain[self._space].harmonic:  # harmonic -> position
    func = fft.fftn
    fct = 1.
else:
    func = fft.ifftn
    fct = ncells
```
where `x` is the input field.
This looks exactly the wrong way around.
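For reference, under the usual signal-processing convention `fft.fftn` (negative-exponent kernel) is the forward transform, i.e. position → harmonic, and `ifftn` maps back, which is why the branch above looks inverted. A quick, purely illustrative sanity check of the NumPy convention:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(8)          # a field in position space

k = np.fft.fftn(x)                  # forward: position -> harmonic
x_back = np.fft.ifftn(k)            # inverse: harmonic -> position

assert np.allclose(x, x_back.real)  # the round trip recovers the field
```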
@parras, @pfrank, @kjako, @reimar, @veberle, what do you think?
Switching the FFT direction breaks almost none of our tests (only one, which already has a FIXME :).

## Issue #296: Releasing NIFTy 6? (Martin Reinecke, 2020-05-22)

I have the impression that we have enough new features to justify a NIFTy 6 release.
Does anyone have feature suggestions which should go into the code before the release?

## Issue #359: ambiguous wording [save_strategy: "last"] (Vincent Eberle, 2023-02-02)

### Suggestions for rewording
As this docstring can easily be misunderstood, I'd propose to change "last" to "latest".
Otherwise one could think that the samples are only saved at the very end.
```python
save_strategy : str
    If "last", only the samples of the last global iteration are stored. If
    "all", all intermediate samples are written to disk. `save_strategy` is
    only applicable if `output_directory` is not None. Default: "last".
```
@gedenhof, @jroth, @pfrank, what do you think?

## Issue #283: Reduce licensing boilerplate (Gordian Edenhofer, 2020-05-13)

If we really feel the need to put licensing boilerplate in every source file, let's at least make it concise, e.g. by using https://spdx.org/ids.

## Issue #397: NIFTy re demo (Jakob Roth, 2023-12-08)

The nifty.re demo now sets point estimates to showcase this feature: https://gitlab.mpcdf.mpg.de/ift/nifty/-/blob/NIFTy_8/demos/nifty_re.py?ref_type=heads#L202
As the first new nifty.re users now copy this over to their own scripts without thinking about what it implies, I am wondering whether we should remove it from the demo. @pfrank, @gedenhof, what do you think? Alternatively, we could also add a comment that in many applications one probably doesn't want to set point estimates, or at least would need to adapt the keys to the domain of the model.

## Issue #392: Testing of re.optimize_kl (Philipp Frank, 2023-11-23)

Currently the `re.optimize_kl` and `re.OptimizeVI` features remain largely untested. Only the demos ensure that `optimize_kl` runs through as intended for one specific configuration. We should at least add tests verifying that the different configurations produce the intended outcomes and that the update rules behave as expected.

## Issue #380: RFC: Make offset a tuple of mean and std. in CF (Gordian Edenhofer, 2023-11-17)

WDYT about making the `offset` parameter of the CFM a tuple of a mean and a standard deviation, akin to all other parameters of the CFM? @pfrank @jroth @veberle @matteani
I imagine the implementation would look something like `A[0] = zm_std; cf += zm_mean`, with `A` the amplitude/power spectrum and `cf` the correlated field realization, i.e. we would learn the zero-mode via the zeroth excitation. I am well aware that this is already feasible right now by specifying a custom zero-mode operator. My proposal is to make the zero-mode a scalar by default, without custom code.

## Issue #379: Sunset jax_expr in numpy NIFTy (Gordian Edenhofer, 2023-11-29)

I think it is fair to say that `Operator.jax_expr` never got much traction. Furthermore, it is not the kind of model building that we want to encourage. @all WDYT about removing `jax_expr`s from NIFTy operators again? It was a nice idea when I started developing NIFTy.re, but it turned out to be an ineffective approach.

## Issue #362: Optimize KL Bug [initial pos=None, dry_run=True, transition] (Vincent Eberle, 2023-03-06)

### Initial Position, dry_run, transition
The mean is set according to the initial position.
If `initial_position == None`, it is first generated as something living on an empty domain. @pfrank, do you know why? As far as I understand, it is empty because it will later be populated by `_normal_initialize`:
```python
if initial_position is None:
    mean = full(makeDomain({}), 0.)
else:
    mean = initial_position
del initial_position
```
This mean is used to generate a `single_value_sample_list`, which is then also empty:
`sl = _single_value_sample_list(mean, comm=comm(initial_index))`
Then the mean is set to the mean of the previous iteration or to `_transition(sl)`. The mean keys missing in the initial multifield (so here, all of them) get populated by standard Gaussian-distributed variables (0.1 std):
```python
t = transitions(iglobal)
mean = mean if t is None else t(sl)
mean = _normal_initialize(mean, lh.domain)
```
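To illustrate the key-filling step described above, here is a toy sketch using plain dicts (hypothetical names; the real `_normal_initialize` operates on NIFTy MultiFields and domains, not dicts):

```python
import numpy as np

def normal_initialize_sketch(mean, domain_shapes, std=0.1, seed=42):
    """Fill keys of `domain_shapes` that are missing from `mean`
    with zero-mean Gaussian samples of standard deviation `std`."""
    rng = np.random.default_rng(seed)
    out = dict(mean)
    for key, shape in domain_shapes.items():
        if key not in out:
            out[key] = std * rng.standard_normal(shape)
    return out

# Starting from an empty "multifield" (the initial_position=None case),
# every key of the likelihood domain gets populated:
filled = normal_initialize_sketch({}, {"xi": (4,), "zeromode": ()})
assert set(filled) == {"xi", "zeromode"}
```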
Later in the minimization the sample_list gets overwritten.
### Now the Problem
If `dry_run==True` is used:
- Sampling is skipped
- Minimization is skipped
- Overwriting the sample_list is skipped
This means we keep the old empty sample_list.
So the error occurs as soon as we have
- `initial_pos == None` (an empty sample list is created),
- `dry_run == True` (which means that the sample_list stays empty), and
- any transition that tries to do something with the sample (e.g. accessing a key of the dictionary/MultiField).
ADDITION:
It also fails if the transition starts at global iteration 0 and no initial position is given:
you get an error (missing key).
My suggestion is to use the function `_normal_initialize` right after setting the mean to an empty domain (or to not do this at all), so that the sample is not empty.
Something like this:
```python
if initial_position is None:
    mean = full(makeDomain({}), 0.)
    mean = _normal_initialize(mean, likelihood_energy(iglobal=0).domain)
else:
    mean = initial_position
del initial_position

sl = _single_value_sample_list(mean, comm=comm(initial_index))
```
On the other hand, this can easily be fixed by the user by setting the initial_pos.
But we should address it in the near future. (I can do it next week, but let me know what you think.)
@pfrank @gedenhof @jroth @mtr

## Issue #358: Mean / Variance confusion in save_fits function of optimize_KL (Vincent Eberle, 2023-01-13)

While finishing some plots I was surprised by a very high standard deviation and wanted to check whether this could be true.
Then I found this:
```python
if mean or std:
    m, s = self.sample_stat(op)
    if mean:
        self._save_fits_2d(m, file_name_base + "_mean.fits", overwrite)
    if std:
        self._save_fits_2d(s, file_name_base + "_std.fits", overwrite)
```
here: https://gitlab.mpcdf.mpg.de/ift/nifty/-/blob/NIFTy_8/src/minimization/sample_list.py#L224
So in the `if std:` branch we save something called a standard deviation.
But sample_stat returns the mean and the variance:
```python
def sample_stat(self, op=None):
    """Compute mean and variance of samples after applying `op`.

    Parameters
    ----------
    op : callable or None
        Callable that is applied to each item in the :class:`SampleListBase`
        before it is used to compute mean and variance.

    Returns
    -------
    tuple
        A tuple with two items: the mean and the variance.
    """
    from ..probing import StatCalculator
    if self.n_samples == 1:
        res = self.average(op)
        return res, 0*res
    sc = StatCalculator()
    for ss in self.iterator(op):
        sc.add(ss)
    return sc.mean, sc.var
```
here: https://gitlab.mpcdf.mpg.de/ift/nifty/-/blob/NIFTy_8/src/minimization/sample_list.py#L348
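The mismatch can be seen with a toy stand-in for `sample_stat` (a hypothetical helper, not the actual NIFTy code): the second return value is a variance, so saving it under a `_std` name requires a square root first.

```python
import numpy as np

def sample_stat_sketch(samples):
    """Toy analogue of sample_stat: returns (mean, variance) of the samples.
    ddof=1 (unbiased estimate) is an illustrative choice here."""
    samples = np.asarray(samples)
    return samples.mean(axis=0), samples.var(axis=0, ddof=1)

m, v = sample_stat_sketch([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
s = np.sqrt(v)  # <- the conversion the save_fits path is missing

assert np.allclose(m, [3.0, 4.0])
assert np.allclose(v, [4.0, 4.0])  # variance, not std
assert np.allclose(s, [2.0, 2.0])  # the actual standard deviation
```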
So I think we should take the square root here... @gedenhof @pfrank
I also address @parras, since he worked a lot on the whole optimize-kl and sample-list parts of NIFTy.

## Issue #299: Add possibility to log energies (Philipp Arras, 2020-05-19)

I have implemented a method to log energies inside an IterationController such that they can be analyzed afterwards. @mtr, how do you like this implementation? Do you think we could/should make this part of NIFTy? Or do you have other ideas on how to approach the problem?
https://gitlab.mpcdf.mpg.de/ift/papers/rickandresolve/-/commit/8b0e15dc4a533dfba17f29361345309e336dfb8a
(Since this is part of an unpublished project, this link is only visible for ift group members. Sorry!)
@veberle @mtr

## Issue #290: Data types for the Grand Unification (Martin Reinecke, 2020-04-08)

```
Space:
  structured or unstructured set of points (current NIFTy's "Domain")

SpaceTuple:
  external product of zero or more Spaces (current NIFTy's "DomainTuple")
  SpaceTuple = (Space, Space, ...)

SpaceTupleDict:
  dictionary (with string keys) of one or more SpaceTuples
  SpaceTupleDict = {name1: SpaceTuple1, name2: SpaceTuple2, ...}
  (current NIFTy's "MultiDomain")

Domain:
  This is an abstract concept. Can currently be represented by SpaceTuple or SpaceTupleDict.

Special domains:
  ScalarDomain: empty SpaceTuple

Operator:
- Operators take an Operand defined on an input domain, transform it in some way
  and return a new Operand defined on a (possibly different) target domain.
- Operators can be concatenated, as long as the domains at the interface are
  identical. The result is another Operator.

Operand:
- Operand objects represent fields and potentially their Jacobians and metrics.
- An Operand object can be asked for its "value" and (if configured accordingly)
  its Jacobian. If the target domain of an Operand object is scalar, a metric
  may also be available.
- Applying an Operator to an Operand object will always return another Operand object.

class Operator(object):
    @property
    def domain(self):
        # return input domain

    @property
    def target(self):
        # return output domain

    def __call__(self, other):
        # if isinstance(other, Operand), return an Operand object
        # else return an Operator object

    def __matmul__(self, other):
        return self(other)

class LinearOperator(Operator):
    # more or less analogous to the current LinearOperator

class Operand(object):
    # this unifies current NIFTy's Field, MultiField, and Linearization classes

    @property
    def domain(self):
        # if no Jacobian is present, return None, else the Jacobian's domain

    @property
    def target(self):
        # return the domain on which the value of the Operand is defined. This is
        # also the Jacobian's target (if a Jacobian is defined)

    @property
    def val(self):
        # return a low-level data structure holding the actual values (currently
        # numpy.ndarray or dictionary of numpy.ndarrays). Read-only.

    def val_rw(self):
        # return a writeable copy of `val`

    def fld(self):
        # return an Operand that only contains the value content of this object.
        # Its Jacobian and potential higher derivatives will be `None`.

    @property
    def jac(self):
        # return a Jacobian LinearOperator if possible, else None

    @property
    def want_metric(self):
        # return True or False

    @property
    def metric(self):
        # if self.jac is None, raise an exception
        # if self.target is not ScalarDomain, raise an exception
        # if not self.want_metric, raise an exception
        # if metric cannot be computed, raise an exception
        # return metric
```
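A runnable toy illustration of the Operator/Operand call semantics sketched above (hypothetical minimal classes; domains are reduced to plain strings):

```python
class Operand:
    """Represents a field value defined on a target domain."""
    def __init__(self, val, target):
        self.val, self.target = val, target


class Operator:
    """Maps an Operand on `domain` to an Operand on `target`."""
    def __init__(self, fn, domain, target):
        self.fn, self.domain, self.target = fn, domain, target

    def __call__(self, other):
        if isinstance(other, Operand):
            assert other.target == self.domain, "interface domains must match"
            return Operand(self.fn(other.val), self.target)
        # applying to another Operator composes them into a new Operator
        assert other.target == self.domain, "interface domains must match"
        return Operator(lambda v: self.fn(other.fn(v)), other.domain, self.target)

    __matmul__ = __call__


double = Operator(lambda v: 2 * v, "space_a", "space_a")
shift = Operator(lambda v: v + 1, "space_a", "space_a")
combined = double @ shift                 # Operator @ Operator -> Operator
res = combined(Operand(3, "space_a"))     # Operator(Operand) -> Operand
assert res.val == 8                       # 2 * (3 + 1)
```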
## Issue #278: Getting rid of standard parallelization? (Martin Reinecke, 2019-12-09)

The default way of parallelizing large tasks with MPI is currently to distribute fields over MPI tasks along their first axis. While it appears to work (our tests run fine with MPI), it has not been used during the last few years, and other parallelization approaches seem more promising.
If there is general agreement, I propose to remove this parallelization from the code, which would make NIFTy much smaller and easier to use and maintain.

## Issue #250: [post Nifty6] new design for the `Field` class? (Martin Reinecke, 2019-12-02)

I'm wondering whether we can solve most of our current problems regarding fields and multifields by introducing a single new `Field` class that
- unifies the current `Field` and `MultiField` classes (a classic field would have a single dictionary entry with an empty string as name)
- is immutable
- can contain scalars or array-like objects to describe field content (maybe even functions?)
If we have such a class we can again demand that all operations on fields require identical domains, and we can still avoid writing huge zero-filled arrays by writing scalar zeros instead.
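A minimal sketch of what such a unified, immutable class could look like (hypothetical names; plain Python dicts stand in for real domains):

```python
class UnifiedField:
    """Immutable field: a 'classic' field is one entry under the key ''."""
    def __init__(self, data):
        self._data = dict(data)  # private copy; no mutation API is exposed

    @classmethod
    def scalar_zero(cls, keys):
        # a scalar zero per key avoids allocating huge zero-filled arrays
        return cls({k: 0.0 for k in keys})

    def __add__(self, other):
        if self._data.keys() != other._data.keys():
            raise ValueError("operations on fields require identical domains")
        return UnifiedField({k: self._data[k] + other._data[k] for k in self._data})

    def __getitem__(self, key):
        return self._data[key]


a = UnifiedField({"": 5.0})          # classic field: single empty-string entry
z = UnifiedField.scalar_zero([""])   # cheap zero field on the same domain
assert (a + z)[""] == 5.0
```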
@reimar, @kjako, @parras, please let me know what you think.

## Issue #405: Memory leak in re.optimize_kl (Simon Straehnz, 2024-03-22)

I recently tried to do some batch reconstructions again, and apparently there is still a memory leak in `re.optimize_kl` when running multiple reconstructions in a loop. I have attached a modified version of the demo `0_intro.py` that shows the problem. The leak in this example amounts to ca. 233 MiB per iteration, even though dof * samples * 8 bytes (float64) = 613 KiB.
Edit: it actually is "only" 133 MiB, see below
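A generic way to confirm per-iteration growth like this is `tracemalloc` (a hedged sketch; `run_reconstruction` below is a deliberately leaky stand-in for one `re.optimize_kl` call, not the attached script):

```python
import gc
import tracemalloc

def run_reconstruction(cache=[]):
    # stand-in for one optimize_kl call; the mutable-default cache
    # mimics state that survives each loop iteration
    cache.append(bytearray(10**5))
    return len(cache)

tracemalloc.start()
gc.collect()
before, _ = tracemalloc.get_traced_memory()
for _ in range(3):
    run_reconstruction()
    gc.collect()
after, _ = tracemalloc.get_traced_memory()
print(f"retained ~{(after - before) / 1e6:.1f} MB over 3 iterations")
```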
[0_intro_memtest.py](/uploads/3f6802f83e3ed20e3605c6a8604c2130/0_intro_memtest.py)

## Issue #404: Gauss-Markov on GPU (Philipp Haim, 2024-03-05)

The loop in the general GM process introduced in [this](%) commit leads to significant performance degradation of the correlated field on the GPU. Changing the loop to a `cumsum` (at least for the correlated field) should fix this issue.

## Issue #403: NIFTy operator in NIFTy.re (Jakob Roth, 2024-02-27)

In classic NIFTy we have the `JaxOperator` interfacing classic NIFTy with JAX. I don't think it was extensively used, but it was certainly handy for some projects.
The new version of jax_linop (https://github.com/NIFTy-PPL/jax_linop) now also supports binding nonlinear functions to JAX. Thus we could also build the reverse binding: a `NiftyOperator` which takes a classic NIFTy operator and returns a JAX primitive. Of course, this operator won't be as performant as a native JAX implementation, but I think it would still be very handy, especially for projects transitioning their code base to JAX, since with such an operator it would be possible to gradually move to JAX while keeping some legacy NIFTy code in the beginning.
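The wrapping direction can be sketched with `jax.pure_callback` (purely illustrative; jax_linop's actual API differs, and this sketch provides no derivatives, which is exactly what jax_linop would add):

```python
import jax
import jax.numpy as jnp
import numpy as np

def nifty_to_jax(op_apply, out_shape, out_dtype=np.float32):
    """Wrap a numpy-level 'legacy operator' as a function callable from JAX."""
    spec = jax.ShapeDtypeStruct(out_shape, out_dtype)
    def wrapped(x):
        return jax.pure_callback(
            lambda v: np.asarray(op_apply(np.asarray(v)), dtype=out_dtype),
            spec, x)
    return wrapped

legacy_op = lambda v: np.cumsum(v)   # toy stand-in for a classic NIFTy operator
jax_op = nifty_to_jax(legacy_op, (4,))
out = jax.jit(jax_op)(jnp.arange(4.0))  # usable inside jitted JAX code
assert np.allclose(out, [0.0, 1.0, 3.0, 6.0])
```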
Once jax_linop is stable, I could implement such an operator. @pfrank @gedenhof, what do you think?

## Issue #402: Non-jitted model evaluation in optimize_kl (Philipp Haim, 2024-02-18)

I have a use case where I want to evaluate part of my model on a GPU, while having the initial and final parts of the model on the CPU. To my understanding, such a function cannot be jitted (at the moment). However, the conjugate gradient implementation both explicitly and implicitly (through the call of jax.lax.while_loop) jits the model evaluation. The model is then executed fully on the CPU. Trying to avoid this compilation by removing [this jit-call in conjugate_gradient](https://gitlab.mpcdf.mpg.de/ift/nifty/-/blob/NIFTy_8/src/re/conjugate_gradient.py?ref_type=heads#L242) leads to an error in the following `while_loop` [here](https://gitlab.mpcdf.mpg.de/ift/nifty/-/blob/NIFTy_8/src/re/conjugate_gradient.py?ref_type=heads#L382) due to some of the arrays being on different devices.
As a sanity check I have tested `jax.linearize` and `jax.linear_transpose` on the model, which work without any (apparent) issues. So I have hopes that `optimize_kl` could work with such a model if all jit compilations of the full model call were avoided.
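That sanity check can be reproduced on a toy model (illustrative only, not the actual mixed-device model in question):

```python
import jax
import jax.numpy as jnp

def model(x):
    return jnp.sum(jnp.sin(x) ** 2)

x0 = jnp.arange(3.0)
y0, jvp = jax.linearize(model, x0)    # primal value + linearized (JVP) map at x0
vjp = jax.linear_transpose(jvp, x0)   # transpose of the linear map -> VJP
(grad,) = vjp(jnp.array(1.0))         # pull back a unit cotangent

# for a scalar-valued model, the pulled-back cotangent is the gradient
assert jnp.allclose(grad, jax.grad(model)(x0))
```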
If I understand [this issue](https://github.com/google/jax/discussions/17040) correctly, JAX is working on an API that would allow this kind of behavior in a jitted function; however, I couldn't find any current information on that feature. So while this might resolve the issue, it is unclear when it would be available.
As there are already flags like `kl_jit` and `residual_jit`, would it be possible to add a flag that prevents any jit-compilation within the optimization routine? Are there any other places where an implicit jit-compilation takes place that I am missing?
If it would help, I could provide a minimal working example for this issue.

## Issue #401: Implement Wiener Filter method (Gordian Edenhofer, 2024-02-01)

Implement a method solving the Wiener filter for a given likelihood. To generalize to non-Gaussian likelihoods, make it optionally depend on the position. This could be implemented by simply wrapping `jft.draw_linear_residual`.

## Issue #400: ift.Plot breaks silently with arguments vmin, vmax and LogNorm (Vincent Eberle, 2024-03-15)

When using vmin and vmax outside of LogNorm with matplotlib, you get the following error message:
e.g.
```python
mock_signal = signal(ift.from_random(signal.domain)).val
plt.imshow(mock_signal.T, origin="lower", vmin=1e-9, vmax=1e-5, norm=LogNorm())
plt.colorbar()
plt.show()
```
> Passing parameters norm and vmin/vmax simultaneously is not supported. Please pass vmin/vmax directly to the norm when creating it.
But using ift.Plot():
e.g.
```python
plot = ift.Plot()
plot.add(signal(ift.from_random(signal.domain)).val, vmin=1e-9, vmax=1e-5, norm=LogNorm())
plot.output()
```
will display a weird plot: ![Screenshot_from_2024-01-29_15-31-09](/uploads/12bc10d66f37486a5078a652c6fb9464/Screenshot_from_2024-01-29_15-31-09.png)
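For reference, plain matplotlib's sanctioned pattern is to pass the limits to the norm itself; presumably `ift.Plot` should either do the same internally or raise like matplotlib does. A sketch using only standard matplotlib:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt
from matplotlib.colors import LogNorm

data = np.logspace(-9, -5, 64).reshape(8, 8)

# vmin/vmax belong in the norm when a norm is given:
norm = LogNorm(vmin=1e-9, vmax=1e-5)
plt.imshow(data.T, origin="lower", norm=norm)
plt.colorbar()
plt.savefig("lognorm_ok.png")

assert np.isclose(float(norm(1e-7)), 0.5)  # 1e-7 is halfway on the log scale
```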