# ift issues
Issue feed: https://gitlab.mpcdf.mpg.de/groups/ift/-/issues

## d2o: Contraction functions rely on non-degeneracy of distribution strategy
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/2 (Theo Steininger, 2017-09-20)

Several methods of the distributed_data_object rely on the fact that the distribution strategy behaves as if the local data were non-degenerate. Currently the non-distributor works around this by returning trivial (local) results in the `_allgather` and the `_Allreduce_sum` method.

Affected d2o methods are at least: `_contraction_helper`, `mean`.

## d2o: _contraction_helper does not work when using numpy keyword arguments
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/3 (Theo Steininger, 2017-09-20)

The `_contraction_helper` passes keyword arguments to the underlying numpy functions (`axis=`, `keepdims=`). The result of the `_contraction_helper`'s local computation is then an array and not a scalar, so the dtype check fails.
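The behaviour described above can be reproduced with plain numpy (independent of d2o): reductions called with `axis=` or `keepdims=` return arrays rather than scalars, so any code that expects a scalar result sees an ndarray instead.

```python
import numpy as np

a = np.arange(6.).reshape(2, 3)

# A plain reduction yields a scalar value.
total = a.sum()           # 15.0

# With axis= or keepdims= the result is an array, not a scalar,
# which is exactly what trips a scalar dtype check downstream.
partial = a.sum(axis=0)       # shape (3,)
kept = a.sum(keepdims=True)   # shape (1, 1)
print(partial.shape, kept.shape)
```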
Fix: After solving theos/NIFTy#2, adapt to the case that the local run's result object is an array and make a further distinction of cases, e.g. for something like `axis=0` for the slicing distributor.

## d2o: cumsum and flatten rely on certain features of distribution strategy
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/15 (Theo Steininger, 2016-05-26)

cumsum and flatten assume: if the shape of the d2o changes through flattening, the distribution strategy was "slicing".

## d2o initialization from d2o is slow
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/23 (Theo Steininger, 2016-04-06)

The slicing distributor is slower than it could be for:
```
a = np.arange(200000)
obj = distributed_data_object(a, distribution_strategy='fftw')
distributed_data_object(obj, distribution_strategy='equal')
```

This is connected to the `_enfold` and `_defold` methods.
## Define NIFTy-wide comm in nifty_configuration.
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/37 (Theo Steininger, 2017-06-13)

Use this communicator globally for all NIFTy objects. Inherit it to all D2O arrays. Pass it forward to pyfftw.

## Demos do not work for python 3.5.5
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/257 (Jakob Knollmueller, 2019-02-01)

Hi,
all demos do not run with python 3.5.5 because the f prefix before strings is not supported.
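For reference, the `f"..."` string prefix was introduced in Python 3.6, so on 3.5 it is a SyntaxError at import time. The portable pre-3.6 equivalent uses `str.format`, as this small sketch shows:

```python
version = "3.5.5"

# Python 3.6+ only:
#   msg = f"demos need a newer interpreter than {version}"
# Portable form that also runs on Python 3.5:
msg = "demos need a newer interpreter than {}".format(version)
print(msg)  # demos need a newer interpreter than 3.5.5
```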
Jakob

Assignee: Philipp Arras (parras@mpa-garching.mpg.de)

## Dependencies in README file
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/140 (Philipp Arras, 2017-06-05)

During install I noticed that the PyPI packages `mpi4py`, `nose_parameterized` and `plotly`, which I needed to get NIFTy to run, are not mentioned in the README.

## Descent minimizers: stopping criterion is broken
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/163 (Martin Reinecke, 2017-09-28)

Currently, descent_minimizer.py contains the following lines:
```
elif delta < self.convergence_tolerance:
convergence += 1
self.logger.info("Updated convergence level to: %u" %
convergence)
```
The problem is that `delta` never goes down far enough for the algorithm to stop there, even though the energy is tiny compared to the starting energy (example below).
This can't be a valid stopping criterion.
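One commonly used alternative, sketched here with hypothetical names (this is not NIFTy's implementation), is to test the relative energy decrease between iterations instead of the `delta` quantity:

```python
def energy_converged(prev_energy, energy, rel_tol=1e-7):
    """Relative-decrease test: stop once the energy change per
    iteration is tiny compared to the energy itself.
    Hypothetical helper, not NIFTy's actual criterion."""
    denom = max(abs(prev_energy), abs(energy), 1e-300)
    return abs(prev_energy - energy) / denom < rel_tol

# On a run where the energy shrinks by orders of magnitude per step,
# this only fires once progress actually stalls:
energies = [5.1e1, 4.6e0, 6.5e-1, 1.5e-8, 1.5e-8 * (1 - 1e-9)]
flags = [energy_converged(a, b) for a, b in zip(energies, energies[1:])]
print(flags)  # [False, False, False, True]
```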
```
VL_BFGS: DEBUG: Iteration:00000001 step_length=3.7E+01 delta=1.4E+00 energy=5.1E+01
VL_BFGS: DEBUG: Iteration:00000002 step_length=7.5E+00 delta=1.5E+00 energy=4.6E+00
VL_BFGS: DEBUG: Iteration:00000003 step_length=2.6E+00 delta=1.9E+00 energy=6.5E-01
VL_BFGS: DEBUG: Iteration:00000004 step_length=1.4E+00 delta=3.4E+00 energy=2.3E-02
VL_BFGS: DEBUG: Iteration:00000005 step_length=3.1E-01 delta=5.2E+00 energy=8.8E-03
VL_BFGS: DEBUG: Iteration:00000006 step_length=1.1E-01 delta=2.9E+00 energy=2.3E-04
VL_BFGS: DEBUG: Iteration:00000007 step_length=1.5E-02 delta=2.5E+00 energy=5.0E-05
VL_BFGS: DEBUG: Iteration:00000008 step_length=1.3E-02 delta=6.7E+00 energy=1.2E-06
VL_BFGS: DEBUG: Iteration:00000009 step_length=2.4E-03 delta=6.4E+00 energy=6.2E-07
VL_BFGS: DEBUG: Iteration:00000010 step_length=9.0E-04 delta=3.0E+00 energy=1.5E-08
VL_BFGS: DEBUG: Iteration:00000011 step_length=1.2E-04 delta=2.7E+00 energy=2.7E-09
VL_BFGS: DEBUG: Iteration:00000012 step_length=8.7E-05 delta=7.7E+00 energy=9.6E-11
VL_BFGS: DEBUG: Iteration:00000013 step_length=1.9E-05 delta=6.2E+00 energy=1.4E-11
VL_BFGS: DEBUG: Iteration:00000014 step_length=5.5E-06 delta=3.9E+00 energy=1.3E-12
VL_BFGS: DEBUG: Iteration:00000015 step_length=1.4E-06 delta=3.2E+00 energy=6.4E-14
VL_BFGS: DEBUG: Iteration:00000016 step_length=2.5E-07 delta=4.3E+00 energy=1.4E-14
VL_BFGS: DEBUG: Iteration:00000017 step_length=2.2E-07 delta=1.0E+01 energy=1.8E-16
VL_BFGS: DEBUG: Iteration:00000018 step_length=1.6E-08 delta=3.6E+00 energy=1.4E-17
VL_BFGS: DEBUG: Iteration:00000019 step_length=3.3E-09 delta=1.7E+00 energy=3.5E-18
VL_BFGS: DEBUG: Iteration:00000020 step_length=3.2E-09 delta=4.8E+00 energy=9.6E-20
VL_BFGS: DEBUG: Iteration:00000021 step_length=6.2E-10 delta=4.6E+00 energy=3.0E-20
VL_BFGS: DEBUG: Iteration:00000022 step_length=2.0E-10 delta=3.6E+00 energy=1.0E-21
VL_BFGS: DEBUG: Iteration:00000023 step_length=3.3E-11 delta=2.6E+00 energy=2.0E-22
VL_BFGS: DEBUG: Iteration:00000024 step_length=2.6E-11 delta=6.8E+00 energy=6.3E-24
VL_BFGS: DEBUG: Iteration:00000025 step_length=5.8E-12 delta=8.6E+00 energy=4.5E-24
VL_BFGS: DEBUG: Iteration:00000026 step_length=1.0E-12 delta=1.5E+00 energy=1.7E-24
VL_BFGS: DEBUG: Iteration:00000027 step_length=1.6E-12 delta=3.6E+00 energy=2.4E-26
VL_BFGS: DEBUG: Iteration:00000028 step_length=1.5E-13 delta=4.8E+00 energy=5.9E-27
VL_BFGS: DEBUG: Iteration:00000029 step_length=1.5E-13 delta=1.1E+01 energy=3.0E-29
VL_BFGS: DEBUG: Iteration:00000030 step_length=7.0E-15 delta=3.2E+00 energy=4.3E-30
VL_BFGS: WARNING: Reached iteration limit. Stopping.
```

## Design problem in InvertibleOperatorMixin
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/198 (Martin Reinecke, 2018-01-26)

The constructor for `InvertibleOperatorMixin` currently allows specifying iteration starting values via `forward_x0` and `backward_x0`. This is problematic for several reasons:
- these vectors have to live over the full domain of a later operator application call, and thereby restrict the general applicability of the operator; general applicability is one of NIFTy's current design goals.
- `forward_x0` is used in `times()` and `adjoint_inverse_times()` although it is most likely not a good guess for at least one of both. Analogously for `backward_x0`.
- A starting guess is directly coupled to a concrete argument for which the operator is invoked; it is not a property of the operator, but of the operator/argument combination. So if one sets up an `InvertibleOperatorMixin` using the currently available tools, it will only work well for a single specific field.
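A per-call starting guess, as the points above motivate, could look like the following plain-numpy sketch (hypothetical interface and names, not NIFTy code), here with a toy conjugate-gradient inversion:

```python
import numpy as np

class InvertibleOperator:
    """Toy SPD matrix operator whose inverse application accepts a
    per-call starting guess (hypothetical interface sketch)."""

    def __init__(self, matrix):
        self._m = np.asarray(matrix, dtype=float)

    def times(self, x):
        return self._m @ x

    def inverse_times(self, x, x0=None):
        # The starting guess belongs to this call's argument,
        # not to the operator instance; default to zero.
        guess = np.zeros_like(x) if x0 is None else x0
        return self._cg(x, guess)

    def _cg(self, b, x, tol=1e-12, maxiter=100):
        # Standard conjugate gradient for an SPD matrix.
        r = b - self._m @ x
        p = r.copy()
        rs = r @ r
        for _ in range(maxiter):
            if rs < tol:
                break
            mp = self._m @ p
            alpha = rs / (p @ mp)
            x = x + alpha * p
            r = r - alpha * mp
            rs_new = r @ r
            p = r + (rs_new / rs) * p
            rs = rs_new
        return x

op = InvertibleOperator([[4., 1.], [1., 3.]])
b = np.array([1., 2.])
y = op.inverse_times(b, x0=np.array([0.1, 0.5]))  # per-call guess
print(np.allclose(op.times(y), b))  # True
```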
I would much prefer being able to supply a starting guess with every `times()` (or `adjoint_times()` etc.) call; I'm thinking about a clean approach for this. Suggestions are welcome!

## destroying distribution_strategy
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/169 (Jakob Knollmueller, 2017-07-26)

Hi,

I am currently trying to get my code running using the full capabilities of d2o, but some things in nifty seem to destroy the distribution_strategy, namely the InvertibleOperatorMixin and power_analyze (I tracked it down to the pindex.bincount, see the example).
[minimal_example.py](/uploads/95bab8b0159517babf98330ce2a9ae5c/minimal_example.py)
Another problem is in the InvertibleOperatorMixin: if no initial guess x0 is specified, it is set to

```
x0 = Field(self.target, val=0., dtype=x.dtype)
```

setting the distribution to the default of "not", so the result also has this distribution. I would suggest setting

```
x0 = Field(self.target, val=0., dtype=x.dtype, distribution_strategy=x.distribution_strategy)
```
Jakob

## DiagonalOperator calls nonexistent attribute adjoint()
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/190 (Martin Reinecke, 2017-09-15)

`DiagonalOperator` contains the following code:
```
def _adjoint_times(self, x, spaces):
return self._times_helper(x, spaces,
operation=lambda z: z.adjoint().__mul__)
def _adjoint_inverse_times(self, x, spaces):
return self._times_helper(x, spaces,
operation=lambda z: z.adjoint().__rtruediv__)
```
The `z.adjoint()` expression does not work, since nothing in NIFTy has an .adjoint() attribute. So far we have not noticed this, apparently because all of our `DiagonalOperator`s were self-adjoint and these methods were not called.
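Mathematically, the adjoint of a diagonal operator is multiplication by the complex conjugate of the diagonal, so a working `_adjoint_times` only needs a conjugation rather than a `.adjoint()` attribute. A plain-numpy illustration of the defining property (not NIFTy code):

```python
import numpy as np

diag = np.array([1 + 2j, 3 - 1j, 0.5j])
x = np.array([1j, 2.0, 1 - 1j])
y = np.array([2.0, 1j, 3.0])

times = lambda v: diag * v                    # D v
adjoint_times = lambda v: np.conj(diag) * v   # D^H v

# Defining property of the adjoint: <D x, y> == <x, D^H y>
lhs = np.vdot(times(x), y)
rhs = np.vdot(x, adjoint_times(y))
print(np.allclose(lhs, rhs))  # True
```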
I don't know what the correct version is. @theos?

## DiagonalOperator diagonal() function as property?
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/221 (Christoph Lienhard, 2018-02-17)

The DiagonalOperator class has a function diagonal() returning the respective Field.
IMHO this should be a property, since e.g.
```
diagonal_val = operator.diagonal().val
```
just looks stupid (from a purely designy pythonic point of view).
The main philosophy I propose in this case is that from a class point of view 'diagonal' to DiagonalOperator is the same as 'val' to Field.
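The proposed property could look like this minimal sketch (hypothetical class, not the current NIFTy implementation), with the getter returning the stored data without a copy:

```python
import numpy as np

class DiagonalOperatorSketch:
    """Minimal sketch: expose the diagonal as a property without
    copying, mirroring how Field exposes 'val'."""

    def __init__(self, diagonal):
        self._diagonal = np.asarray(diagonal)

    @property
    def diagonal(self):
        # Returns the stored array itself, no defensive copy,
        # matching the proposed Field.val semantics.
        return self._diagonal

op = DiagonalOperatorSketch([1., 2., 3.])
diagonal_val = op.diagonal  # no call parentheses needed
print(diagonal_val is op.diagonal)  # True: same object, not a copy
```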
Following this, the getter of the proposed property 'diagonal' should not return a copy of the diagonal (in contrast to what the current function diagonal() returns), since the property 'val' of Field does not do this either.

## DiagonalOperator: trace_log() and log_determinant()
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/132 (Martin Reinecke, 2017-06-21)

trace_log() and log_determinant() should throw exceptions if the trace/determinant is <= 0.

## diagonal Operator with zeros
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/228 (Reimar H Leike, 2018-03-13)

Currently the DiagonalOperator always has the capability of inverse_times. However, this is only legal if there are no zeros on the diagonal. One should either document that a non-zero diagonal is expected or change the properties of the operator depending on the input.

## Dictionary with .update instead of | for python 3.7 compatibility
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/368 (Vincent Eberle, 2023-10-24)

### Broken interface for Python < 3.9 with Jax installed

If Jax is installed, the correlated field of nifty tries to make an instance of the nifty.re correlated field using `|=`, which is Python 3.9 syntax. This breaks the interface for users who have an older Python version and Jax installed.
IMHO we should switch to the `.update` syntax instead. If we want to keep the `|=` syntax, we could consider raising the minimal Python version required for using NIFTy8.
@gedenhof @wmarg @pfrank
```python
fluctuations = WrappedCall(fluctuations, name=prefix + "fluctuations")
ptree = fluctuations.domain
loglogavgslope = WrappedCall(loglogavgslope, name=prefix + "loglogavgslope")
ptree |= loglogavgslope.domain
if flexibility is not None:
flexibility = WrappedCall(flexibility, name=prefix + "flexibility")
ptree |= flexibility.domain
# Register the parameters for the spectrum
_safe_assert(log_vol is not None)
_safe_assert(rel_log_mode_len.ndim == log_vol.ndim == 1)
ptree |= {prefix + "spectrum": ShapeWithDtype((2, ) + log_vol.shape)}
if asperity is not None:
asperity = WrappedCall(asperity, name=prefix + "asperity")
ptree |= asperity.domain
```
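For Python 3.7/3.8 compatibility, each `ptree |= other` line above can be written as `ptree.update(other)`; for plain dicts the two are equivalent in-place merges, as this small sketch shows:

```python
ptree = {"fluctuations": 1}
extra = {"loglogavgslope": 2}

# Python 3.9+:      ptree |= extra
# Python 3.7+ form:
ptree.update(extra)
print(ptree)  # {'fluctuations': 1, 'loglogavgslope': 2}
```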
[Go to File](src/re/correlated_field.py#L133-146)

Assignee: Gordian Edenhofer

## DirectSmoothingOperator shows unwanted behaviour on edges
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/122 (Theo Steininger, 2017-05-28)

By fixing the normalization of the direct/brute-force smoother, the edges now show some instabilities:
```
from nifty import *
x = RGSpace(64, harmonic=True)
p = PowerSpace(x)
sm = SmoothingOperator(p, sigma = 1.)
sm.domain
f = Field(p, val=1.)
f.val[16] = 2
plotter = plotting.PowerPlotter()
plotter.title = 'test'
plotter.plot(sm(f))
```
-> The contributions to the edge pixels don't add up anymore. This is somehow the 'adjoint problem' of what has been solved by Martin's fix.
Is there any chance to fix this?
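The edge effect can be reproduced without NIFTy: a kernel normalized over its full support loses weight at the boundary, so even a constant field is no longer mapped to a constant there (plain-numpy sketch, not the NIFTy smoother):

```python
import numpy as np

# Gaussian kernel, normalized to sum to one over its full support
t = np.arange(-10, 11)
kernel = np.exp(-0.5 * (t / 3.0) ** 2)
kernel /= kernel.sum()

field = np.ones(64)
smoothed = np.convolve(field, kernel, mode='same')

# Interior pixels keep their value; edge pixels lose the kernel
# weight that falls outside the domain.
print(np.isclose(smoothed[32], 1.0))  # True
print(smoothed[0] < 1.0)              # True
```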
![image](/uploads/ebaac233e4a170769656d7be751dcaa2/image.png)

Assignee: Martin Reinecke

## Direct smoothing with logarithmic distances doesn't conserve the norm.
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/134 (Theo Steininger, 2017-06-12)

The direct smoothing doesn't conserve the norm of an input field and also distributes the weight differently depending on the scale.
```
from nifty import *
x = RGSpace((2048), harmonic=True)
xp = PowerSpace(x)
f = Field(xp, val = 1.)
f.val[100] = 10
f.val[500] = 10
f.val[50] = 10
sm = SmoothingOperator(xp, sigma=0.1, log_distances=True)
g = sm(f)
f.dot(f)
g.dot(g)
plo = plotting.PowerPlotter(title='power.html')
plo(g)
```
![image](/uploads/cb6ec21b1814083df3a09594367cae14/image.png)
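As a general note (plain numpy, independent of the log-distance question): a kernel that sums to one conserves the field's total sum under periodic convolution, but not its L2 norm, so `f.dot(f) == g.dot(g)` is not expected even from a well-behaved smoother:

```python
import numpy as np

n = 2048
f = np.ones(n)
f[[50, 100, 500]] = 10.0

# Normalized Gaussian kernel and circular (periodic) convolution
x = np.arange(n)
d = np.minimum(x, n - x)
kernel = np.exp(-0.5 * (d / 20.0) ** 2)
kernel /= kernel.sum()
g = np.real(np.fft.ifft(np.fft.fft(f) * np.fft.fft(kernel)))

print(np.isclose(g.sum(), f.sum()))  # True: total sum conserved
print(g @ g < f @ f)                 # True: L2 norm is not
```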
Assignee: Martin Reinecke

## Discuss default dtype problems for FFT/SHT Operator
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/109 (Theo Steininger, 2017-05-28)

## Docstring: Energy class
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/90 (Theo Steininger, 2017-05-19; assignee: Jakob Knollmueller)

## Docstring: FFTOperator
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/108 (Theo Steininger, 2017-07-01; assignee: Martin Reinecke)