# ift issues
https://gitlab.mpcdf.mpg.de/groups/ift/-/issues

## Streamline arguments position
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/300 · Gordian Edenhofer · 2020-05-19

Switch positional arguments to get a consistent order of e.g. `domain` within the positional arguments of all operators.

Affected operators are:

* `ift.from_random`
* `ift.OuterProduct`

@lerou You said that you are keeping a list of operators that annoy you by their inconsistency. Can you amend the list with such cases, please?

## Suggestion: Add dtype property to DomainTuple
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/295 · Philipp Arras · 2020-04-26

I suggest adding a property called `dtype` to `DomainTuple`. When wrapping a numpy array as a `Field`, the constructor would check that the dtypes of the array and the `DomainTuple` coincide. Functions like `from_random` would no longer need to take a dtype. In particular, sampling from e.g. `DiagonalOperator`s would become unambiguous in terms of drawing either a real or a complex field. This in turn would simplify the KL because the `lh_sampling_dtype` would disappear.

I am not 100% sure how to treat `RGSpace.get_default_codomain()`. Would the default codomain of a real space be real again or complex? My suggestion is to add an optional argument to that function and default to the same dtype as the original space.

I am also not sure how to tell the `DomainTuple` its dtype. I suggest an additional argument of `DomainTuple.make()`.
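A minimal pure-Python sketch of how the suggestion could look; `DomainTupleWithDtype`, `make_field`, and `from_random` here are hypothetical stand-ins for illustration, not nifty's actual API:

```python
import numpy as np

class DomainTupleWithDtype:
    """Hypothetical: a domain object that also carries its dtype."""
    def __init__(self, shape, dtype):
        self.shape = tuple(shape)
        self.dtype = np.dtype(dtype)

def make_field(domain, arr):
    # The Field constructor would verify that array and domain agree.
    if arr.shape != domain.shape or arr.dtype != domain.dtype:
        raise TypeError("array does not match domain shape/dtype")
    return arr

def from_random(domain, rng):
    # No explicit dtype argument needed: the domain already knows it.
    x = rng.standard_normal(domain.shape)
    if np.issubdtype(domain.dtype, np.complexfloating):
        x = x + 1j * rng.standard_normal(domain.shape)
    return make_field(domain, x.astype(domain.dtype))
```

With this, whether a sample is real or complex follows unambiguously from the domain, which is the point of the proposal.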
What do you think about that, @mtr @pfrank @reimar?

## Sum of likelihoods with named domains operators
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/399 · Matteo Guardiani · 2024-01-09

When summing two likelihoods applied to operators with different named domains, the summation does not work during minimization, because the metric update does not know how to correctly join multiple named inputs.

Minimal breaking example:

```python
from jax import random
import nifty8.re as jft
import numpy as np
import jax
jax.config.update("jax_platform_name", "cpu")
seed = 42
key = random.PRNGKey(seed)
shape = (10, 10)
cf_zm = {"offset_mean": 0., "offset_std": (1e-3, 1e-4)}
cf_fl = {
    "fluctuations": (1e-1, 5e-3),
    "loglogavgslope": (-3., 1e-2),
    "flexibility": (1e+0, 5e-1),
    "asperity": (5e-1, 5e-2),
}
cfm = jft.CorrelatedFieldMaker("jcf_")
cfm.set_amplitude_total_offset(**cf_zm)
cfm.add_fluctuations(
    shape,
    distances=1. / shape[0],
    **cf_fl,
    prefix="",
    non_parametric_kind="power",
)
jcf = cfm.finalize()
key, subkey = random.split(key)
pos = jft.random_like(subkey, jcf.domain)
noise_level = 0.2
datar = jcf(pos) + np.random.normal(0, 1, shape)*noise_level
dom_key = 'a'
m = jft.Model(
    lambda x: {dom_key: jcf(x)},
    domain=jcf.domain)
R = jft.Model(lambda x: x[dom_key], domain=m.target)
like1 = jft.Gaussian(datar, lambda x: 1/noise_level**2 * x) @ R
like2 = jft.Gaussian(datar, lambda x: 2/noise_level**2 * x) @ R
ll = (like1 + like2) @ m
pos = jft.random_like(key, jcf.domain)
n_iterations = 2
n_samples = 2
delta = 1e-3
absdelta = 1e-4
samples, _ = jft.optimize_kl(
    ll,
    jft.Vector(pos),
    n_total_iterations=n_iterations,
    n_samples=n_samples,
    # Source for the stochasticity for sampling
    key=key,
    draw_linear_kwargs=dict(
        cg_name="SL",
        cg_kwargs=dict(absdelta=absdelta / 10., maxiter=100),
    ),
    nonlinearly_update_kwargs=dict(
        minimize_kwargs=dict(
            name="SN",
            xtol=delta,
            cg_kwargs=dict(name=None),
            maxiter=5,
        )
    ),
    kl_kwargs=dict(
        minimize_kwargs=dict(
            name="M", absdelta=absdelta, cg_kwargs=dict(name="MCG"), maxiter=35
        )
    ),
    sample_mode="nonlinear_resample",
    resume=False)
```
gives the following error:
```
~/pro/python/nifty/nifty8/re/likelihood.py in joined_left_sqrt_metric(p, t, **pkw)
641 def joined_left_sqrt_metric(p, t, **pkw):
642 return (
--> 643 self.left_sqrt_metric(p, t[lkey], **pkw) +
644 other.left_sqrt_metric(p, t[rkey], **pkw)
645 )
TypeError: unsupported operand type(s) for +: 'dict' and 'dict'
```

## SumOperator is not imported with nifty
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/267 · Philipp Haim · 2019-04-26

When importing nifty5, `SumOperator` is not included. Is that intentional?

## Support pyfftw even if MPI is not present
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/138 · Martin Reinecke · 2017-06-05

This will allow using multi-threaded FFTs, which can be a big help in large calculations.

## Switch to new random interface of numpy
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/287 · Philipp Arras · 2020-03-24

`np.random.seed` is legacy. The numpy docs say:

> The best practice is to **not** reseed a BitGenerator, rather to recreate a new one. This method is here for legacy reasons.
> This example demonstrates best practice.

```python
from numpy.random import MT19937
from numpy.random import RandomState, SeedSequence
rs = RandomState(MT19937(SeedSequence(123456789)))
# Later, you want to restart the stream
rs = RandomState(MT19937(SeedSequence(987654321)))
```
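For reference, the newer `Generator` interface (NumPy ≥ 1.17) drops `RandomState` entirely; a minimal sketch of the equivalent pattern:

```python
import numpy as np

# Create a fresh Generator from a seed instead of reseeding global state.
rng = np.random.default_rng(123456789)
draws = rng.standard_normal(3)

# Later, restart the stream by building a new Generator from the same seed.
rng_restarted = np.random.default_rng(123456789)
draws_again = rng_restarted.standard_normal(3)
# draws and draws_again are identical.
```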
We might want to migrate eventually.

## Tag release commit!
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/256 · Philipp Arras · 2019-02-01

Also:
- First demo runs, then build pages

## Tests are taking too long
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/298 · Martin Reinecke · 2020-05-19

On my laptop, the Nifty regression tests are taking almost 10 minutes at the moment, which starts to get really problematic during development. Is there a chance of reducing test sizes and/or avoiding redundancies in the current tests?
The biggest problem by far is test_correlated_fields.py, which accounts for roughly 40% of the testing time. Next is test_jacobian.py with 20%; next is test_energy_gradients.py.
@parras, @pfrank, @phaim

## Tests: Energy class
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/99 · Theo Steininger · 2018-01-30

## Tests: Field class
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/96 · Theo Steininger · 2018-01-19

## tests for spaces
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/119 · Reimar H Leike · 2017-08-17

Create more tests for all spaces. Especially tests for content are needed, which check whether the methods of the spaces do what they should on a numerical level.

## Tests: Minimizers
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/98 · Theo Steininger · 2017-09-28

## Tests: NIFTy config & logging
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/100 · Theo Steininger · 2018-01-19

## tests: only make h5py test if h5py is available
https://gitlab.mpcdf.mpg.de/ift/D2O/-/issues/1 · Theo Steininger · 2017-05-15

## Tests: Operators
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/97 · Theo Steininger · 2018-01-18

## Tests: Space classes
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/95 · Theo Steininger · 2017-08-15

Check if they are complete.

## The d2o_librarian will fail when mixing different MPI comms
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/14 · Theo Steininger · 2016-05-26

Every local librarian instance on a node of an MPI cluster just increments its internal counter by one when a new d2o is registered. This gets out of sync when only a part of the full cluster is covered by a special comm.

Possible solution: the individual librarians store the id of 'their' d2o and communicate a common id for their dictionary.
Con: involves MPI communication.

## The dot product of a field with itself
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/150 · Sebastian Hutschenreuter · 2017-06-12

The dot product only considers the volume factors of the first field in the case bare=False. This leads to the (in my mind) odd result noted below.
In [2]: np.random.seed(123)
In [3]: rgf = Field(domain=RGSpace(4),val=np.random.randn(4))
In [4]: rgf.dot(rgf)
Out[4]: 1.1305730855452751
This is equal to
In [5]: (rgf.weight(power=1) * rgf).sum()
Out[5]: 1.1305730855452751
Wouldn't the two following cases be more appropriate, depending on the context?
In [6]: (rgf.weight(power=1) * rgf.weight(power=1)).sum()
Out[6]: 0.28264327138631878
In [7]: (rgf*rgf).sum()
Out[7]: 4.5222923421811005
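For reference, these numbers can be reproduced with plain numpy, assuming the default `RGSpace(4)` pixel volume of 1/4 (so one weight factor is 0.25):

```python
import numpy as np

np.random.seed(123)
x = np.random.randn(4)  # the same values the Field above wraps
w = 0.25                # default RGSpace(4) pixel volume, 1/shape

one_factor = (w * x * x).sum()       # matches Out[4]: one volume factor
two_factors = (w * x * w * x).sum()  # matches Out[6]: both fields weighted
no_factor = (x * x).sum()            # matches Out[7]: plain sum of squares
```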
## Thread safety in optimize_kl
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/369 · Landman Bester · 2023-11-07

The following code attempts to run a number of optimize_kl instances in parallel, first with a process pool and then with a thread pool.
```python
import numpy as np
import nifty8 as ift
import concurrent.futures as cf
def function_calling_nifty(seed):
    # set up correlated field model
    npix = 512
    pospace = ift.RGSpace((npix,))
    spfreq = ift.RGSpace((npix,))
    cfmaker = ift.CorrelatedFieldMaker('amplitude')
    cfmaker.add_fluctuations(spfreq, (0.1, 1e-2), None, None, (-3, 1), 'f')
    cfmaker.set_amplitude_total_offset(0., (1e-2, 1e-6))
    cf = cfmaker.finalize()
    normalized_amp = cfmaker.get_normalized_amplitudes()
    pspec = normalized_amp[0]**2

    # signal + fake data
    signal = ift.exp(cf.real)
    mask = np.random.binomial(1, 0.5, size=npix).astype(bool)
    tmp = ift.makeField(signal.target, ~mask)
    R = ift.MaskOperator(tmp)
    signal_response = R(signal)
    mock_position = ift.from_random(signal_response.domain, 'normal')
    dspace = R.target
    noise = .001
    N = ift.ScalingOperator(dspace, noise, float)
    data = signal_response(mock_position) + N.draw_sample()

    # Minimization parameters
    ic_sampling = ift.AbsDeltaEnergyController(deltaE=0.01, iteration_limit=50)
    ic_newton = ift.AbsDeltaEnergyController(deltaE=0.01, iteration_limit=15)
    minimizer = ift.NewtonCG(ic_newton, enable_logging=False)

    # Set up likelihood energy and information Hamiltonian
    likelihood_energy = ift.GaussianEnergy(data, inverse_covariance=N.inverse) @ signal_response
    n_samples = 3
    n_iterations = 5
    with ift.random.Context(seed):
        samples = ift.optimize_kl(likelihood_energy,
                                  n_iterations,
                                  n_samples,
                                  minimizer,
                                  ic_sampling,
                                  None,  # for GeoVI
                                  plot_energy_history=False,
                                  plot_minisanity_history=False)
    return samples
if __name__=='__main__':
    # processes
    print("___________________________________Running with processes___________________________________")
    nrun = 8
    futures = []
    with cf.ProcessPoolExecutor(max_workers=nrun) as executor:
        for i in range(nrun):
            future = executor.submit(function_calling_nifty, i)
            futures.append(future)
        for f in cf.as_completed(futures):
            samples = f.result()

    # threads
    print("___________________________________Running with threads___________________________________")
    futures = []
    with cf.ThreadPoolExecutor(max_workers=nrun) as executor:
        for i in range(nrun):
            future = executor.submit(function_calling_nifty, i)
            futures.append(future)
        for f in cf.as_completed(futures):
            samples = f.result()
```
The first run with processes is successful but the second one falls over with
```bash
Traceback (most recent call last):
File "/home/landman/software/scratch/nifty_threadsafety.py", line 43, in function_calling_nifty
samples = ift.optimize_kl(likelihood_energy,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/landman/venvs/qcal/lib/python3.11/site-packages/nifty8/minimization/optimize_kl.py", line 368, in optimize_kl
e = SampledKLEnergy(
^^^^^^^^^^^^^^^^
File "/home/landman/venvs/qcal/lib/python3.11/site-packages/nifty8/minimization/kl_energies.py", line 290, in SampledKLEnergy
sample_list = draw_samples(position, ham_sampling, minimizer_sampling, n_samples,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/landman/venvs/qcal/lib/python3.11/site-packages/nifty8/minimization/kl_energies.py", line 142, in draw_samples
with random.Context(sseq[i]):
File "/home/landman/venvs/qcal/lib/python3.11/site-packages/nifty8/random.py", line 290, in __exit__
raise RuntimeError("inconsistent RNG usage detected")
RuntimeError: inconsistent RNG usage detected
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/landman/software/scratch/nifty_threadsafety.py", line 78, in <module>
samples = f.result()
^^^^^^^^^^
File "/usr/lib/python3.11/concurrent/futures/_base.py", line 449, in result
return self.__get_result()
^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/concurrent/futures/_base.py", line 401, in __get_result
raise self._exception
File "/usr/lib/python3.11/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/landman/software/scratch/nifty_threadsafety.py", line 42, in function_calling_nifty
with ift.random.Context(seed):
File "/home/landman/venvs/qcal/lib/python3.11/site-packages/nifty8/random.py", line 290, in __exit__
raise RuntimeError("inconsistent RNG usage detected")
RuntimeError: inconsistent RNG usage detected
```
I tried using the Context class as suggested in `src/random.py`, but this doesn't seem to have the desired effect. Any suggestions on how to get this working with a thread pool would be much appreciated.

## TransformationCache re-uses FFTs in situations where it shouldn't
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/186 · Martin Reinecke · 2017-09-28

(This is mostly a reminder ... the problem should go away automatically once the `byebye_zerocenter` branch is merged.)

Currently the TransformationCache for FFTs caches and re-uses transforms even when the harmonic base has changed in the meantime, which leads to wrong results. This can be avoided by simply not changing the harmonic base during a run, but that approach seems impractical, since we already violate this constraint in test_fft_operator.py.
Apparently, the whole test environment is not torn down and set up again between individual tests. I verified this by printing the size of the transformation cache whenever its create() method is called (line 27 in transformation_cache.py) and then running
`nosetests test/test_operators/test_fft_operator.py -s`
I would have expected the length of the cache to be 1 all the time, but it went up beyond 100.
This is possibly also the cause for the strange behaviour in `test_power_synthesize_analyze()`.
To verify this, I suggest running the tests on master and on `byebye_zerocenter` and comparing the convergence behaviour.
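The underlying pitfall can be illustrated generically (this is not nifty's actual cache code; `make_transform` is a hypothetical stand-in): a cache key that omits part of the configuration silently returns stale entries, while keying on everything that affects the result avoids it.

```python
def make_transform(shape, harmonic_base):
    # Hypothetical stand-in for building an FFT/transform object.
    return ("transform", shape, harmonic_base)

_buggy_cache = {}

def get_transform_buggy(shape, harmonic_base):
    key = shape  # BUG: harmonic_base is not part of the key
    if key not in _buggy_cache:
        _buggy_cache[key] = make_transform(shape, harmonic_base)
    return _buggy_cache[key]  # may belong to a different harmonic base

_fixed_cache = {}

def get_transform(shape, harmonic_base):
    key = (shape, harmonic_base)  # key on everything affecting the result
    if key not in _fixed_cache:
        _fixed_cache[key] = make_transform(shape, harmonic_base)
    return _fixed_cache[key]
```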