Commit fccadfc5 authored by Martin Reinecke

first try

parent ad4c6cde
......@@ -140,7 +140,6 @@
"outputs": [],
"source": [
"import numpy as np\n",
"np.random.seed(40)\n",
"import nifty6 as ift\n",
"import matplotlib.pyplot as plt\n",
"%matplotlib inline"
......
%% Cell type:markdown id: tags:
# A NIFTy demonstration
%% Cell type:markdown id: tags:
## IFT: Big Picture
IFT starting point:
$$d = Rs+n$$
Typically, $s$ is a continuous field and $d$ a discrete data vector; in particular, $R$ is generally not invertible.
IFT aims at **inverting** this non-invertible problem in the **best possible way** using Bayesian statistics.
## NIFTy
NIFTy (Numerical Information Field Theory) is a Python framework in which IFT problems can be tackled easily.
Main Interfaces:
- **Spaces**: Cartesian, 2-Spheres (Healpix, Gauss-Legendre) and their respective harmonic spaces.
- **Fields**: Defined on spaces.
- **Operators**: Acting on fields.
%% Cell type:markdown id: tags:
## Wiener Filter: Formulae
### Assumptions
- $d=Rs+n$, $R$ linear operator.
- $\mathcal P (s) = \mathcal G (s,S)$, $\mathcal P (n) = \mathcal G (n,N)$ where $S, N$ are positive definite matrices.
### Posterior
The posterior is given by:
$$\mathcal P (s|d) \propto \mathcal P(s,d) = \mathcal G(d-Rs,N) \,\mathcal G(s,S) \propto \mathcal G (s-m,D),$$
where
$$\begin{align}
m &= Dj \\
D^{-1}&= (S^{-1} +R^\dagger N^{-1} R )\\
j &= R^\dagger N^{-1} d
\end{align}$$
Let us implement this in NIFTy!
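%% Cell type:markdown id: tags:
Before turning to NIFTy, the equations above can be checked with dense numpy matrices. This is only a toy sketch: the sizes and variable names are illustrative and none of this is NIFTy API.
%% Cell type:code id: tags:
``` python
import numpy as np

# Dense-matrix sketch of m = D j with D^{-1} = S^{-1} + R^T N^{-1} R.
# Toy dimensions; a real problem would use implicit operators instead.
rng = np.random.default_rng(0)
npix, ndata = 16, 8
R = rng.normal(size=(ndata, npix))       # toy linear response
S = np.diag(np.linspace(1., 0.1, npix))  # signal covariance (diagonal here)
N = 0.2 * np.eye(ndata)                  # noise covariance

s = rng.multivariate_normal(np.zeros(npix), S)
n = rng.multivariate_normal(np.zeros(ndata), N)
d = R @ s + n

j = R.T @ np.linalg.inv(N) @ d           # information source
D = np.linalg.inv(np.linalg.inv(S) + R.T @ np.linalg.inv(N) @ R)
m = D @ j                                # posterior mean
```
For realistic pixel counts $D$ is never built explicitly; it is applied to $j$ by iterative inversion of $D^{-1}$, which is what the rest of the notebook does.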
%% Cell type:markdown id: tags:
## Wiener Filter: Example
- We assume statistical homogeneity and isotropy. Therefore the signal covariance $S$ is diagonal in harmonic space, and is described by a one-dimensional power spectrum, assumed here as $$P(k) = P_0\,\left(1+\left(\frac{k}{k_0}\right)^2\right)^{-\gamma /2},$$
with $P_0 = 0.2, k_0 = 5, \gamma = 4$.
- $N = 0.2 \cdot \mathbb{1}$.
- Number of data points $N_{pix} = 512$.
- Reconstruction in harmonic space.
- Response operator:
$$R = FFT_{\text{harmonic} \rightarrow \text{position}}$$
%% Cell type:code id: tags:
``` python
N_pixels = 512  # Number of pixels

def pow_spec(k):
    P0, k0, gamma = [.2, 5, 4]
    return P0 / ((1. + (k/k0)**2)**(gamma / 2))
```
%% Cell type:markdown id: tags:
## Wiener Filter: Implementation
%% Cell type:markdown id: tags:
### Import Modules
%% Cell type:code id: tags:
``` python
import numpy as np
np.random.seed(40)
import nifty6 as ift
import matplotlib.pyplot as plt
%matplotlib inline
```
%% Cell type:markdown id: tags:
### Implement Propagator
%% Cell type:code id: tags:
``` python
def Curvature(R, N, Sh):
    IC = ift.GradientNormController(iteration_limit=50000,
                                    tol_abs_gradnorm=0.1)
    # WienerFilterCurvature is (R.adjoint*N.inverse*R + Sh.inverse) plus
    # some handy helper methods.
    return ift.WienerFilterCurvature(R, N, Sh, iteration_controller=IC,
                                     iteration_controller_sampling=IC)
```
%% Cell type:markdown id: tags:
### Conjugate Gradient Preconditioning
- $D$ is defined via:
$$D^{-1} = \mathcal S_h^{-1} + R^\dagger N^{-1} R.$$
In the end, we want to apply $D$ to $j$, i.e. we need the inverse action of $D^{-1}$. This is done numerically (algorithm: *Conjugate Gradient*).
<!--
- One can define the *condition number* of a non-singular and normal matrix $A$:
$$\kappa (A) := \frac{|\lambda_{\text{max}}|}{|\lambda_{\text{min}}|},$$
where $\lambda_{\text{max}}$ and $\lambda_{\text{min}}$ are the largest and smallest eigenvalue of $A$, respectively.
- The larger $\kappa$ the slower Conjugate Gradient.
- By default, conjugate gradient solves: $D^{-1} m = j$ for $m$, where $D^{-1}$ can be badly conditioned. If one knows a non-singular matrix $T$ for which $TD^{-1}$ is better conditioned, one can solve the equivalent problem:
$$\tilde A m = \tilde j,$$
where $\tilde A = T D^{-1}$ and $\tilde j = Tj$.
- In our case $S^{-1}$ is responsible for the bad conditioning of $D$ depending on the chosen power spectrum. Thus, we choose
$$T = \mathcal F^\dagger S_h^{-1} \mathcal F.$$
-->
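%% Cell type:markdown id: tags:
Since $D$ is only available through the action of $D^{-1}$, applying it means solving $D^{-1}m = j$ iteratively. A minimal plain-numpy conjugate gradient conveys the idea (illustrative only; NIFTy's own solver adds iteration controllers and the preconditioning discussed above):
%% Cell type:code id: tags:
``` python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, maxiter=1000):
    """Solve A x = b for a symmetric positive-definite matrix A (toy sketch)."""
    x = np.zeros_like(b)
    r = b - A @ x          # residual
    p = r.copy()           # search direction
    rs = r @ r
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Usage on a small, well-conditioned SPD system
rng = np.random.default_rng(1)
M = rng.normal(size=(10, 10))
A = M @ M.T + 10 * np.eye(10)
b = rng.normal(size=10)
x = conjugate_gradient(A, b)
```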
%% Cell type:markdown id: tags:
### Generate Mock data
- Generate fields $s$ and $n$ with the given covariances.
- Calculate $d$.
%% Cell type:code id: tags:
``` python
s_space = ift.RGSpace(N_pixels)
h_space = s_space.get_default_codomain()
HT = ift.HarmonicTransformOperator(h_space, target=s_space)
# Operators
Sh = ift.create_power_operator(h_space, power_spectrum=pow_spec)
R = HT #*ift.create_harmonic_smoothing_operator((h_space,), 0, 0.02)
# Fields and data
sh = Sh.draw_sample()
noiseless_data = R(sh)
noise_amplitude = np.sqrt(0.2)
N = ift.ScalingOperator(s_space, noise_amplitude**2)
n = ift.Field.from_random(domain=s_space, random_type='normal',
                          std=noise_amplitude, mean=0)
d = noiseless_data + n
j = R.adjoint_times(N.inverse_times(d))
curv = Curvature(R=R, N=N, Sh=Sh)
D = curv.inverse
```
%% Cell type:markdown id: tags:
### Run Wiener Filter
%% Cell type:code id: tags:
``` python
m = D(j)
```
%% Cell type:markdown id: tags:
### Signal Reconstruction
%% Cell type:code id: tags:
``` python
# Get signal data and reconstruction data
s_data = HT(sh).val
m_data = HT(m).val
d_data = d.val
plt.figure(figsize=(15,10))
plt.plot(s_data, 'r', label="Signal", linewidth=3)
plt.plot(d_data, 'k.', label="Data")
plt.plot(m_data, 'k', label="Reconstruction",linewidth=3)
plt.title("Reconstruction")
plt.legend()
plt.show()
```
%% Cell type:code id: tags:
``` python
plt.figure(figsize=(15,10))
plt.plot(s_data - s_data, 'r', label="Signal", linewidth=3)
plt.plot(d_data - s_data, 'k.', label="Data")
plt.plot(m_data - s_data, 'k', label="Reconstruction",linewidth=3)
plt.axhspan(-noise_amplitude,noise_amplitude, facecolor='0.9', alpha=.5)
plt.title("Residuals")
plt.legend()
plt.show()
```
%% Cell type:markdown id: tags:
### Power Spectrum
%% Cell type:code id: tags:
``` python
s_power_data = ift.power_analyze(sh).val
m_power_data = ift.power_analyze(m).val
plt.figure(figsize=(15,10))
plt.loglog()
plt.xlim(1, int(N_pixels/2))
ymin = min(m_power_data)
plt.ylim(ymin, 1)
xs = np.arange(1,int(N_pixels/2),.1)
plt.plot(xs, pow_spec(xs), label="True Power Spectrum", color='k',alpha=0.5)
plt.plot(s_power_data, 'r', label="Signal")
plt.plot(m_power_data, 'k', label="Reconstruction")
plt.axhline(noise_amplitude**2 / N_pixels, color="k", linestyle='--', label="Noise level", alpha=.5)
plt.axhspan(noise_amplitude**2 / N_pixels, ymin, facecolor='0.9', alpha=.5)
plt.title("Power Spectrum")
plt.legend()
plt.show()
```
%% Cell type:markdown id: tags:
## Wiener Filter on Incomplete Data
%% Cell type:code id: tags:
``` python
# Operators
Sh = ift.create_power_operator(h_space, power_spectrum=pow_spec)
N = ift.ScalingOperator(s_space, noise_amplitude**2)
# R is defined below
# Fields
sh = Sh.draw_sample()
s = HT(sh)
n = ift.Field.from_random(domain=s_space, random_type='normal',
                          std=noise_amplitude, mean=0)
```
%% Cell type:markdown id: tags:
### Partially Lose Data
%% Cell type:code id: tags:
``` python
l = int(N_pixels * 0.2)
h = int(N_pixels * 0.2 * 2)
mask = np.full(s_space.shape, 1.)
mask[l:h] = 0
mask = ift.Field.from_raw(s_space, mask)
R = ift.DiagonalOperator(mask)(HT)
n = n.val_rw()
n[l:h] = 0
n = ift.Field.from_raw(s_space, n)
d = R(sh) + n
```
%% Cell type:code id: tags:
``` python
curv = Curvature(R=R, N=N, Sh=Sh)
D = curv.inverse
j = R.adjoint_times(N.inverse_times(d))
m = D(j)
```
%% Cell type:markdown id: tags:
### Compute Uncertainty
%% Cell type:code id: tags:
``` python
m_mean, m_var = ift.probe_with_posterior_samples(curv, HT, 200)
```
%% Cell type:markdown id: tags:
### Get data
%% Cell type:code id: tags:
``` python
# Get signal data and reconstruction data
s_data = s.val
m_data = HT(m).val
m_var_data = m_var.val
uncertainty = np.sqrt(m_var_data)
d_data = d.val_rw()
# Set lost data to NaN for proper plotting
d_data[d_data == 0] = np.nan
```
%% Cell type:code id: tags:
``` python
fig = plt.figure(figsize=(15,10))
plt.axvspan(l, h, facecolor='0.8',alpha=0.5)
plt.fill_between(range(N_pixels), m_data - uncertainty, m_data + uncertainty, facecolor='0.5', alpha=0.5)
plt.plot(s_data, 'r', label="Signal", alpha=1, linewidth=3)
plt.plot(d_data, 'k.', label="Data")
plt.plot(m_data, 'k', label="Reconstruction", linewidth=3)
plt.title("Reconstruction of incomplete data")
plt.legend()
```
%% Cell type:markdown id: tags:
# 2d Example
%% Cell type:code id: tags:
``` python
N_pixels = 256  # Number of pixels
sigma2 = 2.     # Noise variance

def pow_spec(k):
    P0, k0, gamma = [.2, 2, 4]
    return P0 * (1. + (k/k0)**2)**(-gamma/2)

s_space = ift.RGSpace([N_pixels, N_pixels])
```
%% Cell type:code id: tags:
``` python
h_space = s_space.get_default_codomain()
HT = ift.HarmonicTransformOperator(h_space,s_space)
# Operators
Sh = ift.create_power_operator(h_space, power_spectrum=pow_spec)
N = ift.ScalingOperator(s_space, sigma2)
# Fields and data
sh = Sh.draw_sample()
n = ift.Field.from_random(domain=s_space, random_type='normal',
                          std=np.sqrt(sigma2), mean=0)
# Lose some data
l = int(N_pixels * 0.33)
h = int(N_pixels * 0.33 * 2)
mask = np.full(s_space.shape, 1.)
mask[l:h,l:h] = 0.
mask = ift.Field.from_raw(s_space, mask)
R = ift.DiagonalOperator(mask)(HT)
n = n.val_rw()
n[l:h, l:h] = 0
n = ift.Field.from_raw(s_space, n)
curv = Curvature(R=R, N=N, Sh=Sh)
D = curv.inverse
d = R(sh) + n
j = R.adjoint_times(N.inverse_times(d))
# Run Wiener filter
m = D(j)
# Uncertainty
m_mean, m_var = ift.probe_with_posterior_samples(curv, HT, 20)
# Get data
s_data = HT(sh).val
m_data = HT(m).val
m_var_data = m_var.val
d_data = d.val
uncertainty = np.sqrt(np.abs(m_var_data))
```
%% Cell type:code id: tags:
``` python
cm = ['magma', 'inferno', 'plasma', 'viridis'][1]
mi = np.min(s_data)
ma = np.max(s_data)
fig, axes = plt.subplots(1, 2, figsize=(15, 7))
data = [s_data, d_data]
caption = ["Signal", "Data"]
for ax in axes.flat:
    im = ax.imshow(data.pop(0), interpolation='nearest', cmap=cm, vmin=mi,
                   vmax=ma)
    ax.set_title(caption.pop(0))
fig.subplots_adjust(right=0.8)
cbar_ax = fig.add_axes([0.85, 0.15, 0.05, 0.7])
fig.colorbar(im, cax=cbar_ax)
```
%% Cell type:code id: tags:
``` python
mi = np.min(s_data)
ma = np.max(s_data)
fig, axes = plt.subplots(3, 2, figsize=(15, 22.5))
sample = HT(curv.draw_sample(from_inverse=True)+m).val
post_mean = (m_mean + HT(m)).val
data = [s_data, m_data, post_mean, sample, s_data - m_data, uncertainty]
caption = ["Signal", "Reconstruction", "Posterior mean", "Sample", "Residuals", "Uncertainty Map"]
for ax in axes.flat:
    im = ax.imshow(data.pop(0), interpolation='nearest', cmap=cm, vmin=mi, vmax=ma)
    ax.set_title(caption.pop(0))
fig.subplots_adjust(right=0.8)
cbar_ax = fig.add_axes([.85, 0.15, 0.05, 0.7])
fig.colorbar(im, cax=cbar_ax)
```
%% Cell type:markdown id: tags:
### Is the uncertainty map reliable?
%% Cell type:code id: tags:
``` python
precise = (np.abs(s_data-m_data) < uncertainty)
print("Error within uncertainty map bounds: " + str(np.sum(precise) * 100 / N_pixels**2) + "%")
plt.figure(figsize=(15,10))
plt.imshow(precise.astype(float), cmap="brg")
plt.colorbar()
```
%% Cell type:markdown id: tags:
# Start Coding
## NIFTy Repository + Installation guide
https://gitlab.mpcdf.mpg.de/ift/NIFTy
NIFTy v5 is **more or less stable**!
......
......@@ -5,8 +5,6 @@ import numpy as np
import nifty6 as ift
np.random.seed(40)
N0s, a0s, b0s, c0s = [], [], [], []
for ii in range(10, 26):
......@@ -15,15 +13,15 @@ for ii in range(10, 26):
N = int(2**ii)
print('N = {}'.format(N))
uv = np.random.rand(N, 2) - 0.5
vis = np.random.randn(N) + 1j*np.random.randn(N)
rng = ift.random.current_rng()
uv = rng.uniform(-.5, .5, (N,2))
vis = rng.normal(0., 1., N) + 1j*rng.normal(0., 1., N)
uvspace = ift.RGSpace((nu, nv))
visspace = ift.UnstructuredDomain(N)
img = np.random.randn(nu*nv)
img = img.reshape((nu, nv))
img = rng.standard_normal((nu, nv))
img = ift.makeField(uvspace, img)
t0 = time()
......
......@@ -27,8 +27,6 @@ import numpy as np
import nifty6 as ift
if __name__ == '__main__':
np.random.seed(41)
# Set up the position space of the signal
mode = 2
if mode == 0:
......@@ -62,7 +60,7 @@ if __name__ == '__main__':
p = R(sky)
mock_position = ift.from_random('normal', harmonic_space)
tmp = p(mock_position).val.astype(np.float64)
data = np.random.binomial(1, tmp)
data = ift.random.current_rng().binomial(1, tmp)
data = ift.Field.from_raw(R.target, data)
# Compute likelihood and Hamiltonian
......
......@@ -46,8 +46,6 @@ def make_random_mask():
if __name__ == '__main__':
np.random.seed(42)
# Choose space on which the signal field is defined
if len(sys.argv) == 2:
mode = int(sys.argv[1])
......
......@@ -44,8 +44,6 @@ def exposure_2d():
if __name__ == '__main__':
np.random.seed(42)
# Choose space on which the signal field is defined
if len(sys.argv) == 2:
mode = int(sys.argv[1])
......@@ -94,7 +92,7 @@ if __name__ == '__main__':
lamb = R(sky)
mock_position = ift.from_random('normal', domain)
data = lamb(mock_position)
data = np.random.poisson(data.val.astype(np.float64))
data = ift.random.current_rng().poisson(data.val.astype(np.float64))
data = ift.Field.from_raw(d_space, data)
likelihood = ift.PoissonianEnergy(data) @ lamb
......
......@@ -33,20 +33,18 @@ import nifty6 as ift
def random_los(n_los):
starts = list(np.random.uniform(0, 1, (n_los, 2)).T)
ends = list(np.random.uniform(0, 1, (n_los, 2)).T)
starts = list(ift.random.current_rng().random((n_los, 2)).T)
ends = list(ift.random.current_rng().random((n_los, 2)).T)
return starts, ends
def radial_los(n_los):
starts = list(np.random.uniform(0, 1, (n_los, 2)).T)
ends = list(0.5 + 0*np.random.uniform(0, 1, (n_los, 2)).T)
starts = list(ift.random.current_rng().random((n_los, 2)).T)
ends = list(0.5 + 0*ift.random.current_rng().random((n_los, 2)).T)
return starts, ends
if __name__ == '__main__':
np.random.seed(420)
# Choose between random line-of-sight response (mode=0) and radial lines
# of sight (mode=1)
if len(sys.argv) == 2:
......
......@@ -44,20 +44,18 @@ class SingleDomain(ift.LinearOperator):
def random_los(n_los):
starts = list(np.random.uniform(0, 1, (n_los, 2)).T)
ends = list(np.random.uniform(0, 1, (n_los, 2)).T)
starts = list(ift.random.current_rng().random((n_los, 2)).T)
ends = list(ift.random.current_rng().random((n_los, 2)).T)
return starts, ends
def radial_los(n_los):
starts = list(np.random.uniform(0, 1, (n_los, 2)).T)
ends = list(0.5 + 0*np.random.uniform(0, 1, (n_los, 2)).T)
starts = list(ift.random.current_rng().random((n_los, 2)).T)
ends = list(0.5 + 0*ift.random.current_rng().random((n_los, 2)).T)
return starts, ends
if __name__ == '__main__':
np.random.seed(43)
# Choose between random line-of-sight response (mode=0) and radial lines
# of sight (mode=1)
if len(sys.argv) == 2:
......
......@@ -19,7 +19,6 @@ import matplotlib.pyplot as plt
import numpy as np
import nifty6 as ift
np.random.seed(12)
def polynomial(coefficients, sampling_points):
......@@ -86,7 +85,7 @@ if __name__ == '__main__':
N_params = 10
N_samples = 100
size = (12,)
x = np.random.random(size) * 10
x = ift.random.current_rng().random(size) * 10
y = np.sin(x**2) * x**3
var = np.full_like(y, y.var() / 10)
var[-2] *= 4
......
from .version import __version__
from . import random
from .domains.domain import Domain
from .domains.structured_domain import StructuredDomain
from .domains.unstructured_domain import UnstructuredDomain
......
......@@ -25,6 +25,7 @@ from ..field import Field
from ..linearization import Linearization
from ..operators.operator import Operator
from ..sugar import makeOp
from .. import random
def _f_on_np(f, arr):
......@@ -67,7 +68,8 @@ class _InterpolationOperator(Operator):
if table_func is not None:
if inv_table_func is None:
raise ValueError
a = func(np.random.randn(10))
# MR FIXME: not sure whether we should have this in production code
a = func(random.current_rng().random(10))
a1 = _f_on_np(lambda x: inv_table_func(table_func(x)), a)
np.testing.assert_allclose(a, a1)
self._table = _f_on_np(table_func, self._table)
......
......@@ -11,7 +11,7 @@
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
# Copyright(C) 2013-2019 Max-Planck-Society
# Copyright(C) 2013-2020 Max-Planck-Society
#
# NIFTy is being developed at the Max-Planck-Institut fuer Astrophysik.
......@@ -113,10 +113,6 @@ class MetricGaussianKL_MPI(Energy):
preconditioning is done by default.
_samples : None
Only a parameter for internal uses. Typically not to be set by users.
seed_offset : int
A parameter with which one can controll from which seed the samples
are drawn. Per default, the seed is different for MPI tasks, but the
same every time this class is initialized.
Note
----
......@@ -132,7 +128,7 @@ class MetricGaussianKL_MPI(Energy):
def __init__(self, mean, hamiltonian, n_samples, constants=[],
point_estimates=[], mirror_samples=False,
napprox=0, _samples=None, seed_offset=0,
napprox=0, _samples=None,
lh_sampling_dtype=np.float64):
super(MetricGaussianKL_MPI, self).__init__(mean)
......@@ -159,10 +155,10 @@ class MetricGaussianKL_MPI(Energy):
if napprox > 1:
met._approximation = makeOp(approximation2endo(met, napprox))
_samples = []
rand_state = np.random.get_state()
sseq = random.spawn_sseq(n_samples)
for i in range(lo, hi):
if mirror_samples:
np.random.seed(i//2+seed_offset)
random.push_sseq(sseq[i//2])
if (i % 2) and (i-1 >= lo):
_samples.append(-_samples[-1])
......@@ -170,16 +166,16 @@ class MetricGaussianKL_MPI(Energy):
_samples.append(((i % 2)*2-1) *
met.draw_sample(from_inverse=True,
dtype=lh_sampling_dtype))
random.pop_sseq()
else:
np.random.seed(i+seed_offset)
random.push_sseq(sseq[i])
_samples.append(met.draw_sample(from_inverse=True,
dtype=lh_sampling_dtype))
np.random.set_state(rand_state)
random.pop_sseq()
_samples = tuple(_samples)
if mirror_samples:
n_samples *= 2
self._samples = _samples
self._seed_offset = seed_offset
self._n_samples = n_samples
self._lin = Linearization.make_partial_var(mean, constants)
v, g = None, None
......@@ -205,7 +201,7 @@ class MetricGaussianKL_MPI(Energy):
return MetricGaussianKL_MPI(
position, self._hamiltonian, self._n_samples, self._constants,
self._point_estimates, _samples=self._samples,
seed_offset=self._seed_offset, lh_sampling_dtype=self._sampdt)
lh_sampling_dtype=self._sampdt)
@property
def value(self):
......@@ -245,11 +241,13 @@ class MetricGaussianKL_MPI(Energy):
raise NotImplementedError()
lin = self._lin.with_want_metric()
samp = full(self._hamiltonian.domain, 0.)
rand_state = np.random.get_state()
np.random.seed(rank+np.random.randint(99999))
for v in self._samples:
sseq = random.spawn_sseq(n_samples)
for i, v in enumerate(self._samples):
# FIXME: this is not yet correct. We need to use the _global_ sample
# index, not the local one!
random.push_sseq(sseq[i])
samp = samp + self._hamiltonian(lin+v).metric.draw_sample(from_inverse=False, dtype=dtype)
np.random.set_state(rand_state)
random.pop_sseq()
return allreduce_sum_field(samp)
def metric_sample(self, from_inverse=False, dtype=np.float64):
......
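The `mirror_samples` branch above reuses each random draw with a flipped sign (antithetic sampling), so both partners of a pair come from a single seed. A hypothetical standalone sketch of that idea, detached from the MPI and metric machinery:

``` python
import numpy as np

# Sketch of mirrored ("antithetic") sampling: every draw is paired with
# its negation, doubling the effective sample count per random draw.
rng = np.random.default_rng(0)
n_samples = 3
draws = [rng.normal(size=4) for _ in range(n_samples)]

samples = []
for d in draws:
    samples.append(d)
    samples.append(-d)  # mirrored partner, no extra random draw needed
```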
......@@ -11,21 +11,50 @@
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
# Copyright(C) 2013-2019 Max-Planck-Society
# Copyright(C) 2013-2020 Max-Planck-Society
#
# NIFTy is being developed at the Max-Planck-Institut fuer Astrophysik.
import numpy as np
_sseq = [np.random.SeedSequence(42)]
_rng = [np.random.default_rng(_sseq[-1])]
def spawn_sseq(n, parent=None):
    if parent is None:
        global _sseq
        parent = _sseq[-1]
    return parent.spawn(n)


def current_rng():
    return _rng[-1]


def push_sseq(sseq):
    _sseq.append(sseq)
    _rng.append(np.random.default_rng(_sseq[-1]))


def push_sseq_from_seed(seed):
    _sseq.append(np.random.SeedSequence(seed))
    _rng.append(np.random.default_rng(_sseq[-1]))


def pop_sseq():
    _sseq.pop()
    _rng.pop()
class Random(object):
@staticmethod
def pm1(dtype, shape):
if np.issubdtype(dtype, np.complexfloating):
x = np.array([1+0j, 0+1j, -1+0j, 0-1j], dtype=dtype)
x = x[np.random.randint(4, size=shape)]
x = x[_rng[-1].integers(0, 4, size=shape)]
else:
x = 2*np.random.randint(2, size=shape) - 1
x = 2*_rng[-1].integers(0, 2, size=shape)-1
return x.astype(dtype, copy=False)
@staticmethod
......@@ -42,10 +71,10 @@ class Random(object):
raise TypeError("mean must not be complex for a real result field")
if np.issubdtype(dtype, np.complexfloating):
x = np.empty(shape, dtype=dtype)
x.real = np.random.normal(mean.real, std*np.sqrt(0.5), shape)
x.imag = np.random.normal(mean.imag, std*np.sqrt(0.5), shape)
x.real = _rng[-1].normal(mean.real, std*np.sqrt(0.5), shape)
x.imag = _rng[-1].normal(mean.imag, std*np.sqrt(0.5), shape)
else:
x = np.random.normal(mean, std, shape).astype(dtype, copy=False)
x = _rng[-1].normal(mean, std, shape).astype(dtype, copy=False)
return x
@staticmethod
......@@ -57,13 +86,13 @@ class Random(object):
raise TypeError("low and high must not be complex")
if np.issubdtype(dtype, np.complexfloating):
x = np.empty(shape, dtype=dtype)
x.real = np.random.uniform(low, high, shape)
x.imag = np.random.uniform(low, high, shape)
x.real = _rng[-1].uniform(low, high, shape)
x.imag = _rng[-1].uniform(low, high, shape)
elif np.issubdtype(dtype, np.integer):
if not (np.issubdtype(type(low), np.integer) and
np.issubdtype(type(high), np.integer)):
raise TypeError("low and high must be integer")
x = np.random.randint(low, high+1, shape)
x = _rng[-1].integers(low, high+1, shape)
else:
x = np.random.uniform(low, high, shape)
x = _rng[-1].uniform(low, high, shape)
return x.astype(dtype, copy=False)
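The pattern running through this commit replaces legacy `np.random.*` calls with the new-style `Generator` API (hence the `numpy>=1.17` requirement in setup.py). A quick reference of the substitutions, using a hypothetical local `rng`:

``` python
import numpy as np

rng = np.random.default_rng(42)

# Generator.integers excludes the upper bound by default, matching randint.
x = rng.integers(0, 4, size=(5,))       # replaces np.random.randint(4, size=(5,))
g = rng.normal(0., 1., (3, 3))          # replaces np.random.normal(0., 1., (3, 3))
u = rng.uniform(-.5, .5, (10, 2))       # replaces np.random.rand(10, 2) - 0.5
p = rng.standard_normal((4, 4))         # replaces np.random.randn(4, 4)
```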
......@@ -39,8 +39,8 @@ setup(name="nifty6",
packages=find_packages(include=["nifty6", "nifty6.*"]),
zip_safe=True,
license="GPLv3",
setup_requires=['scipy>=1.4.1'],
install_requires=['scipy>=1.4.1'],
setup_requires=['scipy>=1.4.1', 'numpy>=1.17'],
install_requires=['scipy>=1.4.1', 'numpy>=1.17'],
python_requires='>=3.5',
classifiers=[
"Development Status :: 4 - Beta",
......
......@@ -37,9 +37,11 @@ pmp = pytest.mark.parametrize
@pytest.fixture(params=PARAMS)
def field(request):
np.random.seed(request.param[0])
ift.random.push_sseq_from_seed(request.param[0])
S = ift.ScalingOperator(request.param[1], 1.)
return S.draw_sample()
res = S.draw_sample()
ift.random.pop_sseq()
return res
def test_gaussian(field):
......@@ -102,7 +104,7 @@ def test_inverse_gamma(field):
return
field = field.exp()
space = field.domain
d = np.random.normal(10, size=space.shape)**2
d = ift.random.current_rng().normal(10, size=space.shape)**2
d = ift.Field(space, d)
energy = ift.InverseGammaLikelihood(d)
ift.extra.check_jacobian_consistency(energy, field, tol=1e-5)
......@@ -113,7 +115,7 @@ def testPoissonian(field):
return
field = field.exp()
space = field.domain
d = np.random.poisson(120, size=space.shape)
d = ift.random.current_rng().poisson(120, size=space.shape)
d = ift.Field(space, d)
energy = ift.PoissonianEnergy(d)
ift.extra.check_jacobian_consistency(energy, field, tol=1e-7)
......@@ -124,7 +126,7 @@ def test_bernoulli(field):
return
field = field.sigmoid()
space = field.domain
d = np.random.binomial(1, 0.1, size=space.shape)
d = ift.random.current_rng().binomial(1, 0.1, size=space.shape)
d = ift.Field(space, d)
energy = ift.BernoulliEnergy(d)
ift.extra.check_jacobian_consistency(energy, field, tol=1e-5)
......@@ -51,8 +51,6 @@ def _spec2(k):
])
@pmp('space2', [ift.RGSpace((8,), harmonic=True), ift.LMSpace(12)])
def test_power_synthesize_analyze(space1, space2):
np.random.seed(11)
p1 = ift.PowerSpace(space1)
fp1 = ift.PS_field(p1, _spec1)
p2 = ift.PowerSpace(space2)
......@@ -82,8 +80,6 @@ def test_power_synthesize_analyze(space1, space2):
])
@pmp('space2', [ift.RGSpace((8,), harmonic=True), ift.LMSpace(12)])
def test_DiagonalOperator_power_analyze2(space1, space2):
np.random.seed(11)
fp1 = ift.PS_field(ift.PowerSpace(space1), _spec1)
fp2 = ift.PS_field(ift.PowerSpace(space2), _spec2)
......
......@@ -37,7 +37,7 @@ pmp = pytest.mark.parametrize
@pmp('noise', [1, 1e-2, 1e2])
@pmp('seed', [4, 78, 23])
def test_gaussian_energy(space, nonlinearity, noise, seed):
np.random.seed(seed)
ift.random.push_sseq_from_seed(seed)
dim = len(space.shape)
hspace = space.get_default_codomain()
ht = ift.HarmonicTransformOperator(hspace, target=space)
......@@ -45,6 +45,7 @@ def test_gaussian_energy(space, nonlinearity, noise, seed):
pspace = ift.PowerSpace(hspace, binbounds=binbounds)
Dist = ift.PowerDistributor(target=hspace, power_space=pspace)
xi0 = ift.Field.from_random(domain=hspace, random_type='normal')
ift.random.pop_sseq()
def pspec(k):
return 1/(1 + k**2)**dim
......
......@@ -28,7 +28,6 @@ pmp = pytest.mark.parametrize
@pmp('point_estimates', ([], ['a'], ['b'], ['a', 'b']))
@pmp('mirror_samples', (True, False))
def test_kl(constants, point_estimates, mirror_samples):
np.random.seed(42)
dom = ift.RGSpace((12,), (2.12))
op0 = ift.HarmonicSmoothingOperator(dom, 3)
op = ift.ducktape(dom, None, 'a')*(op0.ducktape('b'))
......
......@@ -51,7 +51,6 @@ slow_minimizers = ['ift.SteepestDescent(IC)']
slow_minimizers)
@pmp('space', spaces)
def test_quadratic_minimization(minimizer, space):
np.random.seed(42)
starting_point = ift.Field.from_random('normal', domain=space)*10
covariance_diagonal = ift.Field.from_random('uniform', domain=space) + 0.5
covariance = ift.DiagonalOperator(covariance_diagonal)
......@@ -76,7 +75,6 @@ def test_quadratic_minimization(minimizer, space):
@pmp('space', spaces)
def test_WF_curvature(space):
np.random.seed(42)
required_result = ift.full(space, 1.)
s = ift.Field.from_random('uniform', domain=space) + 0.5
......@@ -121,7 +119,6 @@ def test_rosenbrock(minimizer):
from scipy.optimize import rosen, rosen_der, rosen_hess_prod
except ImportError:
raise SkipTest