ift / NIFTy · Commit 9d7ae618
authored Jun 06, 2021 by Lukas Platz

Merge branch 'NIFTy_7' into flexible_FieldZeroPadder

Parents: e7b34733, 4fdf19db
Pipeline #102976 passed with stages in 13 minutes and 40 seconds
Changes: 49 files

.gitlab-ci.yml

@@ -5,11 +5,20 @@ variables:
   OMP_NUM_THREADS: 1
 
 stages:
+  - static_checks
   - build_docker
   - test
   - release
   - demo_runs
 
+check_no_asserts:
+  image: debian:testing-slim
+  stage: static_checks
+  before_script:
+    - ls
+  script:
+    - if [ `grep -r "^[[:space:]]*assert[ (]" src demos | wc -l` -ne 0 ]; then echo "Have found assert statements. Don't use them! Use \`utilities.myassert\` instead." && exit 1; fi
+
 build_docker_from_scratch:
   only:
     - schedules
@@ -110,6 +119,14 @@ run_getting_started_mf:
     paths:
       - '*.png'
 
+run_getting_density:
+  stage: demo_runs
+  script:
+    - python3 demos/getting_started_density.py
+  artifacts:
+    paths:
+      - '*.png'
+
 run_bernoulli:
   stage: demo_runs
   script:
@@ -126,7 +143,7 @@ run_curve_fitting:
     paths:
       - '*.png'
 
-run_visual_mgvi:
+run_visual_vi:
   stage: demo_runs
   script:
-    - python3 demos/mgvi_visualized.py
+    - python3 demos/variational_inference_visualized.py

ChangeLog.md

Changes since NIFTy 6
=====================

New parametric amplitude model
------------------------------

The `ift.CorrelatedFieldMaker` now features two amplitude models. In addition
to the non-parametric one, one may choose to use a Matern kernel instead. The
method is aptly named `add_fluctuations_matern`. The major advantage of the
parametric model is its more intuitive scaling with the size of the position
space.
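
As a rough sketch of how the Matern variant slots into the builder pattern used
elsewhere in this commit; the keyword names `scale`, `cutoff` and `loglogslope`
are assumptions for illustration and are not taken from this diff:

```python
import nifty7 as ift

position_space = ift.RGSpace(256)

cfmaker = ift.CorrelatedFieldMaker('matern_')
# Hypothetical parameter names: each Matern kernel property is given as a
# (mean, stddev) tuple, analogous to add_fluctuations.
cfmaker.add_fluctuations_matern(position_space,
                                scale=(1., 0.5),
                                cutoff=(1., 0.5),
                                loglogslope=(-4., 0.5))
cfmaker.set_amplitude_total_offset(0., (1e-2, 1e-6))
correlated_field = cfmaker.finalize()
```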

CorrelatedFieldMaker interface change
-------------------------------------

The interface of `ift.CorrelatedFieldMaker` changed and instances of it may now
be instantiated directly without the previously required `make` method. Upon
initialization, no zero-mode must be specified, as the normalization for the
different axes of the power respectively amplitude spectrum now only happens
once, in the `finalize` method. There is now a new call named
`set_amplitude_total_offset` to set the zero-mode. The method accepts either an
instance of `ift.Operator` or a tuple parameterizing a log-normal parameter.
Methods which require the zero-mode to be set raise a `NotImplementedError` if
invoked prior to having specified a zero-mode.

Furthermore, the interface of `ift.CorrelatedFieldMaker.add_fluctuations`
changed; it now expects the mean and the standard deviation of its various
parameters not as separate arguments but as a tuple. The same applies to all
new and renamed methods of the `CorrelatedFieldMaker` class.
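
A minimal sketch of the new builder pattern, following the updated
`demos/getting_started_5_mf.py` in this commit (the domain sizes and the
fluctuation tuples below are illustrative):

```python
import nifty7 as ift

sp1 = ift.RGSpace(64)
sp2 = ift.RGSpace(128)

# Instantiate directly; only the prefix is passed to the constructor.
cfmaker = ift.CorrelatedFieldMaker('')
# Mean and standard deviation of each parameter are passed as tuples.
cfmaker.add_fluctuations(sp1, (0.1, 1e-2), (2, .2), (.01, .5), (-4, 2.), 'amp1')
cfmaker.add_fluctuations(sp2, (0.1, 1e-2), (2, .2), (.01, .5), (-3, 1), 'amp2')
# The zero-mode is set explicitly before finalize(); methods that need it
# raise NotImplementedError if it has not been specified.
cfmaker.set_amplitude_total_offset(0., (1e-2, 1e-6))
correlated_field = cfmaker.finalize()
```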

Furthermore, it is now possible to disable the asperity, and the flexibility
together with the asperity, in the correlated field model. Note that disabling
...

...

The implementation tests for nonlinear operators are now available in
`ift.extra.check_operator()` and for linear operators
`ift.extra.check_linear_operator()`.

MetricGaussianKL interface
--------------------------

Users do not instantiate `MetricGaussianKL` by its constructor anymore. Rather,
`MetricGaussianKL.make()` shall be used. Additionally, `mirror_samples` is not
set by default anymore.
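
For illustration, the corresponding call in the updated
`demos/getting_started_5_mf.py` (shown at the end of this page) switches from
the constructor to the factory method; `mean`, `H`, `N_samples` and `minimizer`
are the latent mean, the information Hamiltonian, the sample count and the
Newton minimizer from that demo:

```python
# Old: KL = ift.MetricGaussianKL(mean, H, N_samples, True)
# New: the factory method is used; mirror_samples (the final True)
# is passed explicitly since it is no longer set by default.
KL = ift.MetricGaussianKL.make(mean, H, N_samples, True)
KL, convergence = minimizer(KL)
mean = KL.position
```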

GeoMetricKL
-----------

A new posterior approximation scheme, called geometric Variational Inference
(geoVI), was introduced. `GeoMetricKL` extends `MetricGaussianKL` in the sense
that it uses (non-linear) geoVI samples instead of (linear) MGVI samples.
`GeoMetricKL` can be configured such that it reduces to `MetricGaussianKL`.
`GeoMetricKL` is now used in `demos/getting_started_3.py` and a visual
comparison to MGVI can be found in `demos/variational_inference_visualized.py`.
For further details see <https://arxiv.org/abs/2105.10470>.
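
The switch in `demos/getting_started_3.py` (full diff below) illustrates the
intended usage; a dedicated nonlinear sampling minimizer is passed in addition
to the arguments `MetricGaussianKL.make()` takes, and the remaining names come
from that demo:

```python
# Controller and minimizer used to draw the nonlinear geoVI samples.
ic_sampling_nl = ift.AbsDeltaEnergyController(name='Sampling (nonlin)',
    deltaE=0.5, iteration_limit=15, convergence_level=2)
minimizer_sampling = ift.NewtonCG(ic_sampling_nl)

# Previously: KL = ift.MetricGaussianKL.make(mean, H, N_samples, True)
KL = ift.GeoMetricKL(mean, H, N_samples, minimizer_sampling, True)
KL, convergence = minimizer(KL)
```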

LikelihoodOperator
------------------

A new subclass of `EnergyOperator` was introduced and all `EnergyOperator`s
that are likelihoods are now `LikelihoodOperator`s. A `LikelihoodOperator` has
to implement the function `get_transformation`, which returns a coordinate
transformation in which the Fisher metric of the likelihood becomes the
identity matrix. This is needed for the `GeoMetricKL` algorithm.
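
In practice this is transparent to most users: built-in energies such as
`ift.GaussianEnergy` are now `LikelihoodOperator`s, so likelihood setups like
the one in `demos/getting_started_3.py` keep their shape. The composition with
`signal_response` and the `StandardHamiltonian` line below are how such demos
typically continue and are not part of the visible diff context:

```python
# A Gaussian likelihood composed with the instrument response; GaussianEnergy
# is now a LikelihoodOperator and therefore provides get_transformation().
likelihood = (ift.GaussianEnergy(mean=data, inverse_covariance=N.inverse) @
              signal_response)
H = ift.StandardHamiltonian(likelihood, ic_sampling)
```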

Changes since NIFTy 5

...

README.md

@@ -134,17 +134,32 @@ The NIFTy package is licensed under the terms of the
 *without any warranty*.
 
+Contributing
+------------
+
+Please note our convention not to use pure Python `assert` statements in
+production code. They are not guaranteed to be executed by Python and can be
+turned off by the user (`python -O` in cPython). As an alternative use
+`ift.myassert`.
+
 Contributors
 ------------
 
 ### NIFTy7
 
+- Andrija Kostic
 - Gordian Edenhofer
+- Jakob Knollmüller
+- Jakob Roth
 - Lukas Platz
+- Matteo Guardiani
 - Martin Reinecke
-- [Philipp Arras](https://wwwmpa.mpa-garching.mpg.de/~parras/)
+- [Philipp Arras](https://philipp-arras.de)
 - Philipp Frank
 - [Reimar Heinrich Leike](https://wwwmpa.mpa-garching.mpg.de/~reimar/)
+- Simon Ding
+- Vincent Eberle
 
 ### NIFTy6
...
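
A quick sketch of the convention described in the new Contributing section
above, which the `check_no_asserts` CI job enforces; it assumes `ift.myassert`
takes the condition to check, analogous to a bare `assert`, and the example
function itself is purely illustrative:

```python
import nifty7 as ift

def normalize(weights):
    # A plain `assert` would be stripped under `python -O`;
    # ift.myassert raises regardless of optimization flags.
    ift.myassert(len(weights) > 0)
    total = sum(weights)
    ift.myassert(total != 0)
    return [w / total for w in weights]

print(normalize([1., 3.]))
```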

demos/getting_started_3.py

@@ -11,7 +11,7 @@
 # You should have received a copy of the GNU General Public License
 # along with this program. If not, see <http://www.gnu.org/licenses/>.
 #
-# Copyright(C) 2013-2020 Max-Planck-Society
+# Copyright(C) 2013-2021 Max-Planck-Society
 #
 # NIFTy is being developed at the Max-Planck-Institut fuer Astrophysik.
@@ -31,6 +31,7 @@ import numpy as np
 import nifty7 as ift
 
+ift.random.push_sseq_from_seed(27)
 
 def random_los(n_los):
     starts = list(ift.random.current_rng().random((n_los, 2)).T)
@@ -63,16 +64,16 @@ def main():
         'offset_std': (1e-3, 1e-6),
 
         # Amplitude of field fluctuations
-        'fluctuations': (2., 1.),  # 1.0, 1e-2
+        'fluctuations': (1., 0.8),  # 1.0, 1e-2
 
         # Exponent of power law power spectrum component
-        'loglogavgslope': (-4., 1),  # -6.0, 1
+        'loglogavgslope': (-3., 1),  # -6.0, 1
 
         # Amplitude of integrated Wiener process power spectrum component
-        'flexibility': (5, 2.),  # 2.0, 1.0
+        'flexibility': (2, 1.),  # 1.0, 0.5
 
         # How ragged the integrated Wiener process component is
-        'asperity': (0.5, 0.5)  # 0.1, 0.5
+        'asperity': (0.5, 0.4)  # 0.1, 0.5
     }
 
     correlated_field = ift.SimpleCorrelatedField(position_space, **args)
@@ -89,7 +90,6 @@ def main():
     # Specify noise
     data_space = R.target
     noise = .001
-    sig = ift.ScalingOperator(data_space, np.sqrt(noise))
     N = ift.ScalingOperator(data_space, noise)
 
     # Generate mock signal and data
@@ -97,11 +97,14 @@ def main():
     data = signal_response(mock_position) + N.draw_sample_with_dtype(dtype=np.float64)
 
     # Minimization parameters
     ic_sampling = ift.AbsDeltaEnergyController(
-        deltaE=0.05, iteration_limit=100)
+        name="Sampling (linear)", deltaE=0.05, iteration_limit=100)
     ic_newton = ift.AbsDeltaEnergyController(
-        name='Newton', deltaE=0.5, iteration_limit=35)
+        name='Newton', deltaE=0.5, convergence_level=2, iteration_limit=35)
     minimizer = ift.NewtonCG(ic_newton)
+    ic_sampling_nl = ift.AbsDeltaEnergyController(name='Sampling (nonlin)',
+        deltaE=0.5, iteration_limit=15, convergence_level=2)
+    minimizer_sampling = ift.NewtonCG(ic_sampling_nl)
 
     # Set up likelihood and information Hamiltonian
     likelihood = (ift.GaussianEnergy(mean=data, inverse_covariance=N.inverse) @
@@ -112,18 +115,21 @@ def main():
     mean = initial_mean
 
     plot = ift.Plot()
-    plot.add(signal(mock_position), title='Ground Truth')
+    plot.add(signal(mock_position), title='Ground Truth', zmin=0, zmax=1)
     plot.add(R.adjoint_times(data), title='Data')
     plot.add([pspec.force(mock_position)], title='Power Spectrum')
     plot.output(ny=1, nx=3, xsize=24, ysize=6, name=filename.format("setup"))
 
     # number of samples used to estimate the KL
-    N_samples = 20
+    N_samples = 10
 
-    # Draw new samples to approximate the KL five times
-    for i in range(5):
+    # Draw new samples to approximate the KL six times
+    for i in range(6):
+        if i == 5:
+            # Double the number of samples in the last step for better statistics
+            N_samples = 2*N_samples
+
         # Draw new samples and minimize KL
-        KL = ift.MetricGaussianKL.make(mean, H, N_samples, True)
+        KL = ift.GeoMetricKL(mean, H, N_samples, minimizer_sampling, True)
         KL, convergence = minimizer(KL)
         mean = KL.position
         ift.extra.minisanity(data, lambda x: N.inverse, signal_response,
@@ -131,7 +137,7 @@ def main():
         # Plot current reconstruction
         plot = ift.Plot()
-        plot.add(signal(KL.position), title="Latent mean")
+        plot.add(signal(KL.position), title="Latent mean", zmin=0, zmax=1)
         plot.add([pspec.force(KL.position + ss) for ss in KL.samples],
                  title="Samples power spectrum")
         plot.output(ny=1, ysize=6, xsize=16,
@@ -144,16 +150,16 @@ def main():
     # Plotting
     filename_res = filename.format("results")
     plot = ift.Plot()
-    plot.add(sc.mean, title="Posterior Mean")
+    plot.add(sc.mean, title="Posterior Mean", zmin=0, zmax=1)
     plot.add(ift.sqrt(sc.var), title="Posterior Standard Deviation")
 
     powers = [pspec.force(s + KL.position) for s in KL.samples]
     sc = ift.StatCalculator()
     for pp in powers:
-        sc.add(pp)
+        sc.add(pp.log())
     plot.add(
         powers + [pspec.force(mock_position),
-                  pspec.force(KL.position), sc.mean],
+                  pspec.force(KL.position), sc.mean.exp()],
         title="Sampled Posterior Power Spectrum",
         linewidth=[1.]*len(powers) + [3., 3., 3.],
         label=[None]*len(powers) + ['Ground truth', 'Posterior latent mean', 'Posterior mean'])
...

demos/getting_started_4_CorrelatedFields.ipynb

@@ -87,7 +87,8 @@
 "main_sample = None\n",
 "def init_model(m_pars, fl_pars):\n",
 "    global main_sample\n",
-"    cf = ift.CorrelatedFieldMaker.make(**m_pars)\n",
+"    cf = ift.CorrelatedFieldMaker(m_pars[\"prefix\"])\n",
+"    cf.set_amplitude_total_offset(m_pars[\"offset_mean\"], m_pars[\"offset_std\"])\n",
 "    cf.add_fluctuations(**fl_pars)\n",
 "    field = cf.finalize(prior_info=0)\n",
 "    main_sample = ift.from_random(field.domain)\n",
@@ -95,7 +96,8 @@
 "    \n",
 "# Helper: field and spectrum from parameter dictionaries + plotting\n",
 "def eval_model(m_pars, fl_pars, title=None, samples=None):\n",
-"    cf = ift.CorrelatedFieldMaker.make(**m_pars)\n",
+"    cf = ift.CorrelatedFieldMaker(m_pars[\"prefix\"])\n",
+"    cf.set_amplitude_total_offset(m_pars[\"offset_mean\"], m_pars[\"offset_std\"])\n",
 "    cf.add_fluctuations(**fl_pars)\n",
 "    field = cf.finalize(prior_info=0)\n",
 "    spectrum = cf.amplitude\n",
@@ -473,7 +475,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.8.3"
+"version": "3.9.2"
 }
 },
 "nbformat": 4,
...

%% Cell type:markdown id: tags:

# Notebook showcasing the NIFTy 6 Correlated Field model

**Skip to `Parameter Showcases` for the meat/veggies ;)**

The field model roughly works like this:

`f = HT( A * zero_mode * xi ) + offset`

`A` is a spectral power field which is constructed from power spectra that hold on subdomains of the target domain.
It is scaled by a zero mode operator and then pointwise multiplied by a Gaussian excitation field, yielding
a representation of the field in harmonic space.
It is then transformed into the target real space and an offset is added.

The power spectra from which `A` is constructed are in turn built as the sum of a power law component
and an integrated Wiener process whose amplitude and roughness can be set.

## Setup code

%% Cell type:code id: tags:

```python
import nifty7 as ift
import matplotlib.pyplot as plt
import numpy as np

ift.random.push_sseq_from_seed(43)

n_pix = 256
x_space = ift.RGSpace(n_pix)
```

%% Cell type:code id: tags:

```python
# Plotting routine
def plot(fields, spectra, title=None):
    # Plotting preparation is normally handled by nifty7.Plot
    # It is done manually here to be able to tweak details
    # Fields are assumed to have identical domains
    fig = plt.figure(tight_layout=True, figsize=(12, 3.5))
    if title is not None:
        fig.suptitle(title, fontsize=14)

    # Field
    ax1 = fig.add_subplot(1, 2, 1)
    ax1.axhline(y=0., color='k', linestyle='--', alpha=0.25)
    for field in fields:
        dom = field.domain[0]
        xcoord = np.arange(dom.shape[0]) * dom.distances[0]
        ax1.plot(xcoord, field.val)
    ax1.set_xlim(xcoord[0], xcoord[-1])
    ax1.set_ylim(-5., 5.)
    ax1.set_xlabel('x')
    ax1.set_ylabel('f(x)')
    ax1.set_title('Field realizations')

    # Spectrum
    ax2 = fig.add_subplot(1, 2, 2)
    for spectrum in spectra:
        xcoord = spectrum.domain[0].k_lengths
        ycoord = spectrum.val_rw()
        ycoord[0] = ycoord[1]
        ax2.plot(xcoord, ycoord)
    ax2.set_ylim(1e-6, 10.)
    ax2.set_xscale('log')
    ax2.set_yscale('log')
    ax2.set_xlabel('k')
    ax2.set_ylabel('p(k)')
    ax2.set_title('Power Spectrum')

    fig.align_labels()
    plt.show()

# Helper: draw main sample
main_sample = None
def init_model(m_pars, fl_pars):
    global main_sample
    cf = ift.CorrelatedFieldMaker(m_pars["prefix"])
    cf.set_amplitude_total_offset(m_pars["offset_mean"], m_pars["offset_std"])
    cf.add_fluctuations(**fl_pars)
    field = cf.finalize(prior_info=0)
    main_sample = ift.from_random(field.domain)
    print("model domain keys:", field.domain.keys())

# Helper: field and spectrum from parameter dictionaries + plotting
def eval_model(m_pars, fl_pars, title=None, samples=None):
    cf = ift.CorrelatedFieldMaker(m_pars["prefix"])
    cf.set_amplitude_total_offset(m_pars["offset_mean"], m_pars["offset_std"])
    cf.add_fluctuations(**fl_pars)
    field = cf.finalize(prior_info=0)
    spectrum = cf.amplitude
    if samples is None:
        samples = [main_sample]
    field_realizations = [field(s) for s in samples]
    spectrum_realizations = [spectrum.force(s) for s in samples]
    plot(field_realizations, spectrum_realizations, title)

def gen_samples(key_to_vary):
    if key_to_vary is None:
        return [main_sample]
    dct = main_sample.to_dict()
    subdom_to_vary = dct.pop(key_to_vary).domain
    samples = []
    for i in range(8):
        d = dct.copy()
        d[key_to_vary] = ift.from_random(subdom_to_vary)
        samples.append(ift.MultiField.from_dict(d))
    return samples

def vary_parameter(parameter_key, values, samples_vary_in=None):
    s = gen_samples(samples_vary_in)
    for v in values:
        if parameter_key in cf_make_pars.keys():
            m_pars = {**cf_make_pars, parameter_key: v}
            eval_model(m_pars, cf_x_fluct_pars, f"{parameter_key} = {v}", s)
        else:
            fl_pars = {**cf_x_fluct_pars, parameter_key: v}
            eval_model(cf_make_pars, fl_pars, f"{parameter_key} = {v}", s)
```

%% Cell type:markdown id: tags:

## Before the Action: The Moment-Matched Log-Normal Distribution

Many properties of the correlated field are modelled as being lognormally distributed.

The distribution models are parametrized via their means and standard deviations (first and second position in the tuple).

To get a feeling of how the ratio of the `mean` and `stddev` parameters influences the distribution shape,
here are a few example histograms: (observe the x-axis!)

%% Cell type:code id: tags:

```python
fig = plt.figure(figsize=(13, 3.5))
mean = 1.0
sigmas = [1.0, 0.5, 0.1]

for i in range(3):
    op = ift.LognormalTransform(mean=mean, sigma=sigmas[i],
                                key='foo', N_copies=0)
    op_samples = np.array(
        [op(s).val for s in [ift.from_random(op.domain) for i in range(10000)]])

    ax = fig.add_subplot(1, 3, i + 1)
    ax.hist(op_samples, bins=50)
    ax.set_title(f"mean = {mean}, sigma = {sigmas[i]}")
    ax.set_xlabel('x')
    del op_samples

plt.show()
```

%% Cell type:markdown id: tags:

## The Neutral Field

To demonstrate the effect of all parameters, first a 'neutral' set of parameters
is defined which are then varied one by one, showing the effect of the variation
on the generated field realizations and the underlying power spectrum from which
they were drawn.

As a neutral field, a model with a white power spectrum and vanishing spectral power was chosen.

%% Cell type:code id: tags:

```python
# Neutral model parameters yielding a quasi-constant field
cf_make_pars = {
    'offset_mean': 0.,
    'offset_std': (1e-3, 1e-16),
    'prefix': ''
}

cf_x_fluct_pars = {
    'target_subdomain': x_space,
    'fluctuations': (1e-3, 1e-16),
    'flexibility': (1e-3, 1e-16),
    'asperity': (1e-3, 1e-16),
    'loglogavgslope': (0., 1e-16)
}

init_model(cf_make_pars, cf_x_fluct_pars)
```

%% Cell type:code id: tags:

```python
# Show neutral field
eval_model(cf_make_pars, cf_x_fluct_pars, "Neutral Field")
```

%% Cell type:markdown id: tags:

# Parameter Showcases

## The `fluctuations` parameters of `add_fluctuations()`

determine the **amplitude of variations along the field dimension**
for which `add_fluctuations` is called.

`fluctuations[0]` sets the _average_ amplitude of the field's fluctuations along the given dimension,\
`fluctuations[1]` sets the width and shape of the amplitude distribution.

The amplitude is modelled as being log-normally distributed,
see `The Moment-Matched Log-Normal Distribution` above for details.

#### `fluctuations` mean:

%% Cell type:code id: tags:

```python
vary_parameter('fluctuations', [(0.05, 1e-16), (0.5, 1e-16), (2., 1e-16)], samples_vary_in='xi')
```

%% Cell type:markdown id: tags:

#### `fluctuations` std:

%% Cell type:code id: tags:

```python
vary_parameter('fluctuations', [(1., 0.01), (1., 0.1), (1., 1.)], samples_vary_in='fluctuations')
cf_x_fluct_pars['fluctuations'] = (1., 1e-16)
```

%% Cell type:markdown id: tags:

## The `loglogavgslope` parameters of `add_fluctuations()`

determine **the slope of the loglog-linear (power law) component of the power spectrum**.

The slope is modelled to be normally distributed.

#### `loglogavgslope` mean:

%% Cell type:code id: tags:

```python
vary_parameter('loglogavgslope', [(-6., 1e-16), (-2., 1e-16), (2., 1e-16)], samples_vary_in='xi')
```

%% Cell type:markdown id: tags:

#### `loglogavgslope` std:

%% Cell type:code id: tags:

```python
vary_parameter('loglogavgslope', [(-2., 0.02), (-2., 0.2), (-2., 2.0)], samples_vary_in='loglogavgslope')
cf_x_fluct_pars['loglogavgslope'] = (-2., 1e-16)
```

%% Cell type:markdown id: tags:

## The `flexibility` parameters of `add_fluctuations()`

determine **the amplitude of the integrated Wiener process component of the power spectrum**
(how strongly the power spectrum varies besides the power law).

`flexibility[0]` sets the _average_ amplitude of the integrated Wiener process component,\
`flexibility[1]` sets how much the amplitude can vary.\
These two parameters feed into a moment-matched log-normal distribution model,
see above for a demo of its behavior.

#### `flexibility` mean:

%% Cell type:code id: tags:

```python
vary_parameter('flexibility', [(0.4, 1e-16), (4.0, 1e-16), (12.0, 1e-16)], samples_vary_in='spectrum')
```

%% Cell type:markdown id: tags:

#### `flexibility` std:

%% Cell type:code id: tags:

```python
vary_parameter('flexibility', [(4., 0.02), (4., 0.2), (4., 2.)], samples_vary_in='flexibility')
cf_x_fluct_pars['flexibility'] = (4., 1e-16)
```

%% Cell type:markdown id: tags:

## The `asperity` parameters of `add_fluctuations()`

`asperity` determines **how rough the integrated Wiener process component of the power spectrum is**.

`asperity[0]` sets the average roughness, `asperity[1]` sets how much the roughness can vary.\
These two parameters feed into a moment-matched log-normal distribution model,
see above for a demo of its behavior.

#### `asperity` mean:

%% Cell type:code id: tags:

```python
vary_parameter('asperity', [(0.001, 1e-16), (1.0, 1e-16), (5., 1e-16)], samples_vary_in='spectrum')
```

%% Cell type:markdown id: tags:

#### `asperity` std:

%% Cell type:code id: tags:

```python
vary_parameter('asperity', [(1., 0.01), (1., 0.1), (1., 1.)], samples_vary_in='asperity')
cf_x_fluct_pars['asperity'] = (1., 1e-16)
```

%% Cell type:markdown id: tags:

## The `offset_mean` parameter of `CorrelatedFieldMaker.make()`

The `offset_mean` parameter defines a global additive offset on the field realizations.

If the field is used for a lognormal model `f = field.exp()`, this acts as a global signal magnitude offset.

%% Cell type:code id: tags:

```python
# Reset model to neutral
cf_x_fluct_pars['fluctuations'] = (1e-3, 1e-16)
cf_x_fluct_pars['flexibility'] = (1e-3, 1e-16)
cf_x_fluct_pars['asperity'] = (1e-3, 1e-16)
cf_x_fluct_pars['loglogavgslope'] = (1e-3, 1e-16)
```

%% Cell type:code id: tags:

```python
vary_parameter('offset_mean', [3., 0., -2.])
```

%% Cell type:markdown id: tags:

## The `offset_std` parameters of `CorrelatedFieldMaker.make()`

Variations of the global offset of the field are modelled as being log-normally distributed.
See `The Moment-Matched Log-Normal Distribution` above for details.

The `offset_std[0]` parameter sets how much NIFTy will vary the offset *on average*.\
The `offset_std[1]` parameter defines the width and shape of the offset variation distribution.

#### `offset_std` mean:

%% Cell type:code id: tags:

```python
vary_parameter('offset_std', [(1e-16, 1e-16), (0.5, 1e-16), (2., 1e-16)], samples_vary_in='xi')
```

%% Cell type:markdown id: tags:

#### `offset_std` std:

%% Cell type:code id: tags:

```python
vary_parameter('offset_std', [(1., 0.01), (1., 0.1), (1., 1.)], samples_vary_in='zeromode')
```

...

demos/getting_started_5_mf.py

@@ -11,7 +11,7 @@
 # You should have received a copy of the GNU General Public License
 # along with this program. If not, see <http://www.gnu.org/licenses/>.
 #
-# Copyright(C) 2013-2020 Max-Planck-Society
+# Copyright(C) 2013-2021 Max-Planck-Society
 #
 # NIFTy is being developed at the Max-Planck-Institut fuer Astrophysik.
@@ -73,14 +73,17 @@ def main():
     sp2 = ift.RGSpace(npix2)
 
     # Set up signal model
-    cfmaker = ift.CorrelatedFieldMaker.make(0., (1e-2, 1e-6), '')
+    cfmaker = ift.CorrelatedFieldMaker('')
     cfmaker.add_fluctuations(sp1, (0.1, 1e-2), (2, .2), (.01, .5), (-4, 2.),
                              'amp1')
     cfmaker.add_fluctuations(sp2, (0.1, 1e-2), (2, .2), (.01, .5),
                              (-3, 1), 'amp2')
+    cfmaker.set_amplitude_total_offset(0., (1e-2, 1e-6))
     correlated_field = cfmaker.finalize()
 
-    pspec1 = cfmaker.normalized_amplitudes[0]**2
-    pspec2 = cfmaker.normalized_amplitudes[1]**2
+    normalized_amp = cfmaker.get_normalized_amplitudes()
+    pspec1 = normalized_amp[0]**2
+    pspec2 = normalized_amp[1]**2
     DC = SingleDomain(correlated_field.target, position_space)
 
     # Apply a nonlinearity
@@ -131,7 +134,7 @@ def main():
     for i in range(5):
         # Draw new samples and minimize KL
-        KL = ift.MetricGaussianKL(mean, H, N_samples, True)
+        KL = ift.MetricGaussianKL.make(mean, H, N_samples, True)