Neel Shah / NIFTy · Commits

Commit 24bf4f39, authored Jun 03, 2021 by Philipp Arras

Docs

parent 6be924b0
Changes: 2
src/minimization/energy_adapter.py
...
@@ -92,6 +92,20 @@ class EnergyAdapter(Energy):

class StochasticEnergyAdapter(Energy):
    """Provide the energy interface for an energy operator where parts of the
    input are averaged instead of optimized.

    Specifically, a set of standard normally distributed samples is drawn for
    the input corresponding to `keys` and each sample is inserted partially
    into `op`. The resulting operators are then averaged and represent a
    stochastic average of an energy, with the subdomain that is not sampled
    being the DOFs that are considered to be optimization parameters.

    Notes
    -----
    `StochasticEnergyAdapter` should never be created using the constructor,
    but rather via the factory function :attr:`make`.
    """

    def __init__(self, position, op, keys, local_ops, n_samples, comm,
                 nanisinf, _callingfrommake=False):
        if not _callingfrommake:
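The averaging described in the docstring can be illustrated outside of NIFTy. Below is a minimal NumPy sketch, not NIFTy API (`energy` and `stochastic_energy` are made-up names): standard normal samples are drawn for the averaged input, each is inserted partially into the energy, and the results are averaged, leaving the remaining variable as the optimization parameter.

```python
import numpy as np

rng = np.random.default_rng(0)

def energy(xi, theta):
    # Toy energy with a sampled input `xi` and an optimized input `theta`.
    return np.sum((theta - xi)**2)

def stochastic_energy(theta, n_samples=1000):
    # Draw standard normal samples for `xi` and average the resulting
    # energies; only `theta` remains as an optimization parameter.
    samples = rng.standard_normal((n_samples, theta.size))
    return np.mean([energy(xi, theta) for xi in samples])

theta = np.zeros(3)
val = stochastic_energy(theta)   # close to 3 = E[sum(xi**2)] for xi ~ N(0, 1)^3
```

In NIFTy itself, the partial insertion happens on operator level per `keys`; the sketch above only mirrors the averaging idea.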
...

@@ -148,16 +162,8 @@ class StochasticEnergyAdapter(Energy):

    @staticmethod
    def make(position, op, sampling_keys, n_samples, mirror_samples,
             comm=None, nanisinf=False):
"""A variant of `EnergyAdapter` that provides the energy interface for
"""Factory function for StochasticEnergyAdapter.
an operator with a scalar target where parts of the imput are averaged
instead of optmized.
Specifically, a set of standart normal distributed
samples are drawn for the input corresponding to `keys` and each sample
gets partially inserted into `op`. The resulting operators are averaged
and represent a stochastic average of an energy with the remaining
subdomain being the DOFs that are considered to be optimization parameters.
Parameters
Parameters


...

@@ -181,10 +187,9 @@ class StochasticEnergyAdapter(Energy):

            across this communicator. If `mirror_samples` is set, then a sample
            and its mirror image will always reside on the same task.
        nanisinf : bool
-            If true, nan energies which can happen due to overflows in the
-            forward model are interpreted as inf. Thereby, the code does not
-            crash on these occasions but rather the minimizer is told that the
-            position it has tried is not sensible.
+            If true, nan energies, which can occur due to overflows in the
+            forward model, are interpreted as inf which can be interpreted by
+            optimizers.
        """
        myassert(op.target == DomainTuple.scalar_domain())
        samdom = {}
...
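The `nanisinf` flag's effect can be sketched in isolation. This is an illustrative assumption of the mechanism, not NIFTy code: a nan energy (e.g. from an overflow in the forward model) is mapped to +inf, so a minimizer simply treats the position as maximally bad instead of mis-handling nan comparisons.

```python
import numpy as np

def value_with_nanisinf(energy_value, nanisinf=True):
    # A nan energy compares False with everything, which can confuse
    # line searches; mapping it to +inf makes the tried position simply
    # look infinitely bad, so the minimizer backtracks instead of crashing.
    if nanisinf and np.isnan(energy_value):
        return np.inf
    return energy_value

print(value_with_nanisinf(float("nan")))  # inf
print(value_with_nanisinf(1.5))           # 1.5
```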
src/minimization/stochastic_minimizer.py
...

@@ -18,6 +18,7 @@

from .minimizer import Minimizer
from .energy import Energy
class ADVIOptimizer(Minimizer):
    """Provide an implementation of an adaptive stepsize sequence optimizer,
    following https://arxiv.org/abs/1603.00788.
...

@@ -48,11 +49,8 @@ class ADVIOptimizer(Minimizer):
    def _step(self, position, gradient):
        self.s = self.alpha*gradient**2 + (1-self.alpha)*self.s
        self.rho = (self.eta*self.counter**(-0.5+self.epsilon)
                    / (self.tau+(self.s).sqrt()))
        new_position = position - self.rho*gradient
        self.counter += 1
        return new_position
...
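The `_step` update implements the adaptive step-size sequence from the paper linked above (Kucukelbir et al.): an exponential moving average `s` of the squared gradient divides a slowly decaying learning rate. A scalar NumPy sketch follows; the hyperparameter values are illustrative assumptions, not `ADVIOptimizer`'s defaults, and `np.sqrt` stands in for the Field method `.sqrt()` used in NIFTy.

```python
import numpy as np

class AdaptiveStep:
    """Scalar version of the ADVI adaptive step-size sequence."""
    def __init__(self, eta=0.1, alpha=0.1, tau=1.0, epsilon=1e-16):
        # eta, alpha, tau, epsilon are illustrative values, not the defaults.
        self.eta, self.alpha, self.tau, self.epsilon = eta, alpha, tau, epsilon
        self.s = 0.0       # running average of the squared gradient
        self.counter = 1   # iteration index k

    def step(self, position, gradient):
        # s_k = alpha * g_k^2 + (1 - alpha) * s_{k-1}
        self.s = self.alpha*gradient**2 + (1 - self.alpha)*self.s
        # rho_k = eta * k^(-1/2 + eps) / (tau + sqrt(s_k))
        self.rho = (self.eta*self.counter**(-0.5 + self.epsilon)
                    / (self.tau + np.sqrt(self.s)))
        self.counter += 1
        return position - self.rho*gradient

opt = AdaptiveStep()
x = 5.0
for _ in range(200):
    x = opt.step(x, gradient=2*x)   # descend on f(x) = x^2
```

The `k^(-1/2+eps)` factor satisfies the Robbins-Monro conditions, while dividing by `tau + sqrt(s)` normalizes the step by the recent gradient scale.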