Commit b4bd54b5 authored by Cristian Lalescu

updates documentation

parent 602f866d
@@ -288,3 +288,17 @@ Available iterations for
* n = 128: 8192
---------
Reference
---------
Please see https://arxiv.org/abs/2107.01104 for a description of TurTLE,
as well as a detailed discussion of the novel particle tracking
approach.
-------
Contact
-------
If you have any questions, comments or suggestions, please contact
Dr. Cristian C. Lalescu (Cristian.Lalescu@mpcdf.mpg.de).
---
CPP
---
-----------
C++ classes
-----------
kspace
------
.. doxygenclass:: kspace
   :project: TurTLE
   :members:

field
-----

.. doxygenclass:: field
   :project: TurTLE
   :members:

code_base
---------

.. doxygenclass:: code_base
   :project: TurTLE
   :members:

direct_numerical_simulation
---------------------------

.. doxygenclass:: direct_numerical_simulation
   :project: TurTLE
   :members:

..

NSE
----

.. doxygenclass:: NSE
   :project: TurTLE
   :members:

NSVE
----

.. doxygenclass:: NSVE
   :project: TurTLE
   :members:

.. doxygenclass:: particles_distr_mpi

particle_set
------------

.. doxygenclass:: particle_set
   :project: TurTLE
   :path: ...
   :members: [...]
=====================
Overview and Tutorial
=====================
----------------
General comments
----------------
The purpose of this code is to run pseudo-spectral DNS of turbulence,
and integrate particle trajectories in the resulting fields.
An important aim of the code is to simplify the launching of
compute jobs and postprocessing, up to and including the generation of
publication-ready figures.
For research, people routinely write code from scratch because research
goals change to a point where modifying the previous code is too
expensive.
With TurTLE, the desire is to identify core functionality that should be
implemented in a library.
The core library can then be used by many problem-specific codes.
In this sense, the structuring of the code-base is non-standard.
The core functionality is implemented in C++ (classes for describing
and working with fields or sets of particles), while a Python3
wrapper is used for generating "main" programmes to be linked against
the core library.
The core library uses a hybrid MPI/OpenMP approach for parallelization,
and the Python3 wrapper compiles this core library when it is installed.
Python3 "wrapper"
-----------------
In principle, users of the code should only need to use Python3 for
launching jobs and postprocessing data.
Classes defined in the Python3 package can be used to generate executable
codes, to compile and launch them, and then to access and postprocess the
data with a full Python3 environment.
Code generation is quite straightforward, with C++ code snippets handled
as strings in the Python3 code, such that they can be combined in
different ways.
Once a "main" file has been written, it is compiled and linked against
the core library.
Depending on machine-specific settings, the code can then be launched
directly, or job scripts appropriate for queueing systems are generated
and submitted.
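As an illustration of this string-based code generation, here is a
minimal sketch; the snippet variables and the ``write_main`` helper
below are invented for this example and are not part of the actual
TurTLE/bfps API:

.. code:: python

    # minimal sketch of string-based C++ code generation; the snippet
    # contents and the helper function are hypothetical, not the actual
    # TurTLE implementation
    includes = '#include "field.hpp"\n#include "kspace.hpp"\n'
    main_body = (
        'int main(int argc, char *argv[])\n'
        '{\n'
        '    // problem-specific work goes here\n'
        '    return 0;\n'
        '}\n')

    def write_main(fname, *snippets):
        # concatenate C++ snippets into a "main" file, which would then
        # be compiled and linked against the core library
        with open(fname, 'w') as f:
            for snippet in snippets:
                f.write(snippet)

    write_main('my_dns.cpp', includes, main_body)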
C++ core library
----------------
TurTLE has a hierarchy of classes that provide prototypes for three basic tasks: perform a DNS, post-process existing data or test arbitrary functionality.
As a guiding principle, the distinct tasks and concepts involved in the numerical simulation in TurTLE are isolated as much as possible, which simplifies the development of extensions.
There are two types of simple objects.
Firstly, an abstract class encapsulates three elements: generic *initialization*, *do work* and *finalization* functionality.
Secondly, essential data structures (e.g. fields, sets of particles) and associated functionality (e.g. I/O) are provided by "building block"-classes.
The solver then consists of a specific arrangement of the building blocks.
The heterogeneous TurTLE development team benefits from the separation of generic functionality from building blocks:
TurTLE is naturally well-suited to the distribution of conceptually distinct work, in particular fully isolating projects such as "implement new numerical method" from "optimize the computation of field statistics with OpenMP".
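A structural sketch of this pattern follows; the real classes are C++
(documented in the C++ classes chapter), so the Python below and the
names in it are purely illustrative:

.. code:: python

    # purely illustrative sketch of the initialization / do work /
    # finalization pattern; the actual TurTLE classes are C++ and have
    # different names and interfaces
    class solver_prototype:
        def initialize(self):
            # allocate fields, read parameters, set up particle sets
            raise NotImplementedError
        def do_work(self):
            # the task-specific part: time stepping, postprocessing or tests
            raise NotImplementedError
        def finalize(self):
            # write checkpoints and statistics, release resources
            raise NotImplementedError
        def run(self):
            # the generic driver is the same for all three tasks
            self.initialize()
            self.do_work()
            self.finalize()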
---------
Equations
---------
The code uses a fairly standard pseudo-spectral algorithm to solve fluid
equations.
The incompressible Navier-Stokes equations in velocity form are as
follows:
.. math::

   \partial_t \mathbf{u} + \mathbf{u} \cdot \nabla \mathbf{u} =
   - \nabla p + \nu \Delta \mathbf{u} + \mathbf{f}
In fact, the code solves the vorticity formulation of these equations:
.. math::

   \partial_t \mathbf{\omega} +
   \mathbf{u} \cdot \nabla \mathbf{\omega} =
   \mathbf{\omega} \cdot \nabla \mathbf{u} +
   \nu \Delta \mathbf{\omega} + \nabla \times \mathbf{f}
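The velocity needed for the nonlinear term is recovered from the
vorticity in Fourier space; this is the standard relation (not specific
to TurTLE's implementation), which follows from incompressibility:

.. math::

   \hat{\mathbf{u}}(\mathbf{k}) =
   \frac{i \mathbf{k} \times \hat{\mathbf{\omega}}(\mathbf{k})}{k^2}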
Statistics
----------
Basic quantities that can be computed in a pseudospectral code are the
following:
.. math::

   E = \frac{1}{2} \sum_{\mathbf{k}} \hat{\mathbf{u}} \cdot \hat{\mathbf{u}}^*, \hskip .5cm
   \varepsilon = \nu \sum_{\mathbf{k}} k^2 \hat{\mathbf{u}} \cdot \hat{\mathbf{u}}^*, \hskip .5cm
   \textrm{in general } \sum_{\mathbf{k}} k^p \hat{u_i} \cdot \hat{u_j}^*, \hskip .5cm
   \varepsilon_{\textrm{inj}} = \sum_{\mathbf{k}} \hat{\mathbf{u}} \cdot \hat{\mathbf{f}}^*
In fact, C++ code generated by
:class:`NavierStokes <bfps.NavierStokes.NavierStokes>`
computes and stores the velocity
and vorticity cospectra (9 components each):
.. math::

   \sum_{k \leq \|\mathbf{k}\| \leq k+dk}\hat{u_i} \cdot \hat{u_j}^*, \hskip .5cm
   \sum_{k \leq \|\mathbf{k}\| \leq k+dk}\hat{\omega_i} \cdot \hat{\omega_j}^*
In all honesty, this is overkill for homogeneous and isotropic flows, but
in principle we will look at more complicated flows.
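As a sketch of how such shell spectra reduce to the scalar quantities
given above (the array names and shapes here are assumptions made for
illustration, not the names of datasets in the output files):

.. code:: python

    import numpy as np

    # hypothetical inputs, for illustration only:
    #   kshell --- wavenumbers of the spherical shells, shape (nshells,)
    #   cospec --- velocity cospectrum per shell, shape (nshells, 3, 3)
    def energy_and_dissipation(kshell, cospec, nu):
        # the energy spectrum is half the trace of the cospectrum
        ek = 0.5 * np.trace(cospec, axis1 = 1, axis2 = 2)
        energy = np.sum(ek)
        # the dissipation is the second moment of the spectrum, times 2*nu
        dissipation = 2 * nu * np.sum(kshell**2 * ek)
        return energy, dissipation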
See :func:`compute_statistics <bfps.NavierStokes.NavierStokes.compute_statistics>`
and
:func:`compute_time_averages <bfps.NavierStokes.NavierStokes.compute_time_averages>`
for quantities
computed in postprocessing by the python code.
-----------
Conventions
-----------
The C++ backend is based on ``FFTW``, and the Fourier
representations are *transposed*.
In brief, this is the way the fields are represented on disk and in
memory (both in the C++ backend and in Python postprocessing):
* real space representations of 3D vector fields consist of
  contiguous arrays, with the shape ``(nz, ny, nx, 3)``:
  :math:`n_z \times n_y \times n_x` triplets, where :math:`z` is the
  slowest coordinate, :math:`x` the fastest; each triplet is then
  the sequence of :math:`x` component, :math:`y` component and
  :math:`z` component.
* Fourier space representations of 3D vector fields consist of
  contiguous arrays, with the shape ``(ny, nz, nx/2+1, 3)``:
  :math:`k_y` is the slowest coordinate, :math:`k_x` the fastest;
  each triplet of 3 complex numbers is then the :math:`(x, y, z)`
  components, as ``FFTW`` requires for the correspondence with the
  real space representations (see the shape sketch after this list).
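As a minimal sketch of the two layouts (the grid sizes below are
arbitrary, chosen unequal only so that the axis ordering is
unambiguous):

.. code:: python

    import numpy as np

    # arbitrary example grid sizes, not taken from any actual simulation
    nx, ny, nz = 32, 48, 64

    # real space: z is the slowest coordinate, x the fastest,
    # with 3 components per grid point
    vel_real = np.zeros((nz, ny, nx, 3), dtype = np.float64)

    # Fourier space ("transposed" FFTW layout): ky slowest, kx fastest,
    # with nx/2+1 modes along kx because the real space fields are real-valued
    vel_four = np.zeros((ny, nz, nx//2 + 1, 3), dtype = np.complex128)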
:func:`read_cfield <bfps.NavierStokes.NavierStokes.read_cfield>` will return
a properly shaped ``numpy.array`` containing a snapshot of the Fourier
representation of a 3D field.
If you'd like to construct the corresponding wave numbers, you can
follow this procedure:
.. code:: python

    import numpy as np
    from bfps import NavierStokes

    c = NavierStokes(
            work_dir = '/location/of/simulation/data',
            simname = 'simulation_name_goes_here')
    df = c.get_data_file()
    kx = df['kspace/kx'].value
    ky = df['kspace/ky'].value
    kz = df['kspace/kz'].value
    df.close()
    # ky is the slowest coordinate and kx the fastest, matching the
    # transposed Fourier space layout described above
    kval = np.zeros(ky.shape + kz.shape + kx.shape + (3,),
                    dtype = kx.dtype)
    kval[..., 0] = kx[None, None, :]
    kval[..., 1] = ky[:, None, None]
    kval[..., 2] = kz[None, :, None]
``kval`` will have the same shape as the result of
:func:`read_cfield <bfps.NavierStokes.NavierStokes.read_cfield>`.
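With ``kval`` in hand, standard ``numpy`` operations apply directly;
for example (a simple follow-up to the snippet above, not part of the
original example), the wavenumber magnitude of every Fourier mode is

.. code:: python

    # magnitude of the wave vector at each Fourier mode,
    # an array of shape (ny, nz, nx/2+1)
    knorm = np.sqrt(np.sum(kval**2, axis = -1))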
Obviously, the machine being used should have enough RAM to hold the
field...
--------
Tutorial
--------
First DNS
---------
Installing ``bfps`` is not trivial, and the instructions are in
:ref:`sec-installation`.
After installing, you should have a new executable script
available, called ``bfps``.
Just executing it will run a small test DNS on a real space grid of size
:math:`32 \times 32 \times 32`, in the current
folder, with the simulation name ``test``.
So, open a console, and type ``bfps DNS NSVE``:
.. code:: bash

    # depending on how curious you are, you may have a look at the
    # options first:
    bfps --help
    bfps DNS --help
    bfps DNS NSVE --help

    # or you may just run it:
    bfps DNS NSVE
The simulation itself should not take more than a few seconds, since
this is just a :math:`32^3` simulation run for 8 iterations.
The first thing you can do afterwards is to open up a Python console
and type the following:
.. _sec-first-postprocessing:
.. code:: python

    import numpy as np
    from bfps import DNS

    c = DNS(
            work_dir = '/location/of/simulation/data',
            simname = 'simulation_name_goes_here')
    c.compute_statistics()
    print ('Rlambda = {0:.0f}, kMeta = {1:.4f}, CFL = {2:.4f}'.format(
            c.statistics['Rlambda'],
            c.statistics['kMeta'],
            (c.parameters['dt']*c.statistics['vel_max'] /
             (2*np.pi/c.parameters['nx']))))
    print ('Tint = {0:.4e}, tauK = {1:.4e}'.format(
            c.statistics['Tint'],
            c.statistics['tauK']))
    data_file = c.get_data_file()
    print ('total time simulated is = {0:.4e} Tint, {1:.4e} tauK'.format(
            data_file['iteration'].value*c.parameters['dt'] / c.statistics['Tint'],
            data_file['iteration'].value*c.parameters['dt'] / c.statistics['tauK']))
:func:`compute_statistics <bfps.DNS.DNS.compute_statistics>`
will read the data file generated by the DNS and compute a bunch of
basic statistics, for example the Taylor-scale Reynolds number
:math:`R_\lambda` that we're printing in the example code.
What happens is that the DNS will have generated an ``HDF5`` file
containing a bunch of specific datasets (spectra, moments of real space
representations, etc.).
The function
:func:`compute_statistics <bfps.DNS.DNS.compute_statistics>`
performs simple postprocessing that may however be expensive, so it
saves some of its results into a ``<simname>_postprocess.h5`` file; it
then performs some time averages, yielding the ``statistics``
dictionary that is used in the above code.
Behind the scenes
-----------------
TODO FIXME obsolete documentation
In brief, the following takes place:
1. An instance ``c`` of
   :class:`NavierStokes <bfps.NavierStokes.NavierStokes>` is created.
   It is used to generate an :class:`argparse.ArgumentParser`, and
   it processes command line arguments given to the ``bfps
   NavierStokes`` command.
2. reasonable DNS parameters are constructed from the command line
   arguments.
3. ``c`` generates a parameter file ``<simname>.h5``, into which the
   various parameters are written.
   ``c`` also generates the various datasets that the backend code
   will write into (statistics and other stuff).
4. ``c`` writes a C++ file that is compiled and linked against
   ``libbfps``.
5. ``c`` executes the C++ code using ``mpirun``.
6. the C++ code actually performs the DNS, and outputs various
   results into the ``<simname>.h5`` file.
After the simulation is done, things are simpler.
In fact, any ``HDF5`` capable software can be used to read the data
file, and the dataset names should be reasonably easy to interpret, so
custom postprocessing codes can easily be generated.
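For instance, a few lines of ``h5py`` are enough to list everything the
file contains (the file name below assumes the default simulation name
``test`` used in the tutorial above):

.. code:: python

    import h5py

    # open the output file of the tutorial run and print the full
    # list of groups and datasets that the DNS has written
    with h5py.File('test.h5', 'r') as data_file:
        data_file.visit(print)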
@@ -12,11 +12,10 @@ Contents:
   :maxdepth: 4

   chapters/README
   chapters/overview
   chapters/tutorial
   chapters/development
   chapters/bandpass
   chapters/api
   chapters/cpp
   chapters/cpp_doxygen