Commit 82be2881 authored by Volker Springel

some edits in documentation

parent 2b9938ce
Arepo documentation
**********************
Arepo is a massively parallel code for gravitational N-body systems
and magnetohydrodynamics, both on Newtonian as well as on cosmological
backgrounds. It is a flexible code that can be applied to a variety
of different types of problems, offering a number of sophisticated
simulation algorithms. A description of the numerical algorithms
employed by the code is given in the public release code paper. For a
more in-depth discussion of these algorithms, the original code paper
and subsequent publications are the best resource. This documentation
only addresses the question of how to use the different numerical
algorithms.
Arepo was written by Volker Springel (vspringel@mpa-garching.mpg.de)
with further development by Rüdiger Pakmor
(rpakmor@mpa-garching.mpg.de) and contributions by many other authors
(www.arepo-code.org/people). The public version of the code was
compiled by Rainer Weinberger (rainer.weinberger@cfa.harvard.edu).
Overview
========
The Arepo code was initially developed to combine the advantages of
finite-volume hydrodynamics with the Lagrangian convenience of
smoothed particle hydrodynamics (SPH). To this end, Arepo makes use of
an unstructured Voronoi mesh which is, in the standard mode of
operating the code, moving with the fluid in a quasi-Lagrangian
fashion. The fluxes between cells are computed using a finite-volume
approach, and additional spatial adaptivity is provided by the
possibility to add and remove cells from the mesh according to
user-defined criteria. In addition to gas dynamics, Arepo allows for
additional collisionless particle types which interact only
gravitationally. Besides self-gravity, forces from external
gravitational potentials can also be included.
Arepo is optimized for, but not limited to, cosmological simulations
of galaxy formation, and consequently can be used for simulations with
high dynamic range in space and time. This is supported in particular
by Arepo's ability to employ adaptive local timestepping for each
individual cell or particle, as well as by a dynamic load and memory
balancing domain decomposition. The current version of Arepo is fully
MPI parallel, and has been tested in runs with >10,000 MPI tasks. The
exact performance is, however, highly problem- and machine-dependent.
Disclaimer
==========
It is important to note that the performance and accuracy of the code
are a sensitive function of some of the code parameters. We also
stress that Arepo comes without any warranty, and without any
guarantee that it produces correct results. If in doubt about
something, reading (and potentially improving) the source code is
always the best strategy to understand what is going on!
**Please also note the following:**
The numerical parameter values used in the examples contained in the
code distribution do not represent a specific recommendation by the
authors! In particular, we do not endorse these parameter settings in
any way as standard values, nor do we claim that they will provide
fully converged results for the example problems, or for other initial
conditions. We think that it is extremely difficult to make general
statements about what accuracy is sufficient for certain scientific
goals, especially when one desires to achieve it with the smallest
possible computational effort. For this reason we refrain from making
such recommendations. We encourage every simulator to find out for
herself/himself what integration settings are needed to achieve
sufficient accuracy for the system under study. We strongly recommend
carrying out convergence and resolution studies to establish the range
of validity and the uncertainty of any numerical result obtained with
Arepo.
Table of contents
Library requirements
====================

Arepo requires the following libraries for compilation:
Always needed:
1. **mpi** The `Message Passing Interface`, version 2.0 or higher. In
   practice Arepo is well-tested on the open source libraries MPICH
   <https://www.mpich.org> and OpenMPI <http://www.open-mpi.org>. For
   some machines, vendor-supplied versions may exist and can also be
   used. Arepo relies exclusively on MPI for its parallelization.
2. **gsl** `The GNU Scientific Library`. This open-source library is
   available at <http://www.gnu.org/software/gsl>.
Needed in some cases:
3. **fftw3** `Fastest Fourier Transform in the West`
   (<http://www.fftw.org>). This free software package for fast
   Fourier transforms is used in the particle-mesh algorithm.
   Consequently, it is only needed if ``PMGRID`` is set in
   ``Config.sh``.

4. **hdf5** `Hierarchical Data Format`
   <https://support.hdfgroup.org/HDF5/doc/H5.intro.html> is a data
   format specification that is used for the input/output format 3 in
   Arepo, which is the de-facto standard I/O format (even though the
   old Gadget formats are supported as well). The library is needed if
   the ``HAVE_HDF5`` flag is set in ``Config.sh``.
5. **hwloc**: The `hardware locality library` is useful for allowing
   the code to probe the processor topology it is running on and to
   enable a pinning of MPI tasks to individual cores. This library is
   optional and only required if the ``IMPOSE_PINNING`` flag is set.
Arepo needs a working C compiler supporting at least the ``c11``
standard, as well as GNU Make and Python, which are used in the build
process.
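As a hedged illustration of how these dependencies might be obtained,
the following commands install suitable packages on a Debian/Ubuntu-like
system; exact package names vary by distribution, and on HPC clusters
the libraries are typically provided via environment modules instead::

  # Debian/Ubuntu example (package names are distribution-dependent)
  sudo apt-get install libopenmpi-dev libgsl-dev libfftw3-dev libhdf5-dev libhwloc-dev

  # on many clusters one would instead load vendor-provided modules, e.g.
  # module load mpi gsl fftw hdf5
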
Building the code
=================
Systype variable
----------------
The first thing after obtaining a local copy of the code is to set a
``SYSTYPE`` variable, either by typing::
  export SYSTYPE="MySystype"
(assuming you use the bash shell) or by copying the
``Template-Makefile.systype`` file::
  cp Template-Makefile.systype Makefile.systype
and uncommenting the intended ``SYSTYPE`` variable in
``Makefile.systype``. The ``SYSTYPE`` variable identifies a specific
machine: each machine has its specific compiler settings and library
paths defined in the Makefile. New ``SYSTYPE`` keywords can be added
in the Makefile when needed.
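As an illustration only, a newly added ``SYSTYPE`` block might look
roughly like the sketch below; the variable names used here (compiler,
optimization flags, library paths) are assumptions, and an existing
block in the Makefile should be used as the authoritative template::

  ifeq ($(SYSTYPE),"MyMachine")
  # hypothetical machine block -- adapt compiler and library paths to your system
  CC        = mpicc
  OPTIMIZE  = -std=c11 -O3 -g -Wall
  GSL_INCL  = -I/opt/gsl/include
  GSL_LIBS  = -L/opt/gsl/lib
  FFTW_INCL = -I/opt/fftw3/include
  FFTW_LIBS = -L/opt/fftw3/lib
  HDF5_INCL = -I/opt/hdf5/include
  HDF5_LIBS = -L/opt/hdf5/lib -lhdf5
  endif
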
Config.sh -- Compile time flags
-------------------------------
The next step is to set the compile-time configuration of the
code. This is best done by copying the ``Template-Config.sh`` file::
  cp Template-Config.sh Config.sh
Note that the compile-time configuration file does not need to be
called ``Config.sh``; however, if it is not, this needs to be
specified when calling the Makefile. The bundled code examples, for
instance, keep this file in a different location and thus make use of
this possibility.
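For instance, assuming the Makefile reads the name of the
configuration file from a ``CONFIG`` variable (check the Makefile of
your code version if in doubt), a configuration file in a non-standard
location could be passed like this, where the path is purely
hypothetical::

  make CONFIG=examples/my_problem/Config.sh
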
The ``Config.sh`` file enables/disables the possible modules of Arepo,
such as ``SELFGRAVITY``, which switches on gravitational interactions
between cells/particles. All flags that are not commented out with a
'#' will be processed and cast into a ``#define MyModule`` statement
in an automatically produced header file ``arepoconfig.h``, which is
placed into the code build directory.
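To give a feel for the format, a purely illustrative excerpt of a
``Config.sh`` could look as follows, where active flags are uncommented
and inactive ones carry a leading '#'; only flags that appear in this
documentation are used in the sketch, and the values are placeholders::

  SELFGRAVITY         # gravitational interactions between cells/particles
  HAVE_HDF5           # enable input/output in HDF5 format
  #PMGRID=512         # particle-mesh algorithm (disabled in this sketch)
  #IMPOSE_PINNING     # pin MPI tasks to cores (requires the hwloc library)
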
Compiling the code
------------------
Arepo can be compiled with::

  make
which will first execute a check on the compile-time flags (see
below), followed by the actual build. It is also possible to skip the
check by typing::

  make build
The compilation can be accelerated on multi-core machines by compiling
files in parallel with an option like ``-j 8``, which will use 8
parallel jobs for the compilation.
Compile-time checks of flags
----------------------------
The normal ``make`` command will first execute a check using the
Python script ``check.py``. This ensures that all flags used in the
code via ``#if defined(MyFlag)`` or ``#ifdef MyFlag`` occur either in
``Template-Config.sh`` or in ``defines_extra``. The main reason for
doing this is to prevent typos in the code such as
``#if defined(MyFlg)``, which are generally very hard to find because
the code might still compile normally, while the block guarded by
``#if defined(MyFlg)`` simply never makes it into the executable.
New flags that can be activated should be added to
``Template-Config.sh``, while internally defined flags (such as header
file guards) are listed in ``defines_extra`` in order to exempt them
from the check.
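For example, if a new compile-time flag ``MY_NEW_FLAG`` (a
hypothetical name) is introduced in the code via
``#ifdef MY_NEW_FLAG``, a corresponding, typically commented-out,
entry should be added to ``Template-Config.sh`` so that the check
accepts it::

  #MY_NEW_FLAG        # short description of what the flag does
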
Starting the code
=================
param.txt -- Runtime settings
-----------------------------
After compiling Arepo, the executable ``Arepo`` should be present in
the main directory. To start a simulation, Arepo needs to be handed
the path to a file containing the runtime parameters, usually called
``param.txt``. The settings that need to be present generally depend
on the compile-time flags used. All possible parameters are listed
under :ref:`Parameterfile`, including the compile-time flags they
belong to. Starting Arepo with a parameter file that has missing or
obsolete parameter settings will result in output stating which
parameters are still required or no longer needed, respectively.
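As a purely illustrative sketch, the first few entries of such a file
might look like the following; the parameter names follow
:ref:`Parameterfile`, while the values are placeholders and not a
recommendation::

  InitCondFile      ./ics
  OutputDir         ./output
  SnapshotFileBase  snap
  TimeBegin         0.0
  TimeMax           1.0
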
Starting the simulation
-----------------------
A simulation is started with a command like::

  mpirun -np 32 Arepo param.txt
which will start Arepo on 32 MPI ranks, using the ``param.txt`` file
for the run-time parameter settings. Note that on some systems the
launcher differs: ``mpirun`` may instead be called ``mpiexec``, or be
replaced by another launch command such as ``srun``, with a slightly
different syntax. We refer to the user guides of the individual
machines for such peculiarities.
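On clusters with a batch system, the launch command is typically
placed inside a job script. The following is only a sketch for a
SLURM-based machine with made-up resource settings; the correct
launcher and options are machine-specific::

  #!/bin/bash
  #SBATCH --job-name=arepo_run
  #SBATCH --ntasks=32
  #SBATCH --time=24:00:00

  srun ./Arepo param.txt
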
Interrupting a run
==================
Arepo supports running a simulation in several consecutive
installments, and resuming an already started simulation from
so-called restart files (a mechanism sometimes called checkpointing),
which are essentially memory dumps written to the output
directory. The code is written in such a way that the results are
completely independent of the number of restarts performed, which is a
crucial aspect for debugging the code if necessary, as it ensures
deterministic reproducibility.

A (regular) code termination always results in the writing of such
restart files, and is normally triggered when more than 85% of the
specified maximum runtime has been reached, or when the final
simulation time has been reached. It is important, in particular in
larger simulations, that the code is able to complete the output of
the restart files before, e.g., a job time-limit is reached. Therefore
the specified maximum runtime should never exceed the runtime limits
of the machine.

It is also possible to trigger a (regular) code interruption by
creating a file called ``stop`` in the output directory::

  touch stop
Furthermore, Arepo usually writes restart files at regular,
user-specified time intervals to ensure that runs can be resumed
without a major loss of computation time even if a run was terminated
abruptly by the operating system.
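The timing of this behaviour is controlled through the parameter file:
the maximum allowed wall-clock time (to which the 85% criterion above
refers) and the interval between the regular restart-file dumps are
set there. A sketch with placeholder values (both in seconds),
assuming the standard parameter names as documented under
:ref:`Parameterfile`::

  TimeLimitCPU            86400
  CpuTimeBetRestartFile   7200
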
Restarting a run
================
A run can be resumed by passing an additional flag 1 or 2 in the
execution call. The flag 1 stands for restarting from a restart file
and is generally recommended if possible. This, however, has the
restriction that a run needs to be resumed with **exactly as many MPI
ranks** as it was originally started with. The restart command then
looks as follows::

  mpirun -np 32 ./Arepo param.txt 1
Alternatively, a flag value of 2 signals that one wants to resume a
simulation from a regular snapshot output. This is possible but
discouraged, because in this case the simulation results will not be
binary-identical to those of a run without this restart. In particular
in runs where some fields (especially positions) are written in single
precision, this can also affect the overall quality of the
simulation. A command to restart Arepo from a snapshot looks like the
following::

  mpirun -np 32 ./Arepo param.txt 2 7
Here Arepo would read in snapshot number 7 and resume the simulation
from its corresponding time. This second method is only recommended if
the regular start from restart files is not possible for some reason.
Starting post-processing tasks
==============================
There are a number of post-processing options in the public version of
Arepo. The main ones are
* to calculate and output the gradients of hydrodynamical quantities (flag 17)
* to calculate and output the gravitational potential (flag 18)
Note that the non-contiguous numbering of these flags originates from
the goal to maintain consistency with the developer version, which has
more options. Other flags are not supported in the current public
version of Arepo.
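A hedged example of invoking such a post-processing option, assuming
that, analogous to the restart-from-snapshot call shown above, the
flag is followed by the number of the snapshot to be processed::

  mpirun -np 32 ./Arepo param.txt 18 7

Under this assumption, the call would compute and output the
gravitational potential for snapshot 7.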