Commit 44513714 authored by Rainer Weinberger's avatar Rainer Weinberger

updates in user guide

parent 0d902274
These options can be used to distort the simulation cube along the
given direction with the given factor into a parallelepiped of arbitrary aspect
ratio. The box size in the given direction increases from the value in the
parameter file by the factor given (e.g. if ``Boxsize`` is set to 100 and ``LONG_X=4``
is set, the simulation domain extends from 0 to 400 along X and from 0 to 100
along Y and Z).
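The resulting geometry can be sketched as follows (a minimal illustration; the function name is ours, not an Arepo identifier):

```python
# Illustrative sketch: effective extents of the simulation volume when the
# cube is stretched by the compile-time factors LONG_X / LONG_Y / LONG_Z.
def effective_box(box_size, long_x=1.0, long_y=1.0, long_z=1.0):
    """Return the (X, Y, Z) extent of the distorted simulation volume."""
    return (box_size * long_x, box_size * long_y, box_size * long_z)

# BoxSize = 100 with LONG_X=4 gives a 400 x 100 x 100 domain.
print(effective_box(100.0, long_x=4.0))  # -> (400.0, 100.0, 100.0)
```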
Enables domain decomposition together with ``VORONOI_STATIC_MESH`` (which is
otherwise disabled), in case non-gas particle types exist and the use of
domain decomposition is desired. Note that on the one hand it may be advantageous
in case the non-gas particles mix well or cluster strongly, but on the other
hand the mesh construction that follows the domain decomposition is slow for a
static mesh, so whether or not using this new flag is overall advantageous
Refinement
===========================
By default, there is no refinement and derefinement, and unless set otherwise,
the criterion for refinement/derefinement is a target mass.
**REFINEMENT_SPLIT_CELLS**
-----
Non-standard physics
====================
**COOLING**
**USE_SFR**
Star formation model, turning dense gas into collisionless particles. See
Springel & Hernquist (2003, MNRAS, 339, 289).
-----
**SFR_KEEP_CELLS**
Do not destroy the cell out of which a star has formed.
-----
**SELFGRAVITY**
Gravitational interaction between simulation particles/cells.
-----
**ALLOW_DIRECT_SUMMATION**
Performs direct summation instead of tree-based gravity if the number of active
particles < ``DIRECT_SUMMATION_THRESHOLD`` (= 3000 unless specified differently).
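What "direct summation" means can be illustrated with a toy Python sketch (this is not the Arepo implementation, just the naive O(N²) pairwise sum that becomes competitive for small active particle counts):

```python
# Toy sketch of direct N^2 gravitational summation. In Arepo this path is
# only taken when few particles are active; the default is the tree.
def pairwise_accel(pos, mass, G=1.0, softening=1e-3):
    """Direct-summation accelerations for a small set of particles.

    pos:  list of [x, y, z] positions; mass: list of masses.
    A Plummer-like softening avoids the singularity at zero separation.
    """
    n = len(pos)
    acc = [[0.0, 0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = [pos[j][k] - pos[i][k] for k in range(3)]
            r2 = sum(d * d for d in dx) + softening**2
            f = G * mass[j] / r2**1.5
            for k in range(3):
                acc[i][k] += f * dx[k]
    return acc
```

For two unit masses at unit separation (and zero softening) each particle feels unit acceleration toward the other, which is an easy sanity check of the sketch.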
-----
well with highly clustered particle distributions that occupy only a small
subset of the total simulated volume. However, this method is a bit slower
than the default approach (used when the option is disabled), which is best
matched for homogeneously sampled periodic boxes.
-----
the high resolution box will be automatically determined as the minimum size
required to contain the selected particle type(s), in a "shrink-wrap" fashion.
This region is expanded on the fly, as needed. However, in order to prevent
a situation where this size needs to be enlarged frequently, such as when the
particle set is (slowly) expanding, the minimum size is multiplied by the
factor ``ENLARGEREGION`` (if defined). Then even if the set is expanding, this
will only rarely trigger a recalculation of the high resolution mesh geometry,
Gravity softening
=================
In the default configuration, the code uses a small table of possible
gravitational softening lengths, which are specified in the parameter file
through the ``SofteningComovingTypeX`` and ``SofteningMaxPhysTypeX`` options,
where X is an integer that gives the "softening type". Each particle type is
mapped to one of these softening types through the
gravitational softening of some particles in the node is larger than the node
distance and larger than the target particle's softening), the node is opened
by default (because there could be mass components with a still smaller
softening hidden in the node). This can cause a substantial performance penalty
in some cases. By setting this option, this can be avoided. The code will then
be allowed to use softened nodes, but it does that by evaluating the
node-particle interaction for each mass component with different softening
different masses. It is therefore desirable to not make the table of softening
types excessively large. This option can be combined with adaptive hydro
softening. In this case, particle type 0 needs to be mapped to softening type
0 in the parameter file, and no other particle type may be mapped to softening
type 0 (the code will issue an error message if one does not obey this).
-----
the mass ratio. Then, the softening type that is closest to this desired
softening is assigned to the particle (*choosing only from those softening
values explicitly input as a SofteningComovingTypeX parameter*). This option
is primarily useful for zoom simulations, where one may for example lump all
boundary dark matter particles together into type 2 or 3, yet provide a
set of softening types over which they are automatically distributed according
to their mass. If both ``ADAPTIVE_HYDRO_SOFTENING`` and
When this is enabled, the gravitational softening lengths of hydro cells are
varied according to their radius. To this end, the radius of a cell is
multiplied by the parameter ``GasSoftFactor``. Then, the closest softening
from a logarithmically spaced table of possible softenings is adopted for the
cell. The minimum softening in the table is specified by the parameter
``MinimumComovingHydroSoftening``, and the larger ones are spaced a factor
``AdaptiveHydroSofteningSpacing`` apart. The resulting minimum and maximum
With this option, FOF groups can be augmented by particles/cells of other
particle types that they "enclose". To this end, for each particle among the
types selected by the bit mask specified with ``FOF_SECONDARY_LINK_TYPES``, the
nearest among ``FOF_PRIMARY_LINK_TYPES`` is found and then the particle is
attached to whatever group this particle is in. sum(2^type) for the types
linked to nearest primaries.
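The sum(2^type) bit-mask convention used by these FOF options can be sketched in a few lines of Python (helper names are ours, for illustration only):

```python
# Sketch of the sum(2^type) bit-mask convention used by FOF options such as
# FOF_PRIMARY_LINK_TYPES and FOF_SECONDARY_LINK_TYPES.
def type_mask(types):
    """Encode a collection of particle types as sum of 2^type."""
    mask = 0
    for t in types:
        mask |= 1 << t
    return mask

def types_in_mask(mask):
    """Decode a bit mask back into the particle types it selects."""
    return [t for t in range(mask.bit_length()) if mask & (1 << t)]

# Dark matter only (type 1): mask = 2; types 2 and 3 together: mask = 12.
```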
FOF groups have been found, the tree is newly constructed for all the
secondary link targets). This should normally be set to all dark matter
particle types. If not set, it defaults to ``FOF_PRIMARY_LINK_TYPES``, which
reproduces the old behavior.
-----
**FOF_LINKLENGTH=0.16**
Linking length for FoF in units of the mean inter-particle separation.
(default=0.2)
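Converting this dimensionless parameter into a physical linking distance is straightforward; a sketch, assuming a cubic volume sampled by N primary particles:

```python
# Sketch: the FOF linking length is given in units of the mean
# inter-particle separation, (box_volume / n_particles) ** (1/3).
def fof_linking_distance(box_volume, n_particles, linklength=0.2):
    mean_separation = (box_volume / n_particles) ** (1.0 / 3.0)
    return linklength * mean_separation

# A 100^3 box with 10^6 particles has mean separation 1.0,
# so the default linking length corresponds to a distance of 0.2.
print(fof_linking_distance(100.0**3, 10**6))  # -> 0.2
```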
-----
Normally, the snapshots produced with a FOF group catalogue are stored in
group order, such that the particle set making up a group can be inferred as a
contiguous block of particles in the snapshot file, making it redundant to
separately store the IDs of the particles making up a group in the group
catalogue. By activating this option, one can nevertheless force the creation of
the corresponding lists of IDs as part of the group catalogue output.
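The group-ordered layout described above can be illustrated with hypothetical arrays (these names are not the actual file datasets): with an offset table and per-group lengths, the members of any group are just a contiguous slice.

```python
# Illustration of group-ordered storage: group g occupies a contiguous
# block starting at group_offsets[g] with group_lengths[g] entries.
def group_members(particle_ids, group_offsets, group_lengths, g):
    start = group_offsets[g]
    return particle_ids[start:start + group_lengths[g]]

ids = [11, 12, 13, 21, 22, 99]      # particles sorted in group order
offsets, lengths = [0, 3], [3, 2]   # two groups; trailing particle unattached
print(group_members(ids, offsets, lengths, 1))  # -> [21, 22]
```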
**SAVE_HSML_IN_SNAPSHOT**
When activated, this will store the hsml values used for estimating the total
matter density around every point, and the corresponding densities, in the
snapshot files associated with a run of Subfind.
-----
Additional calculations are carried out, which may be expensive.
(i) Further quantities related to the angular momentum in different components.
(ii) The kinetic, thermal and potential binding energies for spherical
overdensity halos.
-----
Special behavior
============================
**RUNNING_SAFETY_FILE**
**RECOMPUTE_POTENTIAL_IN_SNAPSHOT**
Needed for post-processing option 18, which can be used to calculate potential
values for a snapshot.
-----
**TILE_ICS**
Tile ICs by ``TileICsFactor`` (specified as a parameter) in each dimension.
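The effect of tiling can be sketched as follows (a simplified illustration assuming a cubic box, not the Arepo reading routine): a factor f replicates the particle set f³ times with shifted coordinates.

```python
# Sketch of IC tiling: replicate each particle factor^3 times,
# shifting copies by multiples of the original box size.
def tile_positions(positions, box_size, factor):
    tiled = []
    for i in range(factor):
        for j in range(factor):
            for k in range(factor):
                for (x, y, z) in positions:
                    tiled.append((x + i * box_size,
                                  y + j * box_size,
                                  z + k * box_size))
    return tiled

# One particle tiled with factor 2 yields 8 copies in a doubled box.
```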
-----
Output fields
==========================
Default output fields are: ``position``, ``velocity``, ``ID``, ``mass``,
``specific internal energy`` (gas only), ``density`` (gas only).
**OUTPUT_TASK**
**OUTPUT_VOLUME**
Output of the volume of cells; note that this can always be computed in
post-processing, as both density and mass of cells are in the output by default.
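Since volume = mass / density for each cell, the post-processing step mentioned above is a one-liner:

```python
# Recover cell volumes from the default output fields: volume = mass / density.
def cell_volumes(masses, densities):
    return [m / rho for m, rho in zip(masses, densities)]

print(cell_volumes([2.0, 6.0], [0.5, 3.0]))  # -> [4.0, 2.0]
```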
-----
**OUTPUTGRAVINTERACTIONS**
Output of gravitational interactions (from the gravitational tree) of particles.
-----
**OUTPUT_CSND**
Output of the sound speed. This field is only used for tree-based timesteps!
Calculate it from hydro quantities in post-processing if required for science
applications.
-----
**PROCESS_TIMES_OF_OUTPUTLIST**
Goes through the times of the output list prior to starting the simulation to
ensure that outputs are written as close to the desired time as possible (as
opposed to at the next possible time if this flag is not active).
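The two behaviors can be sketched as follows (an illustration of the selection logic only, not code from Arepo; the discrete times stand in for the points the integration actually reaches):

```python
import bisect

# Sketch: pick the time at which a desired output is actually written.
# Flag off: next reachable time >= desired. Flag on: closest reachable time.
def output_time(sync_times, desired, process_times=True):
    i = bisect.bisect_left(sync_times, desired)
    if i == len(sync_times):
        return sync_times[-1]
    if not process_times or i == 0:
        return sync_times[i]
    before, after = sync_times[i - 1], sync_times[i]
    return before if desired - before <= after - desired else after

times = [0.0, 0.25, 0.5, 0.75, 1.0]
print(output_time(times, 0.30, process_times=True))   # -> 0.25 (closest)
print(output_time(times, 0.30, process_times=False))  # -> 0.5 (next possible)
```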
**REDUCE_FLUSH**
If enabled, files and stdout are only flushed after a certain time defined in
the parameter file (standard behavior: everything is flushed nearly every time
something is written to it).
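A toy sketch of such time-based flushing (class and parameter names are ours; the interval is a stand-in for the parameter-file setting):

```python
import io
import time

# Toy sketch: instead of flushing on every write, flush only when a
# minimum interval has elapsed since the last flush.
class ThrottledWriter:
    def __init__(self, stream, min_interval):
        self.stream = stream
        self.min_interval = min_interval
        self.last_flush = time.monotonic()

    def write(self, text):
        self.stream.write(text)
        now = time.monotonic()
        if now - self.last_flush >= self.min_interval:
            self.stream.flush()
            self.last_flush = now

w = ThrottledWriter(io.StringIO(), min_interval=60.0)
w.write("step done\n")  # buffered; flushed once 60 s have passed
```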
-----
**OUTPUT_EVERY_STEP**
Create a snapshot at every (global) synchronization point, independent of
the chosen parameters or the output list.
-----
Re-gridding
============================
These options are auxiliary modes to prepare/convert/relax initial conditions
and will not carry out a simulation.
**MESHRELAX**
Code development
************************
Scientific software development is a crucial aspect of computational (astro-)physics.
With the progress made over the past decades, numerical methods as well as
implementations have evolved and increased in complexity. The scope of simulation
codes has increased accordingly, such that the question of how to organize
development becomes important to scientific progress.

This version of Arepo is intended as a basis, providing an efficient code structure
and state-of-the-art numerical techniques for gravity and hydrodynamics. The code is
completely documented and should allow computational astrophysicists to develop their
own specialized modules on top of it. Practically, this is best done by hosting one's
own repository for the development, which is occasionally updated from the main
repository to include the latest bugfixes. For this workflow to work properly, it is
important to keep the code base relatively static. For this reason,
**the base version of the code is not meant to be extended in functionality.**
This means that only bugfixes as well as updates to the documentation and examples
will be included.

Issue reporting
===============

A code of the scope of Arepo will inevitably have a number of issues and so far
undetected bugs. Detecting all of them is complicated and time-consuming work, which
means in practice that we rely on users to report the bugs and issues they come
across. We therefore encourage users to report all issues they have, including
things that remain unclear after reading the documentation. This way we hope to
constantly improve the quality of the code in all its aspects.

To organize this, we created a code support forum (www.arepo-code.org/forums/forum/arepo-forum),
which is publicly visible. To create posts on this forum, a registration with admin
approval is required. We encourage users to sign up under www.arepo-code.org/register
and make use of the forum whenever they have problems.

Issues reported in the forum will then be confirmed by one of the authors and added
to the repository issue tracker, which serves as a to-do list for bug fixes and
improvements to the code. Therefore, if a problem occurs, the first thing to check
is whether there already exists an open issue, i.e. whether this is a known and
confirmed problem.

**Please use the support forum instead of contacting the authors by email.**

**We encourage experienced users to provide help and answer some of the open questions on this forum.**
Code extensions
===============
Extensions should be hosted in separate repositories or branches of the main repository.
We highly welcome such extensions and encourage developers to make them publicly
available (under GNU GPL v3). While this can be done independently of the authors,
we would encourage developers to inform the authors once there is a publicly
available module, in order to have a list of available extensions on the code homepage.
Some guidelines for code extensions and new modules:

* Modularity

  * Minimize the number of changes in existing source code files to a few function
    calls and structured variables within existing elements.
  * The source code of your own module should largely be in separate (i.e. new) files.

* Documentation

  * All parameter and config options should be clearly explained in this documentation.
    Feel free to add an additional page to this documentation explaining how your
    module works.
  * Document what each function does in the source code.
  * The Template-Config.sh file should have a short explanation for each flag.

* Verification and examples

  * If possible, create one or more additional examples illustrating and testing the
    module.

Major code updates
==================

Attached is a list of important bugfixes and additions, with the date and id of the
commit.
+------------------------------------------------------------------+-----------------------+------------+
| **Description** | **date (dd.mm.yyyy)** | **commit** |
+==================================================================+=======================+============+
| Public version complete | 20.03.2019 | |
+------------------------------------------------------------------+-----------------------+------------+
Arepo will not only output the simulation snapshot and reduced data via
the halo-finder files, but also a number of (mostly ASCII) diagnostic log
files which contain important information about the code performance and
runtime behavior.
In practice, to quickly check the performance of large
cpu.txt
=======
At each sync-point, such a block is written. This file
reports the result of the different timers built into Arepo. Each
computationally expensive operation has a different timer attached to it, and
this allows one to closely monitor what the computational time is spent on.
Some of the timers (e.g. treegrav) have sub-timers for individual operations.
only the most time-consuming parts of the code. There is the option
``OUTPUT_CPU_CSV`` which also returns this data as a ``cpu.csv`` file.
The different columns are:
name; wallclock time (in s) this step; percentage this step; wallclock time
(in s) cumulative; percentage up to this step. A typical block of cpu.txt looks
as follows (here a gravity-only, tree-only run):
memory.txt
==========
Arepo internally uses its own memory manager. This means that one large chunk of
memory is reserved initially for Arepo (specified by the parameter
``MaxMemSize``) and the allocation of individual arrays is handled internally.
The reason for introducing this was to avoid memory fragmentation during
runtime on some machines, but also to have detailed information about how much
memory Arepo actually needs and to terminate if this exceeds a pre-defined
threshold. ``memory.txt`` reports this internal memory usage, and how much memory
is actually needed by the simulation.
.. code-block:: python
sfr.txt
=======
In case ``USE_SFR`` is active, Arepo will create a ``sfr.txt`` file, which reports
the stars created in every call of the star-formation routine.
The individual columns are:
the timestep criterion and allowed by the binary hierarchy of time steps.
For each timestep, a linked list of particles on this particular
integration step exists, and their statistics are reported in ``timebins.txt``.
In this file, the number of gas cells and collisionless particles in each
timebin (i.e. integration timestep) is reported for each sync-point, as well
as the CPU time and fraction spent on each timebin. A typical block looks like