Commit 44513714 authored by Rainer Weinberger

updates in user guide

parent 0d902274
......@@ -46,7 +46,7 @@ periodic boundary conditions
These options can be used to distort the simulation cube along the
given direction with the given factor into a parallelepiped of arbitrary aspect
ratio. The box size in the given direction increases from the value in the
parameter file by the factor given (e.g. if ``BoxSize`` is set to 100 and ``LONG_X=4``
is set, the simulation domain extends from 0 to 400 along X and from 0 to 100
along Y and Z.)
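
The resulting extents follow from simple arithmetic; a minimal sketch in plain
Python (not Arepo code; the numbers mirror the example above):

.. code-block :: python

  # Illustrative only: effective box extents of a stretched box.
  BoxSize = 100.0
  LONG_X, LONG_Y, LONG_Z = 4, 1, 1

  extents = {axis: BoxSize * fac
             for axis, fac in zip("XYZ", (LONG_X, LONG_Y, LONG_Z))}
  print(extents)  # {'X': 400.0, 'Y': 100.0, 'Z': 100.0}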
......@@ -187,7 +187,7 @@ decomposition is disabled.
Enables domain decomposition together with ``VORONOI_STATIC_MESH`` (which otherwise
disables it), in case non-gas particle types exist and the use of
domain decomposition is desired. Note that on the one hand it may be advantageous
in case the non-gas particles mix well or cluster strongly, but on the other
hand the mesh construction that follows the domain decomposition is slow for a
static mesh, so whether or not using this new flag is overall advantageous
......@@ -218,7 +218,7 @@ Refinement
===========================
By default, there is no refinement and derefinement; unless set otherwise,
the criterion for refinement/derefinement is a target mass.
**REFINEMENT_SPLIT_CELLS**
......@@ -276,7 +276,7 @@ remaining mesh structures are freed after this step as usual.
-----
Non-standard physics
====================
**COOLING**
......@@ -293,14 +293,14 @@ This imposes an adaptive floor for the temperature.
**USE_SFR**
Star formation model, turning dense gas into collisionless particles. See
Springel & Hernquist (2003, MNRAS, 339, 289).
-----
**SFR_KEEP_CELLS**
Do not destroy the cell out of which a star has formed.
-----
......@@ -311,7 +311,7 @@ If nothing is active, no gravity included.
**SELFGRAVITY**
Gravitational interaction between simulation particles/cells.
-----
......@@ -342,7 +342,7 @@ Gravity is not treated periodically.
**ALLOW_DIRECT_SUMMATION**
Performs direct summation instead of tree-based gravity if the number of active
particles < ``DIRECT_SUMMATION_THRESHOLD`` (= 3000 unless specified differently)
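
To illustrate what direct summation means here, a brute-force sketch in plain
Python/NumPy is given below; it assumes a simple Plummer softening and is not
Arepo's actual implementation, but it shows the kind of O(N^2) operation that
replaces the tree walk below the threshold:

.. code-block :: python

  import numpy as np

  DIRECT_SUMMATION_THRESHOLD = 3000  # default quoted above

  def direct_summation_accel(pos, mass, soft, G=1.0):
      """O(N^2) pairwise gravitational accelerations with Plummer softening."""
      acc = np.zeros_like(pos)
      for i in range(len(pos)):
          d = pos - pos[i]                     # separation vectors
          r2 = (d * d).sum(axis=1) + soft**2   # softened squared distances
          r2[i] = np.inf                       # exclude self-interaction
          acc[i] = G * (mass[:, None] * d / r2[:, None] ** 1.5).sum(axis=0)
      return acc

  # Only sensible for small particle numbers, i.e. when the number of active
  # particles is below DIRECT_SUMMATION_THRESHOLD; otherwise the tree is used.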
-----
......@@ -414,7 +414,7 @@ which works well independent of the data layout, in particular it can cope
well with highly clustered particle distributions that occupy only a small
subset of the total simulated volume. However, this method is a bit slower
than the default approach (used when the option is disabled), which is best
matched for homogeneously sampled periodic boxes.
-----
......@@ -446,7 +446,7 @@ This is only relevant when ``PLACEHIGHRESREGION`` is activated. The size of
the high resolution box will be automatically determined as the minimum size
required to contain the selected particle type(s), in a "shrink-wrap" fashion.
This region is expanded on the fly, as needed. However, in order to prevent
a situation where this size needs to be enlarged frequently, such as when the
particle set is (slowly) expanding, the minimum size is multiplied by the
factor ``ENLARGEREGION`` (if defined). Then even if the set is expanding, this
will only rarely trigger a recalculation of the high resolution mesh geometry,
......@@ -482,7 +482,7 @@ Gravity softening
=================
In the default configuration, the code uses a small table of possible
gravitational softening lengths, which are specified in the parameter file
through the ``SofteningComovingTypeX`` and ``SofteningMaxPhysTypeX`` options,
where X is an integer that gives the "softening type". Each particle type is
mapped to one of these softening types through the
......@@ -506,7 +506,7 @@ If the tree walk wants to use a 'softened node' (i.e. where the maximum
gravitational softening of some particles in the node is larger than the node
distance and larger than the target particle's softening), the node is opened
by default (because there could be mass components with a still smaller
softening hidden in the node). This can cause a substantial performance penalty
in some cases. By setting this option, this can be avoided. The code will then
be allowed to use softened nodes, but it does that by evaluating the
node-particle interaction for each mass component with different softening
......@@ -515,7 +515,7 @@ This also requires that each tree node computes and stores a vector with these
different masses. It is therefore desirable to not make the table of softening
types excessively large. This option can be combined with adaptive hydro
softening. In this case, particle type 0 needs to be mapped to softening type
0 in the parameter file, and no other particle type may be mapped to softening
type 0 (the code will issue an error message if one does not obey this).
-----
......@@ -532,7 +532,7 @@ desired softening length by scaling the type-1 softening with the cube root of
the mass ratio. Then, the softening type that is closest to this desired
softening is assigned to the particle (*choosing only from those softening
values explicitly input as a SofteningComovingTypeX parameter*). This option
is primarily useful for zoom simulations, where one may for example lump all
boundary dark matter particles together into type 2 or 3, but yet provide a
set of softening types over which they are automatically distributed according
to their mass. If both ``ADAPTIVE_HYDRO_SOFTENING`` and
......@@ -547,7 +547,7 @@ assignment exclude softening type 0. Note: particles that accrete matter
When this is enabled, the gravitational softening lengths of hydro cells are
varied according to their radius. To this end, the radius of a cell is
multiplied by the parameter ``GasSoftFactor``. Then, the closest softening
from a logarithmically spaced table of possible softenings is adopted for the
cell. The minimum softening in the table is specified by the parameter
``MinimumComovingHydroSoftening``, and the larger ones are spaced a factor
``AdaptiveHydroSofteningSpacing`` apart. The resulting minimum and maximum
......@@ -789,7 +789,7 @@ Sum(2^type) for the primary dark matter type.
With this option, FOF groups can be augmented by particles/cells of other
particle types that they "enclose". To this end, for each particle among the
types selected by the bit mask specified with ``FOF_SECONDARY_LINK_TYPES``, the
nearest among ``FOF_PRIMARY_LINK_TYPES`` is found and then the particle is
attached to whatever group this particle is in. The value is sum(2^type) for the
types linked to the nearest primaries.
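
The bit mask is just a sum of powers of two over the selected particle types; a
short sketch of this arithmetic (plain Python; the chosen types are placeholders):

.. code-block :: python

  # sum(2^type) for a hypothetical set of secondary types (e.g. gas, stars, BHs)
  secondary_types = [0, 4, 5]
  bitmask = sum(2**t for t in secondary_types)
  print(bitmask)  # 49 = 1 + 16 + 32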
......@@ -802,7 +802,7 @@ An option to make the secondary linking work better in zoom runs (after the
FOF groups have been found, the tree is newly constructed for all the
secondary link targets). This should normally be set to all dark matter
particle types. If not set, it defaults to ``FOF_PRIMARY_LINK_TYPES``, which
reproduces the old behavior.
-----
......@@ -814,7 +814,7 @@ Minimum number of particles (primary+secondary) in one group (default is 32).
**FOF_LINKLENGTH=0.16**
Linking length for FoF in units of the mean inter-particle separation.
(default=0.2)
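
In physical terms, the linking length is this number times the mean
inter-particle separation; a small sketch of the conversion, assuming the mean
separation is estimated as BoxSize / N^(1/3) (plain Python, placeholder numbers):

.. code-block :: python

  FOF_LINKLENGTH = 0.2     # default quoted above
  BoxSize = 100.0          # placeholder, internal length units
  N_primary = 256**3       # placeholder number of primary (dark matter) particles

  mean_separation = BoxSize / N_primary ** (1.0 / 3.0)
  linking_length = FOF_LINKLENGTH * mean_separation
  print(linking_length)    # ~0.078 internal length units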
-----
......@@ -830,7 +830,7 @@ Sort fuzz particles by nearest group and generate offset table in catalog
Normally, the snapshots produced with a FOF group catalogue are stored in
group order, such that the particle set making up a group can be inferred as a
contiguous block of particles in the snapshot file, making it redundant to
separately store the IDs of the particles making up a group in the group
catalogue. By activating this option, one can nevertheless force the creation of the
corresponding lists of IDs as part of the group catalogue output.
......@@ -851,7 +851,7 @@ within each group.
**SAVE_HSML_IN_SNAPSHOT**
When activated, this will store the hsml-values used for estimating total
matter density around every point and the corresponding densities in the
snapshot files associated with a run of Subfind.
-----
......@@ -870,13 +870,13 @@ set together with ``SAVE_HSML_IN_SNAPSHOT``.
Additional calculations are carried out, which may be expensive.
(i) Further quantities related to the angular momentum in different components.
(ii) The kinetic, thermal and potential binding energies for spherical
overdensity halos.
-----
Special behavior
============================
**RUNNING_SAFETY_FILE**
......@@ -938,7 +938,7 @@ This can be used to load SPH ICs that contain identical particle coordinates.
**RECOMPUTE_POTENTIAL_IN_SNAPSHOT**
Needed for post-processing option 18 that can be used to calculate potential
values for a snapshot.
-----
......@@ -1059,14 +1059,14 @@ Reads in dark matter particles as gas cells.
**TILE_ICS**
Tile ICs by TileICsFactor (specified as parameter) in each dimension.
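
Since the ICs are replicated along each axis, the particle number grows with the
cube of the factor; a trivial bookkeeping sketch (plain Python, placeholder numbers):

.. code-block :: python

  TileICsFactor = 2      # placeholder value of the parameter
  N_ic = 128**3          # placeholder particle number in the original ICs

  n_copies = TileICsFactor**3   # copies of the IC box in three dimensions
  N_total = n_copies * N_ic
  print(n_copies, N_total)      # 8 16777216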
-----
Output fields
==========================
Default output fields are: ``position``, ``velocity``, ``ID``, ``mass``,
``specific internal energy`` (gas only), ``density`` (gas only)
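
For reference, a minimal sketch of reading such fields from an HDF5 snapshot with
``h5py``; the group and dataset names below are assumptions in the usual
Gadget/Arepo style and should be checked against your own files:

.. code-block :: python

  import h5py

  # Dataset names (PartType0, Coordinates, ...) are assumed, not guaranteed.
  with h5py.File("snap_000.hdf5", "r") as f:
      pos = f["PartType0/Coordinates"][:]     # gas cell positions
      rho = f["PartType0/Density"][:]         # gas density
      u   = f["PartType0/InternalEnergy"][:]  # specific internal energy
      ids = f["PartType0/ParticleIDs"][:]     # unique IDs
  print(pos.shape, rho.shape)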
**OUTPUT_TASK**
......@@ -1119,7 +1119,7 @@ Output of velocity of mesh-generating point.
**OUTPUT_VOLUME**
Output of volume of cells; note that this can always be computed, as both density
and mass of cells are included in the output by default.
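
A one-line post-processing sketch of that reconstruction, given mass and density
arrays read from a snapshot (plain Python/NumPy, placeholder values):

.. code-block :: python

  import numpy as np

  mass = np.array([1.0, 2.0, 0.5])       # placeholder cell masses
  density = np.array([0.1, 0.4, 0.25])   # placeholder cell densities

  volume = mass / density                # since rho = m / V
  print(volume)                          # [10.  5.  2.]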
-----
......@@ -1171,7 +1171,7 @@ Output of particle softenings.
**OUTPUTGRAVINTERACTIONS**
Output of gravitational interactions (from the gravitational tree) of particles.
-----
......@@ -1208,7 +1208,7 @@ Output of vorticity of gas.
**OUTPUT_CSND**
Output of sound speed. This field is only used for tree-based timesteps!
Calculate from hydro quantities in post-processing if required for science
applications.
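
A hedged post-processing sketch for an ideal gas, computing the sound speed from
the specific internal energy with an assumed adiabatic index (standard gas
dynamics, not a specific Arepo routine):

.. code-block :: python

  import numpy as np

  gamma = 5.0 / 3.0              # assumed adiabatic index (monatomic ideal gas)
  u = np.array([1.0, 2.5, 0.3])  # placeholder specific internal energies

  # Ideal gas: P = (gamma - 1) * rho * u  =>  c_s = sqrt(gamma * P / rho)
  csnd = np.sqrt(gamma * (gamma - 1.0) * u)
  print(csnd)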
-----
......@@ -1218,7 +1218,7 @@ Output options
**PROCESS_TIMES_OF_OUTPUTLIST**
Goes through times of output list prior to starting the simulation to ensure
that outputs are written as close to the desired time as possible (as opposed
to the next possible time if this flag is not active).
......@@ -1227,7 +1227,7 @@ to at next possible time if this flag is not active).
**REDUCE_FLUSH**
If enabled, files and stdout are only flushed after a certain time defined in
the parameter file (standard behavior: everything is flushed almost every time
something is written to it).
-----
......@@ -1235,7 +1235,7 @@ something is written to it).
**OUTPUT_EVERY_STEP**
Create a snapshot at every (global) synchronization point, independent of the
parameters chosen or the output list.
-----
......@@ -1286,7 +1286,7 @@ Reports readjustments of buffer sizes.
Re-gridding
============================
These options are auxiliary modes to prepare/convert/relax initial conditions
and will not carry out a simulation.
**MESHRELAX**
......@@ -2,58 +2,77 @@ Code development
************************
Scientific software development is a crucial aspect of computational (astro-)physics.
With the progress made over the past decades, numerical methods as well as implementations
have evolved and increased in complexity. The scope of simulation codes has
increased accordingly, such that the question of how to organize development becomes
important to scientific progress.

This version of Arepo is intended as a basis, providing an efficient code structure
and state-of-the-art numerical techniques for gravity and hydrodynamics. The code is
completely documented and should allow computational astrophysicists to develop their
own specialized modules on top of it. Practically, this is best done by hosting a
separate repository for the development, which is occasionally updated from the main
repository to include the latest bugfixes. For this workflow to work properly, it is
important to keep the code base relatively static. For this reason,
**the base version of the code is not meant to be extended in functionality.**
This means that only bugfixes as well as updates to the documentation and examples will
be included.
Issue reporting
===============
A code of the scope of Arepo will inevitably have a number of issues and so far undetected bugs.
Detecting all of them is complicated and time-consuming work. In practice this means that
we rely on users to report all bugs and issues they come across in order to improve the quality
of the code. We therefore encourage users to report all issues they have, including things
that remain unclear after reading the documentation. This way we hope to constantly improve the
quality of the code in all its aspects.
To organize this, we created a code support forum (www.arepo-code.org/forums/forum/arepo-forum)
which is publicly visible. To create posts on this forum, a registration with admin approval
is required. We encourage users to sign up under www.arepo-code.org/register and make
use of the forum whenever they have problems.
Issues reported in the forum will then be confirmed by one of the authors and added to the
repository issue tracker, which serves as a to-do list for bug fixes and improvements to
the code. Therefore, if a problem occurs, the first thing to check is whether there already exists
an open issue, i.e. whether this is a known and confirmed problem.
**Please use the support forum instead of contacting the authors by email.**
**We encourage experienced users to provide help and answer some of the open questions on this forum.**
Code extensions
===============
Extensions should be hosted in separate repositories or branches of the main repository.
We highly welcome such extensions and encourage developers to make them publicly
available (under GNU GPL v3). While this can be done independently of the authors,
we would encourage developers to inform the authors once there is a publicly
available module, in order to maintain a list of available extensions on the code homepage.
Some guidelines for code extensions and new modules:

* Modularity

  * Minimize the number of changes in existing source code files to a few function calls
    and structured variables within existing elements.
  * The source code of your own module should largely be in separate (i.e. new) files.

* Documentation

  * All parameter and config options should be clearly explained in this documentation.
    Feel free to add an additional page to this documentation explaining how your module
    works.
  * Document what each function does in the source code.
  * The ``Template-Config.sh`` file should have a short explanation for each flag.

* Verification and examples

  * If possible, create one or more additional examples illustrating and testing the module.

Major code updates
==================

A list of important bugfixes and additions, with the date and commit id:

+--------------------------------+-----------------------+------------+
| **Description**                | **date (dd.mm.yyyy)** | **commit** |
+================================+=======================+============+
| Public version complete        | 20.03.2019            |            |
+--------------------------------+-----------------------+------------+
......@@ -4,7 +4,7 @@ Diagnostic output
Arepo will not only output the simulation snapshot and reduced data via
the halo-finder files, but also a number of (mostly ascii) diagnostic log-
files which contain important information about the code performance and
runtime behavior.
In practice, to quickly check the performance of large
......@@ -104,7 +104,7 @@ cpu.txt
=======
At each sync-point, such a block is written. This file
reports the result of the different timers built into Arepo. Each
computationally expensive operation has a different timer attached to it and
this way allows one to closely monitor what the computational time is spent on.
Some of the timers (e.g. treegrav) have sub-timers for individual operations.
......@@ -114,8 +114,8 @@ possible to identify inefficient parts of the overall algorithm and optimize
only the most time-consuming parts of the code. There is the option
``OUTPUT_CPU_CSV`` which also returns this data as a ``cpu.csv`` file.
The different columns are:
name; wallclock time (in s) this step; percentage this step; wallclock time
(in s) cumulative; percentage up to this step. A typical block of cpu.txt looks
as follows (here for a gravity-only, tree-only run):
......@@ -264,13 +264,13 @@ are written into this file, e.g.
memory.txt
==========
Arepo internally uses its own memory manager. This means that one large chunk of
memory is reserved initially for Arepo (specified by the parameter
``MaxMemSize``) and allocation for individual arrays is handled internally.
The reason for introducing this was to avoid memory fragmentation during
runtime on some machines, but also to have detailed information about how much
memory Arepo actually needs and to terminate if this exceeds a pre-defined
threshold. ``memory.txt`` reports this internal memory usage, and how much memory
is actually needed by the simulation.
.. code-block :: python
......@@ -461,7 +461,7 @@ is actually needed by the simulation.
sfr.txt
=======
In case ``USE_SFR`` is active, Arepo will create a ``sfr.txt`` file, which reports
the stars created in every call of the star-formation routine.
The individual columns are:
......@@ -493,7 +493,7 @@ gravitational interactions on the largest possible timestep that is allowed by
the timestep criterion and allowed by the binary hierarchy of time steps.
For each timestep, a linked list of particles on this particular
integration step exists, and their statistics are reported in ``timebins.txt``.
In this file, the number of gas cells and collisionless particles in each
timebin (i.e. integration timestep) is reported for each sync-point, as well
as the cpu time and fraction spent on each timebin. A typical block looks like
......@@ -6,38 +6,41 @@
Arepo documentation
**********************
Arepo is a massively parallel code for gravitational n-body systems and
magnetohydrodynamics, both on a Newtonian as well as on a cosmological background.
It is a flexible code that can be applied to a variety of different types of
simulations, offering a number of sophisticated simulation algorithms. A
description of the numerical algorithms employed by the code is given in the
public release paper.
For a more detailed discussion of these algorithms, the original code paper
and subsequent publications are the best resource. This documentation only
addresses the question of how to use the different numerical algorithms.
Arepo was written by Volker Springel (vspringel@mpa-garching.mpg.de) with further
development by Rüdiger Pakmor (rpakmor@mpa-garching.mpg.de) and contributions by
many other authors (www.arepo-code.org/people).
The public version of the code was compiled by Rainer Weinberger
(rainer.weinberger@cfa.harvard.edu).
Overview
========
The Arepo code was initially developed to combine the advantages of
finite-volume hydrodynamics schemes with the Lagrangian invariance of
smoothed particle hydrodynamics (SPH) schemes. To this end, Arepo makes use of an
unstructured Voronoi-mesh which is, in its standard setting, moving with
the fluid in a quasi-Lagrangian fashion. The fluxes between cells are computed
using a finite-volume approach, and further spatial adaptivity is
provided by the possibility to add and remove cells from the
mesh according to defined criteria. In addition to gas, Arepo allows for a
number of additional particle types which interact only gravitationally, as
well as for forces from external gravitational potentials.
Arepo is optimized for, but not limited to, cosmological simulations of galaxy
formation and consequently for simulations with very high dynamic ranges in
space and time. Therefore, Arepo employs adaptive timestepping for each
individual cell and particle as well as a dynamic load and memory
balancing scheme. In its current version, Arepo is fully MPI parallel, and tested
to run with >10,000 MPI tasks. The exact performance is, however, highly problem and
machine dependent.
......@@ -46,7 +49,7 @@ Disclaimer
==========
It is important to note that the performance and accuracy of the code are a
sensitive function of some of the code parameters. We also stress that Arepo
comes without any warranty, and without any guarantee that it produces correct
results. If in doubt about something, reading (and potentially improving) the
source code is always the best strategy to understand what is going on!
......@@ -65,7 +68,7 @@ such recommendations. We encourage every simulator to find out for
herself/himself what integration settings are needed to achieve sufficient
accuracy for the system under study. We strongly recommend making convergence
and resolution studies to establish the range of validity and the uncertainty of
any numerical result obtained with Arepo.
Table of contents
......@@ -2,7 +2,7 @@
Related codes
*************
The general code structure of Arepo is similar to the Gadget code, but with
significant further developments and improvements. However, keeping in mind
that the code architecture originates from a Tree-PM-SPH code helps to
understand why certain things are done in certain ways.
......@@ -16,9 +16,9 @@ presence of the neighbour tree, a search-tree structure on gas cells. In
practice this means that many of the routines that depend on the state of
neighbouring cells, including sub-grid feedback models etc., can be based on
this neighbour search. This makes it possible to port these models from earlier
versions of Gadget to Arepo.
However, it is important to realize that routines written for the Gadget code
will in general not work directly when ported to Arepo. We encourage developers
to carefully test that the implementation still has the same effect.
Apart from the differences in implementation, it is also important to realize
......@@ -29,7 +29,7 @@ a sub-grid implementation.
Differences to the development version
======================================
This public version of Arepo was branched off from the development version in
November 2017, substantially cut down, cleaned up, and the documentation completed
by Rainer Weinberger. The main reason for doing this is to provide a code to the
community that researchers are able to understand and use without the need of
......@@ -40,7 +40,7 @@ starting point for new developments.
The general idea for this public version was to preserve the well-tested code
with a limited number of changes. However, some compile-time options have been
eliminated and to recover the same running mode **in the development version
of Arepo**, the following compile-time flags need to be set in `Config.sh`
* ``PERIODIC`` (has become obsolete over the years)
* ``VORONOI`` (the development version also has an AMR mode)
......@@ -4,11 +4,11 @@ Parameterfile
*****************
The parameter file, often named ``param.txt``, is a file containing run-time
options for Arepo. These include things like the input and output directory,
maximum runtimes, memory limits, and all kinds of freely choosable simulation
and model parameters. In general, the code will output an error message if
there are either missing parameters for the given configuration Arepo was
compiled with, or if there are obsolete parameters. The latter can be
deactivated by setting the compile-time flag ``ALLOWEXTRAPARAMS``.
Unlike changing Config.sh, changing the parameters does not require
re-compilation of the code.
......@@ -35,7 +35,7 @@ Initial conditions
are supported, selected by one of the choices "1", "2", or "3". Format "1"
is the traditional Fortran-style unformatted format familiar from Gadget.
Format "2" is a variant of this format, where each block of data is
preceded by a 4-character block identifier. Finally, format "3" selects the
HDF5 format.
-----
......@@ -64,7 +64,7 @@ Initial conditions
``MHD_SEEDFIELD``
Direction of the uniform B field that is set before starting the simulation.
The direction is encoded by sum(2^k), where k is the index of direction
(0, 1, 2 for x, y, z, respectively). E.g. 3 is a diagonal field in
the xy plane, perpendicular to the z axis. This allows only orientations along
coordinate axis, perpendicular to it or diagonal. Note that the equations of
......@@ -84,7 +84,7 @@ Initial conditions
``TILE_ICS``
Factor by which the ICs are duplicated in each dimension. Should be an
integer.
-----
......@@ -127,7 +127,7 @@ Output file names and formats
hardware configuration. It can also help to avoid problems due to big
files for large simulations. Note that initial conditions may also
be distributed into several files, the number of which is automatically
recognized by the code and does not have to be equal to
``NumFilesPerSnapshot`` (it may also be larger than the number of
processors).
......@@ -162,7 +162,7 @@ Output file names and formats
The number of files the code may read or write simultaneously when writing
or reading snapshot/restart files. If the value of this parameter is larger
than the number of processors, it is capped by that. This parameter is only
important for very large runs, where the file-system can be significantly
affected by too many tasks writing (restart files) at the same time.
......@@ -182,7 +182,7 @@ Output frequency
**CpuTimeBetRestartFile**
The value specified here gives the time in seconds the code will run before it
writes regularly produced restart files. This can be useful to protect
against unexpected interruptions (for example due to a hardware problem) of
a simulation, particularly if it is run for a long time. It is then possible
......@@ -270,7 +270,7 @@ Memory allocation
The memory allocated per MPI task, in megabytes. A contiguous memory arena of
this total size is allocated at startup, and then partitioned internally
within Arepo for memory allocation and deallocation requests. Can generally
be set to ~95% of the total available, e.g. (memory per node / number of MPI
tasks per node), to leave room for operating system tasks and MPI buffers.
This value can be changed on a restart to increase the amount of memory
......@@ -283,7 +283,7 @@ Simulated time and spatial extent
**BoxSize**
The box size for the simulation, in internal code units.
All particles and gas cells in the ICs must have Coordinates
within the range ``[0,BoxSize]`` in each dimension. The only exception from
this is for collisionless particles in a tree-only gravity mode (no
......@@ -343,7 +343,7 @@ Cosmological parameters
**OmegaBaryon**
Gives the baryon density in units of the critical