Commit db7edc78 authored by Volker Springel

edit in docs

parent b9160434
Showing 5515 additions and 1 deletion
@@ -3,7 +3,7 @@
 GADGET-4
 ========
-![image top](top.jpg)
+![image top](documentation/img/top.jpg)
 GADGET-4 is a massively parallel code for N-body/hydrodynamical
 cosmological simulations. It is a flexible code that can be applied to
...
Introduction to GADGET-4 {#mainpage}
========================
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  ___     __    ____    ___   ____  ____        __
 / __)   /__\  (  _ \  / __) ( ___)(_  _) ___  /. |
( (_-.  /(__)\  )(_) )( (_-.  )__)   )(  (___)(_  _)
 \___/ (__)(__)(____/  \___/ (____) (__)        (_)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
GADGET-4 is a massively parallel code for N-body/hydrodynamical
cosmological simulations. It is a flexible code that can be applied to
a variety of different types of simulations, offering a number of
sophisticated simulation algorithms. An account of the numerical
algorithms employed by the code is given in the original code paper,
subsequent publications, and this documentation.
GADGET-4 was written mainly by
[Volker Springel](mailto:vspringel@mpa-garching.mpg.de), with
important contributions and suggestions being made by numerous people,
including [Ruediger Pakmor](mailto:rpakmor@mpa-garching.mpg.de),
[Oliver Zier](mailto:ozier@mpa-garching.mpg.de), and
[Martin Reinecke](mailto:martin@mpa-garching.mpg.de).
Except in very simple test problems, one or more specialized code
options will typically be used. These are broadly termed modules and
are activated through compile-time flags. Each module may in turn come
with further compile-time options and/or parameter file settings.
Overview and history {#overview}
====================
In its current implementation, the simulation code GADGET-4 (**GA**laxies
with **D**ark matter and **G**as int**E**rac**T** - this
peculiar acronym hints at the code's origin as a tool for studying
galaxy collisions) supports collisionless simulations and smoothed
particle hydrodynamics on massively parallel computers. All
communication between concurrent execution processes is done either
explicitly by means of the message passing interface (MPI), or
implicitly through shared-memory accesses on processes on multi-core
nodes. The code is mostly written in ISO C++ (assuming the C++11
standard), and should run on all parallel platforms that support at
least MPI-3. So far, the compatibility of the code with current
Linux/UNIX-based platforms has been confirmed on a large number of
systems.
The code can be used for plain Newtonian dynamics, or for cosmological
integrations in arbitrary cosmologies, both with or without periodic
boundary conditions. Stretched periodic boxes, and special cases such
as simulations with two periodic dimensions and one non-periodic
dimension are supported as well. The modeling of hydrodynamics is
optional. The code is adaptive both in space and in time, and its
Lagrangian character makes it particularly suitable for simulations of
cosmic structure formation. Several post-processing options such as
group- and substructure finding, or power spectrum estimation are built
in and can be carried out on the fly or applied to existing
snapshots. Through a built-in cosmological initial conditions
generator, it is also particularly easy to carry out cosmological
simulations. In addition, merger trees can be determined directly by
the code.
The main reference for numerical and algorithmic aspects of the code
is the paper "Simulating cosmic structure formation with the GADGET-4
code" (Springel et al., 2020, MNRAS, submitted), and references
therein. Further information on the previous public versions of GADGET
can be found in "The cosmological simulation code GADGET-2" (Springel,
2005, MNRAS, 364, 1105), and in "GADGET: A code for collisionless and
gas-dynamical cosmological simulations" (Springel, Yoshida & White,
2001, New Astronomy, 6, 51). It is recommended to read these papers
before attempting to use the code. This documentation provides
additional technical information about the code, hopefully in a
sufficiently self-contained fashion to allow anyone interested to
learn how to use the code for cosmological N-body/SPH simulations on
parallel machines.
Most core algorithms in GADGET-4 have been written by Volker Springel
and constitute evolved and improved versions of earlier
implementations in GADGET-2 and GADGET-3. Substantial contributions to
the code have also been made by all the authors of the GADGET-4 code
paper. Note that the code is made publicly available under the GNU
General Public License. This implies that you may freely copy,
distribute, or modify the sources, but the copyright for the original
code remains with the authors. If you find the code useful for your
scientific work, we kindly ask you to include a reference to the code
paper on GADGET-4 in all studies that use simulations carried out with
the code.
Disclaimer {#disclaimer}
==========
It is important to note that the performance and accuracy of the code
are a sensitive function of some of the code parameters. We also stress
that GADGET-4 comes without any warranty, and without any guarantee
that it produces correct results. If in doubt about something, reading
(and potentially improving) the source code is always the best
strategy to understand what is going on!
**Please also note the following:**
The numerical parameter values used in the examples contained in the
code distribution do not represent a specific recommendation by the
authors! In particular, we do not endorse these parameter settings in
any way as standard values, nor do we claim that they will provide
fully converged results for the example problems, or for other initial
conditions. We think that it is extremely difficult to make general
statements about what accuracy is sufficient for certain scientific
goals, especially when one desires to achieve it with the smallest
possible computational effort. For this reason we refrain from making
such recommendations. We encourage every simulator to find out for
herself/himself what integration settings are needed to achieve
sufficient accuracy for the system under study. We strongly recommend
making convergence and resolution studies to establish the range of
validity and the uncertainty of any numerical result obtained with
GADGET-4.
Types of simulations
====================
[TOC]
There are a number of qualitatively different simulation set-ups that
can be run with GADGET-4. They differ mostly in the type of boundary
conditions employed, in the physics that is included, in whether or not
cosmological integrations with comoving coordinates are used, and in
the selection of numerical algorithms. A schematic overview of a
subset of these simulation types, concentrating on how gravity
calculations are done, is given in the following table.
<table>
<caption id="multi_row">Overview of simulation types</caption>
<tr><th>Type of Simulation <th>Computational Method <th>Remarks
<tr><td> <img src="../../documentation/img/type1.png" width="100"> Newtonian space
<td> Gravity: Tree, SPH (optional), vacuum boundary conditions
<td> OmegaLambda should be set to zero
<tr><td> <img src="../../documentation/img/type2.png" width="100"> Periodic long box
<td> No gravity, only SPH, periodic boundary conditions
<td> SELFGRAVITY needs to be deactivated, LONG_X/Y/Z may be
set to scale the dimensions of the box
<tr><td> <img src="../../documentation/img/type3.png" width="100"> Cosmological, physical coordinates
<td> Gravity: Tree, SPH, vacuum boundaries
<td> ComovingIntegrationOn set to zero
<tr><td> <img src="../../documentation/img/type4.png" width="100"> Cosmological, comoving coordinates
<td> Gravity: Tree, SPH, vacuum boundaries
<td> ComovingIntegrationOn set to one
<tr><td> <img src="../../documentation/img/type5.png" width="100"> Cosmological, comoving periodic box
<td> Gravity: Tree with Ewald-correction, SPH, periodic boundaries
<td> PERIODIC needs to be set
<tr><td> <img src="../../documentation/img/type6.png" width="100"> Cosmological, comoving coordinates, TreePM
<td> Gravity: Tree with long range PM, SPH, vacuum boundaries
<td> PMGRID needs to be set
<tr><td> <img src="../../documentation/img/type7.png" width="100"> Cosmological, comoving periodic box, TreePM
<td> Gravity: Tree with long range PM, SPH, periodic boundaries
<td> PERIODIC and PMGRID need to be set
<tr><td> <img src="../../documentation/img/type8.png" width="100"> Cosmological, comoving coordinates, TreePM, Zoom
<td> Gravity: Tree with long-range and intermediate-range PM, SPH, vacuum boundaries
<td> PMGRID and PLACEHIGHRESREGION need to be set
<tr><td> <img src="../../documentation/img/type9.png" width="100"> Cosmological, periodic comoving box, TreePM, Zoom
<td> Gravity: Tree with long-range and intermediate-range PM, SPH, periodic boundaries
<td> PERIODIC, PMGRID and PLACEHIGHRESREGION need to be set
<tr><td> <img src="../../documentation/img/type10.png" width="100"> Newtonian space, TreePM
<td> Gravity: Tree with long-range PM, SPH, vacuum boundaries
<td> PMGRID needs to be set
</table>
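As an illustration of one of these set-ups, a cosmological TreePM
simulation in a periodic comoving box could be configured with
compile-time settings along the following lines, together with
`ComovingIntegrationOn` set to 1 in the parameterfile. This is only a
sketch with a placeholder mesh size, not a recommended configuration:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# sketch of a Config.sh for a periodic, comoving TreePM box
# (placeholder values, not a recommendation)
PERIODIC
SELFGRAVITY
PMGRID=512
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~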
Cosmological Simulations {#cosmosims}
========================
Cosmological integrations with comoving coordinates are selected via
the parameter `ComovingIntegrationOn` in the parameterfile. Otherwise
the simulations always assume ordinary Newtonian space. The use of
self-gravity needs to be explicitly enabled through the `SELFGRAVITY`
compile-time switch, otherwise only external gravitational fields
and/or hydrodynamical processes are computed. Periodic boundary
conditions and the various variants of the Tree, FMM, TreePM, and
FMM-PM algorithms require compile-time switches to be set
appropriately in the configuration file.
In particular, the TreePM algorithm is switched on by passing the
desired mesh-size at compile time via the Config.sh file to the
code. The relevant parameter is `PMGRID`, see below. Using an explicit
force split, the long-range force is then (normally) computed with
Fourier techniques, while the short-range force is calculated with the
tree. Because the tree then only needs to be walked for the
short-range part of the force, a speed-up can arise, particularly for
nearly homogeneous particle distributions, but not only for those. Both periodic and non-periodic
boundary conditions are implemented for the TreePM and FMM-PM
approaches. In the non-periodic case, the code will internally compute
FFTs of size `HRPMGRID`. In order to accommodate the required
zero-padding, only half that size is actually used to cover the
high-res region. If `HRPMGRID` is not specified, a default value equal
to `PMGRID` is used. For zoom-simulations, the code automatically
places the refined mesh-layer on the high-resolution region. This is
controlled with the `PLACEHIGHRESREGION` option.
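For example, a non-periodic zoom run of this kind might be compiled
with settings of the following kind. This is only a sketch with
placeholder values: `HRPMGRID` may be omitted, in which case it
defaults to `PMGRID`, and the value of `PLACEHIGHRESREGION` is assumed
here to be a bitmask selecting the particle type(s) that span the
high-resolution region, which should be checked against the
description of the configuration options:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# sketch of a Config.sh for a non-periodic TreePM zoom run
# (placeholder values, not a recommendation)
SELFGRAVITY
PMGRID=256
HRPMGRID=256
PLACEHIGHRESREGION=2
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~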
Newtonian space {#newtonsims}
===============
Periodic boxes need not necessarily be cubical in GADGET-4. They can
be, if desired, stretched independently in any of the coordinate
directions through `LONG_X`, `LONG_Y` and `LONG_Z` options. This works
both for SPH simulations and also for simulations including
self-gravity. In the latter case, one may also choose one direction to
be non-periodic in self-gravity, realizing mixed boundary conditions
that allow one to simulate infinitely extended two-dimensional
systems.
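As an illustration, a periodic box stretched by a factor of 4 in the
x-direction (a placeholder value) could be configured with a fragment
like the following; `LONG_Y` and `LONG_Z` work analogously:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# sketch of a Config.sh fragment for a periodic box stretched in x
# (placeholder value)
PERIODIC
LONG_X=4
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~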
Stretched boxes {#stretched}
===============
This can now also be done with gravity and periodic boundaries in just
two dimensions.
Two-dimensional simulations {#twodims}
===========================
It is also possible to run SPH simulations in two dimensions only,
which is primarily provided for test-purposes. Note that the code is
not really optimised for doing this; three coordinates are still
stored for all particles in this case, and the computations are
formally carried out as in the 3D case, except that all particles lie
in one coordinate plane, i.e. either with equal x, y, or
z-coordinates.
Snapshot file format
====================
[TOC]
The primary results of a simulation with GADGET-4 are snapshots, which
are simply dumps of the state of the system at certain times. GADGET-4
supports parallel output by distributing a snapshot into several
files, each written by a group of processors. This procedure allows an
easier handling of very large simulations; instead of having to deal
with a single file of dozens of GB in size, say, it is much easier to
handle several files with a smaller size of a few hundred MB to a
couple of GB each. Also, the time spent for I/O in the simulation code
can be reduced if several files are written in parallel.
Each particle dump consists of `k` files, where `k` is given by
`NumFilesPerSnapshot`. For `k > 1`, the filenames are of the form
`snapshot_XXX.Y`, where `XXX` stands for the number of the dump, `Y`
for the number of the file within the dump. Say we work on dump 7 with
`k=16` files, then the filenames are `snapshot_007.0` to
`snapshot_007.15`, and if HDF5 is in use (recommended!), they will be
`snapshot_007.0.hdf5` to `snapshot_007.15.hdf5`. They will actually be
stored in a directory called `snapdir_007`, created in the output
directory. For `k=1`, the filenames will just have the form
`snapshot_XXX` and no separate subdirectory is created for the
snapshot. The base name "snapshot" can be changed by setting
`SnapshotFileBase` to another value.
Each of the individual files of a given set of snapshot files contains
a variable number of particles, but the files all have the same basic
format (this is the case for all three fileformats supported by
GADGET), and all of them are in binary. A binary representation of the
particle data is the preferred choice, because it allows much faster
I/O than ASCII files, and in addition, the resulting files are much
smaller while still providing loss-less storage of the data.
Legacy Format 1 {#format1}
===============
In the original default file format of GADGET (selected with
`SnapFormat=1`), the data is organised in blocks, each containing a
certain information about the particles. For example, there is a block
for the coordinates, and one for the temperatures, etc. The sequence
of blocks in snapshots files of GADGET-4 (which for the most part is
compatible with older versions of GADGET) is given in the table
below. Not all blocks are necessarily present in all simulations; for
example, the blocks describing gas properties such as internal energy
or density are only included in hydrodynamic simulations, and some
output blocks are only written if the corresponding `Config.sh` option
is enabled. The presence of the
mass block depends on whether or not particle masses are defined to be
constant for certain particle types by means of the `Massarr` table in
the file header. If a non-zero mass is defined there for a certain
particle type, particles of this type will not be listed with
individual masses in the mass block. If such fixed particle masses are
defined for all types that are present in the snapshot file, the mass
block will be completely absent.
Nr | Format2-ID | HDF5 Identifier | Block contents
----|------------|------------------|------------------------------------------------------
1 | HEAD | Header | File header
2 | POS | Coordinates | Particle positions
3 | VEL | Velocities | Particle velocities
4 | ID | ParticleIDs | Particle IDs
5 | MASS | Masses | Masses (only for particle types with variable masses)
6 | U | InternalEnergy | Thermal energy per unit mass (only SPH particles)
7 | RHO | Density | Density of SPH particles
8 | HSML | SmoothingLength | SPH smoothing length h
9 | POT | Potential | Gravitational potential of particles
10 | ACCE | Acceleration | Acceleration of particles
11 | ENDT | RateOfChangeOfEn | Rate of change of entropic function of SPH particles
12 | TSTP | TimeStep | Timestep of particles
Within each block, the particles are ordered according to their
particle type, i.e. gas particles will come first (type 0), then
type-1 particles, followed by type-2 particles, and so on. However, it
is important to realize that the detailed sequence of particles within
the blocks may change from snapshot to snapshot. Also, a given
particle may not always be stored in the snapshot file with the same
sub-number among the files belonging to one set of snapshot files.
This is because particles may move around from one processor to
another during the course of a parallel simulation. In order to trace
a particle between different outputs, one therefore has to resort to
the particle IDs, which are intended to be used to label particles
uniquely. (In fact, the uniqueness of the IDs in the initial
conditions is checked upon start up of the code.)
The first block (number 1) has a special role: it is a header which
contains global information about the particle set, for example the
number of particles of each type, the number of files used for this
snapshot set, etc. The fields of the file header for formats 1 and 2
in GADGET-4 are given in the table below:
Header Field | Type | HDF5 name | Comment
-------------|--------|---------------------|-------------------
Npart[6] | uint | NumPart_ThisFile | The number of particles of each type in the present file.
Nall[6] | uint64 | NumPart_Total | Total number of particles of each type in the simulation.
Massarr[6] | double | MassTable | The mass of each particle type. If set to 0 for a type which is present, individual particle masses are stored for this type.
Time | double | Time | Time of output, or expansion factor for cosmological simulations.
Redshift | double | Redshift | z=1/a-1 (only set for cosmological integrations)
BoxSize | double | BoxSize | Gives the box size if periodic boundary conditions are used.
NumFiles | int | NumFilesPerSnapshot | Number of files in each snapshot.
In GADGET-1/2/3, the layout of the fields in the header was slightly
different. In particular, the number of particle types was always
fixed to 6 (now this is given by `NTYPES`), and the total particle
number was stored as a 32-bit integer, so that simulations exceeding
particle numbers of a couple of billion needed a special (and ugly)
extension of the header, in which the high-order bits of the particle
numbers were stored in a separate entry. In addition, cosmological
parameters like `Omega0` and various flags informing about
enabled/disabled code features were redundantly stored there as
well. Finally, the header was padded to a total length of exactly 256
bytes, with a view to reserving the extra space for future
extensions. However, as the block-size guards (see below) anyhow store
the length of the header in the file, this was a superfluous
restriction. In GADGET-4, it has therefore been lifted, and other
clean-ups of the header structure have been implemented as well (such
as going to 64-bit integers for the particle numbers). While this
makes the new file format incompatible with older versions of GADGET,
backwards compatibility can be enforced by setting the
`GADGET2_HEADER` switch.
To allow easy access to the data also in Fortran (this should not
be misunderstood as an encouragement to use Fortran), the blocks are
stored using the "unformatted binary" convention of most Fortran
implementations. In it, the data of each read or write statement is
bracketed by block-size fields, which give the length of the data
block in bytes. These block fields (which are two 4-byte integers, one
stored before the data block and one after) can be useful also in
C-code to check the consistency of the file structure, to make sure
that one reads at the right place, and to skip individual blocks
quickly by using the length information of the block size fields to
fast forward in the file without actually having to read the data. The
latter makes it possible to efficiently read only certain blocks, for
example, just temperatures and densities but no coordinates and
velocities. Note however that this block size structure imposes the
restriction that individual blocks may not be larger than 4 GB in file
formats 1 and 2.
Assuming that variables have been allocated/declared appropriately, a
possible read-statement of some of these blocks in Fortran could then
for example take the form:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
read (1) npart, nall, massarr, a, redshift   ! header (only the first few fields)
read (1) pos                                 ! particle positions
read (1)                                     ! skip the velocity block
read (1) id                                  ! particle IDs
read (1) masses                              ! variable particle masses (if present)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In this example, the block containing the velocities, and several
fields in the header, would be skipped. Further read-statements may
follow to read additional blocks if present. In the file
`read_snapshot.f` included in the code distribution, you can find a
simple example of a more complete read-in routine. There you will
also find a few lines that automatically generate the filenames, and
further hints on how to read in only certain parts of the data.
When you use C, the block-size fields need to be read or skipped
explicitly, which is quite simple to do. This is demonstrated in the
file `read_snapshot.c`. Note that you need to generate the
appropriate block-size fields when you write snapshots in C, for
example when generating initial conditions for GADGET.
GADGET-4 will check explicitly that the block-size fields contain the
correct values and refuse to continue if not. It will also use the
block-size information to convert between single and double precision
(or vice versa), and between 32-bit and 64-bit, where appropriate,
i.e. when reading in files the code will autodetect if single or
double precision is used and do conversions if needed.
Legacy Format 2 {#format2}
===============
Whether or not a certain block is present in GADGET's snapshot file
format depends on the type of simulation, and the makefile
options. This makes it difficult to write reasonably general I/O
routines for analysis software when file formats 1 and 2 are used,
capable of dealing with output produced for a variety of makefile
settings. For example, if one wants to analyse the (optional) rate of
entropy production of SPH particles, then the location of this block
in the file depends on whether, for example, the output of the
gravitational potential is enabled or not. This problem becomes particularly acute
if more complicated baryonic physics modules or optional output fields
are added to GADGET runs.
To remedy this problem, a variant of the default fileformat was
implemented in GADGET (based on a suggestion by Klaus Dolag), which
can be selected by setting `SnapFormat=2`. Its only difference is that
each block is preceded by a small additional block which contains an
identifier in the form of a 4-character string (filled with spaces
where appropriate). This identifier is listed in the "Format2-ID"
column of the block table above. Using it, one can write more flexible
I/O routines and analysis software that quickly fast-forward to the
blocks of interest, without having to know in advance where they are
located in the snapshot file.
HDF5 file format {#format3}
================
A yet more general solution is provided by HDF5, which is nowadays the
*strongly* recommended format for the code. If the hierarchical data
format is selected as format for snapshot files or initial conditions
files, GADGET-4 accesses files with low level HDF5
routines. Advantages of HDF5 lie in its portability (e.g. automatic
endianness conversion) and generality (which however results in
somewhat more complicated read and write statements), and in the
availability of a number of tools to manipulate or display HDF5
files. Also, attributes such as units or conversion factors can be
stored alongside datasets. Together with HDF5 utility commands like
h5ls, h5dump, etc., this makes the file format self-documenting to a
significant degree. A wealth of further information about HDF5 can be
found on the website of the HDF5 library.
In the HDF5 file format of GADGET, the blocks are stored as datasets
with names as given in the column "HDF5 Identifier" in the table
above. To make accessing different particle types easier, a separate
data group is defined for each particle type present in the snapshot
file, named `ParticleType0`, `ParticleType1`, and so on, and the
blocks of the standard file format appear as datasets
within these groups. These data groups can be thought of as
subdirectories in the HDF5 file, with each being devoted to one
particular particle type. In addition, in HDF5 each of these datasets
is also equipped with attribute values that store conversion factors
to cgs units, as well as factors that inform whether one has to
multiply with a certain power of the scale factor and/or of the
dimensionless Hubble parameter `h` to get to truly physical
quantities.
Finally, the file header is also moved to a special `Header` group in
the HDF5 output. This group does not contain datasets, but only
attributes that hold the values for all the fields defined for file
format 1 and 2 of GADGET-4. The names of these attributes are listed
in the column "HDF5 name" in the table above.
To graphically explore the contents of HDF5 files, the program
HDFView, available for free from the HDF-Group, is one good
possibility. It allows an easy exploration of the structure and the
contents of the various HDF5 outputs produced by GADGET-4. It also
readily reveals the types of the dataset fields, the values of attributes,
etc., and hence greatly facilitates the writing of corresponding
analysis scripts.
Alternatively, one can also explore the contents of HDF5 files at the
command line, using simple tools that are part of the HDF5
library. For example, to list contents of the header of a snapshot
file on the command line, use the following command:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
h5dump -g Header snapshot_007.3.hdf5
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To list, for example, which blocks are available for type-0 particles,
you can use the following command:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
h5ls snapshot_007.3.hdf5/ParticleType0
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Just using `h5ls snapshot_007.3.hdf5` would tell you which particle types are
available as separate data groups.
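To obtain a recursive listing of all groups and datasets contained in
the file in one go, the `-r` option of h5ls can be used:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
h5ls -r snapshot_007.3.hdf5
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~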
Finally, GADGET-4 also stores all the parameters and config file
options with which it was run to produce the given snapshot in all its
HDF5 files. This is realized through attributes stored in two special
data groups called `Parameters` and `Config`.
To examine the parameter file settings that were used, you can hence
issue the command:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
h5dump -g Parameters snapshot_007.3.hdf5
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
And for retrieving the `Config.sh` options, you can use:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
h5dump -g Config snapshot_007.3.hdf5
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Here are some additional notes on the format, units, and variable
types of the individual blocks in the snapshot files:
- Particle positions: A floating point 3-vector for each particle,
giving the comoving coordinates in internal length units
(corresponds to kpc/h if the above choice for the system of units is
adopted). N=sum(header.Npart) is the total number of particles in
the file. Note: For file formats 1 and 2, the particles are ordered
in the file according to their type, so it is guaranteed that
particles of type 0 come before those of type 1, and so on. However,
within a type, the sequence of particles can change from dump to
dump. For file format 3, different particle types appear in separate
particle groups.
- Particle velocities: Particle velocities `u` are in internal
velocity units (corresponds to km/sec if the default choice for the
system of units is adopted). For cosmological simulations, peculiar
velocities `v` are obtained from `u` by multiplying `u` with
sqrt(a), i.e. v = u * sqrt(a).
- Particle identifiers: These are unsigned 32-bit or 64-bit integers
(if `IDS_64BIT` is set) and intended to provide a unique
identification of particles. The order of particles may change
between different output dumps, but the IDs can be easily used to
bring the particles back into the original order for every dump.
- Variable particle masses: Single or double precision floats, with a
total length Nm. Only stored for those particle types that have
variable particle masses (indicated by zero entries in the
corresponding `massarr` entry in the header). Nm is thus the sum of
those `npart` entries that have vanishing `massarr` . If `Nm=0`,
this block is not present at all.
- Internal energy: Single/double precision float values for
Ngas. Internal energy per unit mass for the `Ngas=npart(0)` gas
particles in the file. The block is only present for `Ngas>0`. Units
are again in internal code units, i.e. for the standard system of
units, `u` is given in `(km/sec)^2` .
- Density: The comoving density of SPH particles. Units are again in
internal code units, i.e. for the above system of units, `rho` is
given in `10^10 Msun/h / (kpc/h)^3`.
- Smoothing length: Smoothing length of the SPH particles. Given in
comoving coordinates in internal length units.
- Gravitational potential: This block will only be present if it is
explicitly enabled in the makefile.
- Accelerations: Likewise, only present if it is explicitly enabled in
the makefile.
- Rate of entropy production: The rate of change of the entropic
function of each gas particle. For adiabatic simulations, the
entropy can only increase due to the implemented artificial
viscosity. This block will only be present if it is explicitly
enabled in the makefile.
- Timesteps of particles: Individual particle timesteps of
particles. For cosmological simulations, the values stored here are
d ln(a), i.e. intervals of the natural logarithm of the expansion
factor. This block will only be present if it is explicitly enabled
in the makefile.
Format of initial conditions {#formatICs}
============================
The possible file formats for initial conditions are the same as those
for snapshot files, and are selected with the `ICFormat`
parameter. However, only the blocks up to and including the gas
temperature (if gas particles are included) need to be present; gas
densities, SPH smoothing lengths and all further blocks need not be
provided and are ignored.
In preparing initial conditions for simulations with gas particles,
the temperature block can be filled with zero values, in which case
the initial gas temperature is set in the parameterfile with the
`InitGasTemp` parameter. However, even when this is done, the
temperature block must still be present. Note that the field `Time`
in the header will be ignored when GADGET-4 is reading an initial
conditions file. Instead, you have to set the time of the start of the
simulation with the `TimeBegin` parameter in the parameterfile.
Special code features
=====================
[TOC]
The GADGET-4 code contains a number of modules that take the form of
extensions of the code for specific science applications or common
postprocessing tasks. Examples include merger-tree creation,
lightcone outputs, or power spectrum measurements. Here we briefly
describe the usage of the most important of these modules in GADGET-4.
Initial conditions {#ngenic}
==================
GADGET-4 contains a built-in initial conditions generator for
cosmological simulations (based on the N-GenIC code), which supports
both DM-only and DM plus gas simulations. Only cubical periodic boxes
are supported at this point. Once the IC-module is compiled in (by
setting `NGENIC` in the configuration), the code will create initial
conditions upon regular start-up and then immediately start a
simulation based on them. It is also possible to instruct the code to
only create the ICs, store them in a file and then end, which is
accomplished by launching the code with restartflag 6.
![k-space grid](../../documentation/img/ic-code.png)
The NGENIC option needs to be set to the size of the FFTs used in the
initial conditions creation, and the meaning of the other code
parameters that are required for describing the initial conditions is
described in detail in the relevant section of this guide.
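For example, assuming the executable is called `Gadget4` and the
parameter file `param.txt` (placeholder names), and that the code was
compiled with, say, `NGENIC=256` in `Config.sh`, the initial
conditions could be created and written to disk without starting the
actual simulation by launching the code with restartflag 6:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
mpirun -np 16 ./Gadget4 param.txt 6
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~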
Merger trees {#mtree}
============
The merger tree construction follows the concepts introduced in the
paper Springel et al. (2005),
<http://adsabs.harvard.edu/abs/2005Natur.435..629S>. It is a tree for
subhalos identified within FOF groups, i.e. it requires group finding
carried out with `FOF`, and `SUBFIND` or `SUBFIND_HBT`, and hence
these options need to be enabled when `MERGERTREE` is set. The
schematic organisation of the merger tree that is constructed is
depicted in the following sketch:
![merger tree](../../documentation/img/mergertree.png)
At each output time, FOF groups are identified which contain one or
several (sub)halos, and the merger tree connects these halos. The FOF
groups play no direct role for the tree, except that the largest halo
in a given FOF group is singled out as the main subhalo of the group. To
organize the tree(s), a number of pointers for each subhalo need to be
defined.
Each halo must know its __descendant__ in the subsequent group
catalogue at a later time, and the most important step in the merger
tree construction is determining this link. This can be accomplished
in two ways with GADGET-4. Either one enables `MERGERTREE` while a
simulation is run. Then for each new snapshot that is produced, the
descendant pointers for the previous group catalogue are computed as
well and accumulated in the output directory. The results will be
written in special files called `sub_desc_XXX`. In essence, these
provide the glue between two subsequent group catalogues. One
advantage of doing this on the fly is that this allows merger tree
constructions without ever having to output the particle data itself.
Alternatively, one can also create these files in postprocessing for a
simulation that was run without the `MERGERTREE` option. This however
requires that snapshot files are available, or at the very least, that
particle IDs have been included in the group catalogue output. The
process of creating these link files can be accomplished with
restartflag 7, which does this for the given snapshot number and the
previous output. This has to be repeated for all snapshots that should
be part of the merger tree except the first one (i.e. one starts at
output number 1 and proceeds until the last one).
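A minimal sketch of this procedure, assuming the executable is called
`Gadget4`, the parameter file `param.txt`, and that the last snapshot
to be included has number 30 (all placeholder names and values), with
the snapshot number passed as an additional command-line argument:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# build the descendant link files for snapshots 1 ... 30;
# each run links the given snapshot to the previous output
for num in $(seq 1 30); do
    mpirun -np 16 ./Gadget4 param.txt 7 $num
done
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~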
Finally, one can ask GADGET-4 to isolate individual trees and to
arrange the corresponding subhalos in a format that allows easy
processing of the trees, for example, in a semi-analytic code for
galaxy formation. This process also computes the other links shown in
the above sketch. To this end, one starts GADGET-4 with restartflag 8,
and provides the last snapshot number as additional argument. GADGET-4
will then process all the group catalogue data and the descendant link
files, and determine a new set of tree-files. The algorithms are
written such that they are fully parallel and should be able to
process extremely large simulations, with very large group catalogues
and tree sets. The tree files will normally be split up over many
files in this case, and the placing of a tree into any of these files
is randomized in order to balance them roughly in size, which
simplifies later processing. In order to quickly look up, based on a
given subhalo number from one of the time slices, in which tree this
subhalo is found, corresponding pointers are added to the group
catalogues as well.
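Continuing the sketch above (same placeholder names), the trees could
then be built from all group catalogues up to, say, snapshot 30 with:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
mpirun -np 16 ./Gadget4 param.txt 8 30
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~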
Lightcone output {#licone}
================
One new feature in GADGET-4 is the ability to output continuous light
cones, i.e. particles are stored with the position and velocity they
have at the moment the backwards lightcone passes over them. This is illustrated
in the following sketch, which shows how the code determines an
interpolated particle coordinate x' in between two endpoints of the
timestepping procedure.
![light cone](../../documentation/img/lcone.png)
This option is activated with the `LIGHTCONE` switch, and needs to be
active while the simulation is run. In this case, additional particle
outputs are created, which have a structure similar to snapshot files.
While it is in principle possible to use file formats 1 or 2 here as
well, it is highly recommended not to bother with this but rather to
use HDF5 throughout for such more complicated output. This is the only
sensible way to avoid getting caught up in struggles to parse the
(possibly frequently changing) binary file layout.
Power spectra {#powerspectrum}
=============
The GADGET-4 code can also be used to measure matter power spectra
with a high dynamic range through the "folding technique", described
in more detail in Springel et al. (2018)
<http://adsabs.harvard.edu/abs/2017arXiv170703397S>. In essence, three
power spectra are measured in each case: one for the unmodified
periodic box, yielding a conventional measurement that extends up to
close to the Nyquist frequency of the employed Fourier mesh (which is
set by `PMGRID`). The other two extend the measurement to smaller
scales by imposing periodicity on an integer subdivision of the box,
with the box folded on top of itself. The default value for this
folding factor is `POWERSPEC_FOLDFAC=16`, but this value can be
modified if desired by overriding it with a configuration option.
The measured power spectra are output in a finely binned fashion in
k-space as ASCII files. These data can easily be rebinned by
band-averaging to any desired coarser binning (which then also reduces
the statistical error of each bin), a task relegated to a plotting
script. Such a script can then also be used to combine the coarse and
fine measurements into a single plot, and to carry out a shot-noise
subtraction if desired. The shot noise, allowing for variable particle
masses if present, is also measured and written to the file. Example
plotting scripts to parse the power spectrum files are provided in the
code distribution.
There are two ways to measure the power spectra. This can either be
done on the fly whenever a snapshot file is produced, by means of the
`POWERSPEC_ON_OUTPUT` option, or one can compute a power spectrum in
postprocessing by applying the code with restartflag 4 to any of the
snapshot numbers. In both cases, power spectra are measured both for
the full particle distribution and for every particle type that is
present.
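For the postprocessing route, a hypothetical invocation (executable
and parameter file names are placeholders) that measures the power
spectra of, say, snapshot 7 would look like this:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
mpirun -np 16 ./Gadget4 param.txt 4 7
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~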
I/O bandwidth test {#iobandwith}
==================
Another small feature of GADGET-4 is a stress test for the I/O
subsystem of the target compute cluster. This is meant to provide some
information about the available I/O bandwidth for parallel write
operations, and in particular, to find out whether
`MaxFilesWithConcurrentIO` should be made smaller than the number of
MPI ranks for a specific setup, so that not too many files are written
at the same time. Writing too many files concurrently can be
counter-productive in terms of throughput, or can place such a high
load on the I/O subsystem that other users or jobs are inconvenienced.
To this end, GADGET-4 can be started with the restartflag 9 option,
using the same number of MPI ranks that is intended for a relevant
production run. The code will then not actually carry out a simulation
but instead carry out a number of systematic write tests. The tests
are repeated for different settings of `MaxFilesWithConcurrentIO`,
starting at the number of MPI ranks, and then halving this number
until it drops below unity. For each of the tests, each MPI rank tries
to write 10 MB of data to files stored in the output directory (these
are automatically deleted again after the test). The code then reports
the effective I/O bandwidth reached for the different settings of
`MaxFilesWithConcurrentIO`, and the results should indicate which
setting is reasonable. In particular, in a regime where the I/O
bandwidth increases only very weakly (i.e. strongly sub-linearly) with
`MaxFilesWithConcurrentIO`, it is usually better to go with a lower
value, for which the scaling is still approximately linear, in order
to retain some responsiveness of the filesystem when GADGET-4 does
parallel I/O.
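As a sketch (placeholder names, and assuming the intended production
run would use 512 MPI ranks), the test could be launched as:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
mpirun -np 512 ./Gadget4 param.txt 9
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~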
Parameters for special postprocessing options {#lcimage}
============================================
Parameterfile for Lightcone Imaging Tool
========================================
**LightConeImageConeNr** 0
Selects, for the lightcone image code, which of the lightcones should be processed.
-------
**LightConeImageCornerX**
Lower left corner of image in comoving coordinates.
-------
**LightConeImageCornerY**
Lower left corner of image in comoving coordinates.
-------
**LightConeImageLengthX**
Horizontal extension of image in comoving coordinates.
-------
**LightConeImageLengthY**
Vertical extension of image in comoving coordinates.
-------
**LightConeImagePicName**
Partial file name of image file
-------
**LightConeImagePixelsX**
Number of pixels for image in x-dimension.
-------
**LightConeImagePixelsY**
Number of pixels for image in y-dimension.
-------
**LightConeImageFirstConeDir**
First cone dir to be read in for image.
-------
**LightConeImageLastConeDir**
Last cone dir to be read in for image.
-------
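Taken together, the corresponding block of a parameterfile for the
lightcone imaging tool might look as follows. This is only a sketch;
all values are placeholders that need to be adapted to the actual
lightcone geometry and the desired image:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
% lightcone-imaging block of a parameterfile (placeholder values)
LightConeImageConeNr         0
LightConeImageCornerX        0.0
LightConeImageCornerY        0.0
LightConeImageLengthX        100000.0
LightConeImageLengthY        100000.0
LightConeImagePixelsX        1024
LightConeImagePixelsY        1024
LightConeImagePicName        lightcone_image
LightConeImageFirstConeDir   0
LightConeImageLastConeDir    27
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~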
documentation/img/fof_subhalo_layout.png  (24.1 KiB)
documentation/img/ic-code.png  (130 KiB)
documentation/img/lcone.png  (30.8 KiB)
documentation/img/mergertree.png  (122 KiB)
File moved
documentation/img/type1.png  (1.22 KiB)