# TurTLE issues
https://gitlab.mpcdf.mpg.de/TurTLE/turtle/-/issues

## #46: Adjust HDF5 version requirements
https://gitlab.mpcdf.mpg.de/TurTLE/turtle/-/issues/46 · 2023-11-30 · Tobias Baetge · assignee: Cristian Lalescu

The required version of HDF5 seems to be greater than or equal to 1.12. Accordingly, the documentation needs to be updated.

## #44: Adapting read_moments and read_histogram in DNS.py
https://gitlab.mpcdf.mpg.de/TurTLE/turtle/-/issues/44 · 2023-11-12 · Lukas Bentkamp · assignee: Lukas Bentkamp

There are a few things that make the `read_moments` and `read_histogram` methods in `DNS.py` less useful than they could be. I can fix them, but I'm not sure which ones you implemented on purpose this way:
- Both methods take the mean over time. Since we are already reading the whole time series into memory, why can't we return the full array?
- If you insist on aggregating over time, then in `read_moments`, we should at least take the minimum of the minima and the maximum of the maxima instead of their mean.
- Can we save the result into `self.statistics`?

## #43: executables never have "USE_TIMINGOUTPUT"
https://gitlab.mpcdf.mpg.de/TurTLE/turtle/-/issues/43 · 2023-10-06 · Cristian Lalescu · assignee: Cristian Lalescu

This comes from updates/cleanups of the CMake setup.
Problem: even though turtle is built/installed with the timing option turned on, executables never know about it.
This is because the "USE_TIMING" setting never gets defined in the `_code.py`-generated temporary cmake setup that's being used for the executables.
Current workaround: always define `USE_TIMINGOUTPUT` for the executable.
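To make the failure mode concrete, here is a minimal sketch with a hypothetical `scope_timer` class (not TurTLE's actual timing machinery): any instrumentation gated on the macro silently disappears from executables whose generated CMake setup omits the define.

```cpp
#include <chrono>
#include <cstdio>

// Hypothetical scope timer, gated the way timing output is in general: if the
// temporary CMake setup generated for an executable never passes
// -DUSE_TIMINGOUTPUT, the #else branch is compiled instead and the executable
// silently produces no timings, which is exactly the symptom described above.
#ifdef USE_TIMINGOUTPUT
struct scope_timer {
    const char* label;
    std::chrono::steady_clock::time_point start;
    explicit scope_timer(const char* l)
        : label(l), start(std::chrono::steady_clock::now()) {}
    ~scope_timer() {
        const auto dt = std::chrono::steady_clock::now() - start;
        std::printf("%s: %lld us\n", label,
            (long long)std::chrono::duration_cast<std::chrono::microseconds>(dt).count());
    }
};
#else
struct scope_timer {
    explicit scope_timer(const char*) {} // timing compiled out
};
#endif

int main() {
    scope_timer t("main"); // reports only when built with -DUSE_TIMINGOUTPUT
    return 0;
}
```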
We need a reasonable solution.

## #39: Descriptions for command line parameters
https://gitlab.mpcdf.mpg.de/TurTLE/turtle/-/issues/39 · 2021-12-16 · Lukas Bentkamp

So far, when calling `turtle DNS NSE --help` (and most other commands), the descriptions of the command line parameters are very sparse. For first-time users, I think this is a very important place to have full documentation. Apart from explaining what the parameters do, it should point out which parameters default to what value (for example, `--nx` defaults to `-n`, which defaults to some value, right?).

## #38: p2p exchange operation causes assertion failure
https://gitlab.mpcdf.mpg.de/TurTLE/turtle/-/issues/38 · 2021-05-21 · Cristian Lalescu · assignee: Cristian Lalescu

For a particular setup, a TurTLE-derived code crashes consistently.
Two interacting particles (ghost collision, rod-like particles; the actual code is in gitlab.mpcdf.mpg.de:/josea/sedimenting-rods.git), on 512 MPI processes, sometimes lead to TurTLE getting stuck (with NDEBUG), or to the following assertion failure (without NDEBUG):
```
many_NSVEcylinder_collisions-single-v3.12.1.post2+g2c95949:
/home/arguedas/ttf/titan/installs/py3/include/TurTLE/particles/p2p/p2p_distr_mpi.hpp:547:
void p2p_distr_mpi<partsize_t, real_number>::compute_distr(computer_class&, const partsize_t*, real_number*, std::unique_ptr<real_number []>*, int, partsize_t*)
[with computer_class = p2p_ghost_collisions<double, long long int>;
int size_particle_positions = 6;
int size_particle_rhs = 6;
partsize_t = long long int;
real_number = double]:
Assertion `willsendall[idxproc*nb_processes_involved + idxtest] == willrecvall[idxtest*nb_processes_involved + idxproc]' failed.
```
This depends on the interaction cutoff, as can be seen from the attached plot.
[turtle_error.pdf](/uploads/27bcacb9c620f70ae13eb65d2f565a6a/turtle_error.pdf)
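For reference, a minimal standalone sketch (my reconstruction of the invariant, not the actual `p2p_distr_mpi` code) of the consistency check the assertion enforces: the exchange plan announced by the senders must be the mirror image of the plan announced by the receivers.

```cpp
#include <mpi.h>
#include <cassert>
#include <vector>

// Minimal reconstruction (an assumption, not the TurTLE code) of the invariant
// behind the failing assertion: every rank announces how many particles it
// will send to / expects to receive from every other rank, the plans are
// Allgather'ed, and "i sends N to j" must equal "j expects N from i". If two
// ranks classify the same particle pair differently with respect to the
// interaction cutoff (e.g. floating-point rounding at slab boundaries), the
// two matrices stop being mirror images and the assertion fires, which would
// be consistent with the observed cutoff dependence.
int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // Toy pairing rule, evaluated identically on all ranks: rank i sends
    // (i + j) items to rank j and therefore expects (j + i) items from rank j.
    std::vector<int> will_send(size), will_recv(size);
    for (int j = 0; j < size; ++j) {
        will_send[j] = rank + j;
        will_recv[j] = j + rank;
    }

    std::vector<int> willsendall(size * size), willrecvall(size * size);
    MPI_Allgather(will_send.data(), size, MPI_INT,
                  willsendall.data(), size, MPI_INT, MPI_COMM_WORLD);
    MPI_Allgather(will_recv.data(), size, MPI_INT,
                  willrecvall.data(), size, MPI_INT, MPI_COMM_WORLD);

    // The check corresponding to the failing assertion.
    for (int i = 0; i < size; ++i)
        for (int j = 0; j < size; ++j)
            assert(willsendall[i * size + j] == willrecvall[j * size + i]);

    MPI_Finalize();
    return 0;
}
```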
## #37: dealiasing and pressure Hessian
https://gitlab.mpcdf.mpg.de/TurTLE/turtle/-/issues/37 · 2022-06-28 · Cristian Lalescu · assignee: Cristian Lalescu

TurTLE allows for two dealiasing methods now (2/3 and smooth dealiasing).
We need to also implement 1/2 dealiasing.
Opening issue for easier tracking of development.
See the document below, prepared by @mcarbone:
[test_dealiasing.pdf](/uploads/6edad116dde14f88de817d3756d308c1/test_dealiasing.pdf)
There are two points laid out in the above document:
1. We may test the new dealiasing method by simply computing spectra of dealiased fields and comparing the spectrum for 2/3 dealiasing with the spectrum for 1/2 dealiasing. There are analytic expressions for the wavenumbers where the spectra go to 0.
2. The pressure Hessian and the squared velocity gradients satisfy known equations that fail when using 2/3 dealiasing, but hold for 1/2.
Therefore I see the following steps for the work:
* Implement 1/2 dealiasing (i.e. a new template for `kspace`; see the sketch after this list).
* Write a new TurTLE test that generates spectra comparisons for different dealiasing methods, i.e. point 1 from above.
* Write a post-processing tool that computes the pressure Hessian and the velocity gradient, and verifies the equations detailed in point 2.
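The sketch referenced above: a minimal illustration, under my own assumptions (1D field, standard FFT mode ordering; this is not TurTLE's actual `kspace` code), of the truncation rule the new template would apply.

```cpp
#include <cmath>
#include <complex>
#include <cstdlib>
#include <vector>

// Truncation rule sketch, assuming the coefficients are already in Fourier
// space: modes with |k| above a chosen fraction of the Nyquist wavenumber N/2
// are zeroed. The 2/3 rule keeps |k| <= N/3; the 1/2 rule keeps |k| <= N/4,
// so the product of two such fields has support |k| <= N/2 and fits on the
// grid without aliasing, which is what the pressure Hessian checks need.
void dealias_1d(std::vector<std::complex<double>>& fhat, double fraction)
{
    const int N = static_cast<int>(fhat.size());
    const double kmax = fraction * 0.5 * N;
    for (int i = 0; i < N; ++i) {
        // signed wavenumber for FFT ordering 0, 1, ..., N/2, -N/2+1, ..., -1
        const int k = (i <= N / 2) ? i : i - N;
        if (std::abs(k) > kmax)
            fhat[i] = 0.0;
    }
}

int main()
{
    std::vector<std::complex<double>> fhat(64, {1.0, 0.0});
    dealias_1d(fhat, 2.0 / 3.0); // existing 2/3 rule
    dealias_1d(fhat, 1.0 / 2.0); // proposed 1/2 rule
    return 0;
}
```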
Afterwards we will see exactly what type of postprocessing tool we write for future data analysis.

## #36: generalize/refactor particle code
https://gitlab.mpcdf.mpg.de/TurTLE/turtle/-/issues/36 · 2021-05-21 · Cristian Lalescu · assignee: Cristian Lalescu

At the moment particles are somewhat hard-coded to use Adams-Bashforth for time-stepping. With some careful combinations of the base classes we could get around this if we really wanted to, but I think it's best if we reorganize the code a bit such that more generic temporal integration methods can be used.
In particular, we will need temporal interpolation, and I would like to be able to interpolate several fields at once (such that the weight polynomials are only computed once).
This will lead to slightly more elaborate refactoring than of the particle code alone, since the fields themselves at different iterations are required for temporal interpolation.
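To sketch the direction (hypothetical types, not existing TurTLE classes): if the integrator only sees the particle state and a right-hand-side callback, Adams-Bashforth stops being baked into the particle system and other schemes become drop-in replacements.

```cpp
#include <cstddef>
#include <functional>
#include <vector>

// Hypothetical interface sketch (these are not existing TurTLE classes): the
// integrator sees only the particle state and a right-hand-side callback, so
// the time-stepping scheme is no longer baked into the particle system.
struct particle_state {
    std::vector<double> x; // flattened particle data, e.g. 3 values per particle
};

using rhs_callback =
    std::function<void(const particle_state&, std::vector<double>&)>;

// One explicit Euler step; an Adams-Bashforth variant would keep its history
// of previous rhs evaluations inside its own integrator class, and temporal
// interpolation of the fields would be hidden behind the rhs callback.
void euler_step(particle_state& s, const rhs_callback& rhs, double dt)
{
    std::vector<double> k(s.x.size());
    rhs(s, k);
    for (std::size_t i = 0; i < s.x.size(); ++i)
        s.x[i] += dt * k[i];
}

int main()
{
    particle_state s{{0.0, 0.0, 0.0}};
    // toy rhs: uniform unit velocity along the first coordinate
    euler_step(s,
        [](const particle_state&, std::vector<double>& rhs) {
            rhs.assign(rhs.size(), 0.0);
            rhs[0] = 1.0;
        },
        0.01);
    return 0;
}
```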
I'm creating this issue to document the process, and so that we can coordinate test development.

## #35: random number generators in hybrid MPI/OpenMP
https://gitlab.mpcdf.mpg.de/TurTLE/turtle/-/issues/35 · 2021-05-21 · Cristian Lalescu · assignee: Cristian Lalescu

This is possibly a non-issue, but it should be documented somehow.
In `field.cpp` we implement the `make_gaussian_random_field` function, which fills a `field` object with random amplitudes and phases according to a prescribed spectrum.
The standard library `mt19937_64` random number generator is used: one individual generator for each OpenMP thread.
The seed for each thread is unique, and it depends on an "rseed" parameter as well as on the MPI process ID and the OpenMP thread ID.
In principle we could see repeated seed usage across different MPI/OpenMP configurations.
While this is not an obvious problem, it would be best to find out whether there are standard ways of dealing with random number generation in a massively parallel setting. Could we ensure that the random numbers are independent of the MPI/OpenMP configuration?
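One standard remedy, sketched below under my own assumptions (this is not what `field.cpp` currently does), is to derive each generator's seed from a decomposition-independent quantity, e.g. the global index of the data it fills, rather than from the MPI rank and OpenMP thread ID:

```cpp
#include <cstdint>
#include <random>

// Sketch of one standard remedy (an assumption, not what field.cpp currently
// does): derive every generator's seed from (rseed, global index of the data
// being filled) instead of (rseed, MPI rank, OpenMP thread ID). The seed then
// no longer depends on how the domain is split across processes and threads,
// so the generated field is independent of the MPI/OpenMP configuration.
std::mt19937_64 make_generator(std::uint64_t rseed, std::uint64_t global_index)
{
    // seed_seq mixes the inputs into a full generator state and avoids the
    // near-collisions that simple additive schemes can produce when the
    // process/thread counts change
    std::seed_seq seq{
        static_cast<std::uint32_t>(rseed),
        static_cast<std::uint32_t>(rseed >> 32),
        static_cast<std::uint32_t>(global_index),
        static_cast<std::uint32_t>(global_index >> 32)};
    return std::mt19937_64(seq);
}

int main()
{
    // hypothetical call site: one generator per z-slice, indexed globally
    auto gen = make_generator(42, /* global_slice_index = */ 7);
    return static_cast<int>(gen() % 2);
}
```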
## #34: Job Failed #1002102
https://gitlab.mpcdf.mpg.de/TurTLE/turtle/-/issues/34 · 2020-05-08 · Cristian Lalescu

Job [#1002102](https://gitlab.mpcdf.mpg.de/clalescu/turtle/-/jobs/1002102) failed for c4ed62917105a8e85140ba3d6ef9cc095eb720fa:
I'm not sure what the problem is.
From what I can read of the output, the `make install` command doesn't install the python package. So I put the relevant code directly in the gitlab-ci script, and now I see that there is no python3 available for the gitlab runner.
@mjr: do you know where I can find a list of available modules, so that I can load the relevant python3 module? Or are these simply draco/cobra modules?

## #33: new feature: particles with Stokes drag
https://gitlab.mpcdf.mpg.de/TurTLE/turtle/-/issues/33 · 2019-11-29 · Cristian Lalescu

Branch `feature/Stokes-drag` contains an initial attempt to implement particles with Stokes drag. I had to extend the `abstract_particles_system` and `particle_system` classes a bit, and I had to create a new `particles_inner_computer_2nd_order_Stokes` class, but I think I did it the right way.
@bbramas: when you have time, can you please have a look at my changes and confirm that they respect the philosophy behind the particle code?
@toba: if/when you have time, can you please design a sanity check for particles with Stokes drag, such that we can write a proper test for the new `NSVE_Stokes_particles` simulator?
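A possible starting point for such a sanity check (only a sketch with made-up parameter values, not an agreed-upon test): in a uniform fluid velocity u, Stokes drag dv/dt = (u - v)/tau has the analytic solution v(t) = u + (v0 - u) exp(-t/tau), against which the integrated particle velocity can be compared.

```cpp
#include <cassert>
#include <cmath>
#include <cstdio>

// Sketch of a sanity check (made-up parameter values): integrate Stokes drag
// dv/dt = (u - v)/tau in a uniform fluid velocity u with explicit Euler and
// compare against the analytic relaxation v(t) = u + (v0 - u)*exp(-t/tau).
// A test for NSVE_Stokes_particles could do the same with the full simulator.
int main()
{
    const double u = 1.0;   // fluid velocity
    const double v0 = 0.0;  // initial particle velocity
    const double tau = 0.1; // Stokes response time
    const double dt = 1e-4;
    double v = v0, t = 0.0;
    while (t < 5 * tau) { // integrate over a few relaxation times
        v += dt * (u - v) / tau;
        t += dt;
    }
    const double v_exact = u + (v0 - u) * std::exp(-t / tau);
    std::printf("numerical %.6f vs analytic %.6f\n", v, v_exact);
    assert(std::abs(v - v_exact) < 1e-3);
    return 0;
}
```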
## #32: new feature: collision/coalescence of particles
https://gitlab.mpcdf.mpg.de/TurTLE/turtle/-/issues/32 · 2020-01-28 · Cristian Lalescu · assignee: Tobias Baetge

This issue is to track progress on the collision/particle coalescence code.

## #31: handle custom cmake requirements
https://gitlab.mpcdf.mpg.de/TurTLE/turtle/-/issues/31 · 2019-10-11 · Cristian Lalescu

At line 233 of `TurTLE/_code.py` there is a small block of code meant to handle custom cmake configurations for libraries needed by custom executables.
In particular this is used by "strain_vort_alignment" from `turtle_addons` right now.
I assume having an "exec_name_extra_cmake.txt" file in the current directory is not the most reasonable thing to do.
We should see how this problem is handled elsewhere, and choose a reasonable solution for TurTLE.

## #30: HDF5/MPI compatibility
https://gitlab.mpcdf.mpg.de/TurTLE/turtle/-/issues/30 · 2022-08-24 · Cristian Lalescu

The latest version of OpenMPI seems to be incompatible with HDF5 unless HDF5 is compiled with a custom flag.
E-mail from @schni follows:
> While installing TurTLE on my private computer over the weekend I had the problem that newer versions of OpenMPI (>=3.0) are not compatible with an HDF5 library built with the --enable-parallel flag. Maybe we should include that in the README. The fixes are to install an older version of OpenMPI, or to build a newer version with the "--enable-mpi1-compatibility" flag set.
Quick browsing of the HDF5 website does not reveal an explicit dependency on a particular MPI standard.
@mjr, how flexible is the CI configuration here? Would it be feasible to test different versions of HDF5 and/or MPI? For instance, I only use HDF5 1.8.x as far as I know; I think TurTLE actually failed with 1.10.x when I last tried it (it was still bfps at the time).

## #28: fix symmetrization
https://gitlab.mpcdf.mpg.de/TurTLE/turtle/-/issues/28 · 2020-08-07 · Cristian Lalescu

Symmetrization takes the positive modes and reflects them onto the negative modes.
What should happen is that a proper arithmetic mean is computed, and then copied/reflected into the two locations adequately.
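A minimal sketch of the proposed fix, under simplifying assumptions (a plain 1D array of Fourier coefficients; TurTLE's actual field layout is more involved):

```cpp
#include <complex>
#include <cstddef>
#include <vector>

// Sketch of the proposed fix, assuming a plain 1D array of Fourier
// coefficients c[k] with c[N-k] the negative-wavenumber partner of c[k].
// Instead of overwriting c[N-k] with conj(c[k]), average the two stored
// values first, then write the mean back to both locations, so neither half
// of the data is silently discarded.
void symmetrize(std::vector<std::complex<double>>& c)
{
    const std::size_t N = c.size();
    if (N == 0) return;
    c[0] = std::real(c[0]); // the k = 0 mode must be real
    for (std::size_t k = 1; k <= N / 2; ++k) {
        const std::size_t kneg = N - k;
        if (kneg == k) { // Nyquist mode (even N) must also be real
            c[k] = std::real(c[k]);
            continue;
        }
        const std::complex<double> mean = 0.5 * (c[k] + std::conj(c[kneg]));
        c[k] = mean;
        c[kneg] = std::conj(mean);
    }
}

int main()
{
    std::vector<std::complex<double>> c{{1.0, 0.5}, {2.0, 1.0}, {3.0, -1.0}};
    symmetrize(c); // afterwards c[2] == conj(c[1]), via the averaged value
    return 0;
}
```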
## #27: fix stopping mechanism
https://gitlab.mpcdf.mpg.de/TurTLE/turtle/-/issues/27 · 2019-06-27 · Cristian Lalescu

The launcher should check whether a `stop_<simname>` file exists in the working directory, and it should throw an error if it does.

## #26: particle code hangs for certain initial conditions
https://gitlab.mpcdf.mpg.de/TurTLE/turtle/-/issues/26 · 2023-11-15 · Cristian Lalescu

Particle code sometimes hangs, with no clear reason.
This is very inconsistent behavior. I see it on my laptop with gcc 8.1 and mpich 3.3, but not on our local cluster. Plus, if I run the code several times, it will not always fail, hence my belief that it is somehow related to non-deterministic MPI stuff.
A possible fix is now implemented in branch `bugfix/particle_distribution`.
I put in a `DEBUG_MSG_WAIT` and noticed that the code suddenly stopped hanging. It never hung when I had this debug message. I believe some of the asynchronous transfers get their tags confused between time-steps, and the `MPI_Barrier()` from `DEBUG_MSG_WAIT` is a hard fix for that.
So for now I have added an `MPI_Barrier()` that fixes the problem, until we have time to figure out whether there's a cleaner fix.
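If the tag-confusion hypothesis is right, one cleaner fix (an assumption on my part, not an implemented solution) would be to fold the iteration number into the message tags, so that messages from different time-steps can never match each other:

```cpp
#include <mpi.h>

// Sketch of one cleaner fix, assuming the tag-confusion hypothesis is correct
// (this is not the implemented solution): if consecutive time-steps reuse the
// same tags, a fast rank's Isend from step n+1 can match a slow rank's
// receive still pending from step n. Folding the iteration number into the
// tag makes messages from different steps unmatchable, without a barrier.
int tag_for(int base_tag, int iteration, int tags_per_step = 16)
{
    // stay within the MPI guarantee that tags up to at least 32767 are valid
    return (iteration % (32768 / tags_per_step)) * tags_per_step + base_tag;
}

int main(int argc, char** argv)
{
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size >= 2 && rank < 2) {
        double out = 0.0, in = 0.0;
        for (int it = 0; it < 4; ++it) { // ping between ranks 0 and 1
            const int other = 1 - rank;
            const int tag = tag_for(0, it); // per-iteration tag
            MPI_Request req;
            MPI_Isend(&out, 1, MPI_DOUBLE, other, tag, MPI_COMM_WORLD, &req);
            MPI_Recv(&in, 1, MPI_DOUBLE, other, tag, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Wait(&req, MPI_STATUS_IGNORE);
        }
    }
    MPI_Finalize();
    return 0;
}
```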
## #25: compilation error on intel 18
https://gitlab.mpcdf.mpg.de/TurTLE/turtle/-/issues/25 · 2019-03-06 · Cristian Lalescu

In brief: the code does not compile with intel 18 or later.
Exact error:
```
bfps/cpp/particles/particles_field_computer.hpp(80): error: expression must have a constant value
constexpr int nb_components_in_field = nbcomp(field);
^
bfps/cpp/particles/particles_field_computer.hpp(80): note: the value of parameter "field" (declared at line 76) cannot be used as a constant
constexpr int nb_components_in_field = nbcomp(field);
^
detected during:
instantiation of "void particles_field_computer<partsize_t, real_number, interpolator_class, interp_neighbours>::apply_computation<field_class,size_particle_positions,size_particle_rhs>(const field_class &, const real_number *, real_number *, partsize_t) const [with partsize_t=long long, real_number=double, interpolator_class=particles_generic_interp<double, 1, 0>, interp_neighbours=1, field_class=field<float, FFTW, THREE>, size_particle_positions=3, size_particle_rhs=3]" at line 387
of "bfps/cpp/particles/particles_distr_mpi.hpp"
```
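The underlying language issue can be reproduced standalone; below is a sketch of the pattern and of a portable fix (assuming the value is only ever needed as a plain `const`, which is my reading of the snippet above):

```cpp
// Standalone sketch of the rejected pattern and a portable fix (assuming the
// value is only ever needed as a plain const). A constexpr variable must be
// initialized by a constant expression, and a function parameter such as
// `field` is a runtime value, so intel 18 is within its rights to reject the
// original line even where other compilers were lenient.
struct dummy_field { int ncomp; };

constexpr int nbcomp(const dummy_field& f) { return f.ncomp; }

template <typename field_class>
void apply_computation(const field_class& field)
{
    // constexpr int nb_components_in_field = nbcomp(field); // rejected
    const int nb_components_in_field = nbcomp(field);        // portable fix
    (void)nb_components_in_field;
}

int main()
{
    dummy_field f{3};
    apply_computation(f);
    return 0;
}
```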
First reported by LRZ support; reproduced on cobra with the intel compiler.
Configuration files to reproduce are attached.
Steps to reproduce:
1. clone repository, change directory
2. switch to develop branch
3. execute `python setup.py compile_library`
4. an extremely long error message will be emitted for `test_interpolation.cpp`, originating in `particles_field_computer.hpp`, as detailed above.
[host_information.py](/uploads/12e4013e28dfbf8a3dbbfefc993c1b77/host_information.py)
[bashrc](/uploads/3c06788edf12ebc9b4d0a856d6bf2aee/bashrc)
[machine_settings.py](/uploads/fbcda844f15d8fdf261f0d7e94d8d076/machine_settings.py)
## #24: possible memory leak
https://gitlab.mpcdf.mpg.de/TurTLE/turtle/-/issues/24 · 2019-03-01 · Cristian Lalescu

Memory usage on cobra increases over time.
In particular, the output of `sstat -j job_id -o maxrss` increases over time.
This may be related to #23.

## #23: confirm hdf5 buffering is sane for both field stats and particle sampling
https://gitlab.mpcdf.mpg.de/TurTLE/turtle/-/issues/23 · 2022-01-13 · Cristian Lalescu

For field stats I explicitly set the size of the HDF5 file buffer.
I don't see this done for the particle sampling, although it's not at all obvious that it would be needed anyway.
In any case, we should make sure we have some sane defaults for the case of 10 to 64 million particles and 128 or more MPI processes.
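For reference, a sketch of the HDF5 property-list knobs involved (illustrative values under my own assumptions, not proposed defaults):

```cpp
#include <hdf5.h>

// Sketch of the property-list knobs involved (illustrative values, not
// proposed defaults, and not TurTLE's current settings): the sieve buffer and
// the chunk cache are configured on the file access property list before the
// file is opened. The library defaults (64 KiB sieve buffer, 1 MiB chunk
// cache) are plausibly too small for tens of millions of particles written
// from 128 or more MPI processes.
hid_t open_with_buffers(const char* fname)
{
    hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
    H5Pset_sieve_buf_size(fapl, 4 * 1024 * 1024);         // 4 MiB sieve buffer
    H5Pset_cache(fapl, 0, 12421, 32 * 1024 * 1024, 0.75); // 32 MiB chunk cache
    hid_t file = H5Fopen(fname, H5F_ACC_RDWR, fapl);
    H5Pclose(fapl);
    return file;
}

int main()
{
    // usage sketch; assumes a file "example.h5" already exists
    hid_t file = open_with_buffers("example.h5");
    if (file >= 0)
        H5Fclose(file);
    return 0;
}
```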
## #22: portability
https://gitlab.mpcdf.mpg.de/TurTLE/turtle/-/issues/22 · 2020-02-20 · Cristian Lalescu

The code should work both with the intel compiler and with gcc, and it should work with different MPI libraries as well.
1. We need to run these tests.
2. Can we automate the tests against different compiler/MPI library combinations?