TurTLE issues
https://gitlab.mpcdf.mpg.de/TurTLE/turtle/-/issues
2023-11-30T16:11:02Z

Issue #46: Adjust HDF5 version requirements (2023-11-30, Tobias Baetge)
https://gitlab.mpcdf.mpg.de/TurTLE/turtle/-/issues/46

The required version of HDF5 seems to be 1.12 or greater. Accordingly, the documentation needs to be updated.

Assignee: Cristian Lalescu

Issue #26: particle code hangs for certain initial conditions (2023-11-15, Cristian Lalescu)
https://gitlab.mpcdf.mpg.de/TurTLE/turtle/-/issues/26

Particle code sometimes hangs, with no clear reason.
This is very inconsistent behavior. I see it on my laptop with gcc 8.1 and mpich 3.3, but not on our local cluster. Plus, if I run the code several times, it will not always fail, hence my belief that it is somehow related to non-deterministic MPI stuff.
A possible fix is now implemented in branch `bugfix/particle_distribution`.
I put in a `DEBUG_MSG_WAIT` and noticed that suddenly the code stopped hanging. It never hung when I had this debug message. I believe some of the asynchronous transfers get their tags confused between time-steps, and the `MPI_Barrier()` from `DEBUG_MSG_WAIT` is a hard fix for that.
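To illustrate the suspected mechanism, here is a toy sketch (a plain-Python model, not TurTLE code; real MPI matching is ordered per source/tag/communicator, so the actual bug is subtler) of how tag reuse across time-steps can let a receive match a stale message:

```python
# Toy model (plain Python, not TurTLE or real MPI semantics) of the suspected
# bug: if consecutive time-steps reuse the same tag, a receive posted for
# step 1 can match a still-in-flight message from step 0. A barrier between
# steps drains the in-flight list; unique per-step tags also remove the
# ambiguity.

def match_messages(in_flight, expected_tag):
    """Pop the first in-flight message whose tag matches, MPI-style."""
    for i, (tag, payload) in enumerate(in_flight):
        if tag == expected_tag:
            return in_flight.pop(i)
    return None

# Both steps use tag 0: step 1's receive grabs step 0's stale message.
in_flight = [(0, "step0-data"), (0, "step1-data")]
assert match_messages(in_flight, expected_tag=0) == (0, "step0-data")

# Tag derived from the time-step: no ambiguity.
in_flight = [(0, "step0-data"), (1, "step1-data")]
assert match_messages(in_flight, expected_tag=1) == (1, "step1-data")
```

Deriving the tag from the iteration number would make the matching unambiguous without the cost of a global barrier.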
So currently I added an `MPI_Barrier()` that fixes the problem until we have time to figure out if there's a cleaner fix.

Issue #44: Adapting read_moments and read_histogram in DNS.py (2023-11-12, Lukas Bentkamp)
https://gitlab.mpcdf.mpg.de/TurTLE/turtle/-/issues/44

There are a few things that make the `read_moments` and `read_histogram` methods in `DNS.py` less useful than they could be. I can fix them, but I'm not sure which ones you implemented on purpose this way:
- Both methods take the mean over time. Since we are already reading the whole time series into memory, why can't we return the full array?
- If you insist on aggregating over time, then in `read_moments`, we should at least take the minimum of the minima and the maximum of the maxima instead of their mean.
- Can we save the result into `self.statistics`?

Assignee: Lukas Bentkamp

Issue #43: executables never have "USE_TIMINGOUTPUT" (2023-10-06, Cristian Lalescu)
https://gitlab.mpcdf.mpg.de/TurTLE/turtle/-/issues/43

This comes from updates/cleanups of the CMake setup.
Problem: even though TurTLE is built/installed with the timing option turned on, executables never know about it.
This is because the `USE_TIMING` setting never gets defined in the `_code.py`-generated temporary CMake setup that is used for the executables.
Current workaround: always define `USE_TIMINGOUTPUT` for the executable.
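One direction for a cleaner fix could be sketched as follows (function and target names here are hypothetical, not the actual `_code.py` API): have the generator emit the compile definition only when the build option is actually on, instead of unconditionally.

```python
# Hypothetical sketch (names are illustrative, not the actual _code.py API)
# of propagating the timing option into the generated CMake setup, rather
# than always defining USE_TIMINGOUTPUT for the executable.

def generate_cmake_snippet(target, timing_enabled):
    """Emit a target_compile_definitions line only when timing is on."""
    lines = [f"add_executable({target} {target}.cpp)"]
    if timing_enabled:
        lines.append(
            f"target_compile_definitions({target} PRIVATE USE_TIMINGOUTPUT)")
    return "\n".join(lines)

print(generate_cmake_snippet("my_dns", timing_enabled=True))
```

The generator would read the flag from the installed TurTLE configuration, so executables inherit whatever the library was built with.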
We need a reasonable solution.

Assignee: Cristian Lalescu

Issue #30: HDF5/MPI compatibility (2022-08-24, Cristian Lalescu)
https://gitlab.mpcdf.mpg.de/TurTLE/turtle/-/issues/30

The latest version of OpenMPI seems to be incompatible with HDF5 unless HDF5 is compiled with a custom flag.
E-mail from @schni follows:
> While installing TurTLE on my private computer over the weekend I had the problem that newer versions of OpenMPI (>=3.0) are not compatible with the HDF5 library built with the `--enable-parallel` flag. Maybe we should include that in the README. The fixes are to install an older version of OpenMPI, or to build a newer version with the `--enable-mpi1-compatibility` flag set.
Quick browsing of HDF5 website does not yield an explicit dependency on a particular MPI standard.
@mjr, how flexible is the CI configuration here? Would it be feasible to test different versions of HDF5 and/or MPI? For instance, I only use HDF5 1.8.x as far as I know; I think TurTLE actually failed with 1.10.x when I last tried it (it was still bfps at the time).

Issue #37: dealiasing and pressure Hessian (2022-06-28, Cristian Lalescu)
https://gitlab.mpcdf.mpg.de/TurTLE/turtle/-/issues/37

TurTLE allows for two dealiasing methods now (2/3 and smooth dealiasing).
We need to also implement 1/2 dealiasing.
Opening issue for easier tracking of development.
See the document below, prepared by @mcarbone:
[test_dealiasing.pdf](/uploads/6edad116dde14f88de817d3756d308c1/test_dealiasing.pdf)
There are two points laid out in the above document:
1. We may test the new dealiasing method by simply computing spectra of dealiased fields and comparing spectrum for 2/3 with spectrum for 1/2. There are analytic expressions for wavenumbers where spectra go to 0.
2. Pressure Hessian and squared velocity gradients satisfy known equations that fail when using 2/3 dealiasing, but work for 1/2.
Therefore I see the following steps for the work:
* Implement 1/2 dealiasing (i.e. new template for `kspace`).
* Write new TurTLE test that generates spectra comparisons for different dealiasing methods, i.e. point 1 from above.
* Write post-processing tool that computes the pressure Hessian and the velocity gradient, and verifies the equations detailed in 2.
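Point 1 above can be sketched numerically (a 1D toy with numpy; the cutoff conventions here are illustrative and may differ from TurTLE's `kspace` conventions):

```python
import numpy as np

# Sketch of point 1: apply a sharp spectral mask to white noise and check
# where the dealiased spectrum drops to zero. For a 1D field of size N,
# the 2/3 rule keeps |k| <= N/3 and the 1/2 rule keeps |k| <= N/4
# (illustrative convention).

N = 64
k = np.fft.fftfreq(N, d=1.0 / N)  # integer wavenumbers
field_hat = np.fft.fft(np.random.default_rng(0).standard_normal(N))

def dealias(fh, kmax):
    out = fh.copy()
    out[np.abs(k) > kmax] = 0.0
    return out

spec_23 = np.abs(dealias(field_hat, N / 3.0)) ** 2
spec_12 = np.abs(dealias(field_hat, N / 4.0)) ** 2

# Beyond each cutoff the spectrum is exactly zero, which is the analytic
# signature the test would look for.
assert np.all(spec_23[np.abs(k) > N / 3.0] == 0.0)
assert np.all(spec_12[np.abs(k) > N / 4.0] == 0.0)
```

The actual TurTLE test would do the same comparison on 3D shell-averaged spectra.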
Afterwards we can see exactly what type of postprocessing tool to write for future data analysis.

Assignee: Cristian Lalescu

Issue #15: Features of potential interest (2022-01-13, Dimitar Vlaykov)
https://gitlab.mpcdf.mpg.de/TurTLE/turtle/-/issues/15

#### Suggestions for features and functionalities which might be relevant for additional applications
* Field-like class of integral type -- to be used e.g. for masking, selecting sub-domains of a field object, etc.

Issue #23: confirm HDF5 buffering is sane for both field stats and particle sampling (2022-01-13, Cristian Lalescu)
https://gitlab.mpcdf.mpg.de/TurTLE/turtle/-/issues/23

For field stats I explicitly set the size of the HDF5 file buffer.
I don't see this done for the particle sampling, although it's not at all obvious that it would be needed anyway.
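For orientation, a back-of-the-envelope sizing sketch (the record layout is an assumption for illustration, not TurTLE's actual on-disk format):

```python
# Estimate how large one particle-sampling record is, to judge what a sane
# HDF5 buffer size would be. Assumes a 3-component double-precision quantity
# per particle (illustrative; the real datasets may differ).

def sample_record_bytes(n_particles, n_components=3, itemsize=8):
    """Bytes for one sampled quantity (e.g. position) at one iteration."""
    return n_particles * n_components * itemsize

for n in (10_000_000, 64_000_000):
    mb = sample_record_bytes(n) / 2**20
    print(f"{n:>10d} particles -> {mb:8.1f} MiB per 3-component sample")
```

At 64 million particles a single sample is already well over a gigabyte, so whatever default we pick needs to scale with the particle count.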
In any case, we should make sure we have some sane defaults for the case of 10 to 64 million particles and 128 or more MPI processes.

Issue #39: Descriptions for command line parameters (2021-12-16, Lukas Bentkamp)
https://gitlab.mpcdf.mpg.de/TurTLE/turtle/-/issues/39

So far, when calling `turtle DNS NSE --help` (and most other commands), descriptions for the command line parameters are very sparse. For first-time users, I think this is a very important place to have full documentation. Apart from explaining what parameters do, it should point out which parameters default to what value (for example `--nx` defaults to `-n`, which defaults to some value, right?).

Issue #12: Vectorization of the interpolation/particle kernel (2021-06-23, Berenger Bramas)
https://gitlab.mpcdf.mpg.de/TurTLE/turtle/-/issues/12

Look at how to vectorize, and what the speedup is.

Issue #35: random number generators in hybrid MPI/OpenMP (2021-05-21, Cristian Lalescu)
https://gitlab.mpcdf.mpg.de/TurTLE/turtle/-/issues/35

This is possibly a non-issue, but it should be documented somehow.
In `field.cpp` we implement the `make_gaussian_random_field` function, which fills a `field` object with random amplitudes and phases according to a prescribed spectrum.
The STL `mt19937_64` random number generator is used: one individual generator for each OpenMP thread.
The seed for each thread is unique, and it is dependent on an "rseed" as well as MPI process ID and OpenMP thread ID.
In principle we could see repeated seed usage if we use different MPI/OpenMP configurations.
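The concern can be made concrete with a small sketch (the seeding formula below is an assumed scheme, for illustration only, not the one in `field.cpp`):

```python
# Illustration of the repeated-seed concern: with a per-worker seed like
# rseed + omp_threads * rank + thread (an assumed scheme, for illustration
# only), two different MPI/OpenMP configurations generate the same set of
# seeds, but assign them to workers that own different parts of the field.

def thread_seed(rseed, rank, thread, omp_threads):
    return rseed + omp_threads * rank + thread

# 2 MPI ranks x 4 OpenMP threads vs 4 MPI ranks x 2 OpenMP threads:
seeds_a = {thread_seed(0, r, t, omp_threads=4) for r in range(2) for t in range(4)}
seeds_b = {thread_seed(0, r, t, omp_threads=2) for r in range(4) for t in range(2)}

# Seeds are unique within each run, yet the two runs share the same seeds,
# assigned to differently-placed workers: the generated field differs.
assert seeds_a == seeds_b == set(range(8))
```

One standard remedy is to seed by a configuration-independent quantity, e.g. the global index of the field slab each worker fills, rather than by (rank, thread).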
While this is not an obvious problem, it would be best to find out if there are standard ways of dealing with random number generation in a massively parallel setting. Could we ensure that random numbers are independent of the MPI/OpenMP configuration?

Assignee: Cristian Lalescu

Issue #36: generalize/refactor particle code (2021-05-21, Cristian Lalescu)
https://gitlab.mpcdf.mpg.de/TurTLE/turtle/-/issues/36

At the moment particles are somewhat hard-coded to use Adams-Bashforth for time-stepping. With some careful combinations of the base classes we could get around this if we really wanted to, but I think it's best if we reorganize the code a bit such that more generic temporal integration methods can be used.
In particular, we will need temporal interpolation, and I would like to be able to interpolate several fields at once (such that the weight polynomials are only computed once).
This will lead to slightly more elaborate refactoring than just of the particle code, since the fields themselves at different iterations are required for temporal interpolation.
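The "compute the weights once, apply them to several fields" idea can be sketched in 1D (illustrative Python with linear interpolation, standing in for TurTLE's spline kernels):

```python
import numpy as np

def interpolation_weights(x):
    """Base index and weights for linear interpolation on a unit-spaced grid."""
    i = int(np.floor(x))
    frac = x - i
    return i, np.array([1.0 - frac, frac])

def interpolate_many(x, grid_fields):
    # The weight polynomials are computed only once per particle position...
    i, w = interpolation_weights(x)
    # ...and then reused for every sampled field.
    return [float(w @ f[i:i + 2]) for f in grid_fields]

u = np.arange(8, dtype=float)        # field 1: f(x) = x
v = 2.0 * np.arange(8, dtype=float)  # field 2: f(x) = 2x
assert interpolate_many(2.5, [u, v]) == [2.5, 5.0]
```

For temporal interpolation the same pattern applies, with the fields at the bracketing iterations playing the role of the extra fields.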
I'm creating this issue to document the process, and such that we can coordinate test development.

Assignee: Cristian Lalescu

Issue #38: p2p exchange operation causes assertion failure (2021-05-21, Cristian Lalescu)
https://gitlab.mpcdf.mpg.de/TurTLE/turtle/-/issues/38

For a particular setup, a TurTLE-derived code crashes consistently.
Two interacting particles (ghost collision, rod-like particles; the actual code is in gitlab.mpcdf.mpg.de:/josea/sedimenting-rods.git), on 512 MPI processes, sometimes lead to TurTLE getting stuck (with NDEBUG), or to the following assertion failure (without NDEBUG):
```
many_NSVEcylinder_collisions-single-v3.12.1.post2+g2c95949:
/home/arguedas/ttf/titan/installs/py3/include/TurTLE/particles/p2p/p2p_distr_mpi.hpp:547:
void p2p_distr_mpi<partsize_t, real_number>::compute_distr(computer_class&, const partsize_t*, real_number*, std::unique_ptr<real_number []>*, int, partsize_t*)
[with computer_class = p2p_ghost_collisions<double, long long int>;
int size_particle_positions = 6;
int size_particle_rhs = 6;
partsize_t = long long int;
real_number = double]:
Assertion `willsendall[idxproc*nb_processes_involved + idxtest] == willrecvall[idxtest*nb_processes_involved + idxproc]' failed.
```
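The failed assertion expresses a pairwise consistency invariant: what process p plans to send to process q must match what q expects to receive from p. A minimal sketch of that check (illustrative matrix layout, not the actual `p2p_distr_mpi` data structures):

```python
import numpy as np

def exchange_is_consistent(willsend, willrecv):
    # willsend[p, q]: number of particles p plans to send to q.
    # willrecv[q, p]: number of particles q expects to receive from p.
    # The exchange is well-formed iff the two matrices are transposes.
    return np.array_equal(willsend, willrecv.T)

willsend = np.array([[0, 2],
                     [1, 0]])
assert exchange_is_consistent(willsend, willsend.T)   # matching plans
assert not exchange_is_consistent(willsend, willsend) # mismatched plans
```

A cutoff-dependent failure of this invariant suggests the two sides compute slightly different interaction neighbourhoods near process boundaries.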
This depends on the interaction cutoff, as can be seen from the attached plot.
[turtle_error.pdf](/uploads/27bcacb9c620f70ae13eb65d2f565a6a/turtle_error.pdf)

Assignee: Cristian Lalescu

Issue #14: handling of big initial condition files (2020-08-07, Cristian Lalescu)
https://gitlab.mpcdf.mpg.de/TurTLE/turtle/-/issues/14

Once the Python wrapper needs to generate initial conditions (either fields or particles) bigger than a few gigabytes, we're in trouble.
In particular, we need to make use of parallel file systems (which the Python wrapper cannot currently do).
We need to decide on how to handle this in the future.
Initial suggestion: write a distinct C++ code that can do various complicated initial condition tasks, and run it as a separate job beforehand. If the DNS is big enough to require this, then the additional effort of keeping track of whether the initial condition job has finished is worth it.

Issue #28: fix symmetrization (2020-08-07, Cristian Lalescu)
https://gitlab.mpcdf.mpg.de/TurTLE/turtle/-/issues/28

Symmetrization takes positive modes and reflects them for the negative modes.
What should happen is that a proper arithmetic mean is computed, and then copied/reflected in the two locations adequately.

Issue #34: Job Failed #1002102 (2020-05-08, Cristian Lalescu)
https://gitlab.mpcdf.mpg.de/TurTLE/turtle/-/issues/34

Job [#1002102](https://gitlab.mpcdf.mpg.de/clalescu/turtle/-/jobs/1002102) failed for c4ed62917105a8e85140ba3d6ef9cc095eb720fa:
I'm not sure what the problem is.
From what I can read of the output, the `make install` command doesn't install the python package. So I put the relevant code directly in the gitlab-ci script, and now I see that there is no python3 available for the gitlab runner.
@mjr: do you know where I can find a list of available modules, so that I can load the relevant python3 module? Or are these simply draco/cobra modules?

Issue #21: split particle file if it's bigger than 500GB (2020-03-16, Cristian Lalescu)
https://gitlab.mpcdf.mpg.de/TurTLE/turtle/-/issues/21

It's important to keep files between 1 GB and 500 GB for tape backup/restore purposes.
At the moment we can use checkpoint files where the number of checkpoints per file is decided beforehand. This sort of works, since we know how big each checkpoint is, and we can compute exactly how big the checkpoint file will be.
For the particle sample file this isn't implemented, and I would like a way to do it such that we can then read the data transparently. One way to do it would be to create a "simname_particles.h5" file which only contains links to external files, and the external files are written with checkpoint-like approach.
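A sketch of how the splitting decision could be automated (sizes and function names are illustrative): assign each sample to a file index so that no file exceeds the tape-friendly limit; the master "simname_particles.h5" file would then point at the pieces via HDF5 external links, as suggested above.

```python
# Illustrative splitting rule: start a new external file once adding another
# sample would push the current file past the tape-friendly limit.

LIMIT = 500 * 2**30  # 500 GiB

def assign_files(sample_bytes, n_samples, limit=LIMIT):
    """Map each sample index to a file index, never exceeding the limit."""
    files, current_file, current_size = [], 0, 0
    for _ in range(n_samples):
        if current_size + sample_bytes > limit and current_size > 0:
            current_file += 1
            current_size = 0
        files.append(current_file)
        current_size += sample_bytes
    return files

# Example: at 1.5 GiB per sample, 333 samples fit under 500 GiB,
# so sample 333 opens a second file.
layout = assign_files(sample_bytes=3 * 2**29, n_samples=400)
assert layout[0] == 0 and layout[333] == 1
```

Since the sample size is known in advance, this mapping is deterministic, so old and new readers can reconstruct which file holds which iteration.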
1. Can the decision of "let's split the particle file now" be automated in a clean way? I.e. the data should be easily accessible afterwards.
2. How general does the splitting mechanism need to be?
3. Design test.
4. Implement file splitting mechanism.
5. Ensure compatibility with old data sets.

Issue #22: portability (2020-02-20, Cristian Lalescu)
https://gitlab.mpcdf.mpg.de/TurTLE/turtle/-/issues/22

The code should work both with the Intel compiler and gcc, and it should work with different MPI libraries as well.
1. We need to run these tests.
2. Can we automate the tests against different compiler/MPI library combinations?

Issue #32: new feature: collision/coalescence of particles (2020-01-28, Cristian Lalescu)
https://gitlab.mpcdf.mpg.de/TurTLE/turtle/-/issues/32

This issue is to track progress on the collision/particle coalescence code.

Assignee: Tobias Baetge

Issue #33: new feature: particles with Stokes drag (2019-11-29, Cristian Lalescu)
https://gitlab.mpcdf.mpg.de/TurTLE/turtle/-/issues/33

Branch `feature/Stokes-drag` contains an initial attempt to implement particles with Stokes drag. I had to extend the `abstract_particles_system` and `particle_system` classes a bit, and I had to create a new `particles_inner_computer_2nd_order_Stokes` class, but I think I did it the right way.
@bbramas: when you have time, can you please have a look at my changes and confirm that they respect the philosophy behind the particle code?
@toba: if/when you have time, can you please design a sanity check for particles with Stokes drag, such that we can write a proper test for the new `NSVE_Stokes_particles` simulator?