LRZ scaling tests
We need to set up initial conditions and parameters for the test runs. Below is a copy-paste of an e-mail from Nicolay J. Hammer:
> The LRZ Scaling Workshop 2017 is approaching. As time during the workshop will be precious and a considerable amount of resources (SuperMUC as well as support staff) has been reserved exclusively for this workshop, it is crucial that you can proceed with your work as efficiently as possible. In order to ensure this, we suggest that you prepare the following items:
> - a simulation which runs on 1 node, within 2-10 mins. (this allows you to run one-node profiling in reasonable time)
> - a small simulation which can be run from 16 to 128 nodes within 2-10 mins. (this allows you to run MPI profiling and change the code accordingly)
> - a medium simulation which can be run from 128 nodes to 1 island (512 nodes) and which runs in a short time (maximum 10-20 mins.)
> - a large simulation which, with our help during the workshop, scales your code from 1 island to 4 islands and beyond.
>
> We plan to have a short introduction round at the beginning of the workshop. Therefore, we ask you to prepare a small presentation (max. 15 mins.) which introduces your application and its background. The introduction will take place on Monday afternoon (see below).
Comment: we can run the code with and without particles. With particles, the normal way to run the code would be to also sample fields at the particle positions (as well as the particle positions themselves) reasonably often. Field sampling is discussed in issue #9 (closed).
Question: do we need to introduce the sampling functionality for the scaling workshop?
As far as I can tell, sampling is not fundamentally different from writing out the particle positions or the rhs values, plus the interpolation step.
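For reference, the interpolation step mentioned above amounts to evaluating the field at each particle position from the surrounding grid values. A minimal sketch, assuming plain trilinear interpolation on a periodic cubic grid (the function name and the interpolation order are assumptions; the actual code may use a higher-order scheme):

```python
import numpy as np

def sample_field(field, positions, dx):
    """Sample a scalar field on a periodic (N, N, N) grid at the given
    particle positions (shape (M, 3)) via trilinear interpolation.
    Hypothetical helper, not the project's actual API."""
    N = field.shape[0]
    g = positions / dx                  # fractional grid coordinates
    i0 = np.floor(g).astype(int)
    w = g - i0                          # interpolation weights in [0, 1)
    i0 %= N                             # periodic wrap of the lower corner
    i1 = (i0 + 1) % N                   # periodic wrap of the upper corner
    out = np.zeros(len(positions))
    # accumulate contributions from the 8 corners of the enclosing cell
    for sx, sy, sz in np.ndindex(2, 2, 2):
        ix = np.where(sx, i1[:, 0], i0[:, 0])
        iy = np.where(sy, i1[:, 1], i0[:, 1])
        iz = np.where(sz, i1[:, 2], i0[:, 2])
        wgt = (np.where(sx, w[:, 0], 1 - w[:, 0])
               * np.where(sy, w[:, 1], 1 - w[:, 1])
               * np.where(sz, w[:, 2], 1 - w[:, 2]))
        out += wgt * field[ix, iy, iz]
    return out
```

For a field that is linear in one coordinate, trilinear interpolation is exact away from the periodic wrap, which makes the sketch easy to sanity-check.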
However, @bbramas reports that the I/O is very expensive; I'm not sure what would happen if the HDF5 file were accessed many more times.
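If the number of file accesses is the problem, one common mitigation is to buffer samples in memory and flush them in large batches, so the HDF5 file is written far less often. A minimal sketch (the `SampleBuffer` class and the `flush_target` callback are hypothetical; `flush_target` stands in for, e.g., a write to an h5py dataset):

```python
import numpy as np

class SampleBuffer:
    """Accumulate per-iteration particle samples in memory and hand them to
    `flush_target` in batches, amortizing the per-access I/O cost.
    Hypothetical helper, not the project's actual API."""

    def __init__(self, n_particles, n_components, batch_size, flush_target):
        self.batch_size = batch_size
        self.buf = np.empty((batch_size, n_particles, n_components))
        self.fill = 0                   # number of buffered iterations
        self.flush_target = flush_target
        self.n_flushes = 0              # for bookkeeping / testing

    def push(self, samples):
        """Store one iteration's samples; flush when the buffer is full."""
        self.buf[self.fill] = samples
        self.fill += 1
        if self.fill == self.batch_size:
            self.flush()

    def flush(self):
        """Write out whatever is buffered (call once more at the end)."""
        if self.fill:
            self.flush_target(self.buf[:self.fill].copy())
            self.n_flushes += 1
            self.fill = 0
```

With a batch size of 4, ten iterations of samples trigger only three writes (two full batches plus a final partial flush) instead of ten, which is the kind of reduction that would matter if per-access overhead dominates.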