
Update documentation

parent 2ac4030c
@@ -97,7 +97,7 @@ suggest the [Intel Math Kernel Library Link Line Advisor]
By default, the configure script tries to configure and build all ELPA2 compute kernels which are available for
the architecture. The specific kernel can then be chosen at run-time via the API or an environment variable (see
-the **USERS GUIDE** for details).
+the **USERS_GUIDE** for details).
If this is not desired, it is possible to build *ELPA* with only one (not necessarily the same) kernel for the
real- and complex-valued cases, respectively. This can be done with the "--with-real-..-kernel-only" and
@@ -86,7 +86,7 @@ the possible configure options.
## Using *ELPA*
-Please have a look at the "**Users Guide**" file for documentation, or at the [online]
+Please have a look at the "**USERS_GUIDE**" file for documentation, or at the [online]
doxygen documentation, where you can find the definitions of the interfaces.
## Users guide for the ELPA library ##
This document provides a guide to using the *ELPA* library in user applications.
### Online and local documentation ###
Local documentation (via man pages) should be available if *ELPA* has been installed with the documentation.
For example, "man get_elpa_communicators" should provide the documentation for the *ELPA* function which sets up
the necessary communicators.
An [online doxygen documentation] for each *ELPA* release is also available.
### General concept for using ELPA ###
#### MPI version of ELPA ####
In this case, *ELPA* relies on a BLACS-distributed matrix.
To solve an eigenvalue problem for this matrix with *ELPA*, one has
1. to include the *ELPA* header (C case) or module (Fortran)
2. to create row and column MPI communicators for ELPA (with "get_elpa_communicators")
3. to call *ELPA 1stage* or *ELPA 2stage* for the matrix.
Here is a very simple MPI code snippet for using *ELPA*. For the definitions of all variables,
please have a look at the man pages and/or the online documentation (see above).
! All ELPA routines need MPI communicators for communicating within
! rows or columns of processes; these are set in get_elpa_communicators
success = get_elpa_communicators(mpi_comm_world, my_prow, my_pcol, &
                                 mpi_comm_rows, mpi_comm_cols)
if (myid==0) then
  print '(a)','| Past split communicator setup for rows and columns.'
end if
! Determine the necessary size of the distributed matrices;
! we use the ScaLAPACK tool routine NUMROC for that.
na_rows = numroc(na, nblk, my_prow, 0, np_rows)
na_cols = numroc(na, nblk, my_pcol, 0, np_cols)
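! (NUMROC(N, NB, IPROC, ISRCPROC, NPROCS) is the ScaLAPACK tool function that
!  returns how many of the N rows/columns of the global matrix, distributed in
!  blocks of size NB, are owned locally by the process with coordinate IPROC,
!  when the first block resides on process ISRCPROC and the dimension is
!  spread over NPROCS processes.)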
! Calculate eigenvalues/eigenvectors
if (myid==0) then
  print '(a)','| Entering one-step ELPA solver ... '
  print *
end if
success = solve_evp_real_1stage(na, nev, a, na_rows, ev, z, na_rows, nblk, &
                                matrixCols, mpi_comm_rows, mpi_comm_cols)
if (myid==0) then
  print '(a)','| One-step ELPA solver complete.'
  print *
end if
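The snippet above shows only the *ELPA*-specific calls. Purely for orientation, below is a minimal, self-contained sketch of how the surrounding setup (MPI initialization, choice of the 2D process grid, allocation of the distributed matrix) might look. It is an illustration, not part of this documentation: the module name `elpa1`, the assumed return types (an integer status from "get_elpa_communicators", a logical from "solve_evp_real_1stage"), and all variable names are assumptions, so please check the man pages or the online documentation for the exact interfaces of your *ELPA* version.

```fortran
! Hedged illustration only: module name, return types, and variable names are
! assumptions, not taken from this document. Check the ELPA man pages /
! doxygen documentation for the exact interfaces of your ELPA version.
program elpa_1stage_sketch
  use elpa1            ! assumed module providing get_elpa_communicators
                       ! and solve_evp_real_1stage
  use mpi
  implicit none

  integer, parameter :: na = 1000, nev = 100, nblk = 16
  integer :: myid, nprocs, mpierr, info
  integer :: np_rows, np_cols, my_prow, my_pcol
  integer :: mpi_comm_rows, mpi_comm_cols
  integer :: na_rows, na_cols
  logical :: success
  integer, external :: numroc          ! ScaLAPACK tool function
  real(kind=8), allocatable :: a(:,:), z(:,:), ev(:)

  call MPI_Init(mpierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, myid, mpierr)
  call MPI_Comm_size(MPI_COMM_WORLD, nprocs, mpierr)

  ! Choose an (almost) square 2D process grid with np_rows*np_cols == nprocs
  do np_cols = int(sqrt(real(nprocs))), 1, -1
    if (mod(nprocs, np_cols) == 0) exit
  end do
  np_rows = nprocs / np_cols

  ! Column-major process coordinates; any layout works as long as it is
  ! consistent with the block-cyclic distribution of the matrix
  my_prow = mod(myid, np_rows)
  my_pcol = myid / np_rows

  ! Row and column communicators for ELPA (assumed integer return code)
  info = get_elpa_communicators(MPI_COMM_WORLD, my_prow, my_pcol, &
                                mpi_comm_rows, mpi_comm_cols)

  ! Local dimensions of the block-cyclically distributed matrix
  na_rows = numroc(na, nblk, my_prow, 0, np_rows)
  na_cols = numroc(na, nblk, my_pcol, 0, np_cols)

  allocate(a(na_rows, na_cols), z(na_rows, na_cols), ev(na))
  a = 0.0d0   ! ... fill a with the locally owned part of the symmetric matrix ...

  ! Eigenvalues and the first nev eigenvectors (assumed logical return value);
  ! na_cols plays the role of matrixCols in the snippet above
  success = solve_evp_real_1stage(na, nev, a, na_rows, ev, z, na_rows, nblk, &
                                  na_cols, mpi_comm_rows, mpi_comm_cols)

  if (myid == 0 .and. .not. success) print '(a)', '| ELPA solver failed.'
  if (myid == 0) print '(a,f18.10)', '| First eigenvalue: ', ev(1)

  deallocate(a, z, ev)
  call MPI_Finalize(mpierr)
end program elpa_1stage_sketch
```

Again, this is only a sketch of the call sequence: a real application would fill the distributed matrix block-cyclically (for example with ScaLAPACK/BLACS helpers) and check the return values of each call.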
#### Shared-memory version of ELPA ####
If the *ELPA* library has been compiled with the configure option "--enable-shared-memory-only",
no MPI will be used.
Still, the **same** call sequence as in the MPI case can be used (see above).