## Users guide for the *ELPA* library ##

This document provides the guide for using the *ELPA* library in user applications.

### Online and local documentation ###

Local documentation (via man pages) should be available (if *ELPA* has been installed with the documentation):

For example "man elpa2_print_kernels" should provide the documentation for the *ELPA* program which prints all
the available kernels.

[Online doxygen documentation](http://elpa.mpcdf.mpg.de/html/Documentation/ELPA-2018.05.001.rc1/html/index.html)
for each *ELPA* release is also available.


## API of the *ELPA* library ##

With release 2017.05.001 of the *ELPA* library the interface has been rewritten substantially, in order to have a more generic interface and to avoid future interface changes.

For compatibility reasons the interface defined in the previous release 2016.11.001 is also still available,
IF AND ONLY IF *ELPA* has been built with support of this legacy interface.

If you want to use the legacy interface, please see section "B) Using the legacy API of the *ELPA* library".

The legacy API provides all the functionality as it was defined in *ELPA* release 2016.11.001. Note, however,
that all future features of *ELPA* will only be accessible via the new API defined in release 2017.05.001 or later.

## A) Using the final API definition of the *ELPA* library ##

Using *ELPA* with the latest API is done in the following steps (a minimal Fortran sketch of the complete call sequence is shown after this list):

- include the elpa headers "elpa/elpa.h" (C case) or use the Fortran module "use elpa"

- define an instance of the elpa type

- call elpa_init

- call elpa_allocate to allocate an instance of *ELPA*;
  note that you can define (and configure individually) as many different instances
  of ELPA as you want, e.g. one for CPU-only computations and one for larger matrices on GPUs

- use the ELPA-type function "set" to set matrix and MPI parameters

- call the ELPA-type function "setup"

- set or get all possible ELPA tunable options with the ELPA-type functions get/set

  At the moment the following tunable options are available:

     - "solver" can either be ELPA_SOLVER_1STAGE or ELPA_SOLVER_2STAGE
     - "real_kernel" can be one of the available real kernels (a list of available kernels can be
        queried with the ELPA helper binary elpa2_print_kernels)
     - "complex_kernel" can be one of the available complex kernels (a list of available kernels can be
        queried with the ELPA helper binary elpa2_print_kernels)
     - "qr" can be either 0 or 1, switches QR decomposition off/on for ELPA_SOLVER_2STAGE;
       only available in the real case for a blocksize of at least 64
     - "gpu" can be either 0 or 1, switches GPU computations off or on, assuming that the installation
       of the ELPA library has been built with GPU support enabled
     - "timings" can be either 0 or 1, switches time measurements off or on
     - "debug" can be either 0 or 1, switches detailed debug messages off/on

- call the ELPA-type function solve or others

  At the moment the following ELPA compute functions are available:

    - "eigenvectors" solves the eigenvalue problem for single/double precision real/complex valued matrices and
                     returns the eigenvalues AND eigenvectors
    - "eigenvalues" solves the eigenvalue problem for single/double precision real/complex valued matrices and
                     returns the eigenvalues
    - "hermitian_multiply" computes C = A^T * B (real) or C = A^H * B (complex) for single/double precision
      real/complex matrices
    - "cholesky" does a Cholesky factorization of a single/double precision real/complex matrix
    - "invert_triangular" inverts a single/double precision real/complex triangular matrix
    - "solve_tridiagonal" solves the single/double precision eigenvalue problem for a real tridiagonal matrix

- if the ELPA object is not needed any more, call the ELPA-type function destroy

- call elpa_uninit at the end of the program
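
Putting these steps together, a minimal Fortran sketch of the complete call sequence could look as
follows. The matrix and process-grid variables (na, nev, na_rows, na_cols, nblk, my_prow, my_pcol)
and the arrays a, ev, z are assumed to be declared and distributed by the application, and MPI is
assumed to be initialized already; the API version number passed to elpa_init and the name of the
clean-up routine should be checked against the documentation of the installed release.

   use elpa
   class(elpa_t), pointer :: e
   integer                :: status

   ! initialize the ELPA library; the argument is the API version the application was written for
   if (elpa_init(20170403) /= ELPA_OK) then
     print *, "ELPA API version not supported"
     stop
   endif

   ! allocate an ELPA instance and describe the matrix and its MPI setup
   e => elpa_allocate()
   call e%set("na", na, status)                          ! global matrix size
   call e%set("nev", nev, status)                        ! number of eigenvectors wanted
   call e%set("local_nrows", na_rows, status)            ! local rows of the distributed matrix
   call e%set("local_ncols", na_cols, status)            ! local columns of the distributed matrix
   call e%set("nblk", nblk, status)                      ! blocksize of the block cyclic distribution
   call e%set("mpi_comm_parent", MPI_COMM_WORLD, status) ! global communicator
   call e%set("process_row", my_prow, status)            ! row coordinate in the process grid
   call e%set("process_col", my_pcol, status)            ! column coordinate in the process grid

   status = e%setup()

   ! optionally set tunable options, e.g. the solver
   call e%set("solver", ELPA_SOLVER_2STAGE, status)

   ! solve: eigenvalues are returned in ev, eigenvectors in z
   call e%eigenvectors(a, ev, z, status)

   ! clean up the instance and the library
   call elpa_deallocate(e)
   call elpa_uninit()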

## B) Using the legacy API of the *ELPA* library ##

This section describes the usage of the *ELPA* library with the legacy interface.

### General concept of the *ELPA* library ###

The *ELPA* library consists of two main parts:
- *ELPA 1stage* solver
- *ELPA 2stage* solver

Both variants of the *ELPA* solvers are available for real- or complex-valued matrices in single and double precision.

Thus *ELPA* provides the following user functions (see the man pages or the [online documentation](http://elpa.mpcdf.mpg.de/html/Documentation/ELPA-2018.05.001.rc1/html/index.html) for details):

- elpa_get_communicators                        : sets the row / column communicators for *ELPA*
- elpa_solve_evp_complex_1stage_{single|double} : solves a {single|double} precision complex eigenvalue problem with the *ELPA 1stage* solver
- elpa_solve_evp_real_1stage_{single|double}    : solves a {single|double} precision real eigenvalue problem with the *ELPA 1stage* solver
- elpa_solve_evp_complex_2stage_{single|double} : solves a {single|double} precision complex eigenvalue problem with the *ELPA 2stage* solver
- elpa_solve_evp_real_2stage_{single|double}    : solves a {single|double} precision real eigenvalue problem with the *ELPA 2stage* solver
- elpa_solve_evp_real_{single|double}           : driver for the {single|double} precision real *ELPA 1stage* or *ELPA 2stage* solver
- elpa_solve_evp_complex_{single|double}        : driver for the {single|double} precision complex *ELPA 1stage* or *ELPA 2stage* solver



Furthermore *ELPA* provides the utility binary "elpa2_print_kernels": it tells the user
which *ELPA 2stage* compute kernels have been installed and which default kernels are set.

If you want to solve an eigenvalue problem with *ELPA*, you have to decide whether you
want to use *ELPA 1stage* or *ELPA 2stage* solver. Normally, *ELPA 2stage* is the better
choice since it is faster, but there are matrix dimensions where *ELPA 1stage* is superior.

Independent of the choice of the solver, the concept of calling *ELPA* is always the same:

#### MPI version of *ELPA* ####

In this case, *ELPA* relies on a BLACS distributed matrix.
To solve an eigenvalue problem of this matrix with *ELPA*, one has

1. to include the *ELPA* header (C case) or module (Fortran)
2. to create row and column MPI communicators for ELPA (with "elpa_get_communicators")
3. to call the *ELPA driver* or directly call *ELPA 1stage* or *ELPA 2stage* for the matrix.

Here is a very simple MPI code snippet for using *ELPA 1stage*. For the definition of all variables
please have a look at the man pages and/or the online documentation (see above). A full version
of a simple example program can be found in ./test_project_1stage_legacy_api/src.


   ! All ELPA routines need MPI communicators for communicating within
   ! rows or columns of processes, these are set in elpa_get_communicators

   success = elpa_get_communicators(mpi_comm_world, my_prow, my_pcol, &
                                    mpi_comm_rows, mpi_comm_cols)

   if (myid==0) then
     print '(a)','| Past split communicator setup for rows and columns.'
   end if

   ! Determine the necessary size of the distributed matrices,
   ! we use the Scalapack tools routine NUMROC for that.

   na_rows = numroc(na, nblk, my_prow, 0, np_rows)
   na_cols = numroc(na, nblk, my_pcol, 0, np_cols)

   !-------------------------------------------------------------------------------
   ! Calculate eigenvalues/eigenvectors

   if (myid==0) then
     print '(a)','| Entering one-step ELPA solver ... '
     print *
   end if

   success = elpa_solve_evp_real_1stage_{single|double} (na, nev, a, na_rows, ev, z, na_rows, nblk, &
                                   matrixCols, mpi_comm_rows, mpi_comm_cols)

   if (myid==0) then
     print '(a)','| One-step ELPA solver complete.'
     print *
   end if


#### Shared-memory version of *ELPA* ####

If the *ELPA* library has been compiled with the configure option "--with-mpi=0",
no MPI will be used.

Still the **same** call sequence as in the MPI case can be used (see above).

#### Setting the row and column communicators ####

SYNOPSIS
   FORTRAN INTERFACE
       use elpa1

       success = elpa_get_communicators (mpi_comm_global, my_prow, my_pcol, mpi_comm_rows, mpi_comm_cols)

       integer, intent(in)   mpi_comm_global:  global communicator for the calculation
       integer, intent(in)   my_prow:          row coordinate of the calling process in the process grid
       integer, intent(in)   my_pcol:          column coordinate of the calling process in the process grid
       integer, intent(out)  mpi_comm_rows:    communicator for communication within rows of processes
       integer, intent(out)  mpi_comm_cols:    communicator for communication within columns of processes

       integer               success:          return value indicating success or failure of the underlying MPI_COMM_SPLIT function

   C INTERFACE
       #include "elpa_generated.h"

       success = elpa_get_communicators (int mpi_comm_global, int my_prow, int my_pcol, int *mpi_comm_rows, int *mpi_comm_cols);

       int mpi_comm_global:  global communicator for the calculation
       int my_prow:          row coordinate of the calling process in the process grid
       int my_pcol:          column coordinate of the calling process in the process grid
       int *mpi_comm_rows:   pointer to the communicator for communication within rows of processes
       int *mpi_comm_cols:   pointer to the communicator for communication within columns of processes

       int  success:         return value indicating success or failure of the underlying MPI_COMM_SPLIT function
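
For illustration, a minimal Fortran sketch of a typical call sequence is given below. The derivation
of my_prow and my_pcol from the MPI rank (a column-major process grid with np_rows x np_cols tasks)
is an assumption taken from the ELPA example programs; any consistent process grid, e.g. one
obtained from BLACS_Gridinfo, can be used instead.

   ! assumed setup: np_rows*np_cols MPI tasks arranged in a column-major process grid;
   ! my_prow/my_pcol are the coordinates of this task in that grid
   call mpi_comm_rank(mpi_comm_world, myid, mpierr)
   my_prow = mod(myid, np_rows)
   my_pcol = myid / np_rows

   ! split the global communicator into row and column communicators for ELPA
   success = elpa_get_communicators(mpi_comm_world, my_prow, my_pcol, &
                                    mpi_comm_rows, mpi_comm_cols)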


#### Using *ELPA 1stage* ####

After setting up the *ELPA* row and column communicators (by calling elpa_get_communicators),
only the real or complex valued solver has to be called:

SYNOPSIS
   FORTRAN INTERFACE
       use elpa1
       success = elpa_solve_evp_real_1stage_{single|double} (na, nev, a(lda,matrixCols), ev(nev), q(ldq, matrixCols), ldq, nblk, matrixCols, mpi_comm_rows,
       mpi_comm_cols)

       With the definitions of the input and output variables:

       integer, intent(in)    na:            global dimension of quadratic matrix a to solve
       integer, intent(in)    nev:           number of eigenvalues to be computed; the first nev eigenvalues are calculated
       real*{4|8},  intent(inout) a:         locally distributed part of the matrix a. The local dimensions are lda x matrixCols
       integer, intent(in)    lda:           leading dimension of locally distributed matrix a
       real*{4|8},  intent(inout) ev:        on output the first nev computed eigenvalues
       real*{4|8},  intent(inout) q:         on output the first nev computed eigenvectors
       integer, intent(in)    ldq:           leading dimension of matrix q which stores the eigenvectors
       integer, intent(in)    nblk:          blocksize of the block cyclic distribution, must be the same in both directions
       integer, intent(in)    matrixCols:    number of columns of locally distributed matrices a and q
       integer, intent(in)    mpi_comm_rows: communicator for communication in rows. Constructed with elpa_get_communicators(3)
       integer, intent(in)    mpi_comm_cols: communicator for communication in columns. Constructed with elpa_get_communicators(3)

       logical                success:       return value indicating success or failure

   C INTERFACE
       #include "elpa.h"

       success = elpa_solve_evp_real_1stage_{single|double} (int na, int nev, {float|double} *a, int lda, {float|double} *ev, {float|double} *q, int ldq, int nblk, int matrixCols, int
       mpi_comm_rows, int mpi_comm_cols);

       With the definitions of the input and output variables:

       int     na:            global dimension of quadratic matrix a to solve
       int     nev:           number of eigenvalues to be computed; the first nev eigenvalues are calculated
       {float|double} *a:     pointer to locally distributed part of the matrix a. The local dimensions are lda x matrixCols
       int     lda:           leading dimension of locally distributed matrix a
       {float|double} *ev:    pointer to memory containing on output the first nev computed eigenvalues
       {float|double} *q:     pointer to memory containing on output the first nev computed eigenvectors
       int     ldq:           leading dimension of matrix q which stores the eigenvectors
       int     nblk:          blocksize of the block cyclic distribution, must be the same in both directions
       int     matrixCols:    number of columns of locally distributed matrices a and q
       int     mpi_comm_rows: communicator for communication in rows. Constructed with elpa_get_communicators(3)
       int     mpi_comm_cols: communicator for communication in columns. Constructed with elpa_get_communicators(3)

       int     success:       return value indicating success (1) or failure (0)

DESCRIPTION
       Solve the real eigenvalue problem with the 1-stage solver. The ELPA communicators mpi_comm_rows and mpi_comm_cols are obtained with the
       elpa_get_communicators(3) function. The distributed quadratic matrix a has global dimensions na x na, and a local size lda x matrixCols.
       The solver will compute the first nev eigenvalues, which will be stored on exit in ev. The eigenvectors corresponding to the eigenvalues
       will be stored in q. All memory of the arguments must be allocated outside the call to the solver.

   FORTRAN INTERFACE
       use elpa1
       success = elpa_solve_evp_complex_1stage_{single|double} (na, nev, a(lda,matrixCols), ev(nev), q(ldq, matrixCols), ldq, nblk, matrixCols, mpi_comm_rows,
       mpi_comm_cols)

       With the definitions of the input and output variables:

       integer,     intent(in)    na:            global dimension of quadratic matrix a to solve
       integer,     intent(in)    nev:           number of eigenvalues to be computed; the first nev eigenvalues are calculated
       complex*{8|16},  intent(inout) a:         locally distributed part of the matrix a. The local dimensions are lda x matrixCols
       integer,     intent(in)    lda:           leading dimension of locally distributed matrix a
       real*{4|8},      intent(inout) ev:        on output the first nev computed eigenvalues
       complex*{8|16},  intent(inout) q:         on output the first nev computed eigenvectors
       integer,     intent(in)    ldq:           leading dimension of matrix q which stores the eigenvectors
       integer,     intent(in)    nblk:          blocksize of the block cyclic distribution, must be the same in both directions
       integer,     intent(in)    matrixCols:    number of columns of locally distributed matrices a and q
       integer,     intent(in)    mpi_comm_rows: communicator for communication in rows. Constructed with elpa_get_communicators(3)
       integer,     intent(in)    mpi_comm_cols: communicator for communication in columns. Constructed with elpa_get_communicators(3)

       logical                    success:       return value indicating success or failure

   C INTERFACE
       #include "elpa.h"
       #include <complex.h>

       success = elpa_solve_evp_complex_1stage_{single|double} (int na, int nev, {float|double} complex *a, int lda, {float|double} *ev, {float|double} complex *q, int ldq, int nblk, int
       matrixCols, int mpi_comm_rows, int mpi_comm_cols);

       With the definitions of the input and output variables:

       int             na:            global dimension of quadratic matrix a to solve
       int             nev:           number of eigenvalues to be computed; the first nev eigenvalues are calculated
       {float|double} complex *a:     pointer to locally distributed part of the matrix a. The local dimensions are lda x matrixCols
       int             lda:           leading dimension of locally distributed matrix a
       {float|double}         *ev:    pointer to memory containing on output the first nev computed eigenvalues
       {float|double} complex *q:     pointer to memory containing on output the first nev computed eigenvectors
       int             ldq:           leading dimension of matrix q which stores the eigenvectors
       int             nblk:          blocksize of the block cyclic distribution, must be the same in both directions
       int             matrixCols:    number of columns of locally distributed matrices a and q
       int             mpi_comm_rows: communicator for communication in rows. Constructed with elpa_get_communicators(3)
       int             mpi_comm_cols: communicator for communication in columns. Constructed with elpa_get_communicators(3)

       int             success:       return value indicating success (1) or failure (0)

DESCRIPTION
       Solve the complex eigenvalue problem with the 1-stage solver. The ELPA communicators mpi_comm_rows and mpi_comm_cols are obtained with the
       elpa_get_communicators(3) function. The distributed quadratic matrix a has global dimensions na x na, and a local size lda x matrixCols.
       The solver will compute the first nev eigenvalues, which will be stored on exit in ev. The eigenvectors corresponding to the eigenvalues
       will be stored in q. All memory of the arguments must be allocated outside the call to the solver.


The *ELPA 1stage* solver does not need or accept any other parameters than in the above
specification.

#### Using *ELPA 2stage* ####

The *ELPA 2stage* solver can be used in the same manner as the *ELPA 1stage* solver.
However, the 2stage solver can be used with different compute kernels, which offers
more possibilities for configuration.

It is recommended to first call the utility program

elpa2_print_kernels

which will list all the compute kernels that can be used with *ELPA 2stage*. It will
also give information on whether a kernel can be set via environment variables.

##### Using the default kernels #####

If no kernel is set either via an environment variable or the *ELPA 2stage API* then
the default kernels will be set.

##### Setting the *ELPA 2stage* compute kernels #####

##### Setting the *ELPA 2stage* compute kernels with environment variables #####

If the *ELPA* installation allows setting its compute kernels with environment variables,
setting the variables "REAL_ELPA_KERNEL" and "COMPLEX_ELPA_KERNEL" will set the compute
kernels. The environment variable setting will take precedence over all other settings!

The utility program "elpa2_print_kernels" can list which kernels are available and which
would be chosen. This reflects the setting of the default kernel as well as the setting
with the environment variables.

##### Setting the *ELPA 2stage* compute kernels with API calls #####

It is also possible to set the *ELPA 2stage* compute kernels via the API.

As an example, the API for the ELPA real double-precision 2stage solver is shown:

SYNOPSIS
   FORTRAN INTERFACE
       use elpa1
       use elpa2
       success = elpa_solve_evp_real_2stage_double (na, nev, a(lda,matrixCols), ev(nev), q(ldq, matrixCols), ldq, nblk, matrixCols, mpi_comm_rows,
       mpi_comm_cols, mpi_comm_all, THIS_REAL_ELPA_KERNEL, useQR, useGPU)

       With the definitions of the input and output variables:

       integer, intent(in)            na:            global dimension of quadratic matrix a to solve
       integer, intent(in)            nev:           number of eigenvalues to be computed; the first nev eigenvalues are calculated
       real*{4|8},  intent(inout)         a:         locally distributed part of the matrix a. The local dimensions are lda x matrixCols
       integer, intent(in)            lda:           leading dimension of locally distributed matrix a
       real*{4|8},  intent(inout)         ev:        on output the first nev computed eigenvalues
       real*{4|8},  intent(inout)         q:         on output the first nev computed eigenvectors
       integer, intent(in)            ldq:           leading dimension of matrix q which stores the eigenvectors
       integer, intent(in)            nblk:          blocksize of the block cyclic distribution, must be the same in both directions
       integer, intent(in)            matrixCols:    number of columns of locally distributed matrices a and q
       integer, intent(in)            mpi_comm_rows: communicator for communication in rows. Constructed with elpa_get_communicators(3)
       integer, intent(in)            mpi_comm_cols: communicator for communication in columns. Constructed with elpa_get_communicators(3)
       integer, intent(in)            mpi_comm_all:  communicator for all processes in the processor set involved in ELPA
       integer, intent(in), optional: THIS_REAL_ELPA_KERNEL: optional argument; choose the compute kernel for the 2-stage solver
       logical, intent(in), optional: useQR:         optional argument; switches to QR-decomposition if set to .true.
       logical, intent(in), optional: useGPU:        decide whether GPUs should be used or not

      logical                        success:       return value indicating success or failure

   C INTERFACE
       #include "elpa.h"

       success = elpa_solve_evp_real_2stage_double (int na, int nev,  double *a, int lda,  double *ev, double *q, int ldq, int nblk, int matrixCols, int
       mpi_comm_rows, int mpi_comm_cols, int mpi_comm_all, int THIS_ELPA_REAL_KERNEL, int useQR, int useGPU);

       With the definitions of the input and output variables:

       int     na:            global dimension of quadratic matrix a to solve
       int     nev:           number of eigenvalues to be computed; the first nev eigenvalues are calculated
       double *a:             pointer to locally distributed part of the matrix a. The local dimensions are lda x matrixCols
       int     lda:           leading dimension of locally distributed matrix a
       double *ev:            pointer to memory containing on output the first nev computed eigenvalues
       double *q:             pointer to memory containing on output the first nev computed eigenvectors
       int     ldq:           leading dimension of matrix q which stores the eigenvectors
       int     nblk:          blocksize of the block cyclic distribution, must be the same in both directions
       int     matrixCols:    number of columns of locally distributed matrices a and q
       int     mpi_comm_rows: communicator for communication in rows. Constructed with elpa_get_communicators(3)
       int     mpi_comm_cols: communicator for communication in columns. Constructed with elpa_get_communicators(3)
       int     mpi_comm_all:  communicator for all processes in the processor set involved in ELPA
       int     THIS_ELPA_REAL_KERNEL: choose the compute kernel for the 2-stage solver
       int     useQR:         if set to 1 switch to QR-decomposition
       int     useGPU:        decide whether the GPU version should be used or not

       int     success:       return value indicating success (1) or failure (0)


DESCRIPTION
       Solve the real eigenvalue problem with the 2-stage solver. The ELPA communicators mpi_comm_rows and mpi_comm_cols are obtained with the
       elpa_get_communicators(3) function. The distributed quadratic matrix a has global dimensions na x na, and a local size lda x matrixCols.
       The solver will compute the first nev eigenvalues, which will be stored on exit in ev. The eigenvectors corresponding to the eigenvalues
       will be stored in q. All memory of the arguments must be allocated outside the call to the solver.
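
For example, a call of the real double-precision 2stage solver might look as sketched below. The
argument list follows the C synopsis above (with lda passed explicitly, as in the 1stage example);
passing the optional arguments by keyword assumes that the dummy argument names of the installed
Fortran interface match the synopsis.

   ! default kernel, no QR decomposition, no GPU
   success = elpa_solve_evp_real_2stage_double(na, nev, a, lda, ev, q, ldq, nblk,  &
                                               matrixCols, mpi_comm_rows,          &
                                               mpi_comm_cols, mpi_comm_all)

   ! the same call with the QR decomposition of the reduction step enabled (real case only)
   success = elpa_solve_evp_real_2stage_double(na, nev, a, lda, ev, q, ldq, nblk,  &
                                               matrixCols, mpi_comm_rows,          &
                                               mpi_comm_cols, mpi_comm_all,        &
                                               useQR=.true.)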

##### Setting up *ELPA 1stage* or *ELPA 2stage* with the *ELPA driver interface* #####

Since release ELPA 2016.005.004, a driver routine allows choosing more easily which solver (1stage or 2stage) will be used.

As an example, the real double-precision case is explained:

 SYNOPSIS

 FORTRAN INTERFACE

  use elpa_driver

  success = elpa_solve_evp_real_double (na, nev, a(lda,matrixCols), ev(nev), q(ldq, matrixCols), ldq, nblk, matrixCols, mpi_comm_rows, mpi_comm_cols, mpi_comm_all, THIS_REAL_ELPA_KERNEL=THIS_REAL_ELPA_KERNEL, useQR, useGPU, method=method)

  Generalized interface to the ELPA 1stage and 2stage solver for real-valued problems

  With the definitions of the input and output variables:


  integer, intent(in)            na:                    global dimension of quadratic matrix a to solve

  integer, intent(in)            nev:                   number of eigenvalues to be computed; the first nev eigenvalues are calculated

  real*8,  intent(inout)         a:                     locally distributed part of the matrix a. The local dimensions are lda x matrixCols

  integer, intent(in)            lda:                   leading dimension of locally distributed matrix a

  real*8,  intent(inout)         ev:                    on output the first nev computed eigenvalues

  real*8,  intent(inout)         q:                     on output the first nev computed eigenvectors

  integer, intent(in)            ldq:                   leading dimension of matrix q which stores the eigenvectors

  integer, intent(in)            nblk:                  blocksize of the block cyclic distribution, must be the same in both directions

  integer, intent(in)            matrixCols:            number of columns of locally distributed matrices a and q

  integer, intent(in)            mpi_comm_rows:         communicator for communication in rows. Constructed with elpa_get_communicators

  integer, intent(in)            mpi_comm_cols:         communicator for communication in columns. Constructed with elpa_get_communicators

  integer, intent(in)            mpi_comm_all:          communicator for all processes in the processor set involved in ELPA

  integer, intent(in), optional: THIS_REAL_ELPA_KERNEL: optional argument, choose the compute kernel for 2-stage solver

  logical, intent(in), optional: useQR:                 optional argument; switches to QR-decomposition if set to .true.

  logical, intent(in), optional: useGPU:                decide whether the GPU version should be used or not

  character(*), optional         method:                use 1stage solver if "1stage", use 2stage solver if "2stage", (at the moment) use 2stage solver if "auto"

  logical                        success:               return value indicating success or failure


 C INTERFACE

 #include "elpa.h"

 success = elpa_solve_evp_real_double (int na, int nev, double *a, int lda, double *ev, double *q, int ldq, int nblk, int matrixCols, int mpi_comm_rows, int mpi_comm_cols, int mpi_comm_all, int THIS_ELPA_REAL_KERNEL, int useQR, int useGPU, char *method);


 With the definitions of the input and output variables:


 int     na:                    global dimension of quadratic matrix a to solve

 int     nev:                   number of eigenvalues to be computed; the first nev eigenvalues are calculated

 double *a:                     pointer to locally distributed part of the matrix a. The local dimensions are lda x matrixCols

 int     lda:                   leading dimension of locally distributed matrix a

 double *ev:                    pointer to memory containing on output the first nev computed eigenvalues

 double *q:                     pointer to memory containing on output the first nev computed eigenvectors

 int     ldq:                   leading dimension of matrix q which stores the eigenvectors

 int     nblk:                  blocksize of the block cyclic distribution, must be the same in both directions

 int     matrixCols:            number of columns of locally distributed matrices a and q

 int     mpi_comm_rows:         communicator for communication in rows. Constructed with elpa_get_communicators

 int     mpi_comm_cols:         communicator for communication in columns. Constructed with elpa_get_communicators

 int     mpi_comm_all:          communicator for all processes in the processor set involved in ELPA

 int     THIS_ELPA_REAL_KERNEL: choose the compute kernel for 2-stage solver

 int     useQR:                 if set to 1 switch to QR-decomposition

 int     useGPU:                decide whether the GPU version should be used or not

 char   *method:                use 1stage solver if "1stage", use 2stage solver if "2stage", (at the moment) use 2stage solver if "auto"

 int     success:               return value indicating success (1) or failure (0)

 DESCRIPTION
 Solve the real eigenvalue problem. The value of method decides whether the 1stage or 2stage solver is used. The ELPA communicators mpi_comm_rows and mpi_comm_cols are obtained with the elpa_get_communicators function. The distributed quadratic matrix a has global dimensions na x na, and a local size lda x matrixCols. The solver will compute the first nev eigenvalues, which will be stored on exit in ev. The eigenvectors corresponding to the eigenvalues will be stored in q. All memory of the arguments must be allocated outside the call to the solver.
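
 For example, a Fortran call of the driver that explicitly selects the 2stage solver might look as
 sketched below (argument names as in the synopsis above, with lda passed explicitly as in the C
 prototype; omitting method, or passing "auto", leaves the choice of solver to ELPA):

   success = elpa_solve_evp_real_double(na, nev, a, lda, ev, q, ldq, nblk,        &
                                        matrixCols, mpi_comm_rows, mpi_comm_cols, &
                                        mpi_comm_all, method="2stage")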

##### Setting up the GPU version of *ELPA* 1 and 2 stage #####

Since release ELPA 2016.011.001.pre, *ELPA* offers GPU support, IF *ELPA* has been built with the configure option "--enable-gpu-support".

At run-time the GPU version can be used by setting the environment variable "ELPA_USE_GPU" to "yes", or by calling the *ELPA* functions
(elpa_solve_evp_real_{double|single}, elpa_solve_evp_real_1stage_{double|single}, elpa_solve_evp_real_2stage_{double|single}) with the
argument "useGPU = .true." or "useGPU = 1" for the Fortran and C case, respectively. Please note that, similar to the choice of the
*ELPA* 2stage compute kernels, the environment variable takes precedence over the setting in the API call.

Further note that it is NOT allowed to define the usage of GPUs AND to EXPLICITLY set an ELPA 2stage compute kernel other than
"REAL_ELPA_KERNEL_GPU" or "COMPLEX_ELPA_KERNEL_GPU".