Old, deprecated interface, which will be deleted at some point. Please use \fBelpa_get_communicators\fP(3)!
All ELPA routines need MPI communicators for communicating within rows or columns of processes. These communicators are created from the \fBmpi_comm_global\fP communicator. It is assumed that the matrix used in ELPA is distributed over a process grid in which the calling process has row coordinate \fBmy_prow\fP and column coordinate \fBmy_pcol\fP. This function has to be invoked by all involved processes before any other call to an ELPA routine.
.RI "integer, intent(in) \fBmpi_comm_global\fP: global communicator for the calculation"
.br
.RI "integer, intent(in) \fBmy_prow\fP: row coordinate of the calling process in the process grid"
.br
.RI "integer, intent(in) \fBmy_pcol\fP: column coordinate of the calling process in the process grid"
.br
.RI "integer, intent(out) \fBmpi_comm_row\fP: communicator for communication within rows of processes"
.br
.RI "integer, intent(out) \fBmpi_comm_row\fP: communicator for communication within columns of processes"
.br
.RI "integer \fBsuccess\fP: return value indicating success or failure of the underlying MPI_COMM_SPLIT function"
.SS C INTERFACE
#include "elpa_generated.h"
.br
.RI "success = \fBget_elpa_communicators\fP (int mpi_comm_world, int my_prow, my_pcol, int *mpi_comm_rows, int *Pmpi_comm_cols);"
.br
.br
.RI "int \fBmpi_comm_global\fP: global communicator for the calculation"
.br
.RI "int \fBmy_prow\fP: row coordinate of the calling process in the process grid"
.br
.RI "int \fBmy_pcol\fP: column coordinate of the calling process in the process grid"
.br
.RI "int *\fBmpi_comm_row\fP: pointer to the communicator for communication within rows of processes"
.br
.RI "int *\fBmpi_comm_row\fP: pointer to the communicator for communication within columns of processes"
.br
.RI "int \fBsuccess\fP: return value indicating success or failure of the underlying MPI_COMM_SPLIT function"
.SH DESCRIPTION
Old, deprecated interface, which will be deleted at some point. Please use \fBelpa_get_communicators\fP(3)!
All ELPA routines need MPI communicators for communicating within rows or columns of processes. These communicators are created from the \fBmpi_comm_global\fP communicator. It is assumed that the matrix used in ELPA is distributed over a process grid in which the calling process has row coordinate \fBmy_prow\fP and column coordinate \fBmy_pcol\fP. This function has to be invoked by all involved processes before any other call to an ELPA routine.
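.PP
The following is a minimal, hedged Fortran sketch of the intended usage. The grid factorization into \fBnp_rows\fP x \fBnp_cols\fP and the column-major placement of ranks are illustrative choices, not requirements of ELPA:
.nf

program elpa_comm_setup
   use elpa1
   implicit none
   include 'mpif.h'
   integer :: myid, nprocs, mpierr, success
   integer :: np_rows, np_cols, my_prow, my_pcol
   integer :: mpi_comm_rows, mpi_comm_cols

   call mpi_init(mpierr)
   call mpi_comm_rank(mpi_comm_world, myid, mpierr)
   call mpi_comm_size(mpi_comm_world, nprocs, mpierr)

   ! choose the most nearly square grid with np_rows*np_cols == nprocs
   do np_cols = int(sqrt(real(nprocs))), 1, -1
      if (mod(nprocs, np_cols) == 0) exit
   end do
   np_rows = nprocs / np_cols

   ! column-major placement of ranks on the process grid
   my_prow = mod(myid, np_rows)
   my_pcol = myid / np_rows

   ! must be called by all processes before any other ELPA routine
   success = elpa_get_communicators(mpi_comm_world, my_prow, my_pcol, &
                                    mpi_comm_rows, mpi_comm_cols)
   if (success /= mpi_success) print *, 'communicator setup failed'

   call mpi_finalize(mpierr)
end program
.fi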
.RI "With the definitions of the input and output variables:"
.br
.RI "integer, intent(in) \fBna\fP: global dimension of quadratic matrix \fBa\fP to solve"
.br
.RI "integer, intent(in) \fBnev\fP: number of eigenvalues to be computed; the first \fBnev\fP eigenvalules are calculated"
.br
.RI "complex*16, intent(inout) \fBa\fP: locally distributed part of the matrix \fBa\fP. The local dimensions are \fBlda\fP x \fBmatrixCols\fP"
.br
.RI "integer, intent(in) \fBlda\fP: leading dimension of locally distributed matrix \fBa\fP"
.br
.RI "real*8, intent(inout) \fBev\fP: on output the first \fBnev\fP computed eigenvalues"
.br
.RI "complex*16, intent(inout) \fBq\fP: on output the first \fBnev\fP computed eigenvectors"
.br
.RI "integer, intent(in) \fBldq\fP: leading dimension of matrix \fBq\fP which stores the eigenvectors"
.br
.RI "integer, intent(in) \fBnblk\fP: blocksize of block cyclic distributin, must be the same in both directions"
.br
.RI "integer, intent(in) \fBmatrixCols\fP: number of columns of locally distributed matrices \fBa\fP and \fBq\fP"
.br
.RI "integer, intent(in) \fBmpi_comm_rows\fP: communicator for communication in rows. Constructed with \fBelpa_get_communicators\fP(3)"
.br
.RI "integer, intent(in) \fBmpi_comm_cols\fP: communicator for communication in colums. Constructed with \fBelpa_get_communicators\fP(3)"
.br
.RI "integer, intent(in) \fBmpi_comm_all\fP: communicator for all MPI process used in ELPA"
.br
.RI "logical, optional, intent(in) \fBuseGPU\fP: decide whether GPUs should be used or not"
.br
.RI "logical \fBsuccess\fP: return value indicating success or failure"
.br
.SH DESCRIPTION
Old, deprecated interface, which will be deleted at some point. Use \fBsolve_evp_complex_1stage\fP(3) or \fBelpa_solve_evp_complex\fP(3).
Solve the complex eigenvalue problem with the 1-stage solver. The ELPA communicators \fBmpi_comm_rows\fP and \fBmpi_comm_cols\fP are obtained with the \fBelpa_get_communicators\fP(3) function. The distributed quadratic matrix \fBa\fP has global dimensions \fBna\fP x \fBna\fP and a local size of \fBlda\fP x \fBmatrixCols\fP. The solver computes the first \fBnev\fP eigenvalues, which are stored on exit in \fBev\fP. The eigenvectors corresponding to these eigenvalues are stored in \fBq\fP. All memory for the arguments must be allocated outside the call to the solver.
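.PP
A minimal, hedged Fortran sketch of a call with the parameters documented above (the optional \fBuseGPU\fP argument is omitted; all arrays are assumed to be allocated and the matrix \fBa\fP filled by the caller):
.nf

! local block-cyclic pieces of a and q, plus the full eigenvalue array
complex*16, allocatable :: a(:,:), q(:,:)
real*8, allocatable     :: ev(:)
logical                 :: success

allocate(a(lda,matrixCols), q(ldq,matrixCols), ev(na))
! ... fill the local part of the Hermitian matrix a ...

success = solve_evp_complex(na, nev, a, lda, ev, q, ldq, nblk, &
                            matrixCols, mpi_comm_rows, mpi_comm_cols, &
                            mpi_comm_all)
if (.not. success) print *, '1-stage complex solver failed'
.fi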
.RI "With the definitions of the input and output variables:"
.br
.RI "integer, intent(in) \fBna\fP: global dimension of quadratic matrix \fBa\fP to solve"
.br
.RI "integer, intent(in) \fBnev\fP: number of eigenvalues to be computed; the first \fBnev\fP eigenvalules are calculated"
.br
.RI "real*8, intent(inout) \fBa\fP: locally distributed part of the matrix \fBa\fP. The local dimensions are \fBlda\fP x \fBmatrixCols\fP"
.br
.RI "integer, intent(in) \fBlda\fP: leading dimension of locally distributed matrix \fBa\fP"
.br
.RI "real*8, intent(inout) \fBev\fP: on output the first \fBnev\fP computed eigenvalues"
.br
.RI "real*8, intent(inout) \fBq\fP: on output the first \fBnev\fP computed eigenvectors"
.br
.RI "integer, intent(in) \fBldq\fP: leading dimension of matrix \fBq\fP which stores the eigenvectors"
.br
.RI "integer, intent(in) \fBnblk\fP: blocksize of block cyclic distributin, must be the same in both directions"
.br
.RI "integer, intent(in) \fBmatrixCols\fP: number of columns of locally distributed matrices \fBa\fP and \fBq\fP"
.br
.RI "integer, intent(in) \fBmpi_comm_rows\fP: communicator for communication in rows. Constructed with \fBelpa_get_communicators\fP(3)"
.br
.RI "integer, intent(in) \fBmpi_comm_cols\fP: communicator for communication in colums. Constructed with \fBelpa_get_communicators\fP(3)"
.br
.RI "integer, intent(in) \fBmpi_comm_all\fP: communicator for all MPI processes used in ELPA"
.br
.RI "logical, optional, intent(in) \fBuseGPU\fP: decide whether GPUs should be used or not"
.br
.RI "logical \fBsuccess\fP: return value indicating success or failure"
.br
.SH DESCRIPTION
Old, deprecated interface, which will be deleted at some point. Use \fBsolve_evp_real_1stage\fP(3) or \fBelpa_solve_evp_real\fP(3).
Solve the real eigenvalue problem with the 1-stage solver. The ELPA communicators \fBmpi_comm_rows\fP and \fBmpi_comm_cols\fP are obtained with the \fBelpa_get_communicators\fP(3) function. The distributed quadratic matrix \fBa\fP has global dimensions \fBna\fP x \fBna\fP and a local size of \fBlda\fP x \fBmatrixCols\fP. The solver computes the first \fBnev\fP eigenvalues, which are stored on exit in \fBev\fP. The eigenvectors corresponding to these eigenvalues are stored in \fBq\fP. All memory for the arguments must be allocated outside the call to the solver.
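.PP
A minimal, hedged Fortran sketch, analogous to the complex case above (optional \fBuseGPU\fP omitted, arrays allocated by the caller):
.nf

real*8, allocatable :: a(:,:), ev(:), q(:,:)
logical             :: success

allocate(a(lda,matrixCols), ev(na), q(ldq,matrixCols))
! ... fill the local part of the symmetric matrix a ...

success = solve_evp_real(na, nev, a, lda, ev, q, ldq, nblk, &
                         matrixCols, mpi_comm_rows, mpi_comm_cols, &
                         mpi_comm_all)
if (.not. success) print *, '1-stage real solver failed'
.fi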
public :: get_elpa_row_col_comms   !< old, deprecated interface, will be deleted. Use elpa_get_communicators instead
public :: get_elpa_communicators   !< Sets MPI row/col communicators; OLD and deprecated interface, will be deleted. Use elpa_get_communicators instead
public :: elpa_get_communicators   !< Sets MPI row/col communicators as needed by ELPA
public :: solve_evp_real           !< old, deprecated interface: Driver routine for real double-precision eigenvalue problem. DO NOT USE; will be deleted at some point
public :: elpa_solve_evp_real_1stage_double !< Driver routine for real double-precision 1-stage eigenvalue problem
public :: solve_evp_real_1stage    !< Driver routine for real double-precision eigenvalue problem
...
...
@@ -106,7 +103,6 @@ module elpa1
public :: elpa_solve_evp_real_1stage_single !< Driver routine for real single-precision 1-stage eigenvalue problem
#endif
public :: solve_evp_complex        !< old, deprecated interface: Driver routine for complex double-precision eigenvalue problem. DO NOT USE; will be deleted at some point
public :: elpa_solve_evp_complex_1stage_double !< Driver routine for complex 1-stage eigenvalue problem
public :: solve_evp_complex_1stage !< Driver routine for complex double-precision eigenvalue problem
public :: solve_evp_complex_1stage_double !< Driver routine for complex double-precision eigenvalue problem
...
...
@@ -159,82 +155,6 @@ module elpa1
logical, public :: elpa_print_times = .false. !< Set elpa_print_times to .true. for explicit timing outputs
!> \brief get_elpa_row_col_comms: old, deprecated interface, will be deleted. Use "elpa_get_communicators"
!> \details
!> The interface and variable definition is the same as in "elpa_get_communicators"
!> \param mpi_comm_global Global communicator for the calculations (in)
!>
!> \param my_prow Row coordinate of the calling process in the process grid (in)
!>
!> \param my_pcol Column coordinate of the calling process in the process grid (in)
!>
!> \param mpi_comm_rows Communicator for communicating within rows of processes (out)
!>
!> \param mpi_comm_cols Communicator for communicating within columns of processes (out)
!> \result mpierr integer error value of mpi_comm_split function
interface get_elpa_row_col_comms
  module procedure elpa_get_communicators
end interface
!> \brief elpa_get_communicators: Fortran interface to set the communicators needed by ELPA
!> \details
!> The interface and variable definition is the same as in "elpa_get_communicators"
!> \param mpi_comm_global Global communicator for the calculations (in)
!>
!> \param my_prow Row coordinate of the calling process in the process grid (in)
!>
!> \param my_pcol Column coordinate of the calling process in the process grid (in)
!>
!> \param mpi_comm_rows Communicator for communicating within rows of processes (out)
!>
!> \param mpi_comm_cols Communicator for communicating within columns of processes (out)
!> \result mpierr integer error value of mpi_comm_split function
interface get_elpa_communicators
  module procedure elpa_get_communicators
end interface
!> \brief solve_evp_real: old, deprecated Fortran function to solve the real eigenvalue problem with 1-stage solver. Will be deleted at some point. Better use "solve_evp_real_1stage" or "elpa_solve_evp_real"
!>
!> \details
!> The interface and variable definition is the same as in "elpa_solve_evp_real_1stage_double"
! Parameters
!
!> \param na Order of matrix a
!>
!> \param nev Number of eigenvalues needed.
!> The smallest nev eigenvalues/eigenvectors are calculated.
!>
!> \param a(lda,matrixCols) Distributed matrix for which eigenvalues are to be computed.
!> Distribution is like in Scalapack.
!> The full matrix must be set (not only one half like in scalapack).
!> Destroyed on exit (upper and lower half).
!>
!> \param lda Leading dimension of a
!>
!> \param ev(na) On output: eigenvalues of a, every processor gets the complete set
!>
!> \param q(ldq,matrixCols) On output: Eigenvectors of a
!> Distribution is like in Scalapack.
!> Must be always dimensioned to the full size (corresponding to (na,na))
!> even if only a part of the eigenvalues is needed.
!>
!> \param ldq Leading dimension of q
!>
!> \param nblk blocksize of cyclic distribution, must be the same in both directions!
!>
!> \param matrixCols distributed number of matrix columns
!>
!> \param mpi_comm_rows MPI-Communicator for rows
!> \param mpi_comm_cols MPI-Communicator for columns
!>
!> \result success
interface solve_evp_real
  module procedure elpa_solve_evp_real_1stage_double
end interface

interface solve_evp_real_1stage
  module procedure elpa_solve_evp_real_1stage_double
end interface
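! Hedged illustration, not part of the ELPA sources: the interface /
! module procedure blocks above make the deprecated names compile-time
! aliases, so an old call such as
!
!   success = solve_evp_real(na, nev, a, lda, ev, q, ldq, nblk, &
!                            matrixCols, mpi_comm_rows, mpi_comm_cols)
!
! resolves directly to elpa_solve_evp_real_1stage_double; no run-time
! dispatch or wrapper overhead is involved.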
...
...
@@ -281,45 +201,7 @@ module elpa1
end interface
!> \brief solve_evp_complex: old, deprecated Fortran function to solve the complex eigenvalue problem with 1-stage solver. Will be deleted at some point. Better use "solve_evp_complex_1stage" or "elpa_solve_evp_complex"
!>
!> \details
!> The interface and variable definition is the same as in "elpa_solve_evp_complex_1stage_double"
! Parameters
!
!> \param na Order of matrix a
!>
!> \param nev Number of eigenvalues needed.
!> The smallest nev eigenvalues/eigenvectors are calculated.
!>
!> \param a(lda,matrixCols) Distributed matrix for which eigenvalues are to be computed.
!> Distribution is like in Scalapack.
!> The full matrix must be set (not only one half like in scalapack).
!> Destroyed on exit (upper and lower half).
!>
!> \param lda Leading dimension of a
!>
!> \param ev(na) On output: eigenvalues of a, every processor gets the complete set
!>
!> \param q(ldq,matrixCols) On output: Eigenvectors of a
!> Distribution is like in Scalapack.
!> Must be always dimensioned to the full size (corresponding to (na,na))
!> even if only a part of the eigenvalues is needed.
!>
!> \param ldq Leading dimension of q
!>
!> \param nblk blocksize of cyclic distribution, must be the same in both directions!
!>
!> \param matrixCols distributed number of matrix columns
!>
!> \param mpi_comm_rows MPI-Communicator for rows
!> \param mpi_comm_cols MPI-Communicator for columns
!> \brief solve_evp_complex_1stage: old, deprecated Fortran function to solve the complex eigenvalue problem with 1-stage solver. Will be deleted at some point. Better use "elpa_solve_evp_complex_1stage_double"
!>
!> \details
!> The interface and variable definition is the same as in "elpa_solve_evp_complex_1stage_double"